
Candle: A New Machine Learning Framework for Rust

Hugging Face created a new minimalistic machine learning framework for Rust, as well as SafeCoder for creating enterprise code assistants.
Sep 7th, 2023 7:06am
Image by Myriams-Fotos from Pixabay

Artificial intelligence (AI) company Hugging Face recently released Candle, a new minimalistic machine learning (ML) framework for Rust. It has already attracted 7,800 stars and 283 forks on GitHub.

Hugging Face has also rolled out a new coder tool called SafeCoder, which leverages StarCoder to allow organizations to create their own on-premise equivalent of GitHub Copilot. Earlier this year, the open source company released a JavaScript library that allows frontend and web developers to add machine learning capabilities to webpages and apps.

Hugging Face is investing in developer tools that will extend the reach of its 300,000 open source machine learning models, explained Jeff Boudier, head of product and growth at the startup.

“The big picture is that we’re developing our ecosystem for developers and seeing a lot of traction doing it,” Boudier told The New Stack on the heels of a $235 million fund raise that included support from Google, Amazon, Nvidia, Salesforce, AMD, Intel, IBM and Qualcomm. “Now with the support of all these great platforms and players, we can make sure that we have support for the community, whichever platform they use to run their machine learning models.”

Candle, the Rust ML Framework

ML models are typically written in Python and supported by frameworks like PyTorch. These frameworks tend to be “very large, which makes creating instances on a cluster slow,” Hugging Face explained in Candle’s FAQ.

Candle is designed to support serverless inference, which is a way to run machine learning (ML) models without having to manage any infrastructure. Candle does this by allowing the deployment of lightweight binaries, the FAQ explained. Binaries are executable files that bundle everything an application needs to run on a target environment.

Candle also allows developers to remove Python from production workloads. “Python overhead can seriously hurt performance, and the GIL is a notorious source of headaches,” the FAQ explained, referring to the Python GIL, or Global Interpreter Lock. The GIL offers benefits, but prevents CPython from achieving full multicore performance, according to cloud storage vendor Backblaze, which explained it in this blog post.
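To illustrate the contrast, the plain Rust sketch below (with a hypothetical `sum_range` helper) spreads CPU-bound work across threads; unlike CPython under the GIL, nothing serializes the threads, so they can use all available cores:

```rust
use std::thread;

// Sum the integers in [start, end) on a single thread.
fn sum_range(start: u64, end: u64) -> u64 {
    (start..end).sum()
}

fn main() {
    // Four CPU-bound tasks run truly in parallel: Rust has no
    // global interpreter lock serializing them.
    let handles: Vec<_> = (0..4u64)
        .map(|i| thread::spawn(move || sum_range(i * 1_000_000, (i + 1) * 1_000_000)))
        .collect();

    // Join the threads and combine their partial sums.
    let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    assert_eq!(total, sum_range(0, 4_000_000));
    println!("total = {total}");
}
```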

There are three Candle app demos for developers to check out.

SafeCoder: A Co-Pilot for Enterprises

One of the reasons enterprises aren’t rushing to Copilot is that their code can go toward training the model, which means data out the door. Not surprisingly, organizations are reluctant to embrace that.

SafeCoder will allow that code information to stay on-premise while still informing the model, Boudier explained.

Customers can build their own Code LLMs, fine-tuned on their proprietary codebase, using open models and libraries, without sharing their code with Hugging Face or any other third party, he said.

“With SafeCoder, Hugging Face delivers a containerized, hardware-accelerated Code LLM inference solution, to be deployed by the customer directly within the Customer secure infrastructure, without code inputs and completions leaving their secure IT environment,” wrote Boudier and Hugging Face tech lead Philipp Schmid in an Aug. 22 blog announcing the tool.

It’s based on StarCoder, an open source LLM alternative that can be used to build chatbots or AI coding assistants. StarCoder is trained on 80 different programming languages, he said, including Rust.

“StarCoder is one of the best open models to do code suggestion,” Boudier said. “StarCoder is an open, pre-trained model that has been trained on over a trillion tokens of commercially permissible open source project data. That’s a training data set that you can go look on the Hugging Face hub, you can see if any of your code is within the data set, so it’s really built with consent and compliance from the get-go.”

VMware is an early adopter of SafeCoder, he added.

“I can have the solution that’s uniquely tailored to my company and deployed in our infrastructure so that it runs within the secure environment,” Boudier said. “That’s the promise of SafeCoder.”
