Candle: A New Machine Learning Framework for Rust


Artificial intelligence (AI) company Hugging Face has just released a new minimalist machine learning (ML) framework for Rust called Candle. It has already attracted 7,800 stars and 283 forks on GitHub.

Hugging Face also introduced a new coding tool called SafeCoder, which leverages StarCoder to allow organizations to create an on-premise equivalent of GitHub Copilot. Earlier this year, the open source company released a JavaScript library that allows frontend and web developers to add machine learning capabilities to webpages and apps.

Hugging Face is investing in developer tools that will power its 300,000 open source machine learning models, explained Jeff Boudier, head of product and growth at the startup.

“The big picture is that we’re developing our ecosystem for developers and seeing a lot of traction while doing that,” Boudier told The New Stack, referring to a $235 million funding round that included backing from Google, Amazon, Nvidia, Salesforce, AMD, Intel, IBM and Qualcomm. “Now with the support of all these great platforms and players, we can make sure we have support for the community, no matter what platform they use to run their machine learning models.”

Candle, a Rust ML Framework

ML models are typically written in Python and supported by frameworks such as PyTorch. These frameworks tend to be “very large, which creates slow instances on the cluster,” Hugging Face explained in Candle’s FAQ.

Candle is designed to support serverless inference, a way to run machine learning models without managing any infrastructure. Candle does this by allowing the deployment of lightweight binaries, the FAQ explained. Binaries are executable files that contain the code and resources required to run the application in the target environment.
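To give a sense of what that looks like in practice, here is a minimal sketch of a tensor computation that compiles into a small, standalone Rust binary. It assumes the candle-core crate’s Tensor and Device types; exact method names and signatures may differ between Candle versions.

```rust
// Minimal sketch of a Candle "hello world" (assumes the candle-core crate;
// API details may vary between versions).
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run on the CPU; Candle also supports GPU-backed devices.
    let device = Device::Cpu;

    // Two small matrices standing in for model weights and an input batch.
    let a = Tensor::new(&[[1f32, 2.], [3., 4.]], &device)?;
    let b = Tensor::new(&[[5f32, 6.], [7., 8.]], &device)?;

    // A single matrix multiplication -- the core operation behind inference.
    let c = a.matmul(&b)?;
    println!("{c}");

    Ok(())
}
```

Because the result is a single native executable with no Python interpreter attached, it can be shipped as the kind of lightweight binary the FAQ describes.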

Candle allows developers to remove Python from production workloads. “Python overhead can seriously hurt performance, and the GIL is a notorious source of headaches,” explained the FAQ, referring to the Python GIL, or global interpreter lock. The GIL offers advantages, but it prevents CPython from achieving full multicore performance, according to cloud storage vendor Backblaze, which explains this in a blog post.

There are three Candle app demos that developers should check out.

SafeCoder: Copilot for Enterprises

One reason enterprises have been slow to adopt Copilot is that their code can be used to train the model, which means proprietary data leaves their control.

SafeCoder will allow that code to remain on-premise while still informing the model, Boudier explained.

Customers can build their own code LLM, fine-tuned to their proprietary codebase, using open models and libraries, without sharing their code with Hugging Face or any other third party, he said.

“With SafeCoder, Hugging Face delivers a containerized, hardware-accelerated code LLM inference solution that customers can deploy directly within their secure infrastructure, without code inputs and completions leaving their secure IT environment,” wrote Boudier and Hugging Face tech lead Philipp Schmid, who announced the tool in an August 22 blog post.

It is based on StarCoder, an open source LLM alternative that can be used to build chatbots or AI coding assistants. StarCoder is trained on more than 80 programming languages, including Rust, he said.

“StarCoder is one of the best open models for code instruction,” Boudier said. “StarCoder is an open, pre-trained model trained on trillions of tokens of commercially permissioned open source project data. It’s a training data set that you can go to the Hugging Face Hub, see if any of your code is in the data set, so it’s really built with consent and compliance from the get-go.”

VMware is an early adopter of SafeCoder, he added.

“I can have a solution built uniquely for my company and deployed in our infrastructure so that it runs in a secure environment,” Boudier said. “That’s the promise of SafeCoder.”

