Deci Releases Open Source Deep Learning Training Library
Developers looking to implement deep learning often spend too much time sorting through various open source repositories — trying to integrate tools, replicate results, and increase accuracy. In response, Deci, a deep learning development platform, has taken its expertise in these tasks and released SuperGradients. It’s an open source deep learning training library for computer vision models that “enables developers to train PyTorch-based models for the most common computer vision tasks, including object detection, image classification and semantic segmentation with just one training script,” according to a release.
Yonatan Geifman, co-founder and CEO of Deci, explained to The New Stack that the state of open source deep learning is “very fragmented” and that SuperGradients is attempting to address this fragmentation.
“What we did with SuperGradients is we collected all these repositories, all the techniques, and integrated all of them into one library that can bring you all the value from the open source community into one place,” said Geifman. “And by that, you’re able to reproduce state-of-the-art results and adapt them to your use cases, data sets, and stuff like this, very easily.”
Since its founding in 2019, Deci has worked to simplify deep learning development — whether that means addressing the limitations of hardware or datasets, or building and training the models themselves. SuperGradients, said Geifman, is an open sourcing of the company’s “base component of training models” that aims to share “all the knowledge that we collected about how to effectively build models and train them, all the new architectures that we integrated in our internal tool, and the tricks in order to get better accuracy, and to train faster and to scale training across multiple GPUs.”
More specifically, SuperGradients focuses on three computer vision tasks — image classification, object detection, and semantic segmentation — providing model architectures for each, such as YOLOv5, DDRNet, EfficientNet, RegNet, ResNet, and MobileNet. The company says it has often optimized these models to deliver higher accuracy than existing training libraries. At launch, SuperGradients includes more than 20 computer vision models (with more on the way), along with a clear code structure that developers can use to integrate them into their own codebases. Geifman claims that SuperGradients helps developers not only cut down on boilerplate code, but also achieve state-of-the-art results.
“We’re working hard to get all the tricks and best practices in training deep learning into that repository in order to push higher the results,” said Geifman. “For example, we have a family of models [where] we see an accuracy improvement of 1.5%, compared to the academic paper that announced them. We don’t put the model [in] if we don’t believe that the accuracy that we give, or that you can gain, with that model on SuperGradients [is] competitive to what you can get in another place. We want SuperGradients to be the one-stop-shop for getting state-of-the-art results.”
Using SuperGradients, developers can use a pre-trained model, fine-tune a model for their specific use case, or build a model themselves. Geifman said that each of these tasks, which developers might otherwise perform on their own, is made easier and more efficient with SuperGradients. Adding a pre-trained model, he said, takes just a few lines of code, fine-tuning a model could take 10 to 20, and building from scratch could cut the code needed in half.
Moving forward, Geifman said the company is planning not only to add more models, but also to “push the accuracy levels higher with more and more tricks and best practices.” After that, better integrations and closer work with cloud providers are on the roadmap.
“We want to be a part of the ecosystem in terms of integration [with] other tools that people use today,” said Geifman. “SuperGradients could be one component in a wider AI platform that companies are building today.”