
New Tool Moves AI from the Backend to the Edge

Deci is among a group of new tools that are shifting artificial intelligence from the backend to the edge by reducing AI compute requirements.
Aug 9th, 2022 8:00am
Feature image via Shutterstock.

Artificial intelligence is moving from the backend to the frontend, in part thanks to emerging solutions designed to optimize how AI runs.

The market is still young, but Deci is one of the emerging challengers in this space. The Tel Aviv-based company aims to bring AI to “the real world.” The New Stack asked co-founder Yonatan Geifman to explain what that meant.

“A lot of AI is currently in the lab, for example on Kaggle in some experimentation phase, and we are trying to help people get from the lab to the real world,” Geifman said.

The company essentially uses AI to improve AI, he explained. It offers a deep learning development platform to help data scientists build, optimize and deploy AI models. That in turn reduces the compute power — and associated cost — needed so that AI can be deployed at the edge, including on mobile devices.

As an example, Deci is working with one Fortune 500 company in the space of image and document editing, he said. The company is building a new AI application in its research lab, but found it couldn’t deploy that application into production on edge devices, which is where the application will run.

Reducing the AI Efficiency Gap

Currently, there’s what Geifman called the AI efficiency gap: the gap between experimental machine learning and production-ready machine learning. This gap is caused by the mismatch between the computational complexity of the algorithm and the compute available to run it.

“Prior to Deci, data scientists used to manually tweak and design the structure, called the architecture, of the deep learning model,” he said. “Our technology is making automatic design of the structure of the neural network, and this brings a lot of opportunities in getting better performance, either [through] getting better accuracy or getting better runtime performance.”

In other words, Deci brings automation to the model design or selection stage of building deep learning models.

“We are leading an approach that is called production-aware development, that helps AI developers and data scientists to build from day one, with a production-in-mind approach that will let our algorithms run in real-life or real-world applications after the first development stage, and not having two levels of iterations for the productization of those solutions,” he said.

The market for deep learning development platforms is still young, Geifman said, dominated by open source solutions developed by Google (TensorFlow) and Meta (PyTorch). Two companies he considers competitors are OctoML and Neural Magic, both of which take a different technical approach than Deci, he added.

“We’re working on the model level on the model design, and they are mostly working on the runtime level, like how these models are being run in production,” Geifman said.

Neural Architecture Search to Automate Model Design

Geifman explained that with AI model selection, there’s often a trade-off between accuracy and latency.

“I can take larger models that will probably have more predictive accuracy, but will work slower,” he said. “We have an automatic approach for designing proprietary architectures, or model structures for neural networks that can break that trade-off between accuracy and latency to a new state-of-the-art level of accuracy/latency trade-off.”

Deci’s AutoNAC engine contains a neural architecture search (NAS) component that revises a given trained model to optimally speed up its runtime, according to its website. NAS is a technique for automating the design of artificial neural networks.
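NAS can be illustrated with a toy example. The sketch below is illustrative only, not Deci’s AutoNAC algorithm: it runs a random search over a tiny architecture space, keeping the most accurate candidate that fits a latency budget. The accuracy and latency estimators are mock stand-ins for what would really be training runs and on-device profiling.

```python
import random

random.seed(0)

# Toy search space: each candidate architecture is a (depth, width) choice.
SEARCH_SPACE = {"depth": [2, 4, 8, 16], "width": [64, 128, 256, 512]}

def estimate_accuracy(arch):
    # Mock stand-in for training and evaluating a candidate:
    # bigger networks score higher, with diminishing returns.
    return 1.0 - 1.0 / (arch["depth"] * arch["width"] / 128)

def estimate_latency_ms(arch):
    # Mock stand-in for profiling on the target edge device:
    # inference cost grows with network size.
    return 0.05 * arch["depth"] * arch["width"] / 64

def random_search(budget_ms, trials=100):
    """Return the most accurate sampled architecture within the latency budget."""
    best = None
    for _ in range(trials):
        arch = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        if estimate_latency_ms(arch) > budget_ms:
            continue  # reject candidates that miss the latency budget
        acc = estimate_accuracy(arch)
        if best is None or acc > best[1]:
            best = (arch, acc)
    return best

arch, acc = random_search(budget_ms=2.0)
print(f"best architecture: {arch}, estimated accuracy: {acc:.3f}")
```

Real NAS systems replace random sampling with smarter search strategies (gradient-based, evolutionary, or predictor-guided), but the core loop is the same: propose an architecture, score it against accuracy and hardware-latency objectives, and keep the best trade-off.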

Co-founder Ran El-Yaniv, Deci’s chief scientist, has authored a technical paper, available on the vendor’s site, that explains in more detail how the engine optimizes AI performance for edge hardware. The paper describes AutoNAC as “a data- and hardware-dependent architecture optimization algorithm.”

Open Source Repository Key to Solution

There are two main components to the solution, Geifman said. One is an open source repository, available on GitHub, called Super Gradients, which helps developers build computer vision models. The second component is a SaaS application, which handles model selection, benchmarking and optimization; it is available to try through Deci’s site and helps manage and optimize the resulting AI models. Deployment, however, is on-premises.

The model serving capability is a Python-based inference engine that users can download from the SaaS platform and leverage in their environment to streamline the deployment process, a spokesperson told The New Stack.

“Developers can build and optimize and deploy AI applications based on that toolset that we provide for free, so we are expanding the coverage of the use of those tools for more and more use cases, from computer vision to natural language processing,” Geifman said.

Use cases include computer vision at the edge, which could be applicable to self-driving cars, robots and smart city applications, he pointed out. It’s also useful for low-latency applications, such as AI-driven search engines, SaaS or web applications, as well as for image editing and visual search.

TNS owner Insight Partners is an investor in: The New Stack.