Intel is taking a small step in rebooting a decades-long initiative to make money from software.
The chip maker's new machine learning toolkits are aimed squarely at specific verticals as a one-stop shop for quickly deploying artificial intelligence applications.
The toolkits are based on open source tools and targeted at manufacturing, utilities, health care and other industries.
The release includes executable models and hardware-level instructions that speed up AI training and inferencing. The tools are packaged inside Intel’s OneAPI parallel programming framework, which works in the background to extract the most horsepower in server environments that include CPUs, GPUs and AI accelerators.
The chip maker has been taking steps to bolster OneAPI, which it promotes as an open-source toolkit. While Intel advertises the new toolkits as open source, they have been tuned specifically for server environments built on Intel's chips.
The toolkits are more like freebies dangled in front of verticals, which may ultimately hire Intel for deployments rather than building in-house. Intel dominates the server CPU market and also offers AI chips that include Gaudi accelerators, graphics processors, and field-programmable gate arrays.
Intel is trying to replicate the enormous success that Nvidia had with CUDA, which started off as a set of programming tools but evolved into ready-to-deploy AI programs for specific verticals. Nvidia’s CUDA tools are targeted at systems using its GPUs, while Intel’s OneAPI works across chips from all major brands.
The software has become a focal point for masking the complexity of AI computing, which requires multiple chips and different pieces of hardware working in concert. Intel is also buying SaaS companies as it expands its software-as-a-service strategy.
Intel last month snatched up Codeplay Software, which provides tools, runtimes and execution models so standard C++ code can be adapted for concurrent execution across CPUs, GPUs and other processors. Codeplay fits well into the OneAPI parallel programming framework.
Intel also acquired Granulate and Linutronix in its ongoing buying spree of SaaS companies.
The AI toolkits, developed with Accenture, include optimizations targeted at Intel hardware and software. The utility asset health toolkit draws on 10 million data points and Intel-optimized libraries to monitor field assets that help distribute energy efficiently. The system was trained with Intel's AI Analytics Toolkit, with "optimized training and inferencing to be 20% and 55% faster" than the standard Accenture kit not optimized for Intel, according to an Intel statement.
The other toolkits include intelligent document indexing, which adds more context to structured and unstructured data by using AI to categorize and classify documents. “These tools improve data pre-processing, training and inferencing times to be 46%, 96% and 60% faster,” compared to Accenture’s intelligent document indexing kit, the chip maker said.
Intel’s OpenVINO runtime is also a key component in the new inferencing toolkits.
Intel's software efforts are picking up steam as its CPU shipments face delays. Intel has acknowledged it has delayed the shipment of its Sapphire Rapids chips. The delays come as Intel pursues a massive semiconductor expansion, with new factories planned in the U.S. and Europe.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Granulate.
Featured image via Pixabay.