Intel Takes a Deeper Look at Software Development

When Pat Gelsinger returned to Intel in February after more than a decade away, he moved quickly to pour effort and investment into shoring up the company’s famed manufacturing business, which in recent years had been dogged by production delays and missed deadlines, making the world’s dominant chipmaker suddenly look vulnerable.
That push included promising to spend $20 billion to build two new chip factories, or “fabs,” launching a formal chip foundry business to build other companies’ processors, promoting efforts to have more chips made in the United States and then announcing last month that Intel would invest another $95 billion to build more fabs in Europe.
These moves not only signaled that Intel was committed to remaining a major chip manufacturer but also pushed back against the gaining strength of rivals such as AMD, Arm and Nvidia, as well as the growing number of smaller companies making chips optimized for specific workloads such as artificial intelligence (AI) and machine learning.
VMware’s Software Influence
However, Gelsinger, who spent the previous eight-plus years as VMware’s top executive, also came back to Intel with a decidedly “software-first” mantra. In an IT world where data is the coin of the realm, hybrid cloud is the emerging operating model, the edge is growing and modern workloads like AI, machine learning and analytics are coming to the fore, software becomes the central focus and the underlying hardware, including semiconductors, its enabler.
That focus on software and developers will take center stage this week at the inaugural Intel Innovation event, essentially a reboot of the company’s massive Intel Developer Forum, which was last held in 2016. There will be a range of announcements touching on hardware and chips, but Intel officials will make the company’s renewed focus on software, developers and the open source community the main event.
“Our view is, give the market choice, give the developers choice and we want to be the trusted partner that will provide significant amounts of open source technology, which we already do but we haven’t talked much about it over the last few years,” Greg Lavender, corporate CTO and senior vice president and general manager of Intel’s new Software and Advanced Technology Group, said during a briefing with journalists. “We’ll be talking about the breadth and depth of what we have and its availability.”
Bringing the Parts Together
Lavender came to Intel in June in large part to create a software group, pulling the disparate pieces scattered throughout the company under a single umbrella. He and Gelsinger have known each other for decades and spent four years together at VMware. He said Gelsinger, who spent 30 years at Intel before leaving in 2009, has a deep understanding of Intel’s hardware history, but that after his time at VMware, the software-first approach is “now part of his DNA.”
At the event, Intel relaunched its Developer Zone, making such developer assets as reference designs and toolkits available via a single portal. The components developers will have access to span everything from client and data center hardware to AI, the cloud and edge, and gaming. The Developer Zone also will include a consolidated Intel Developer Catalog of software offerings and an improved DevCloud development environment for testing and running workloads on such Intel components as CPUs, GPUs, field-programmable gate arrays (FPGAs), accelerators and software tools.
Intel also said it is ready to ship the latest toolkits for oneAPI, a programming model launched last year that is designed to simplify software development across Intel’s CPUs and accelerators like GPUs and FPGAs. OneAPI 2022 will include 900 new features, including the ability to develop software for both CPUs and GPUs through a new unified C++/SYCL/Fortran compiler and, for Python development, Data Parallel Python. The new toolkits also expand accelerator performance modeling in Advisor, add a Flame Graph display to VTune, and extend integration with Microsoft Visual Studio Code and support for Microsoft WSL 2.
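The Data Parallel Python piece is easiest to picture with a short sketch. The snippet below is a minimal, illustrative example assuming the dpnp package (Intel’s NumPy-like Data Parallel Extension for Python); exact package names and function coverage vary by toolkit release, and the default device depends on what SYCL hardware is present.

```python
# Minimal sketch of Data Parallel Python, assuming the dpnp package
# (a NumPy-like library that runs array math as SYCL kernels on Intel
# CPUs, GPUs or other accelerators). Illustrative only.
import dpnp as np

# Arrays are allocated on the default SYCL device selected at runtime.
a = np.arange(1_000_000, dtype=np.float32)
b = np.ones(1_000_000, dtype=np.float32)

# Element-wise math and the reduction execute on the device; only the
# final scalar is copied back to the host.
c = a * 2.0 + b
print(float(c.sum()))
```

Swapping the import back to stock NumPy leaves the code unchanged, which is the kind of portability the oneAPI tooling is aiming for.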
In addition, new partners in Intel’s oneAPI Centers of Excellence, including Oak Ridge National Lab, the University of Tennessee and the University of California, Berkeley, are providing strategic code ports, more hardware support, curriculum development, and new technologies and services aimed at accelerating the adoption of oneAPI.
Working with the Open Source Community
The chipmaker also highlighted the work it is doing with the open source community on some of its newest chips, including the next-generation Xeon Scalable processor, codenamed “Sapphire Rapids,” and the Ponte Vecchio GPU. Intel is partnering with SiPearl, a chipmaker designing an Arm-based processor for Europe’s exascale supercomputers, by adding Ponte Vecchio as an accelerator. In addition, SiPearl is leveraging oneAPI as the open software specification to improve developer productivity and workload performance.
Lavender noted that Intel’s compiler technology falls under his purview and is critical to Sapphire Rapids and other chips because it optimizes open source code as well as code from ISVs.
On the programmable networking front, Intel is introducing an ASIC-based infrastructure processing unit (IPU) codenamed “Mount Evans,” developed in conjunction with Google Cloud. It includes support for the industry-standard P4 programming language and the open source Infrastructure Programmer Development Kit, easing developers’ access to the technology housed in Google Cloud’s data centers. Tofino 3, Intel’s fabric processor, leverages P4 programmability and AI workload acceleration to bring intelligence to switching.
Getting the Message Out
Lavender said Intel has had a strong software development team and relationship with the open source community for years but hasn’t been vocal enough about it in the past. What’s key for the chipmaker is ensuring that its software is available and accessible to the broadest range of developers who are working with Intel platforms.
For example, the latest version of the TensorFlow library for AI and machine learning workloads, version 2.6, released in August, includes Intel’s accelerations and has been downloaded more than 8 million times, he said.
“We’re reaching out so that it’s relatively seamless,” Lavender said. “If you want to run TensorFlow on the Xeon processor or an Intel Core processor, wherever you happen to be running your inferencing technology, you have that advantage. Our OpenVINO technology [for deep learning and AI inferencing workloads] is widely used and developers can take that as an entire platform and ecosystem and don’t have to understand the underlying hardware particularly to get value to deploy their models to the edge and do inferencing on the data at the edge.”
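To make that concrete, the sketch below shows the sort of CPU-side inferencing run Lavender describes, using stock TensorFlow on an Intel processor. The TF_ENABLE_ONEDNN_OPTS flag, which switches on Intel’s oneDNN-optimized kernels in recent TensorFlow builds, and the MobileNetV2 model are illustrative choices rather than anything Intel prescribed; the OpenVINO conversion step for edge deployment is omitted here.

```python
# Hedged sketch: image-classification inference with stock TensorFlow on an
# Intel CPU. TF_ENABLE_ONEDNN_OPTS opts into Intel's oneDNN-optimized kernels
# in TensorFlow 2.5+ builds and must be set before TensorFlow is imported.
import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import numpy as np
import tensorflow as tf

# Illustrative model: a small ImageNet classifier of the kind an edge box might run.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Stand-in input; in practice this would be a preprocessed camera frame.
frame = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0
frame = tf.keras.applications.mobilenet_v2.preprocess_input(frame)

predictions = model.predict(frame)
print(tf.keras.applications.mobilenet_v2.decode_predictions(predictions, top=3))
```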
The key, he said, is that Intel “has to meet developers where they are.”
“We want to get people interested in our technology and bring consultants and advisors in for free to help them grow their companies,” Lavender said. “Whether they’re in Azure Cloud, whether they’re in Amazon Web Services, whether they’re in Google Cloud, whether they’re in Alibaba Cloud, with IBM Cloud, whether they’re running on their own private cloud through their data centers on Red Hat or SUSE or any other distribution, our software technology is going to be there and we think the oneAPI toolkit will run on all those environments today. We need to work in all those environments.”