Intel Invokes Linus Torvalds to Push Software Tools
A visibly uncomfortable appearance by Linux legend Linus Torvalds at Intel CEO Pat Gelsinger’s keynote was headline news at the company’s Innovation conference this week.
In a conversation with Gelsinger, Torvalds recalled using a PC with a 386 chip in 1991 to create an OS that eventually became Linux. The conversation had the desired effect — it showed that software development on x86 mattered back in 1991, and it still does now.
Commitment to Openness
“We’ve talked about our commitment to being open — systems, software and hardware standards. Our collective potential as an industry is unleashed when we enable openness, choice and trust,” Gelsinger said, before introducing Torvalds.
Torvalds recalled writing Linux for the 386 CPU, which was the main chip in PCs at the time. Today’s machines are far more diverse and complicated, with CPUs offloading workloads such as AI and analytics to accelerators like GPUs, FPGAs and ASICs, which excel at such tasks.
Over the two days of the show, Gelsinger and Intel executives repeatedly pushed the notion of “open accelerated computing,” in which software developers no longer need to write to specific chips or hardware.
The company announced the 2023 version of its oneAPI programming toolkit, which frees coders from worrying about the underlying hardware by automating the execution of code across CPUs, GPUs and other accelerators.
“We have sort of a single common programming language that could write once, run everywhere. That’s a really key concept. That was fundamentally a groundbreaking change that Java created,” said Greg Lavender, Intel’s chief technology officer, in a day-two keynote.
Intel earlier this year acquired Codeplay Software as it repositions itself for a diverse chip future beyond just CPUs. Codeplay was largely known for its work around the SYCL parallel-programming model, which includes tools, runtimes and execution models so standard C++ code can be adapted for concurrent execution across CPUs, GPUs and other processors.
The oneAPI compiler takes SYCL source code, which is just C++, and generates code for GPUs, CPUs and FPGAs from Intel and other hardware companies. Intel’s approach diverges from the closed-source approach taken by Nvidia, whose proprietary CUDA programming framework locks developers into the company’s GPUs.
The 2023 version of oneAPI has 42 different tools and includes support for the 4th Gen Xeon Scalable chip family, which is code-named Sapphire Rapids. Intel is expected to ship Sapphire Rapids chips early next year after delays caused by snags in validating the chips.
The oneAPI tools can migrate CUDA source code automatically into the SYCL C++ language. “We call that SYCLomatic. It’s like a washing machine that washes it from proprietary to open,” Lavender said.
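What SYCLomatic does — rewriting CUDA constructs into their SYCL equivalents — is source-to-source translation. A toy sketch of the idea in Python (the rewrite table below is illustrative only, not SYCLomatic’s actual rule set, which is far richer):

```python
# Toy illustration of source-to-source migration in the spirit of
# SYCLomatic: rewrite a few CUDA identifiers into SYCL-style ones.
# The mapping table is illustrative, not SYCLomatic's real rule set.
import re

CUDA_TO_SYCL = {
    r"\bcudaMalloc\b": "sycl::malloc_device",
    r"\bcudaFree\b": "sycl::free",
    r"\bcudaMemcpy\b": "queue.memcpy",
}

def migrate(cuda_source: str) -> str:
    """Apply each rewrite rule to the CUDA source text."""
    for pattern, replacement in CUDA_TO_SYCL.items():
        cuda_source = re.sub(pattern, replacement, cuda_source)
    return cuda_source

print(migrate("cudaMalloc(&buf, n); cudaFree(buf);"))
# sycl::malloc_device(&buf, n); sycl::free(buf);
```

The real tool parses C++ rather than pattern-matching text, and flags constructs it cannot migrate for manual attention, but the “wash from proprietary to open” shape is the same.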
Intel is also opening up governance of oneAPI, with contributors now involved in the decision-making process for the specification. Open governance should give developers more efficiency in coding and more control over their own environment.
“Up until now, in the nascent stages of oneAPI, we’ve been community-driven, and there’s a great track record of community pull requests and support. But ultimately, we’re the author of the governing body of the spec. By moving to open governance, all contributors who have got standing have the ability to vote changes and determine the specification,” said Joe Curley, vice president and general manager of Intel Software Products and Ecosystem, in an interview with The New Stack.
The oneAPI specification has multiple components, including the Khronos-driven SYCL language, standardized library bindings, and tools for developers to write code.
Intel Developer Cloud
Intel also announced the Intel Developer Cloud, which will provide cloud-based access to as-yet-unreleased Intel chips, including Sapphire Rapids and the Gaudi 2 AI chip. The goal is to give developers a platform to write code, so software is ready by the time the chips start shipping in volume.
The Developer Cloud will provide access to the latest version of oneAPI and the SYCL layer so developers can deploy standard C++ applications in heterogeneous environments. Another goal is to help developers write cloud native applications that can ultimately be deployed on Google Cloud, AWS or Microsoft Azure instances using Intel’s upcoming chips.
SYCL is emerging as an important component of Intel’s foundry strategy, which involves manufacturing customer chips that combine CPUs, GPUs and other accelerators in a tight package. Intel previously manufactured x86 chips only for itself, but has embraced the ARM and RISC-V architectures after opening up its factories to customers.
Intel expects to make chips for customers that mix ARM, x86 and RISC-V cores organized as tiles in a single chip package. Intel is pinning its hopes on SYCL so customers can write code that just runs regardless of the composition of the chip package.
“The core is the differentiated IP as well as the software services, the multi-instruction set architecture choices and then also oneAPI at the top,” said Gary Martz, senior director for Intel Foundry Services, RISC-V enablement, during a breakout session outlining the foundry services strategy.
Parallelism is not a first-class citizen in standard C++, though improvements are coming. Intel believes SYCL has many features that deliver greater levels of accelerated computing. Ultimately, the company hopes some of those features will make it into future standard C++ releases, though that may not happen anytime soon.
“There were … some interesting proposals that didn’t carry through on C++23. [C++]26 is probably the likely intercept and we think as an industry we’ve got a lot of work to do to get that in,” Intel’s Curley told The New Stack.
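The kind of data parallelism SYCL treats as first-class is a kernel applied across an index range, with the runtime deciding where it executes. That pattern, which SYCL expresses in C++ as a `parallel_for` over a range, can be sketched in Python (names here are illustrative; a thread pool stands in for whatever device a SYCL runtime would select):

```python
# Sketch of a data-parallel "kernel over an index range" -- the pattern
# SYCL's parallel_for expresses natively in C++. A thread pool stands
# in for the device a real runtime would pick; names are illustrative.
from concurrent.futures import ThreadPoolExecutor

def parallel_for(n, kernel):
    """Apply kernel(i) for i in range(n), distributing across workers."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(kernel, range(n)))

# Example kernel: elementwise square of an input vector.
data = [1, 2, 3, 4]
result = parallel_for(len(data), lambda i: data[i] * data[i])
print(result)  # [1, 4, 9, 16]
```

In SYCL the same kernel body is ordinary C++, and the runtime schedules it on a CPU, GPU or FPGA without the source changing — the portability the article describes.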
oneAPI is also compatible with other languages and programming models, including Python, OpenMP and Fortran.
Other New Projects
Intel also announced Geti, a computer vision AI software kit that lets developers rapidly build AI models. One early-access partner, the Royal Brompton Hospital in the UK, is using it to help identify rare respiratory conditions without in-house AI expertise. The team can analyze medical images rapidly, which aids diagnosis and treatment options for patients with severe respiratory conditions like cystic fibrosis.
The chipmaker also announced new features in Project Amber, a confidential computing service that verifies the trustworthiness of data as it moves between devices.
The attestation service generates a code when data leaves a source computing device, which needs to be matched by the destination device. If the code matches, the destination device lets the data into a secure enclave in which the tasks can be executed. If the codes don’t match, the data isn’t let in.
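The generate-and-match flow described above has the shape of a keyed message-authentication check: the source tags outgoing data with a code, and the destination recomputes and compares it before admitting the data. A minimal sketch, with hypothetical names (Project Amber’s actual protocol is more involved, attesting the destination environment itself):

```python
# Minimal sketch of the code-generate-and-match flow described above:
# the source computes an HMAC over outgoing data; the destination
# recomputes it and admits the data only when the codes match.
# Names and the pre-shared key are hypothetical; Project Amber's
# real attestation protocol is considerably richer.
import hmac
import hashlib

SHARED_KEY = b"provisioned-attestation-key"  # assumed pre-shared

def generate_code(data: bytes) -> str:
    """Source side: compute the attestation code for outgoing data."""
    return hmac.new(SHARED_KEY, data, hashlib.sha256).hexdigest()

def admit(data: bytes, code: str) -> bool:
    """Destination side: recompute and compare in constant time."""
    return hmac.compare_digest(generate_code(data), code)

payload = b"sensor reading"
code = generate_code(payload)
print(admit(payload, code))       # True: data enters the enclave
print(admit(b"tampered", code))   # False: data is refused
```

The constant-time comparison (`hmac.compare_digest`) matters in practice: a naive string comparison leaks timing information an attacker could use to forge codes.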
Project Amber removes worries about where data is being processed along waypoints in a distributed cloud, said Anil Rao, vice president and general manager for systems architecture and engineering at Intel’s Office of the CTO.
This is particularly important for AI and machine learning, where data comes in from multiple sources, including sensors, and needs to be verified as genuine before it is let into the learning models.
One new Project Amber feature is support for TDX (Trust Domain Extensions) instructions, which create an encrypted, secure virtual machine layer. TDX instructions are included in the upcoming Sapphire Rapids chips. When entering or exiting applications in the virtual machine, TDX removes the hypervisor from the trust boundary, Rao said.
The attestation service also supports SGX (Software Guard Extensions), already present in Intel chips, which creates a secure execution layer in memory. Project Amber will support attestation in hybrid cloud environments and across multiple cloud service providers.
“You don’t need to have a different attestation mechanism as an enterprise when you go to different kinds of clouds,” Rao said.