Intel Poaches Open Source Execs from Netflix, Apple to Boost Linux Efforts

Chipmaker Intel in recent weeks has poached open source veterans from the likes of Apple and Netflix as it looks to clean up the Linux kernel and reestablish a dialog with the open source community.
The chipmaker last month hired Arun Gupta, formerly of Apple, to be vice president and general manager for open ecosystem. Shortly after, the company tapped Brendan Gregg, formerly of Netflix and an expert on Linux performance tools, to be an Intel Fellow.
“I’m attracting some of the best talent in the industry. They’re leaving the big players, coming to Intel because of … what [CEO Pat Gelsinger] and I are driving with our software-first commitment. Our open source commitment is back,” said Greg Lavender, chief technology officer at Intel, during a press conference at last week’s Vision trade show.
Intel employs 17,000 software engineers and remains one of the largest contributors to the Linux kernel. The company relies heavily on the open source community to develop underlying software so it can sell more chips to enterprises.
Lavender said Intel will soon resuscitate its public engagement and dialog with the open source community, an effort that has waned in recent years. The company previously engaged with the community through its Open Source Technology Center and the 01.org website, both of which are now dormant. Imad Sousou, who led the company’s open source efforts, left Intel in 2020.
Open Source Vision
Intel outlined its software-first strategy at its Vision trade show near Dallas, which ended up serving as more of a preview for the bigger announcements planned for its Innovation conference in San Francisco in September.

[Caption: At the 2019 O’Reilly Software conference, Arun Gupta, then at AWS, introduced Kubernetes to Java developers.]
The company’s priority is to clean up the Linux kernel so it better supports newer styles of computing driven by applications like artificial intelligence. Deterministic workloads rely heavily on accelerators such as AI chips and GPUs, which Intel has introduced in recent years and which work alongside the company’s CPUs.
Some Linux distributions still support x86 architectures and hardware dating back to 1999, and it is time to clean up that code and add support for newer hardware, Lavender said.
For one, Intel is trying to remediate issues such as thread locking that wastes CPU cycles and drives up energy usage.
“The whole industry, I think, could contribute a lot better quality built into the open-source ecosystem (this one wouldn’t have to do with the Linux kernel in particular) and make it more resilient to badly written code,” Lavender said.
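To make that waste concrete, here is a minimal, hypothetical userspace sketch in C (it is not Intel’s kernel code, and the thread names and timings are invented for illustration): one thread busy-waits on a shared flag and burns a full core, while another blocks on a condition variable and lets the scheduler idle the core until the work arrives.

    /* Hypothetical illustration: a busy-waiting thread wastes CPU cycles,
     * while a condition-variable wait frees the core until work is ready.
     * Build with: cc -pthread example.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_int ready = 0;                  /* shared "work is ready" flag */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    /* Wasteful pattern: spin until the flag flips, consuming 100% of a core. */
    static void *spin_waiter(void *arg) {
        while (!atomic_load(&ready))
            ;                                     /* busy-wait: wasted cycles and energy */
        puts("spin_waiter: saw the flag");
        return NULL;
    }

    /* Cooperative pattern: sleep on a condition variable until signaled. */
    static void *cond_waiter(void *arg) {
        pthread_mutex_lock(&lock);
        while (!atomic_load(&ready))
            pthread_cond_wait(&cond, &lock);      /* core stays free for other work */
        pthread_mutex_unlock(&lock);
        puts("cond_waiter: saw the flag");
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, spin_waiter, NULL);
        pthread_create(&t2, NULL, cond_waiter, NULL);

        sleep(1);                                 /* pretend the producer is busy */

        pthread_mutex_lock(&lock);
        atomic_store(&ready, 1);                  /* publish the work */
        pthread_cond_broadcast(&cond);            /* wake the sleeping waiter */
        pthread_mutex_unlock(&lock);

        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

Busy-waiting can be justified for very short critical sections, but when it creeps into longer waits it produces exactly the wasted cycles and excess energy use described above.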
Intel earlier this year made an under-the-radar acquisition of the German open source company Linutronix. The deal also brought in Linutronix founder Thomas Gleixner, an outspoken and prominent voice in the open source community.
Linutronix is the architect of the PREEMPT_RT patch set, which allows the Linux kernel to prioritize real-time applications for processing. The patch set had languished for about two decades for lack of developers and financial backing, but Intel’s involvement will help get it upstreamed into the mainline Linux kernel faster.
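For a sense of what real-time support looks like from an application’s point of view, the sketch below uses the standard POSIX threads API to request the SCHED_FIFO real-time policy. It is a generic illustration rather than anything from Linutronix or Intel, and the priority value and thread body are arbitrary assumptions.

    /* Minimal sketch of a thread asking for real-time scheduling (SCHED_FIFO).
     * Requires root or CAP_SYS_NICE; build with: cc -pthread rt_example.c */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    static void *rt_task(void *arg) {
        /* A time-critical loop would go here (e.g. polling a sensor). */
        puts("running with real-time priority");
        return NULL;
    }

    int main(void) {
        pthread_attr_t attr;
        struct sched_param param = { .sched_priority = 80 };  /* 1..99 for SCHED_FIFO */
        pthread_t thread;

        pthread_attr_init(&attr);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &param);
        /* Use these attributes instead of inheriting the parent's policy. */
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

        int err = pthread_create(&thread, &attr, rt_task, NULL);
        if (err) {
            /* Typically EPERM when run without the needed privileges. */
            fprintf(stderr, "pthread_create: %s\n", strerror(err));
            return 1;
        }
        pthread_join(thread, NULL);
        return 0;
    }

A stock kernel accepts the same request, but the PREEMPT_RT patches make most of the kernel preemptible, so the worst-case latency such a thread experiences becomes far more predictable.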
AI Factory
“We want to drive more real-time Linux capabilities because, for the edge and automotive industry, that’s a good investment to make. As we make those investments, we want to monetize the top of the stack,” Lavender said.
Chipmakers are increasingly counting on software for revenue, offering services on top of their chips. For example, Intel is looking to offer code-optimization services through assets like Granulate, another company it recently acquired. Intel is taking a more conventional open source approach: letting partners develop tools around its code while it monetizes the services layered on top.
Nvidia is taking a largely closed-source approach, bundling its software, hardware and services. For example, Nvidia offers what it calls an “AI factory” built from its GPUs, software stacks and data sets. Companies can drop raw AI requests into Nvidia’s factory, and the output is a fully realized AI product ready for deployment.