AMD has emerged as a legitimate threat to Intel’s dominance in the chip market, but it also knows it needs a coherent software portfolio to complement its hardware offerings.
“We are a hardware company, but we are very clear how important the software platforms are to build out the solution set,” said the company’s CEO, Lisa Su, during an investor conference last week.
The company laid out its long-term software strategy during the conference, including new drivers and middleware offerings to speed up applications running on its CPUs, graphics processors, AI accelerators and networking chips.
Impact of Acquisitions
AMD this year made two acquisitions that give it a comprehensive stable of chips it can sell to customers. The company bought Xilinx in a $50 billion deal, which adds specialized AI chips for data centers, cars, equipment and edge devices. AMD also agreed to buy networking hardware company Pensando for $1.9 billion.
The acquisitions also boost AMD’s software stack for artificial intelligence, which the company views as one of the pillars for growth in the coming years.
“Today we have approximately 5,000 software engineers. That’s been strengthened by bringing Xilinx and Pensando into the mix. But we have been greatly increasing our own efforts in this area,” Su said.
AMD is making broad investments in enablement software, including drivers, tools and compilers, to make its CPUs and GPUs run better, Su said.
Integrating the Stacks
The company today offers a broad set of software tools for its chips, but those tools are mostly disconnected from one another.
The AI software stack for CPUs includes optimized inference models, compilers, libraries and runtimes for Linux and Windows. ROCm, which is open source, provides a similar set of tools for the heavy lifting of parallel applications across CPUs and GPUs. For its specialized AI chips, the company has Xilinx’s Vitis platform.
The next step is to integrate these disconnected software stacks into a single platform.
“It’s not just silicon…we have to enable software development,” said Victor Peng, president of AMD’s adaptive and embedded computing group, and formerly CEO of Xilinx.
Unified AI Stack
AMD announced a unified software strategy called Unified AI Stack 2.0, which unites the ROCm, CPU and Vitis software environments. The front end, which is focused on inferencing, will support major industry frameworks, including TensorFlow and PyTorch. The development and deployment stacks will include tools for quantization, which reduces the numerical precision of a model’s weights, and pruning, which eliminates parts of the network, both to improve performance and reduce a model’s resource requirements.
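To make those two optimization techniques concrete, here is a minimal sketch of magnitude-based pruning and symmetric int8 quantization applied to a stand-in weight matrix. This is an illustrative NumPy example, not AMD’s actual tooling; the function names and the toy weights are assumptions for demonstration.

```python
import numpy as np

# Hypothetical weight matrix standing in for one layer of a trained network.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)

def prune(w, sparsity):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_int8(w):
    """Symmetric linear quantization of float32 weights to int8 plus a scale factor."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

pruned = prune(weights, sparsity=0.5)       # roughly half the weights become zero
q, scale = quantize_int8(pruned)            # int8 storage, 4x smaller than float32
dequantized = q.astype(np.float32) * scale  # approximate reconstruction at run time
```

Pruned weights can be stored sparsely and skipped at inference time, while int8 weights shrink memory traffic and map onto integer math units, which is why deployment stacks typically apply both before shipping a model.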
“We’re going to unify even more of the middleware, or now we’re going to have a commonality in terms of our ML graph compiler. We’re going to have much more commonality in our library APIs and inferences. We’re going to definitely also roll out a lot more pre-optimized models for these targets,” Peng said.
AMD will also deliver more updates in the training stack, Peng said, but didn’t announce specific tools at the conference.
AMD’s rivals Intel and Nvidia already have well-established software stacks to complement their hardware offerings and are looking to generate revenue through subscription services.
Nvidia’s CUDA, introduced in 2007, has morphed from a development platform into a suite of proprietary AI applications for automotive, security, quantum computing, healthcare and other vertical markets. The applications are designed to run best on Nvidia GPUs.
Intel is developing cloud-based software services based on its hardware that will be offered on a subscription basis. One such project, called Project Amber, is an attestation service that independently verifies the trustworthiness of customer compute assets. The company will also provide gaming services from its new GPUs installed in cloud servers.