
This Week in Numbers: Favored Processor Architectures for Artificial Intelligence

Aug 12th, 2017 9:00am

If you were developing a minimum viable product (MVP) for the consumer software market, would you build it for Windows PCs, Macs, Linux machines, and iOS and Android devices all at once? No, that would be foolish. Yet it appears that developers are refusing to choose between CPUs, GPUs and field-programmable gate arrays (FPGAs) when optimizing their artificial intelligence (AI) applications.

According to an Evans Data survey, the vast majority of developers optimize their AI applications for specific hardware. Of those who do, 63 percent target CPUs, with almost as many (62 percent) targeting GPUs. For a newer technology, FPGAs do well, with 53 percent targeting them. Only 4 percent target other accelerators such as Tensor Processing Units (TPUs). This data confirms the skepticism Moor Insights analyst Karl Freund recently expressed about the market for ASICs.

Even though each architecture has advantages for different use cases, that does not mean a developer can focus on all of them. The average developer should probably ignore much of the vendor hype about highly specialized AI hardware. For now, a good strategy is to focus on either GPUs or FPGAs while continuing to monitor the other category.

Will one platform emerge that allows developers to program for multiple types of hardware at once? Despite several such products for mobile app developers, this approach never gained critical mass in that market. Instead, developers looked to common abstractions like APIs and HTML browsers as a way to do an end run around hardware specs. Whether that will happen with AI and machine learning applications is unknown.

However, another approach, common standards, may have a better chance at viability. The Khronos Group is an industry-supported body that promotes open standards and cross-platform technology. So far, the group has mostly focused on compatibility between types of GPU hardware. Looking at its roadmap, we wonder whether the OpenCL standard can move beyond its FPGA focus and become an easy way for developers to migrate applications between different hardware categories.

Recent TNS coverage of this topic can be found in FPGAs and the New Era of Cloud-based ‘Hardware Microservices’ as well as coverage about Apache Spark from the NVIDIA GPU Technology Conference.
