
Intel Cues New Xeon Chips for an AI Future

Jul 11th, 2017 9:01am

Should software define how chips are designed? Or vice versa? Intel has struggled with that question for decades, but it is now downplaying individual CPUs and treating the data center like one large computer.

That was the underlying theme at Intel’s launch of new Xeon Scalable processors on Tuesday at an event in Brooklyn, New York. Intel called the new chips the “biggest data center advancement in a decade” and a general-purpose compute engine that would drive artificial intelligence, networking, storage and the cloud.

Intel is trying to break away from being a CPU-only company and is building a roster of chips, including FPGAs and ASICs, to drive computing in PCs, servers, drones and autonomous cars. The new Xeon chips are designed to work well with co-processors, one example being Intel’s Altera FPGAs for machine learning.

The Xeon chips are also designed to work harmoniously with network infrastructure, in this case Intel’s homegrown high-speed Omni-Path interconnect. The goal is to speed up communication between storage, memory and other hardware, and to increase the overall productivity of a data center.

Intel’s Xeon Scalable chips are named like credit cards, with the most valuable metal name denoting the fastest tier. The Xeon Scalable Platinum processor is the fastest with up to 28 cores, followed by the Gold (up to 22 cores), Silver (up to 12 cores) and Bronze (up to 8 cores) chips. These processors are based on the Skylake architecture and have been five years in the making, said Navin Shenoy, executive vice president and general manager for the Data Center Group at Intel.

The Xeon Scalable processor is up to 1.6 times faster than its Broadwell-based predecessor, the chip maker claimed. Intel has also incorporated a new mesh design to speed up communications between CPUs and memory. The mesh cuts latency by adding more paths over which CPUs and memory can communicate, an improvement over the ring interconnect used on earlier chips. The mesh design itself isn’t new; it has been used in research chips and commercial products like the 2010 Tilera Tile-GX chip.
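
To see why a mesh helps, compare average hop counts between nodes. The C sketch below is a rough, back-of-the-envelope illustration only; the node counts and grid shape are assumptions made for the example, not Intel’s actual topology.

```c
/*
 * Illustrative only: average shortest-path hops between two nodes on a
 * bidirectional ring versus a 2D mesh of roughly the same size.  Fewer
 * average hops is the basic reason a mesh cuts core-to-memory latency.
 */
#include <stdio.h>
#include <stdlib.h>

/* Average hops on a bidirectional ring of n nodes. */
static double ring_avg_hops(int n) {
    long total = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            int d = abs(i - j);
            if (d > n - d) d = n - d;   /* take the shorter way around */
            total += d;
        }
    return (double)total / ((double)n * n);
}

/* Average Manhattan-distance hops on a rows x cols 2D mesh. */
static double mesh_avg_hops(int rows, int cols) {
    long total = 0;
    int n = rows * cols;
    for (int a = 0; a < n; a++)
        for (int b = 0; b < n; b++)
            total += abs(a / cols - b / cols) + abs(a % cols - b % cols);
    return (double)total / ((double)n * n);
}

int main(void) {
    /* 28 nodes in both cases; sizes chosen only for illustration. */
    printf("ring, 28 nodes: %.2f average hops\n", ring_avg_hops(28));
    printf("mesh, 4 x 7:    %.2f average hops\n", mesh_avg_hops(4, 7));
    return 0;
}
```

On these made-up sizes the ring averages about 7 hops per transfer and the mesh about 3.5, which is the intuition behind the latency claim.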

There are more than 100 features that make Xeon Scalable faster for machine learning and inferencing, said Naveen Rao, vice president for the artificial intelligence products group at Intel, in a video. Intel claimed the chips were 113 times faster for AI workloads than older chips, a figure based on custom software. The AVX-512 instruction set is a key AI feature, providing increased parallelism and vectorization, which is important for faster processing of machine learning algorithms.

The previous-generation Xeon chips supported 256-bit AVX2, so the new chips’ 512-bit vectors double the floating-point operations per cycle. The AVX-512 feature is also important for scientific modeling and simulation.
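
The effect of the wider vectors is easy to picture in code. The following is a minimal, hypothetical sketch, not Intel’s or any vendor’s benchmark code, of a single-precision dot product, the kind of kernel at the heart of many machine-learning workloads, written with AVX-512 intrinsics; each fused multiply-add processes 16 floats per instruction, versus 8 with 256-bit AVX2.

```c
/*
 * Hypothetical sketch of an AVX-512 single-precision dot product.
 * Each _mm512_fmadd_ps handles 16 floats per instruction (8 with AVX2).
 * Compile with, e.g., gcc -O2 -mavx512f; for brevity the sketch assumes
 * n is a multiple of 16.
 */
#include <immintrin.h>
#include <stdio.h>

static float dot_avx512(const float *a, const float *b, size_t n) {
    __m512 acc = _mm512_setzero_ps();
    for (size_t i = 0; i < n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);   /* load 16 floats from a */
        __m512 vb = _mm512_loadu_ps(b + i);   /* load 16 floats from b */
        acc = _mm512_fmadd_ps(va, vb, acc);   /* acc += va * vb, 16 lanes at once */
    }
    float lanes[16];
    _mm512_storeu_ps(lanes, acc);             /* sum the 16 partial results */
    float sum = 0.0f;
    for (int i = 0; i < 16; i++) sum += lanes[i];
    return sum;
}

int main(void) {
    float a[32], b[32];
    for (int i = 0; i < 32; i++) { a[i] = 1.0f; b[i] = 2.0f; }
    printf("dot = %.1f\n", dot_avx512(a, b, 32));   /* expect 64.0 */
    return 0;
}
```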

Alibaba applied the AVX-512 instructions to several workloads, such as image recognition, and saw significant reductions in latency along with gains in throughput. With the Xeon Scalable chips, the Chinese company also saw an 80 percent improvement in data-center applications, said Ming Zhou, general manager for Alibaba Infrastructure Service, in a video.

Amazon Web Services will also put the Xeon Scalable chips to work in its cloud. Amazon has worked with Intel to improve inferencing on its EC2 C5 instances by up to 100 times, primarily by optimizing its deep-learning engines and drawing on Intel’s improvements in the latest version of the Math Kernel Library.
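
Much of that deep-learning math reduces to dense matrix multiplication, which the Math Kernel Library exposes through the standard CBLAS interface. The sketch below is purely illustrative; the matrix sizes and build options are assumptions for the example, not Amazon’s code, and it simply shows the kind of MKL call such inference engines lean on.

```c
/*
 * Illustrative only: a single-precision matrix multiply (SGEMM) through
 * Intel MKL's CBLAS interface, the dense kernel behind much of
 * deep-learning inference.  Build against MKL, e.g. with Intel's compiler
 * and its MKL option, or gcc plus the MKL link line.
 */
#include <mkl.h>
#include <stdio.h>

int main(void) {
    const int M = 4, N = 4, K = 8;      /* C (MxN) = A (MxK) * B (KxN) */
    float A[4 * 8], B[8 * 4], C[4 * 4];

    for (int i = 0; i < M * K; i++) A[i] = 1.0f;
    for (int i = 0; i < K * N; i++) B[i] = 0.5f;

    /* C = 1.0 * A * B + 0.0 * C, row-major storage, no transposition. */
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                M, N, K, 1.0f, A, K, B, N, 0.0f, C, N);

    printf("C[0][0] = %.1f\n", C[0]);   /* 8 terms of 1.0 * 0.5: expect 4.0 */
    return 0;
}
```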

Google will also offer servers based on the Xeon Scalable chips in its cloud services.

At the event, AT&T’s chief strategy officer John Donovan said the company had seen a 30 percent improvement in data-center server performance over the prior chips. AT&T has for years used a software-defined network in which telecom and wireless capacity can be quickly reassigned based on customer needs, with Intel’s CPUs playing a role in speeding up the infrastructure. The new Xeon Scalable chips will also assist in AT&T’s rollout of FlexWare, an NFV service that hands customers control over how network resources are assigned.

Intel has dominated the server market for years, but the Xeon Scalable chips are arriving in a more competitive landscape. AMD recently launched its Epyc x86 server chips, which execute AVX instructions on narrower 128-bit units but are priced lower for cloud service providers. Qualcomm is pushing out ARM server chips that are also targeted at cloud service providers. On the high end, IBM will roll out its Power9 server chips later this year, which have drawn the attention of companies like Google and Rackspace.

Intel made no mention of support for high-speed interconnects like Gen-Z, which is being backed by its server chip rivals. Intel is also taking a risk by pushing expensive proprietary technologies, such as Omni-Path and the not-yet-mature Optane storage, alongside the server chips, which may be unattractive to customers. However, companies will be able to buy servers from the likes of Lenovo, HPE and Dell with just the chips and without those add-ons.

