AWS Goes Deep on AI, Chip Power… and Cost Savings

Amazon Web Services highlighted AI and LLMs as a common thread among the slew of new products and services announced at re:Invent.
Dec 7th, 2023 9:20am
Featured image via Unsplash.

Amazon Web Services lived up to its tradition at its annual re:Invent user conference last week, with a spate of announcements covering more powerful Graviton processors, databases, serverless and an onslaught of AI and LLM offerings. The announcements signal that AWS is ready to supply the requisite computing resources for organizations offloading operations onto the cloud, whether they have high scaling needs or are new to cloud native, LLMs and machine learning.

At the same time, costs continue to rise. Cost optimization and reining in cloud spending have become prevalent themes in IT and DevOps; it is hard to find someone these days who is not complaining about rising bills, and Amazon shares that guilt with Google Cloud, Azure and other cloud providers. In this context, AWS has not forgone its vision of the “everywhere cloud,” but it used the conference to communicate cost-optimization strategies, reflected in the key theme of Amazon CTO Werner Vogels’ keynote: the “frugal architect,” a philosophy of cost analysis, cutting and optimization.

Vogels struck a more sobering note when discussing sustainability, a growing concern given the environment and climate change. Yet when it comes to the industry’s handle on software’s CO2 emissions, as Niki Manoledaki, a software engineer at Grafana, astutely observed during KubeCon+CloudNativeCon, measuring the energy and carbon footprint of software is “not very widespread.”

But you have to start somewhere; at a minimum, extending observability to gauge resource consumption is a reasonable first step, and fresh ideas are urgently needed.
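For teams starting from existing tooling, that can be as simple as pulling utilization metrics that already exist. Below is a minimal sketch in Python with boto3, assuming an EC2 instance whose CPU utilization serves as a crude proxy for resource consumption; the instance ID is a placeholder.

```python
# Minimal sketch: pull hourly CPU utilization for one EC2 instance from
# CloudWatch as a rough proxy for resource consumption.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,  # one datapoint per hour
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), "% CPU")
```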

“Sustainability is a freight train that is coming your way that you cannot and should not escape. Additionally, self-imposed constraints around system building in terms of costs and sustainability is a very good idea,” Vogels said. “Try to believe that constraints, even self-imposed, can bring creativity.”

For organizations running retail operations on the cloud, “We need to understand that retail margins are razor thin and we need to have total control over our costs at any time,” Vogels said. “Now, I also know that quite a few of you are literally running hundreds of applications and it’s sometimes really difficult to really understand, sort of, what are the metrics” that organizations need to care about.

myApplications, announced last week, is intended to offer more visibility into application health, security and performance in AWS cloud environments. With it, resources are grouped under an application so teams can “get a single view of this observability into many of the standards, functional requirements and costs, all of which is a proxy for sustainability,” Vogels said.
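That grouping is tag-driven. As a hedged sketch (the “awsApplication” tag key and the ARN value here are illustrative; check the myApplications documentation for the exact convention in your account), associating an existing EC2 instance with an application could look like this in boto3:

```python
# Hedged sketch: tag an existing EC2 instance so it rolls up under an
# application-level view. Tag key, ARN and instance ID are placeholders.
import boto3

ec2 = boto3.client("ec2")

APPLICATION_ARN = "arn:aws:resource-groups:us-east-1:123456789012:group/my-app/abc123"  # placeholder

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[{"Key": "awsApplication", "Value": APPLICATION_ARN}],
)
```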

Amazon CloudWatch Application Signals, which Vogels also announced, gathers metrics, traces and logs for what AWS calls real-user and synthetic monitoring (presumably AI-assisted), and is designed to help instrument applications so they adhere to best practices for application performance.

“By automatically instrumenting the application that you’re building, you can have one single dashboard looking at all the metrics that are relevant for your EKS application,” Vogels said.
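Application Signals is meant to do this instrumentation automatically; for contrast, the sketch below shows what instrumenting a single operation by hand looks like with the OpenTelemetry Python SDK. This is illustrative only, not the Application Signals mechanism itself, which is configured through the CloudWatch agent; the service and span names are invented.

```python
# Manual tracing sketch with the OpenTelemetry Python SDK, exporting
# spans to the console for demonstration purposes.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # invented service name

with tracer.start_as_current_span("process_order") as span:
    span.set_attribute("order.items", 3)  # example business attribute
    # ... application logic would run here ...
```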

However, make sure that your observability metrics always “include costs and sustainability,” Vogels said.

Already, Amazon CloudWatch is “the most used observability platform worldwide,” EMA analyst Torsten Volk told The New Stack. “Now Amazon is pushing to have customers consolidate monitoring and logging across competing clouds and on-premises infrastructure to CloudWatch,” he said. “This places them in direct competition with today’s largest observability platform vendors such as Datadog, Dynatrace and New Relic.”

However, as Amazon never had a lot of success with its hybrid and multicloud offerings, Volk said: “It will be interesting to observe their progress in the general observability arena. They have added a number of AI capabilities at the infrastructure monitoring level and CloudWatch Application Signals now propels them into the app observability arena. In this area, as Vogels said, instrumentation is the one biggest pain point for app developers and whoever can help alleviate this pain best, will be in a great position to win.”

LLM Blast

To say that LLMs and machine learning were among re:Invent’s main themes would be an understatement. Announcements, workshops and talks extended across the many facets of how organizations, developers and operations teams can best latch onto the ML dragon’s tail. Where this will all end up, nobody knows exactly. AWS’s Bedrock announcements this week emphasized compatibility with the different LLMs and other ML tools and platforms organizations may adopt, and later replace, in the future.

“The main reasons customers gravitate towards Bedrock is the ability to select from a wide range of leading foundational models that support their unique needs,” Swami Sivasubramanian, vice president of database, analytics and ML at AWS, said during his keynote this week.

Because the industry is “still in early days” of ML and LLMs, models will continue to “evolve at unprecedented speeds and customers need the flexibility to use different models at different points for different use cases,” he said.
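That flexibility is easiest to see in code. The following hedged sketch calls Bedrock through one thin adapter so the underlying foundation model is a one-argument swap; the model IDs and request schemas are examples, since each model provider on Bedrock defines its own body format.

```python
# Hedged sketch: one adapter over the Bedrock runtime so the foundation
# model can be swapped per call. Model IDs and schemas are examples.
import json

import boto3

bedrock = boto3.client("bedrock-runtime")

def ask(model_id: str, prompt: str) -> str:
    if model_id.startswith("anthropic."):
        body = {"prompt": f"\n\nHuman: {prompt}\n\nAssistant:", "max_tokens_to_sample": 256}
    else:  # e.g., Amazon Titan text models use a different schema
        body = {"inputText": prompt}
    response = bedrock.invoke_model(modelId=model_id, body=json.dumps(body))
    return response["body"].read().decode("utf-8")

# Swapping models becomes a one-argument change:
print(ask("anthropic.claude-v2", "Summarize re:Invent in one sentence."))
print(ask("amazon.titan-text-express-v1", "Summarize re:Invent in one sentence."))
```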

It was “stunning” to see generative AI as the main topic of every single re:Invent keynote, with AWS executives claiming that LLMs will “enhance every single one of their services,” Volk said. “We all agree that LLM is a disruptive discipline, therefore AWS wholeheartedly embracing it makes sense. I heard them say many times how they are the leaders in generative AI, way ahead of Google and Microsoft, which of course is a statement that should be questioned,” Volk said. “However, LLM has leveled the AI playing field, with Google, the previous AI leader, losing a tremendous amount of credibility in that area, Microsoft closely partnering with OpenAI, the undisputed market leader in generative AI, and now Amazon focusing a significant amount of its product development effort on this topic. This race will be an interesting one to watch.”

Chips Ahoy

AWS also touted compute-power advances in its in-house Graviton processors, which it custom builds on a 64-bit Arm design. The new Graviton4 provides up to 30% better compute performance, 50% more cores and 75% more memory bandwidth than the current-generation Graviton3, which AWS says delivers the best price performance and energy efficiency for a broad range of workloads running on Amazon EC2.

“Graviton4 is the most powerful, and the most energy-efficient chip that we have ever built,” said AWS CEO Adam Selipsky. With 50% more cores and 75% more memory bandwidth than Graviton3, Graviton4 chips are 30% faster on average, and for certain workloads the gains are larger: 40% faster for database applications and 45% faster for Java applications, Selipsky said.
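For developers, adopting a new Graviton generation is mostly a matter of choosing an instance family. Here is a minimal boto3 sketch, assuming the R8g family that AWS previewed for Graviton4 and a placeholder Arm64 AMI:

```python
# Minimal sketch: request a Graviton-based instance. "r8g.large" reflects
# the R8g (Graviton4) family previewed at re:Invent; the AMI ID is a
# placeholder and must point to an arm64 image.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder arm64 AMI
    InstanceType="r8g.large",         # Graviton4-based family (preview at announcement)
    MinCount=1,
    MaxCount=1,
)
```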

Synonymous these days with AI and GPUs, Nvidia will play a part in AWS’s LLM and processor offerings. Nvidia CEO and founder Jensen Huang was on hand to discuss the companies’ partnership, including this week’s announcement that AWS will be the first cloud provider to offer Nvidia’s latest GH200 Grace Hopper superchips.

“We’re both really passionate about ARM processors,” Huang said. “And the reason why Arm is so incredible is because we can mold it to exactly the kind of computing needs we have. It’s incredibly low energy and incredibly cost-effective.”

Meanwhile, AWS has invested heavily in its own silicon for LLMs, “which is the right thing to do,” Volk said. “I doubt that there is much differentiation at that level between them, Azure and GCP, but from a marketing perspective, doubling down on LLM even in chip technology makes sense,” he said.
