Generative AI Cloud Platforms: AWS, Azure, or Google?
With the rise of generative AI, the top hyperscalers — Amazon Web Services, Google, and Microsoft — are engaged in yet another round of intense competition.
Generative AI needs massive computing power and large datasets, which makes the public cloud an ideal platform choice. From offering foundation models as a service to training and fine-tuning generative AI models, public cloud providers are racing to attract the developer community and the enterprise.
This article analyzes the evolving strategies of Amazon, Google, and Microsoft in the generative AI segment. The table below summarizes the current state of GenAI services offered by the key public cloud providers:
Amazon Web Services: Betting Big on Amazon Bedrock and Amazon Titan
Compared to its key competitors, AWS is late to the generative AI party, but it is quickly catching up.
Amazon SageMaker JumpStart is an environment for accessing, customizing, and deploying ML models. AWS recently added support for foundation models, enabling customers to consume and fine-tune some of the most popular open source models. Through its partnership with Hugging Face, AWS makes it easy to run inference on, or fine-tune, an existing model from a catalog of curated open source models. This is a quick way to bring generative AI capabilities to SageMaker.
AWS revealed Amazon Bedrock, currently in private preview, as a serverless platform for consuming foundation models through an API. Though AWS hasn’t shared many details, it looks like a competitive offering to Azure OpenAI. Customers will be able to access secure endpoints exposed through a private subnet of their VPC.
Amazon has partnered with GenAI startups such as AI21 Labs, Anthropic, and Stability AI to offer text- and image-based foundation models through the Amazon Bedrock API.
Amazon Titan is a collection of home-grown foundation models built by Amazon’s own researchers and internal teams. Titan is expected to include some of the models that power services such as Alexa, CodeWhisperer, Polly, Rekognition, and other AI services.
I expect Amazon to launch commercial foundation models for code completion, word completion, chat completion, embeddings, translation, and image generation. These models would be exposed through Amazon Bedrock for consumption and fine-tuning.
Amazon may also launch a dedicated vector database as a service under the Amazon RDS or Aurora family of products. For now, it supports pgvector, a PostgreSQL extension for performing similarity searches on word embeddings, through Amazon RDS.
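To make the similarity-search idea concrete, here is a minimal pure-Python sketch of the nearest-neighbor query that pgvector performs inside PostgreSQL (where cosine distance is exposed as the `<=>` operator). The table rows and embeddings below are invented toy data, not anything from Amazon's services.

```python
import math

def cosine_distance(a, b):
    """Cosine distance, the metric pgvector exposes as the <=> operator."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def nearest(query, rows, k=2):
    """Rough equivalent of:
    SELECT id FROM items ORDER BY embedding <=> :query LIMIT :k;"""
    return sorted(rows, key=lambda r: cosine_distance(r["embedding"], query))[:k]

# Toy embeddings standing in for rows of a table with a vector column.
rows = [
    {"id": "doc-1", "embedding": [0.9, 0.1, 0.0]},
    {"id": "doc-2", "embedding": [0.0, 1.0, 0.1]},
    {"id": "doc-3", "embedding": [0.85, 0.2, 0.05]},
]

print([r["id"] for r in nearest([1.0, 0.0, 0.0], rows)])  # doc-1 is closest
```

In production the database does this work server-side, with an index over the vector column, rather than scanning rows in application code.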
Google Cloud: Built on the Foundations of PaLM
A plethora of GenAI-related announcements dominated Google I/O 2023. Generative AI is important for Google, not just for its cloud business but also for its search and enterprise businesses based on Google Workspace.
Google has invested in four foundation models: Codey, Chirp, PaLM, and Imagen. These models are available through Vertex AI for Google Cloud customers to consume and fine-tune with custom datasets. The model garden available through Vertex AI has open source and third-party foundation models. Google has also launched a playground (GenAI Studio) and no-code tools (Gen App Builder) for building apps based on GenAI.
Extending the power of LLMs to DevOps, Google has also integrated the PaLM 2 API with Google Cloud Console, Google Cloud Shell, and Google Cloud Workstations, adding an assistant that accelerates operations. This capability is available through Duet AI for Google Cloud.
A native vector database is missing from Google’s GenAI portfolio. Google should add the ability to store and search vectors in BigQuery and BigQuery Omni. For now, customers will have to rely on the pgvector extension added to Cloud SQL or use a third-party vector database such as Pinecone.
For a detailed review of Google’s generative AI strategy, read my deep dive analysis published at The New Stack.
Microsoft Azure: Making the Most of Its OpenAI Investment
With an exclusive partnership with OpenAI, Microsoft is ahead of its competitors in the generative AI game. Azure OpenAI is one of the most mature and proven GenAI platforms available in the public cloud.
Azure OpenAI brings most of the foundation models (excluding Whisper) from OpenAI to the cloud. Available through the same API and client libraries, models such as text-davinci-003 and gpt-35-turbo can be consumed quickly on Azure. Because deployments are launched within an existing subscription, and optionally inside a private virtual network, customers benefit from security and privacy for their data.
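As a sketch of what "the same API" looks like on Azure, the snippet below builds the REST request for an Azure OpenAI chat completion. The resource name, deployment name, and API key are placeholders; the actual HTTP call is omitted, since it requires a provisioned Azure OpenAI resource.

```python
import json

# Placeholder values; a real call needs your own Azure OpenAI resource.
RESOURCE = "my-resource"        # hypothetical Azure OpenAI resource name
DEPLOYMENT = "gpt-35-turbo"     # deployment name chosen when deploying the model
API_VERSION = "2023-05-15"      # a GA api-version at the time of writing

def chat_request(prompt):
    """Build the URL, headers, and body for an Azure OpenAI chat completion."""
    url = (
        f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
        f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    )
    headers = {"api-key": "<your-key>", "Content-Type": "application/json"}
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]})
    # The call itself would be: requests.post(url, headers=headers, data=body)
    return url, headers, body

url, headers, body = chat_request("Hello")
print(url)
```

Note that the model is addressed by the customer-chosen deployment name in the URL path, not by a global model identifier, which is the main difference from calling OpenAI's own endpoint.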
Microsoft has integrated foundation models with Azure ML, a managed ML platform as a service. Customers can use familiar tools and libraries to consume and fine-tune the foundation models.
Microsoft has also invested in an open source project named Semantic Kernel, which aims to bring LLM orchestration, such as prompt engineering and augmentation, to C# and Python developers. It is similar to LangChain, a popular open source library for interacting with LLMs.
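The orchestration pattern these libraries provide can be sketched in a few lines: retrieve some context, splice it into a prompt template, and hand the result to an LLM. This is an illustration of the idea only, not the actual Semantic Kernel or LangChain API; the template, documents, and retrieval logic are invented.

```python
# Prompt template with named slots, the basic unit of LLM orchestration.
TEMPLATE = "Answer using only this context:\n{context}\n\nQuestion: {question}"

def retrieve(question, documents):
    """Naive retrieval step: keep documents sharing a word with the question."""
    words = set(question.lower().split())
    return [d for d in documents if words & set(d.lower().split())]

def build_prompt(question, documents):
    """Prompt augmentation: splice the retrieved context into the template."""
    context = "\n".join(retrieve(question, documents)) or "(none)"
    return TEMPLATE.format(context=context, question=question)

docs = ["bedrock exposes foundation models through an api.",
        "titan is a family of amazon models."]
prompt = build_prompt("What does bedrock expose?", docs)
print(prompt)  # the augmented prompt would then be sent to the model
```

Real orchestration libraries add chaining of multiple model calls, pluggable memory, and connectors to vector stores on top of this core template-and-augment loop.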