Google’s Duet AI Launches GenAI across Full SDLC in the Cloud
What does a day in the life of a developer look like at this stage of generative AI? What does the full software development lifecycle (SDLC) in the cloud look like with GenAI assistance built into the developer experience? At a time when cognitive load and pressure to deliver faster run high in the face of both tech layoffs and an increasingly complex cloud landscape, can generative AI already drive value for software developers?
Today, Google released Duet AI for Developers – which has since been renamed Gemini – into general availability. Duet AI for Developers includes coding assistance, integrated development environment (IDE) and console chat, and ops tooling, with the aim of integrating assistance right where developers are already working.
Developer Advocate Megan O’Keefe and Chief Evangelist Richard Seroter, both of Google Cloud, took this occasion to talk to The New Stack to help our readers begin to imagine the end-to-end developer experience with generative AI: not just within Google Cloud, but across Search, Bard AI, Google Workspace and more, aiming to meet developers where they already are rather than across the usual 40 tabs and 14 tools.
“So much of software development isn’t coding. It is conversations, email threads, messy whiteboard sessions and discussions,” O’Keefe told The New Stack, reflecting on their past life as a software developer, which had much of their day taken up by design, operations, orchestration and security. “Often the easiest part is writing those 10 lines of code. It’s everything before, during and after outside of that IDE.”
Following a year that saw more of a prioritization of developer productivity than ever before, it’s time to get past the promise of GenAI in order to understand how it can be applied right now to the average dev’s day.
Generative AI for the Software Developer
Duet AI is already an AI collaborator available for some clients of Google Cloud, Workspace, Docs, Gmail, chat, and more. In addition, released to general availability today, Duet AI for Developers integrates AI assistance into IDEs and Google’s own Cloud Console.
“It’s an integrated chat. It’s for security and SRE [site reliability engineering], and data and dev,” Seroter explained. “So it’s more or less putting AI into the cloud experience,” making the tools where devs are already working smarter with AI.
For the purposes of our discussion about what a “day in the life” of an AI-assisted team might look like, O’Keefe donned the hat of a TypeScript engineer at an online grocery retailer, “tasked with delivering a small feature into production in a short amount of time.” In this case, it’s an e-commerce site running on Google Cloud, and that new feature is a new products page showcasing the latest snacks.
This assignment came in through an email. They use Gmail’s “help me write” feature (currently available to some testers in U.S. English) to talk out the design objectives and to book a meeting room, where, with a colleague, they “rubber duck” or discuss and then whiteboard to plan out the implementation.
O’Keefe goes back to their desk to clean up the whiteboard diagram with the Google Cloud Architecture Diagram Tool.
From there, O’Keefe uploads their architectural diagram to Bard, which leverages Google Lens to read it. Together, dev and bot have a conversation and brainstorm around the architecture.
“Bard is able to understand the contents of this diagram, know what Google Cloud products we are using, and get the juices flowing here,” O’Keefe said. “It’s not writing my design doc for me, but it is helping with inspiration.” They then export the chatbot conversation into Docs, where, with the help of “help me write,” they and their colleague create an outline. This helps them focus on trickier design questions, they said, like how they might cache Firestore document database queries.
At this point in the demo, O’Keefe points out that they hadn’t even opened an IDE yet — which holds true to the typical software development lifecycle.
“What you see here is the frontend team, which is a totally separate team working on a mock-up for this new feature page,” O’Keefe explained. “And my job is going to be to take this and write the backend code using the help of AI assistants.”
Once the design docs are approved, it’s time to start coding that backend.
“Any customer that builds an API needs a good amount of management — long specs outlining what the API does, plus work managing things like proxies. So Apigee’s recently launched Duet AI feature is around OpenAPI spec generation.” Here O’Keefe prompts Apigee in natural language to generate a starter specification based on new and existing products.
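The sort of starter spec such a prompt returns might resemble this minimal OpenAPI fragment — the title, path and schema names here are illustrative assumptions, not actual Apigee output:

```yaml
openapi: 3.0.3
info:
  title: Products API          # illustrative name
  version: 1.0.0
paths:
  /products/new:               # hypothetical endpoint for the new-arrivals feature
    get:
      summary: List recently added products
      responses:
        "200":
          description: An array of new products
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/Product"
components:
  schemas:
    Product:
      type: object
      properties:
        id:
          type: string
        name:
          type: string
```

The value of generating this skeleton from a prompt is that the developer starts from a reviewable draft rather than a blank page, then edits it to match the real service.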
“It comes back to engineering culture and the inherent human part of generative AI which is that it is here to help us as humans. It is not here to automate things away. Because if something goes wrong, we won’t know how to fix it. The onus is on me, the developer, to understand the output, to make sure that this is going in a production dashboard, and that I am working with experts.” — Megan O’Keefe, Google
The next step is to query a Google Cloud database using a client. At this point, especially if they were new to Google Cloud, they would typically have to open up quite a few tabs to search Google and Stack Overflow for next steps, alongside reading the documentation.
“I’d have to learn how the client works,” they said, but “all I want is to get new products — it should be pretty simple to query. But, if I’m new to Google Cloud, it’s not so simple. What we can do is use Duet AI’s code completion to help here and do what AI can infer, based on the contents of my open file [and] what my database schema is. It knows what that Firestore [document database] call should look like.”
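To make that concrete, here is a sketch in TypeScript of what such a completed query boils down to. The collection and field names (`products`, `isNew`, `createdAt`) are illustrative assumptions, not the demo’s actual schema; the commented-out chain shows the shape of the Firestore client call, and the local function below mirrors the same logic on an in-memory array so it can run anywhere:

```typescript
// A Duet AI-completed Firestore client call might look roughly like
// (illustrative collection and field names, not the demo's real schema):
//
//   db.collection("products")
//     .where("isNew", "==", true)
//     .orderBy("createdAt", "desc")
//     .limit(10)
//     .get();
//
// That query is logically equivalent to filtering and sorting in memory:

interface Product {
  name: string;
  isNew: boolean;
  createdAt: number; // epoch milliseconds
}

function newestProducts(products: Product[], limit = 10): Product[] {
  return products
    .filter((p) => p.isNew)                    // only new arrivals
    .sort((a, b) => b.createdAt - a.createdAt) // newest first
    .slice(0, limit);                          // cap the result count
}
```

The point of the demo is that the assistant fills in the `where`/`orderBy`/`limit` chain from context, so a developer new to the client doesn’t have to dig through docs to learn that shape.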
Of course, once you build it, you have to test it. Duet AI chat is trained on Google Cloud documentation and sample code so it can talk out the error and help O’Keefe fix it. They continued that, “An eternal problem when writing code, especially prototyping, is the debugging step and trying to figure out: What is happening? Why is this error occurring? My thing is not working, help!”
They spent much of their life as a developer searching for answers — looking to Google, Stack Overflow, Reddit and colleagues to help solve problems. And they are not alone. Last year’s Stack Overflow Survey found that 62% of all respondents spent more than 30 minutes a day searching for answers or solutions to problems, while 25% spent more than an hour — every day. This avoidable frustration breaks flow state and increases cognitive load and developer burnout. Integrating that help into the developer workflow can dramatically drive developer productivity, allowing more problem-solving and less frustration.
O’Keefe even said you can already copy/paste an error into Google and find some generative AI ready to help.
Generative AI for the DevOps Side of Dev Work
So once O’Keefe is done designing, building and testing their new feature, it’s time to release. This is where the DevOps workload often creates a lot of fences, frustrations and friction on the road to production. This is what Syntasso’s Abby Bangser would call “not unimportant, but not differential work.” A lot of these important hurdles involve humans in the loop for approvals, deployments and code reviews.
“A human in the loop still is incredibly important,” O’Keefe said. “Let’s say I’ve shipped this feature. It has rolled out the new arrivals page [and it] is visible to our customers. So that’s exciting. But there’s so much that happens. Imagine I’m going to go on call as an engineer. The first thing I need to be able to find as a new Google Cloud Developer are the logs and the metrics for my service. So what you’re seeing here is [that] I’ve opened up Duet AI in the Google Cloud Console.”
Kind of like 1996’s Microsoft Clippy, but useful: developers can click on the Duet AI icon inside the console and ask questions about where to find the logs, how to query them and what a given log message means. What would be at least six open tabs — which break the ability to achieve flow state — now happens within the console where they already are. It also becomes a performance win without a gazillion tabs open.
Next up is O’Keefe’s favorite upcoming feature: “Help me modify,” which is used to create complex queries on service health. For instance, if they aren’t an expert in Prometheus, they could leverage this, in natural language, to describe what they hope to achieve, with Duet AI responding with the proper syntax in place.
“Querying metrics, things like latency, or these sorts of deep operational-level things that devs may not really know — these are important signals, like SRE tasks, alerting, restoring from outages — but the query syntax is really hard to understand,” O’Keefe said, noting this is especially the case with PromQL or SQL queries. “You can do a natural language prompt like, OK, I want this exact query but shown for each Google Cloud region and zone, and it can generate that query for you, which you can then pop into a dashboard as a chart. It’s bridging a knowledge gap. It is kind of upskilling me and helping me learn.”
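As an illustration of the kind of query being generated, a per-region latency breakdown might look like this in PromQL — the metric and label names here are assumptions for illustration, not taken from the demo:

```promql
# 95th-percentile request latency over 5 minutes, broken out by region
histogram_quantile(
  0.95,
  sum by (region, le) (
    rate(http_request_duration_seconds_bucket[5m])
  )
)
```

Getting the `histogram_quantile`/`rate`/`sum by` nesting right is exactly the kind of syntax knowledge O’Keefe describes the assistant bridging.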
But is the developer actually learning or is GenAI just doing it for them?
“I think it comes back to engineering culture and the inherent human part of generative AI, which is that it is here to help us as humans; it is not here to automate things away,” O’Keefe said. “Because if something goes wrong, we won’t know how to fix it. The onus is on me, the developer, to understand the output, to make sure that this is going in a production dashboard, and that I am working with experts who do know what they’re talking about — in this case that would be the SRE and ops team to verify this output.”
This critical consideration of generative AI becomes even more important, they continued, when applied to CI/CD pipelines, orchestration and security.
“We’re probably supervising a little more than we’re creating in an AI world […] but if we don’t know the things, we can’t validate their responses,” Seroter said, echoing his colleague and even going so far as to recommend that customers not fire staff, but instead invest in upskilling them to better prepare them to work with generative AI. “This is a call to action to IT managers and leaders: It’s time to upskill, and this is what makes your team even more exceptional, but don’t negate the fundamentals.” He analogized this to how his son is prepping for his driving test; in California, you aren’t allowed to use the rear-view camera during the exam. We all must learn the fundamentals to be able to then leverage AI with a critical eye, was his point.
Generative AI Must Enable Developer Flow State, Not Impede It
One of the biggest objectives of developer productivity engineering is increasing developer flow state, where they can really get in the zone, working free of distractions, context switching and anything that makes it hard to get back on task. AI isn’t about eliminating collaborative tasks, but about increasing their effectiveness.
Seroter said that it’s about getting quick feedback within the context of your organization and technology. “How long does it take a lot of devs to get an architecture review board set up — a day, a week, a month? If I can actually get some quick expert-level architecture guidance — even if it’s not perfect and I need to double-check key points — that’s going to help teams validate their designs more quickly. If I get that architecture guidance and all these things, I’m not asking every developer to queue up for days and weeks waiting to get tests reviewed, architectures reviewed, coding assistance.”
This is not a generic model, Seroter emphasized. Duet’s generative AI is able to provide expert advice because it has been trained on Google Cloud’s documentation and samples. He said, “We want this to be a Google Cloud expert for you.”
As of today, all Google Cloud customers will be able to opt into this service. Initially, it is only trained on Google Cloud products, docs and code samples, but customer-driven customization is on the roadmap.
“A company I was talking to yesterday, they would like to be able to come in and say, ‘Hey, does this code meet our security standards?’” Seroter said. “When you think of the entire SDLC, there has to be a level of personalization or sort of guidance that also knows enough about you to tell you, ‘Hey, that’s cool, generally, but that’s not cool for us. That’s important.’”
Similarly, Duet AI could also help write tests. After all, second only to keeping docs up to date, developers complain about and habitually avoid writing unit tests.
“It can look at the structure of other things in my open file. So imagine I have a big test file with other tests for all of my existing functions,” O’Keefe said. “It can use the same tools, the same test structure, the same best practices that we are using for our current tests and output code that matches.”
As generative AI matures, context will be what really drives value — as a helper not a replacement of the developer.
In this new Age of GenAI, the difference between the humans and the robots should be more striking, not less. Generative AI, especially when leveraged within the context of your organization and your role, should work to enable these creative workers to focus on problem-solving, not rote and repetitive tasks.
“We’ll get there as an industry,” Seroter said, probably faster than we can even imagine. “That’s where we’re trying to think in that big picture, not just the hands-on keyboard coding time.”
Update: This February, Google announced that its chatbot Bard and its developer tooling suite Duet AI would be combined under one offering called Gemini. This is part of the effort to create a more seamless developer experience across Google Cloud, and it portends deeper integration of these generative AI features as Alphabet folds Duet AI into Google Workspace.