Unleash the Power of Generative AI to Shift Left
Nearly every day, companies push out new generative AI announcements as the technology's rapid advancement reaches just about every industry. Software development in particular is being transformed, and the way teams work is changing with it.
Software engineers are now the AI role organizations hire most often, McKinsey found, more often than data engineers and AI data scientists, indicating that enterprises have shifted from experimenting with AI to actively embedding it into their applications. According to IDC Group’s Stephen Elliot, the AI goal for modern organizations should “go beyond chatbots and language processors to holistic intelligence solutions that ensure a single source of truth, break down silos, ensure knowledge workers work smarter and deliver clear business results.”
While the breadth of new possibilities for using AI in any field can seem immense, one clear use case in software development is using generative AI tools powered by large language models (LLMs) to significantly amplify an engineer’s productivity throughout the entire software development life cycle. In observability, this means empowering engineers, regardless of their experience level, to write new code and test cases, understand legacy code, and identify and resolve issues faster before they affect customers and the business.
In this piece, I will outline three key stages in applying generative AI to improve software development and help engineers shift left, keeping pace with the evolution of observability.
Stage One: Enhancing the User Experience through AI Assistance
Right now, generative AI solutions are still in their infancy and typically require context switches from domain-specific products to a general AI assistant, like ChatGPT. For example, a user might ask ChatGPT or Google Bard how to do a task within a specific product, and then switch to that product in order to execute the instructions. While this is a great start, it is time-consuming and far from ideal — it’s certainly not the user experience we’ve all grown accustomed to in our digital interactions.
Moving forward, we'll start to see the user experience improve as products introduce domain-specific assistants that act similarly to ChatGPT but are available directly within the product experience. These integrated AI solutions will require no additional context, as all the necessary information is already in the pipeline, so these assistants will be able to go far beyond general AI assistants and produce responses that are directly and strategically aligned with the task at hand. With this approach, generative AI assistants will produce more accurate and reliable outputs, which is crucial for building trust in generative AI's ability to automate complicated tasks.
For engineers, new solutions make it possible to ask plain language questions like, “Why is my service not working?” and receive an instant root-cause answer based on an analysis of customer-specific telemetry data. Such domain-specific innovations will be crucial in optimizing the user experience for software development and beyond.
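To make this concrete, here is a minimal sketch of how a domain-specific assistant might ground an answer in customer-specific telemetry rather than forcing a context switch. Everything here is illustrative: the function names (`fetch_telemetry`, `build_prompt`, `call_llm`), the telemetry fields, and the canned response are assumptions standing in for a real observability backend and a real LLM API.

```python
# Hypothetical sketch: answering "Why is my service not working?" by
# inlining product telemetry into the prompt, so the assistant needs
# no extra context from the user. All names and data are illustrative.

def fetch_telemetry(service: str) -> dict:
    """Stub for the product's telemetry backend (assumed API)."""
    return {
        "service": service,
        "error_rate": 0.34,                       # 34% of requests failing
        "recent_deploy": "v2.4.1",
        "top_error": "ConnectionRefused: db:5432",
    }

def build_prompt(question: str, telemetry: dict) -> str:
    """Embed the customer-specific context directly in the prompt."""
    facts = "\n".join(f"- {k}: {v}" for k, v in telemetry.items())
    return (f"Telemetry:\n{facts}\n\n"
            f"Question: {question}\nAnswer with a likely root cause.")

def call_llm(prompt: str) -> str:
    """Stub for the model call; a real integration would hit an LLM API."""
    if "ConnectionRefused" in prompt:
        return ("The service cannot reach its database (db:5432); "
                "check connectivity after deploy v2.4.1.")
    return "No obvious root cause in the telemetry provided."

answer = call_llm(build_prompt("Why is my service not working?",
                               fetch_telemetry("checkout")))
print(answer)
```

The design point is the `build_prompt` step: because the assistant lives inside the product, the telemetry is already in the pipeline, and the user never has to copy context into a general-purpose chatbot.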
Stage Two: Strengthening Efficiency and Accuracy with Predictive Decision-Making
As engineers continue to build upon pre-existing AI assistants, these assistants will ultimately be able to provide proactive suggestions, insights and advice without receiving explicit instructions.
For example, a user browsing an application might receive unsolicited advice to "adjust settings in the Java VM to improve performance." If the user accepts, the assistant can schedule a task to implement the recommendation; if the user rejects it, the assistant learns to avoid similar suggestions. It is important to keep in mind that AI is only as good as the data it can access: as assistants gather more of this feedback, they will become smarter and adjust to the user's needs. Transparent controls, guidelines and governance need to be established to prevent data and prompts from being used to enhance LLMs without appropriate consent.
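The accept/reject loop described above can be sketched in a few lines. This is not any vendor's actual API; the class, the per-category scoring, and the suppression threshold are all assumptions chosen to illustrate how repeated rejections teach an assistant to stop making a kind of suggestion.

```python
# Hypothetical sketch: a proactive assistant that learns from
# accept/reject feedback. Categories below a score threshold are
# suppressed, so rejected suggestion types stop appearing.

class ProactiveAssistant:
    def __init__(self):
        # Running score per recommendation category, starting at 0.
        self.scores: dict[str, int] = {}

    def should_suggest(self, category: str) -> bool:
        """Show a suggestion only while its category's score is non-negative."""
        return self.scores.get(category, 0) >= 0

    def record_feedback(self, category: str, accepted: bool) -> None:
        """Nudge the category score up on accept, down on reject."""
        delta = 1 if accepted else -1
        self.scores[category] = self.scores.get(category, 0) + delta

assistant = ProactiveAssistant()
print(assistant.should_suggest("jvm-tuning"))       # shown by default
assistant.record_feedback("jvm-tuning", accepted=False)
print(assistant.should_suggest("jvm-tuning"))       # suppressed after rejection
```

A production system would use far richer signals than a single counter, but the shape is the same: suggestions carry a feedback channel, and that feedback adjusts what the assistant volunteers next.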
For engineers, these automated recommendations unlock immediate value without requiring years of experience — and they free senior engineers to focus on higher-level tasks.
Stage Three: Accelerating the Process of Shifting Left with Automation
In this stage, we will see the introduction of automation to support the entire practice of shifting left — moving software testing to earlier in the process — with AI assistants acting on behalf of users through varying degrees of both autonomy and supervision from humans.
During this advanced stage, users can give the AI assistant specific objectives and strict guidelines. From there, the process resembles working from an outlined plan or agenda: the assistant will draw on a range of tools and telemetry data to devise the plan needed to achieve the objective and carry out the tasks it requires, learning and refining its approach along the way.
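One way to picture the objective-plus-guidelines loop is a small agent sketch like the one below. The tool names, the hard-coded plan, and the guardrail policy are all invented for illustration; a real assistant would derive the plan with an LLM and enforce policy through the product's own controls.

```python
# Hypothetical sketch: an assistant given an objective and strict
# guardrails. It plans steps, but a policy check blocks any step the
# human operator has not authorized (here, deploys).

GUARDRAILS = {
    "allowed_tools": {"read_logs", "run_tests"},  # deploys need a human
    "max_steps": 5,                               # bound on autonomy
}

def plan(objective: str) -> list[tuple[str, str]]:
    """A real assistant would derive this with an LLM; hard-coded here."""
    return [
        ("read_logs", "payment-service"),
        ("run_tests", "payment-service"),
        ("deploy",    "payment-service"),   # expected to be blocked
    ]

def execute(objective: str) -> list[tuple[str, str, str]]:
    """Run the plan, enforcing the guardrails at every step."""
    results = []
    for tool, target in plan(objective)[:GUARDRAILS["max_steps"]]:
        if tool not in GUARDRAILS["allowed_tools"]:
            results.append((tool, target, "blocked: requires human approval"))
            continue
        results.append((tool, target, "done"))
    return results

for step in execute("stabilize payment-service tests"):
    print(step)
```

The key design choice is that supervision is structural, not optional: the policy check sits inside the execution loop, so the degree of autonomy is set by the guardrails rather than by the assistant itself.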
Ultimately, AI assistants should be viewed as tools rather than an imminent threat. By changing the way software is developed and maintained, generative AI lets engineers spend more time brainstorming and developing and less time troubleshooting. Although still in the early stages, even more complex autonomous systems, such as self-driving cars, are in development today. When organizations learn to embrace this transformative technology, they will see significant improvement in their performance and user experience.