2022: The Year AI Came to Coding

2022 was the year artificial intelligence really came to coding.
This was the year that saw GitHub Copilot move from a plug-in for JetBrains IDEs, where it launched in 2021, to broad availability in the Visual Studio IDE in March. It was followed by the release of Amazon’s code completion service, CodeWhisperer, in June and Replit’s Ghostwriter in October. Tabnine, an AI code-generation startup, secured $15.5 million in funding, while another code-completion startup, Kite, shut down in the wake of Copilot’s popularity.
Then, by year’s end, it all came down to a big question mark, as GitHub wound up in litigation over its use of open source repositories to train Copilot.
AI: More for Developers than Code Completion
Although much of the focus in 2022 was on automated coding and code completion, AI technologies transformed development in subtler ways over the past year.
“We don’t believe we’re going to see AI replace DevOps engineers or platform engineers, but really augment them,” said Zach Zaro, co-founder and CEO of Coherence, a DevOps automation startup that leverages AI. “You have a lot happening at the application layer level — AI coming to help developers write application code, not infrastructure code.”
Zaro added that those use cases will make DevOps more important, because there will be more code to test, more frequently.
Coherence identified a number of AI use cases for DevOps in a December blog post, including:
- Improved code quality;
- Strengthened monitoring and alerting systems;
- Better security measures; and
- Increased engineering productivity.
“We’re more about machine learning models, about finding the interesting log lines or metrics, or finding tests that don’t need to be run so you can cut down your build times,” Zaro said. “We’ve seen companies that do things like that. All of that is augmentation use cases. It’s not job-threatening use cases.”
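Coherence hasn’t detailed how it finds those “interesting” log lines, but one minimal version of the idea is rarity scoring: collapse each line to a template and flag templates that rarely occur. The sketch below is a toy illustration of that pattern, not Coherence’s tooling; the template rules and the `max_count` cutoff are assumptions.

```python
# A toy sketch of "finding the interesting log lines" by rarity scoring.
# Not Coherence's tooling; template() and max_count are assumptions here.
import re
from collections import Counter

def template(line: str) -> str:
    """Collapse hex IDs and numbers so similar lines share one template."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    return re.sub(r"\d+", "<NUM>", line)

def interesting_lines(log_lines, max_count=2):
    """Return lines whose template occurs at most max_count times."""
    counts = Counter(template(line) for line in log_lines)
    return [line for line in log_lines if counts[template(line)] <= max_count]

logs = [
    "GET /health 200 in 3ms",
    "GET /health 200 in 4ms",
    "GET /health 200 in 2ms",
    "worker 7 crashed: segfault at 0xdeadbeef",
]
print(interesting_lines(logs))  # -> only the crash line, not routine checks
```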
Beyond Code Completion
What developers want is to be faster, and AI can help, said David DeSanto, who leads the product organization at GitLab, which focuses on providing a single platform for the entire DevSecOps lifecycle. GitLab surveyed 5,001 DevSecOps professionals and found that 31% of respondents now use AI/machine learning as part of code review, and nearly half say they’ve achieved full test automation.
GitLab predicts that in the coming year, AI/ML will further enable development, along with helping with security remediation, improved test automation, and better observability.
The survey also found that 70% of respondents feel pressure to release code faster, DeSanto said. AI assistants can help by improving code review and by helping developers monitor and triage production events, making CI/CD faster and more effective, he said.
“Organizations are trying to automate and improve their test automation and part of that shift to shipping faster means you have to find ways to optimize what you’re doing,” DeSanto said. “If you’re going to ship continuously, you are pulling the humans out of the process. And how do you pull humans out of the process? You begin to build a smarter process for validating your code, in code review, code quality, and deployment.”
Is It Dependable?
AI as part of DevOps is still in the toddler phase, DeSanto cautioned: it’s still growing up, but when used in the right ways it’s very effective, if not perfect.
DeSanto reminded us that even before AI, technology has always required training: Just a decade or so ago, web application firewalls had to witness a significant amount of production network traffic before they could identify anomalies versus normal behavior.
“Don’t just assume the AI is going to be right the first time. Just like everything else, there’s going to be a training period for it to get more accurate,” DeSanto said. “We have another feature in beta, which is suggested labels to help with workflow automation. And when we first launched that feature for ourselves, it was, I think, less than 30% accurate. But over the time of learning our project and seeing how we label it, it’s gotten a lot more effective.”
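GitLab hasn’t published how suggested labels work internally, but the general pattern DeSanto describes, a model that improves as it sees more of a project’s labeled history, can be sketched as a simple text classifier. Everything below (the issue titles, the labels, the model choice) is a hypothetical illustration, not GitLab’s implementation.

```python
# A minimal sketch of label suggestion as a text-classification problem.
# Not GitLab's implementation; it just shows why accuracy starts low and
# improves as the model sees more of a project's labeled history.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: past issue titles and the labels humans chose.
titles = [
    "Crash when uploading large files",
    "Login page throws 500 error",
    "Add dark mode to settings",
    "Typo in onboarding docs",
    "Memory leak in background worker",
    "Request: export report as CSV",
]
labels = ["bug", "bug", "feature", "docs", "bug", "feature"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(titles, labels)

# Suggest a label for a new issue; retraining on each accepted or corrected
# suggestion is the feedback loop that makes the feature "get smarter."
print(model.predict(["Crash on the login page"])[0])  # -> "bug"
```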
The same will hold true for all AI and machine learning, he said, when asked if developers should be wary of its use.
“If you go in realizing that like any new technology, it has to get smarter, […] there’s not really anything that is being done today that is counterintuitive or shouldn’t be done,” he said.
He offered a real-world example of the right use case. GitLab acquired a company last year and in the process discovered that it wasn’t as effective as it could be at assigning the right reviewer to code changes.
“This company had this ability to leverage the knowledge of your project to identify the right code reviewer to keep your team moving faster,” he said. “When we applied it to our own code, prior to acquisition, as part of trying to improve ourselves, we found that it was more effective than we were.”
The program helps speed the process by learning the project and its code, then assigning the right reviewer to any code changes that need review.
“It’s allowed developers to continue to move faster in their development; they’re not just blocked trying to find someone to review their code,” he said.
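GitLab hasn’t published the internals of that model, but the core idea DeSanto describes, leveraging knowledge of the project to pick a reviewer, can be approximated with something as simple as file-touch history. The sketch below is a hypothetical illustration, not the acquired product; the file paths and author names are invented.

```python
# A minimal sketch of reviewer recommendation (not GitLab's actual model):
# rank candidates by how often they previously touched the files in a
# merge request, a rough proxy for "knowledge of your project."
from collections import Counter

# Hypothetical history: file path -> authors of past changes to that file.
history = {
    "auth/login.py":  ["dana", "sam", "dana"],
    "auth/tokens.py": ["dana", "lee"],
    "ui/navbar.css":  ["sam", "sam", "lee"],
}

def suggest_reviewers(changed_files, history, top_n=2):
    """Score each past author by their touches to the changed files."""
    scores = Counter()
    for path in changed_files:
        scores.update(history.get(path, []))
    return [author for author, _ in scores.most_common(top_n)]

print(suggest_reviewers(["auth/login.py", "auth/tokens.py"], history))
# -> ['dana', 'sam']: dana has touched these files the most
```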
AI and Observability
AI could also play a role in improving observability in the software supply chain, said Hans Dockter, founder and CEO of Gradle, a company focused on improving developer productivity with both Gradle Enterprise, a unifying platform for software development at scale, and its open source Gradle Build Tool for multilanguage software development.
“The toolchain is not instrumented and not observable, which is, in a way, it’s crazy, right? The industry that has made all the other industries so observable and so efficient — every factory is fully instrumented, you know exactly what’s happening,” Dockter said. “When it comes to the tools the developers [are] interacting every day with most, organizations don’t have the basic data. How many tests have you run today? How often have you come back? How long did it take? How often did it fail? Why did it fail? That is a weird state for the software industry to be in.”
In developer surveys, Gradle has asked how much such insight would improve productivity; the vast majority say they would be at least twice as productive with better observability into the process, and possibly three or four times.
“Imagine what you can do with all that data,” Dockter said.
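As a hedged illustration of what that basic data could look like, the sketch below records one telemetry event per test run, so Dockter’s questions (how many tests ran today, how long they took, how often they failed) become simple queries over a log file. It is a toy, not Gradle Enterprise’s instrumentation; the event fields and file name are assumptions.

```python
# A toy sketch of toolchain telemetry, not Gradle Enterprise's format:
# append one JSON event per test run so counts, durations and failure
# rates become simple queries over the resulting log file.
import json
import time

def record_test_run(name, test_func, log_path="test_events.jsonl"):
    """Run one test callable and append a telemetry event for it."""
    start = time.time()
    try:
        test_func()
        outcome = "passed"
    except AssertionError:
        outcome = "failed"
    event = {
        "test": name,
        "outcome": outcome,
        "duration_s": round(time.time() - start, 3),
        "timestamp": int(start),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")

def failing_test():
    assert 1 + 1 == 3  # deliberately fails

record_test_run("test_passes", lambda: None)  # recorded as "passed"
record_test_run("test_fails", failing_test)   # recorded as "failed"
```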
He pointed to LinkedIn as an example. LinkedIn runs 600,000 build and test executions per day, which likely translates into tens of millions of individual tests. From analyzing that data, LinkedIn learned that certain tests did not need to be run against the new code. It’s a pattern Dockter has seen with other companies.
“Why do we run them all the time — we can basically guarantee you with 99.999% likelihood that this test will not fail if you change that area of the code,” he said. “We have machine learning models behind that. So now we have something that’s called predictive test selection, you run your build and tests, and often 80 or 90% of the tests don’t need to be run at all.”
That’s a lot of wasted compute, he added, that can now be saved.
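Gradle’s actual predictive test selection models aren’t public, but the shape of the idea can be sketched with plain failure-rate statistics: estimate from history how often each test fails when a given area of the code changes, and skip tests that sit far below a failure threshold, while always running tests with no history. Everything below (the areas, test names and threshold) is hypothetical.

```python
# A minimal sketch of predictive test selection, not Gradle's model:
# estimate from history how often each test fails when a given source
# area changes, and skip tests far below a failure-rate threshold.
from collections import defaultdict

# Hypothetical history: (changed_area, test) -> [1 = failed, 0 = passed].
history = defaultdict(list)
for area, test, failed in [
    ("auth", "test_login", 1), ("auth", "test_login", 0),
    ("auth", "test_navbar", 0), ("auth", "test_navbar", 0),
    ("ui",   "test_navbar", 1), ("auth", "test_navbar", 0),
]:
    history[(area, test)].append(failed)

def tests_to_run(changed_areas, all_tests, threshold=0.05):
    """Keep a test if any changed area gives it a failure rate above the
    threshold, or if we have no history for it (fail safe: run it)."""
    selected = []
    for test in all_tests:
        rates = [sum(h) / len(h) for area in changed_areas
                 if (h := history.get((area, test)))]
        if not rates or max(rates) > threshold:
            selected.append(test)
    return selected

print(tests_to_run(["auth"], ["test_login", "test_navbar"]))
# -> ['test_login']: history says navbar tests never fail on auth changes
```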