Security with ChatGPT: What Happens When AI Meets Your API?

It only takes one API vulnerability for an attacker to gain access to critical information — are we ready to cede that responsibility to AI just yet?
Feb 24th, 2023 10:00am
Image via Pixabay.

Since the recent release of ChatGPT’s free research preview, people have been using artificial intelligence (AI) and artificial general intelligence (AGI) in new and unique ways, which has prompted more questions than answers about AI-related cybersecurity risks. But perhaps, in rebelling against AI, we’re demonstrating what defines us as humans: creativity and emotional intelligence.

AI has the potential to significantly improve mundane day-to-day tasks by increasing the speed of business operations, providing invaluable business efficiencies and, over time, avoiding mistakes. Through the lens of a developer, business leader or employee, this is exciting.

In recent years, we’ve seen AI explode and power new innovations across nearly every sector, but more needs to be done to address the cybersecurity concerns that follow. In 2021, Microsoft released GitHub Copilot, which uses the OpenAI Codex to translate natural language into code and suggest entire functions in real time, right from the editor. In contrast, ChatGPT needs to be prompted step by step to guide it to a desired outcome.

The code GitHub Copilot produces and suggests isn’t necessarily secure code as its training input wasn’t validated against cybersecurity best practices and known vulnerabilities like those listed in the OWASP API Top 10. Instead, Copilot was trained on a large set of open source code, some of it secure, some of it not at all secure, which is why the product’s AI doesn’t always produce the best output from a cybersecurity perspective.
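To illustrate the risk, consider a hypothetical completion a code assistant might offer for a database lookup; the function names and schema here are illustrative, not actual Copilot output. The first version interpolates user input straight into a SQL string, the classic injection flaw, while the second keeps input as data via a parameterized query:

```python
import sqlite3

def get_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is spliced into the SQL string, so a crafted
    # username like "x' OR '1'='1" rewrites the query's logic (SQL injection).
    cursor = conn.execute(f"SELECT id, email FROM users WHERE name = '{username}'")
    return cursor.fetchone()

def get_user_secure(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query treats the input strictly as data.
    cursor = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
    return cursor.fetchone()
```

Both versions run and return the same result for benign input, which is exactly why insecure suggestions slip past a quick review.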

A company’s go-to-market speed is becoming increasingly important, and vendors are driving innovation to help accelerate developer output. Unfortunately, this sometimes comes at the expense of cybersecurity. Much remains to be learned about how much enterprises can rely on ChatGPT for cybersecurity efforts and how GPT-3.5 will inform the workings of its successor, GPT-4. Even OpenAI’s CEO Sam Altman is skeptical about these advancements.

AI Meets API

Right now, the brightest minds in cybersecurity are envisioning how to employ AI and machine learning (ML) for better security. Take the CyberArk researchers who recently discovered how easily ChatGPT can be tricked into creating polymorphic malware.

At the heart of AI’s cybersecurity concerns is the proliferation of APIs (application programming interfaces). While developers are working to simplify and accelerate architecting, configuring and building serverless applications with the help of new AI systems like DeepMind’s AlphaCode, problems arise as ML becomes responsible for generating and executing code.

What happens when we ask ChatGPT to write our APIs for us? Would that speed things up?

ChatGPT was trained on data from the likes of Google and Stack Overflow, so querying it can feel like asking a developer who is theoretically versed in every language and has no technology religion. Of course, you can ask for specifics in your conversation, which puts the onus back on the human, who must be not only creative but, at times, argumentative.
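When we ask it for a simple user-management endpoint, for example, the result tends to resemble the sketch below; the framework, routes and field names are illustrative stand-ins, not verbatim model output:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
users = {}  # in-memory store standing in for a real database

@app.route("/users", methods=["POST"])
def create_user():
    data = request.get_json(silent=True) or {}
    user_id = len(users) + 1
    # Note what is missing: no authentication, no input validation,
    # no rate limiting -- exactly the gaps a security review must catch.
    users[user_id] = {"id": user_id, "name": data.get("name")}
    return jsonify(users[user_id]), 201

@app.route("/users/<int:user_id>", methods=["GET"])
def get_user(user_id):
    user = users.get(user_id)
    if user is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(user)
```

The code runs, and for a demo it works; whether it is production-ready is a different question entirely.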

Back to our example, we also asked ChatGPT to document the API.
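The documentation it returns, again sketched here rather than quoted verbatim, tends to read like a tidy docstring for each route:

```python
def get_user(user_id):
    """Retrieve a single user.

    GET /users/<user_id>

    Parameters:
        user_id (int): Unique identifier of the user.

    Responses:
        200: JSON object containing the user's "id" and "name".
        404: JSON object {"error": "not found"} if no such user exists.
    """
```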

As you can see, the product is far from perfect, but it is amazing how well it understands intent and produces relevant output. Will this replace human developers? Definitely not anytime soon. After all, the memory of how Twitter users taught Microsoft’s AI chatbot Tay bad language and inappropriate responses, and of the Dutch Tax Authority’s failed fraud-detection algorithm, is still fresh in our minds.

As our understanding of newer languages increases, and as we keep inventing new ones, we will continue to update the body of knowledge that systems like ChatGPT draw on to improve themselves.

Upholding the Integrity and Accuracy of ChatGPT

The integrity and accuracy of the ChatGPT algorithm pose a singular, existential risk to its viability, because users will increasingly look to it as a source of truth.

The book Applied Software Measurement demonstrates how much cheaper it is to catch bugs early in the development life cycle, and where those bugs tend to occur in the coding process. Even with our move from waterfall to agile development, bugs remain a natural part of creating software.

ChatGPT might be able to come up with specific functionality, but what happens once that code passes its unit tests and we connect it to the rest of our systems for functional testing?
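A minimal sketch of that gap, using hypothetical names: a unit test that exercises generated code in isolation can pass cleanly even though the code breaks on the inputs the rest of the system actually sends.

```python
import pytest

def parse_amount(value: str) -> float:
    # A generated helper that looks correct in isolation.
    return float(value)

def test_parse_amount_unit():
    # The unit test passes: well-formed input behaves as expected.
    assert parse_amount("19.99") == 19.99

def test_parse_amount_functional():
    # Downstream services send localized strings such as "19,99" --
    # the kind of input that only surfaces during functional testing.
    with pytest.raises(ValueError):
        parse_amount("19,99")
```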

The crux of the matter is that we need to validate the output no matter who or what created the input. With Noname Active Testing, we can validate the security profile of API code before moving it into production, and in doing so check the efficacy of the ChatGPT model.

Furthermore, if we take the adversarial approach and ask ChatGPT to create API attacks that might work in innovative ways (i.e., previously unknown attacks), we can use the Noname Security platform to monitor real-time API transactions and catch malicious traffic in the act. Noname’s ML model would single out the previously unknown and unusual traffic patterns created by a ChatGPT-informed attack and flag it as malicious.
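The underlying idea, sketched generically below (this is not Noname’s actual model, just the baseline-and-deviation principle in miniature), is to learn what normal traffic looks like per endpoint and flag requests that deviate sharply from it:

```python
from collections import defaultdict
from statistics import mean, stdev

# endpoint -> historical requests-per-minute samples
history = defaultdict(list)

def record(endpoint: str, requests_per_minute: int) -> None:
    history[endpoint].append(requests_per_minute)

def is_anomalous(endpoint: str, current: int, threshold: float = 3.0) -> bool:
    samples = history[endpoint]
    if len(samples) < 10:
        return False  # not enough baseline data to judge yet
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return current != mu
    # Flag traffic more than `threshold` standard deviations above baseline.
    return (current - mu) / sigma > threshold
```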

Convenience vs. Effective Cybersecurity

AI code-generating systems, such as ChatGPT or Codex, have the potential to make development work easier and faster. However, when it comes to generating secure code, the jury is still out.

So far, with OpenAI’s ChatGPT, the results look good and may even work well. That said, many results aren’t perfect and could incorporate flaws that aren’t evident upon initial review. Whether the coder is an AI system or a human, organizations still need a strong approach to application security that will catch vulnerabilities in code and provide suggestions on how to remediate them.

Even if application security tools could teach AI systems the necessary best practices and frameworks like the OWASP Top Ten, there is still the concern of “good enough” versus “truly secure.”

Like human adversaries, ChatGPT draws on the breadth of existing knowledge, though it lacks human creativity. You prepare for it the same way you prepare for human actors: by focusing on the basics of cybersecurity. It only takes one vulnerability for an attacker to gain access to critical information — are we ready to cede that responsibility to AI just yet?

TNS owner Insight Partners is an investor in Noname Security.