
Better Security with ChatGPT: Using AI’s Defensive Strengths

Generative AI can help cybersecurity teams do threat intelligence analysis, incident response guidance, vulnerability management, and more.
May 26th, 2023 6:00am

While ChatGPT has grabbed negative headlines recently due to cybercriminals’ use of the technology to strengthen attacks, it can also be a formidable asset for cyber defense, helping companies maximize their security posture while promising to bridge any skills gaps in their workforce.

That’s particularly relevant as security teams become increasingly overwhelmed by an ever-expanding threat landscape — according to the results of a recent Cobalt survey, 79% of cybersecurity professionals say they’re having to deprioritize key projects just to stay on top of their workload.

Mike Fraser, vice president and field CTO of DevSecOps at Sophos, told The New Stack that generative AI has an enormous amount to offer to those overloaded security teams. “ChatGPT can be utilized for threat intelligence analysis, incident response guidance, security documentation and training generation, vulnerability management, security policy compliance, and automation,” he said. “With automation alone, the cybersecurity use cases are endless.”
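
To make one of those use cases concrete, here is a minimal sketch of what first-pass incident response guidance from a general-purpose LLM might look like. It is an illustration only, not a Sophos workflow: the alert text is invented, and the OpenAI Python client, model name, and prompt wording are assumptions that would vary by deployment.

```python
# Illustrative sketch: asking a general-purpose LLM for first-pass incident
# response guidance. Alert text, model, and prompts are invented assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

alert = (
    "EDR flagged powershell.exe spawned by winword.exe on host FIN-LT-042, "
    "with an outbound connection to an unfamiliar IP on port 443."
)

response = client.chat.completions.create(
    model="gpt-4",  # any capable chat model; the specific choice is an assumption
    messages=[
        {
            "role": "system",
            "content": "You are a SOC assistant. Suggest containment and "
                       "investigation steps, and flag anything you are unsure about.",
        },
        {"role": "user", "content": alert},
    ],
    temperature=0,  # favor repeatable, conservative output
)

# The output is a starting point for an analyst, not an automated action.
print(response.choices[0].message.content)
```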

The Cloud Security Alliance (CSA) recently published a white paper examining ChatGPT’s offensive and defensive potential in detail. CSA technical research director Sean Heide, one of the paper’s authors, said one key strength of the tool is that it lets users simply ask in natural language for a specific attribute they need written for a task, or for new suggestions to make tasks more efficient.

“These tasks would typically take teams, depending on experience, a few hours to properly research, write out, test, and then push into a production scenario,” Heide said. “We are now seeing these same scripts being able to be accurately produced within seconds, and working the same, if not better.”
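
A minimal sketch of that natural-language-to-script flow might look like the following. The prompt is hypothetical, and the generated code is printed for human review rather than executed, in line with the review practices discussed later in this article.

```python
# Illustrative sketch of natural-language script generation; the prompt is a
# hypothetical example of the kind of request Heide describes.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a Python script that parses /var/log/auth.log and reports source IPs "
    "with more than 10 failed SSH logins in the past hour, as JSON."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# Print for review and testing; never pipe generated code straight into production.
print(response.choices[0].message.content)
```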

And Ric Smith, chief product and technology officer at SentinelOne, said it’s important to keep in mind that ChatGPT itself isn’t the only way to make use of large language models — dedicated solutions like SentinelOne’s recently announced AI-based threat-hunting platform can do it in a more focused way. “Companies need to think of LLMs as expert services and maintain a level of pragmatism in how and where they leverage generative AI,” he said. “You can create a fantastic generalist like GPT-4. But in reality, having a complex model is optional if the task is more focused.”

Bridging the Skills Gap

Chang Kawaguchi, vice president and AI security architect at Microsoft, said generative AI tools like his company’s Security Copilot can serve both to assist highly skilled employees and to fill in knowledge gaps for less-skilled workers. With Cybersecurity Ventures reporting a total of 3.5 million cybersecurity job vacancies worldwide (and expecting that number to remain unchanged until at least 2025), there’s a real need for that kind of support.

“We’re definitely hoping to make already skilled defenders more effective, more efficient — but also, because this technology can provide natural-language interfaces for complex tools, what we are starting to see is that lower-skilled folks become more effective in larger percentages,” Kawaguchi said.

At every level, Smith said, ChatGPT can simply make the work more approachable. “By enabling analysts to pose questions in their natural form, you are reducing the learning curve and making security operations more accessible to a larger pool of talent,” he said. “You are also making it easier to move more rudimentary operations to junior analysts, freeing veteran analysts to take on more thought work and sophisticated tasks.”
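
One way that accessibility could look in practice is a thin translation layer between an analyst’s plain question and a hunting query. The sketch below is illustrative: the events table schema, the SQL target, and the network ranges are all invented stand-ins for whatever store a SOC actually uses.

```python
# Illustrative sketch: translating an analyst's natural-language question into
# a candidate hunting query. Schema and address ranges are hypothetical.
from openai import OpenAI

client = OpenAI()

question = "Which hosts had failed logins from outside the corporate network yesterday?"

# Hypothetical schema hint; substitute whatever your SIEM or data store exposes.
schema_hint = (
    "Table events(timestamp, host, user, src_ip, event_type, outcome). "
    "Corporate address space: 10.0.0.0/8."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Translate the analyst's question into a single SQL query. "
                       + schema_hint,
        },
        {"role": "user", "content": question},
    ],
    temperature=0,
)

# A junior analyst can sanity-check a candidate query far faster than writing one.
print(response.choices[0].message.content)
```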

That’s equally true for the summarization and interpretation of data. “When you run hunting queries, you need to be able to interpret the results meaningfully to understand if there is an important finding and the resulting action that needs to be taken,” Smith said. “Generative AI is exceptionally good at both of these tasks and reduces, not eliminates, the burden of analysis for operators.”
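
A sketch of that summarization step might look like the following, with fabricated result rows standing in for real hunting output.

```python
# Illustrative sketch: summarizing hunting-query results and suggesting an
# action. The result rows are fabricated examples.
import json

from openai import OpenAI

client = OpenAI()

results = [
    {"host": "FIN-LT-042", "src_ip": "203.0.113.7", "failed_logins": 42},
    {"host": "HR-WS-011", "src_ip": "198.51.100.23", "failed_logins": 3},
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Summarize these hunting results in two sentences, then say "
                       "whether any finding warrants action and why.",
        },
        {"role": "user", "content": json.dumps(results)},
    ],
    temperature=0,
)

# Reduces, not eliminates, the analyst's interpretation work.
print(response.choices[0].message.content)
```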

It’s not that different, Smith said, from what spell check has done in freeing writers to focus on content rather than on proofreading. “We are lowering the cognitive burden to allow humans to do what they do best: creative thinking and reasoning,” he said.

Still, it’s not just about supporting less-skilled users. Different levels of generative AI capability, Kawaguchi said, are better suited for different levels of user expertise. At a higher level, he said, consider the potential of a tool like GitHub Copilot. “It can provide really complex code examples, and if you’re a highly skilled developer, you can clearly understand those samples and make them fit — make sure that they’re good with your own code,” he said. “So there’s a spectrum of capabilities that generative AI offers, some of which will be more useful to lower-skilled folks and some of which will be more useful to higher-skilled folks.”

Handling Hallucinations

As companies increasingly leverage these types of tools, it’s reasonable to be concerned that errors or AI hallucinations will cause confusion. In one example, Microsoft’s short video demo of Security Copilot shows the solution referring confidently to the non-existent Windows 9. Kawaguchi said Security Copilot strives to avoid hallucinations by grounding its responses in an organization’s own data or in information from trusted sources like the National Institute of Standards and Technology (NIST). “With grounding the data, we think that there’s a significant opportunity to, if not completely eliminate, greatly reduce the hallucination risk,” he said.
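
A minimal sketch of that grounding idea follows, under the assumption of a local folder of trusted reference text (here a hypothetical nist_docs directory) and deliberately naive keyword retrieval to keep the example self-contained; production systems would use vector search and stricter citation checks.

```python
# Illustrative sketch of grounding: retrieve a relevant trusted document and
# instruct the model to answer only from it. The corpus path is hypothetical
# and the keyword retrieval is intentionally naive.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

question = "What does NIST recommend for containing a ransomware incident?"

# Toy retrieval: score each local document by how many query words it contains.
corpus = {p.name: p.read_text() for p in Path("nist_docs").glob("*.txt")}
terms = set(question.lower().split())
best = max(corpus, key=lambda name: sum(t in corpus[name].lower() for t in terms))

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Answer ONLY from the provided source text. If the source "
                       "does not cover the question, say so rather than guessing.",
        },
        {
            "role": "user",
            "content": f"Source ({best}):\n{corpus[best]}\n\nQuestion: {question}",
        },
    ],
    temperature=0,
)

print(response.choices[0].message.content)
```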

Basic checks and balances, Heide said, are also key to mitigating the potential impact of any hallucinations. “Much like there are review processes for development, the same will need to be taken around the usage of answers received from ChatGPT or other language models,” he said. “I foresee teams needing to check for accuracy of prompts being given, and the type of answers being provided.”
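
Those checks need not involve a model at all. The sketch below is one illustrative review gate for model-generated Python: parse the code, flag denylisted imports and names, and leave the final promotion decision to a human. The denylist is a placeholder, not a recommended policy.

```python
# Illustrative review gate for model-generated Python code. The denylist is a
# placeholder; real policies would be far more thorough.
import ast

DENYLIST = {"eval", "exec", "os", "subprocess"}

def review_generated_script(source: str) -> list[str]:
    """Return a list of concerns; an empty list means 'ready for human review'."""
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        return [f"does not parse: {err}"]
    concerns = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in DENYLIST:
            concerns.append(f"references denylisted name: {node.id}")
        elif isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] in DENYLIST:
                    concerns.append(f"imports denylisted module: {alias.name}")
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] in DENYLIST:
                concerns.append(f"imports from denylisted module: {node.module}")
    return concerns

# Automated checks narrow the field; a human still signs off before production.
print(review_generated_script("import subprocess\nsubprocess.run(['ls'])"))
```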

Still, Fraser said, one of the key remaining barriers to adoption for many companies lies in concerns about accuracy. “Thorough testing, validation and ongoing monitoring are necessary to build confidence in their effectiveness and minimize risks of false positives, false negatives or biased outputs,” he said.

It’s similar, Fraser said, to the benefits and challenges of automation, where ongoing tuning and management are key. “Human oversight is necessary to validate AI outputs, make critical judgments and respond effectively to evolving threats,” he said. “Security professionals can also provide critical thinking, contextual understanding and domain expertise to assess the accuracy and reliability of AI-generated information, which is essential to a successful strategy using ChatGPT and similar tools.”

Understanding the Benefits

While many companies at this point are more concerned about the threat from ChatGPT than they are invested in its potential as a defensive tool, Heide said that will inevitably shift as more and more users understand its potential. “I think as time goes on, and teams can see how quickly simple scripts can be completed to match an internal use case in a fraction of the time, they will begin to build more pipelines around its usage,” Heide said.

And as we move forward, Kawaguchi said, there’s an inevitable balancing act to be found between proceeding carefully in adopting generative AI and staying ahead of adversaries who may be surging forward with it. “It does feel relatively analogous to other step changes in technology that we’ve seen, where both offense and defense move forward and it’s a race to learn about new technology,” he said. “Our goal is to do so responsibly, so we’re taking it at an appropriate speed — but also not letting offense get ahead of us, not letting the malicious use of these technologies outpace [us] just because we’re worried about potential misuse.”

Ultimately, Fraser said ChatGPT’s future as an asset for cyber defense will depend on responsible development, deployment, and regulation. “With responsible usage, ongoing advancements in AI and a collaborative approach between human experts and AI tools, ChatGPT can be a net benefit for cybersecurity,” he said. “It has the potential to significantly enhance defensive capabilities, support security teams in their fight against emerging threats, solve the skills gap through smarter automation, and enable a more proactive and effective approach to cyber defense.”
