GPT-4 AI-Powered Microsoft Security Copilot Arrives
I should have seen this coming. OpenAI GPT-4 is hotter than hot. No less a technology authority than Bill Gates called generative AI the “most important advance in technology since the graphical user interface.” Meanwhile, GitHub Copilot, Microsoft’s automated pair-programming service, has become wildly popular in its own right. So, combining the two was a natural move. The result is Microsoft Security Copilot.
Indeed, GitHub Copilot for Business, built on OpenAI Codex, came out in February. It included an updated version of Codex and a real-time vulnerability filter for catching security bugs while coding. The new vulnerability filter uses large language models (LLMs) to “approximate the behavior of static analysis tools.”
This extension also translates natural language into code. The service can be used with such editors as Microsoft Visual Studio, VS Code, Neovim, and the JetBrains IDEs.
In other words, OpenAI and Copilot were already working together. Combine all this with the desperate need for skilled security professionals — Microsoft estimates there are 3.4 million openings — and this may prove to be yet another Microsoft service destined to mint money. Not to mention, it should also improve our code’s security while also speeding up production.
Well, that’s the hope, anyway. We’ll see what we see.
To do its work, Security Copilot was trained on data from the Cybersecurity and Infrastructure Security Agency (CISA), the NIST vulnerability database, and, of course, Microsoft’s own threat intelligence database.
Security Copilot works by accepting natural language inputs. For example, you can ask for a summary of a particular vulnerability, feed in code snippets for analysis, or have it analyze incident reports. It keeps a full audit trail of all its inputs and results. You can share its results in a shared workspace.
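Microsoft hasn’t published a public API for Security Copilot yet, so here’s a purely hypothetical Python sketch of the workflow described above: natural-language prompts go in, analysis comes out, and every exchange is retained in an audit trail. All of the names here (`SecurityAssistant`, `ask`, `AuditEntry`) are my own inventions for illustration, not Microsoft’s.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditEntry:
    """One recorded prompt/response exchange."""
    timestamp: str
    prompt: str
    response: str


@dataclass
class SecurityAssistant:
    """Hypothetical stand-in for a Security Copilot-style session."""
    audit_trail: list = field(default_factory=list)

    def ask(self, prompt: str) -> str:
        # The real service would send the prompt to an LLM backed by
        # threat-intelligence data; here we just stub the response.
        response = f"[analysis of: {prompt!r}]"
        # Every input and result is logged, mirroring the full audit
        # trail the article describes.
        self.audit_trail.append(AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            prompt=prompt,
            response=response,
        ))
        return response


session = SecurityAssistant()
session.ask("Summarize CVE-2021-44228 and its impact")
session.ask("Review this incident report for lateral movement")
print(len(session.audit_trail))  # both exchanges are retained
```

The point of the sketch is the shape of the interaction, not the analysis itself: because every query and answer lands in `audit_trail`, results can later be reviewed or pushed into a shared workspace for the rest of the team.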
It all sounds pretty darn useful to me.
Microsoft Security Copilot aims to provide end-to-end defense at machine speed and scale. It integrates an LLM with a security-specific model from Microsoft, which incorporates a growing set of security skills and is informed by Microsoft’s global threat intelligence and more than 65 trillion daily signals. Running on Azure’s hyperscale infrastructure, the AI-powered tool will assist security professionals in detecting threats, improving response times, and strengthening their organization’s security posture. Though it won’t always provide perfect results, Security Copilot is a closed-loop learning system that continuously learns from user feedback to improve its performance.
In addition, Security Copilot integrates with Microsoft Security products. Eventually, it will expand to support third-party solutions. Microsoft also promises that it’s designed with privacy at its core, ensuring that customer data remains under the customer’s control and is not used to train foundation ML models. I’m not sure how that will square with its closed-loop learning system.
It will be interesting to see how that works out. For all of its popularity, questions remain about whether Copilot’s use of its training code was legal or ethical.
Be that as it may, anything that can help programmers produce more secure code will be welcomed. Frankly, we need all the help we can get.
However, most of you won’t be able to use it today or tomorrow. It’s only available to a few partners. Microsoft hasn’t announced a release date yet. I expect, however, that if the beta goes well, Microsoft will release it sooner rather than later. I know a lot of developer teams who would be eager to give it a try.