How to Prevent AI from Hurting Your Kubernetes Deployments
AI is exploding around the globe, but not everyone has good intentions with this rapidly advancing technology.
Today’s turbulent cybersecurity environment means IT leaders must be aware of the various risks AI poses to their operations, such as generating sophisticated phishing attacks, creating synthetic data for malicious purposes, tricking AI systems with adversarial inputs and poisoning data to compromise AI models.
From a cloud native perspective, we are witnessing instances of AI technologies being exploited to bypass traditional security measures. Take the recent advent of WormGPT, a generative AI (GenAI) tool being used to launch sophisticated phishing and business email compromise (BEC) attacks. This, combined with the rampant proliferation of ransomware, creates an ever-evolving risk landscape for IT leaders. In the past year alone, 85% of organizations suffered at least one ransomware attack, according to Veeam’s 2023 Ransomware Report.
Bringing AI to Your Kubernetes Environment
AI technologies introduce natural language as the new human-computer interface, automate every aspect of the Kubernetes toolset and synthesize information across large data sets, promising to greatly enhance the benefits of cloud native applications and operations. But at the same time, bringing AI to your Kubernetes environment can create many challenges. Let’s consider four such challenges and how you can overcome them.
It’s a Moving Landscape
AI is still too new for security teams to fully understand the risks it poses. For example, one emerging threat is the use of synthetic data generated by AI to create fake identities, documents or credentials that can be used to bypass security measures or impersonate authorized users. Similarly, many organizations that are deploying Kubernetes do not know the ins and outs of the system, and when AI is added to the mix, the organization is inherently at risk of unknown threats. The evolution of generative AI means that new attack surfaces are inevitable. As with all ransomware attacks, it’s no longer a question of if, but when.
AI has a learning curve, and we’re making and remaking the rules as we go along, continuing to experiment and test its capabilities. For Kubernetes leaders looking to deploy AI, it helps to have an active playbook to ensure you are fulfilling basic and crucial conditions such as revamping your cyber governance policies.
There are several frameworks to get a running start in this evolving field, including MITRE ATLAS (the successor to the Adversarial ML Threat Matrix) and the NIST AI Risk Management Framework. Since the landscape is constantly evolving, the playbook should evolve and update accordingly. This is valuable not just for meeting base-level conditions, but also for tracking the progress, response and impact of AI in your Kubernetes environment.
Identifying the Use Case
Finding your use case is the first step in the process of adopting AI. This raises the question — and challenge — of identifying whether your use case is low risk/low impact or high risk/high impact. While your existing technologies have historically demonstrated their respective risk and impact levels, AI workloads have no such track record yet.
You need to review the risk assessment with AI as an added lens. For example, with adversarial inputs, an attacker could add subtle noise or distortion to an image or a sound that would cause an AI system to misclassify it or provide incorrect information. This could have serious consequences for applications that rely on AI for decision-making, such as autonomous vehicles, medical diagnosis or facial recognition.
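To make the threat concrete, here is a minimal sketch of an adversarial input against a toy linear classifier, in the spirit of the fast gradient sign method. The weights, input and step size are all illustrative, not taken from any real model:

```python
import numpy as np

# Toy linear classifier: the sign of w . x decides the class (illustrative only).
w = np.array([1.0, -1.0])

def classify(x):
    return 1 if np.dot(w, x) > 0 else -1

x = np.array([0.6, 0.5])       # a benign input, classified as +1
eps = 0.2                      # small, hard-to-notice distortion per feature
x_adv = x - eps * np.sign(w)   # step each feature against the decision boundary

print(classify(x))      # 1
print(classify(x_adv))  # -1: nearly the same input, opposite decision
```

Real attacks apply the same idea to deep networks by following the model's gradient, which is why input validation and adversarial testing belong in the risk assessment.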
As with anything bright, shiny and new, AI should be approached with caution — but still be approached. As the daily news articles remind us, there is no shortage of innovations and active AI projects in the cloud native domain, ranging from GitHub Copilot, with a claim of 55% productivity improvement, to Kubernetes support for Nvidia GPUs to increase performance.
Organizations should get their hands dirty, evaluate various development tools and experiment with AI to identify its advantages and limitations in their own operating environment. Trying this in safe, controlled settings initially, while paying close attention to copyright concerns and the evolving regulations in their industry, is a prudent way to get started.
Building on DevSecOps
Security is a shared responsibility that requires collaboration and coordination across an organization, from development to deployment. To keep productivity high, there is a separation of concerns that allows each team to focus on their core competencies and tasks. However, this also creates silos and gaps that can compromise the security posture of the organization. DevSecOps practices and automation tools have been implemented to bridge these silos, ensuring that security is not an afterthought but a priority.
As an example of a potential gap, do your current policies and practices protect against data poisoning? Couple that with the Kubernetes SIG Security audit finding that the audit logging feature does not capture all the information relevant for security analysis, such as the source IP address, user agent and request body of API requests. Enhancing audit logging to include more of this information, which can help identify the source and intent of API requests, and to support different log formats and destinations, can provide that shared context.
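You can raise the level of detail Kubernetes records per resource with an audit policy file passed to the API server via `--audit-policy-file`. A minimal sketch; the resource selection here is illustrative and should be tuned to your environment:

```yaml
# Minimal audit policy sketch: full request/response bodies for sensitive
# resources, metadata only (user, source IPs, user agent, verb) for the rest.
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  - level: RequestResponse        # capture request bodies for forensics
    resources:
      - group: ""                 # core API group
        resources: ["secrets", "configmaps"]
  - level: Metadata               # default for everything else
```

Note that `RequestResponse` logging of secrets can itself leak sensitive data into logs, so pair it with strict access controls on the audit backend.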
Applications incorporating AI tend to be more complex and dynamic, requiring continuous testing, monitoring and validation throughout the life cycle with a growing number of stakeholders. Enhancing DevSecOps tools and practices can facilitate these activities by automating them in a continuous integration and delivery pipeline, enabling faster feedback and error correction.
For instance, incorporating data governance policies that maintain encrypted, auditable and controlled access to not just your container images but also your training and production data is imperative. Compliance-as-code constructs in your development environment also need to be extended to cover the regulatory standards emerging for AI, including those from NIST.
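A compliance-as-code check can start as a simple policy test in your CI pipeline. The sketch below is hypothetical: the registry name, manifest fields and `encrypted` flag are illustrative stand-ins for whatever your governance policy actually mandates:

```python
# Hypothetical compliance-as-code gate: fail the build if a manifest pulls
# images from outside an approved registry or mounts data unencrypted.
# The registry and field names are illustrative, not a real standard.
APPROVED_REGISTRY = "registry.example.com/"

def violations(manifest: dict) -> list[str]:
    found = []
    for c in manifest.get("containers", []):
        if not c["image"].startswith(APPROVED_REGISTRY):
            found.append(f"unapproved image: {c['image']}")
    for v in manifest.get("dataVolumes", []):
        if not v.get("encrypted", False):
            found.append(f"unencrypted data volume: {v['name']}")
    return found

manifest = {
    "containers": [{"image": "docker.io/someone/model:latest"}],
    "dataVolumes": [{"name": "training-data", "encrypted": False}],
}
for problem in violations(manifest):
    print(problem)
```

In practice you would express the same rules in a policy engine such as Open Policy Agent rather than ad hoc scripts, but the principle — machine-checkable governance at build time — is the same.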
Revisiting Red and Blue Teams
A Red team is a group of ethical hackers who simulate real-world cyberattacks on an organization’s systems, networks and applications, with the goal of testing their security posture and identifying vulnerabilities.
AI technologies can now supercharge your Red teams to better emulate the tactics, techniques and procedures of real adversaries. However, at the same time, given the growing attack surfaces that AI brings to an organization, you also need to reskill or employ the appropriate third-party organization to protect you against the growing threats mentioned above, including model and data extraction or tampering, data poisoning and adversarial inputs. Refer to reports and guides such as those from Microsoft’s AI Red Team.
A Blue team is typically focused on proactive defense strategies, threat detection, incident response and vulnerability management. AI technologies promise to be a productivity booster for such teams since AI-powered solutions can synthesize and analyze large amounts of data from disparate sources as well as flag anomalous behavior. Refer to reports like Google’s Secure AI Framework (SAIF) or guidelines on securing AI pipelines. Invest in these upgrades and new tool sets; they will serve you well.
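AI-assisted Blue-team tooling often begins with simple statistical baselines before graduating to learned models. A minimal z-score sketch over per-account API request counts; the baseline numbers and threshold are illustrative:

```python
import statistics

# Baseline: API requests per hour for a service account over recent history.
baseline = [102, 98, 110, 95, 105, 99, 101, 97, 103, 100]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count: int, threshold: float = 3.0) -> bool:
    """Flag counts more than `threshold` standard deviations from the mean."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(104))   # False: within normal variation
print(is_anomalous(480))   # True: possible exfiltration or runaway client
```

Production systems replace the static baseline with rolling windows and per-entity models, but the workflow — learn normal behavior, alert on deviation — is what AI-powered detection automates at scale.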
Strengthen Your Last Line of Defense
As a parting but critical thought, remember your security measures may not stop all attacks. So you need a last line of defense for your Kubernetes environment in case of attacks such as ransomware. A data protection solution that works with any environment (storage, cloud or distribution) and keeps up with the latest developments is what you need.
You also need to protect your Kubernetes applications as a whole, including the AI applications that may use vector databases as well as SQL/NoSQL data services. By getting ready with proactive tools and processes that can detect, prevent and recover from attacks in this fast-paced cloud native environment, you will reap long-term benefits.