
Governance, Risk and Compliance with Kubernetes

28 Apr 2021 5:00am, by Cristian Klein

Cristian Klein is a senior cloud architect with Elastisys. He works on Compliant Kubernetes, a Kubernetes distribution that facilitates multicloud compliance with various regulations. He received a Ph.D. in 2012 from INRIA, France. Cristian is passionate about cloud security and regulatory compliance, acting variously as a consultant, practitioner, teacher and researcher.

Have you ever woken up at night thinking that you don’t do “enough security”? Have you ever been frustrated that your organization does “too much security” and so it’s impossible to get anything done? In this blog post, we interviewed a risk officer to bring some clarity — and a good night’s sleep — into this contentious discussion.

2020 was a year like no other. If anything positive came out of it, it’s that we learned the difficulty of balancing various risks. The “Schrems II” ruling ended Privacy Shield, causing chaos and uncertainty about how we choose cloud providers. And in case we hoped this was a problem we could ignore, GDPR fines were administered at an alarming rate, reminding us that lax data privacy and security can have direct financial consequences.

Meanwhile, at KubeCon + CloudNativeCon NA 2020, security was one of the most attended tracks. The already crowded Cloud Native Computing Foundation (CNCF) security landscape got even more crowded: Falco, for example, moved to incubation and Open Policy Agent graduated. Kubernetes itself was caught in the “secure by default” vs. “works by default” battle.

With such a rapidly evolving risk and tech landscape, engineering anxiety is at an all-time high. Shall we adopt tech X, like everyone else seems to be doing? Should we move our workloads into cloud Y? Are we doing enough security? Are we doing “too much” security and losing market share? Am I sleeping well at night? Don’t you just wish you could safely and honestly talk to a risk officer?

Sarah Clarke
Sarah is a data protection and cybersecurity GRC specialist. She runs her own firm, Infospectives Ltd, providing advice and support to a wide range of clients. She is also a Fellow of the ForHumanity AI Institute, supporting its development of AI audit and governance regimes, and a guest lecturer in vendor security governance for the University of Manchester IT Governance Masters course.

To bring some clarity into this complex issue, we offer this Q&A with Sarah Clarke, a data protection and cybersecurity GRC specialist. With over 17 years of experience in IT risk management, she has witnessed many waves of regulations and new technology. She is unimpressed with technical jargon and product names, and is quick to figure out what you actually do to protect data. Here is a conversation I had with her (and she totally promised not to fire me!).

Sarah, when should architects and developers involve people like you?

Yesterday.

What about “break glass”? How should I find the right trade-off between being locked out vs. abusing emergency access?

A sign-off hierarchy, plus access audit and monitoring. This is not about stopping you; it is about showing your working. Trust, but verify, always.
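In Kubernetes terms, that answer maps onto familiar primitives. One possible sketch, with all names (the `break-glass-admin` binding, the `breakglass` group) invented for illustration: an RBAC binding that grants emergency access only to a dedicated group whose membership goes through the sign-off hierarchy, paired with an audit-policy rule that records everything done under that access.

```yaml
# Hypothetical break-glass setup; the binding name and "breakglass"
# group are illustrative, not prescribed by the interview.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: break-glass-admin
  labels:
    purpose: break-glass   # easy to find and revoke after the incident
subjects:
- kind: Group
  name: breakglass          # membership granted only via the sign-off hierarchy
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
# Audit policy: log full request and response bodies for anything done
# under break-glass access, so "trust, but verify" has data behind it.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  userGroups: ["breakglass"]
- level: Metadata           # catch-all for everyone else
```

The point is not the exact YAML, but that the emergency path is pre-approved, clearly labeled, and fully audited rather than improvised.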

I have to be honest with you. We did take a few “shortcuts” just to ship things and get feedback from the market. Now they have gotten adopted and we are stuck with them. It will take us months to plug those holes. How should we reason about minimization of privileged access?

Same again. If it won’t get fixed in the next sprint or the one after, or if it’s at risk of making it into your MVP, log it. We all forget things. Your need to rapidly iterate is not your risk, but failing to keep track of the compromises made to do it absolutely is. Escalate (call me or your CISO, or bang it into a log you know we will review). Have sensible criteria for when you do that, beyond a basic CVSS score.
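Logging a compromise does not require heavy tooling; a structured entry in a file your team actually reviews is enough. A minimal sketch, with every field name and value invented for illustration, of what such an entry might capture, including the business context that goes beyond a bare CVSS score:

```yaml
# Hypothetical risk-register entry; all fields and values are illustrative.
- id: RISK-042
  raised: 2021-04-28
  owner: platform-team
  summary: "Workloads share a cluster-admin binding (shipping shortcut)"
  impact: "A compromised pod can escalate to full cluster control"
  cvss: 8.8                  # a starting point, not the whole story
  business_context: "Cluster handles personal data; GDPR exposure"
  decision: accept-for-now   # doing nothing is valid, if the reasons are recorded
  revisit_by: 2021-06-30
  escalated_to: CISO
```

An entry like this turns a forgotten shortcut into a visible, owned, time-boxed decision.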

So do I need to fix everything before I can ship anything?

No need to aim for perfection, but you constantly need to manage the risk. Managing the risk can include doing nothing, but you MUST record your reasons for that, and involve the folk who have skin in the game.

But I need to push features. How do I make time for that?

Is there a working feedback loop? The need to push comes from someone accountable. Not having time to do your job is a risk all of its own. An Engagement Risk.

It is a risk you should log as early as possible and make visible to management, along with vulnerabilities and other control holes.

We have nobody formally on the hook for this. What do I do?

The best way to surface that is to do a RACI: get agreement upfront on who is Responsible, Accountable, Consulted, or Informed for a usefully granular list of tasks and inputs. Document what you need from everyone: all of your dependencies, limitations, and time constraints. Done upfront, it’s risk management. Done the week before launch, it’s an excuse.
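A RACI can live right next to the code it governs, for example as a file in the repository. A minimal sketch, where the task list, team names, and role titles are all invented for illustration:

```yaml
# Hypothetical RACI matrix kept in version control.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
tasks:
  - name: "Rotate cluster credentials"
    responsible: platform-team
    accountable: cto
    consulted: [security, compliance]
    informed: [support]
  - name: "Approve break-glass access"
    responsible: on-call-lead
    accountable: ciso
    consulted: [legal]
    informed: [engineering-managers]
```

Keeping it this granular makes gaps obvious: a task with no name in the Accountable column is exactly the "nobody formally on the hook" problem made visible.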

But we would become very uncompetitive. What should we do?

The board’s core job is managing risk, constantly. If they want to trade security or privacy for a feature or launch date, they need to sign their name to that; but you need to be clear about hard regulatory and legal lines and the sliding scale of other risks, then ensure all of that is documented.

How should I choose my suppliers?

Look at their own downstream supplier due diligence, their privacy and security controls and incident-response capability, and their in-house expertise. Their transparency and responsiveness. In our mainly cloud-based supply chains, rapid visibility is vital for everything, not just performance and throughput.

How should I think about my customers?

Like your friends and family. If your business model depends on obscuring things, on shipping a dangerously holey kit, on carefully phrasing what you plan to do with personal data, on dark patterns to confuse them about consent — is that ok?

What should I do if I “need to be compliant,” but my manager is pushing me to skip security for the sake of shipping fast?

If there are concerning security or privacy compromises that may not get fixed during iterative development, whether pre-launch or in early post-launch churn, and there is potential for significant risk (are you serving the needs of vulnerable people?), then you are probably not on the right side of history. But are you accountable? Your first move is always to define accountability, triage risk, document the danger from the perspective of diverse stakeholders, and escalate.

They don’t want to sign their name to that? Guess what? They already did. Accountability can only belong to those who have the knowledge and influence to effect change. We are accountable for providing that knowledge. They are accountable for the decisions they then make.

To learn more about Kubernetes and other cloud native technologies, consider coming to KubeCon+CloudNativeCon Europe 2021 – Virtual, May 4-7.

Feature image via Pixabay.
