This post is sponsored by Prisma Cloud by Palo Alto Networks in advance of Building a Scalable Strategy for Cloud Security: A Virtual Event on Jan. 26, 2021.
Cloud native computing is gaining in popularity. In 2020, for example, the Cloud Native Computing Foundation (CNCF) reported 91% of surveyed companies were using Kubernetes, with 83% of them in production. Thanks to cloud native computing, teams can easily build and manage cloud services using containers, microservices, immutable infrastructure, and declarative APIs without worrying about the underlying servers. Cloud native security, however, remains a real challenge.
Oddly enough — or perhaps not oddly at all — the biggest problems with cloud native security aren’t anything to do with all the new technology. No, it’s the same old security problems coming back to haunt us. In fact, there are five primary cloud native security concerns that organizations should aim to address if they want to start off the New Year in a stronger position, according to Matthew Chiodi, Palo Alto Networks Chief Security Officer, Public Cloud. Some are easy, low-hanging fruit to fix while others require a larger organizational overhaul. Let’s take them one by one.
Concern No. 1: Failing to use Multifactor Authentication
In Palo Alto Networks’ Unit 42 Cloud Threat Report, 2H 2020, the Unit 42 Red Team found a single, simple identity and access management (IAM) misconfiguration that enabled them to compromise an entire Amazon Web Services-based cloud environment and bypass essentially all security controls. Yes, how we manage user IDs, passwords, and authentication is, if anything, more of a problem in cloud native environments than it ever was when your PC users insisted on using “password” for their Windows password. Back then, careless users could only hurt your company in “retail”-sized doses. With the cloud, we’re talking wholesale havoc.
“Cloud native identity and access management should be a number one area of focus for organizations. You’ll hear a lot of talk around ‘Hey, make sure you have multifactor authentication (MFA) enabled.’ That’s a best practice,” Chiodi explained. But, it’s still not done often enough.
Adding insult to injury, as the threat actor Dark Halo showed in the SolarWinds security fiasco, it’s possible to crack multifactor authentication measures. Still, as Chiodi observed, deploying MFA will make your services more secure. MFA in cloud native identity and access management, Chiodi suggests, “should be a top priority if not the number one focus right now.”
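Making MFA a priority starts with knowing who doesn’t have it. Here is a minimal Python sketch of that audit; the user-record shape and field names are illustrative assumptions, not any cloud provider’s actual API:

```python
def users_without_mfa(users):
    """Return the usernames of accounts with no MFA device enrolled.

    `users` is assumed to be a list of dicts with a `name` and a
    `mfa_devices` list -- an illustrative shape, not a specific
    provider's API response.
    """
    return [u["name"] for u in users if not u.get("mfa_devices")]

# Example inventory: one user with a hardware token, two without MFA.
users = [
    {"name": "alice", "mfa_devices": ["hardware-token-1"]},
    {"name": "bob", "mfa_devices": []},
    {"name": "ci-service", "mfa_devices": []},
]
print(users_without_mfa(users))  # the accounts to chase down
```

In practice you would feed this from your identity provider’s user listing and run it on a schedule, so un-enrolled accounts surface before an attacker finds them.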
Chiodi also observed that the “Cybersecurity and Infrastructure Security Agency (CISA) recently issued warnings about MFA cloud service attacks which enabled attackers to log into an organization’s cloud services.” Specifically, the CISA found successful MFA cloud attacks had used phishing, email forwarding vulnerabilities, brute-force attacks on remote users with poor cyber hygiene, and a “pass-the-cookie” attack. The root of the failure here is not complex cloud native dark magic but simple old end-user security failures.
In other words, Chiodi said, “A lot of low hanging fruit in the security space.” There’s more than just phishing involved. For example, in Palo Alto Networks’ most recent cloud threat report, “we found that about 66% of organizations don’t rotate their access keys as often as they should. The best practice is supposed to be every 90 days but 66% of organizations are not doing that. So, just by taking care of your basic identity hygiene, you’ll be ahead of the game.”
Concern No. 2: Over-privileged Access
That said, there’s another related problem which you also know from past experience: Over-privileged, non-administrator user accounts. When are we ever going to learn that letting users run programs as root is just asking for trouble?
The most common example of this remains Docker-based containers running as root. Just because you can run containers as root doesn’t mean it’s a good idea. As Cat Cai, Fair Financial’s Director of Platform Engineering, remarked, you should use the principle of least privilege. That is, “both your applications running in and your developers accessing the cluster should only get access to the resources that they need.”
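Catching root-capable containers before they ship is straightforward to lint for. The sketch below checks a Kubernetes-style pod spec; the dict layout mirrors the real `securityContext` fields, but this is an illustrative check, not a substitute for a proper admission controller:

```python
def container_violations(pod_spec):
    """Flag containers in a Kubernetes-style pod spec that may run as
    root or that request privileged mode.

    The dict layout mirrors Kubernetes `securityContext` fields, but
    this is an illustrative lint, not an admission controller.
    """
    problems = []
    for container in pod_spec.get("containers", []):
        ctx = container.get("securityContext", {})
        if not ctx.get("runAsNonRoot", False):
            problems.append((container["name"], "may run as root"))
        if ctx.get("privileged", False):
            problems.append((container["name"], "privileged mode"))
    return problems

# Example: one well-behaved container, one over-privileged sidecar.
spec = {
    "containers": [
        {"name": "app", "securityContext": {"runAsNonRoot": True}},
        {"name": "sidecar", "securityContext": {"privileged": True}},
    ]
}
print(container_violations(spec))
```

In a real cluster, the same policy is better enforced at admission time (for example with a policy engine), so an over-privileged spec never reaches the scheduler at all.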
Concern No. 3: Misconfigurations
Another all-too-common old security hole, Chiodi remarked, is “cloud storage buckets and databases that are open to the Internet. These are misconfigurations that should never exist in anyone’s cloud environment.” The answer is not to fix them one at a time, but to “automate away these simple problems.”
This needs fixing in the worst way. A cloud storage bucket created by Advantage Capital Funding and Argus Capital Funding didn’t use any form of encryption, authentication, or access credentials. The result? Over half a million confidential legal and financial documents were leaked. The companies are far from the only ones guilty of such simple security sins.
True, “cloud storage is cheap,” said Chiodi, but “if you look at compliance requirements like California Consumer Privacy Act (CCPA) and the EU’s GDPR both have stringent personal data protection standards and the fines are very costly.” Those legal costs will dwarf any savings you get from skimping on security for your cloud storage and databases.
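Automating away the open-bucket problem means scanning every bucket’s access grants, not spot-checking. A minimal Python sketch; the grant shape here is a deliberate simplification of cloud storage ACLs, not any provider’s actual schema:

```python
# Principals that mean "everyone" in this simplified ACL model
# (illustrative -- real providers use their own identifiers).
PUBLIC_PRINCIPALS = {"*", "AllUsers"}

def publicly_readable(bucket):
    """Return True if any grant opens read access to everyone."""
    return any(
        grant["grantee"] in PUBLIC_PRINCIPALS
        and grant["permission"] in {"READ", "FULL_CONTROL"}
        for grant in bucket.get("grants", [])
    )

# Example inventory: one internal bucket, one accidentally public one.
buckets = [
    {"name": "internal-reports",
     "grants": [{"grantee": "ops-team", "permission": "READ"}]},
    {"name": "legal-docs",
     "grants": [{"grantee": "*", "permission": "READ"}]},
]
leaky = [b["name"] for b in buckets if publicly_readable(b)]
print(leaky)  # the buckets exposed to the whole internet
```

The same logic, pointed at a live inventory and run continuously, is the difference between finding the leaky bucket yourself and reading about it in a breach report.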
Concern No. 4: Ignoring Build Security
Chiodi is also a big believer in shifting security left. This phrase means moving security to the earliest possible point in the development process. That’s particularly important since, as Chiodi pointed out, all too often in continuous integration and continuous delivery (CI/CD) “security teams only become involved in the concluding steps of deployment and operations.”
By shifting left, you’ll not only reduce your security risks but also your development costs. IBM’s System Sciences Institute found that addressing security issues in design was six times cheaper than during implementation and 15 times cheaper than during testing.
This all starts, Chiodi said, with “getting a handle on how and where software is created in your organization. Depending on the size of your company, this could run the gamut from straightforward to extremely challenging.”
How challenging? Very.
“Large organizations will likely spend a few months digging,” observed Chiodi. That’s because “development is outsourced to multiple vendors, which will require additional work and sometimes contract reviews, and each business unit will have its own software development process and tools.”
Chiodi added, “You must also look at your infrastructure as code templates. We’ve found the majority of these templates have one or more security misconfigurations. As part of your company’s shift left strategy, you must make sure to look for common security misconfigurations in them.”
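Scanning infrastructure-as-code templates for those common misconfigurations is itself automatable. The sketch below checks a simplified template structure for security group rules that expose SSH to the whole internet; the template shape is illustrative, not real Terraform or CloudFormation syntax:

```python
def open_ssh_rules(template):
    """Find security groups that expose SSH (port 22) to 0.0.0.0/0.

    `template` uses a simplified, illustrative infrastructure-as-code
    shape -- real scanners parse actual Terraform/CloudFormation.
    """
    findings = []
    for name, group in template.get("security_groups", {}).items():
        for rule in group.get("ingress", []):
            if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") == 22:
                findings.append(name)
    return findings

# Example template: HTTPS open to the world is fine; SSH is not.
template = {
    "security_groups": {
        "web": {"ingress": [{"cidr": "0.0.0.0/0", "port": 443}]},
        "bastion": {"ingress": [{"cidr": "0.0.0.0/0", "port": 22}]},
    }
}
print(open_ssh_rules(template))
```

Wired into the CI/CD pipeline as a failing check, a rule like this stops the misconfigured template at the pull request rather than in production.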
While doing that work, wise companies will also look at their build environments. David A. Wheeler, The Linux Foundation’s Director of Open Source Supply Chain Security, noted in a recent report, Preventing Supply Chain Attacks Like SolarWinds, that the SolarWinds disaster was caused in part because the attacker had compromised the company’s build system.
Chiodi agreed, “There needs to be equal focus on the actual build environment itself because as we’ve seen with these supply chain attacks, if your build environment is not secure that puts every piece of software and every line of code that moves through it equally at risk.”
The solution, according to Wheeler, is to use verified reproducible builds.
These are, Wheeler wrote, builds “that always produce the same outputs given the same inputs so that the build results can be verified. A verified reproducible build is a process where independent organizations produce a build from source code and verify that the built results come from the claimed source code.”
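The verification step Wheeler describes boils down to a bit-for-bit comparison: if two independent builders start from the same source, their artifacts should hash identically. A minimal Python sketch using in-memory stand-ins for the built binaries (the byte strings are illustrative, not real ELF files):

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """SHA-256 digest of a build artifact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def builds_match(artifact_a: bytes, artifact_b: bytes) -> bool:
    """Verified reproducible builds hinge on this comparison: two
    independent builds from the same source should be bit-for-bit
    identical, so their digests must agree."""
    return artifact_digest(artifact_a) == artifact_digest(artifact_b)

# In-memory stand-ins for independently built binaries (illustrative).
build_one = b"\x7fELF...same source, builder A"
build_two = b"\x7fELF...same source, builder B output, identical bytes"[:30]
build_two = b"\x7fELF...same source, builder A"  # identical by construction
tampered = b"\x7fELF...same source, plus implant"

print(builds_match(build_one, build_two))  # True: the build reproduces
print(builds_match(build_one, tampered))   # False: divergence flags tampering
```

The hard part, of course, is not the hashing but making the build itself deterministic (pinned toolchains, fixed timestamps, stable file ordering) so honest rebuilds actually do match.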
Until that day comes, Chiodi said you should lock down your build environments as much as possible using the basic security suggestions he’s already made.
That done, you can use the information you’ve learned about how your organization builds software to improve your organizational security. That, Chiodi continued, will enable you to “develop a strategic plan on how to handle security in your development teams.”
Concern No. 5: Focusing on Alerts
Finally, Chiodi stated, the “last area your organization should focus on is risk metrics versus alerts. An alert tells you that something may have gone wrong. Risk metrics focus on effectiveness and efficiency, so you can spot vulnerabilities before things go wrong. For many organizations, this will be a big shift because they’re very much focused on alerts. But, I think they need to focus more on risk-based metrics.”
That’s because companies should be more proactive rather than reactive to security issues. If all you’re doing is looking at alerts, well, the problem’s already happened. But, if you’re monitoring your risks, you can stop them before a sysadmin is summoned at 2 a.m. to fix a misbehaving service at time and a half. Monitoring those metrics largely depends on giving security and development teams visibility across all deployments, which can also have its drawbacks.
“As companies move to the cloud or expand their cloud presence, Security Teams need added visibility to everywhere their company data is. (But) adding tools can add alerts and dashboards to already overloaded Security Teams,” said Tyler Warren, director of IT security at Prologis, a Palo Alto Networks customer. “The cloud enables easier integration between cloud-based tools and this should be taken advantage of to implement alert consolidation, data enrichment, and automation.”
Besides, Chiodi continued, “risk metrics that focus on effectiveness and efficiency can actually help you to spot vulnerabilities in development pipelines. They can also aid you in identifying how efficient DevOps teams are in discovering vulnerabilities, pre-production, and post-production.”
Finally, as Nikesh Arora, CEO and chairman of Palo Alto Networks, recently said about SolarWinds, “100% prevention, 100% of the time is impossible. … But against bad guys who are always attempting to out-innovate us, security has to be more proactive and future-proof: If you are not able to prevent an attack in realtime, you need to detect and investigate near real-time. The days of fragmented security and lengthy investigation cycles are behind us, we need good data and real-world AI to get ahead. … Sophisticated hackers spend years planning campaigns — we must devote similar resources to our defenses.”
Easy? No. Necessary? Yes. Oh my yes.
Building a Scalable Strategy for Cloud Security: A Virtual Event will be held on January 26, 2021.
Amazon Web Services and The Linux Foundation are sponsors of The New Stack.