Twistlock sponsored this post.
When people talk about the cloud, they tend to center discussion around applications.
To a point, that makes sense. Applications are one thing that runs in the cloud, and hosting applications is an important reason for using the cloud.
However, by focusing only on applications, it can be easy to overlook the broader context in which they live. At the end of the day, applications are only one part of broader cloud workloads. To gain a full understanding of deployment, monitoring and security operations, it’s necessary to think about cloud workloads — and not just applications.
If you think I’m splitting semantic hairs here, keep reading. Below, I explain why the distinction between applications and cloud workloads is important, as well as how deploying, monitoring and securing cloud workloads is different from doing the same with applications.
Applications vs. Cloud Workloads
Let’s start by defining the difference between applications and cloud workloads.
In the simplest sense, an application is code that performs a given function — nothing more, nothing less.
In contrast, a cloud workload is all of the resources and processes that are necessary to make an application useful. A cloud workload typically includes an application, but it also involves things like data served to and generated by the application, network resources required to connect users to the application (or to connect different parts of the application together) and users — without whom your application would not really serve its purpose.
Until about 10 years ago, when cloud computing became a thing, the distinction between an application and a cloud workload wouldn’t have made sense to most people. Back then, most applications were deployed on-premises and accessed locally. Apart from databases that they might have connected to, there wasn’t much else to worry about. Your application was the main thing you focused on when you were deploying, monitoring or securing your environment.
Today, however, that is no longer the case. Today’s applications live in a more complicated world, and rely on a host of external resources to do their jobs. All of those resources require additional compute resources (and possibly other infrastructure) to run.
Deploying, Monitoring and Securing Cloud Workloads
That means (among other things) that there is more to think about when it comes to deployment, monitoring and security for cloud workloads, as compared to applications.
With Applications, Life Is Easy
In the days before the cloud, deployment was pretty simple — and it was typically the system admin’s or end-user’s responsibility, because someone had to install software locally on a machine, whether it was for personal use or enterprise use.
Monitoring was also simple. Since you were really only monitoring your application (and the on-premises server hosting it), rather than a broader set of resources, there was less monitoring data to collect and worry about.
Security, too, was a lot simpler. You could focus on finding vulnerabilities within application code, and typically didn’t need to worry about things like API security or access control because, again, you were only concerned with an application, not a broader workload.
The Complexity of Cloud Workloads
Fast forward to the present, and those days of application-centric simplicity are gone, because you now have to think about an entire cloud workload, not just application code.
To deploy a cloud workload, you need to coordinate the availability of a host of different resources — compute instances, storage volumes, IAM services and network load balancers, to name just the minimum set of items that typically go into a cloud workload on top of the application. Not only do you have to set up all of these resources and connect them together to complete your deployment, but you also ideally need to integrate their deployment into continuous delivery pipelines, so that your cloud workload can be continuously updated and deployments automated.
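To make that coordination problem concrete, here is a minimal sketch in Python of how dependencies between the kinds of resources named above might be resolved into a deployment order. The `Resource` class and the resource names are illustrative only — they are not tied to any real cloud provider’s API:

```python
from dataclasses import dataclass, field

# Illustrative model of the pieces of a cloud workload; in practice an
# infrastructure-as-code tool would build and walk this graph for you.
@dataclass
class Resource:
    name: str
    depends_on: list = field(default_factory=list)

def deploy_order(resources):
    """Return resource names so each appears after its dependencies."""
    order, seen = [], set()
    def visit(r):
        if r.name in seen:
            return
        seen.add(r.name)
        for dep in r.depends_on:
            visit(dep)          # create dependencies first
        order.append(r.name)
    for r in resources:
        visit(r)
    return order

# A toy workload: the app needs compute, storage and a load balancer;
# compute needs the network and IAM roles to exist first.
network = Resource("network")
storage = Resource("storage")
iam = Resource("iam")
compute = Resource("compute", depends_on=[network, iam])
load_balancer = Resource("load-balancer", depends_on=[compute])
app = Resource("app", depends_on=[compute, storage, load_balancer])

print(deploy_order([app]))
# → ['network', 'iam', 'compute', 'storage', 'load-balancer', 'app']
```

This is exactly the kind of bookkeeping that tools like Terraform or CloudFormation take off your hands — but the dependency graph still exists, and someone has to get it right.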
Monitoring cloud workloads is a lot tougher, too. You have to monitor all components of the cloud workload separately to ensure their individual health, while at the same time keeping track of how they work together. An application front-end and a database that appear perfectly healthy monitored individually might turn out to have a performance problem when they connect to each other, and you will only know about it if you monitor your cloud workload holistically. On the other hand, you also want to monitor each component individually in order to discover problems early on, before they extend across the rest of your workload.
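A toy sketch of that idea, with hypothetical health checks standing in for real probes, shows how a holistic end-to-end check can trip an alert even when every component looks healthy on its own. The check functions and latency numbers here are invented for illustration:

```python
# Hypothetical per-component checks; in a real workload these would hit
# each component's health endpoint or run a probe query.
def check_frontend():
    return {"healthy": True, "latency_ms": 40}

def check_database():
    return {"healthy": True, "latency_ms": 35}

def check_end_to_end():
    # A round trip through the frontend *and* the database together can
    # surface problems (e.g. connection-pool exhaustion) that neither
    # component shows when probed alone.
    return {"healthy": True, "latency_ms": 950}

def evaluate(checks, latency_budget_ms=500):
    """Run every check and collect the names of those that breach the budget."""
    alerts = []
    for name, check in checks.items():
        result = check()
        if not result["healthy"] or result["latency_ms"] > latency_budget_ms:
            alerts.append(name)
    return alerts

alerts = evaluate({
    "frontend": check_frontend,
    "database": check_database,
    "end-to-end": check_end_to_end,
})
print(alerts)  # only the holistic check fires, even though both components pass
```

The point of the sketch: both component checks pass individually, so only monitoring the workload as a whole reveals the performance problem.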
And then there’s security, which is arguably the hardest thing of all to achieve for a cloud workload. Not only does securing a cloud workload require you to perform security analysis on each component of your workload, but you also need to run multiple types of analyses.
Checking for vulnerabilities inside application code is not enough — it needs to be augmented by analysis of environment configurations to catch insecure settings. You may also need to run compliance checks to make sure your cloud workload meets any internal or external policy requirements related to security.
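As an illustration of running those multiple layers of analysis side by side, here is a sketch in which toy rules stand in for a real dependency scanner and a configuration auditor. The advisory data, package name and config keys are all invented for the example:

```python
# A made-up description of a workload: the application's dependencies
# plus the environment configuration it runs in.
workload = {
    "app": {"dependencies": {"libexample": "1.0.2"}},
    "config": {"storage_public": True, "tls_enabled": True},
}

# Hypothetical advisory data; a real scanner would pull this from a
# vulnerability database.
KNOWN_VULNERABLE = {("libexample", "1.0.2")}

def scan_code(app):
    """Layer 1: flag dependencies with known vulnerabilities."""
    return [f"vulnerable dependency: {name} {version}"
            for name, version in app["dependencies"].items()
            if (name, version) in KNOWN_VULNERABLE]

def scan_config(config):
    """Layer 2: flag insecure environment settings."""
    findings = []
    if config.get("storage_public"):
        findings.append("storage volume is publicly accessible")
    if not config.get("tls_enabled"):
        findings.append("TLS is disabled")
    return findings

def compliance_report(findings):
    """Layer 3: roll all findings up into a pass/fail policy verdict."""
    return "PASS" if not findings else "FAIL"

findings = scan_code(workload["app"]) + scan_config(workload["config"])
print(compliance_report(findings), findings)
```

Here the code scan and the configuration scan each catch a problem the other would miss — which is the whole argument for running both against a cloud workload rather than scanning application code alone.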
You also need to secure the network, of course. This is particularly tricky in most cloud workloads because they actually involve multiple layers of networking, some of which are internal and some of which are exposed to the public Internet.
Yes, You Can (Manage a Cloud Workload)
The good news is that you can manage all of the complexity that a cloud workload entails. It requires a more sophisticated strategy — one that addresses each of the components that form a cloud workload, as opposed to focusing just on application code — but that’s one of the necessary tradeoffs for being able to take advantage of the cloud.