Cloud Native Computing Foundation sponsored this post, in anticipation of KubeCon + CloudNativeCon North America 2020 – Virtual, Nov. 17-20.
The 21st century is all about cloud native applications, but designing containerized applications or microservices is only the tip of the iceberg. What we do in design needs to push the boundaries even further.
Software is not just about providing a service to customers; it is also meant to be highly reliable and available. This is only possible through strictly constrained development processes, as well as a focus on continuous integration, delivery, and production access.
Of course, as a developer, you don’t want too many restrictions while creating your software, writing your code, or defining your Dockerfile or equivalent. But you have to recognize that the more constraints you impose on yourself and your teams, the more stable the output will be.
Then when these constraints become a habit, you won’t even notice them anymore. All the while, you will have learned a lot more about security operations than you expected, improved your onboarding processes, and made the “go-to-production” decision far simpler.
In the past, when we developed an application or a product, we usually followed these steps (more or less in this order):
- Find and define your market needs.
- Define your software architecture, databases, models, etc.
- Add some testing, CI/CD processes.
- Build your production environment.
- Go to production.
This used to be fine, at least when you had a good system administrator taking care of the security of your precious servers or instances. If you were lucky, you probably hated them as a developer, because they would always criticize you for doing things “you were not supposed to do.”
Depending on your generation, or the companies you have been a part of, you may have never experienced this.
But now, what if we did things a little differently, to ensure that what we offer to our customers is indeed highly reliable, available, and built with best practices in mind?
Your Production Environment Is Key
In the end, we know that any part of our workload can crash — any part, that is, but our production environment. Customers don’t have to be understanding when a service fails; they are paying for it, after all. They are our first resellers and should never be let down. Taking care of them is our main responsibility.
Your production system and your customers are interdependent.
Our goal is always to avoid downtime and provide the best service possible. So, instead of putting the code first, why not change the way we design things and start by defining the production architecture?
The choice of your database or models can wait, as well as your front-end application, and so on.
So let’s update our product creation process:
- Find and define your market needs.
- Define your production architecture: types of third parties you will need, deployment strategy (CI/CD, tools, etc.), container technology.
- For each item listed above, define guidelines corresponding to a restrictive approach for maximum security. For instance:
- Define templates and guidelines for your Dockerfiles.
- List and define the configuration of webhooks to check file modifications (linter, templating, format validation, etc.).
- Define the user roles and access to each item and environment.
- Implement the constraints you defined.
- Now feel free to make the remaining architecture decisions (databases and other third parties).
- Coding might feel a bit restrictive in those conditions, but it’s for the best.
- Go to production. Everything is ready, right?
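As one illustration of the Dockerfile templates and linting webhooks mentioned in the steps above, a team might commit a shared linter configuration that CI runs on every file modification. Here is a hedged sketch using the hadolint Dockerfile linter — the ignored rule and the internal registry name are placeholders, not recommendations:

```yaml
# .hadolint.yaml — illustrative shared config for the hadolint Dockerfile linter.
# A CI webhook would run `hadolint Dockerfile` against every change.
ignored:
  - DL3008               # example: tolerate unpinned apt packages, if the team accepts the risk
trustedRegistries:       # flag images pulled from registries outside this list
  - docker.io
  - registry.example.com # placeholder for an internal registry
```

Checking a configuration like this into the repository makes the constraint explicit and reviewable, rather than tribal knowledge.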
Following this process for each new architecture change might be time-consuming at first, but looking at the big picture, we know it will build a rock-solid working environment for your teams and reliable software for your customers.
Secure the Kubernetes Hype
Developers choose to work with Kubernetes for multiple reasons: curiosity, wanting to learn about a “new” technology, scalability needs, and so on.
Unfortunately, Kubernetes is not magic. I was disappointed when I started using it and realized that a part of my life had been sucked into YAML files. That said, once you dig a little deeper, you realize that it can actually be a powerful tool for many kinds of production environments.
I am convinced that Kubernetes is a tool worth discovering and working with, even though it’s not as simple as we would like.
Also, if it is part of your production architecture, it gives you a whole host of possibilities for constraining your environments, as well as the containers deployed in them.
Unfortunately, you need to sink your teeth into the configuration of a Kubernetes environment to discover these possibilities, as they are not enabled in most managed Kubernetes products by default.
Additionally, with Role-Based Access Control (RBAC), you can define access policies on every layer of your cluster — from the cloud to the code.
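For instance, a minimal RBAC policy granting one user read-only access to pods in a single namespace might look like the sketch below; the `production` namespace and the user `jane` are illustrative names, not part of any real cluster:

```yaml
# Role: allows read-only operations on pods, scoped to one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production   # illustrative namespace
  name: pod-reader
rules:
- apiGroups: [""]         # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding: attaches the Role above to a specific user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
- kind: User
  name: jane              # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Defining such roles per environment up front is far easier than carving access back out of a cluster where everyone already holds cluster-admin.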
Admission controllers let you enforce specific rules on all objects deployed to your Kubernetes cluster. These allow you to force a container image pull policy, certificate configuration, pod privileges and access, and so on.
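As one concrete example, recent Kubernetes versions ship a built-in Pod Security admission controller that can be switched on per namespace with labels; the sketch below assumes a hypothetical namespace named `production` and the standard `restricted` profile:

```yaml
# Namespace with Pod Security admission labels: pods that violate the
# "restricted" Pod Security Standard are rejected at admission time.
apiVersion: v1
kind: Namespace
metadata:
  name: production   # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```

Cluster-level controllers such as `AlwaysPullImages` can be enabled on the API server in a similar spirit, forcing every pod to re-pull (and thus re-authorize) its images.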
This is why defining the interactions between Kubernetes objects, containers, and pods early is crucial. Once your production is up and running, it can be a real challenge to retrofit the restrictions you need and to ensure that all of your objects are redeployed and compliant with the new policies.
Third Parties and Code — Security Impacts
It is up to you to choose which tools or external services you need to create your product or software, but the responsibility for justifying that choice and owning its associated risks falls on everyone.
As a developer, a few years back I didn’t really think about the security issues in the third parties I was using — such as Cassandra, Redis, PostgreSQL, Elasticsearch, or Kibana.
I will hold my hand up and say that I honestly never looked into the security side of things. I used the tools solely for the purpose they served.
Third parties, like all software, share one fault: they have all been coded by humans, and, as we know, we are not perfect; neither is our code. Each time an external tool is adopted, it should go through a security assessment process. It is okay to use an imperfect tool, as long as we know it is imperfect and are able to integrate the necessary layers of security.
Security issues also impact our own code. For example, we use multiple libraries (sometimes even different versions of the same one), but we rarely update those versions and the associated code, remove deprecated functions, or apply patches after a vulnerability alert.
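One low-effort way to keep library versions and security patches current is to automate the update checks. As a hedged sketch, a GitHub Dependabot configuration for a hypothetical npm project at the repository root could look like this:

```yaml
# .github/dependabot.yml — illustrative automated dependency-update config.
version: 2
updates:
  - package-ecosystem: "npm"   # assumption: an npm project; swap for pip, gomod, etc.
    directory: "/"             # location of the manifest (package.json)
    schedule:
      interval: "weekly"       # open update pull requests once a week
```

Comparable tooling exists for most ecosystems; the point is that version hygiene becomes a scheduled process rather than something remembered after an incident.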
Speed Can Never Trump Security
Nowadays, we need to move fast. Unfortunately, speed boosts directly impact the security and reliability of our products, making us vulnerable to attacks and reducing the availability of our services — which, in turn, impacts our customers.
We work for our products and for our customers. Yet, sometimes, the best way to serve our purpose is to take a step back, look at our mistakes, and learn from them.
Reliability is everything for your customers, and if you ask them, they will most certainly tell you that they place far greater value on it than on their deadlines.
To learn more about Kubernetes and other cloud native technologies, consider coming to KubeCon + CloudNativeCon North America 2020, Nov. 17-20, virtually.
The Cloud Native Computing Foundation is a sponsor of The New Stack.
Feature image via Pixabay.