This article is from our ebook, “The State of the Kubernetes Ecosystem,” and is part of a larger section on how to assess Kubernetes for end users, teams, and organizations. Read more in the full ebook, including articles about Kubernetes components and architectures, Kubernetes and the DevOps pipeline, and original data research findings from our Kubernetes user survey.
Kubernetes is but one permutation of orchestrated distribution. At the moment, it’s the one with the largest share of users, but this horse race has only just started. Docker Swarm and Mesosphere Enterprise DC/OS are competitive players. If you do select Kubernetes, it will be after you’ve weighed a multitude of important considerations.
What the CIO Needs to Consider
The typical goal of the IT department is to provide the organization with robust and reliable applications. Yet as IT faces the challenges of modernization for today’s economy, the chief information officer not only expects IT to provide robustness and reliability, but also to become more agile and more efficient with respect to resource consumption. Kubernetes helps IT optimize resources and operate at a much higher scale than ever before.
The success of the modern enterprise hinges on two areas of empowerment:
- How IT empowers developers by delivering the foundational services they need, at the scale the business needs.
- How IT empowers DevOps with the tools and support they need to deliver software to customers with greater agility.
Kubernetes is the secret sauce transforming IT from the gatekeepers to the innovators of the modern enterprise. Here’s what the CIO will need to consider with respect to how Kubernetes can serve as the vehicle for that transformation:
- Value assessment: It’s important that the CIO make a strategic assessment of the business value of IT transformation, and of how containers and Kubernetes can affect it. An organization may find business value in something that adds revenue, or in something that confers a strategic competitive advantage. Kubernetes is not a magic pill that can cure all IT woes. CIOs should get buy-in from all stakeholders, including IT administrators, developers and business users. This requires a careful discussion of potential disruptions, and a plan for mitigating them as Kubernetes is rolled out. Such buy-in helps usher in cultural change, along with the technology and architectural changes brought by modernization.
- Legacy assessment: Kubernetes supports legacy applications through stateful storage options, although these are typically not ideal workloads for Kubernetes. CIOs should prioritize the right high-value workloads for migration to Kubernetes, from both technical and business perspectives. There may be a cost associated with architectural changes to applications, and CIOs should weigh these costs as well. For instance, a mission-critical application might be disrupted, leading to a loss in business value, if it is moved to Kubernetes just for the sake of using Kubernetes.
- Process assessment: Using a platform like Kubernetes can provide agility to IT and can help deliver business value fast. The CIO should think through their organization’s entire value delivery process, taking into account potential pitfalls, and deciding if the investment in Kubernetes is the right one. ROI can be maximized when architectural changes to applications can be coordinated along with the move to containers and Kubernetes.
- Paradigm shift: Using containers and orchestrators requires a mindset change among IT decision makers, especially CIOs. Instead of thinking about the reliability of applications, the IT professional needs to think in terms of their resilience. It is important for CIOs to usher in this mindset change for both IT and developers.
- Architecture shift: When infrastructure and application components are treated as cattle instead of pets, you’ll soon need to rethink existing application architectures. CIOs should be prepared for this shift, and get buy-in from developers well ahead of time.
- Placement shift: Kubernetes may be deployed across multiple cloud providers, or on-premises. The CIO should develop a deployment strategy based on the organization’s needs first, and the needs of the infrastructure second.
- Storage shift: It is important for the CIO to identify whether the organization’s applications require stateful storage. They should ensure that the needs of stateful, storage-oriented apps — needs that won’t disappear under Kubernetes — will be supported.
- Declaration shift: With Kubernetes, you always need to consider the system as a whole, and tap into its declarative model for deployments. This requires a mindset change both at the infrastructure and application levels. Kubernetes espouses a declarative model, as though it were asking you, “tell me what you want, and I’ll try to do it,” as opposed to the classic, imperative model, “please instruct me as to how I should create exactly what you want.”
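As a concrete illustration of the declarative model, here is a minimal, hypothetical Kubernetes manifest (the names and image tag are invented for this sketch). Rather than scripting the steps to start three instances, you declare that three replicas should exist and let Kubernetes reconcile the cluster toward that state:

```yaml
# Hypothetical declarative manifest: you state the desired end state
# (three replicas of this image) and Kubernetes works to make it so.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # illustrative image name
        ports:
        - containerPort: 8080
```

If a pod dies, Kubernetes notices the divergence from the declared state and starts a replacement; no imperative recovery script is needed.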
People and Process Considerations
- Talent considerations: A move to Kubernetes will require cross-functional talent. The CIO should have a strategy in place for either hiring new talent or retraining existing talent, not only to better comprehend the new technologies, but to embrace the process changes that come with a move of this scale.
- Cultural considerations: Embracing Kubernetes is only one leg of a bigger journey. The time for complete compartmentalization of development teams from operations teams in the modern enterprise is over. And although there are a number of variants of this concept called DevOps, they all share the notion that these teams should share responsibility, and communicate more directly, with each other. Kubernetes is not a collaboration platform for Dev and Ops, or for Dev and Sec and Ops, or whatever syllabic mixture is in vogue today. But adopting it properly, and embracing the concept of distributed systems orchestration, does require the people who create applications and the people who manage them to, more than occasionally, have lunch together and swap stories. It is the responsibility of the CIO to empower stakeholders to facilitate this communication, so that these teams don’t have to wade through bureaucracy just to have a constructive chat.
- Failure considerations: Containers and orchestrators provide an opportunity for developers to experiment and fail fast. The CIO should empower stakeholders to experiment in such a way that failure becomes the first step toward remedy and improvement.
What the IT Implementer Needs to Consider
Kubernetes is very powerful at container orchestration, but it isn’t necessarily a perfect fit for every development context. Key stakeholders within organizations should ask themselves these questions first:
Will your applications need a distributed architecture (e.g., for microservices)?
While Kubernetes can work in a monolithic infrastructure, its focus is on orchestrating a large number of small services at big scale. What you will need to consider is whether the services you run today, plus those you plan to run in the future, can be decoupled from the application that uses them. Put another way, can the code that does the work be deployed behind an API? If so, you can use Kubernetes to orchestrate those services separately from the clients that call upon them. That separation is essential to making those services scalable.
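To make that decoupling concrete, here is a minimal sketch of worker logic exposed behind an HTTP API, using only the Python standard library. The endpoint, port and doubling logic are illustrative stand-ins, not anything prescribed by Kubernetes; the point is that once the work sits behind an API like this, the service can be scaled independently of its callers:

```python
# Minimal sketch: business logic placed behind an HTTP API so it can be
# deployed and scaled separately from the clients that call it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def do_work(payload):
    # Stand-in for the decoupled business logic.
    return {"result": payload.get("value", 0) * 2}

class WorkHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(do_work(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# To run standalone:
# HTTPServer(("0.0.0.0", 8080), WorkHandler).serve_forever()
```

Each such service then becomes a unit Kubernetes can schedule, replicate and load-balance on its own.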
Are monitoring tools in place to support a Kubernetes deployment?
Infrastructure monitoring helps ensure the health and availability of resources — storage, CPU, network — for the applications it serves. If you don’t have such tools in place already, you should invest in a robust monitoring mechanism that can track the health of underlying nodes, as well as that of the workloads running in Kubernetes. There are open source and commercial monitoring tools that integrate well with Kubernetes environments. Start considering monitoring tools, along with your other tools, right away. We touch more on monitoring considerations in the next chapter, as Kubernetes can bring about unique challenges.
Are your applications container ready?
Containers are different from virtual machines. Everyone in the organization — including developers, system administrators and DevOps practitioners — should have a basic understanding of containers. If they don’t yet appreciate the business value delivered through containerization (and not everyone will at first), they should at least respect the leaders of the organization and their reasoning behind this investment. Teams should first adopt containers in nonproduction environments such as development, testing, QA and staging.
Are your people container ready?
Adopting Kubernetes comes much later in the transition process than adopting containers. Docker adds value to dev/test environments and to continuous integration/continuous delivery (CI/CD) processes. That value should already have been added before starting with Kubernetes, or, for that matter, any orchestrator. A full appreciation and acknowledgment of the business and technical value of containers are prerequisites before you can use Kubernetes effectively to manage containerized workloads in production. Management should be on board with the benefits of adopting container technologies. Most importantly, all stakeholders should be trained to work in a distributed systems environment with containers. Google offers certified training for Kubernetes professionals, specifically for Google Cloud Platform.
Are you planning to migrate legacy applications into Kubernetes?
Whatever migration approach you choose for legacy applications, it is likely to be challenging. One common approach, for example, is to deploy an API gateway, then decompose the monolith into Kubernetes pods one feature set at a time, over an extended period. It’s an effective and manageable approach, but it’s not necessarily easy.
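One way to picture that incremental decomposition is a routing rule that carves a single feature set out of the monolith. The service names and path below are hypothetical, purely to illustrate the pattern:

```yaml
# Hypothetical Ingress for strangler-style migration: /billing traffic
# goes to the newly extracted service; everything else still hits the
# legacy monolith until the next feature set is carved out.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strangler-routing
spec:
  rules:
  - http:
      paths:
      - path: /billing
        pathType: Prefix
        backend:
          service:
            name: billing-svc       # newly extracted microservice
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: legacy-monolith   # remainder of the old application
            port:
              number: 80
```

As each feature set is extracted, another path rule moves from the monolith to a dedicated service.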
Are you planning, instead, to start with a fresh greenfield deployment?
The challenges of developing and deploying applications to a completely fresh Kubernetes environment are altogether different. You may be freer to take risks with what are essentially completely new technologies. At the same time, you’ll be encountering the pain points along with everybody else.
Will you choose a commercial version or a community distribution?
Kubernetes is available as a stock open source distribution, or as a managed commercial offering. Depending on your internal IT team’s skill set, you may either choose the stock open source version available on GitHub or purchase a commercial orchestrator based on Kubernetes from a vendor, such as CoreOS or Canonical, which offers professional services and support.
Are you ready to invest time and energy in building your own container images?
A container is based on a pre-configured image. Such an image typically includes the base operating system. Unless the contained application is a compiled binary, the image may also include the libraries and other dependencies upon which it relies. Your organization may wish to invest in a private registry, which stores both base images and custom images. Some commercial registries come with security features that scan images for vulnerabilities. Even so, a recent study examining images stored on the Docker Hub registry found that roughly four in five images contained at least one documented security vulnerability. So you may choose instead to compile your image components entirely from binary files, using libraries your organization knows and trusts to be safe. Alternately, you might consider an architectural approach suggested by CoreOS engineer Brian Harrington, called minimal containers — a more spartan approach to assembling containers.
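A custom image of this kind is typically described in a Dockerfile. The one below is a hypothetical sketch of the leaner practices discussed above: a slim, version-pinned base rather than `:latest`, explicit dependencies, and a non-root user:

```dockerfile
# Illustrative only: a small custom image built on a pinned slim base.
# Filenames (requirements.txt, app.py) are assumptions for this sketch.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
USER nobody
CMD ["python", "app.py"]
```

Whether you start from a slim base like this or compile everything from trusted binaries, the image is the artifact your registry stores, scans and distributes.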
Is your storage ready for high-performance workloads?
A Kubernetes cluster may be based on distributed file systems like NFS, Ceph and Gluster. These file systems may be configured on solid-state storage backends that deliver high throughput. Stateful applications running in Kubernetes can take advantage of these underlying storage primitives, which can make all the difference for running production workloads.
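From the application side, a stateful workload typically requests such storage through a PersistentVolumeClaim. The claim below is hypothetical; the class name must match whatever your cluster administrator has defined over the backend:

```yaml
# Hypothetical claim: a stateful workload asking for fast storage by
# class name rather than binding to a specific disk or file server.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd   # assumed class backed by SSD storage
  resources:
    requests:
      storage: 20Gi
```

The claim decouples the application from the storage implementation, so the backend can be NFS, Ceph, Gluster or something else entirely.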
What is your expected level of uptime?
Your customers’ service level agreement (SLA) requirements will have a major impact on every aspect of your Kubernetes deployment: how you configure your environment, how you configure each application, how much complexity you can withstand, how many simultaneous deployment pipelines you can support … and the list doesn’t stop there. All of these variables impact the total cost of your application. For a bit of context, Table 4.1 below explains the expected downtime for each “9” in your availability goal (from a book by Susan J. Fowler). It’s up to you to determine the amount of effort required for your organization to achieve each level.
**‘Nines of Availability’ Downtime Allowance**

| Availability | Per year | Per month | Per week | Per day |
|--------------|----------|-----------|----------|---------|
| 99.9% | 8.76 hours | 43.8 minutes | 10.1 minutes | 1.44 minutes |
| 99.99% | 52.56 minutes | 4.38 minutes | 1.01 minutes | 8.64 seconds |
| 99.999% | 5.26 minutes | 25.9 seconds | 6.05 seconds | 864.3 milliseconds |
Table 4.1: How downtime in “nines” translates into real time.
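The arithmetic behind Table 4.1 is straightforward and worth being able to reproduce for your own availability targets. The sketch below treats a month as one-twelfth of a 365-day year, so its results may round slightly differently than the published table:

```python
# Sketch: allowed downtime per period for a given availability target.
# A "month" here is 1/12 of a 365-day year, so rounding may differ
# slightly from published tables.
PERIODS_SECONDS = {
    "per year": 365 * 24 * 3600,
    "per month": 365 * 24 * 3600 / 12,
    "per week": 7 * 24 * 3600,
    "per day": 24 * 3600,
}

def downtime_allowance(availability_pct):
    """Return allowed downtime in seconds for each period."""
    unavailable = 1 - availability_pct / 100.0
    return {period: secs * unavailable
            for period, secs in PERIODS_SECONDS.items()}

for nines in (99.9, 99.99, 99.999):
    rounded = {p: round(s, 2) for p, s in downtime_allowance(nines).items()}
    print(nines, rounded)
```

For example, 99.9% availability leaves 0.1% of 86,400 seconds per day, or about 86.4 seconds (1.44 minutes), matching the first row of the table.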
Do you have, or plan to have, a release management system?
One of the key benefits of moving to Kubernetes is automating the deployment of applications. Deployments in Kubernetes support rolling updates, patching, canary deploys and A/B testing. To utilize these capabilities, you should have in place a well-configured build automation and release management platform. Jenkins is one example of a broadly deployed automation tool that integrates well with Kubernetes, by building container images from source code that can be pushed to both production and nonproduction environments.
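Rolling updates, for instance, are configured declaratively on the Deployment itself. The values below are hypothetical choices for a cautious rollout, not defaults to copy:

```yaml
# Hypothetical update strategy fragment for a Deployment: replace pods
# one at a time, never taking a serving pod away before its
# replacement is ready.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the rollout
      maxUnavailable: 0    # never drop below the declared replica count
```

A release management tool such as Jenkins then only needs to update the image tag in the manifest; Kubernetes handles the pod-by-pod replacement.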
Are you prepared for all the logs?
Kubernetes supports cluster-based logging, allowing workloads to send container activity to a centralized logging destination. After a cluster is created, a logging agent such as Fluentd can absorb events from the standard output and standard error channels of each container. Logs collected this way may be ingested into Elasticsearch and analyzed with Kibana.
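The shape of such a pipeline can be sketched in a Fluentd configuration. The paths and the Elasticsearch host below are assumptions for illustration; real deployments usually add Kubernetes metadata enrichment as well:

```
# Hypothetical Fluentd snippet: tail container logs written to the
# node's filesystem, then forward them to Elasticsearch for analysis
# in Kibana.
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.logging.svc   # assumed in-cluster service name
  port 9200
  logstash_format true
</match>
```

Running one such agent per node, typically as a DaemonSet, gives every workload centralized logging without any application changes.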
CoreOS and Google are sponsors of The New Stack.