
How Cloud Foundry, Red Hat OpenShift Ease Kubernetes Deployments

The most challenging experiences for Kubernetes, as with many other tools in the technology space, come from running production workloads.
Jun 24th, 2022 4:00am by Ram Iyengar
Feature image via Pixabay.

Ram Iyengar
Ram is a developer advocate for the Cloud Foundry Foundation. He's an engineer by practice and an educator at heart. He was pushed into technology evangelism along his journey as a developer and hasn’t looked back since.

Let us explore the taxonomy of Kubernetes clusters by examining them against the twin axes of utility and effort. This schematic is derived from observing the lifecycle of applications that run on Kubernetes and the progression that many teams go through.

The most challenging experiences for Kubernetes, as with many other tools in the technology space, come from running production workloads. A long tail of effort arises from maintaining them and keeping the lights on. The complexities span disparate technical and non-technical areas, especially for a tool with such a broad impact.

Types of Kubernetes Clusters

The first Kubernetes clusters that developers encounter are those created for prototyping applications.

In small organizations, a couple of driven engineers often pick up new tech and make the case for it. Larger organizations tend to create pilot teams that take responsibility for adopting new tech on behalf of the organization.

In either case, clusters that have marginal utility are created (and destroyed), largely to demonstrate the viability of projects. The effort surrounding the creation and maintenance of these clusters is designed to be low, and they rarely have a major impact on the organization's core applications.

Low vs. high effort set against low vs. high utility.

At the diametrically opposite end of the spectrum are the clusters meant for production use. These clusters run large-scale workloads and typically function with all the bells and whistles. That is, they contain the entire spectrum of tools required for development, deployment, monitoring, maintenance, and operations to function fully.

Where Kubernetes and business-critical apps fit into the picture.

Clusters in production have the longest span in the life cycle of Kubernetes-based operations. Operations teams have to expend the most effort to understand how their infrastructure works and how it integrates into the technical architecture of their full stack. Teams that cannot balance their need for deployment velocity against operational stability put their Kubernetes strategy at high risk.

An illustration of an iceberg showing Kubernetes trouble spots.

To render a Kubernetes cluster ready for use in production, several tools need to be attached to it. By itself, the container orchestrator accomplishes a lot. However, it is incomplete without specialized additions. This aspect of Kubernetes forms the basis for the whole Cloud Native Computing Foundation landscape. Several hundred companies and thousands of contributors come together as a lively community to make this a reality.

A short list of the capabilities most commonly required of Kubernetes clusters running production-grade workloads:

  • Access control
  • Logging & Monitoring
  • Ingress
  • Storage and Backups
  • Secrets

Without these capabilities, Kubernetes clusters cannot be considered ready for use in production. An examination of each component follows, along with an explanation about the specific role it plays in a Kubernetes cluster.
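To make that concrete, here is a minimal sketch of how three of these capabilities might be wired up by hand with kubectl alone; the namespace, user, hostname and credential values below are hypothetical placeholders.

    # Access control: give a developer read-only access to a single namespace
    kubectl create rolebinding dev-view --clusterrole=view --user=jane@example.com --namespace=demo

    # Secrets: keep database credentials out of the container image
    kubectl create secret generic db-credentials --from-literal=username=app --from-literal=password=changeme --namespace=demo

    # Ingress: route external traffic for a hostname to an in-cluster service
    kubectl create ingress web --rule="demo.example.com/*=web:80" --namespace=demo

Logging, monitoring, storage and backups typically need further components layered on top, which is precisely the operational surface a PaaS aims to absorb.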

The core purpose of this article is to examine whether a middle path exists: a means for platform operators to provision clusters so that developers get a batteries-included approach to creating infrastructure. Opinionated PaaS platforms offer speed and agility, but they can limit customization.

PaaS substrates and abstractions are low effort and high utility.

History of PaaS Tools

Numerous tools have cropped up over time to simplify the experience of deploying apps and maintaining them on remote instances. These have commonly taken the form of providing underlying operating systems and runtimes.

These were typically built on software-defined networking, storage, and other virtualized components. Among the earliest PaaS tools were Engine Yard and Google App Engine. The most popular PaaS tools have been Heroku, Red Hat OpenShift, and Cloud Foundry.

In due course, some of these PaaS tools evolved to incorporate containerization and now provide an abstraction over Kubernetes. In particular, the capabilities of OpenShift and Cloud Foundry stand out within the larger developer community.

These two tools have demonstrated the engineering wherewithal to evolve along with successive technology cycles: virtual machines, containers, and now Kubernetes.

The following sections provide relevant details about these two tools, largely to compare and contrast their approaches and give a clear picture of the value they add for various organizations. Features common to both platforms include Kubernetes integration, an open source codebase, enterprise-scale operations, and suitability for hybrid cloud deployments.

What Is OpenShift?

OpenShift is a PaaS developed by Red Hat. It shares roots with the Red Hat Enterprise Linux operating system and builds on the historic success of RHEL’s ubiquity, performance, scale, and security.

What Value Does OpenShift PaaS Add?

Primarily, OpenShift provides a consistent and reliable experience for developers during deployments.

It enables an elastic consumption experience on top of Kubernetes. With OpenShift installed over Kubernetes, application developers can self-serve when provisioning cloud infrastructure and consuming services, which allows them to take full advantage of public cloud architecture. It can also incorporate full-stack automation for deployment and other operations.
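As an illustration of that self-serve experience, the sketch below shows roughly what a developer-driven deployment can look like with OpenShift’s oc command-line client; the project name and Git repository URL are hypothetical, and the exact flow varies between OpenShift versions.

    # Create a personal project (namespace) without waiting on a ticket
    oc new-project demo-app

    # Build and deploy directly from a source repository (hypothetical URL)
    oc new-app https://github.com/example-org/demo-app.git

    # Expose the resulting service on an externally reachable route
    oc expose service/demo-app
    oc get routes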

What Is Cloud Foundry?

Cloud Foundry is a PaaS originally developed by a team at Pivotal. It was fully open sourced in 2015 and has since been owned and governed by the Cloud Foundry Foundation, with a large community of contributors and maintainers. Cloud Foundry has demonstrated the ability to function across cloud providers and with multiple languages and frameworks. With deployments at large scale, Cloud Foundry lives up to its claim of being the modern standard for deploying mission-critical apps at global organizations.

How Does Cloud Foundry PaaS Add Value?

For developers, Cloud Foundry makes it possible to go from code to running application with a single command, irrespective of whether the workflow is VM-, container- or Kubernetes-based. Different projects within the ecosystem provide the plumbing required for this, while the cf CLI is architected for convergence.
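A rough sketch of that single-command experience follows; the API endpoint, org, space and app name are placeholders.

    # Point the cf CLI at a Cloud Foundry installation and log in
    cf login -a https://api.cf.example.com -o my-org -s dev

    # From the application's source directory: stage, build and run it
    cf push my-app

    # Inspect the running application
    cf app my-app
    cf logs my-app --recent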

For operators, Cloud Foundry allows best practices to be defined and propagated up to app developers. Automation of various processes is also made easier by the use of simple triggers. The extensibility and interoperability of Cloud Foundry make it particularly useful for working with the diverse toolchains found in different organizations.

General PaaS Advantages

What’s interesting about both tools is that they help complete the Kubernetes puzzle for various teams. The capabilities that these tools provide can be grouped into two major categories: the “build time” and the “run time.”

Build time and run time.

PaaS tools are typically built to offer velocity and convenience. This involves creating workflows to accommodate two distinct areas of software development – namely the build time and the run time.

The build time is the phase in which an immutable artifact is prepared from source code. The run time is when the application operates in production. Between these two phases sits the “deploy” phase, which takes the artifact exported at the end of the build and pushes it to production.

Build, deploy and run.

In the Kubernetes world, the “build” phase translates to a containerization process. The process is a declarative one that listens for changes in source code and triggers a new artifact to be exported as a container image. This image is then pushed (or updated) to a container registry, from where it is deployed to the Kubernetes nodes.
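Done by hand, and with hypothetical image, registry and deployment names, that loop looks roughly like this:

    # Build: turn the current source tree into an immutable container image
    docker build -t registry.example.com/team/my-app:1.0.1 .

    # Publish: push the image to a registry the cluster can pull from
    docker push registry.example.com/team/my-app:1.0.1

    # Deploy: roll the new image out to the Kubernetes nodes
    kubectl set image deployment/my-app my-app=registry.example.com/team/my-app:1.0.1
    kubectl rollout status deployment/my-app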

PaaS tools simplify this by getting the keys to the kingdom, i.e., access to the source code. They then abstract the whole build process and, finally, deploy to a configured remote endpoint. That endpoint could be VMs running on a private cloud, an IaaS, or a Kubernetes cluster.

In conclusion, all the workflows required to get a team functional with Kubernetes are accommodated by modern PaaS tools such as OpenShift and Cloud Foundry. Using them removes a great deal of the complexity associated with Kubernetes. High-utility, low-effort Kubernetes clusters are a reality for those willing to try a PaaS abstraction over Kubernetes.

This article marks the release of Korifi, the cloud native transformation tool for Cloud Foundry workloads. You can get started with Korifi on GitHub.
