
The Cloud-Native Architecture: One Stack, Many Options

Sep 20th, 2017 9:00am by Gou Rao
Feature image via Pixabay.

Gou Rao, Portworx
Gou Rao, Portworx co-founder and CTO, was previously CTO of Dell’s Data Protection division and of Citrix Systems’ ASG; co-founder and CTO of Ocarina Networks and of Net6; and a key architect at Intel and Lockheed Martin. He holds computer science bachelor’s (Bangalore University) and master’s (University of Pennsylvania) degrees.

As the chief technology officer of a company that specializes in cloud-native storage, I have a firsthand view of the massive transformation happening right now in enterprise IT. In short, two things are happening in parallel right now that make it radically simpler to build, deploy and run sophisticated applications.

The first is the move to the cloud. This topic has been discussed so much that I won’t try to add anything new. We all know it’s happening, and we all know that its impact is huge.

The second is the move to cloud-native architectures. Since this is a relatively new development, I want to focus on it — and specifically on the importance of pluggable cloud-native architectures — in today’s post. But before diving into how to architect for cloud native, let’s define it.

What does Cloud Native mean?

Pinning down an exact definition of Cloud Native is difficult, but most people would agree with this short list of attributes:

  • The architecture is microservice-based
    • Loosely coupled systems are easier to build, deploy and update
  • It is automated
    • CI/CD, APIs, automated configuration management — everything is automated.
  • DevOps drives it
    • The people who build an application also run that application. No more throwing applications over the wall.

The combination of microservices, automation and a DevOps culture leads to a radical improvement in two areas: the agility of software teams and the resilience of applications.


The big don’t eat the small anymore. The fast eat the slow. Software-powered innovation is transforming every major industry. If a team can speed up the build-test-deploy cycle faster than its competitors, it can capture a larger share of the market because it can respond better to changing conditions.


Cloud-native applications improve IT team agility by breaking an application into multiple smaller parts that can be independently built, automatically tested and deployed by small teams, so that a change to one part does not affect any other part of the application.

These cloud-native applications stand in contrast to what are often called “monolithic” applications. Often, in a monolithic application, improvement to one part of the code requires changes to another. This tight coupling of features into a single codebase leads to infrequent and high-risk software releases. Enterprises that release new versions of software only quarterly or annually open themselves up to disruption by more nimble competitors.

On the other hand, cloud-native applications improve agility by putting a premium on automation. Automating a task means that it can be done faster and more frequently, without increasing the risk of human error. Automation also frees teams to tackle the remaining tasks that are still done manually and are error-prone.
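A build-test-deploy pipeline definition is a typical expression of this automation. The sketch below uses a GitLab-CI-style YAML layout purely for illustration; the image name, deployment name and commands are hypothetical, not from the article:

```yaml
# CI pipeline sketch: every commit triggers build, test and deploy
# automatically, removing manual, error-prone steps from the cycle.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script: docker build -t example/web:$CI_COMMIT_SHA .   # hypothetical image name

test:
  stage: test
  script: go test ./...

deploy:
  stage: deploy
  script: kubectl set image deployment/web web=example/web:$CI_COMMIT_SHA
```

Because the pipeline runs on every commit, releasing more often does not mean more manual work or more risk.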


Container-based microservices are faster to build, test and deploy, but are they of higher quality? The evidence from enterprises would suggest yes.

This is because microservices are “loosely coupled”; a failure in one part of the system is less likely to affect another. For example, if an online banking service built using microservices is having a problem with its ‘transfer funds’ function, a user can still check account balances or pay bills online because each of the individual features is its own microservice, complete with its own database. While the user might experience a degraded experience, the service is still useful as a composition of the functioning parts.

Contrast this cloud-native resilience with a monolithic banking application; if a user is unable to access a single Oracle database, he or she is also unable to check account balances, transfer funds or pay a bill.
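The degraded-but-useful behavior described above can be sketched in a few lines of Go. The service names here are hypothetical stand-ins for independent microservice calls; the point is that each call is made separately, so one failure does not take down the whole page:

```go
package main

import (
	"errors"
	"fmt"
)

// result holds the outcome of one independent microservice call.
type result struct {
	name string
	val  string
	err  error
}

// callService stands in for an HTTP call to an independent
// microservice (service names are hypothetical, for illustration).
func callService(name string) result {
	if name == "transfer-funds" {
		// Simulate the outage described in the text.
		return result{name: name, err: errors.New("service unavailable")}
	}
	return result{name: name, val: "ok"}
}

func main() {
	services := []string{"account-balance", "transfer-funds", "bill-pay"}
	for _, s := range services {
		r := callService(s)
		if r.err != nil {
			// Degrade gracefully: only this feature shows an error.
			fmt.Printf("%s: temporarily unavailable\n", r.name)
			continue
		}
		fmt.Printf("%s: %s\n", r.name, r.val)
	}
}
```

In a monolith, the equivalent of that `transfer-funds` failure would be a shared-database outage that breaks every feature at once.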


Cloud Native Means Container Native

So, cloud native is awesome: I get it, you say.  But how do you do it?  The answer, increasingly, is with containers.  Containers have some very compelling benefits for cloud-native architectures:

  • Fast — Containers start up much faster than their VM-based brethren because multiple containers on the same host share an OS.
  • Lightweight — Because containers are so lightweight, you can get more of them on a single host than you can VMs, and Linux does a good job of providing resource isolation.
  • Consistent — Because a container is packaged with its dependencies, it is easier to run a containerized application consistently in different environments.
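The consistency point is visible in even a minimal Dockerfile: the image declares its base layer and carries the application with its dependencies, so the same artifact runs identically on a laptop and in production. The base image and binary name below are illustrative, not from the article:

```dockerfile
# Minimal image sketch: the application and its dependencies travel together.
FROM alpine:3.6

# Copy in a statically compiled binary (hypothetical name).
COPY ./myservice /usr/local/bin/myservice

# Containers share the host kernel, so startup is just a process launch.
ENTRYPOINT ["/usr/local/bin/myservice"]
```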

In fact, containers are so compelling that it is hard to imagine cloud-native architectures that are not based on containers. What about Netflix, you say? Even it is moving to containers.

So, the question becomes, what does a cloud-native architecture look like?

Bring Your Own Building Blocks

Thanks to the Cloud Native Computing Foundation (CNCF), there is an emerging consensus that a cloud-native architecture includes a few layers, all of which are pluggable and based on the best tool for the job.

The following CNCF chart provides a simplified view of such an architecture:

At the top is a scheduler, in this case, Kubernetes, but we could also include DC/OS or Swarm if we wanted.
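Concretely, the unit the scheduler works with in Kubernetes is a Deployment manifest like the sketch below; the image name and labels are hypothetical, and the API version shown is the one current as of this writing:

```yaml
apiVersion: apps/v1beta1   # Deployments API group current as of 2017
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # the scheduler keeps three copies running
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

The manifest declares the desired state; the scheduler decides where the containers actually run.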

Underneath the scheduler is the container execution runtime, which today is based on the Open Container Initiative (OCI) spec, of which runC is the most popular option. But again, because this is a pluggable architecture, you can use any OCI-compliant runtime without having to dramatically re-architect your applications.

Two other important standards are underneath the OCI: the Container Network Interface (CNI) and the Container Storage Interface (CSI).

CNI allows different tools to provide overlay networks to multi-host container deployments.  A user can use Weave, Contiv or Docker Network to provide networking services and swap them out as requirements dictate. Because all these services conform to CNI, the cost of switching is low, and users can try multiple solutions to find the best fit.
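Concretely, a CNI plugin is selected with a small JSON configuration file, so swapping networking providers means swapping this file rather than re-architecting the application. The sketch below uses the reference `bridge` plugin; the network name and subnet are illustrative:

```json
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

Replacing `"type": "bridge"` with another CNI-compliant plugin changes the networking provider without touching the workloads.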

CSI functions the same way, but for the cloud-native application data layer, often called the persistence layer.

My company Portworx is heavily involved in the CSI project, and we believe that, like the other layers of the stack, CSI gives users choice with respect to which tools they run. CSI makes it easy to adopt GlusterFS for file-based workloads alongside Portworx, RexRay or StorageOS for database workloads. The key is that a user is never locked into a single provider. Because any number of service providers can plug into CSI, a user who starts with one solution can move to another.
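In Kubernetes terms, this pluggability surfaces as a StorageClass: the application's claim for storage stays the same while the provisioner behind it can change. The provisioner name below is a hypothetical placeholder, not a real driver:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-db
provisioner: example.com/block-driver   # hypothetical storage driver name
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-data
spec:
  storageClassName: fast-db   # swap the class, keep the claim
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```

Moving from one storage provider to another means pointing the StorageClass at a different driver; the claims that applications make do not change.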

At Portworx, for example, we think we are a great solution for Kubernetes persistent storage, but we don’t believe our users should be locked into an architecture that forces them to use Portworx if it is not the right tool for the job. We’re proud to be a part of the community pushing forward on CSI and other projects to make cloud-native architectures one stack with many options.

TNS owner Insight Partners is an investor in: Docker.