
How Integrations Help Enterprises Level Up with Kubernetes

3 Jan 2022 5:00am

A cloud native system that runs on Kubernetes can give an organization a lot of choices. And wow, what an understatement.

A glimpse at the Cloud Native Computing Foundation landscape, with its more than 1,000 tools and services, can induce vertigo. So many options! The release of Kubernetes in 2014 led to a whole new industry of projects and products designed to make a cloud-based system more efficient, observable, secure, and user-friendly.

Today, infrastructure is composable: Rather than putting together single-stack solutions (say, by using application servers like Oracle’s WebLogic or IBM’s WebSphere), more developers and architects are assembling best-in-class options from across the CNCF landscape — and heavily using open source software.

The benefits of cloud native are obvious for enterprises and their IT teams: It’s faster and more scalable. The right tools and applications can speed up productivity, enhance observability, and keep things secure.

But increasingly, teams deploy to a mix of public clouds, virtual machines, bare metal, on-premises data centers and edge computing resources.

“We’re seeing a lot of enterprises saying, ‘I have Azure, I have Google and Amazon, and I have my data centers. I need my architecture and applications to be everywhere because of the nature of my business,’” said Rohit Bakhshi, who works in product management at Confluent.

The variety of tools and platforms used in a Kubernetes-managed system can make it harder to get everything to play nicely together.

“Integration is difficult, especially in distributed systems,” said Sherwood Zern, product manager of product strategy at Oracle. He has worked with enterprise customers around the world and can attest that the pain points are universal.

“I was in Malaysia one time, and I was talking to this CIO and he asked, ‘Why does this have to be so difficult?’” Zern recalled. “I just responded back to him, ‘Well, it’s difficult because you’re dealing with distributed systems, which means there are multiple areas for failure.’”

Kubernetes systems that run on multiple clouds introduce greater complexity. “While Kubernetes itself is a de facto standard, the implementations of Kubernetes may differ from platform to platform,” said Stephen O’Grady, principal analyst and co-founder of RedMonk.

How Containerized Integrations Help

Containerized microservice applications are the building blocks of cloud native software, and declarative programming — in which programs list their required results without explicitly listing the steps to get those outcomes — is used to create those apps. Kubernetes itself is declarative, and it abstracts and handles configurations for the apps it runs.
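To make the declarative idea concrete, here is a minimal Kubernetes manifest (an illustrative sketch; the app name and image are made up for this example). It states only the desired end state — three running replicas — and Kubernetes works out and maintains the steps to get there:

```yaml
# Declarative: describe *what* should exist, not *how* to create it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # desired state; Kubernetes reconciles toward it
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

If a pod dies or a node disappears, the cluster converges back to three replicas without any imperative scripting by the developer.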

“People have embraced a declarative mindset using the Kubernetes API. Those practices apply to traditional applications, but also to integrations,” said Sebastien Goasguen, co-founder and head of product at TriggerMesh, which focuses on application integration.

Declarative code frees developers to spend their time on meeting the business use case for their applications, said Zern. “As a developer, I can focus on my application and not have to worry about, OK, now I need a test environment. Now I need to go build out my Kubernetes cluster. What are all my networking rules? What hardware do I use?”

Some big challenges that enterprise teams grapple with in a Kubernetes-run system, Goasguen said, are how to modernize their applications — and how to modernize the integrations between services in a multi/hybrid cloud environment.

The major cloud providers — Amazon, Google, Microsoft — provide integrations for Kubernetes-run applications. But they don’t always work with their competitors’ cloud platforms.

Event-driven architecture, in which events (for example, changes in state) are used to trigger and communicate between decoupled services, can help integrate applications.

Such architecture “gives you a persistence and a history of events. It models everything as an event, and allows you to replay and model your architectural events,” said Bakhshi, of Confluent.

Events enable a DevOps team to pick best-of-breed applications and tie them together in an event-driven architecture. CloudEvents, an open source project, is a specification for describing event data in a common way. With seamless integrations, cloud services become, in essence, the libraries of cloud native apps.
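A CloudEvents message is just a small, standard envelope around whatever payload a service emits. The example below is an illustrative sketch (the event type, source, and payload are invented); the required attributes — `specversion`, `id`, `source`, `type` — come from the CloudEvents 1.0 specification:

```json
{
  "specversion": "1.0",
  "type": "com.example.order.created",
  "source": "/carts/checkout-service",
  "id": "9d7c3a1e-0b42-4b8f-9f6a-2f4d1c8e5a10",
  "time": "2022-01-03T05:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "orderId": 1234,
    "total": 49.95
  }
}
```

Because every producer wraps its events the same way, any consumer — on any cloud or on-premises — can route, filter, and replay them without provider-specific parsing.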

Open Source as Your Default for Tooling

Let’s return now to the CNCF landscape. All those choices offer not only the opportunity to select the best tools for each use case but also the danger of picking a short-lived flash in the pan.

The products that crowd the cloud native market, Zern reminded us, “are competing, and eventually, there are going to be some that fall off to the wayside. And I do want to make sure that I try to pick the one that’s still standing in the end. How do I go about doing that?”

One way to guard against picking a short-lived solution is to turn to open source as your default for tooling. Open source projects, given their development and maintenance by a large base of contributors, may be more likely to survive than those created by a small company, no matter how much venture capital money that startup has attracted.

Infrastructure-as-code tools — declarative code that lets DevOps teams define, deploy and manage their cloud native infrastructure — include such popular open source projects as Ansible, Chef, Puppet and Terraform.

Being open source gives these tools the advantages of flexibility, the freedom for outside developers to inspect the code, and freedom from vendor lock-in. They are also more automated than legacy configuration tools tend to be.

If built with declarative code, a tool can make it easier for a developer to make changes without deep knowledge of the underlying infrastructure — a benefit for an enterprise that wants to roll out changes across many servers simultaneously. As code, those definitions can also be kept in version control and rolled back to a previous version if an error occurs.
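A short Terraform sketch illustrates both points (the provider, region and bucket name here are invented for the example, not taken from the article). The file declares what should exist; `terraform apply` computes and executes the steps, and because the file lives in git, a bad change can be reverted like any other commit:

```hcl
# Declarative infrastructure: the desired resources, not the provisioning steps.
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "logs" {
  bucket = "example-app-logs"
}
```

Rolling back is an ordinary `git revert` followed by another `terraform apply` — the tool diffs the declared state against reality and undoes only what changed.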

What Is Integration as Code?

Integration as code — the concept behind TriggerMesh’s newly open-sourced API and integrations — aims to make it easier and faster to connect data and cloud native applications across multiple clouds and on-premises data centers.

It does this by ingesting events from an application through its API — detecting a change of state, for instance — and if needed, transforming that event so it can integrate with public clouds, on-premises servers, or both.
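The transformation step can be sketched in a few lines of Python. This is an illustrative example of the general pattern — wrapping a provider-specific payload in a uniform CloudEvents-style envelope — not TriggerMesh’s actual API; the function name, event type and payload are made up:

```python
import uuid
from datetime import datetime, timezone

def to_cloudevent(raw: dict, event_type: str, source: str) -> dict:
    """Wrap a provider-specific payload in a CloudEvents-style envelope.

    The raw event's data is preserved under "data", while the envelope
    gives every downstream target the same set of routing attributes.
    """
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "type": event_type,
        "source": source,
        "time": datetime.now(timezone.utc).isoformat(),
        "data": raw,
    }

# A state change detected in one system...
raw_event = {"bucket": "invoices", "key": "2022/01/03/inv-0042.pdf"}

# ...becomes a uniformly shaped event that any cloud or on-premises
# consumer can route on, regardless of where it originated.
event = to_cloudevent(
    raw_event, "com.example.storage.object.created", "/storage/uploads"
)
```

Once events share this shape, the “if needed” transformation reduces to rewriting the `data` payload; the envelope that integrations route on stays the same everywhere.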

For enterprises, TriggerMesh and its capabilities as a containerized integration layer can magnify the advantages of Kubernetes in a number of ways, according to Goasguen:

Logs and metrics collection.

For instance, Goasguen suggested, backing up all of your Salesforce or git commit events in Elasticsearch — in order to create a data lake and do business analytics — would be a typical use case for TriggerMesh.

Creating event-driven applications.

PNC Bank uses TriggerMesh, Goasguen said, in conjunction with its DevOps governance practice, an approach that is becoming more popular among financial institutions.

“They receive events in real-time and then they want to be able to trigger risk-control evaluation on-demand,” he said. “So their risk controls are running as serverless functions. And then the actual event triggering an event flow is codified with the TriggerMesh API.”

Making traditional workflows more efficient.

Being able to write cloud native applications that integrate seamlessly with any environment in which they’re deployed helps enterprises gain the most advantage from their Kubernetes-run architecture.

TriggerMesh’s move to make its flagship product open source was undertaken in part to help speed up adoption of the integration-as-code tool, said Goasguen.

The move makes sense as a way to help enterprise developers dig in, keep improving the tool and find new uses for it, Confluent’s Bakhshi suggested.

If, Bakhshi said, “as a developer, I’m building something new, I’m going to do it against open protocol and open API. Because I know that I can always evolve. I can start by deploying it somewhere locally, I can take it and deploy it on a cloud server that speaks to the same application API, the same protocol, and that’s not gonna change.

“So that’s super important. It’s a way to get more upstream developers to come in when things are open source. They can pick it up, it’s easy to deploy, and they can play around with the code.”

Featured image by Markus Winkler via Unsplash.