TriggerMesh sponsored this post.
Almost anyone who has been tasked with a new infrastructure project in the last few years has considered using cloud computing to solve their problems. The faster time to value and the variety of services that can quickly address virtually any need have made the initial decision an easy one. In the long term, however, well-known Andreessen Horowitz venture capitalist and former entrepreneur Martin Casado has observed circumstances in which organizations repatriated applications to their own data centers for economic benefit once a workload reached a certain size. In his recent blog post with Sarah Wang, “The Cost of Cloud, a Trillion Dollar Paradox,” Casado notes:
Now, there is a growing awareness of the long-term cost implications of cloud. As the cost of cloud starts to contribute significantly to the total cost of revenue (COR) or cost of goods sold (COGS), some companies have taken the dramatic step of “repatriating” the majority of workloads (as in the example of Dropbox) or in other cases adopting a hybrid approach (as with CrowdStrike and Zscaler). Those who have done this have reported significant cost savings: In 2017, Dropbox detailed in its S-1 a whopping $75M in cumulative savings over the two years prior to IPO due to their infrastructure optimization overhaul, the majority of which entailed repatriating workloads from public cloud.
Open Technologies for Cloud Independence
It’s likely that cloud adoption will continue to grow. However, it would behoove most organizations to keep their options open by adopting technologies that provide as easy a path from one cloud to another (or to multiple clouds) as they do to on-premises data centers. Here is a list of the open technologies you should consider.
1. Kubernetes
The first technology on the list is a no-brainer, and most cloud native users are familiar with it. Kubernetes, also known as K8s, is an open source system for automating deployment, scaling and management of containerized applications. Using Kubernetes as your deployment layer allows for portability from virtually any cloud to any other cloud, or to your own data center, without a rewrite of your applications. Not only does Kubernetes provide a way to orchestrate containerized applications, it can also host serverless infrastructure. Kubernetes is becoming the de facto fabric of cloud computing.
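To make the portability point concrete, a minimal Deployment manifest like the following sketch (the name, image and port are hypothetical) applies unchanged to any conformant cluster, whether a managed cloud service or an on-premises installation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # hypothetical application name
spec:
  replicas: 3                  # Kubernetes keeps three pods running, on any cluster
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0   # hypothetical container image
          ports:
            - containerPort: 8080
```

The same `kubectl apply -f deployment.yaml` works against EKS, GKE, AKS or a bare-metal cluster alike; the manifest itself contains nothing provider-specific.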
2. Terraform
Terraform from HashiCorp is an open source infrastructure-as-code tool that provides a consistent CLI workflow for managing hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files, written in HCL, the easily read HashiCorp Configuration Language. It is a cloud-independent deployment tool that works with multiple cloud providers; for example, AWS supports Terraform even though it has its own tool, AWS CloudFormation.
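As an illustration of that declarative workflow, here is a minimal, hypothetical HCL sketch; the provider, region and resource names are examples only, but the same init/plan/apply workflow applies regardless of provider:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Declare the desired state; `terraform plan` previews the changes
# and `terraform apply` reconciles real infrastructure to match.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket"   # hypothetical bucket name
}
```

Moving to another provider means swapping resource types and provider blocks, not learning a new language or workflow, which is what makes Terraform a practical hedge against lock-in at the tooling layer.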
3. Knative
Knative is a Kubernetes-based platform for deploying and managing modern serverless workloads. Knative is best known for providing serverless function-as-a-service capabilities similar to AWS Lambda. Another very powerful capability is the Knative eventing system, designed to address a common need in cloud native development by providing composable primitives that enable late binding of event sources and event consumers. This means events can carry changes in state from system to system. At TriggerMesh, we use Knative to provide an event-driven cloud native integration platform to automate workflows. We felt this was the logical way for us to build integrations for Kubernetes and provide a future-proof integration platform for cloud and on-premises applications. Knative Eventing is approaching its 1.0 release and is reaching the maturity that production users would expect.
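The late-binding idea can be sketched with a Knative Eventing Trigger, which subscribes a consumer service to matching events on a broker. All names here, including the CloudEvents type and the subscriber service, are hypothetical:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created-trigger          # hypothetical trigger name
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created  # hypothetical CloudEvents event type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor            # hypothetical consumer service
```

Because the producer only emits events to the broker and the consumer is bound by the Trigger at deployment time, either side can be replaced without the other knowing.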
4. Tekton
Tekton is a powerful and flexible open source framework for creating CI/CD systems, allowing developers to build, test and deploy across cloud providers and on-premises systems. Tekton lets developers automate cloud native deployment pipelines for all languages and frameworks. Because Tekton is open source, engineering teams can customize the tool and integrate it with other tools. Tekton is a project of the Continuous Delivery Foundation.
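As a sketch of Tekton’s building blocks, a minimal Task (the name and build image are hypothetical) runs one containerized step; Tasks are then composed into Pipelines:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-tests              # hypothetical task name
spec:
  steps:
    - name: test
      image: golang:1.21       # hypothetical build image; any container works
      script: |
        go test ./...
```

Because each step is just a container running on Kubernetes, the same pipeline definition runs wherever your cluster does.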
5. Apache Kafka
Apache Kafka is an open source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration and other mission-critical applications. As our reliance on the real-time availability of data grows, it’s a tool that gets your data where it needs to be, even faster. Apache Kafka originated at LinkedIn as a scalable message queue, but it has since grown into much more than that. It is a powerful tool for working with data streams, and it shines as your event-streaming needs grow to a greater scale. If your message volume is low, you are perfectly well served by the event-streaming services of your existing cloud provider (Kinesis in AWS, Pub/Sub in Google Cloud, Event Grid in Azure). However, if you want one streaming platform across multiple clouds, Kafka is a good solution.
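One way to keep Kafka itself cloud-portable is to run it on Kubernetes with an operator such as Strimzi (an assumption on our part, not the only option). As a sketch, a topic can then be declared as a Kubernetes custom resource; the topic and cluster names are hypothetical:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: orders                     # hypothetical topic name
  labels:
    strimzi.io/cluster: my-cluster # hypothetical Strimzi-managed Kafka cluster
spec:
  partitions: 6                    # parallelism for consumers
  replicas: 3                      # copies of each partition for durability
```

Declaring topics this way keeps the streaming layer in the same portable, declarative toolchain as the rest of your Kubernetes workloads.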
Multicloud, Hybrid Cloud, Private Cloud — It Doesn’t Matter
As with any technology, vendor dependence is a risk worth weighing. In segments that are likely to be commoditized — container hosting, serverless, storage, and other services that offer little differentiation — a strategy that allows for migration between providers is good practice. Accomplishing this means selecting tooling that supports multiple clouds and on-premises hosting.
That’s not to say some tools aren’t worth the vendor dependence. Solutions like Salesforce for CRM, or Snowflake for cloud-based data warehousing, provide compelling value that may justify the risk. However, make sure that choosing a solution that provides sustainable improvements in productivity doesn’t introduce unintended dependencies on other, less valuable technologies.
Open source and tools that fall under an open model (albeit not always open source by definition) are a pervasive and important part of the cloud native landscape. While not all tools are open source, and some are not even software (Splunk, for example, is a useful service), there are a very large number of high-quality tools that provide opportunities for a successful cloud native strategy. It would behoove many organizations to consider these open technologies as they embark on their cloud native journey.