Oracle sponsored this post.
More than a decade after the launch of the first big cloud services, we’re now in a golden age of development services and tools. DevOps practices and cloud deployments are now the foundation of new applications and services. And these tools aren’t just for startups or cloud companies with massive budgets.
But not every development team has made the transition to cloud, or to using open source cloud native technologies. We’re at a point where we have to make cloud native methodologies and patterns open to everyone, and to every project. That means bringing more services, tools and best practices to the large number of enterprise developers who have on-premises deployments, leverage battle-tested platforms like Java or WebLogic, and have critical applications that rely on these technologies to run their business. They need a roadmap to bring both their teams and their related core business apps to the cloud and to cloud native technologies like Kubernetes.
The biggest issues facing organizations shifting to the cloud revolve around people, process, and technology, with accumulated technical debt adding drag at every step. It’s not easy to retrain and transition larger development teams to a DevOps culture, or to take existing applications and rewrite them for the cloud using new open source technologies. And while lift-and-shift moves to cloud-hosted infrastructure are often a good first step, the economics of running applications as-is in the cloud can mask the benefits that come from deeper adoption of cloud technologies. Yet organizations can’t leave those foundational applications behind.
Additional issues arise from the need to support multiple clouds, serve additional geographies, comply with regional data protection rules, or even arbitrage costs by sending requests to the cheapest service.
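Cost arbitrage of this kind can start as a simple per-request routing rule. A minimal sketch, with entirely hypothetical provider names and prices:

```python
# Illustrative only: route each request to the cheapest provider,
# given a hypothetical price table (dollars per million requests).
PRICES = {"cloud-a": 0.50, "cloud-b": 0.35, "cloud-c": 0.40}

def cheapest_provider(prices):
    """Return the provider with the lowest current price."""
    return min(prices, key=prices.get)

def route(request, prices=PRICES):
    # In practice, prices would be refreshed from each provider's
    # billing APIs; here they are static for the sake of the sketch.
    return {"provider": cheapest_provider(prices), "payload": request}

print(route({"q": "example"}))  # → {'provider': 'cloud-b', 'payload': {'q': 'example'}}
```

A real router would also weigh egress charges and latency, which is part of why multicloud cost management is harder than it first appears.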
The Three C’s: Culture, Code and Cloud
Cloud native adoption challenges break down into three parts, said Bob Quillin, vice president of developer relations at Oracle Cloud Infrastructure: culture, code, and cloud. Together, these parts determine the potential depth and breadth of cloud native adoption across an organization for new and existing applications.
- In an August 2018 survey by the Cloud Native Computing Foundation (CNCF), “cultural changes with development team” was cited as the number one challenge in using and deploying containers. The culture needed to deliver modern applications, still evolving from early DevOps practices, has left many enterprise developers and teams behind. New roles like site reliability engineer (SRE), and new tools that extend the DevOps workflow across the complete application lifecycle, security, and product management, further stress the need for better training and tools designed for enterprises, not unicorns. Cultural change isn’t easy, but the result is a way of working that improves everything from agility to reliability.
- Much of the open source code that drives these changes originated with the big hyperscale cloud platforms. Google’s Borg gave birth to Kubernetes, adopted so enthusiastically by the major public clouds and the global open source community that over half of code commits now come from outside Google. As the platform grows, it’s adding tools to manage deployments and to control and monitor distributed applications. Working through the CNCF, the same code runs on everyone’s clouds, from edge systems on Raspberry Pis, to on-premises deployments with OpenStack and Oracle Linux, to the big public clouds’ managed Kubernetes instances. There’s a wealth of options; in fact, there are so many that it can be confusing to choose between them and virtually impossible to manage and administer yourself.
- The cloud itself is the final element, as organizations take advantage of the compute and storage power available in the public cloud’s massive data centers to offer both resources and a new set of coding abstractions. Where the first cloud deployments were identical servers, now open hardware projects like the Open Compute Project have led to an explosion of hardware designs mixing custom CPUs, GPUs, and dedicated hardware for machine learning, along with FPGAs to mix hardware and software. It’s infrastructure that lets us work at scale and at speed, supporting new technologies like big data, IoT, and machine learning, but that richness also brings more complexity.
Cloud Native Creates Results
Getting these three factors right delivers game-changing results, as the TravelTime platform shows. A geographic search tool that uses time rather than distance as its boundary, TravelTime maps places by travel time, showing commute ranges and reachable local services.
“We provide tools for large-scale consumer searches by location, answering questions like ‘what can I do within one hour?’ That meant a lot of complex data processing, of public transport, of walking, of driving,” Charlie Davies, co-founder of iGeolise, the company behind the TravelTime platform, said.
Initially, the platform supported only the UK, and it used to take more than a week to process each update to the data needed for their service. With global expansion, the amount of data needed to support the application exploded.
“We were initially using our own servers,” Davies said, “but working with the Oracle Global Startup Ecosystem program we were introduced to Kubernetes, and began to build microservices in containers. That let us parallelize operations, and what used to take a week now took less than eight hours.” That gave them a lot more control; “We could use Kubernetes to spin containers up and down as they were needed.”
Shifting to Oracle Container Engine for Kubernetes changed the way TravelTime worked. “We removed the overhead problem,” Davies said, “with more time to spend on what makes our system great.” By not having to spend time thinking about the underlying infrastructure, he was able to fundamentally change how the company worked, from the investment strategy to how and where the company could expand. “You don’t think about the servers, you just get to be the best at doing software.”
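The chunk-per-container fan-out Davies describes follows a common map-style pattern: each region’s data is independent, so the regions can be processed in parallel. A minimal local sketch in Python (the region names and the processing step are hypothetical stand-ins, with worker processes playing the role of containers):

```python
from concurrent.futures import ProcessPoolExecutor

def process_region(region):
    # Stand-in for the real work: ingesting public transport, walking
    # and driving data for one region. Each region is independent,
    # which is what makes the parallel fan-out possible.
    return f"{region}: processed"

def process_all(regions):
    # Kubernetes-style fan-out, approximated with local processes;
    # in a container setup, each chunk would run in its own pod,
    # spun up for the job and torn down afterwards.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(process_region, regions))

if __name__ == "__main__":
    print(process_all(["uk", "fr", "de", "jp"]))
```

The speedup comes from the same place in both versions: once the work is split into independent chunks, adding workers (or containers) shortens wall-clock time almost linearly.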
A Framework to Help Organizations Go Cloud Native
As cloud native development matures, new options are becoming available. Design patterns like operators, sidecars, and adapters allow existing monolithic applications to be integrated into distributed systems, and at the same time help deconstruct and reconstruct old code. These approaches are still new, but they’re starting to help with cloud migrations.
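In Kubernetes these patterns are applied at the pod level — an adapter runs as a companion container that translates a legacy component’s interface for the rest of the system. The underlying idea is classic adapter-style integration, which a rough code-level analogy can illustrate (all class and field names here are hypothetical):

```python
class LegacyInventory:
    # Stand-in for a monolith's existing interface, left untouched.
    def stock_count(self, sku):
        return {"SKU-1": 40}.get(sku, 0)

class InventoryAdapter:
    # Adapter: exposes the request/response shape a new microservice
    # expects, translating each call onto the legacy component.
    def __init__(self, legacy):
        self._legacy = legacy

    def handle(self, request):
        count = self._legacy.stock_count(request["sku"])
        return {"sku": request["sku"], "in_stock": count > 0, "count": count}

adapter = InventoryAdapter(LegacyInventory())
print(adapter.handle({"sku": "SKU-1"}))  # → {'sku': 'SKU-1', 'in_stock': True, 'count': 40}
```

The appeal for migrations is that the legacy code is wrapped, not rewritten: new services see a modern interface while the monolith keeps running as-is.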
Also just emerging are the tools and services to help with this transition. That goes hand in hand with building new inclusive cultures, expanding on the work of the DevOps movement to help teams understand how to build and operate distributed systems in the emerging multi-cloud world. Competition in the cloud means that compute and storage keep getting cheaper, but there’s still plenty of concern about lock-in. It’s all very well building on one platform, but what if your code can’t migrate or your data is subject to hefty egress charges?
The Oracle Cloud Native Framework is one answer to this question, and it’s what iGeolise is using to build TravelTime. It’s a consistent set of open source-based managed services and software that can run in the cloud, on premises, or in hybrid configurations, built on a Kubernetes foundation. It consists of a rich set of new OCI cloud native services and the recently announced Oracle Linux Cloud Native Environment. It also includes additional management tooling, such as Resource Manager, which leverages the familiar Terraform provisioning tools, along with monitoring services built on other CNCF projects. Oracle Functions adds the open source Fn serverless platform, making it suitable for IoT and a wide range of event-driven projects.
All the elements for the application lifecycle of a modern cloud solution are there, from orchestration and provisioning, to application definition and development, as well as observability and analysis.
The whole solution is delivered as containers, running anywhere there is a certified Kubernetes, eliminating vendor lock-in. And with a pay-per-use billing model, Oracle offers enterprises a way to take advantage of the new open source, cloud native world through managed services: teams no longer have to manage their own resources, but still keep a familiar support structure.
The result is a complete solution, with a range of parts to pick and choose from, that’s open source and cloud agnostic. It can support your code on any cloud — or even work across multiple clouds, either for hybrid solutions or for multicloud operations.
In the next post, this series will dive deeper into the future of DevOps and how enterprise developer teams are addressing cultural challenges today.
The Cloud Native Computing Foundation is a sponsor of The New Stack.
Feature image via Pixabay.