
7 Ways DevOps Can Overcome Scalability Challenges Using Automated Orchestration

28 Apr 2021 12:00pm, by Andreas Grabner
Andreas is a DevOps Activist at Dynatrace. He has over 20 years of experience as a software developer, tester and architect, and is an advocate for high-performing cloud operations. As a champion of DevOps initiatives, Andreas is dedicated to helping developers, testers and operations teams become more efficient in their jobs with Dynatrace’s software intelligence platform.

Software delivery is trending toward a self-service platform model that applies DevOps principles at every stage of the software delivery pipeline. According to the Puppet 2020 State of DevOps Report, this model spans private and public cloud infrastructure and development environments, and encompasses monitoring, alerting, audit logging, and continuous, progressive delivery. This self-service platform approach favors DevOps practices and containerized, Kubernetes-based architectures, and is designed to help DevOps teams develop and release high-quality, secure software more efficiently, enabling them to drive new innovation and business value for their organizations.

And yet what I’m hearing from developers, engineers and operations teams is so often the opposite. Many teams are struggling to release better software, accelerate the pace of innovation, and scale continuous delivery practices across their organizations. The trend lines of where DevOps should be heading often do not line up with the daily reality that so many DevOps teams experience.

What’s Causing This Disconnect for DevOps?

Both the Puppet State of DevOps Report cited above and DevOps user surveys conducted by my own company reveal some of the major challenges that teams face with DevOps adoption and scalability:

  • Each failed deployment requires approximately five hours of “heroics” to fix and six retries to get it right.
  • About 95% of pipeline engineering time is spent maintaining complex pipelines.
  • Approximately 80% of pipeline lead time is spent on manual quality validation.
  • Around 90% of troubleshooting time is spent on manual production remediation.

Why is this happening? Why are deployments eating up so much time and resources? Why have so many organizations been unable to unlock the true potential of DevOps to make better and more secure software faster?

To answer these questions, I’ve identified some key factors that are stifling DevOps adoption and scalability across an organization. I’ve also outlined how these factors have become a drag on DevOps resources and productivity, and how organizations can overcome these challenges through data-driven delivery and operations orchestration.

The 7 Bottlenecks Slowing DevOps Adoption and Scalability

  1. Lack of multicloud observability: Limited access and visibility into hybrid-cloud and multicloud environments obscures the true status of DevOps adoption. The less observability you have into your environment, the harder it is to mature and automate DevOps practices within the organization. As a result, the success stories of certain “lighthouse projects” paint a rosier picture than is accurate.
  2. Reliance on legacy tools: Microservices have different needs than monolithic applications, yet many teams continue using the same legacy tools for delivery.
  3. Reliance on legacy processes: Similarly, not all microservices are equal, but organizations often apply sequential, waterfall-type development processes across the board, as if they were equal.
  4. Tightly coupled architecture: Certain organizational structures and processes (e.g., silos) result in tight architectural coupling and interdependent systems, making it more difficult to scale DevOps internally.
  5. Customizations: Missing integration standards result in heavily customized, manually intensive tool integrations.
  6. Lack of standards: Missing validation standards leads teams to make manual, “gut feeling” calls on go/no-go decisions.
  7. Lack of automation: The predominant focus on automating delivery neglects the need to also automate operations.

What DevOps Needs Is Self-Service, Data-Driven Delivery, and Operations Orchestration

What is missing behind these bottlenecks is end-to-end observability, automation, and AI to fuel data-driven delivery and orchestration. These needs inspired the development of a new open source initiative called Keptn: a Cloud Native Computing Foundation (CNCF) sandbox project that provides self-service progressive delivery of microservices, automated standards-based quality gates, continuous feedback, and automatic remediation of production issues.

Using a data-driven, declarative programming approach to orchestration, Keptn eliminates the need to put processes into scripts. Based on GitOps, service-level objectives (SLOs), and open source interoperability standards (such as CloudEvents for communicating with tools), Keptn enables developers, operations, and site reliability engineers to identify their bottlenecks and automate resolutions — from quality gates based on SLOs and site reliability automation, to continuous delivery and auto-remediation.
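To make the quality-gate idea concrete, an SLO in this model is declared as data rather than scripted into a pipeline. The sketch below follows the general shape of a Keptn SLO file; the indicator name and thresholds are illustrative, not prescriptions:

```yaml
# Illustrative SLO definition for an automated quality gate.
# The SLI name and the threshold values are example assumptions.
spec_version: "1.0"
comparison:
  compare_with: "single_result"
  include_result_with_score: "pass"
  aggregate_function: "avg"
objectives:
  - sli: "response_time_p95"
    pass:
      - criteria:
          - "<=800"      # p95 latency must stay at or under 800 ms
    warning:
      - criteria:
          - "<=1000"     # between 800 and 1000 ms raises a warning
total_score:
  pass: "90%"
  warning: "75%"
```

Because the go/no-go decision is expressed declaratively, the same gate can be evaluated automatically for every service and stage instead of being re-implemented in each pipeline.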

Let’s apply this to one of the problems highlighted earlier: about 95% of the time allocated to pipeline engineering is spent extending processes, changing tools, and applying fixes after updates, all because traditional pipelines are too complex to scale.

The solution in this case is to remove hard dependencies and custom integrations. By separating processes (such as build, prepare, deploy, test, notify, rollback) from tooling and capabilities (such as configuration management, deployment, rollback, monitoring, testing, and ChatOps), teams can instead use an event-driven architecture to connect these processes and capabilities. The orchestration paradigm Keptn is built on makes it possible to rapidly scale and adopt these DevOps processes across an organization.
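That separation can be expressed by declaring the delivery process once as a sequence of abstract tasks, with the concrete tooling behind each task wired in via event subscriptions. The sketch below follows the shape of a Keptn shipyard file; the project, stage, and task names are illustrative assumptions:

```yaml
# Illustrative shipyard: declares *what* happens in each stage;
# *which* tool performs each task is resolved at runtime through events.
apiVersion: "spec.keptn.sh/0.2.0"
kind: "Shipyard"
metadata:
  name: "shipyard-example"
spec:
  stages:
    - name: "staging"
      sequences:
        - name: "delivery"
          tasks:
            - name: "deployment"
            - name: "test"
            - name: "evaluation"   # the SLO-based quality gate
    - name: "production"
      sequences:
        - name: "delivery"
          triggeredOn:
            - event: "staging.delivery.finished"
          tasks:
            - name: "deployment"
            - name: "release"
```

Swapping a testing or deployment tool then means changing which tool subscribes to a task’s events, not rewriting the pipeline itself.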

The 7 Ways Keptn Resolves DevOps Bottlenecks

Here are the seven ways Keptn resolves those same DevOps adoption and scalability challenges described earlier:

  1. Built for multicloud: Keptn is designed for modern, cloud native stacks and existing enterprise technologies.
  2. Flexible tool orchestration: Instead of using the same legacy tools for delivery, it orchestrates all tools depending on an organization’s unique stack and architecture.
  3. Adaptable processes: Rather than applying the same legacy processes across all microservices, it applies the process that best fits.
  4. Decoupled architecture: Instead of tightly coupled interdependencies, Keptn runs processes independently of the underlying infrastructure.
  5. Customization agnostic: An open integration standard ensures connectivity with all DevOps tools, with no vendor lock-in.
  6. Clear standards: Uses standardized SLOs for data-driven lifecycle orchestration.
  7. Built for automation: In response to a model that previously focused on automating delivery but not operations, Keptn orchestrates both.
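The open integration standard behind points 2 and 5 is CloudEvents: every tool emits and consumes the same event envelope, so consumers never need tool-specific glue code. A minimal sketch of building such an event, using only the Python standard library (the event type and payload fields are illustrative, not a complete Keptn specification):

```python
import json
import uuid
from datetime import datetime, timezone

def make_cloudevent(event_type: str, source: str, data: dict) -> str:
    """Build a CloudEvents 1.0 structured-mode JSON message.

    Any tool that understands CloudEvents can consume this envelope
    without knowing which tool produced it.
    """
    event = {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "type": event_type,
        "source": source,
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }
    return json.dumps(event)

# Example: a deployment tool announcing completion, so whichever testing
# tool is plugged in can pick up the next task in the sequence.
msg = make_cloudevent(
    "sh.keptn.event.deployment.finished",      # illustrative event type
    "helm-service",                            # illustrative source
    {"project": "sockshop", "service": "carts", "result": "pass"},
)
```

Because the envelope, not the tool, defines the contract, replacing one tool with another leaves every other participant in the sequence untouched.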

The goal of DevOps is to release better, more secure software faster. Bottlenecks to adopting and scaling DevOps processes keep too many teams from realizing the full benefits of this approach and limit their ability to take operations to the next level. To transform the way they work, and to foster more efficient collaboration, faster innovation, and greater business impact, DevOps teams need an adaptive, self-service platform model for data-driven delivery and operations orchestration (such as Keptn) to scale DevOps delivery and drive adoption throughout their organizations.

Feature photo via Pixabay.
