Enabling DevOps Control for Those Who Need It Most — Developers
The evolution of infrastructure operations has been a long journey that is still ongoing, but the real game changer will come when developers control their own destinies.
On the surface, it seems IT requirements are the catalyst for the evolution of infrastructure technologies. In reality, the fundamental shift has been driven by developer productivity needs. IT ultimately serves developers, who have a more direct influence on business outcomes. Whatever the security argument, if IT builds a walled garden around infrastructure, productivity will suffer and developers will leave.
Enterprise Computing: IT operations with Scripting and CLI
In the late ’90s and early 2000s, companies left mainframes for on-premises “productivity.” Microsoft and Sun made great strides in enabling developers with .NET and Java, respectively. Applications were getting fancier.
On the infrastructure side, VMware disrupted the industry with virtualization, Cisco with networking and EMC with storage technologies. Microsoft's Windows monopoly was strengthened by Active Directory. Over the years, these infrastructure technologies grew so complex that managing them became a niche skill.
The purpose of this increasingly complex infrastructure was to ship software faster, which, in turn, meant developers needed to be more productive. Ideally, developers would have been given direct infrastructure access through an abstraction layer that automated the low-level provisioning details. Instead, IT tightened control.
Cloud Computing: DevOps with Infrastructure-as-Code
The public cloud eliminated the burden of physical infrastructure and delivered Infrastructure-as-a-Service (IaaS) at exponential speed with significantly less complex implementations. While AWS started with IaaS, Microsoft started with Platform-as-a-Service (PaaS), which proved to be a bit ahead of its time. Microsoft subsequently course-corrected to focus on IaaS as well.
Nevertheless, the vision was clear: siloed disciplines of server, network and storage admins were eliminated. Infrastructure provisioning through code became the de facto model and was significantly faster than its predecessor.
While the core disruption was realized by the fundamental architecture of the cloud itself, the improvement in infrastructure operations tools has been less disruptive. Terraform, CloudFormation, Chef and Puppet started to make significant improvements in the automation space, but then came microservices, which exponentially increased the number of moving pieces.
Cloud orchestrators, on the other hand, focus on the hybrid cloud by normalizing all clouds down to IaaS, leaving hundreds of native cloud services like DynamoDB, SQS, SNS and Kinesis out of scope. At best, they acted as a facade over static templates that offered little flexibility or self-service, since they constantly relied on administrators to update them. Fundamentally, none of these infrastructure scripting tools is meant for consumption by developers.
Spinning up new environments takes days or weeks. Even at the most efficient companies, the OpEx-to-CapEx ratio is still about 1:1: a company spending a million dollars on AWS typically needs six to 10 DevOps engineers. IT, and its newly monikered platform engineering, remains a big cost center.
What Needs to Change
The need for a fundamental change in the approach to Infrastructure-as-Code is indisputable, but, more importantly, that change to platform engineering as a whole will not come from companies whose core audience has been operators.
It will come from cloud vendors or developers who have experienced the pain first-hand and understand that you cannot build a developer self-service platform with security guardrails from Terraform or other static scripting languages. We need to shift to a systems design approach to DevOps.
Most successful platforms, like Kubernetes, observability and security solutions, and even the public clouds themselves, are built with a systems design architecture. They all have an opinionated interface, often called the policy model, and a state machine that can translate higher-level user specifications into lower-layer nuances and implement them.
There is an “as-a-Service” theme to all of them: Infrastructure-as-a-Service, container orchestration service and so on. They offer a reliable and consistent way to manage and process many complex use cases without human intervention, reducing the potential for errors and improving efficiency.
In fact, DevOps automation needs to be looked at as a systems design problem. By simply extrapolating Infrastructure-as-a-Service, one could argue that DevOps-as-a-Service can be built on similar principles above the IaaS layer.
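To make the idea concrete, here is a minimal sketch of what a rules-based engine of this kind might look like: a high-level, application-centric spec is expanded into low-level infrastructure constructs, with a compliance rule enforced regardless of what the user asked for. All names and structures below are illustrative assumptions, not any real vendor API.

```python
# Hypothetical rules-based engine: expand a high-level app spec into
# low-level infrastructure constructs. Illustrative names only.

def translate(spec: dict) -> list[dict]:
    """Expand a high-level, application-centric spec into cloud constructs."""
    resources = []
    # Rule: every service gets a container definition and a load balancer.
    for svc in spec["services"]:
        resources.append({"kind": "ContainerService", "name": svc["name"],
                          "image": svc["image"], "replicas": svc.get("replicas", 2)})
        resources.append({"kind": "LoadBalancer", "target": svc["name"],
                          "port": svc.get("port", 443)})
    # Compliance rule: storage is always encrypted, whatever the input says.
    for store in spec.get("databases", []):
        resources.append({"kind": "Database", "name": store,
                          "encrypted": True, "backups": True})
    return resources

app_spec = {"services": [{"name": "web", "image": "acme/web:1.4"}],
            "databases": ["orders"]}
print(translate(app_spec))
```

The point of the sketch is that the developer describes only the application; the encryption and replica defaults come from rules the platform owns, which is how guardrails stay out of the developer's way.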
To that end, at DuploCloud, we have created a platform in which Infrastructure-as-Code, infrastructure provisioning, including security and compliance controls, and application deployment tasks are all automated by a rules-based engine and provisioned correctly the first time. In addition, we have integrated all the relevant DevOps lifecycle tools to complete the solution.
As the user submits higher-level deployment configurations via the application-centric user interface, the internal rules-based engine translates the configurations to low-level infrastructure constructs automatically, while also incorporating the desired compliance standards.
The fundamental limitation of IaC is that it is an attended script that runs serially and assumes a human is present to supervise it. The DuploCloud Platform, by contrast, features a powerful, user-friendly policy model and an intelligent state machine that applies the lower-level configuration generated by the rules engine to the cloud provider by invoking cloud-native APIs, which work asynchronously across multiple threads.
Failures are auto-recovered, and repeated failures are proactively flagged as faults in the user interface. The platform continually compares the current state of the infrastructure with the desired state, which includes compliance standards and security requirements. This gives control to the developers, who need it most to ship products quickly and efficiently.
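That continual desired-versus-current comparison can be pictured as a generic diff-and-apply loop with bounded retries. The sketch below is an assumption about how such a reconciler works in general (the same pattern used by Kubernetes controllers), not DuploCloud's actual implementation; `apply_fn` stands in for a cloud-native API call.

```python
# Generic desired-state reconciliation loop: apply diffs, auto-recover from
# transient failures, and flag resources that keep failing as faults rather
# than retrying forever. Illustrative only; not any vendor's real code.

MAX_RETRIES = 3

def reconcile(desired: dict, current: dict, apply_fn, retries: dict) -> list[str]:
    """Compare desired vs. current state; return resources flagged as faults."""
    faults = []
    for name, want in desired.items():
        if current.get(name) == want:
            continue  # already converged, nothing to do
        try:
            apply_fn(name, want)          # stand-in for a cloud-native API call
            current[name] = want          # record the new state on success
            retries[name] = 0             # transient failures are forgiven
        except Exception:
            retries[name] = retries.get(name, 0) + 1
            if retries[name] >= MAX_RETRIES:
                faults.append(name)       # surface as a fault in the UI
    return faults
```

Running the loop repeatedly converges the infrastructure toward the desired state; only a resource that fails `MAX_RETRIES` times in a row is escalated to a human, which is the inversion of the attended-script model described above.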