Overlooked in the emergence of the DevOps practice is the fact that the techniques which prefigured this new role have evolved in response to changes in the nature of the infrastructure under management. Once a passive participant, infrastructure has now become a full partner in the creative process.
Virtual machines, containers, services, and physical hosts come into existence on demand and persist only as long as they are required. Their full life cycle may be automated, requiring no human intervention. Untold multitudes of services may experience a brief flash of existence, never rising to a level where they may be directly observed by an operator.
Today’s infrastructure is vast, flexible and ephemeral. It is now not only expressed as code but behaves like code as well, inheriting code’s abstract complexity.
To manage this new complexity, it is tempting to artificially limit choice, to “get religion,” and to develop only for a specific platform, targeting a single provider. This is a vestige of the old infrastructure’s processes, a reflex acquired in days when it was thought to aid survival. Voluntarily building within the provider’s walls may keep overwhelming complexity at the gate for a time, but it inhibits exploration and restricts growth.
If multiple platforms and providers are embraced, limits may instead be placed on the tools used to interact with them: a single automation or orchestration toolset used as a portal to the wider world. The risk, in this case, is that the pace of development is restricted to the technologies and methodologies supported by the toolset. The wider world may advance, and over time a gap grows that cannot be easily closed.
Complexity must be managed to control costs and delivery risk, but if it is managed too closely then agility, and the opportunity for innovation, is lost. Employing arbitrary constraints misses the point and ducks the issue. Complexity itself is not the problem; the problem is that our tools for managing complexity have not kept pace with the needs of current practice.
Policy-based management, powered by autonomic computing, is the most appropriate level of abstraction for managing the new complexity.
Moving from a mostly manual interaction model to one that is policy-based and adaptive, built upon the foundation of autonomic computing, provides a more powerful and appropriate level of abstraction. Focus is given to the application, and to its proper behavior, regardless of the specific infrastructure on which it may be running.
In this model, operational requirements are integrated into the definition of the application, and autonomic management is responsible for effecting context-specific actions to achieve the desired operational states. Sensors monitor the application and its components and ensure continuous adherence to the policy. Corrective action is taken autonomously.
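The article does not prescribe an implementation, but the monitor/compare/correct cycle it describes can be sketched in a few lines. Everything below is hypothetical and for illustration only: a `Policy` declares a desired operational state alongside the application definition, a sensor observes the actual state, and an effector applies a context-specific correction whenever the two diverge.

```python
# Minimal sketch of an autonomic policy loop (all names are hypothetical,
# not from any particular toolset): a Policy pairs a desired operational
# state with a sensor that observes the current state and an effector
# that takes corrective action when the two diverge.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    desired: int                     # desired operational state, e.g. replica count
    sensor: Callable[[], int]        # observes the current state
    effector: Callable[[int], None]  # applies a correction of the given size

def reconcile(policy: Policy) -> bool:
    """One pass of the monitor/compare/act loop; returns True if action was taken."""
    observed = policy.sensor()
    if observed == policy.desired:
        return False                 # already adherent to the policy
    policy.effector(policy.desired - observed)
    return True

# Usage: autonomously bring a (simulated) service up to three replicas.
state = {"replicas": 1}
policy = Policy(
    name="web-tier-replicas",
    desired=3,
    sensor=lambda: state["replicas"],
    effector=lambda delta: state.update(replicas=state["replicas"] + delta),
)
while reconcile(policy):
    pass
print(state["replicas"])  # 3
```

In a real system the sensor and effector would talk to actual infrastructure, and the loop would run continuously rather than to convergence, but the shape is the same: the policy, not an operator, decides when corrective action is needed.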
Policies which describe the deployment and in-life management of an application define the application; the application and its behavior upon a given infrastructure are no longer separate concerns.
The power of policies is that, as code, they are infinitely expressive, recursive, composable, and subject to the same rigorous control, testing and cycles of innovation as the application they define. And the power of autonomic computing is that it allows infrastructure, expressed as and behaving like code, to be operated as code, truly becoming a first-class participant in the creative process.
Cloudsoft is a sponsor of The New Stack.