How much can change in 15 years’ time? If you’re talking about the world of software and computing, everything.
Think about it. About 15 years ago, back in the age of Y2K, the way we computed looked wildly different. Virtually all computing took place on-premises. Data lived locally on workstations or on bare-metal servers that sat in a dark closet somewhere in your office, where admins had to venture whenever they needed to work with them. Very few apps or systems were distributed across multiple servers; instead, deployment models were structured around a single server and a single app, usually managed over a simple FTP connection. When you wanted to install software, you sat down in front of the machine you were installing it on and clicked through configuration wizards.
The way people in the software development world worked was very different, too. Different teams operated in isolation from each other. Developers developed, sysadmins administered, software testers tested, and so on, with little coordination or collaboration. In other words, software development workflows were highly siloed.
What Has Changed Since Then
Since the early 2000s, the computing world has seen enormous innovation. Here are just the biggest changes:
- Bare-metal workloads have been migrated to virtual servers, which provide more portability and flexibility. Eventually, virtual machines facilitated the migration of most workloads to the cloud, another key transformation.
- More recently, the widespread embrace of Docker containers as a replacement for many virtual machines has introduced another level of innovation. Docker not only creates a new abstraction layer but has also completed the shift in focus from individual servers to clusters. When you use Docker in a production environment powered by an orchestrator like Kubernetes, what matters is not specific hosts or even individual containers, but rather the services composed by containers that are spread across many host servers.
- Systems are now almost always administered over the network. The days when admins sat in dark closets-turned-server rooms are gone. In fact, admins today can work thousands of miles away from the servers they administer.
- The focus and nature of admins’ work have also changed. In the early 2000s, admins’ main job was to administer Linux servers. Then, with the introduction of scripted infrastructure and automated provisioning, their focus shifted to working with tools like Chef and Puppet. Now, the DevOps movement has changed admins’ roles again by encouraging greater collaboration between admins and developers and by blurring the lines that separate the two groups at most organizations.
- The cloud paradigm has been widely accepted. Yes, you may still run some workloads on-premises for security, cost or other reasons. But even in the most tightly regulated industries, cloud adoption is common.
- Applications and systems have become distributed. This began in the mid-2000s with the Service-Oriented Architecture (SOA) trend, and it has gone full steam ahead with the container and microservices revolution. Under this new model, services are not just distributed across clusters of servers; they are composed of multiple small, encapsulated components that make a set of services instantly portable and repeatable across any type of host infrastructure.
- Developers and admins have a proliferation of new automation tools and strategies to work with. CI servers like Jenkins and Bamboo, Infrastructure-as-Code tools such as Ansible and Chef, orchestrators like Kubernetes and Mesos, and more have made manual installation, manual provisioning and manual configuration a thing of the past.
- Massive cultural change has taken place within organizations. This change has been most obvious as the result of the DevOps movement, which empowers developers and is now widely accepted even in large enterprises. But cultural change doesn’t end with DevOps. We’re on the cusp of further innovation, as ideas like NoOps — which promotes development operations that are so automated that environments can provision, scale and load-balance themselves without any manual intervention from admins — continue to gain steam.
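The shift in focus from individual servers to services, described in the Docker point above, can be sketched with a minimal Kubernetes manifest. This is only an illustration; the service name, image, labels and ports here are hypothetical:

```yaml
# Hypothetical Deployment: three replicas of one containerized service,
# scheduled by Kubernetes across whatever hosts are available.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example.com/orders:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
# A Service gives the replicas one stable address; callers never know
# (or care) which host or container actually answers.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```

Notice that nothing in the manifest names a host. The orchestrator, not the admin, decides where the containers run — which is exactly the shift in focus from servers to services.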
Why Software-Development Nirvana Remains Elusive
If we had been able to realize the full potential of the new technologies and strategies we’ve embraced over the past 15 years, we’d be living in software-development nirvana. The cloud was supposed to eliminate downtime. Microservices were supposed to make apps more agile and greatly increase the efficiency of the teams that develop and administer software built on a microservices infrastructure. Orchestrators and automated provisioning were supposed to make tedious, manual processes as easy as clicking a button.
Yet with all of the innovations that the computing world has seen since the early 2000s, nirvana remains elusive. New tools are not being used to maximum effect because, as the toolsets have expanded in size and apps have been broken into smaller services, learning curves have also increased. We lack solutions that can bridge or integrate all of the different technologies we work with today. And above all, despite the significance of DevOps, culture change has not kept pace in many respects with technological change.
As a result, workflows are still not nearly as streamlined as they could be. Admins still spend much of their workdays manually provisioning or configuring software that is supposed to deploy automatically.
Why have we fallen short of achieving software-development nirvana? At the root of the problem lie the following five challenges, which persist despite all of the technological innovations of the past 15 years:
- The average organization is not Netflix or Google. It’s smaller and has fewer resources. As a result, it can’t consume or implement new technologies in the same way—even though many organizations mistakenly try to do this. Nor does the average organization have the staff or capital to build new tools from the ground up to suit its particular needs in the way that tech giants can.
- Companies try to implement new tools without realizing that these tools often come with huge learning curves. Tools that seem elegant on paper are useful in practice only if engineering teams can work with them without first having to learn the underlying architecture that those tools are supposed to abstract away.
- Tooling remains weak. Organizations have adopted specific tools, like Jenkins, but they have not constructed the complete CI/CD pipelines that they require to take full advantage of those tools. Jenkins on its own only automates a specific part of your workflow; it doesn’t provide the full automation that you need to thrive in today’s technology landscape. Plus, when not used properly, tools like Jenkins add more complexity than they eliminate because they come with a lot of overhead, outdated plugins, and confusing documentation. Something’s rotten in the state of Jenkins, indeed.
- Organizations that have attempted to embrace DevOps have not always understood (despite the name) that DevOps is about more than just encouraging collaboration between developers and IT Ops. You need to streamline collaboration across your entire organization, which means also including QA, security teams and so on. In other words, to reap the benefits of DevOps, you have to be comprehensive in your approach to DevOps and the workflows that accompany it.
- For as much as technology has evolved, there are still big gaps in the stack. We’re good at automating infrastructure provisioning and code integration, for example, thanks to tools like Chef, Kubernetes, Swarm, Mesos and Jenkins. But the deployment and continuous feedback processes still tend to rely heavily on manual intervention by development and admin staff. And even the automation tools that do exist are effective only when engineers can use them without having to learn the underlying architectures that the tools are supposed to abstract away, or perform the manual processes that they are supposed to obviate.
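To make the provisioning half of that picture concrete, here is a minimal sketch of the kind of declarative automation that tools like Ansible provide. The inventory group, package and service names are hypothetical:

```yaml
# Hypothetical Ansible playbook: describes the desired state of a web
# tier, and Ansible makes the hosts match it -- no manual SSH sessions.
- hosts: webservers          # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and starts on boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Even a sketch this short hints at the catch described above: the playbook is only a dozen lines, but using it well still means understanding inventories, privilege escalation and the services underneath — the learning curve that keeps tooling from being a silver bullet.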
These are some of the big problems (the list could go on, for sure!) that we face today that have prevented us from achieving utopia. But we’re not here just to criticize. In the next post in this series, we’ll get constructive by looking at the solutions available to help take full advantage of all of the innovations introduced over the past 15 years to move closer to utopia—and into a post-Jenkins world, which, as we’ll see, is an important part of the equation.
Any questions? We’re always interested in feedback; e-mail us.
Wercker is a sponsor of The New Stack.
Feature image: New Old Stock.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.