Exactly what is the end goal of DevOps? Ask ten people, said an IBM practice leader at DevOps World 2018 in San Francisco last week, and you’ll get eleven answers.
A measurable plurality of active participants in the DevOps movement will tell you its goal is automation — more specifically, the elimination of human effort from the types of jobs that more engineers are classifying as “undifferentiated busy-work.” Would this make the fully “digitally transformed” organization more Dev than Ops… or what more than one attendee here this year actually called “DevDev”?
“The cloud is expensive. But you know what’s even more expensive? Having a whole department of thirty people managing your infrastructure, when there’s a modern toolset out there that can do it for you,” asserted Dana Lawson, vice president of engineering for digital product design platform provider InVision. “It’s that struggle of build-versus-buy, but when you want to have that magic margin of, how much money am I going to spend on my operations — especially because you want to be profitable sometime in this century… you do make those decisions.”
Lawson addressed the currently established maxim that a cloud-native deployment becomes more expensive for organizations over time, as the volume of data they generate for themselves becomes more and more untenable. What that formula isn’t taking into account, she said, is the expense of employing a team of engineers committed to the singular task of maintaining infrastructure, either on-premises or co-located.
“You can have a global infrastructure set with a small team that can do it in a consistent manner with today’s technology, if you embrace it,” the VP told attendees. “And you can also diversify your client base by being cloud-agnostic and having technologies like Kubernetes… By doing so, you’re not only going to impact the ability to move fast, which is key, but also hopefully have a smaller set of engineers to be able to manage something globally, and have that consistency.”
The Wrong Question Gets Asked Anyway
Lawson spoke as part of a panel moderated by Cloud Native Computing Foundation Vice President Dee Kumar. At least on the official schedule, its topic was the obstacles and/or hurdles facing the adoption of cloud-native technologies in modern organizations. As it turned out, panelists were reluctant to characterize any issue facing the software development community today — including those working on infrastructure-as-code — as a hurdle, obstacle, pothole, mountain range, asteroid field, intra-galactic void, or any other manner of hindrance.
Cloud nativity is not at all a problem, all seven appeared to agree, as long as we don’t restrict ourselves to the public cloud, and stay focused on the goal of eliminating administrative overhead.
I asked the panel whether this topic actually was an obstacle, perhaps without us realizing it. Organizations appear to have two options, I stated: one, automating the delegation of pink slips to formerly important network engineers and administrators; two, imposing a corporate cultural theme of unity and prosperity while at the same time reassigning these people to lower-paid positions.
CloudBees Distinguished Fellow James Strachan was the first to respond, saying I was asking the wrong question. It’s not how do we sack folks we don’t need anymore, but how do organizations deliver business value faster?
“If that means we can outsource our level-one Ops, of running software processes and doing load balancing, people shouldn’t do that anymore,” said Strachan. “That’s not a task for operations teams. The task for operations teams is, how do we keep continually improving how we build applications so they’re easier to manage and monitor, and it’s easier for developers to build reliable code and fail fast and deliver business value better.”
It’s not that we won’t need the operations people, Strachan continued, but rather that their roles will evolve more toward a site reliability engineer (SRE). He described this as more of a developer support role, helping developers to build faster, keep them on track with their CI/CD pipelines, and move their resources to the cloud when necessary.
Bo Chheng, senior director for IT infrastructure at Sirius XM, followed up on Strachan’s comment, acknowledging in the process that the looming shadow of the public cloud still hangs heavy over long-time operations managers.
“If you’re in a leadership role,” Chheng warned the audience, “you don’t make an announcement that you’re moving to the cloud. You have conversations about that, especially about folks in infrastructure. What we told them is, these are opportunities for them to grow and learn, and adapt to the new way of doing things.”
The people who need to be treated delicately in these situations, Chheng continued, tend to fall into three separate behavioral categories. One group is certain to champion new technology, and will already have read up on it, probably on one of those online tech blogs you see cropping up these days. Another is made up of innocent bystanders who are waiting to see outcomes. This second group is capable of being educated, so these first two groups end up not posing obstacles to adoption.
There is a third group that perhaps few people had counted upon: the resisters. “They want to be a storage admin; they want to die a storage admin,” said Chheng. “And there’s nothing you can do about it. But I really think you can provide a lot of opportunities for people through moving to the cloud, and that’s what we try to do.”
Panelists agreed that “the cloud,” in this particular context, is not necessarily the public cloud (AWS, Azure, Google Cloud, “other”), and that cloud service providers have effectively extended their service territories into their own customers’ data center premises and co-located facilities. But it’s important to note this trend in their perception: These major cloud providers are assuming a space in their public dialogues analogous to the major networks in the era when broadcast television was more dominant than cable or streaming. Cloud native platforms have become a kind of “regular network program” in this model, something all the networks have an obligation to deliver, and to deliver fairly, even in the absence of something like an FCC to mandate it.
As the providers of underlying infrastructure, however, these “three major networks” may actually be setting the technology agenda for organizations even more than Kubernetes. Indeed, Amazon’s RDS service on VMware vSphere — announced earlier this month at VMworld 2018 in Las Vegas — came up several times in conversations among attendees at DevOps World. It’s a big deal for many of them, a signal that the most expensive element of public cloud infrastructure to maintain over time, by many estimates, can move back on-premises without losing either the oversight or the functionality of AWS.
It could actually be one of those moments, similar to when Oracle first announced its support for open source Linux, when it was clear to almost everyone that the managed cloud — the part that organizations pay AWS and others to oversee — has taken up permanent residence in the space formerly occupied by the hybrid cloud. It’s this realization that may be prompting everyone in IT to start asking: Has the in-house IT operations workforce already become outmoded?
Good Hands People
The goal of a DevOps movement, suggested Tony Mulvenna, who directs the infrastructure group for global insurance firm Allstate, should not be outlined in financial terms. Mulvenna spoke during a different panel at DevOps World.
Productivity can be measured without invoking dollar amounts, Mulvenna continued. Yet even from that vantage point, it’s still Allstate’s goal to reduce its technology footprint and, in his words, “get infrastructure out of the way.”
“Too often within all of our organizations, people have that many handoffs whenever they want to get anything done,” Mulvenna told the audience. “It’s just extremely slow.”
As part of a partnership with Pivotal Labs, Allstate instituted a performance measurement initiative that measures the velocity of people, not technology — for example, how often the team moves code artifacts through the staging environment. Evidently, that program has helped Mulvenna and his colleagues identify which areas of the organization contribute the least to the end customer’s perception of value.
“One great example is, I have a team: 35 people. All they’d done was building source code management for the enterprise. We sunset that team at the end of June of this year. We moved away from four different source code management tools; we’re down to two, and we should be down to one by the end of 2018. There are savings out there in terms of financials, but for us, it was more about pushing code, pushing it quickly, automation, self-service capability, and removing the infrastructure from the way of the engineering community.”
I gave the panel a definition I was told recently by another fellow on the ops side of DevOps: that its principal objective was “to make the task of infrastructure management irrelevant.” I then asked the panel whether they agreed or disagreed.
Mulvenna was first to respond: “The intent that we had as we embarked on this journey wasn’t so much to make the infrastructure management irrelevant, but to totally abstract it from the engineers. Before we started on this journey, if you think about an engineer moving from one business unit to another, just to learn the toolsets and the infrastructure that was in place, was a chore before the guy even got to write any code. Our intent was to totally abstract that from the engineer, so it wouldn’t matter whether they were writing code for deployment on-premises or in a public cloud.”
Robert Cole, an engineer with U.S. Citizenship & Immigration Services, expanded on that: “I don’t know if I would say ‘irrelevant,’ but at least making it easy. We would like to make it irrelevant for the development teams. These teams were kind of siloed off, managing their own infrastructure, and then on top of that, they’re over here developing the application. So they’re not really focusing on what their main focus should be. That’s been our giant push, shifting towards multitenant enterprise platforms that these teams can just use, and they don’t have to worry about what’s going on behind the scenes. They don’t have to worry about user management, plugin integrations, any of that. They can just get on there and use the platform and just focus on application development.”
It bears pointing out that, on the ops side of the DevOps community, a converse theory of job obsolescence is equally valid: Artificial intelligence and job automation, some say, coupled with the wide introduction of microservices into production environments, will lead to attrition among developers as they discover that software versions become more durable, less bug-ridden, and thus longer lasting. Debugging, in their view, is the undifferentiated busy-work that will inevitably disappear as organizations adopt a DevOps culture.
Whenever any two social groups converge, whether through intentional design, unintended consequence, or cataclysmic accident, the first order of business among the members of those groups becomes survival — their own, as well as that of the other group. One means of survival under discussion usually becomes the sublimation, if not eradication, of any need for the other group’s existence. The fact that we’re still in the midst of that discussion today is the clearest indicator yet that DevOps remains mired in the opening chapter of its history.
CloudBees is a sponsor of The New Stack.
Photos by Scott M Fulton III.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Flip.