The New Stack is delighted to share the inaugural episode of The New Stack @ Scale podcast. This new monthly program, sponsored by New Relic, explores cloud-native architectures and how to manage complexity in distributed, scaled-out environments.
Naturally, this involves venturing deep into monitoring, analytics and open source, all pointing toward a future of greater transparency between platforms and the infrastructure beneath them.
But it’s not all just spinning containers and tortured ecosystem metaphors. Each month, The New Stack @ Scale will bring the industry’s best and most enlightening minds to the studio to address topics of interest to developers, operations managers and business leaders alike. Lively discussions, laced with a variety of featured segments, make for an informative, ambitious show that stands apart from the typical one-mic podcast.
The show is co-hosted by The New Stack founder Alex Williams and New Relic Editor-in-Chief Fredric Paul, a longtime journalist himself. The first episode features two guests who bring a wealth of experience to the conversation:
- Bryan Cantrill is the CTO of Joyent, a purveyor of container-native infrastructure for public cloud and private cloud software solutions. This year, Joyent introduced the Triton Elastic Container Infrastructure, which runs secure Linux containers directly on bare metal via an elastic Docker host. Bryan has spent two decades working on system software, and he is as engaging and entertaining on the podcast as he is on the software development and operations-focused conference circuit.
- In 2014, CEO Patrick Reilly and his team founded Kismatic to bring the newly created Kubernetes to the enterprise. Before launching Kismatic, he was CEO and co-founder of OrlyAtomics, Inc., and he previously worked at the Wikimedia Foundation, OmniTI, Schematic, Media Revolution and Sony Pictures. He is also an advisor at Mesosphere, Inc.
For other podcast episodes, check out the podcast section of The New Stack.
Listen to all TNS podcasts on Simplecast.
Fred starts the conversation by asking what the term “cloud native” really means, and why the concept is getting such attention lately.
Bryan describes “cloud native” as a move toward “a pure container future” in which we break up the massive monolithic services that are holdovers from the dot-com boom and re-architect applications to be more distributed, ideally more available, and, most importantly, more nimble and agile.
“Across the corporate world — not just here in the Silicon Valley bubble — you see companies that are realizing that they need to develop software faster, that they need to deploy it faster, that software is competitive advantage,” says Bryan. “Cloud-native computing, to me, is all about expressing that modern way of developing apps, such that you can develop and deploy them quickly.”
Fred asks whether a technological or environmental tipping point brought us to where we are today.
Bryan cites a few tipping-point candidates, one being the proliferation of languages and runtimes such as Node.js and Go that make it easier to “bang out a service with not many dependencies and get this thing working in the small, which kind of drives you toward adopting this UNIX philosophy when you develop your services — this kind of microservice approach.”
Also, containers. “Folks like Joyent, like Google, realized internally that containers are actually the right way to develop things.” Docker realized that containers are not only more operationally efficient, but also that they can be used as an application delivery vehicle — “they played a very key, catalytic role here,” says Bryan. “I can take this dependency nest into one binary — one giant, statically-linked executable that we call a ‘container’ — and ship that into production,” and ideally back and forth from laptop to production, he says.
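That “one shippable artifact” idea is exactly what a Dockerfile expresses: every dependency is baked into the image, so the same artifact moves unchanged from laptop to production. A minimal, hypothetical example (base image, file names and port are illustrative):

```dockerfile
# Hypothetical sketch: package a small Node.js service and all of its
# dependencies into one image that ships as a single artifact.
FROM node:4
WORKDIR /app
COPY package.json .
RUN npm install          # dependencies are baked into the image here
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Once built, the image is the unit of delivery: the same bits that ran on the developer’s machine are what production executes.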
Alex brings orchestration into the discussion, asking Patrick about the particular effect of Kubernetes on application development. Patrick observes that the same engineers at Google who originated Kubernetes had worked on Borg, Google’s internal large-scale cluster management system.
“Everything pretty much runs in a container at Google,” says Patrick. “So, they’re going to make sure that they build the orchestration around the containers’ needs to talk together. They’re going to come up with the concepts of services and replication controllers and things that make sense for the context of trying to make a bunch of containers work together, and work well.”
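The concepts Patrick names map directly onto short manifests. A hypothetical sketch, in the Kubernetes v1 API of the period: a replication controller keeps three copies of a containerized app running, and a service gives them one stable address so they can find each other (all names and the image are illustrative):

```yaml
# Hypothetical sketch of the concepts Patrick describes.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3          # keep three identical pods running at all times
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # route to whichever pods carry this label
  ports:
  - port: 80
    targetPort: 8080
```

The service decouples callers from individual containers: pods can die and be replaced by the replication controller without clients noticing.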
Alex asks whether newer versions of Docker present problems with running containers that are built with an older version.
“The key is not necessarily what has happened to the Docker engine, but what the Docker container looks like,” says Bryan. “Part of the value here is that it relies on the fact that what’s underlying the Docker container is the Linux system call interface. That’s actually what you’re executing on.” The stability of the Linux kernel system call interface makes it possible to “take a container from the past and run it in the future,” he says.
Docker is moving so quickly, Bryan goes on to say, that some of these concerns are still in the future. “When you deploy at massive scale, you need to begin to think about these kinds of things — versioning and so on — but you also need to think about service composition. You need to get out from underneath single containers, and even groups of containers, into thinking about it at a higher level. That’s part of what Kubernetes and some of these other frameworks have captured.”
About Kubernetes, Patrick adds that the idea of rolling updates of services and of containers was part of the thinking from the beginning. “There’s no way you’re going to get stuck,” unable to move beyond a particular Docker daemon version or container format, he says. “Everything is meant to be pretty pluggable and future-proof — as much as we can see in the future.”
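At the time of this episode, that rolling-update idea was exposed directly in the Kubernetes CLI. A hypothetical invocation (controller name and image are illustrative) that swaps the pods behind a replication controller over to a new image one at a time, so the service stays up throughout:

```shell
# Hypothetical example, Kubernetes-1.x-era CLI: replace the pods managed
# by the "web" replication controller with a new image, one pod at a time.
kubectl rolling-update web --image=example/web:1.1
```

Because pods are replaced incrementally and the service keeps routing to whichever pods are healthy, callers never see the update happen.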
Alex then asks how application development in general has been affected by the advent of cloud-native architecture. Patrick remembers his days at OmniTI, triaging web properties as they started to become popular. “They started to get traffic and they started to fall over, because the way they designed things wasn’t going to meet the demands of the property.” They didn’t understand why they couldn’t scale.
“You could slap a little caching on there, you could try a little memcache, you could do some more slaves for their database, but you still had to re-architect bits and pieces,” he recalls.
“Where I’m excited about the cloud-native stuff,” he continues, “is now I can try a ton of experiments that are going to work in production as well as they work on the laptop.”
“Ryan Lane from Lyft was speaking this week at the San Francisco Microservices Meetup, and he was talking about how their developers now can get something into production in one hour from when they actually built it,” Patrick says. “It’s not just the container they built on their laptop. It’s the Dockerfile that they built. That’s tested, gone through all their CI/CD, the container’s put out the other end, they go through all of the different checks, they get it all the way up into canary deploy, and they can promote it all the way to production traffic within that one hour.”
Docker and New Relic are sponsors of The New Stack.