Why Kubernetes Makes Lyft Rides What They Are Today
Ride-sharing firm Lyft will continue to rely heavily on Kubernetes and microservices in the race to offer mobility solutions that should eventually include AI-piloted cars. This was a key point Vicki Cheung, engineering manager at Lyft, made during a podcast hosted by Alex Williams, founder and editor-in-chief of The New Stack, recorded at KubeCon + CloudNativeCon 2018 in Shanghai.
After serving as head of engineering at OpenAI, the non-profit AI research group co-founded by Tesla founder and CEO Elon Musk, Cheung joined Lyft, which had attempted the switch to a container-based stack a few years earlier. This was when “the hype was building up and everyone was trying to make the switch — but before Kubernetes was a thing,” Cheung said.
“And so before that, Lyft had a home-grown provisioning system that was Salt-based and we wanted to make everything container-based, so we again built some orchestration system in-house,” Cheung said. “Then Kubernetes came along, and then our engineering team was like, ‘oh, actually, that looks a lot better because then we get the open-source community support and then we don’t have to maintain our own thing.’ So, then we kind of decided to wait for Kubernetes to mature a little bit and then we are making the switch now.”
For Lyft’s infrastructure, Go is often used, “especially because we’re deeply integrated with a bunch of different open-source projects,” Cheung said.
“We definitely made the switch to start integrating more and more Go into our stack and … we’re pretty much a Go and Python shop,” Cheung said. “We do have microservices so people can choose whatever they want but those are kind of the preferred languages. So for me, I would say that historically, just like the other companies I’ve worked at, we have always been Go or Python.”
The end result is that Lyft’s development platform and infrastructure are “super microservice, micro-oriented,” Cheung said. “So we have literally hundreds of microservices, and for a lot of people to have that freedom to pick their stack, I think, containers are a natural option so that they have control over their whole stack from the VM layer. And then we, as an infrastructure team now, don’t need to get into the weeds of how the service is going to be run,” Cheung said. “We just need to provide the platform that can run these images. And so I think the reason why we picked…containers is that the contrast between infrastructure and the service owners is a lot clearer. We just say, ‘okay, there is this platform that will run this very specific interface and you just have to provide that.’”
Kubernetes, among other available platforms, was a clear-cut choice, Cheung said. “We picked Kubernetes because we looked up a bunch of different platforms for running containers and we decided that Kubernetes was well, the most supported, I think, in the ecosystem currently and also because it was easily understood, I guess,” Cheung said. “There’s less overhead in terms of operating it.”
When customers schedule rides with Lyft on their smartphones, most are, of course, unaware of and unconcerned about the underlying Kubernetes platform powering their experience.
“Kubernetes enables us to really improve developer productivity and so we can bring new features to our customers a lot faster and that’s because like now, our engineers don’t need to go and mingle with Salt. They also don’t need to go and understand layers of infrastructure that they’re not trained to understand or they’re not supposed to understand,” Cheung said. “Now, they just need to know that they can build an image that needs to run, and once they have that, they hand it over to the infrastructure team and then that’s it.”
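The handoff Cheung describes, where service owners supply only a container image and the platform handles the rest, can be sketched as a minimal Kubernetes Deployment manifest. This is an illustrative example, not Lyft’s actual configuration; the service and image names are hypothetical.

```yaml
# Minimal sketch of the interface Cheung describes: the service team
# provides an image, and the infrastructure platform runs it.
# All names here are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rides-service            # hypothetical service name
spec:
  replicas: 3                    # the platform, not the team, manages scheduling
  selector:
    matchLabels:
      app: rides-service
  template:
    metadata:
      labels:
        app: rides-service
    spec:
      containers:
        - name: rides-service
          # The image is the only artifact the service owners hand over.
          image: registry.example.com/rides-service:v1
          ports:
            - containerPort: 8080
```

Everything below the `image:` line is the platform’s concern, which is the clearer “contrast between infrastructure and the service owners” Cheung credits containers with providing.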
In this Edition:
1:40: How Cheung got involved in infrastructure software
4:19: Exploring Cheung’s job as an Engineering Manager at Lyft, and the Envoy and service mesh aspect of it
10:59: What is Envoy?
13:49: How has that translated to the developer experience?
18:04: What are some of the other tools that you’re starting to use that come from the open source community?
19:06: What is the observability stack?