
Discussing All Things Distributed from The New Stack WarmUp

Mar 19th, 2015 3:00am

At our first WarmUp of 2015, on March 3rd in Seattle, we talked about the transition to new distributed environments, and the ways in which platform providers, app developers, operations pros and enterprises are adapting. In this episode of The New Stack Analysts podcast, which was recorded live at the event, host Alex Williams is joined by four excellent panelists who are on the front lines of managing and analyzing distributed systems:

  • Avi Cavale, co-founder and CEO at Shippable;
  • Heather McKelvey, vice president of engineering at Basho;
  • Richard Seroter, director of product management for CenturyLink Cloud;
  • and Kit Merker, product manager at Google.

For more episodes of “The New Stack Analysts” check out the podcast section.

#35: Discussing All Things Distributed from The New Stack WarmUp

“The thing that we have solved with containers,” says Avi, “is, ‘how do I get some process on a machine spun up super fast and keep it isolated?'”

“Even though a piece of the puzzle has been solved, other inefficiencies are getting amplified,” Avi says, and mentions several, including “orchestration…’how do I manage all this chaos of all these containers?'”

Regarding containers, Heather says, “It’s great that they allow you to break it into microservices, but from a classic point of view … it’s very akin to an image service within compute. One of the keys that’s going to need to be solved well there is, ‘how do I keep those containers up-to-date without having to rebuild them all the time?'”

“There’s been maybe a meme or a myth about, ‘containers are the new VM, and move away from VMs to containers,'” says Kit. “We [at Google] really think of it a little differently, which is that containers are an approach to get the most out of your VM infrastructure.”

“If you use containers as a way of reducing overhead — by sharing OSs, that’s one piece of it — and the second is by stacking them and being able to take advantage of more of the CPU and memory resources available to you — that’s a way of getting a better utilization,” says Kit. “As you’re scaling up and down, or you’re changing the mix of the different workloads on those VMs, you’re getting more out of them.”
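Kit's point about stacking containers to use more of a VM's CPU and memory can be made concrete with a small sketch. This is a hypothetical example, not from the discussion: the service names and images are placeholders, and the `--cpus`/`--memory` flags shown are standard Docker resource limits used here to illustrate packing two bounded workloads onto one shared-kernel host.

```shell
# Run two services on the same VM, each with explicit resource caps,
# so together they soak up more of the host's CPU and memory than
# either would alone. Both share the host OS kernel (no per-VM overhead).
docker run -d --name api   --cpus="1.0" --memory="512m" nginx
docker run -d --name cache --cpus="0.5" --memory="256m" redis

# Inspect live utilization per container to verify the bin-packing.
docker stats --no-stream api cache
```

As the workload mix changes, the limits can be rebalanced without reprovisioning the underlying VM, which is the utilization win Kit describes.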

“Complexity is shifting,” says Richard, “as you look at, even, microservices moving from these ‘intelligent’ middleware solutions that have all of this knowledge baked in, to dumb pipes and smart end points, where now even my different microservices have to be smarter about, ‘how am I doing authentication; how am I centrally logging; how am I making sure that can scale horizontally effectively?’ So, with all these things that maybe you didn’t worry about as much before: ‘Now my end point has to be smarter; now this thing has to scale, support continuous integration, continuous delivery’ — things that maybe you took for granted before — complexity is now potentially moving to that service level where I just can’t take things for granted.”

“We’re not getting any simpler,” Richard says. “We’re not going to look back a year from now and say ‘it’s easier to do distributed systems.'”

Feature image via Flickr Creative Commons.

Basho and Shippable are sponsors of The New Stack.

TNS owner Insight Partners is an investor in The New Stack.