
Why Serverless Matters

4 Jan 2019 12:05pm

Oracle sponsored this podcast.

Serverless continues to mean different things for different people — but many users and proponents have very solid examples of how and why it works for them.

Christopher Woods, research software engineer at the University of Bristol, is a case in point. In addition to saving money by freeing engineers from server maintenance so they can focus on building business logic, serverless allows organizations to move code to the cloud faster, Woods said. You also “do not have to think about any of that kind of non-strategic, undifferentiated heavy lifting,” he said.

“[Serverless providers offer] the stuff that everybody has to do when you ship an app to the cloud: you have to choose a framework, you have to find the server, you have to provision that server, you have to get your SSH key… I mean, these are kind of the old ways of doing things,” Woods said. “Maybe it’s going to scale, then what happens when that server goes down? I mean, all of this stuff is not strategic for the organization, so there’s no value to the business in spending time managing servers.”

Woods, along with Shaun Smith, director of product management for serverless at Oracle, and Chad Arimura, vice president of serverless at Oracle, discussed in detail what serverless can offer during a podcast hosted by Alex Williams, TNS founder and editor-in-chief, at KubeCon + CloudNativeCon North America 2018.

The move to serverless also means shifting much of the complexity associated with server maintenance and operations to a third party, Woods said. “It’s more in terms of the contract between the software designers running on cloud native and the platform designers, because effectively, with serverless, what you’re doing is you’re saying, ‘I’m going to define well-defined bits of work, distributed by default,’” Woods said. “So, you’re basically making a contract with these stateless functions that are distributed by default, and then you’re saying, ‘Now I’ve got this application which can scale infinitely.’”

The operations work associated with scaling “can be passed over to the platform designers,” Woods said. “They can then build something which will fulfill this sort of serverless contract of infinite scaling. That, I think, is the key thing: the advantage is that you’re designing for infinite scaling from the beginning.”
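The “contract” Woods describes, small stateless units of work with well-defined inputs and outputs, can be sketched in a few lines. This is a minimal illustration assuming a generic JSON-in/JSON-out function shape, not any particular provider’s SDK:

```python
import json

def handler(event: dict) -> dict:
    """A stateless unit of work: the output depends only on the input event.

    No server state, no local files, no shared globals -- so the platform
    is free to run any number of copies in parallel and scale them
    independently.
    """
    word = event.get("text", "")
    return {"length": len(word), "upper": word.upper()}

# The platform would invoke this per request; locally we can call it directly.
print(json.dumps(handler({"text": "serverless"})))
```

Because the handler keeps no state between calls, the platform can run as many copies as demand requires, which is what makes the “infinite scaling” side of the contract possible.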

The genesis of the serverless solutions Woods describes above can be traced back to platforms as a service (PaaS), which have existed for over 10 years. Under a PaaS model, for example, the organization pays as it goes. “You’re paying for the service while it’s running, and then when it’s finished its job, it’s nothing,” Smith said. “So, it’s very much like electricity, right? Turn on the light, it goes on. Turn off the light, it goes off — so [that’s how] you’re paying for it.”
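The electricity analogy can be made concrete with some back-of-the-envelope arithmetic. The rates and numbers below are hypothetical, chosen only to illustrate the pay-per-use shape of the billing, not any vendor’s real prices:

```python
# Hypothetical pricing, for illustration only -- not any vendor's real rates.
PRICE_PER_GB_SECOND = 0.0000166  # charged only while a function is running
MEMORY_GB = 0.128                # memory allocated to the function
DURATION_S = 0.2                 # average execution time per invocation

def monthly_cost(invocations: int) -> float:
    """Cost scales with actual use: zero invocations means zero cost."""
    return invocations * MEMORY_GB * DURATION_S * PRICE_PER_GB_SECOND

print(f"idle month: ${monthly_cost(0):.2f}")
print(f"busy month: ${monthly_cost(1_000_000):.2f}")
```

The key property is the first line of output: an idle service costs nothing, unlike a provisioned server that bills around the clock whether or not anyone is using it.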

PaaS, like serverless, of course, can help to save costs. “You can deploy code without worrying that it will run up a huge bill for services no one is actually using,” Smith said. “And with a service model, you pay as someone is using it. So, it really gives a new opportunity for a lot of the startups.”

Serverless also has the capacity to scale for new applications and platforms, such as machine learning and other artificial intelligence (AI) applications running on Kubernetes. “Serverless is not just about your code and your logic and your compute — it’s things like AI and image recognition,” Arimura said. “And those are services now that you can call through an API that could be strategic to the organization… You know how hard it is to build tons of [flow] models and kind of manage all that yourself and then set up infrastructure to do that — but now, you can just call an API and get facial recognition right into your application on a serverless platform.”

In this Edition:

1:42: What are the similarities between serverless and PaaS?
5:11: Explaining undifferentiated heavy lifting.
12:17: Tell us about the architecture of the Fn stack itself.
16:00: How are you [Smith] evaluating platforms and what are you exploring now?
21:23: Investment in functions on both external and internal teams.
21:38: What are you seeing in terms of the team development that has happened with the adoption of Fn internally? Are the teams changing?
