
Why Serverless Is the Uber of Infrastructure

4 Apr 2019 9:39am, by Jonathan Sullivan

NS1 sponsored this post.

Jonathan Sullivan
Jonathan has more than a decade of experience architecting, deploying, and maintaining mission-critical, distributed IT solutions across colocation, bare metal, cloud and CDN for companies ranging from dot-com startups to Fortune 1000 enterprises. Prior to NS1's founding, Jonathan led the New York managed hosting operations team for Voxel from 2006 through its acquisition by Internap (NASDAQ:INAP) in 2011. At Internap, he led the hosting business unit and sales engineering organizations’ evolution to become a key competitor in modern hosting and hybrid cloud ecosystems. He studied computer science at RPI.

Serverless, the latest evolution of computing power, allows engineering teams to operate and think about infrastructure the same way a ride-share user thinks about cars: in short, they do not.

Engineers can build their application to execute on a serverless platform and that’s it. They don’t have any insight into what kind of CPU happens to be executing their code. They also don’t know what operating system or kernel is running. The entire server is abstracted away — all the developer knows is that their code has been executed, work has been done, and now those compute resources disappear back into the ether, ostensibly free to do someone else’s work. Lyft and Uber provide us with our car analogy here. A rider opens an app, inputs where they want to go, and within a few minutes they’re picked up and brought to their destination. Aside from the number of seats, the rider has no control over what model of car is going to pick them up. All they know is that the service is going to get them from point A to point B, and then they can forget the car ever existed. They have no control, but incredible, on-demand flexibility.

To extend the analogy further, the adoption of serverless is having a disruptive effect on computing similar to how Uber and Lyft have forever changed how one hails a cab. The numbers also clearly indicate that serverless is more than just a passing fad: recent research, for example, indicates that serverless computing is the fastest-growing type of cloud service, with a growth rate of 75 percent, and Gartner predicts more than 20 percent of global enterprises will have deployed serverless computing technologies by 2020. Recently, industry leaders Red Hat, IBM and SAP announced they were launching serverless offerings, which should serve as a major signal that the technology is coming of age.

So, what’s all the buzz about? Serverless is a cloud computing model in which the cloud provider shoulders the burden of resource provisioning. The cloud provider is solely responsible for ensuring that the application has the back-end infrastructure it needs to execute when it’s invoked, which lets the development team focus solely on writing application code. At its heart, serverless computing delivers what every DevOps team seeks to provide its back-end engineers — an application substrate that “just works” all the time. From a cost-benefit perspective, an organization only pays for the resources it uses, rather than renting or securing servers in fixed quantities, or maintaining always-on cloud compute or colocated resources 24×7.
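To make the model concrete, here is a minimal sketch of a serverless function written in the AWS Lambda handler style. The function name and event shape are illustrative assumptions; the point is that the developer writes only this function, and the provider handles provisioning, invocation, and scaling:

```python
import json

def handler(event, context):
    """A minimal serverless-style function: it receives an event,
    does its work, and returns. There is no server to provision,
    patch, or manage, and the platform bills only for the time
    this invocation actually runs."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the handler is just a function call; on a serverless
# platform, the provider invokes it on demand in response to events.
print(handler({"name": "NS1"}, None))
```

Everything outside this function — the operating system, the runtime, the scaling logic — is the provider’s problem, which is exactly the abstraction described above.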

How Serverless Is the Ride Share of Computing Power

A colleague recently drew an interesting analogy between the evolution of application infrastructure and consumer transportation. He noted that the level of ownership and control over how enterprises deploy an application has followed the same evolution as the vehicles we use to get around. Like the options people have for getting from place to place, the options for how a developer deploys an application have changed substantially in recent years. In the early days of internet applications, the only option an organization had was to buy what was needed: they purchased the infrastructure, and then it was theirs. The first fifty years of automobiles worked the same way — if a person wanted a car, they bought one. The buyer had complete control over the model, specs, and color of the vehicle they drove, just as engineering teams had control over the model, specs, and in some cases, the colors of the servers their application lived on.

This computing model evolved in the 2000s with the introduction of managed hosting and later virtual hosting, which let organizations pick from a limited set of pre-specified servers that they could rent by the month. Here we see the first tradeoffs in flexibility as the vendor might only have three or four CPU and RAM combinations to choose from. Developers traded control over the hardware for ease of use in deployment and flexible rental terms. Back in the world of automobiles, rental cars offered the same paradigm shift: instead of buying a car and committing to owning it for years, a consumer simply rented it for however long they needed it. The downside is that the customer only had a few models to choose from, but the upsides were myriad, absolving them of all the baggage and responsibilities that come with ownership (depreciation, maintenance, insurance costs, being stuck with a car that doesn’t fit their needs as their life changes, etc.).

One of the more recent computing evolutions, driven by advances in virtualization, is the now-ubiquitous cloud, which enabled IT teams to deploy and scale compute and storage nodes, paying by the hour for the resources they use. In our car analogy, RFID, GPS and other technologies enabled companies like Zipcar and car2go to offer the same “by the hour” access to cars. Again, the consumer trades customization and control over the models in exchange for even greater flexibility in the location and length of the rental.

Limits to Serverless Computing

As interest in serverless continues to increase, organizations need to understand the potential tradeoffs. Those organizations that have gone all-in on cloud computing are still sometimes shocked when they get their bill at the end of the month. Companies don’t move to the cloud solely to save money on infrastructure though. They do it so their development team can think less about buying, operating, patching, and upgrading infrastructure — because that’s probably not core to their business — and in doing so, they free up engineering cycles to focus on what’s really important: building an application and serving customers.

Serverless has the same tradeoffs, and then some. With serverless, an organization forfeits complete control over the infrastructure. While this can have tremendous upside — it is free from all operational burdens associated with infrastructure and only pays for the actual work the serverless platform does — it also means the engineering team has written its entire application against someone else’s platform. This should be setting off alarm bells. If an organization has written its entire codebase to run on AWS Lambda using its proprietary APIs and schemas, suddenly it is very much locked in.

Maybe an organization needs to add a new feature to its application that’s not possible given the serverless platform’s API. Maybe the vendor’s costs or features change. Certainly, any business’s requirements evolve over time. Any one of these situations could require an engineering team to move part or all of its application off the serverless platform and onto more traditional cloud computing infrastructure (or maybe even all the way back to its own infrastructure). This move can mean having to completely rewrite an application in order to break ties with that vendor and escape the constraints of the serverless platform.
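One common way to soften this lock-in risk is to keep business logic in plain, platform-agnostic functions and confine the vendor’s invocation contract to a thin adapter. The sketch below assumes the AWS Lambda event/context signature; the function names and event shape are illustrative:

```python
import json

def resize_summary(width, height, factor):
    """Platform-agnostic business logic: no serverless APIs appear
    here, so this function can move to a container, a VM, or another
    provider without being rewritten."""
    return {"width": int(width * factor), "height": int(height * factor)}

def lambda_handler(event, context):
    """Thin adapter for one vendor's invocation contract (AWS Lambda's
    event/context signature is the assumption here). If the application
    leaves the platform, only this shim needs replacing."""
    body = json.loads(event["body"])
    result = resize_summary(body["width"], body["height"], body["factor"])
    return {"statusCode": 200, "body": json.dumps(result)}
```

Under this split, a forced migration means rewriting a few lines of glue rather than the whole application.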

Application performance is another oft-overlooked metric that developers still need to worry about in a serverless environment. Much has been written about the noisy-neighbor problems inherent to cloud computing platforms. Some might assume that in the absence of servers to manage, there would be no concerns around application performance management or traffic steering. In reality, these things are still critical to monitor and manage. Enterprises using serverless technology to build applications aren’t magically freed from the physics of latency, fiber cuts, or performance issues with the provider, because just like the cloud, there’s a server at the end of that rainbow; they just can’t see it. These organizations also still need to make sure they are getting users to the right serverless endpoint, and they still need to monitor and retain control over their traffic steering so they can route around localized performance issues.
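The kind of traffic-steering decision described above can be sketched as a simple selection over monitored endpoints. The endpoint names and latency figures below are illustrative; in a real deployment they would come from active health checks and latency probes:

```python
def best_endpoint(latencies_ms, unhealthy):
    """Pick the lowest-latency healthy endpoint -- the steering
    decision teams still own even on a serverless platform.
    latencies_ms maps endpoint name to a measured probe latency;
    unhealthy is the set of endpoints failing health checks."""
    healthy = {ep: ms for ep, ms in latencies_ms.items() if ep not in unhealthy}
    if not healthy:
        raise RuntimeError("no healthy endpoints available")
    return min(healthy, key=healthy.get)

# Hypothetical probe data: us-east is fastest but currently failing
# health checks, so traffic is routed around the localized issue.
probes = {"us-east": 23.0, "eu-west": 88.5, "ap-south": 140.2}
print(best_endpoint(probes, unhealthy={"us-east"}))
```

The logic is trivial on purpose: the hard part is the monitoring that feeds it, which serverless does not make go away.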

What This All Means for Enterprises

As businesses look to make decisions about computing strategy in 2019, they should be sure to follow the old adage of choosing the right tool for the job. For enterprises considering a serverless approach in 2019, rapid prototyping can be a worthy entry point or a chance to dip their toes in the water. An organization probably doesn’t need to buy or rent 100 servers in its pursuit of an MVP. Serverless computing, like cloud computing, can support agile development in new and transformative ways, giving organizations additional options for cost-effectively iterating and scaling from early development all the way to production.

Enterprises that can maintain application performance while reaping the cost and time-savings benefits of serverless computing will find themselves with more time to innovate and deliver new offerings to their customers. With evolving serverless offerings and major investments from AWS, Microsoft, and Google, alongside enterprise stalwarts like IBM and SAP, we’re just now seeing the beginning of how serverless is going to impact the way applications are built and delivered. Remember though, sometimes it still pays to just buy a car.


