Vercel Brings Serverless Functions to the Edge
Vercel’s Edge Network has seen over 30 billion Edge Function invocations since the beta launched earlier this year. In one test involving image generation, the edge APIs returned results almost 40% faster than a hot Serverless Function, at a fraction of the cost.
Vercel’s Chief Technology Officer, Malte Ubl, sat down with The New Stack to discuss Edge Functions.
What Are Edge Functions?
Edge Functions can run as middleware in frameworks such as Next.js, Nuxt, Astro, and SvelteKit, or be created as standalone functions with the Vercel CLI.
By default, Edge Functions run in the region closest to the request, with the goal of lowering latency. They run after the cache and can both cache and return responses, making them a good fit for fetching data or performing rewrites.
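As a rough sketch of what this looks like in practice, a standalone Edge Function is an ordinary handler built on the web-standard Request/Response APIs. The route, query parameter, and cache lifetime below are illustrative, not from the article; in a real project the handler and config would be `export`ed from a file in the project’s API directory.

```javascript
// Minimal sketch of a standalone Edge Function (illustrative route and
// cache lifetime). Declared as plain bindings here, rather than exports,
// so the sketch is self-contained.
const config = { runtime: 'edge' }; // opt the function into the Edge runtime

async function handler(request) {
  const { searchParams } = new URL(request.url);
  const name = searchParams.get('name') ?? 'world'; // hypothetical query param
  return new Response(`Hello, ${name}!`, {
    status: 200,
    headers: {
      'content-type': 'text/plain',
      // s-maxage lets the edge cache serve repeat requests for 60 seconds,
      // matching the article's point that these functions run after the
      // cache and can themselves cache responses.
      'cache-control': 's-maxage=60',
    },
  });
}
```

Because the handler only touches web-standard APIs, the same function body works as a framework middleware or a standalone CLI-deployed function.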
Pricing is billed in units of 50 ms of CPU time per invocation. This means billing reflects only time spent performing compute operations, not time spent waiting on data fetches.
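To make that distinction concrete, here is a hedged sketch of how it plays out inside a function. The upstream URL and payload shape are hypothetical, and `fetchImpl` is injectable only so the sketch can be exercised without a network; the point is that the `await` on the network call is idle wall-clock time, while the transformation below it burns billable CPU time.

```javascript
// Sketch of billed vs. idle time in an Edge Function (hypothetical
// upstream API; fetchImpl is a parameter only for testability).
async function handler(request, fetchImpl = fetch) {
  // Waiting on the upstream response is idle time: the 50 ms CPU-time
  // billing clock is not running here, however slow the upstream is.
  const upstream = await fetchImpl('https://api.example.com/items');
  const items = await upstream.json();

  // Transforming the payload is compute, so this work IS billed CPU time.
  const names = items.map((item) => item.name.toUpperCase());

  return new Response(JSON.stringify(names), {
    headers: { 'content-type': 'application/json' },
  });
}
```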
Global deployment of serverless functions has a notable drawback: “It’s very expensive,” said Ubl. If global reach is a priority, that cost is one of the tradeoffs to weigh when choosing a serverless option. He continued, “typically you will only deploy in a single data center worldwide. You can [deploy] in multiple [data centers] but that’s like very uncommonly done.”
But cost isn’t the only barrier to entering edge networking. Building and maintaining a global edge network is usually far more complicated than adding middleware, and Edge Functions aims to bridge that gap for Vercel’s customers.
AWS’s sole role in Edge Functions is as the network infrastructure provider; Ubl confirmed Vercel is “terminating the traffic ourselves within AWS’ network.” But Vercel doesn’t use every data center available in AWS’s infrastructure. As Ubl explained, there’s a “sweet spot” of about 12-20 data centers globally that, when utilized, offers roughly 10 milliseconds of latency for all users while maintaining network safety.
Not every data center is created equal. As Ubl explained, “it might be fine to like run a cache in, you know, in a [untrusted] location, but you don’t necessarily want your compute there.” To keep it simple, Vercel offers about 15 locations for Edge Functions.
Default deployment is in the region closest to the application user making the request: the closer the data center, the shorter the latency. This usually holds unless the application depends on a database. For those cases, Regional Deployment, a new feature announced with general availability, lets developers bind the function to a specific region (i.e., the region where the database is performing compute operations).
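Pinning a function next to its database looks roughly like the following config fragment. The exact shape varies by framework and Vercel release, so treat the `regions` field and the `iad1` region ID as illustrative assumptions rather than the definitive API.

```javascript
// Illustrative Regional Deployment config for a standalone Edge Function.
// 'iad1' (US East) is a hypothetical choice — use the region ID that
// matches where your database actually runs.
export const config = {
  runtime: 'edge',
  regions: ['iad1'], // bind to one region instead of "closest to the user"
};
```

The tradeoff is deliberate: users far from the pinned region pay a little more first-hop latency, but the function-to-database round trips, which usually dominate, stay fast.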
The Cold Start Problem Solved
Scaling down to zero instances is a powerful feature, though it may not feel that way to the user who has to spin the function back up the next time it’s invoked. Vercel put a lot of engineering muscle into its architecture to minimize cold start times. Ubl quantified the difference: “We’re talking dozens of milliseconds of cold start time on Edge Functions [vs] serverless we’re talking hundreds of milliseconds of cold start time.”
Better yet is no cold start at all. Between the beta and GA, Vercel “substantially improved the performance of the health of the product offering,” Ubl said of the reduction in cold starts. “We’re now much better at routing the incoming traffic that we get, so we decrease the amount of time [...] when we hit a [cold] function that hasn’t been warmed up.”
In a more traditional serverless infrastructure such as AWS Lambda, each function gets a microVM with a Node.js process started up inside it; the isolate layer, the layer that actually handles requests, then runs inside that process. Each function thus comprises three layers of isolation, all consuming RAM and all carrying their own costs.
Edge Functions’ infrastructure keeps the isolation needed to protect customers from potential harm, but it includes only the innermost part of the instance: the isolate itself. This helps greatly with cold starts because only one isolation layer needs to be spun up. That design is the source of both the efficiency and the cost wins, since less work is required when a request arrives.
The Roadmap Ahead
There is still work ahead. Increased Node.js compatibility is one of the main areas of investment, and observability and error reporting are also being expanded. No release dates for new versions have been made public yet.