What Front-End Developers Need to Know about Serverless Databases
Serverless is the next paradigm shift in the way we create and run web services and apps. Serverless computing runs in the cloud and frees developers from thinking about back-end tasks such as managing infrastructure, scaling servers, and provisioning capacity and resources.
In this article, we look at what front-end developers need to know when working with serverless databases and how these databases differ from traditional systems.
Core Features of Serverless Databases
There are many different kinds of serverless databases. Every major cloud provider has one to offer, and there are even companies, like Fauna, centered around providing a serverless database as their core product. Let’s look at the features all of these databases share and the pain points of legacy databases they ease.
Consumption-Based Billing
The main point of serverless technology, in general, is consumption-based billing. We don’t have to get a subscription that costs a fixed amount of money per month; we only pay for what we use.
For example, if we create a new app that doesn’t have any users yet, we don’t need to pay for it. Many serverless database offerings even come with a free tier that is often enough to accommodate the first users we get. This free tier allows us to test cost structures before finalizing design and fee details.
Usually, the pricing of serverless databases is based on storage and compute: how much data we store and for how long, and how much compute we use to read and write that data. The granularity of the metrics these calculations are based on varies from provider to provider, and some providers let users switch to provisioned capacity if needed. This can save a significant amount of money later in the lifetime of a service, once actual usage metrics are established.
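To make the pricing model concrete, here is a small sketch of such a cost calculation in JavaScript. The per-unit rates are made up for illustration; real rates vary by provider and are usually tiered:

```javascript
// Sketch of consumption-based billing with hypothetical per-unit rates --
// actual rates differ between providers and usually have tiers and free quotas.
const RATES = {
  storagePerGbMonth: 0.25, // $ per GB stored per month (hypothetical)
  perMillionReads: 0.5,    // $ per one million read ops (hypothetical)
  perMillionWrites: 2.5,   // $ per one million write ops (hypothetical)
};

function estimateMonthlyCost({ storedGb, reads, writes }) {
  return (
    storedGb * RATES.storagePerGbMonth +
    (reads / 1e6) * RATES.perMillionReads +
    (writes / 1e6) * RATES.perMillionWrites
  );
}

// A brand-new app with no data and no traffic costs nothing:
console.log(estimateMonthlyCost({ storedGb: 0, reads: 0, writes: 0 })); // 0
```

The key property the sketch captures is that cost tracks usage: zero usage means a zero bill, and every unit of storage or compute shows up linearly on the invoice.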
Auto-Scaling
Auto-scaling is another crucial part of serverless technology: the ability to accommodate changes in utilization quickly and without any human intervention.
In the previous era of privately owned servers, system designers had to choose the right size of hard disks, CPU, and memory. If we didn’t buy enough, we would run out of space, or service quality would degrade with the increasing number of users. If we bought too much, we would pay for hardware we didn’t need, which would make our services more expensive and possibly give competitors a cost advantage.
Then came the cloud, and we could rent servers, even virtual ones. At first, they had to be rented for months at a time; later, for minutes. It became possible to spin new servers up or shut old ones down as demand changed.
Serverless auto-scaling is the next step in that evolution. With serverless database technology, it is possible to scale to virtually infinite disk space and compute in seconds when traffic spikes hit us, and we can scale back down if all users are offline.
Keep in mind that auto-scaling coupled with consumption-based billing can be a double-edged sword when not handled properly. If we don’t set upper limits for free users, or have a flaw in our pricing structure, this setup can lead to significant costs we didn’t anticipate. It’s vital to understand how much a user will cost us, so we aren’t surprised when we suddenly have millions of them.
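A back-of-the-envelope check like the following sketch can help here. All rates and numbers are hypothetical; the point is to know the cost of a single user before there are millions of them:

```javascript
// Hypothetical sanity check for consumption-based billing: estimate what one
// user costs per month and compare it against what that user brings in.
function monthlyCostPerUser({ readsPerUser, writesPerUser, costPerMillionReads, costPerMillionWrites }) {
  return (
    (readsPerUser / 1e6) * costPerMillionReads +
    (writesPerUser / 1e6) * costPerMillionWrites
  );
}

// A free tier without upper limits is only safe while cost stays below revenue.
function isSustainable(costPerUser, revenuePerUser) {
  return costPerUser < revenuePerUser;
}

// Hypothetical user: 10,000 reads and 1,000 writes per month.
const cost = monthlyCostPerUser({
  readsPerUser: 10000,
  writesPerUser: 1000,
  costPerMillionReads: 0.5,
  costPerMillionWrites: 2.5,
});
console.log(isSustainable(cost, 1.0)); // true -- well below $1 of revenue
```

If this check fails for a realistic usage profile, the fix belongs in the pricing structure or in hard usage limits, not in hoping the traffic spike never comes.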
No Cold Starts
One of the main discussions around serverless revolves around cold starts. A cold start occurs when a serverless resource is accessed for the first time, or after it hasn’t been accessed for an extended period. Cold starts come with very high latency, which is why they’re frowned upon by serverless skeptics.
A cold start isn’t an inherent serverless problem, but an issue of Functions as a Service (FaaS) platforms like AWS Lambda or Azure Functions. Serverless databases like FaunaDB, for example, don’t have cold starts and can deliver low latency responses even after being in standby for a period of time.
No Maintenance
Not having to think about operations isn’t directly tied to serverless technology; it comes as part of cloud and managed services in general.
With managed services, we don’t have to buy new hardware if it’s outdated or broken, and we don’t have to update our database software when updates and bug fixes are released. In the serverless space, many services offer automatic backups of our data, so it doesn’t get lost if something breaks down or we or some of our customers make an error that results in data loss.
Serverless databases come with operations baked right in. At first glance, fully utilized legacy databases running on a rented server may be cheaper than a serverless database offering, but we can’t forget the other long-term maintenance costs that contribute to and inflate the total cost of ownership.
Global Low Latency
Almost all serverless database providers have some form of global deployment offering. FaunaDB builds on a special algorithm called Calvin, which allows replication in different geographical data centers with minimal latency overhead compared to other solutions.
Global availability zones may not seem essential when starting, but without them, there could be unneeded complications if additional databases and replication for different locations become necessary. Serverless databases allow us to keep the data where it’s needed right from the start.
In today’s global economy, this can often be the make-or-break factor when it comes to competitive differentiation and profitability.
Secure by Default
Another crucial aspect many developers ignore when it comes to databases is security.
Like maintenance, it’s not directly related to the product we are considering using, but it can quickly get expensive if someone steals user data.
Most serverless databases are secure by default and come with encryption at rest and in transit; some also offer client-side encryption that can be applied to the data before it’s transmitted to the back end. All of this can be handled with basic security know-how and doesn’t require weeks of learning or hiring a security expert.
No Consistency Tradeoffs
While most serverless databases deliver some ACID conformity to allow transactions that span a whole database cluster, they often come with significant tradeoffs, especially in terms of performance. All the housekeeping they do in the background to allow a rollback later can block other clients from accessing or changing the data.
Using the Calvin algorithm, tradeoffs are kept to a minimum so clients won’t get stale data and don’t suddenly see performance drops because other clients started a transaction.
Some features aren’t part of every serverless database offering, but they are still worth highlighting.
Direct Client Access
Databases like Firebase Cloud Firestore and FaunaDB offer direct access via clients, like browsers. Others, like AWS DynamoDB, have to be accessed indirectly by an application server or AWS API-Gateway. This is one of the essential features for many front-end developers since they usually lack the experience to set up an application server.
Schemaless Data
Most serverless databases aren’t relational and thus don’t need a schema to get started. FaunaDB, for example, only needs a GraphQL schema when it’s accessed via GraphQL; the database itself is schemaless and can accommodate multiple types of data models. The same is true of Azure Cosmos DB and AWS DynamoDB.
A schemaless database has some coarse ways to structure data but generally allows us to send in what we have and save it as it sees fit. A form has a new field that needs to be displayed on another screen? No problem. Send the new data to a schemaless database, and it will be available.
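This flexibility can be illustrated with a toy in-memory “collection.” It isn’t any particular database’s API, just the general idea that documents in the same collection don’t have to share a shape:

```javascript
// Toy sketch of a schemaless document collection: no schema, no migrations,
// documents are stored exactly as the client sends them.
const collection = new Map();

function save(id, doc) {
  collection.set(id, doc); // the document is accepted as-is
}

save('user-1', { name: 'Ada' });
// A form gained a new field? Just send it along -- no migration step needed:
save('user-2', { name: 'Grace', nickname: 'Amazing Grace' });

console.log(collection.get('user-2').nickname); // Amazing Grace
```

With a traditional relational schema, the new `nickname` field would first require an `ALTER TABLE`-style migration before any row could carry it.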
Some providers offer relational databases like AWS Aurora Serverless and Google Cloud Spanner that allow for relational schema definition.
Real-Time Processing and Synchronization
Some serverless databases, like IBM Cloudant and Firebase Cloud Firestore, offer real-time synchronization of data between the front end and the back end.
This isn’t needed by all types of apps and probably not even by the majority of them, but implementing such a system isn’t trivial and having a database doing the heavy lifting helps enormously when the feature is essential.
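A toy version of the subscribe-and-notify pattern these databases implement might look like the sketch below. The real services additionally handle network transport, conflict resolution, and offline state, which is exactly the heavy lifting we’d rather not build ourselves:

```javascript
// Toy sketch of real-time synchronization: clients subscribe to a document
// and are notified whenever it changes. This only models the local pattern;
// real offerings sync changes across the network to every connected client.
class LiveDocument {
  constructor(data = {}) {
    this.data = data;
    this.listeners = new Set();
  }

  subscribe(listener) {
    this.listeners.add(listener);
    listener(this.data); // deliver the current state immediately
    return () => this.listeners.delete(listener); // unsubscribe handle
  }

  update(patch) {
    this.data = { ...this.data, ...patch };
    this.listeners.forEach((listener) => listener(this.data)); // push the change
  }
}

const doc = new LiveDocument({ status: 'offline' });
doc.subscribe((data) => console.log(data.status)); // fires now and on each update
doc.update({ status: 'online' });
```

The front end simply renders whatever the latest callback delivers; it never has to poll the database for changes.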
ACID Transactions
Most serverless databases offer some ACID conformity, but often this comes at the price of increased latency for complicated transactions. FaunaDB’s Calvin-based transaction algorithm helps scale multi-document transactions across multiple geographically distinct data centers with a minimal footprint.
While high-performance, 100% ACID conformity isn’t necessary for all kinds of applications, for some of them it can be crucial and should be considered from the start, so we don’t get overwhelmed by the contention footprint of our transaction protocols later.
FaaS Integration
Often, the databases of the big cloud providers are integrated into their FaaS ecosystems. Azure Cosmos DB, AWS DynamoDB, and Firebase Cloud Firestore can trigger back-end functions when data changes, which helps with additional work like transforming the data with the help of software libraries or third-party services.
If back-end data transformation is anticipated, we should think about choosing a database that lets us drop in a function to do so. FaaS allows front-end developers to create serverless back-end functionality without having to set up and maintain the infrastructure themselves.
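As a sketch, such a database-triggered function usually receives a change event and returns or writes transformed data. The event shape below is hypothetical; the actual format differs between DynamoDB Streams, the Cosmos DB change feed, and Firestore triggers:

```javascript
// Hypothetical shape of a database-triggered function: the database invokes
// the handler with a change event, and the handler derives transformed data.
// Real trigger payloads (DynamoDB Streams, Cosmos DB change feed, Firestore
// triggers) each have their own event format.
function onDocumentWritten(event) {
  const doc = event.newDocument;
  return {
    ...doc,
    // derived field computed on the back end, so no client can forge it:
    displayName: `${doc.firstName} ${doc.lastName}`.trim(),
  };
}

const transformed = onDocumentWritten({
  newDocument: { firstName: 'Ada', lastName: 'Lovelace' },
});
console.log(transformed.displayName); // Ada Lovelace
```

The front end only writes the raw document; the transformation runs server-side without any application server for us to operate.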
All the features serverless databases bring give front-end developers great power without requiring back-end development skills. It has never been easier to create full-stack applications as a front-end developer.
We can set up a database at the push of a button and start pumping data into it. Everything scales and is backed up effortlessly, without a second thought. Additionally, there is no upfront payment: if we don’t have users, we don’t pay for the resources, but we still have the option to scale globally if it turns out we have a hit product.
Most importantly, we don’t have to spend cycles on bug fixes and broken hardware while we could be innovating the next feature.
Feature image via Pixabay.