Serverless Helps Developers Focus on Differentiating Features
Developers are in a never-ending struggle to free themselves from mundane tasks so they can focus on what they do best: write code. According to our own research, developers spend 41% of their day on infrastructure maintenance instead of innovation or bringing new products to market.
In an ideal world, developers wouldn’t worry about non-coding tasks like managing a host or container, provisioning servers, or anything that has to do with bare metal. This is why we created MongoDB Atlas serverless instances, which combine the characteristic flexibility of serverless architecture with an application database platform built on the document model.
What Is Serverless Architecture?
The rise of serverless computing can be traced back to the introduction of serverless cloud functions like AWS Lambda, which allowed developers to start and stop their applications with simple API calls. Developers could run code without having to provision any hardware for it. This concept was then extended to other parts of the stack, including databases.
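The handler shape popularized by these platforms can be illustrated with a short Python sketch. The event payload and function body here are made up for illustration, but the `handler(event, context)` signature is the one AWS Lambda expects for Python functions:

```python
import json

def handler(event, context):
    """Entry point in the shape AWS Lambda expects for a Python function.

    The platform invokes this on demand; the developer provisions no
    hardware. 'event' carries the trigger payload and 'context' carries
    runtime metadata (unused in this sketch).
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoked locally here with a stub event; in production, the cloud
# provider supplies both the event and the context object.
print(handler({"name": "serverless"}, None))
```

The developer writes only this function; deciding where it runs, and on how many machines, is the platform's job.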
Serverless architecture enables developers to build applications in the cloud without spinning up a server. While the application does, in fact, run on a server, the deployment and management of the server is abstracted away from application development.
Serverless architecture frees developers from server-provisioning concerns, such as scaling up to meet increasing workloads or over-provisioning and paying for unused resources. Serverless architectures dynamically use only what they need, and customers are billed only for what they use.
With serverless architecture, infrastructure configuration decisions and capacity management are abstracted away. If a developer has an idea for an app, serverless architecture absolves them from the need to plan the resources they may need on day one. They can just choose their database and their development stack and start developing.
Once an app is live and attracting customers, developers would typically have to fine-tune resources on an ongoing basis, deciding whether to scale up or down as market or customer demands shift. With serverless architecture, that fine-tuning happens automatically, and there are always enough resources to meet demand.
Serverless architecture has two defining characteristics:
- Elastic scaling — You can scale up and down based on your workload, including to zero.
- Consumption-based pricing — Because it scales based on what you need, you only pay for what you use. You don’t waste resources on idle hardware.
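Consumption-based pricing can be made concrete with a back-of-the-envelope estimate. The rates and the `monthly_serverless_cost` helper below are hypothetical, not any provider's real price sheet, but the shape of the math is typical: a per-request charge plus a per-GB-second compute charge, and zero cost when idle.

```python
# Hypothetical rates for illustration only; real provider pricing varies.
PRICE_PER_MILLION_REQUESTS = 0.20   # dollars
PRICE_PER_GB_SECOND = 0.0000166667  # dollars

def monthly_serverless_cost(requests, avg_duration_ms, memory_gb):
    """Estimate a consumption-based bill: pay per request and per
    GB-second of compute actually used; idle time costs nothing."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = requests * (avg_duration_ms / 1000) * memory_gb
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# A spiky workload: 2 million requests at 120 ms each, with 128 MB of memory.
cost = monthly_serverless_cost(2_000_000, 120, 0.125)
print(f"${cost:.2f}")
```

Note that a month with zero traffic costs zero, which is the key contrast with paying for an always-on, fixed-size server.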
Serverless architecture lowers the barriers to entry for app development. And, over time, there’s less ongoing maintenance and less need to continually optimize to ensure there are adequate resources to power the application.
Gains from Serverless Architecture
By abstracting away the hardware layer, serverless architecture resembles a NoOps approach to application development. With little to no provisioning or capacity management to worry about, developers can accelerate turnaround times for new functions and services. It’s not about outsourcing the hardware layer as much as it is about automating it.
If this transformation sounds familiar, it should: it’s the same shift already underway as organizations move away from managing hardware in favor of instances, and as they replace monolithic applications with microservices, moving the development abstraction from application to service. Serverless is an evolution in both infrastructure management and software development. It removes infrastructure considerations entirely and moves developers to a mindset focused on data and functions.
In a serverless architecture, the function is usually triggered by an event. For example, a single user sign-in can trigger multiple functions:
- The login attempt triggers an authentication function.
- If successful, it emits a login event into a queue.
- That login event is picked up from the queue, triggering a user profile function, an offer function, etc.
Each of these can be designed and coordinated as serverless functions in an event-driven architecture.
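The sign-in flow above can be sketched as a small in-process simulation. The function names, the credential check, and the in-memory queue are all illustrative stand-ins; in a real deployment, each function would be deployed independently and the queue would be a managed service such as SQS.

```python
from queue import Queue

# In-memory stand-in for a managed message queue.
event_queue = Queue()

def authenticate(username, password):
    """Auth function: on success, emits a login event to the queue."""
    if password == "correct-horse":  # stand-in for a real credential check
        event_queue.put({"type": "login", "user": username})
        return True
    return False

def load_user_profile(event):
    """User profile function, triggered by a login event."""
    return f"profile loaded for {event['user']}"

def compute_offers(event):
    """Offer function, also triggered by the same login event."""
    return f"offers computed for {event['user']}"

# The login attempt triggers the authentication function...
authenticate("ada", "correct-horse")

# ...and downstream functions fire as the login event is picked up.
results = []
while not event_queue.empty():
    event = event_queue.get()
    if event["type"] == "login":
        results.append(load_user_profile(event))
        results.append(compute_offers(event))

print(results)
```

The coordination lives in the events, not in any central controller: each function only knows about the event that triggers it, which is what makes the pieces independently deployable and scalable.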
Serverless Use Cases
Serverless computing is well suited for sudden surges in demand. Large workloads that run infrequently or unpredictably are also ideal for serverless architecture. Serverless helps you avoid having to deal with unknown factors that naturally occur when developing apps, like when you don’t know what scale of workload to expect. It lets you break down application logic into discrete functions and build a small piece of custom functionality in a workflow of other services. And it allows you to host an entire application backend on a managed platform and connect any number of end devices — think mobile and IoT — to that backend.
There are some scenarios where serverless architecture should not be used, such as when you’re not already in the cloud. For high-performance computing, it may be cheaper to bulk-provision the servers you need to handle the workload. Long-running functions can also drive up the cost of serverless computing. And latency can be an issue when a serverless function spins up from a cold start.
Not too long ago, all cars came with manual transmissions. Then, in the ’50s and ’60s, automatic transmissions became more popular, while manual transmissions remained the choice for an increasingly smaller share of holdouts. In the not too distant future, it’s conceivable that serverless architecture will be the equivalent of automatic transmission for developers once they realize how easy it is to start building and how little ongoing maintenance is required. You can also imagine a future analogous to automated cars — a NoOps future — where hardware and capacity management are entirely abstracted away, and developers focus exclusively on building features, microservices, and true differentiation. Until then, a lot of organizations will continue to shift more toward serverless computing for rapid application development, event-based functions and surges in demand, while others will insist on dedicated containers for workloads with predictable usage patterns.