MongoDB sponsored this post.
Last month, MongoDB unveiled a new “serverless” option for its MongoDB Atlas database service.
The company claims that Atlas Serverless, now in general availability, can support a wide range of application requirements “with little to no initial configuration and ongoing capacity management.” It can be deployed across all three major cloud providers — Amazon Web Services, Microsoft Azure and the Google Cloud Platform — with tiered pricing and no upfront commitments.
Shum explained the serverless paradigm by comparing it with an on-demand ride-sharing service. Just as some people prefer owning a car for extended use, organizations tend to buy or lease dedicated servers, or virtual server instances, for workloads that must always be running. But many people only occasionally need a car, so buying and maintaining one isn’t worthwhile; a ride-sharing service may be a better fit financially. Likewise, serverless services make more sense for users who need access to resources on demand.
Both approaches work, and users can decide which fits them better. There’s a premium for an on-demand car service, but there’s something to be said for never hunting for parking in Manhattan or de-icing a windshield.
Who’s Making the Switch?
Shum explained that clients from every category have shown an interest in migrating to serverless, and he expanded on a few potential use cases. One specific fit is financial and accounting customers, where the bulk of the workload might take place at the end of the week, month, or quarter, making pre-provisioned, always-on infrastructure inefficient.
Development environments are another natural fit: developers work during the day and go home at night, or there are gaps between feature development. Currently the main options for these applications are dedicated clusters, or a function that spins up resources when developers start work and takes them down at day’s end. Serverless presents a new possibility where these use cases can be accommodated without users having to think about resource management.
Legacy applications can also be a good fit for serverless as part of a company’s modernization efforts. “If I’m going to modernize and build a brand new app, then I better use the latest and greatest technology,” Shum said. Mission-critical legacy applications including serverless in their modernization plans range up to massive, globally known HR, payroll, and tax systems.
A New Pricing Structure for an Emerging Marketplace
MongoDB saw the serverless options in the marketplace and concluded “it would be unfortunate if we had a serverless offering that still charged you some type of minimum or required you to get pre-provisioned servers because that would resemble a dedicated offering,” Shum said.
He’s referring to other serverless database providers that either require a minimum amount of compute to keep data available or require high-workload clients to pre-purchase a certain amount of provisioned, tiered serverless units.
Rather than modeling its offering on other serverless products, MongoDB went back to what was already working: its dedicated clusters. Research and modeling based on the dedicated clusters brought a few revelations, most notably that any linear pricing structure would effectively force a customer to switch to dedicated clusters once they hit a certain usage volume, because the cost would become exorbitant.
MongoDB was not keen on introducing a product that customers liked but that priced them out once their applications really took flight. “We want a healthy business with good margins, but fundamentally it’s about the developer experience because we’ve all been there,” said Shum.
MongoDB came up with a tiered pricing structure that bills on reads and writes per unit and starts at a $0 monthly fee. “If you generate enough load on serverless, it makes sense for us as a company to incentivize that and try to make that more economically feasible,” he explained. With that, MongoDB dropped the headline price for Read Processing Units by 66%, from 30 cents per million to 10 cents per million. More details can be found in MongoDB’s serverless pricing documentation.
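To make the shape of such a bill concrete, here is a minimal sketch of tiered, usage-based pricing. The 10-cents-per-million headline rate for Read Processing Units comes from the article; the tier boundaries and discount multipliers below are hypothetical, chosen only to illustrate how tiering avoids the linear-pricing cliff Shum describes.

```python
# Illustrative sketch of tiered, usage-based billing for read units.
# The $0.10-per-million headline rate is from the article; the tier
# sizes and discount multipliers are assumptions for illustration only.

HEADLINE_RATE = 0.10  # dollars per million Read Processing Units (RPUs)

# Hypothetical tiers: (tier size in millions of RPUs, price multiplier)
TIERS = [
    (50, 1.00),           # first 50M RPUs at the full headline rate
    (500, 0.50),          # next 500M RPUs at a 50% discount (assumed)
    (float("inf"), 0.25), # everything beyond that at 75% off (assumed)
]

def monthly_read_cost(rpus_millions: float) -> float:
    """Compute a tiered bill; with no minimum fee, zero usage costs $0."""
    cost, remaining = 0.0, rpus_millions
    for tier_size, multiplier in TIERS:
        used = min(remaining, tier_size)
        cost += used * HEADLINE_RATE * multiplier
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(monthly_read_cost(0))    # 0.0 -> no usage, no charge
print(monthly_read_cost(10))   # 1.0 -> 10M RPUs at $0.10 per million
```

Because later tiers are cheaper rather than linearly priced, the per-unit cost falls as usage grows, which is what keeps heavy users from being priced into a dedicated cluster.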
The pricing model was not only the biggest challenge in designing the serverless offering; it also added another level of complexity and granularity to how customers understand their bills. Billing would not be a flat monthly fee, but a function of reads and writes per unit.
This gave way to one of the largest technical challenges of the feature as well. The database was not set up to track these units and pass them through the cloud control plane into a billing system, because it had never needed to. Until now. The new challenge brought the team back to a fundamental understanding of the database itself.
Shum shared that very few people understand the database intimately enough to answer questions such as: How do you track the number of documents scanned and index keys used to answer a query? And then, how do you implement that tracking and pass the data upstream to the cloud control plane and into a billing system?
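The raw ingredients of this accounting are visible from the outside: MongoDB’s `explain("executionStats")` output reports `totalKeysExamined` and `totalDocsExamined` for a query. A minimal sketch, using a hard-coded sample explain document rather than a live server (the field names are real executionStats fields; the sample values and the work-unit formula are illustrative assumptions, not MongoDB’s actual RPU calculation):

```python
# Sketch: derive a work-unit count from a query's executionStats.
# totalKeysExamined / totalDocsExamined are real fields in MongoDB
# explain output; the sample document and the summing formula here
# are illustrative only, not MongoDB's billing logic.

sample_explain = {  # shape of db.coll.find(...).explain("executionStats")
    "executionStats": {
        "nReturned": 120,
        "totalKeysExamined": 120,  # index keys scanned to answer the query
        "totalDocsExamined": 120,  # documents fetched and inspected
    }
}

def query_work_units(explain_doc: dict) -> int:
    """Sum index keys and documents examined -- a rough proxy for read work."""
    stats = explain_doc["executionStats"]
    return stats["totalKeysExamined"] + stats["totalDocsExamined"]

print(query_work_units(sample_explain))  # 240
```

The hard part Shum describes is not this arithmetic but instrumenting the database to emit such counters for every operation and streaming them reliably into a billing pipeline.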
During the pandemic, meetings took place with MongoDB veterans, including the EVPs and VPs of the core database and the cloud control plane, to solve challenges like this one.
Shum said, “Everyone just sat in a Zoom room and we’re like, how do we do this? Where do we go? What do we instrument? How do we start collecting?”
Serverless is a very large initiative for MongoDB. Four full-time engineering teams are dedicated to serverless work, and at any given time another three to four teams across the core database, cloud, automated orchestration, and billing groups are also working on serverless projects.
Of this, Shum said: “I’m lucky that I benefit from two things: a) this developer groundswell of folks who really like serverless, and b) many people in concert convincing my CPO and CTO that this is an important developer movement.”
What’s Next for Serverless?
Shum confirmed that the preview period for serverless was incredibly useful, yielding huge learning gains about operating a serverless product and fleet. The MongoDB team is confident in the current release and recommends it to clients seeking a serverless option.
Currently the team is working to make serverless features closely match those of dedicated clusters. “How can we make serverless look and feel just as much like that fully featured dedicated offering, so that you don’t have to feel like you’re making a choice between the two?” Shum said. Engineering teams are now tackling this challenge to remove serverless limitations.
Over time, Shum explained, MongoDB would like customers to think of serverless more as a consumption model than as a separate product. If users don’t know their workload initially, or how volatile it is, they may consider starting with serverless; other requirements may compel them to pick a dedicated offering. The investment in serverless isn’t only in a product that MongoDB hopes developers love, but also an acknowledgement of a broader serverless developer movement that is afoot.
Currently there is a three to five year roadmap for serverless projects, with most of the projects being large scale R&D investments.
Amazon Web Services is a sponsor of The New Stack.