Case Study: How Fabric Uses Serverless to Disrupt the Life Insurance Market

5 Sep 2018 8:52am

Fabric is an insurance startup designed for new parents, one that aims to become “the place where all families come to start their financial life,” said company co-founder and Chief Technology Officer Steven Surgnier. For now, the company offers a free will service and “simple, fast accident and life insurance.” Both products are provided entirely using serverless technologies.

Choosing Serverless

“We have one server in our development environment for staging, but in production we are completely serverless,” said Surgnier.

Surgnier said that as an early stage startup, decisions come down to the ability to execute quickly and with minimal complexity. “We need to decrease the time from the initial idea to getting the end product in the customer’s hands,” said Surgnier. Serverless has enabled that speed of product development to be realized.

In addition, as a small team, Fabric has not needed to splinter off its engineering team into too many specific roles early in its business life. “We are a small team, so we don’t have a dedicated DevOps team, we just have full stack engineers,” said Surgnier. “We look for engineers with good system design skills: understanding how the entire system works is pretty critical. Because we are using foundational APIs from AWS like CloudFormation, the toolbox that we use is AWS’ toolbox. So it is not like our Ops team has said, ‘here is the database you will use.’ We expect our engineers to be familiar with AWS’ technologies. There is not a lot of hand holding.”

Surgnier said this requirement for the team has gotten easier over the last few years. As an early adopter of AWS Lambda, starting over four years ago, Fabric engineers started off with a minimal serverless tool palette to paint with. Relying on serverless, especially prior to the more recent generation of tooling that has become available, gave Fabric’s engineers a chance to build robust, standardized processes. “Over the past years, we have developed good internal libraries and good internal practices. For example, we have good internal methods for standing up new APIs, so we are able to leverage that painful upfront work, we don’t have to do it all the time,” said Surgnier.

Surgnier said the team specifies its APIs in AWS CloudFormation, which allows Swagger (aka the OpenAPI specification) as an export, though the team has not leveraged that much yet. When defining an API, they stand up the API method and resource, then mock it as quickly as possible so that the endpoint can be exposed with mock data. Then they are able to wire up the backing function in Lambda, Amazon Web Services’ serverless compute offering.
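As a rough sketch of that flow, a CloudFormation template can declare an API Gateway method backed by a MOCK integration, so the endpoint serves canned data before any Lambda function exists. The resource and API names below are hypothetical, not Fabric’s:

```yaml
# Hypothetical CloudFormation fragment: a GET method wired to a MOCK
# integration, so the API can return canned data before the Lambda exists.
Resources:
  UsersResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      RestApiId: !Ref MyApi              # assumed to be defined elsewhere
      ParentId: !GetAtt MyApi.RootResourceId
      PathPart: users
  GetUsersMock:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref MyApi
      ResourceId: !Ref UsersResource
      HttpMethod: GET
      AuthorizationType: NONE
      Integration:
        Type: MOCK
        RequestTemplates:
          application/json: '{"statusCode": 200}'
        IntegrationResponses:
          - StatusCode: 200
            ResponseTemplates:
              application/json: '{"users": []}'
      MethodResponses:
        - StatusCode: 200
```

Once the contract settles, the MOCK integration is swapped for one that invokes the real Lambda function.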

Cost Versus Velocity

Surgnier and Fabric definitely sit on the velocity side of the fence as far as choosing serverless as their architectural technology. “Honestly, I haven’t been too price sensitive. We deal with a lot of PDFs, like customer applications for life insurance, and consuming that data has to be designed into our workflow. Those functions are written in Java and that is probably our slowest API call. Overall, it’s all about executing faster and getting new product out quickly. The cost savings of Lambda are a nice to have.”

Surgnier said the time to get the product into customers’ hands is faster in part because the code on the backend is focused on the business logic. “There is less code and interaction with infrastructure. So there is more opportunity to focus on the contract and the semantics of the API,” Surgnier explained.

Managing Security

Surgnier points to two elements of serverless security that have changed the way the team works from traditional software design: identity access management (IAM) and the separation of security concerns in the API gateway.

“In serverless, there is no server that is sitting in front of a database and is the guard to that database, which is how traditional RESTful microservices would look,” Surgnier stepped through the architecture. “We have a DynamoDB table and S3. S3 talks directly to the DynamoDB table, so then we go to IAM to set attributes to allow access to that table. That is a distinct phase of Lambda. You need to define that access to data upfront, so you are going to need to set the minimum access permissions. By default, our Lambda functions have zero privileges. Then we have ‘known goods’ starting templates. Then it is each engineer’s responsibility to set the appropriate responsibility levels for IAM.”
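A hedged sketch of what such a least-privilege grant might look like in a CloudFormation template, starting from the zero-privilege default and adding only the actions one function needs (the role, policy, and table names here are hypothetical):

```yaml
# Hypothetical IAM role for one Lambda function: no privileges beyond
# what is explicitly listed, scoped to a single DynamoDB table.
ApplicationsFunctionRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal: { Service: lambda.amazonaws.com }
          Action: sts:AssumeRole
    Policies:
      - PolicyName: applications-table-access
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:GetItem
                - dynamodb:PutItem
              Resource: !GetAtt ApplicationsTable.Arn  # assumed defined elsewhere
```

A template like this can serve as one of the “known goods” starting points Surgnier describes, with each engineer widening the `Action` list only as a function requires.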

Surgnier also points to the API gateway as another security aspect. “One of the features of API Gateway is the separation of security concerns,” explained Surgnier. “You can encapsulate the security properties in a separate Lambda function and that Lambda function can be 100 percent focused on security logic. So the security team can be focused just on that and then you have the backend team focused on business logic. I expect that at Fabric, we will be moving in that direction as we grow our team.”
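A minimal sketch of such a security-only function, assuming a hypothetical token check: an API Gateway custom authorizer receives the caller’s token and must return an IAM policy allowing or denying the invocation. The token store below is a stand-in, not a real implementation:

```python
# Hypothetical API Gateway custom authorizer: all security logic lives here,
# completely separate from the business-logic Lambdas it protects.
VALID_TOKENS = {"secret-token": "user-123"}  # stand-in for a real token store

def handler(event, context):
    """Return an Allow/Deny IAM policy for the token on the incoming request."""
    token = event.get("authorizationToken", "")
    principal = VALID_TOKENS.get(token)
    effect = "Allow" if principal else "Deny"
    return {
        "principalId": principal or "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event.get("methodArn", "*"),
            }],
        },
    }
```

The backend functions never see the token logic; they only run once the gateway has accepted the returned policy.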

That separation of concerns also holds for data and analytics. With serverless, it is possible to decouple the two: engineers can set up a DynamoDB stream and consume data changes from it, so the business logic never needs to remember to emit analytics data itself.
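As a hedged sketch, a stream-consuming Lambda function might look like the following. The record shapes follow the DynamoDB Streams event format; the analytics sink is hypothetical (here just a print):

```python
# Hypothetical analytics consumer: reads change records from a DynamoDB stream,
# so business-logic code never has to emit analytics events itself.
def extract_events(stream_event):
    """Flatten raw DynamoDB stream records into simple analytics events."""
    events = []
    for record in stream_event.get("Records", []):
        new_image = record.get("dynamodb", {}).get("NewImage", {})
        events.append({
            "action": record.get("eventName"),  # INSERT / MODIFY / REMOVE
            # Stream attribute values are type-annotated, e.g. {"S": "value"}
            "fields": {k: list(v.values())[0] for k, v in new_image.items()},
        })
    return events

def handler(event, context):
    for analytics_event in extract_events(event):
        print(analytics_event)  # stand-in for writing to Kinesis, S3, etc.
    return {"processed": len(event.get("Records", []))}
```

Because the stream delivers every table change, the analytics pipeline stays complete even as new business-logic functions are added.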

The Road Ahead

“All of this points out that Ops has not gone away, it is just that there is a different class of problems now,” said Surgnier. “We really want a backend engineer to focus on the business logic, but who is also able to focus on Lambda invocations. To ask, how big of a slice of the concurrency pie does this Lambda function need to reserve for itself? What is the appropriate write-throughput and read-throughput for a DynamoDB table? For some of this, we are using CloudWatch internally for logs and metrics. For data engineering, we are using Kinesis and Athena.”
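Both of those knobs are set per-resource. A hypothetical sketch of the relevant CloudFormation properties (function and table names are illustrative, and unrelated required properties are elided):

```yaml
# Hypothetical settings: one function's reserved slice of the account's
# concurrency pool, and explicit provisioned throughput for one table.
ApplicationsFunction:
  Type: AWS::Lambda::Function
  Properties:
    # ... handler, runtime, code, and role omitted ...
    ReservedConcurrentExecutions: 25
ApplicationsTable:
  Type: AWS::DynamoDB::Table
  Properties:
    # ... attribute definitions and key schema omitted ...
    ProvisionedThroughput:
      ReadCapacityUnits: 10
      WriteCapacityUnits: 5
```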

Surgnier also plans to continue building on Fabric’s identity access management processes. As engineering grows large enough to warrant a dedicated security team, Surgnier sees a point where that team would own the IAM permissions of a CloudFormation template. “And ideally that would be generated from source code and not a bunch of JSON files paired with custom authorizers in API Gateway, which are Lambda functions,” Surgnier added.

Overall, Surgnier sees serverless as the best decision for Fabric as a startup entering an established market and revitalizing the product offerings available. The company is focusing on listening to a particular customer market (young families) and meeting their needs, which have been sidelined in a larger, homogenized industry approach.

Surgnier summarized Fabric’s approach: “We are trying to make it easy to make good decisions for your family by default. It shouldn’t be hard to protect your family and you should feel good about it when you are done. We want that experience to be reflected in the brand. So we focus on being better at listening to customers. Every great company knows how to do that well. That just takes compassion and energy, it doesn’t take servers.”

Feature image: Screenshot from the Fabric website.

This post is part of a larger story we're telling about serverless technologies.

Get the full story in the ebook