LaunchDarkly sponsored this post.
I think a lot about the junction point of infrastructure and software. The place where the infrastructure runtime — virtual machine, Kubernetes; heck, I even think serverless lives there — gives way to the software that’s being presented to users who consume it. Historically, the software side of that switch track is where the idea of “features” lives.
When we think of features, we immediately think of things people are going to use inside our application. When we think of feature management, we think of how we roll out new features for our application.
Full disclosure: I’m a transplant in the software development world. Much of my career background has been in the infrastructure realm. I have a very different view on what a feature is. I’ve seen teams re-architect entire platforms chasing modernization goals. These modernization goals didn’t always result in big changes to the application.
Oftentimes they were about setting up the application for the future: breaking apart the monolith into microservices. Sometimes it was the infamous cloud migration. Some level of functionality was always being chased, but again, it wasn’t always about application-level features being implemented.
On the other hand, there were always stakeholders who cared about the new functionality and opportunities that existed as part of the infrastructure side of the change.
What I’m getting at here is that a “feature” is just a different way of presenting some level of “functionality,” and, as I mentioned above, that “functionality” can mean so much more than simply a new tab showing up in our web browser.
What Defines a Feature?
Let’s look at the example of an application team adopting a secrets management platform within their application.
Does this qualify as newly added functionality? For the security team, does the application being integrated with a secrets management platform qualify as a feature? We’ve tangibly improved the security footprint of the application and added new security-focused enhancements to it. What if we can use the concepts we use every day inside the feature management world to more intelligently roll this feature out?
Targeting rules allow us to isolate this newly added capability specifically to QA testers. If we see errors coming from user connections, we can immediately disable this functionality without the need for a redeployment (the idea of a kill switch).
If our QA testers report all is well, we can gradually continue rolling this functionality out, achieving the concept of progressive delivery for a capability that we wouldn’t typically present as a user-facing feature. Ultimately, however, it was a feature for the security team.
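As a minimal sketch of what that looks like in code — using an in-memory flag store and hypothetical names (`use-secrets-manager`, the `qa` group, the lookup helpers) rather than any specific feature management SDK — a targeting rule plus kill switch might be:

```python
# Minimal sketch of a flag-guarded backend change: a targeting rule plus a
# kill switch. In practice the flag store lives in a feature management
# service and updates without a redeploy; here it's a dict for illustration.

FLAGS = {
    "use-secrets-manager": {
        "enabled": True,          # kill switch: flip to False to disable instantly
        "target_groups": {"qa"},  # targeting rule: only QA testers get the new path
    },
}

def flag_enabled(flag_key: str, user: dict) -> bool:
    """Evaluate a flag for a user: kill switch first, then targeting rules."""
    flag = FLAGS.get(flag_key)
    if flag is None or not flag["enabled"]:
        return False  # unknown or killed flags always fall back to the old path
    return user.get("group") in flag["target_groups"]

def read_from_secrets_manager(name: str) -> str:
    return f"secret:{name}"   # stand-in for a real secrets manager call

def read_from_env_file(name: str) -> str:
    return f"env:{name}"      # stand-in for the legacy lookup

def fetch_db_password(user: dict) -> str:
    if flag_enabled("use-secrets-manager", user):
        return read_from_secrets_manager("db-password")  # new, flag-guarded path
    return read_from_env_file("DB_PASSWORD")             # existing behavior
```

A QA tester hits the secrets manager path while everyone else keeps the legacy behavior, and flipping `enabled` to `False` routes everyone back to the old path with no redeployment.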
Another tangible example is the previously-mentioned infamous cloud migration scenario. Most customers wouldn’t care if an application is running on Kubernetes, a virtual machine or as a serverless function. It’s an invisible thing to the people using the application in question.
That being said, Kubernetes provides a lot of benefits for platform teams. Scalability, the control loop, immutable deployments, even the concept of infrastructure as code are all features that the platform team typically cares a lot about.
When the example company moves from their traditional, virtual machine deployment to this Kubernetes-centric one, does that qualify as new functionality being presented to the platform team? Isn’t that a feature?
What if we can take those same concepts we mentioned above around targeting rules, kill switches and progressive delivery, and apply them to the users who “connect” to our new Kubernetes cluster? In this case, we can use “feature management” to orchestrate the way we roll out an entire new platform to end-user teams.
For a final example, many of us have been a part of, or heard the horror stories of, a database migration scenario. Our company is tired of paying for a huge virtual machine that runs our Postgres database, so we’re going to move to a managed, cloud-hosted offering instead.
I don’t have enough fingers or toes for all the times I’ve watched a database migration go sideways, but what if we can drastically reduce that risk by applying the ideas we’ve described in this section?
What if we can send 20% of our users to the new cloud-hosted database and validate whether connections are successful? What if we can send users in California to a cloud-hosted US-West database, and users in Boston to a US-East one? In this scenario, we migrate one of the scariest parts of a platform migration, the database, via feature management.
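Both of those “what ifs” can be sketched with a deterministic bucketing function — the same trick feature management tools use so a given user always lands in the same cohort. The hostnames, regions and 20% figure below are assumptions for illustration:

```python
import hashlib

# Hypothetical hosts and regions; the rollout percentage mirrors the 20%
# example in the text.
LEGACY_DB = "postgres-vm.internal"
CLOUD_DBS = {"us-west": "db.us-west.cloud", "us-east": "db.us-east.cloud"}
ROLLOUT_PERCENT = 20

def bucket(user_id: str) -> int:
    """Deterministically map a user to a 0-99 bucket so decisions are sticky."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def database_for(user_id: str, region: str) -> str:
    # Only users in the rollout cohort AND in a migrated region hit the
    # cloud-hosted database; everyone else stays on the legacy VM.
    if bucket(user_id) < ROLLOUT_PERCENT and region in CLOUD_DBS:
        return CLOUD_DBS[region]
    return LEGACY_DB
```

Because the bucket is derived from a hash of the user ID rather than a random draw, a user who was routed to the new database yesterday is routed there again today, and widening the rollout is just a matter of raising `ROLLOUT_PERCENT`.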
Evolving the way we look at features to be more focused on the functionality we’re exposing, instead of just the software we’re deploying, allows us to significantly enhance the way we deliver not just software, but platforms to users and teams.
Growing Beyond Software Delivery
Feature management has become the evolutionary next step in the entire “delivery life cycle,” not simply the software delivery one that we hear so much about when we’re talking about adopting agile development. This evolution is why feature management doesn’t fully live in the software side of that junction point I mentioned earlier.
Instead, it lives alongside continuous delivery, orchestrating that junction point and helping safely and progressively get those capabilities out there. In this scenario, feature management becomes the engine that powers how we release new functionality to the users who care about it the most.
Reframing the way we look at “feature management” to be more aligned with the idea of delivering functionality allows us to gain a lot more control over the way that delivery happens.
In our industry, we reference the idea of progressive delivery a lot — this idea of controlling our blast radius by gradually releasing functionality through different controls, versus the alternative of turning it on, crossing our fingers and waking up in cold sweats for the next few days hoping that nothing went wrong. We talked a little about these controls in the examples above, but to recap some potential ideas in this space:
- Targeting individual QA users with a change relating to which database is active and sending them to the new database to validate functionality.
- Geographically targeting users and having them consume a different API system based on their region.
- Implementing some form of a middleware component, like secrets management tooling, and testing performance between multiple control groups.
- Migrating from on-premises systems into cloud infrastructure living across regions.
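The third control above — comparing control groups around a new middleware component — can be sketched the same way. This is a generic hash-based variant assignment, not any vendor’s experimentation API, and the experiment and backend names are hypothetical:

```python
import hashlib

def variation(user_id: str, experiment: str,
              variants=("control", "treatment")) -> str:
    """Deterministically assign a user to a variant by hashing user + experiment.

    Hashing the experiment key in as well means the same user can land in
    different groups for different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def secrets_backend(user_id: str) -> str:
    # Route the treatment group through the new middleware so its latency and
    # error rate can be compared against the control group's existing path.
    if variation(user_id, "secrets-middleware") == "treatment":
        return "vault-middleware"  # new path under test (hypothetical name)
    return "env-file"              # existing behavior
```

With assignments recorded per request, comparing performance between the two groups is a straightforward aggregation over the logged variant labels.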
Codification of Functionality
We’re used to “codifying” features in our applications. We want new account functionality, or a new part of our store website. We want to change to a dark mode or use vector images instead of traditional images. Ultimately, the infrastructure our platforms run on is a key part of modernizing our applications.
Understanding that the migration to new infrastructure ultimately brings new functional possibilities to the application allows us to take a broader look at how we build, deploy and release features to different groups of people (users, operators, developers and so on).
The delivery and release of these platforms are their own kind of features and functionality, and leveraging tools like LaunchDarkly to progressively deliver these changes to the people who care about them the most is how you get to a place where you are deploying often and releasing when you are ready.
This path is how you start to focus on shipping value, while you drift between the junction of infrastructure and software.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: LaunchDarkly.
Photo by David Bartus from Pexels.