How Platform Engineering Is Disrupting FinOps
The FinOps movement evolved from a key selling point of the public cloud — decentralization.
A generation of technology and engineering leaders lost sleep over the fact that anyone in their organization could procure technology resources against their budget with absolutely no oversight.
This stress created a groundswell that has grown into an industry of its own. Built on a wealth of data from cloud-service providers’ bills, the complex world of pricing models and a need to keep budget under control, the FinOps movement played a pivotal role in the astronomical growth of the public cloud. Engineering leaders could rely on their FinOps tools to dive full-force into the public cloud, giving their teams unprecedented access to technology in the process.
But while FinOps tools have continued to advance on the pricing and financial side of the cloud, progress on the operational side has remained elusive.
Billing data is valuable for determining the right pricing models and understanding what your team has consumed, but it does little to prevent the activity that drives up cloud waste in the first place. At best, the insights drawn from cloud-billing data can show your teams what they ought not to do; anyone looking to prevent that activity beforehand has little recourse beyond trusting that it won't happen.
Here are a few ways the platform engineering movement is helping to bridge that gap.
Tracking Real-Time Cloud Costs by Team
A platform approach leverages decentralization to help development teams get more value from the public cloud. Anyone with access to the platform can deploy the testing, staging and production environments they need on demand.
Because the platform orchestrates and deploys those environments, engineering teams can calculate costs based on the configuration of the cloud services and the duration of the deployments.
The platform approach provides data reflecting activity as it occurs. By tracking real-time deployments, you can understand costs as they accrue and make operational adjustments before receiving the bill.
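The idea can be sketched in a few lines. This is a minimal illustration, not any particular platform's API: the instance types, hourly rates, and team names are hypothetical, and a real implementation would pull rates from the provider's price list.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical per-hour rates; real figures come from the provider's pricing data.
HOURLY_RATES = {"m5.large": 0.096, "m5.xlarge": 0.192}

@dataclass
class Deployment:
    team: str
    instance_type: str
    started_at: datetime

    def accrued_cost(self, now: datetime) -> float:
        """Cost accrued so far, derived from configuration and running time."""
        hours = (now - self.started_at).total_seconds() / 3600
        return hours * HOURLY_RATES[self.instance_type]

def costs_by_team(deployments: list, now: datetime) -> dict:
    """Roll accruing costs up by team, as they happen, not after the bill."""
    totals: dict = {}
    for d in deployments:
        totals[d.team] = totals.get(d.team, 0.0) + d.accrued_cost(now)
    return totals
```

Because the platform knows when each environment was deployed and how it is configured, these numbers exist the moment the deployment starts, rather than weeks later on an invoice.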
Prohibiting Oversized or Otherwise Unapproved Cloud Instances
A platform approach to delivering application environments also provides a way to make governance policies part of day-to-day operations.
For years, FinOps teams have struggled to enforce standards for cost-efficient cloud behavior. With configurations decentralized across git repositories and Infrastructure-as-Code tools, FinOps teams have had little recourse to know whether cloud deployments adhered to best practices until they received the cloud bill. And even then, ensuring compliance going forward is an uphill battle.
A platform that orchestrates environments from configurations defined in git gives the FinOps team a mechanism to make up ground in that battle.
Let’s take rightsizing, for example. Identifying oversized cloud instances and resizing them can bring down costs without disrupting operations.
Over time, however, these changes are subject to drift. As oversized instances creep back in, the FinOps team likely won’t know until the cloud bill arrives, at which point they are back to having awkward conversations with the teams responsible.
The platform, however, can make rightsizing a requirement to deploy an environment. By setting a policy to prohibit specific instance sizes, the platform can deny any deployment with an oversized cloud instance.
These types of policies can vary in purpose and scope, applying rules to technologies or runtimes and limiting enforcement to individual teams. But without the platform as the focal point for enforcement, governance policies are little more than hopeful requests.
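Such a policy check can be sketched as a simple gate the platform runs before any deployment. The denylist contents and team names below are hypothetical; real platforms typically express these rules in a policy engine rather than inline code.

```python
# Hypothetical governance policy: instance sizes denied org-wide ("*")
# or for specific teams. The platform evaluates this before deploying.
DENIED_SIZES = {
    "*": {"m5.24xlarge"},                       # prohibited for everyone
    "qa-team": {"m5.4xlarge", "m5.24xlarge"},   # stricter rules for one team
}

def deployment_allowed(team: str, instance_type: str) -> bool:
    """Deny the deployment if the instance size is on the org-wide
    denylist or the requesting team's own denylist."""
    denied = DENIED_SIZES.get("*", set()) | DENIED_SIZES.get(team, set())
    return instance_type not in denied
```

The key point is where the check runs: at deployment time, inside the platform, so an oversized instance is rejected before it ever accrues a cent.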
Scheduling Automated Shutdown
Terminating idle resources is another benefit of a platform approach.
Consider, for example, a software-testing team that only works standard business hours, Monday through Friday. Any testing environment left running overnight or over a weekend will incur unnecessary cloud costs, and FinOps data does not give you the tools to shut them down or prevent them from running unnecessarily in the first place.
Since a platform deploys the environments, you can set rules to deploy and terminate the VMs to support the testing team’s workday. Setting a schedule to run VMs from 8 a.m. to 8 p.m. every weekday ensures that testing environments will run when required and shut down when they’re no longer needed.
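A schedule rule of that kind reduces to a small predicate the platform evaluates on a timer. This is an illustrative sketch; the window boundaries are the 8 a.m. to 8 p.m. example above, and a real platform would also handle time zones and holiday calendars.

```python
from datetime import datetime

def should_run(now: datetime, start_hour: int = 8, stop_hour: int = 20) -> bool:
    """True when a testing environment should be up: weekdays only,
    between start_hour (inclusive) and stop_hour (exclusive)."""
    if now.weekday() >= 5:  # Saturday == 5, Sunday == 6
        return False
    return start_hour <= now.hour < stop_hour
```

The platform polls this rule and deploys or terminates the environment when the answer changes, so nothing idles overnight or over a weekend.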
Automating Consistent Tagging
Even the most well-thought-out tagging strategy is only as strong as the person who applies the tags.
Missing tags, typos and inconsistent capitalization can lead to blind spots in reporting that hold back cost-optimization efforts. How do you rightsize cloud instances if you don’t know who is deploying them?
Again, the platform can answer this problem. The self-service nature of deployment via a platform provides an opportunity to standardize tagging. Requiring a tag field populated from a centrally managed picklist eliminates the risk of missing or misspelled tags.
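In code, that validation amounts to checking each required tag against its picklist before the platform accepts a deployment. The tag keys and allowed values here are hypothetical examples.

```python
# Hypothetical centrally managed picklists: each required tag key
# maps to the set of values users can select from.
REQUIRED_TAGS = {
    "team": {"payments", "search", "platform"},
    "environment": {"dev", "staging", "production"},
}

def validate_tags(tags: dict) -> list:
    """Return a list of problems; an empty list means the tags pass
    and the deployment can proceed."""
    errors = []
    for key, allowed in REQUIRED_TAGS.items():
        if key not in tags:
            errors.append(f"missing required tag: {key}")
        elif tags[key] not in allowed:
            errors.append(f"invalid value for {key!r}: {tags[key]!r}")
    return errors
```

Because the allowed values live in one place and free-text entry is off the table, every deployment arrives in the billing data with tags that reporting tools can actually group on.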
When paired with visibility into real-time deployments, this provides a valuable way to monitor cloud costs with the context needed to intervene.
FinOps is still a pillar of the modern cloud world. When paired with an environment orchestration platform, it will only be more valuable.