
Rise of FinOps: CAST AI and Port Illuminate Your Cloud Spend

The complexity and constantly rising cost of cloud has given rise to FinOps tooling, which looks to reduce cost and create transparency.
Apr 20th, 2023 2:00am
Feature image via Pixabay.

AMSTERDAM — As your company grows, so does your cloud spend, and so does your carbon footprint. In the tech industry, we just kind of take this as a given: there is simply too much power concentrated among the big three cloud computing providers, Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP).

FinOps is the technical, financial and business discipline that looks to change that by creating transparency across silos and shifting financial accountability left to the developers. With it, a whole slew of tools are cropping up to help facilitate your FinOps. This is especially true at the Cloud Native Computing Foundation‘s KubeCon+CloudNativeCon Europe, because Kubernetes clusters often carry the highest cloud cost, with the least insight into them.

Just this week, Microsoft announced an integration with the open source Kubernetes cost management project OpenCost, which is dedicated to cost monitoring for cloud native environments.

The rise in FinOps for Kubernetes tooling will only continue. For Kubernetes workloads, FinOps is about rightsizing resources within your cluster, so that scaling up and down doesn't come with waste — wasted resources and excessive carbon output. And, with startups CAST AI and Port, it’s about taking a lot of this decision-making away from the developers, automating and optimizing that cloud spend, and reducing that cognitive load.
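At its core, rightsizing is a comparison between what a workload requests and what it actually uses. A minimal Python sketch of the idea (the function, headroom factor and numbers here are hypothetical illustrations, not any vendor's actual algorithm):

```python
# Hypothetical rightsizing sketch: recommend a resource request from
# observed peak usage plus headroom. Purely illustrative.

def rightsize(observed_peak: float, current_request: float,
              headroom: float = 0.2) -> float:
    """Recommend a request: observed peak plus headroom,
    never raised above what is currently requested."""
    recommended = observed_peak * (1 + headroom)
    return min(recommended, current_request)

# A pod requesting 2.0 CPU cores while peaking at 0.5 is oversized:
print(rightsize(observed_peak=0.5, current_request=2.0))  # ~0.6
print(rightsize(observed_peak=1.9, current_request=2.0))  # 2.0 (capped)
```

The gap between the request and the recommendation is the waste — both the money and the carbon — that rightsizing tools aim to reclaim.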

CAST AI Cloud Automation AI Platform

When Laurent Gil and his cofounders launched their previous cybersecurity SaaS company Zenedge, they learned this lesson the hard way. The team’s monthly AWS cloud bill went from $1,000 in 2015 to almost $1.2 million in 2018. They experienced a roughly 10% increase just about every month.

“Just describing the life of a SaaS product. We were using our cloud to deliver our product, so the more customers we had, the more costs we had. That’s normal,” Gil told The New Stack. “The frustration was not being able to understand what to do about it.”

His team would receive 100-page AWS bills explaining what they were spending, but “It doesn’t tell you, is that the right amount? Are we spending the right thing? Are we overspending? Underspending?” They knew what they were spending, but he continued, they didn’t know what to do about it.

So, after Zenedge was acquired by Oracle and they worked there for a while, Gil, Yuri Frayman and Leon Kuperman decided to found CAST AI in 2020 — because they couldn’t be the only ones with that issue. “We did not want to be another cost reporting tool. We really, really wanted to build an engine that will automatically look inside your cloud account, and reduce automatically, and rightsize automatically your cloud cost,” he clarified.

Now, in about a minute, Gil says, CAST AI is able to analyze all your Kubernetes clusters across the big three cloud providers and say:

  • This is your cost.
  • This is what your cost could be.
  • Push a button to activate.

Then, the CAST AI engine goes to work, automatically optimizing within the clusters in real time. Not just once, but every few seconds, cutting CPU usage and cloud cost by 40% on average.

“You have to do this every few seconds because your traffic is never linear,” Gil explained, like when your main user base is asleep. He says the price drop is almost instant.

Besides this automated rightsizing, CAST AI also features pricing arbitrage.

“When a developer deploys an app, they have to say, how many? What kind of machine do I want? There are roughly 600 different types of machines on AWS. So the developer is asking: Which one do I take to deploy my application?” Gil argues that 99% of the time, they take the machine they already know, like the Amazon EC2 M6g instances. “That’s the only reason and there are 600 different types of machine. Maybe there’s another one that has the same amount of compute that is actually cheaper.”

The CAST AI engine takes over that decision-making as well.

“It takes over entirely the management of your cloud accounts, cloud infrastructure for all your applications, and it has been trained to decide which VM, which machine, is the most cost-effective for the workload right now. And then think of this as ‘rinse and repeat’ every few seconds. Because your application is growing. You need to add more machines. Which ones do you turn off? Which ones do you add?” Gil explained. It will also automatically move some of the on-demand workloads to spot instances. About a third of customers are multicloud, but he predicts he will see an increase in that soon enough.
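The pricing arbitrage Gil describes can be pictured as a constrained search over the instance catalog: among all machine types that satisfy the workload's requirements, pick the cheapest. A toy sketch, with an invented catalog and invented prices (this is not CAST AI's engine, just the shape of the decision):

```python
# Toy instance-picker: cheapest machine type that covers the workload's
# CPU and memory needs. Catalog names and prices are invented.

CATALOG = [
    # (name, vCPUs, memory GiB, $/hour) -- hypothetical figures
    ("general-large",  4, 16, 0.192),
    ("compute-large",  4,  8, 0.170),
    ("memory-large",   4, 32, 0.252),
    ("general-xlarge", 8, 32, 0.384),
]

def cheapest_fit(cpu_needed: int, mem_needed: int):
    """Return the cheapest catalog entry that covers the request,
    or None if nothing fits."""
    candidates = [m for m in CATALOG
                  if m[1] >= cpu_needed and m[2] >= mem_needed]
    return min(candidates, key=lambda m: m[3]) if candidates else None

# A 4-vCPU / 8 GiB workload doesn't need the familiar general-purpose
# machine; in this toy catalog the compute-optimized one is cheaper:
print(cheapest_fit(4, 8)[0])   # compute-large
print(cheapest_fit(4, 16)[0])  # general-large
```

Run this continuously as workloads grow and shrink — "rinse and repeat every few seconds" — and you get the automated arbitrage the article describes.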

In the roughly 14 months since launch, CAST AI's few hundred customers have saved an average of between 50% and 75% of their cloud cost.

When asked if the Big Three resent CAST AI for taking away revenue, Gil responded that their biggest source of customers is actually referrals from AWS. Their mostly SaaS provider customer base is then reinvesting the money saved into moving more workloads onto the cloud, accelerating their app modernization.

Since the writing of this piece, CAST AI has expanded its AI reach to create near real-time responses to cloud usage, taking orgs from just FinOps to DevFinOps.

Port Internal Developer Portal Embraces FinOps Transparency

The next step in embracing FinOps is the accountability side of things. How do you understand which team or service is using which cloud services? When possible, this means examining the 10-plus-step process a developer needs to create and deploy a microservice — a process which, as Gil pointed out, involves deciding which machine is best to put it on, along with other cloud native knowledge that app developers really shouldn’t need to worry about.

The cloud native cognitive load for developers is ever-increasing, in part because of Kubernetes’ endless learning curve. That’s an added risk when companies try to adopt FinOps without careful consideration and planning.

“Most cost reporting for Kubernetes is around Kubernetes. So you can see numbers with regards to deployment, service, namespace, clusters, etcetera. The problem is that it doesn’t provide the context you really need, which is to see costs by service, team, customer or environment. Providing cost reports in the context of the business is what’s needed, and — just like everything else with Kubernetes — it’s about an abstraction layer that helps developers do their job without digging around in DevOps tools or reports,” Zohar Einy, CEO of Port, told The New Stack.

Port is an internal developer portal (IDP) that this week announced it is extending its use cases to FinOps. For Einy, since everyone in the company is directly or indirectly dependent on the software being developed, it becomes necessary for everyone to understand their expenses, connecting back to the business.

The challenge, as previously mentioned, is that cloud provider reports, while extensive, are largely meaningless to all three pillars of FinOps: finance, engineering and business. To measure cloud spend effectively, reporting needs to drill down to the team or microservice level.
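Mechanically, that drill-down is an aggregation problem: raw cost line items arrive tagged with Kubernetes-style labels, and the portal rolls them up by team, service or environment. A hedged sketch of the idea — the label names and dollar figures below are invented, not Port's or Kubecost's actual data model:

```python
# Hypothetical cost allocation: roll per-pod cost records up to the
# team or service level using their labels. Data is invented.
from collections import defaultdict

records = [
    # (namespace, labels, monthly cost in $) -- illustrative only
    ("prod", {"team": "payments", "service": "checkout"}, 1200.0),
    ("prod", {"team": "payments", "service": "billing"},   800.0),
    ("prod", {"team": "search",   "service": "indexer"},  1500.0),
]

def cost_by(records, label):
    """Sum cost per value of the given label (e.g. 'team')."""
    totals = defaultdict(float)
    for _, labels, cost in records:
        totals[labels.get(label, "unlabeled")] += cost
    return dict(totals)

print(cost_by(records, "team"))
# {'payments': 2000.0, 'search': 1500.0}
```

The same records, grouped by a different label, answer a different stakeholder's question — which is exactly the business-context view the per-cluster reports lack.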

The necessary data is there, Einy assures, it’s just scattered across your existing cloud reporting tools. Port now integrates with tools like Kubecost to offer cloud reporting via a common, visual language across these silos.

“We see DevOps teams as a major beneficiary of IDPs, and FinOps specifically, since, instead of organizing FinOps reports, or trying to manage all of their DevOps asset information, they can use the software catalog in the internal developer portal,” Einy said. FinOps is a logical extension of the common purpose shared with platform engineering: reducing developer cognitive load while increasing control and transparency.

Is FinOps the Fastest Path to GreenOps?

We already know that cloud cost is the closest proxy we have for the environmental impact of the software development lifecycle. After all, data centers are the driving force behind the tech industry having the fastest-growing carbon footprint of all, and there are no signs that growth will slow any time soon.

Using a no-code developer portal like Port enables companies to represent their regions — there are certainly more or less environmentally friendly ones — and providers, while tying them to expense. FinOps becomes a natural and important driver of GreenOps.

CAST AI is commissioning a study in an attempt to measure its positive environmental impact. After all, “We eliminate things that are switched on, but that we don’t use. If you think of your footprint using 100 servers, with us, on average 40 servers are going to be shut down because they are not necessary anymore,” Gil said.

“These are things you don’t need and don’t use.” That’s definitely the theme of the first stage of GreenOps — and FinOps optimization to boot. Can’t wait to see what the next steps are.

Check back often this week for all things KubeCon+CloudNativeCon Europe 2023. The New Stack will be your eyes and ears on the ground in Amsterdam!

TNS owner Insight Partners is an investor in: The New Stack, Pragma, Simply.