Enterprise Application Cost Savings Using Serverless Computing

This elastic IT supply model can expand to microservices in a serverless model. This takes away the need to procure, provision, manage, upgrade, or pay for server infrastructure. The services can now be scaled in a matter of minutes.
Jul 17th, 2020 12:22pm by Andy Thurai

Andy Thurai
Andy Thurai is a technology influencer and thought leader who focuses on emerging technologies such as Cloud, AIOps, AI, ML, DL, Edge, and IoT. He is a trusted advisor to many startup and enterprise executives, and specializes in selling complex technologies to CxOs.

Capacity planning during the data center era was an art: part science, part guesswork, and part negotiation with hardware and software vendors. The enterprise capacity planning cycle generally began six months to a year ahead of actual need, with teams predicting demand for specific applications and then procuring and setting up the servers to scale them.

After all that, fearing they would miss out on new business or users, enterprises always bought excess capacity, just in case. Worse, once a project had consumed this excess capacity, the team tended to hoard the servers for fear of never being granted that capacity again. As a result, enterprise data centers were chronically underutilized; I have seen usage statistics of 10-20% of planned capacity at times. And even then, requests for extra capacity could take up to three weeks to fulfill: getting the servers imaged, patched, set up, and ready for prime time.

Cloud computing does away with all of this. Provisioning happens in a matter of hours, and capacity is released the moment it is no longer needed. This elastic IT supply model extends to microservices in a serverless model, which removes the need to procure, provision, manage, upgrade, or pay for server infrastructure. Services can now be scaled in a matter of minutes.

High IT Costs, Low Utilization Ratio

A joint research survey and white paper published by VMware and IBM, “Using Virtualization to Improve Data Center Efficiency,” found that 20% of servers were running below 0.5% of capacity, and about 75% were running at or below 5% utilization, based on data from 300,000 production servers at thousands of enterprise companies around the world. This underutilization routinely pushes IT costs well above what is actually needed to support an application.

Business Decisions on Applications

When enterprises decide to launch a new application, business model, or platform, they usually base the decision on an economic impact analysis in which IT is only a small part. Most of the time, the “what-if” analysis rests on usage scenarios and traction models, treating IT cost as a fixed input rather than a determining factor. So, technically, when IT costs are included in a business planning model, they should reflect these underutilization facts, which they almost never do.
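
To make the underutilization point concrete, here is a small illustrative calculation (the figures are invented, not from the article):

```python
planned_it_cost = 100_000.0  # server spend booked in the business plan
utilization = 0.20           # fraction of that capacity actually consumed

# The app really uses only $20K worth of capacity, but the business pays
# the full $100K, so each consumed unit effectively costs 5x its sticker price.
cost_of_used_capacity = planned_it_cost * utilization
effective_markup = 1 / utilization

print(f"Capacity actually used: ${cost_of_used_capacity:,.0f}")
print(f"Effective cost multiplier on used capacity: {effective_markup:.0f}x")
```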

The cost of IT is all too often either fixed or underrepresented.

For example, if an app is estimated to bring in $1 million and IT spend is estimated at $100,000, that is where the upper limit should be set. If revenue grows to $2 million, the upper spend limit should scale in the same proportion that was factored in. When the decision is tied to business impact this way, chargebacks are easy to justify. Right now, though, IT sends chargebacks based on actual spend, which can give business units sticker shock.
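
Read this way, the rule is a fixed revenue-to-IT-spend ratio. A minimal sketch, assuming the 10% ratio implied by the article's numbers (the function name is illustrative):

```python
def spend_cap(projected_revenue: float, it_ratio: float = 0.10) -> float:
    """Upper IT spend limit as a fixed share of projected revenue.

    The 10% ratio mirrors the article's $100,000-per-$1-million example;
    a real ratio would come out of the business impact analysis.
    """
    return projected_revenue * it_ratio

print(spend_cap(1_000_000))  # 100000.0: the article's baseline case
print(spend_cap(2_000_000))  # 200000.0: the cap scales if revenue doubles
```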

And if things don’t work out, there is no recourse for getting a return on the purchased equipment.

CIO’s Nightmare — Balance Application Cost Based on Business Needs

With the liberation of infrastructure, and with automated instantaneous infrastructure scaling, it is hard to know which application to scale and when. Any application scaling should be based on the business necessity behind it, not because the application is overloaded. CIOs always dread the question of “Should an application of limited value be placed on a platform of unlimited scale and cost?” This is a very hard question for IT executives to answer without fully understanding the business value the application offers and the time when it is valuable.

For example, a benefits enrollment application should face almost no upper-bound spending restriction during the enrollment period, yet receive almost no spend approval outside it or during testing periods. In other words, during enrollment the application should be allowed to scale with effectively unlimited resources, especially during the final days when everybody tries to get in, as opposed to the non-enrollment periods when no one is using it.
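
As a sketch, a calendar-driven policy could encode that rule. The class, dates, and dollar figures below are illustrative assumptions, not an actual product API:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SpendPolicy:
    """Per-application spend ceiling that follows the business calendar."""
    enrollment_start: date
    enrollment_end: date
    peak_cap: Optional[float]  # None = no upper bound while enrollment is open
    off_season_cap: float      # near-zero budget outside the window

    def cap_for(self, today: date) -> Optional[float]:
        """Return today's spend ceiling (None means scale without limit)."""
        if self.enrollment_start <= today <= self.enrollment_end:
            return self.peak_cap
        return self.off_season_cap

policy = SpendPolicy(date(2020, 11, 1), date(2020, 11, 30),
                     peak_cap=None, off_season_cap=50.0)
print(policy.cap_for(date(2020, 11, 28)))  # None: scale freely near the deadline
print(policy.cap_for(date(2020, 7, 17)))   # 50.0: testing traffic only
```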

Any technology executive can easily justify such a spending pattern by demonstrating the need and the business value. A side project or experimental application can be given a very limited spending budget and managed closely, whereas an order-taking application must remain “always ON,” able to scale to meet any demand with no upper bounds.

However, serverless solves only the infrastructure issues: it helps enterprises scale infrastructure as needed. CIOs still face the problem of accurately predicting cloud usage per application, which is hard without knowing how the application layer scales. Because of this, over-provisioning infrastructure is common even in cloud environments. Stateful serverless can help solve this.

IT Budgets Based on Business Needs, Not on Infrastructure Costs

This solves a very important and painful issue for CIOs. In the data center capacity planning model, asking for a specific budget to run a business-critical application is always guesswork, and the chargeback is even more complicated. When servers run at only 20% capacity, charging a business unit (BU) for the full capacity is excessive and will get pushback, but charging it only for used capacity leaves IT absorbing the cost of the waste. Knowing accurate usage costs in near real time gives CIOs the flexibility to negotiate with the BUs. It is then up to the BUs and business application owners to decide whether an application is worth that much IT spend.
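
A minimal sketch of that difference, assuming a metered serverless bill (all figures invented for illustration):

```python
# Hypothetical monthly figures for one business-critical application.
provisioned_capacity_cost = 100_000.0   # full-capacity, data-center-style charge
utilization = 0.20                      # the ~20% utilization cited above

# Data-center model: the BU is billed for capacity it mostly never uses.
chargeback_full = provisioned_capacity_cost

# Metered serverless model: the bill tracks actual consumption, so the
# chargeback the CIO presents to the BU matches real usage.
chargeback_metered = provisioned_capacity_cost * utilization

print(f"Full-capacity chargeback: ${chargeback_full:,.0f}")
print(f"Usage-based chargeback:   ${chargeback_metered:,.0f}")  # $20,000
```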

What is more compelling is tying spend to specific needs. BUs can now set spending limits, or upper bounds, for any specific application. This kind of fine-grained costing was almost impossible in the past, even with cloud computing. BUs and other executives can easily justify spending based on opportunity costs. If usage or the business opportunity is not maturing, or if the BU decides to run extra campaign efforts, it can notify the CIO days or hours in advance to adjust its spending and secure extra budget approvals. This also allows CIOs and other CxOs to allocate spend properly across applications with almost no wasted IT cost.

Business Approved IT Spending — Real-Time

This idea of controlling platform spend based on business functions and needs is very new to enterprises, and I see it gaining a lot of traction. A business application can scale as needed up to a maximum allowable limit set by the business, and that limit can be dynamic, even calendar-based. Using AI/ML models, demand for application usage can be predicted, and when a trend is identified, the extra usage can be pre-approved by the application's business owners rather than giving them sticker shock months later.
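
One way to picture that flow is a simple trend check over recent usage. A production system would use real forecasting models, so treat the threshold and the request_preapproval stub below as assumptions:

```python
from statistics import mean

def detect_demand_trend(daily_requests: list[float], spike_factor: float = 1.5) -> bool:
    """Flag a sustained rise: recent traffic well above the longer baseline."""
    if len(daily_requests) < 14:
        return False
    baseline = mean(daily_requests[:-7])   # everything before the last week
    recent = mean(daily_requests[-7:])     # the most recent week
    return recent > spike_factor * baseline

def request_preapproval(app: str, projected_spend: float) -> None:
    # Stand-in for notifying the application's business owner for sign-off.
    print(f"Pre-approval requested: {app}, projected spend ${projected_spend:,.0f}")

usage = [100.0] * 21 + [180.0, 210.0, 240.0, 260.0, 300.0, 320.0, 350.0]
if detect_demand_trend(usage):
    request_preapproval("benefits-enrollment", projected_spend=25_000)
```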

Especially with serverless, it is easy to launch services in a matter of milliseconds, on demand. This allows an application to request additional resources only when demand actually spikes, rather than adding capacity at random based on earlier guesswork about predicted demand.

Feature image via Pixabay.
