Monoliths to Microservices: 8 Technical Debt Metrics to Know

Technical debt is a major impediment to innovation and development velocity for many enterprises. Where is it? How do we tackle it? Can we calculate it in a way that helps us prioritize application modernization efforts?
Without a data-driven approach, you may find your team falling into the 79% of organizations whose application modernization initiatives end in failure. In other articles, we’ve discussed the challenges of identifying, calculating and managing technical debt.
In this article, we review eight specific metrics based on technical debt that are important for assessing and planning application modernization initiatives across your application landscape.
Briefly, How It Works
For these purposes, we’ll be using metrics generated by the vFunction Assessment Hub Express, which is free to use for up to three Java applications. As part of the overall vFunction Platform, Assessment Hub employs static analysis alongside machine learning (ML) algorithms to measure an application’s technical debt based on the dependency graph between its classes.
These calculations measure the:
- average and median outdegree of the vertices in the graph.
- top N outdegrees among the nodes in the graph.
- longest paths between classes.
Using standard clustering algorithms, we can also identify communities of classes within the graph and measure additional metrics on them, such as the average outdegree of the identified communities and the longest paths between communities (a minimal code sketch of these measurements follows below).
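To make these graph measurements concrete, here is a minimal sketch, in Java, of how outdegree statistics and the longest dependency chain could be computed over a class dependency graph. The `Map<String, Set<String>>` adjacency representation and the method names are illustrative assumptions, not vFunction’s implementation, and the longest-path calculation assumes cycles have already been collapsed so the graph is acyclic.

```java
import java.util.*;
import java.util.stream.Collectors;

public class DependencyGraphMetrics {

    // Adjacency list: class name -> classes it depends on (illustrative representation).
    private final Map<String, Set<String>> graph;

    public DependencyGraphMetrics(Map<String, Set<String>> graph) {
        this.graph = graph;
    }

    // Average outdegree across all classes in the graph.
    public double averageOutdegree() {
        return graph.values().stream().mapToInt(Set::size).average().orElse(0);
    }

    // Median outdegree across all classes in the graph.
    public double medianOutdegree() {
        int[] degrees = graph.values().stream().mapToInt(Set::size).sorted().toArray();
        if (degrees.length == 0) return 0;
        int mid = degrees.length / 2;
        return degrees.length % 2 == 1 ? degrees[mid] : (degrees[mid - 1] + degrees[mid]) / 2.0;
    }

    // The N classes with the highest outdegree (most direct dependencies).
    public List<String> topNOutdegree(int n) {
        return graph.keySet().stream()
                .sorted(Comparator.comparingInt((String c) -> graph.get(c).size()).reversed())
                .limit(n)
                .collect(Collectors.toList());
    }

    // Length of the longest dependency chain, assuming the graph is acyclic.
    public int longestPath() {
        Map<String, Integer> memo = new HashMap<>();
        int longest = 0;
        for (String node : graph.keySet()) {
            longest = Math.max(longest, longestPathFrom(node, memo));
        }
        return longest;
    }

    private int longestPathFrom(String node, Map<String, Integer> memo) {
        Integer cached = memo.get(node);
        if (cached != null) return cached;
        int best = 0;
        for (String dep : graph.getOrDefault(node, Set.of())) {
            best = Math.max(best, 1 + longestPathFrom(dep, memo));
        }
        memo.put(node, best);
        return best;
    }
}
```

Community detection over the same graph (for example, with a label-propagation-style algorithm) would then let these measurements be repeated at the level of class clusters rather than individual classes.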
With this in mind, let’s look at the metrics revealed by this analysis and how they can help architects and developers make a business case and a priority list of applications to modernize.
Metric No. 1: Cost of Innovation (per $1 Spent)
How can you tell if the technical debt in your monolithic application is actually hurting your business?
One of the most important questions determining investment decisions behind application modernization initiatives is, “How much does it cost to keep this application around?”

Image 1: Budget spent toward innovation vs. technical debt
The cost of innovation metric (Image 1) shows a breakdown that makes sense to executive decision-makers. For each dollar spent, how much goes to simply maintaining the application, and how much goes toward new features and functionality?
In this example, we can see that 87% of this application’s budget is spent maintaining accumulated technical debt, and only 13% is put toward innovative efforts like building new features. Within the technical debt category, the analysis calls out a third figure, high debt classes, which is the next metric we’ll look at.
Metric No. 2: Top 10 Highest Debt Classes
Does your application have primary culprits contributing to technical debt?
In Image 1 above, we saw that $0.11 of each dollar spent, or 11%, was categorized as maintaining high debt classes. The static analysis and ML algorithms mentioned above identify the 10 most heavily indebted classes in the application based on analysis of the entire code base.

Image 2: The Top 10 worst debt classes, identified by static analysis (hidden for privacy)
In Image 2, we see a breakdown of the top 10 worst debt classes in this particular application (hidden here for privacy). This provides a clear view of the classes contributing the most to the technical debt accumulated in the application. With this information, architects and developers can begin to understand the total cost of ownership (TCO) of the application before refactoring efforts begin, which leads us to our next metric.
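As a rough illustration of how such a ranking could be produced from the dependency graph described earlier, the sketch below scores each class by a weighted mix of its outdegree and the length of the dependency chain rooted at it, then keeps the 10 highest scorers. The scoring function and weights are hypothetical; this is not vFunction’s actual debt model, which the article does not detail.

```java
import java.util.*;
import java.util.stream.Collectors;

public class HighDebtClassRanker {

    // Hypothetical score: classes with many direct dependencies and long downstream
    // chains are treated as carrying more debt. Weights are illustrative only.
    static Map<String, Double> scoreClasses(Map<String, Set<String>> graph,
                                            Map<String, Integer> longestChainFrom,
                                            double outdegreeWeight,
                                            double chainWeight) {
        Map<String, Double> scores = new HashMap<>();
        for (String cls : graph.keySet()) {
            double score = outdegreeWeight * graph.get(cls).size()
                         + chainWeight * longestChainFrom.getOrDefault(cls, 0);
            scores.put(cls, score);
        }
        return scores;
    }

    // Return the ten highest-scoring classes, i.e. the "top 10 debt classes" of this sketch.
    static List<String> topTen(Map<String, Double> scores) {
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(10)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```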
Metric No. 3: Pre- and Post-Refactor TCO
How much difference would it make to focus purely on the 10 most indebted classes?
In the first metric above, we saw a breakdown of innovation, debt and high debt classes. With those numbers, simple math yields another metric that helps decision-makers quickly ascertain the priority of modernization efforts across a broad application estate: the formal TCO of the application, both before and after refactoring (Image 3).

Image 3: Current TCO compared to post-refactor TCO of the Top 10 high debt classes
On the top right of Image 3, the pre-refactor TCO multiplier indicates how much is currently spent simply maintaining the existing application. The post-refactor metric on the bottom right represents the reduction in TCO that would result if just the 10 worst debt classes were refactored. Going from 7.4X to 4.2X simply by focusing on those 10 classes is a powerful argument for convincing decision-makers to modernize.
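The article does not spell out how these multipliers are derived, but one way to see how such a number can move is to treat the multiplier as total spend relative to the spend that delivers new value. Under that assumption, which is purely an illustration and not vFunction’s published formula, shifting the high debt classes’ share of the budget from maintenance to innovation shrinks the multiplier roughly as follows.

```java
public class TcoIllustration {
    public static void main(String[] args) {
        // Illustrative budget split taken from Image 1 (rounded percentages).
        double innovationShare = 0.13;   // share of spend going to new features
        double highDebtShare   = 0.11;   // share of spend tied to the top 10 high debt classes

        // Assumption for illustration: TCO multiplier = total spend / spend that delivers new value.
        double preRefactorTco  = 1.0 / innovationShare;                     // roughly 7.7X
        double postRefactorTco = 1.0 / (innovationShare + highDebtShare);   // roughly 4.2X

        System.out.printf("Pre-refactor TCO:  %.1fX%n", preRefactorTco);
        System.out.printf("Post-refactor TCO: %.1fX%n", postRefactorTco);
    }
}
```

The results land close to, though not exactly on, the 7.4X and 4.2X shown in Image 3, which is unsurprising given that the shares from Image 1 are rounded here and the real formula may differ.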
Metrics Nos. 4, 5, 6: Overall Debt, Risk and Complexity
How does the application you’re assessing compare to your other applications in terms of complexity and risk?
In Image 4, we see a graph representing various indices that help decision-makers understand the debt, risk and complexity of an assessed application. On the right side of Image 4, we see how far above or below the averages your application falls. (Again, we’re employing static analysis combined with machine learning to ascertain these metrics.)

Image 4: Debt, risk and complexity measurements
Metric No. 4: The debt index measures the severity of the overall debt of the application and also displays averages against all other applications assessed in your portfolio. In this example, we are measuring against other applications assessed by vFunction. This metric combines the complexity and risk indices, described below.
Metric No. 5: The risk index correlates with the length of dependency chains. It reflects how likely it is that a change in one part of the application will affect a seemingly unrelated part downstream. Code dependencies are a major blocker in application modernization efforts; the risk of a change breaking something so badly that it leads to an involuntary career shift is not one most engineers are willing to take.
Metric No. 6: The complexity index measures the degree to which class dependencies are entangled with one another, reducing the modularity of the code. This influences an architect’s ability to decouple functionality and create independent, isolated microservices in the future. (As a reminder, the vFunction Modernization Hub uses AI and automation to untangle these dependencies with much less risk than manual efforts involve.)
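To make the distinction between the two indices more concrete, here is a heavily simplified sketch of how a risk-style score (driven by dependency chain length) and a complexity-style score (driven by how densely classes are interconnected) could be derived from the same dependency graph used earlier. The normalization choices are assumptions for illustration, not the indices vFunction actually computes.

```java
import java.util.*;

public class RiskAndComplexitySketch {

    // Risk-style score: driven by the length of dependency chains. Longer chains mean a
    // change is more likely to ripple into seemingly unrelated code downstream.
    static double riskScore(int longestDependencyChain, int classCount) {
        if (classCount <= 1) return 0;
        // Normalize the longest chain against the theoretical maximum chain length.
        return (double) longestDependencyChain / (classCount - 1);
    }

    // Complexity-style score: driven by how densely classes depend on one another.
    // Denser dependency graphs are harder to decouple into independent services.
    static double complexityScore(Map<String, Set<String>> graph) {
        int classCount = graph.size();
        if (classCount <= 1) return 0;
        long edges = graph.values().stream().mapToLong(Set::size).sum();
        long maxPossibleEdges = (long) classCount * (classCount - 1);
        return (double) edges / maxPossibleEdges;
    }
}
```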
Other metrics that represent risk and complexity are calculated during analysis in the vFunction Modernization Hub, Assessment Hub or Assessment Hub Express, and provide insights into immediately actionable areas, such as our next metric.
Metric No. 7: Aging Frameworks and Libraries
Do the frameworks and libraries in your application present additional risks and challenges to your modernization efforts?
Aging frameworks pose a risk to enterprises not only because of technical debt accumulation, but also because of security concerns. For example, the 2020 HIMSS Cybersecurity Survey notes that 80% of health-care institutions are not using up-to-date tools and practices in their existing systems. Because such systems were created at a time when security threats were far less sophisticated and frequent, aging applications are particularly vulnerable to modern-day cybersecurity threats.

Image 5: Aging frameworks represent a security risk as well as a source of technical debt
In Image 5, we can see a list of the frameworks and libraries identified during the analysis. The colors correspond to modern, aging and unknown frameworks used in the application, based on comparing each framework’s version in use against the latest version available. The “modern framework” tag indicates that the version in use matches the latest release in at least its major (e.g., 2.0+) and minor (e.g., 2.4) versions.
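A minimal sketch of that kind of version comparison might look like the following. The modern/aging/unknown categories mirror the labels in Image 5, but the parsing and the exact rule (same major version and at least the same minor version as the latest release) are a plausible reading of the description above rather than vFunction’s actual logic.

```java
public class FrameworkAgeCheck {

    enum FrameworkStatus { MODERN, AGING, UNKNOWN }

    // Classify a framework by comparing the version in use against the latest available
    // release, using the "same major, at least the same minor" rule described above.
    static FrameworkStatus classify(String versionInUse, String latestVersion) {
        int[] used = parse(versionInUse);
        int[] latest = parse(latestVersion);
        if (used == null || latest == null) return FrameworkStatus.UNKNOWN;
        boolean sameMajor = used[0] == latest[0];
        boolean minorUpToDate = used[1] >= latest[1];
        return (sameMajor && minorUpToDate) ? FrameworkStatus.MODERN : FrameworkStatus.AGING;
    }

    // Very simple "major.minor" parser; real-world version strings often need more care.
    private static int[] parse(String version) {
        String[] parts = version.split("\\.");
        if (parts.length < 2) return null;
        try {
            return new int[] { Integer.parseInt(parts[0]), Integer.parseInt(parts[1]) };
        } catch (NumberFormatException e) {
            return null;
        }
    }
}
```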
If this is your application and 79% of its frameworks are considered aging or out of date, this metric provides a starting point for refactoring and a quick win toward strengthening your team’s security posture.
Metric No. 8: Number of Classes and Minimum Compile Version
Just how big and complex is your application?
The final metrics we’ll look at, shown in Image 6, are the number of classes and the minimum compile version (in this case, Java 1.6) of the application under assessment. Before modernization efforts begin, it’s crucial to confirm that this baseline is compatible with where the application needs to go. Upgrading the JVM version of this application may bring further advantages, like better development velocity, strengthened security and even performance improvements.

Image 6: The number of classes in the app and minimum compile JVM version
In this application, there are over 18,000 Java classes, a fairly large number that engineers would likely describe as “a huge monolith.” This metric gives a sneak preview of the potential scope of the modernization initiative: a single large class inside this monolith may easily cover functionality that, once decomposed into a microservice, would represent an entire service domain.
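If you want to sanity-check a figure like the minimum compile version in Image 6 yourself, the Java class-file format makes that straightforward: bytes 6 and 7 of every .class file hold the class-file major version, which maps to a Java release (50 corresponds to Java 6, 52 to Java 8, 55 to Java 11 and so on). The sketch below reads that value for a single class file; the file path is hypothetical, and scanning all of an application’s classes and taking the minimum is a simple extension.

```java
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassFileVersion {

    // Reads the class-file major version, which maps to the Java release the class
    // was compiled for (e.g., 50 = Java 6, 52 = Java 8, 55 = Java 11).
    static int majorVersion(String classFilePath) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(classFilePath))) {
            int magic = in.readInt();
            if (magic != 0xCAFEBABE) {
                throw new IOException("Not a class file: " + classFilePath);
            }
            in.readUnsignedShort();          // minor version (not needed here)
            return in.readUnsignedShort();   // major version
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical path, for illustration only.
        System.out.println(majorVersion("build/classes/com/example/SomeClass.class"));
    }
}
```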
This is where vFunction Modernization Hub takes over the heavy lifting: it automatically suggests a reference architecture that you can interactively refine and extract as microservices, ready to deploy with Docker and Kubernetes to platforms like AWS, Microsoft Azure, Google Cloud Platform, Red Hat OpenShift and others.
The Bottom Line
Modernizing monolithic applications into microservices is not an easy task. Understanding the technical debt, app complexity, risk and aging frameworks is critical to getting buy-in from executives and support from other teams for any future efforts.
Without a confident assessment of a single monolith, let alone your entire application ecosystem, modernization efforts are likely to fall into that alarming majority of initiatives that fail, at an average cost of $1.5 million and 16 months of wasted work.
Armed with these metrics, architects and developers can bring an informed, data-driven approach to modernization efforts. And that’s what we came here for, right?