Despite the Hype, Engineers Not Impressed with DORA Metrics
Overall, 52% of engineers surveyed who could speak to the subject said that DORA — short for DevOps Research and Assessment, a set of metrics assembled by Google — is either not at all or only somewhat effective.
Slightly fewer (48%) say the same about the lesser-known SPACE framework, developed by researchers at GitHub, Microsoft Research and the University of Victoria. Yet, more than half (55%) couldn’t even answer the SPACE question, either because they had never heard of it, or because they had never seen the framework implemented in practice.
Even DORA metrics, which have attained widespread attention among DevOps practitioners, were something that only 29% of respondents could address.
The results come from a survey of 600 engineers by LeadDev and Swarmia.
Measuring Team Productivity
The New Stack previously reported that focusing on individual developer productivity metrics can distract from tracking business and team-based achievement. In fact, the LeadDev report found that user growth and user satisfaction were the most common ways to assess team performance, ranking ahead of strategic business goals.
Out of the five answer options offered, meeting service-level objectives (SLOs), which can be seen as a proxy for quality, ranked last.
Overall, cycle time (from requirement definition to feature delivery) and lead time (from developer work starting to feature delivery) are the first and third most used approaches to measuring developer productivity.
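Using the article's definitions, the two time-based metrics differ only in their starting point. A minimal sketch of the arithmetic, with a hypothetical ticket record whose field names are illustrative rather than taken from any real tracker's API:

```python
from datetime import datetime

def elapsed_days(start: datetime, end: datetime) -> float:
    """Elapsed time between two timestamps, in days."""
    return (end - start).total_seconds() / 86400

# Hypothetical ticket; the field names are assumptions for illustration.
ticket = {
    "defined_at": datetime(2024, 3, 1),   # requirement defined
    "started_at": datetime(2024, 3, 8),   # developer work began
    "shipped_at": datetime(2024, 3, 15),  # feature delivered
}

# Cycle time: requirement definition -> feature delivery (14.0 days here).
cycle_time = elapsed_days(ticket["defined_at"], ticket["shipped_at"])

# Lead time: developer work starting -> feature delivery (7.0 days here).
lead_time = elapsed_days(ticket["started_at"], ticket["shipped_at"])
```

In practice these timestamps would come from an issue tracker and a deployment pipeline, and the averages would be computed over many tickets, not one.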
When asked what the most useful ways to measure productivity are, participants in LeadDev's survey put meeting deadlines at the top of the list. Did they rank it highest because time-based measures like cycle and lead time are what teams actually track, or because time-based benchmarks give software engineering managers a ready justification for their staffing budgets?
Surveys are the least useful approach to measuring developer productivity, according to the same LeadDev report. Still, surveys proliferate because they are often the cheapest way to collect information. Yet fielding them regularly becomes increasingly ineffective when the questionnaires grow too long or onerous to fill out.
Abi Noda co-founded DX, a developer insights company, to improve the process of collecting data and assessing developer productivity. Its DEVEX 360 survey platform makes it easier for software managers both to collect information regularly and to benchmark the results against peers. Perhaps this will make surveys more useful, but it may also just make the process less painful.
The people behind Google's latest "Accelerate State of DevOps Report" addressed survey pain by dramatically cutting the number of questions asked. This helps explain why its 2023 survey received more than three times as many "organic" respondents.
When the data is not onerous to collect, it generates real value: previous reports have found a direct link between many best practices and improvements in software delivery, reliability, operational performance and the personal well-being of team members.
Another problem with measuring developer productivity is that many of the input variables are "soft," cultural factors that are hard to quantify and document. For example, the Google DevOps report's survey builds a proxy for individual productivity by asking if someone's work aligns with their skills, creates value and lets them be efficient. This helps explain job satisfaction and burnout, but does not actually produce a number that measures developer productivity.
The most quantifiable ways to track success are related to IT operations quality and performance (e.g., change failure rate) or continuous delivery methods. Of course, the time it takes to produce software features can also be tracked, but it is hard to benchmark developer velocity unless factors like quality are taken into account.
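As a minimal sketch of why IT operations metrics are the most quantifiable, change failure rate reduces to simple counting. The function and its input shape below are assumptions for illustration, not any particular tool's API:

```python
def change_failure_rate(deployments: list[bool]) -> float:
    """Fraction of deployments that caused a failure in production.

    `deployments` holds one boolean per deployment: True if it led to
    degraded service or needed remediation (rollback, hotfix, patch).
    """
    if not deployments:
        return 0.0
    return sum(deployments) / len(deployments)

# Example: 2 failed deployments out of 10 gives a rate of 0.2 (20%).
rate = change_failure_rate([False] * 8 + [True] * 2)
```

The hard part is not the arithmetic but agreeing on what counts as a "failure" and attributing it to a specific change, which is where benchmarking across teams usually breaks down.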