
Measuring Engineering Velocity: How Deploy Time Affects Cost and Quality

Apr 3rd, 2018 6:00am
This is the third article in a three-part series about Measuring Engineering Velocity sponsored by CircleCI. Read more about mainline stability in part 1 and deploy frequency in part 2.

If you’re planning to adopt DevOps, good news: companies that embrace DevOps practices are more successful, no matter what industry they’re in. However, it’s not uncommon for engineering leaders to stumble when trying to institute DevOps practices. Tracking key metrics — such as mainline branch stability, commit-to-deploy time (CDT) and deploy frequency — can help accelerate the DevOps journey while improving opportunities for velocity and growth. Our new CircleCI report takes an in-depth look at these metrics, based on a sample of GitHub and Bitbucket organizations built on CircleCI’s cloud platform in mid-2017. In this article, we cover deploy time.

Deploy Time: The Benefits of Faster Changes and Patching

Jim Rose, CircleCI
Jim Rose, CEO, joined CircleCI in 2014 through the acquisition of Distiller, an iOS-only continuous integration service he co-founded and led. Prior to Distiller, Jim was the co-founder and CEO of several companies: Copious, a social marketplace backed by Foundation Capital and Google Ventures, among others; Vamoose, a vertical search engine in the travel space acquired by Internet Brands; and MobShop, which invented and patented the idea of group buying online in 2000, raised over $49 million in funding, and whose IP was acquired by Groupon.

After the code has been written, reviewed and tested, it still needs to be delivered to users. The time it takes for code to move from the mainline branch to production can range from a few minutes to many hours — a cost incurred every time an organization’s codebase changes for new features or bugfixes.

Deploy time is a measurement of deploy cost. The lower the deploy time, the less expensive it is to change your product. When deploy time is short, engineers waste less time waiting for deploys, allowing them to start new work more quickly. Product owners can conduct more experiments and build more prototypes. Customers see changes faster, and bugs are patched within minutes of being spotted.

Findings: In this study, deploy time was measured as the number of wall-clock minutes between queuing a build and completing the build. We found that deploy time is largely kept under control, with 80.2 percent of organizations deploying in under 15 minutes. The fastest organizations (95th percentile) deploy in 2.7 minutes, while the median is at 7.6 minutes. From there, a long tail extends to 30 minutes for the bottom 5th percentile.
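The measurement described above — wall-clock minutes between queuing a build and completing it — is straightforward to compute from build records. The sketch below uses hypothetical timestamps and a simple nearest-rank percentile; CircleCI’s actual methodology may differ.

```python
from datetime import datetime

# Hypothetical build records: (queued_at, completed_at) ISO timestamps.
builds = [
    ("2017-06-01T10:00:00", "2017-06-01T10:04:30"),
    ("2017-06-01T11:00:00", "2017-06-01T11:12:00"),
    ("2017-06-01T12:00:00", "2017-06-01T12:02:42"),
    ("2017-06-01T13:00:00", "2017-06-01T13:30:00"),
]

def deploy_minutes(queued, completed):
    """Wall-clock minutes between queuing and completing a build."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(completed, fmt) - datetime.strptime(queued, fmt)
    return delta.total_seconds() / 60

times = sorted(deploy_minutes(q, c) for q, c in builds)

def percentile(sorted_vals, p):
    """Nearest-rank percentile of a pre-sorted list of values."""
    k = round(p / 100 * (len(sorted_vals) - 1))
    return sorted_vals[max(0, min(len(sorted_vals) - 1, k))]

print("median deploy time (min):", percentile(times, 50))
print("95th percentile (min):", percentile(times, 95))
```

Run against a full set of an organization’s builds, the same percentile function yields the median and tail figures quoted in the study.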

Among top performers (10th percentile of Alexa Internet Ranked organizations), 80 percent deploy in less than 17 minutes, with the top 5th percentile at 2.6 minutes. The median for these organizations is 7.9 minutes, and the bottom 5th percentile is at 36.1 minutes.

Best Practices for Reducing Deploy Time

Customers as QA: Organizations with robust test suites can reduce the time spent on manual QA, which helps reduce deploy time. However, technology isn’t the only factor: it also requires highly optimized processes — a low bug rate, efficient recovery from failure and constant monitoring. Having these pieces in place allows organizations to treat customers as the QA team, at least where it’s safe to do so. This doesn’t mean that QA isn’t important, only that there’s a definite advantage to automating expensive manual work where possible.

Find balance: The key metrics we studied, including mainline branch stability and deploy frequency, are not isolated variables; optimizing for one will affect the others. For example, organizations that spend less than 5 percent of their time in the red have a median of 5.3 deploys per week and 6.7 minutes of deploy time. By contrast, everyone else has a median of 8.7 deploys per week and 11.7 minutes of deploy time.

Moving quickly without proper testing results in lower stability and higher deploy frequency, and organizations focusing only on velocity may have to spend more time fixing their mistakes. This might explain why, despite higher deploy times, these companies still deploy more often: not because they planned to, but because they have to.

In contrast, organizations that prioritize only deploy frequency increase the likelihood of instability as cruft and technical debt accumulate. Shipping buggy code also increases the need for more deploys to fix problems. Therefore, deploy frequency can’t be safely treated as the definitive measure of an organization’s velocity.

Organizations building in 12 minutes or less have a median of 5.3 deploys per week and an instability of 0.2 percent. Organizations over 12 minutes of build have a median of 8.3 deploys per week and instability of 2.7 percent. This reinforces the theory that organizations pushing themselves to quickly meet customer needs may find themselves in bad states or waiting longer for projects to build. Finding the right balance is key.
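The comparison above — splitting organizations at a 12-minute build-time threshold and comparing median deploy frequency and instability — can be sketched as follows. The per-organization numbers here are illustrative placeholders, not the study’s raw data.

```python
from statistics import median

# Hypothetical per-organization stats:
# (median build minutes, deploys per week, percent of time in the red).
orgs = [
    (6.0, 5, 0.1),
    (10.0, 6, 0.3),
    (14.0, 8, 2.5),
    (25.0, 9, 3.0),
]

# Split at the 12-minute build-time threshold used in the study.
fast = [o for o in orgs if o[0] <= 12]
slow = [o for o in orgs if o[0] > 12]

for label, group in (("<= 12 min builds", fast), ("> 12 min builds", slow)):
    print(label,
          "| median deploys/week:", median(o[1] for o in group),
          "| median instability %:", median(o[2] for o in group))
```

With real data in place of the placeholders, this grouping reproduces the kind of comparison the study reports: faster-building organizations deploying less often but spending far less time in the red.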
