Last Updated Jul 05, 2021

How do organizations know when automation in their software delivery pipelines is working? It’s crucial to gauge success in a meaningful way — one that identifies value delivered to customers and doesn’t focus solely on speed or efficiency.

A number of key metrics allow DevOps teams to identify and quantify the improvements gained where automation has been implemented across the software delivery pipeline, from builds through deployment and testing.

Some of the key DevOps metrics that reveal information about how well automation is working in the DevOps pipeline include:

  • Deployment duration
  • Deployment failure rate
  • Defect escape ratio
  • Automated test failure rate

In addition, Google’s DevOps Research and Assessment (DORA) team has identified four key DevOps measurements indicative of an organization’s software delivery performance and its ability to meet its DevOps goals. These key metrics, illustrated with a brief calculation sketch after the list, are:

  • Lead Time
  • Deployment Frequency
  • Mean Time to Restore
  • Change Failure Rate
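
As a rough illustration, the four DORA measurements can be derived from basic deployment and incident records. The following is a minimal Python sketch; the record structures, field names, and sample values are assumptions for illustration, not the output of any particular tool:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records; field names are assumptions for illustration.
deployments = [
    {"committed_at": datetime(2021, 7, 1, 9), "deployed_at": datetime(2021, 7, 1, 15), "failed": False},
    {"committed_at": datetime(2021, 7, 2, 10), "deployed_at": datetime(2021, 7, 3, 11), "failed": True},
    {"committed_at": datetime(2021, 7, 4, 8), "deployed_at": datetime(2021, 7, 4, 12), "failed": False},
]

# Hypothetical incident records: when a failure was detected and when service was restored.
incidents = [
    {"detected_at": datetime(2021, 7, 3, 12), "restored_at": datetime(2021, 7, 3, 14)},
]

period_days = 7  # reporting window

# Lead time: average time from commit to running in production.
lead_times = [d["deployed_at"] - d["committed_at"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment frequency: deployments per day over the reporting window.
deployment_frequency = len(deployments) / period_days

# Mean time to restore: average time from detection to recovery.
restore_times = [i["restored_at"] - i["detected_at"] for i in incidents]
mean_time_to_restore = sum(restore_times, timedelta()) / len(restore_times)

# Change failure rate: share of deployments that caused a failure in production.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"Average lead time:    {avg_lead_time}")
print(f"Deployment frequency: {deployment_frequency:.2f} per day")
print(f"Mean time to restore: {mean_time_to_restore}")
print(f"Change failure rate:  {change_failure_rate:.0%}")
```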

The value of adding automation

Implementing automation throughout the CI/CD pipeline, including configuration, deployment, and testing, is one of the key DevOps principles. Automation is also highly effective at improving performance by:

  • Removing outdated or unnecessary manual processes
  • Identifying and removing bottlenecks in the delivery pipeline
  • Eliminating slower, more error-prone processes

There are a number of benefits associated with increasing automated testing in the CI/CD pipeline. First, automated testing speeds up the testing process. It also significantly improves test coverage in areas such as QA, regression, and performance testing.

Meanwhile, industry reports show that organizations are seeing increased value from test automation, which is measurable in various KPIs and metrics. A recent World Quality Report notes: “As automation continues to grow, and organizations increase the amount of automation across their testing ecosystems, respondents said that they are getting increased value from automation, such as better control and transparency of test activities, reuse of test cases, and defect detection.”

It’s also imperative for organizations to take a smarter approach to automated testing, one that is focused on delivering value. In a recent article on the challenges involved with test automation, we noted that the primary objective of automated testing should be “creating value efficiently,” not just to “complete tests quickly.”

Furthermore, DevOps teams should be aware of metrics beyond just answering the question of whether a process is automated. We noted that “metrics should retain a focus of the value and benefits of automation, such as the quicker cycle time, the higher deployment frequency, lower defect escape rate, and less unplanned work.”

Metrics that track how well your automated tools are working and provide intelligent insights

Process and performance metrics can help evaluate an organization’s DevOps strategy. Metrics can help teams determine what’s working, what’s lagging, and whether an organization is close to reaching its software delivery and CI/CD goals and objectives.

According to one DevOps expert, “Metrics provide a reliable, long-term indicator of how your software delivery team is performing. They open the door for your team to experiment with different approaches and assess their impact using a common standard.”

It’s also critical to make sure teams are using the right metrics: those that provide useful insight into whether you’re reaching your software delivery goals and objectives. Here is a brief overview of some key metrics and what they can reveal about your automated processes; a brief calculation sketch follows the list:

  • Deployment Duration: This metric measures how long it takes to deploy a set of changes. It is typically affected by how many manual processes are still in place, and adding automation to the process can improve it.
    • Value: Shows whether deployment activity is becoming more or less efficient over time
  • Defect Escape Ratio: This measures the number of defects found in production versus the number found earlier in development, i.e., how many defects escape pre-production testing.
    • Value: This metric indicates whether automated tests, code review, and other quality processes are working or need improvement
  • Deployment Failure Rate: This metric tracks how often deployments fail. Deployment failures are often related to unforeseen defects and can frequently be traced to problematic manual processes or a lack of visible feedback from production.
    • Value: A high failure rate can reveal weaknesses in the deployment process, including bottlenecks or human error. Adding more automation can improve this metric.
  • Automated Test Failure Rate: This metric tracks how well your automated tests are working and how often they fail.
    • Value: This metric can reveal if your tests are relevant and reliable.
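
To make these definitions concrete, here is a minimal Python sketch of how the four metrics above might be calculated from raw pipeline counts. The data structures, field names, and numbers are hypothetical placeholders rather than output from a specific CI/CD tool, and the defect escape ratio here uses one common formulation (production defects divided by all defects found):

```python
# Hypothetical deployment records and test/defect counts for illustration.
deployments = [
    {"duration_minutes": 42, "succeeded": True},
    {"duration_minutes": 65, "succeeded": False},
    {"duration_minutes": 38, "succeeded": True},
]
defects_found_in_development = 24
defects_found_in_production = 3
automated_test_runs = 500
automated_test_failures = 35

# Deployment duration: average wall-clock time per deployment.
avg_deployment_duration = sum(d["duration_minutes"] for d in deployments) / len(deployments)

# Deployment failure rate: share of deployments that failed.
deployment_failure_rate = sum(not d["succeeded"] for d in deployments) / len(deployments)

# Defect escape ratio: defects that reached production relative to all defects found.
defect_escape_ratio = defects_found_in_production / (
    defects_found_in_production + defects_found_in_development
)

# Automated test failure rate: share of automated test runs that failed.
automated_test_failure_rate = automated_test_failures / automated_test_runs

print(f"Average deployment duration: {avg_deployment_duration:.1f} minutes")
print(f"Deployment failure rate:     {deployment_failure_rate:.0%}")
print(f"Defect escape ratio:         {defect_escape_ratio:.0%}")
print(f"Automated test failure rate: {automated_test_failure_rate:.0%}")
```

Tracked over time, a falling defect escape ratio or test failure rate suggests the automated quality gates are doing their job; a rising trend points to tests or processes that need attention.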

Best practices for evaluating metrics pertaining to automation

Organizations must employ sound practices when evaluating metrics that measure the different types of automation being implemented. Teams need to ensure that they are interpreting results effectively and not focusing on the wrong types of measurements. They must not lose sight of the main goals of tracking process and performance metrics: increasing productivity, optimizing CI/CD, and delivering value to users and customers.

In a recent white paper on value stream management solutions, Forrester notes that DevOps organizations “must use process metrics to gain a greater understanding of where roadblocks lie in the value stream.” The report asserts that organizations with “disparate ways to measure metrics” report an inhibited ability to measure value.

However, Forrester also notes that organizations using a value stream management (VSM) solution report a greater ability to measure their software delivery efforts and increase their automation. By using a VSM practice and set of tools, they add, “organizations can use metrics to drive further process automation and identify areas that are ripe for automation.”

Again, automation itself is not the complete answer. Organizations that do automate aren’t guaranteed immediate success; they must be able to determine whether automation is making work more efficient and delivering more value. Feedback generated by metrics, however, can point to opportunities for further automation or refinement of existing practices. By being selective about the right metrics, tracking them diligently throughout the product cycle, and using metric feedback to inform changes to processes or products, organizations can reach higher levels of value delivery with each new release.

To get a better look at the nature of VSM and how all the pieces fit together, download our VSM eBook now.
