Last Updated Mar 02, 2022

Increase deployment speed and efficiency by using data models and AI-powered analytics that enable DevOps leaders to identify which change categories and change teams have the highest rate of change success. Read on to learn more about how our Change Risk Prediction solution makes the difference.

All changes are, in some sense, risky. This includes incremental changes to digital products and services that millions (sometimes billions) of customers may rely upon. Diligent change management means quantifying risks and seeking outcomes that avoid them, all without hampering deployment speed any longer than necessary.

At one point, the expectation was that every change would be scrutinized before deployment, thereby reducing the risk of service interruptions, incidents, or other problems. In practice, this protocol had a chilling effect on deployment speed without necessarily improving the quality of releases.

Now, we have the data and the technology needed to avoid manually reviewing every change in the pipeline. Using AI/ML modeling in combination with an analytics dashboard and process automation, DevOps teams can switch from “change management” to the ideal of “change enablement”, where change teams and development engineers are given the conditions they need to commit changes at a regular pace. Change Risk Prediction works in combination with Release to make this ambition a reality. Data models and AI-backed analytics enable DevOps leaders to reveal which change categories and change teams have a high rate of change success. They can then use this information to automate approvals for those change types, bypassing much of the cumbersome review process that would otherwise slow down release velocity.

This is, at its heart, the ideal of change enablement: creating a situation where changes flow freely, and teams deliver more new value to customers on a more frequent basis.

Rich, contextualized data fuels change enablement

The chief problem delaying change releases and change teams is the unknown. Teams may have no idea which changes will fail and, more worrisome, no idea why certain changes fail. In this situation, the most common way to manage an unknown risk is gatekeeping, often in the form of a manual review process.

ITIL was one of the first IT frameworks to suggest a formalized change review process, starting as far back as 1989. Now, with the release of ITIL v4 guidelines in 2019, the IT best practices manual suggests teams shift from an overly diligent “change management” perspective to a “change enablement” one.

Let’s dig a little deeper into how this semantic change reveals a bigger shift in attitude. “Management” implies that every change must be handled through a specific process. In reality, many changes can be deployed with both minimal review and minimal risk. Expecting every change to be individually “managed” by human workers implies a level of control that is neither necessary nor truly possible. Forcing manual change reviews not only adds delays to each release, but also expects reviewers to apply the same level of diligence and forethought to every change, which is unrealistic at scale. In practice, while many change approvals are granted appropriately, some inevitably lead to escaped defects.

Instead of trying to manage each change manually, what is needed is a method of determining which change types and change factors tend to drive risk. These risk factors can be surfaced and understood using data analytics in combination with AI/ML modeling. Using automated processes to sift through the relevant data, change teams can not only broaden the scope of change reviews but also learn what risk factors to focus upon — and which minimal-risk changes should bypass the full review process. This difference represents a true shift towards enabling changes to flow freely while focusing priorities on the change types and teams that need it most.
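As a minimal sketch of this idea, the snippet below aggregates hypothetical historical change records by category and flags the categories whose success rate clears a threshold as eligible to bypass full review. The record format, category names, and 95% threshold are illustrative assumptions, not the product's actual model:

```python
from collections import defaultdict

# Hypothetical historical change records: (category, succeeded?)
history = [
    ("config-update", True), ("config-update", True), ("config-update", True),
    ("config-update", True), ("schema-migration", False), ("schema-migration", True),
    ("schema-migration", True), ("feature-flag", True), ("feature-flag", True),
]

def success_rates(records):
    """Group outcomes by change category and compute each category's success rate."""
    totals = defaultdict(lambda: [0, 0])  # category -> [successes, attempts]
    for category, ok in records:
        totals[category][1] += 1
        if ok:
            totals[category][0] += 1
    return {cat: wins / n for cat, (wins, n) in totals.items()}

def bypass_eligible(rates, threshold=0.95):
    """Categories whose historical success rate clears the threshold skip manual review."""
    return {cat for cat, rate in rates.items() if rate >= threshold}

rates = success_rates(history)
print(bypass_eligible(rates))  # config-update (4/4) and feature-flag (2/2) qualify
```

In a real pipeline, the same aggregation would run over thousands of records pulled from the change database, and the threshold would be tuned against the organization's risk tolerance.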

In the words of ITIL expert Jon Stevens and ITSM consultant Joseph Mathenge: “The purpose of the change enablement practice is to maximize the number of successful IT changes by ensuring that risks have been properly assessed.”

The idea, then, is to enable change velocity by focusing solely on changes that present a high level of risk while granting low- or no-risk changes automated approvals. One way to do that is to quantify the risks of each change through ML/AI modeling. Using historical data and a data model that associates past outages/incidents with specific risk factors, IT leaders can see at a glance on a dashboard which changes are most likely to fail and why. Change Risk Prediction even assigns a risk/failure probability score to every planned change. This contextual information allows change teams to act on emergent risks and to divert their focus away from low-risk changes that don’t need their attention.
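To illustrate how a failure-probability score might be computed, here is a toy logistic model over a few hand-picked risk factors. The factor names, weights, and bias are invented for this example; a real system would learn them from historical outage and incident data:

```python
import math

# Illustrative risk factors with hand-picked weights -- a real model would
# fit these coefficients to historical change outcomes.
WEIGHTS = {"lines_changed": 0.004, "services_touched": 0.6, "off_hours_deploy": 1.2}
BIAS = -3.0

def failure_probability(change):
    """Logistic score in [0, 1]: higher means the change is more likely to fail."""
    z = BIAS + sum(WEIGHTS[k] * change.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

small = {"lines_changed": 20, "services_touched": 1, "off_hours_deploy": 0}
large = {"lines_changed": 900, "services_touched": 4, "off_hours_deploy": 1}
print(f"{failure_probability(small):.2f}")  # small, single-service change: low risk
print(f"{failure_probability(large):.2f}")  # large, multi-service, off-hours: high risk
```

A dashboard can then sort planned changes by this score, routing only the high-probability failures to human reviewers.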

Once this improved system of change enablement has been established, DevOps leaders can hasten change velocity further by creating a “fast lane” for change teams that continually meet or exceed quality benchmarks. AI-driven analytics can use benchmark KPIs, using a combination of factors like escaped defects, to score individual change teams on their relative risk, enabling a system of hastened approval for teams that continually prove themselves.

Pave a fast lane for your high-performing change teams

Change Risk Prediction can treat the performance of individual change teams as an overall risk factor for change failures.

Teams can be assigned a “change risk credit score” based on their performance for specific metrics. Examples of factors that go into the score may include:

  • % of failed changes
  • CI alerts in the past 7 days
  • Pre-production defects detected by testing automation
  • Cross-team dependencies

Teams with a low overall risk credit score can be granted automated approvals, provided they continue to meet certain performance outcomes. Automation in risk management can create or remove gates based on thresholds such as passing smoke tests and automated compliance checks.
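A rough sketch of how such gating could work: compute a toy credit score from the factors listed above, then grant fast-lane approval only when the score clears a threshold and the hard gates (smoke tests, compliance checks) pass. All field names, penalty weights, and the threshold of 80 are illustrative assumptions, not a product formula:

```python
def credit_score(team):
    """Toy 'change risk credit score': start at 100, deduct for risk signals."""
    score = 100
    score -= team["failed_change_pct"] * 2   # % of failed changes
    score -= team["ci_alerts_7d"] * 3        # CI alerts in the past 7 days
    score -= team["preprod_defects"] * 1     # defects caught by test automation
    score -= team["cross_team_deps"] * 2     # cross-team dependencies
    return max(score, 0)

def fast_lane(team, threshold=80):
    """Auto-approve only when the score clears the bar AND the hard gates pass."""
    gates_ok = team["smoke_tests_passed"] and team["compliance_checks_passed"]
    return credit_score(team) >= threshold and gates_ok

team = {"failed_change_pct": 2, "ci_alerts_7d": 1, "preprod_defects": 3,
        "cross_team_deps": 2, "smoke_tests_passed": True,
        "compliance_checks_passed": True}
print(credit_score(team), fast_lane(team))  # 86 True
```

Note that the hard gates remain even for high-scoring teams: a good history earns a team the fast lane, but a failing smoke test still blocks the change.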

Essentially, change leaders are driving processes automatically based on risk predictions and past performance. Change automation through instant approvals or low-level gating creates, in a nutshell, a fast lane for teams that consistently deliver stable changes without causing incidents or outages. This level of automation also builds a faster, less rigorous process around change enablement activities.

Change management should be all about setting expectations and measuring the ability for change teams to meet or exceed those expectations, says Greg Sanker, author of IT Change Management: A Practitioner’s Guide.

“So long as it’s achieved (the change outcome expectation), change management cannot add any value to the value stream by inspecting individual changes. It’s an outcome expectation that’s literally engineered into the value stream itself.”

Fast-tracking certain changes not only saves time but also avoids the costs of convening a change advisory board (CAB). Further, modeling and understanding change risks controls the costs of incidents and outages, which Gartner estimates at $6,000 a minute for large enterprises and their customers, or more than $8.6 M per day for a sustained major service disruption.

It’s all part of a strategy to be more data-driven in the change enablement processes, pushing processes towards specific outcomes and higher ROI.

There’s also the gamification aspect that comes with scoring teams and rewarding them for consistent positive performance. In the words of Solutions Consultant Neal DeBuhr: “Teams will want to improve credit score very naturally when they see these factors and have the information to do that very effectively and very precisely.”

The data surfaced by the AI/ML models gives change enablement teams a deep, rich, and holistic understanding of risk, helping them work more efficiently and effectively. DevOps teams can use this information to drive processes that move faster while being less likely to break. With the freed resources, teams can then focus on releasing more market-leading innovations, at lower cost to run the process and with a lower overall risk of unexpected change failures.

This is the power and potential that automation in risk management brings, all at your fingertips.

Learn more about harnessing the power of data-driven analytics in our video from this past summer’s meetup series: Gain end-to-end visibility & actionable insights you need to make data-driven decisions with Intelligence solutions
