Last Updated Jun 30, 2020 — AI-Powered Analytics expert

Change teams need clear processes in order to define and identify threats to operational stability — all without slowing the continual creation of new value. They also need to remain flexible to be able to respond to ongoing shifts in the business environment, which can include changes to the technology platform.

Fast, flexible, and vigilant: ITIL change management best practices empower IT operations to do all three. The use of AI-powered analytics can further protect operations environments from disruptions, preserving business functions and mitigating risk. Using data from across the value stream, alongside AI, allows teams to mitigate outage risks, monitor for threats to operational stability, and streamline decision-making in light of new data-derived insights.

ITIL 4 provides guidance for change management best practices, which can reduce common risks in production. Without being overly prescriptive, ITIL 4 guidelines provide clear descriptions of the roles and responsibilities needed to maintain stable and predictable change management cycles.

The following best practices suggested by the ITIL 4 guidebook and change management experts can assist enterprises looking to maintain performance and quickly address operational threats.

Ensure that every IT change has a defined business value and a clear “Why”

The first step with any change-related decision is to understand why the change is being proposed. Whether the change was instigated by a change request from development teams or a change proposal from within IT operations, each change should be considered from the perspective of the value it offers to the organization. Additionally, the change should be understood in light of the risks it presents.

Defining value and risk can – and should – be done from an objective, metrics-driven perspective. However, IT organizations must first be able to articulate, even informally, the value a proposed change is expected to bring before they can measure it objectively.

Popular blogger “Joe the IT Guy” defines four basic value-producing reasons that a change might be proposed:

  1. To correct something that has already failed or gone wrong
  2. To prevent something from failing or going wrong
  3. Because something else has changed, or is going to change, and you need to make a change to stay compatible
  4. Because you need to add, remove, or enhance a capability

The first type of change is reactive to problems; types two and three are proactive or preventative; and type four is the only one of these that directly offers new value.

By considering proposed changes from this perspective, IT organizations can then develop metrics to quantify the types of changes they are making. Changes can be assigned a category, for instance, describing the purpose of the change and whether it is a fix or a new feature.

If too many changes are based on reactive needs, then the organization needs better risk-assessment and predictive capabilities. Changes related to compatibility and preventing failure also tend to consume resources without creating new measurable value; instead, they preserve value created elsewhere. With this in mind, the organization can target a reduction in changes related to problem resolution or prevention.

Ideally, as the organization adjusts its processes and its approach to DevOps, a higher ratio of changes will be related to the direct creation of value. If the organization is continually pushing for changes that elicit direct value creation, then it can evolve its offerings while keeping pace with modern advancements.
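As a minimal sketch of this kind of categorization, the snippet below tallies changes by purpose and computes the share in each category. The record format and category names are assumptions for illustration, loosely mirroring the four reasons above; real change records would come from an ITSM tool.

```python
from collections import Counter

# Hypothetical categories mirroring the four value-producing reasons:
# corrective (fix a failure), preventive (stop a failure), compatibility
# (keep up with another change), enhancement (add/remove/improve capability).
CATEGORIES = ["corrective", "preventive", "compatibility", "enhancement"]

def category_ratios(changes):
    """Return each category's share of the total change volume.

    `changes` is a list of dicts with a 'category' key -- a made-up
    record shape standing in for real ITSM change records.
    """
    counts = Counter(c["category"] for c in changes)
    total = sum(counts.values())
    return {cat: counts.get(cat, 0) / total for cat in CATEGORIES}

changes = [
    {"id": "CHG-1", "category": "corrective"},
    {"id": "CHG-2", "category": "enhancement"},
    {"id": "CHG-3", "category": "preventive"},
    {"id": "CHG-4", "category": "enhancement"},
]

ratios = category_ratios(changes)
# Flag when too few changes create new value directly (threshold is illustrative).
if ratios["enhancement"] < 0.5:
    print("Warning: under half of changes directly create new value")
```

Tracked over time, a ratio like this gives the organization a concrete target: shrink the reactive share, grow the enhancement share.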

Utilize change metrics and KPIs to understand associated change management risks

Understanding the purpose changes serve allows organizations to more ably quantify changes using appropriate metrics and KPIs. Monitoring change metrics provides feedback to describe ongoing trends, inform resource allocation, and indicate when things are going well or poorly.

Examples of change metrics that can drive agility and continual value creation include:

  • Change success rate
  • Emergency changes implemented per period
  • Change-related incident / problem volume

What metrics matter the most for predicting change risks? Machine learning models can answer that question in all its complexities. An ML algorithm will sift through historical change data and associated failure logs to determine which metrics have the best capacity for predicting future change-associated problems, failures, and incidents. Proposed changes can then be measured in terms of the risks they pose, including how likely the risks are and how disruptive they might be. A scoring model can facilitate rapid response to identified change risks.
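A minimal sketch of this idea, using scikit-learn on made-up historical data: a logistic regression learns which change attributes correlate with past failures, then scores a proposed change. The feature names, data, and model choice are illustrative assumptions, not a prescribed approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per change: [lines touched, services affected, emergency flag]
X_hist = np.array([
    [10, 1, 0], [500, 4, 1], [40, 2, 0], [800, 5, 1],
    [20, 1, 0], [300, 3, 1], [15, 1, 0], [600, 4, 0],
])
y_hist = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = change caused an incident

# Fit a simple classifier on the historical change/failure data.
model = LogisticRegression().fit(X_hist, y_hist)

# Score a proposed change: probability it leads to an incident.
proposed = np.array([[450, 3, 1]])
risk = model.predict_proba(proposed)[0, 1]
print(f"Predicted failure risk: {risk:.2f}")
```

In practice the feature set would be derived from the metrics above (change success rate, emergency-change volume, change-related incidents), and the resulting score is what feeds the scoring model used for rapid response.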

Measuring change metrics using these methods establishes benchmarks for IT to visualize stability within their current operating environment, predict which changes might be inherently risky, and chase continually improving targets to reduce service disruptions while maintaining agility. The use of scoring models also streamlines the approaches needed to address the change risk, reducing the amount of time and energy the CAB needs to address each change and keep the value pipeline moving.

Define roles, responsibilities, and ownership of change metrics

Defining roles and ownership in IT has a unique way of producing results. Establishing accountability in IT can help leaders meet SLAs, implement CSI initiatives successfully, and reduce the amount of resources required to respond to and prevent change-related service disruptions.

Different roles and hierarchies can be established for different contexts, meaning that a person with ownership over one change-related metric may be second-in-command for another.

However, roles and responsibilities should always be clear, documented, and fully understood in order for IT to function efficiently. This level of high functioning is especially important during responses to major incidents, where emergency changes have a high rate of failure and can create new problems rather than solve existing ones.

IT leaders should map stakeholders, roles, and hierarchies so that responsibilities are defined and accountability is understood. None of this needs to introduce rigid processes, either; instead, teams can be given autonomy with ultimate understanding that they are in control of the performance of a given metric, CI, or operational feature.

Model the impact of operational changes before deployment using AI

Predictive IT analytics can allow organizations to quantify change risks and understand what consequences a given change push may have. They can then identify the appropriate response, which may include accepting the change risk, mitigating it by modifying the change, or avoiding the risk by halting the change until it can be made less risky.
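The accept/mitigate/avoid decision above can be sketched as a simple rule over the predicted risk score. The thresholds here are purely illustrative assumptions; a real scoring model would calibrate them against the organization's own risk tolerance.

```python
def risk_response(score: float) -> str:
    """Map a predicted change-failure probability to a response.

    Thresholds (0.2, 0.6) are hypothetical examples, not ITIL guidance.
    """
    if score < 0.2:
        return "accept"    # low risk: proceed with the change as planned
    if score < 0.6:
        return "mitigate"  # modify the change to reduce its risk
    return "avoid"         # halt the change until it can be made less risky

print(risk_response(0.45))
```

A rule like this is what lets a CAB handle low-risk changes automatically and reserve deliberation for the scores that genuinely need it.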

IT operations teams also must be prepared in the event that any high-risk changes fail. For change failures that result in performance degradation, for example, teams can have a back-out plan to restore the prior operating environment state while the change is studied in more detail. For a series of high risk changes that are anticipated to — or already have — resulted in an incident or service disruptions, a change freeze may be in order while the operating environment is stabilized.

Contingency plans allow for quick action in the event that a proposed change does not go as expected. They also encourage IT teams to consider alternative options that may need to become part of the regular process or a decision-making heuristic, given a pattern of past changes that have had negative outcomes. Predictive AI models also can leverage data analytics to alert change teams when they may have a higher risk of a change failure, prompting the need for a contingency plan.

Ensure each change has closure

Every change should be accompanied by a closure process, whether the change was successful or not. The impacts of the change should be monitored and documented. The metadata of the change should be logged so that metrics can identify which changes impact what functions as well as which changes carry certain risks. The CMDB must be updated as certain changes affect the relationship CIs have with one another.

These tasks can be laborious, but partial or total automation can reduce the efforts needed by individual IT members while enhancing efficiency overall. The stages of a change closure process are incredibly important for not only monitoring the presence of risk for service degradation/disruption, but also quantifying change risks with more accurate modeling over time.
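A structured closure record is one way to make this logging automatable; the field names below are assumptions chosen so that closure data can later feed metric and risk models.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ChangeClosure:
    """Hypothetical closure record capturing a change's outcome and impact."""
    change_id: str
    successful: bool
    affected_cis: list          # CIs whose relationships the CMDB must reflect
    incidents_caused: int = 0   # change-related incident volume for metrics
    closed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ChangeClosure("CHG-1042", True, ["app-server-01", "db-cluster"])
print(asdict(record))  # ready to log or push to a metrics store
```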

ITIL change management best practices allow constant value creation in the face of uncertainty

None of the best practices described above are suffocatingly prescriptive, but they do reveal the level of attentiveness and custodial responsibility required for IT operations to deliver continual business value.

“One of the benefits of using a standardized best-practice framework is in ensuring that employees understand their roles and the procedures that they must follow to deliver services and provide a high level of customer support,” notes BMC blogs. At the same time, BMC recognizes that, “The ITIL framework is also intended to give IT support providers a more interactive role in businesses. Instead of providing support in the background, IT departments that utilize this framework are part of the businesses’ overall structure,” meaning that they are a direct part of the value creation chain within their enterprise.

As fewer changes require direct oversight and intervention and more directly beneficial changes are enabled through streamlining and automation, IT organizations as a whole can contribute more to their organization’s bottom line while giving their talent a bigger role in value creation.

Learn about how IT business analytics can help you evolve from reactive to proactive ITSM and change management in our recent webinar: “How to adapt your IT Service & Change Management for a distributed workforce.”
