One of the main objectives of agile product management is to speed up the change review process and implement changes faster. The key to actually having quicker change reviews is to empower the change review teams with contextualized data sourced from across the product pipeline. That data can indicate the level of risk for a given change, and point to the appropriate action to take.
- High risks can be given a thorough review.
- Medium risks can be sent back for quick modifications.
- Low risks can be approved or corrected with minimal modifications.
Developing such a scoring system is possible by combining machine learning (ML) with analytics. Historical data can generate a change risk score, revealing the overall health of a proposed change at a glance. Once a generalized change risk score is generated, IT leaders can drill down to discover the main risk drivers. This type of scoring system not only enables effective risk management but also informs development practices.
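As a minimal sketch of the idea, a change risk score can be modeled as a weighted sum of risk factors, with a drill-down that ranks the factors driving the score. The factor names and weights below are purely illustrative assumptions, not values from any real product:

```python
# Illustrative risk factors and weights (hypothetical, for demonstration only).
RISK_WEIGHTS = {
    "large_change_size": 30,
    "touches_critical_ci": 25,
    "low_test_coverage": 20,
    "team_recent_failures": 15,
    "off_hours_deployment": 10,
}

def change_risk_score(factors: set) -> int:
    """Sum the weights of the risk factors present in a proposed change (0-100)."""
    return sum(RISK_WEIGHTS[f] for f in factors if f in RISK_WEIGHTS)

def top_risk_drivers(factors: set, n: int = 3) -> list:
    """Drill down: rank the contributing factors by their weight."""
    present = [f for f in factors if f in RISK_WEIGHTS]
    return sorted(present, key=lambda f: RISK_WEIGHTS[f], reverse=True)[:n]

change = {"large_change_size", "low_test_coverage"}
print(change_risk_score(change))   # 50
print(top_risk_drivers(change))    # ['large_change_size', 'low_test_coverage']
```

In practice the weights would be learned from historical data rather than hand-set, but the generalized-score-plus-drill-down shape is the same.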
The overall effect of adding analytics is that the majority of low-risk change reviews can be automated, speeding up the process and eliminating the need for the change advisory board (CAB) to meet prior to every single release. Riskier changes can be reviewed by the CAB with the appropriate contextual information, so the board spends less time even on the reviews that are deemed necessary, since the sources of possible risk are already indicated. The result is that the entire organization can complete more changes more quickly without risking a higher volume of escaped defects.
Your own production and development data is key to accelerating change approvals
Operations analytics tools like Digital.ai Analytics allow organizations to obtain a high-level view of the main drivers of change risks. Gathering data from across operations and development is critical for supplying the needed historical data. Historical trends, incident cluster analyses, and predictive models can identify high-risk changes.
- Platforms like Digital.ai Analytics use data adapters to source data directly from key systems of record
- Once the data is aggregated, advanced ML/AI techniques can be applied to it to identify high-risk changes
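The adapter-then-aggregate pattern above can be sketched as follows. The `ChangeRecord` schema, the `ItsmAdapter` class, and its field names are hypothetical placeholders for whatever a real system of record would expose:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class ChangeRecord:
    """A normalized change record shared across all sources (illustrative schema)."""
    change_id: str
    source_system: str
    lines_changed: int
    failed: bool

class DataAdapter(Protocol):
    """Each system of record gets an adapter that emits normalized records."""
    def fetch(self) -> list: ...

class ItsmAdapter:
    """Hypothetical adapter for an ITSM system of record."""
    def __init__(self, raw_rows):
        self.raw_rows = raw_rows

    def fetch(self):
        # Map source-specific field names onto the shared schema.
        return [ChangeRecord(r["id"], "itsm", r["loc"], r["failed"])
                for r in self.raw_rows]

def aggregate(adapters):
    """Pool normalized records from every adapter into one dataset for analysis."""
    records = []
    for adapter in adapters:
        records.extend(adapter.fetch())
    return records
```

Adding a new system of record then only requires a new adapter; the aggregation and downstream analysis are unchanged.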
Historical data on change failures allows the ML model to learn the driving sources of those failures. Failures could be related to several factors, such as specific coding practices, certain configuration item (CI) interactions, or even specific teams and individuals. ML algorithms can be trained to flag high-risk changes, evaluate candidate risk factors, and identify the strongest predictors over time.
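To make that concrete, here is a minimal sketch of training a model on historical change outcomes. It uses a tiny hand-rolled logistic regression and invented toy data (the feature choices — change size, critical-CI involvement, test coverage — are illustrative assumptions, and any real pipeline would use a proper ML library and far more history):

```python
import math

# Toy history: ([normalized change size, touches critical CI, test coverage], failed?)
HISTORY = [
    ([0.9, 1.0, 0.2], 1),
    ([0.8, 1.0, 0.3], 1),
    ([0.7, 0.0, 0.4], 1),
    ([0.2, 0.0, 0.9], 0),
    ([0.1, 0.0, 0.8], 0),
    ([0.3, 1.0, 0.7], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(history, lr=0.5, epochs=2000):
    """Fit logistic-regression weights with plain stochastic gradient descent."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in history:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def failure_risk(w, b, x):
    """Predicted probability that a proposed change fails."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(HISTORY)
risky = failure_risk(w, b, [0.9, 1.0, 0.2])  # large, critical, poorly tested
safe = failure_risk(w, b, [0.1, 0.0, 0.9])   # small, non-critical, well tested
# The learned weights themselves indicate the strongest predictors of failure.
```

The sign and magnitude of each learned weight is what lets the model surface risk drivers, not just a score.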
Go from data to action with a change risk credit score
One use case of analytics is to model change risk using an immediately familiar evaluative tool. Digital.ai Analytics clients, for instance, have used a change risk credit score model. The presence (or absence) of risk factors moves the score within a range similar to that of a consumer credit score. Low scores provide immediate awareness that a set of proposed changes is risky.
A change risk credit score provides immediate information from a single dashboard metric, and teams can drill down into it to determine which factors are driving the risk. Risk can result from certain change factors, such as a change relying on an unpredictable server call; the size of a change is another good predictor, since large releases invite more risk. This level of drill-down even makes it possible to evaluate individual or team performance, while the range of the risk score allows for rapid action.
- Near-zero risk can be approved through automation
- Low risk can be flagged for remediation or a quick internal review before CAB submission, with risk drivers identified
- Medium risks can be addressed through more extensive remediation or flagged for a manual CAB review
- High risks may call for a change freeze or other drastic action to avoid a change-related incident
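The tiered actions above amount to a simple routing function over the score. A minimal sketch, assuming a consumer-credit-style range where higher means safer; the thresholds are illustrative, not from any real deployment:

```python
def route_change(score: int) -> str:
    """Map a credit-score-style change risk score (higher = safer)
    to a review action. Tier thresholds are illustrative assumptions."""
    if score >= 800:
        return "auto-approve"            # near-zero risk: automated approval
    if score >= 700:
        return "quick internal review"   # low risk: remediate pre-CAB submission
    if score >= 600:
        return "manual CAB review"       # medium risk: full contextual review
    return "change freeze"               # high risk: drastic action to avoid incident

print(route_change(820))  # auto-approve
print(route_change(540))  # change freeze
```

Because the routing is deterministic, the bulk of near-zero-risk changes never need a human in the loop at all.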
Make change approval more agile and automated through analytics
Metrics like a change risk credit score give development teams the detail they need to assess and address risks prior to formal review and deployment. Without these in-depth metrics, sparse information can lead to longer review processes. Worse, a lack of contextual information can make CAB review determinations opaque, often to the frustration of development teams. When rejections or remediation requests aren’t backed by data, they can seem arbitrary or lack the detail needed to address the perceived risk.
Objective information sourced directly from systems of record provides transparency, trust, and a single source of truth. Agile teams can get the information they need to create better, less risky changes proactively, reducing the likelihood of CAB disapprovals. This process represents operations and development working in tandem, as the name “DevOps” implies.
CAB review is often a vestige of the desire to command and control the product creation process. Organizations can move past it with tools that automate approval of low-risk changes and address higher risks with context-rich data. Persistent problems revealed by known risk drivers can then be fixed through higher test coverage or smaller change sizes, for example.
The Digital.ai Analytics team has extensively covered the subject of improving the change management process. They suggest that “the answer is to give IT ops teams and CABs access to information that can allow rapid assessment and decision-making. They need self-service business analytics that inform them of both change risks as well as underlying risk factors. This capability can be further complemented through the automation of tasks like low-level change approval or change risk remediation.”
The team goes on to say: “To keep CABs focused on only major changes, expert groups such as Axelos (creator of Information Technology Infrastructure Library — ITIL) suggest modeling standard changes to automate change processes. And some big changes can be broken into smaller change releases, again allowing many changes to follow standardized change models, removing the need for CAB review.”
Analytics not only puts your data to work; it allows everyone to improve the quality of their work while working better together.
Watch our AI-powered analytics webinar now.