“Information is the oil of the 21st century, and analytics is the combustion engine” — Peter Sondergaard, Senior Vice President, Gartner
Over the past decade, emerging technologies have sparked a data revolution. The term “big data” may be somewhat dated now, but its implications still stand firm. Using mass quantities of data — too big for a single person to comprehend, let alone compile — businesses can use analytics to surface insights that drive optimal decision-making.
Business intelligence can be fueled further by artificial intelligence and machine learning. Using AI/ML data models, algorithms quickly surface the most relevant information autonomously. For example, a data model can predict which work items are most likely to contribute to release delays or change failures. Machines stand to benefit from these AI/ML insights, as well. Metrics created using data models to measure factors affecting release quality, change risk, etc. can drive automation. Release cycles then become smoother, more efficient, more predictable, and more consistent in quality.
The end effect of these feedback loops is that baseline data creates its own economy of value. By using historical data to not just learn but also configure systems capable of taking self-supporting action, more value can be created with less effort. This net beneficial effect frees up teams to focus on innovations and expand the scope of what was previously thought possible.
Data doesn’t do any good sitting in silos
DevOps employees understandably focus on the work items at hand as they try to get through the next sprint or release cycle. Focusing on the work itself means teams can lose sight of the factors that make their lives harder or easier, such as team dependencies that cause bottlenecks or frequent work-queue logjams.
The prime question facing DevOps leaders then becomes: how can teams focus on fixing product or process factors when they are so busy just trying to get work done?
Compounding this problem, the currency needed to fuel insights — data — is typically confined to the programs and databases where it is created. Companies may be using best-of-breed tools for agile planning, release orchestration, testing, etc., yet remain unable to create a collective 360° view of their releases from all the data generated therein.
“From a data analytics standpoint, [our best of breed digital] systems come with some fairly significant shortcomings,” laments Digital.ai Director of Product Marketing Amit Shah. “These systems are not meant to talk to each other, and the data sits in silos. This means that answering questions that span multiple systems requires you to manually query and stitch together data.”
Even more unfortunately, the reporting capabilities of individual solutions rarely yield much depth. These systems are designed for pre-made KPIs and reports rather than deep analysis. The end result is that the bigger picture is extremely difficult to assemble, leaving questions like “how do we speed up our release process?” largely unanswered.
Busting through these silos and revealing the value potential of system-of-record data requires AI- and ML-powered business intelligence solutions. The aggregated data can then empower deep, contextualized analysis that depicts a 360° view of release cycles and other key aspects of primary value streams.
Model business data using AI/ML to put all needed insights in one dashboard view
AI/ML data modeling can be used to answer the most important DevOps questions at a simple glance
As an example, Digital.ai Flow Acceleration uses a powerful analytics engine to reveal factors that have the highest tendency to hamper release velocity. Historical release data is considered in light of major factors like the size of releases, the teams contributing to a release build, the total work in progress, and more. Using these factors, AI/ML algorithms develop a model to instantly highlight possible delays for releases currently in the pipeline.
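The delay-prediction idea above can be illustrated with a minimal sketch: score an in-flight release from factors like size, contributing teams, and total work in progress using a logistic-style model. The weights and factor names here are purely illustrative assumptions, not Digital.ai's actual model, which is proprietary.

```python
import math

# Hypothetical weights a model might learn from historical release data
# (illustrative values only, not Digital.ai Flow Acceleration's model).
WEIGHTS = {
    "work_items": 0.02,          # larger releases tend to slip more often
    "contributing_teams": 0.30,  # cross-team dependencies add risk
    "work_in_progress": 0.05,    # high WIP correlates with queue logjams
}
BIAS = -3.0

def delay_probability(release: dict) -> float:
    """Logistic score: estimated probability a release misses its date."""
    z = BIAS + sum(WEIGHTS[k] * release[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

small = {"work_items": 20, "contributing_teams": 1, "work_in_progress": 5}
large = {"work_items": 120, "contributing_teams": 6, "work_in_progress": 30}
```

In a real system the weights would be fit from historical release outcomes; the point of the sketch is that once such a model exists, every release in the pipeline can be scored instantly.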
A product owner or DevOps scrum leader can use the Digital.ai Flow Acceleration dashboard to see that a component of the release in development has a high volume of test failures, for example, or that a work item has been sitting with a team for twice the normal amount of time. Insights like these lead to action! After viewing them, IT leaders can take proactive steps to prevent a delay (or reduce its impact) — long before the predicted delivery date is missed.
In a similar example, Digital.ai Change Risk Prediction models release components and factors associated with a high risk of change failure. Using this information, change approval roles can quickly isolate risky changes from those with a low chance of failure. The net effect is to not only reduce the frequency of change failures but also to accelerate the review process.
Using simple-to-analyze “credit scores” and drill-down, DevOps leaders can see which factors and teams have the highest positive or negative effects when it comes to predicting the operational health of the upcoming release. CABs then are able to focus on releases that present the most risk, while automating the approval of low-risk releases. DevOps leaders can also account for risk-driving factors and take steps to address them within the development cycle.
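The “credit score” triage described above reduces to a simple routing rule: low-risk changes are approved automatically, while CAB attention concentrates on the riskiest ones. The thresholds below are assumptions for illustration; a real CAB would tune them to its own risk appetite.

```python
# Hypothetical triage over a change "credit score" (0-100, higher = safer).
def route_change(risk_score: int) -> str:
    if risk_score >= 80:
        return "auto-approve"     # low risk: skip manual review
    if risk_score >= 50:
        return "standard review"  # moderate risk: normal CAB queue
    return "deep review"          # high risk: focus CAB attention here
```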
The overall effect is that AI/ML-driven insights make it easier for teams to focus on factors that promote release quality, make work easier, accelerate deployment timelines, and result in more value being delivered to end customers.
Use analytics insights to increase automation’s effectiveness
Insights delivered from solutions like Flow Acceleration and Change Risk Prediction can help not just humans but also machines. A high change risk score can trigger more thorough test coverage, for example, using simple rules and commands directly from the Digital.ai Release dashboard. Likely release delays can signal a need to rearrange sprint plans and automatically reassign teams while also alerting the needed personnel.
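The rule-driven automation described above can be sketched as a mapping from model insights to actions. The insight keys, action names, and thresholds here are hypothetical, not the Digital.ai Release API; the point is the pattern of translating predictions into triggers.

```python
# Sketch of insight-driven automation: each rule maps a model insight
# (a probability or score) to one or more follow-up actions.
def actions_for(insights: dict) -> list[str]:
    actions = []
    if insights.get("change_risk", 0) > 0.7:
        actions.append("expand-test-coverage")   # risky change: run full suite
    if insights.get("delay_probability", 0) > 0.5:
        actions.append("alert-release-manager")  # likely delay: notify owners
        actions.append("rebalance-sprint")       # and rearrange sprint plans
    return actions
```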
Using Digital.ai Quality Improvement as an example, analysis of releases in the pipeline can show release components with likely defects while revealing root causes. In many cases, the root cause can be resolved through automated code cleanup. Where manual attention is needed, the root cause can still be communicated in specific terms to the appropriate team via automated alerts and reporting.
At this point, the machines are not only teaching themselves through ML, but also using the lessons learned within these models to drive more efficient processes and higher-quality releases. Customers benefit by receiving more-consistent releases, delivered faster and with fewer defects. Data, once again, becomes the initial fuel rod powering the entire transformative effect.
“Today, developers need the foundation of agility, decision-making power, and speed to deliver solutions on-demand,” emphasizes technologist Yesh Subramanian. “To facilitate this, enterprises must automate the management of development, testing, quality assurance and IT operations to successfully build continuous delivery processes.”
The key within this long-term goal, though, is to surface the needed level of contextual information to derive insights that both humans and AI-driven automation can capitalize on. Put more simply: when organizations leverage AI/ML to make the most of business intelligence, data comes in and expanded value comes out. It’s all part of making the most of modern technologies to consciously improve the way we deliver value to customers — and to society as a whole.