Last Updated Jun 28, 2021

Aggregating and automating data collection across silos can improve data availability for better analytics, reporting, and decision making. 


In DevOps, “data availability” often refers to a state where the app or platform can access necessary data in the operating environment. But it’s not just operational products that require data availability. Internal toolsets for DevOps and InfoSec also need data available from planning, development teams, testing, security, and more. IT decision-makers (ITDM) need access to this data to apply analytics, answer questions, identify change opportunities, and determine new strategic initiatives.

Data generated in corporate environments has traditionally been heavily siloed. Siloed data can still answer questions scoped to a single silo, such as: what is the monthly cost of payroll? It's much harder to answer complex questions about value delivery and optimization, which may involve factors from across the organization. For example, Airbnb uses a combination of metrics that rely on input from the entire organization, not just certain teams.

Manually sourcing data from across silos to generate a specific report can be a months-long project, delaying or even preventing the realization of positive business outcomes. Organizations need a way to punch through silo walls and access data quickly, to generate reports and monitor current performance.

A system to aggregate data can improve data availability for analytics and reporting, but there are other criteria to meet as well. Improving data availability can also make security, compliance, and governance (SCaG) easier, because data flow can be monitored and controlled.

The following five actions will assist you in setting goals for improving data availability.

Aggregate — source data from all key systems of record

The most important step for improving data availability for analytics is to aggregate all data together, creating a single source portal for all data access. Individual tools may have reporting capabilities, but this is not the same as applying analytics broadly to all data. Each tool has a limited view and cannot pull in data from other systems to extend that view. Differences in data sourcing and metric calculation can also introduce bias and skew into reports.

Instead, each individual system of record requires its own data adapter. All this data can then be imported into a single repository, facilitating analysis. Data sources can include planning tools, development tools, and app monitoring environments. Even internal tools like Office365, Slack, or Salesforce can be used to source data. 
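As a rough sketch of this adapter pattern in Python (the tool names, record fields, and fetch_records interface below are illustrative assumptions, not any particular product's API):

```python
from abc import ABC, abstractmethod
from typing import Iterator

class DataAdapter(ABC):
    """One adapter per system of record, emitting normalized dict records."""

    @abstractmethod
    def fetch_records(self) -> Iterator[dict]:
        ...

class PlanningToolAdapter(DataAdapter):
    def fetch_records(self) -> Iterator[dict]:
        # In practice this would call the planning tool's API.
        yield {"source": "planning", "item_id": "PLAN-1", "lead_time_days": 12}

class CIToolAdapter(DataAdapter):
    def fetch_records(self) -> Iterator[dict]:
        # In practice this would query the CI server's build history.
        yield {"source": "ci", "build_id": 42, "result": "passed"}

def aggregate(adapters: list[DataAdapter], repository: list[dict]) -> None:
    """Pull from every system of record into one shared repository."""
    for adapter in adapters:
        repository.extend(adapter.fetch_records())

repo: list[dict] = []
aggregate([PlanningToolAdapter(), CIToolAdapter()], repo)
print(f"{len(repo)} records aggregated from {len({r['source'] for r in repo})} sources")
```

One adapter per source keeps each tool's quirks isolated, so adding a new system of record means adding one class rather than reworking the pipeline.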

Once the data is combined, the analytics system can query it with context, giving decision-makers a 360° view of company activities and product statuses. As an example, a key performance indicator (KPI) is revealed when comparing two data sets, such as average lead time per DevOps step compared to escaped defects later on.
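For illustration, here is a minimal sketch of that comparison, using invented per-release figures; in practice, both series would be queried from the aggregated repository:

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical per-release figures, purely for demonstration.
avg_lead_time_days = [4.0, 6.5, 9.0, 12.5, 15.0]  # average lead time per release
escaped_defects    = [1,   2,   4,   7,   9]       # defects found post-release

# A strong positive correlation suggests longer lead times coincide with
# more escaped defects -- a candidate KPI worth tracking over time.
r = correlation(avg_lead_time_days, escaped_defects)
print(f"lead time vs. escaped defects: r = {r:.2f}")
```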

Automate — make data importing automatic and effortless

Manual data import and synchronization increase the burden on DevOps teams. They also decrease the chances that the importing work will be done at all! Instead, make data import automatic, running on a set cycle. Data import activities should be audited every few weeks, looking for blind spots or issues with data adapters. Automation is the only way to improve data availability without reverting to time-consuming manual tasks.
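A bare-bones sketch of such an import cycle, using only the Python standard library; a production setup would more likely rely on cron, a CI job, or a workflow orchestrator:

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

IMPORT_INTERVAL_SECONDS = 60 * 60  # hourly cycle; tune to your environment

def run_all_imports() -> None:
    """Placeholder for running every data adapter from the earlier sketch."""
    logging.info("import cycle complete")

# Minimal scheduler loop: every cycle is logged, so the periodic audits
# described above can spot broken adapters or blind spots.
while True:
    try:
        run_all_imports()
    except Exception:
        logging.exception("import cycle failed")
    time.sleep(IMPORT_INTERVAL_SECONDS)
```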

Assimilate — make data fit a standard format using a canonical data model

When building a data pipeline, ITDM should be aware of differences in how data is represented. For example, one tool may calculate testing results according to failures caught, whereas another might calculate an overall percentage of integrity. The data sourced by both metrics can be combined and compared, but only when data fits a standard format: a canonical data model (CDM). A CDM may need to be proprietary and developed according to the individual organization’s needs. Once implemented, the CDM allows for rapid, automated standardization of imported data.
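To make this concrete, here is a hypothetical sketch of one CDM record type for test runs, with converters for the two reporting styles mentioned above; the field and function names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class CanonicalTestResult:
    """One record in a (hypothetical) canonical data model for test runs."""
    source: str
    total: int
    passed: int

    @property
    def pass_rate(self) -> float:
        return self.passed / self.total if self.total else 0.0

def from_failure_count(total: int, failures: int) -> CanonicalTestResult:
    """Tool A reports raw failures caught."""
    return CanonicalTestResult(source="tool_a", total=total, passed=total - failures)

def from_integrity_pct(total: int, integrity_pct: float) -> CanonicalTestResult:
    """Tool B reports an overall integrity percentage."""
    return CanonicalTestResult(source="tool_b", total=total,
                               passed=round(total * integrity_pct / 100))

a = from_failure_count(total=200, failures=14)
b = from_integrity_pct(total=150, integrity_pct=96.0)
for r in (a, b):
    print(f"{r.source}: pass rate {r.pass_rate:.1%}")  # now directly comparable
```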

Investigate — make data actionable, easy to query

A data analytics solution should possess both a customizable dashboard and the option to generate ad-hoc reports. Visual data representation can tell a story and prompt action in ways unformatted data can’t. Machine learning and artificial intelligence (ML/AI) can model data in an even more actionable way.

An example of this is establishing a change risk score, akin to a credit score: an objective metric that evaluates the risk that a given change will fail post-deployment, based on modeled factors. Factors can include the size of the change, the number of configuration changes, which teams were involved, and even the day of the month. This data lets you drill down to see sources of change risk, which can be traced to individual practices, teams, or developers. When a leading healthcare service provider discovered that most of its change issues were human-related, it began assigning a score to change managers. This gave managers an opportunity to see and improve their scores, which in turn led to closing out more open problems, going longer between outages, and reducing the time, money, and effort IT leaders needed to address change-related issues.
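As a simplified sketch of such a score; the factor names and weights below are invented for illustration, and a production model would be fit to historical change-failure data (for example, with logistic regression) rather than hand-tuned:

```python
# Illustrative weights over the change-risk factors named above.
RISK_WEIGHTS = {
    "lines_changed": 0.02,    # size of the change
    "config_changes": 1.5,    # number of configuration changes
    "teams_involved": 1.0,    # cross-team coordination cost
    "is_month_end": 3.0,      # day-of-month effect
}

def change_risk_score(change: dict) -> float:
    """Higher score = higher modeled risk of post-deployment failure."""
    return sum(RISK_WEIGHTS[k] * change.get(k, 0) for k in RISK_WEIGHTS)

change = {"lines_changed": 420, "config_changes": 3,
          "teams_involved": 2, "is_month_end": 1}
print(f"risk score: {change_risk_score(change):.1f}")
```

A transparent weighted sum makes it easy to show a manager which factors drove their score; the same factors could later feed a trained classifier.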

Gathering data is just part of this step. Representing the data visually empowers ITDM to ask tough questions, monitor internal performance closely, and even incorporate consumer feedback, closing the loop.

Integrate — incorporate security, compliance, and governance to monitor data and chain of custody

Improving data availability for analytics creates an opportunity to audit current data streams and practices, not just monitor performance. DevOps teams should aim for a closed data conduit governing how data is handled both internally and within the operating environment. A closed data conduit functions like a “sterile environment” in a laboratory: because it's closed, there are fewer chances for contamination, leaks, or intrusion.

This conduit can also reveal SCaG metrics through analytics. One method is to represent compliance with gated steps — such as “check this box to confirm that you performed a smoke test for integrity.” Another option is to analyze data to source metrics for SCaG health. Examples of actionable metrics to track include escaped defects, incident volume, and citations avoided through compliance. Representing SCaG initiatives through analytics can shift priorities left, baking in an understanding of all three initiatives to improve practices and product integrity.
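A hedged sketch of deriving a few of these SCaG metrics from aggregated records; the field names and sample data are assumptions, and real values would come from the repository built in the earlier steps:

```python
# Sample aggregated records: one row per change, plus related incidents.
changes = [
    {"id": 1, "smoke_test_confirmed": True,  "escaped_defects": 0},
    {"id": 2, "smoke_test_confirmed": False, "escaped_defects": 2},
    {"id": 3, "smoke_test_confirmed": True,  "escaped_defects": 1},
]
incidents = [{"id": 101, "change_id": 2}, {"id": 102, "change_id": 2}]

# Gated-step compliance: share of changes where the gate was confirmed.
gate_compliance = sum(c["smoke_test_confirmed"] for c in changes) / len(changes)
escaped_total = sum(c["escaped_defects"] for c in changes)

print(f"gated-step compliance: {gate_compliance:.0%}")
print(f"escaped defects: {escaped_total}, incident volume: {len(incidents)}")
```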

Answers should come quickly, not after months

When data and analytics reports are readily available, more DevOps team members can be self-driven when it comes to improving common pain points. Self-service business analytics allows anyone to investigate issues or see the drivers of success or failure.

Clear understanding improves transparency and accountability, paving the way for improved practices and results. When reporting is manual and cumbersome, the answer to the original question may be irrelevant because so much time has elapsed. Businesses should reach for a standard of low latency/low effort in order to solve common problems.

When leveraging analytics, organizations can also customize the dashboard for best fit, adapting it to their culture. An example is how Rogers Communications uses a single sign-on tool that shows custom views tailored to job roles, allowing business users to see “their world” once they log in. The overall effect is more transparency, more alignment, and more focus on improving value delivery, both inside and out.

Better data availability ultimately leads to better decisions and positive business outcomes. Learn how to integrate the development process and its underlying data with strategic and operational processes, in our webinar: The key to amazing business outcomes is data.
 
