This post is from the Numerify blog and has not been updated since the original publish date.
How to Start Analyzing Your IT Process Data Today
The thought of transitioning to a data-centered culture, where all of your IT process data is consistently analyzed and used for decision making, can feel overwhelming. You may think your data isn't ready, or you may not know where to start. Many organizations implement their IT Service Management (ITSM) systems without considering the implications for analytics, under the assumption that they will retool them later.
However, in our experience, your existing data is often more than "good enough" to begin making proactive changes. This means you can start the process of gaining insight into your IT process today — not tomorrow, and not whenever a long list of "nice to haves" in your data can be satisfied.
You don't even necessarily need to wait for consensus; you can start small and foster iterations that grow in ambition with each success. Once incremental changes are being made and it can be demonstrated that there is value worth extracting from IT process data, stakeholder buy-in can be earned, paving a path towards more sweeping changes and more advanced analytics maturity.
The central tenets of this "start now" approach are:
- Start with a strategic goal or IT initiative
- Unlock the insights within your data needed to draw a roadmap towards that goal, no matter what challenges you face
- Seek easy wins by identifying "low hanging fruit" opportunities that can make the biggest impact with the smallest changes
- Demonstrate the progress from the baseline to illustrate how much more can be accomplished once barriers standing in the way of further IT data insights are removed
To demonstrate these principles in action, consider the following use cases and the lessons that can be extracted from them.
Use case #1: Improve application resiliency by text mining incident tickets
The challenges faced by the Major Incident Management (MIM) team at a Fortune 50 healthcare provider could easily pass for what some might imagine as a "worst case scenario" environment for making impactful changes.
Since the MIM team had no ownership over the ITSM platform, they resorted to a completely text-based ticketing and incident tracking system. This process sufficed on an incident-to-incident level, but obtaining a big picture view of incidents required manually reviewing each text entry and manually compiling data. User inputs for the system were, unsurprisingly, inconsistent. This led to data entry inconsistencies, like describing the same Customer Relationship Management (CRM) system as SalesForceDotCom (SFDC) or Salesforce.com, depending on the person who entered the information or how they were feeling that day.
Metrics like the Mean Time To Resolve (MTTR) incidents were nearly impossible to glean, as was information on outages or affected applications. Everything about the process of reporting on the text-based data was manual, labor intensive, and unreliable.
The MIM team did not seek to tackle these problems directly. Instead, their strategic initiative-based priority was to improve applications and infrastructure resiliency overall.
To help them accomplish this goal, they applied Natural Language Processing (NLP) analytics models to interpret the text-based tickets in an automated fashion. The resulting analysis helped the MIM team produce a Mean Time Between Failure (MTBF) metric, allowing them to identify the applications or infrastructural components most likely to fail. These insights, in turn, revealed opportunities to dramatically reduce outages of mission-critical applications while all but eliminating the time spent manually scraping data from text entries.
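The core of this approach can be sketched in a few lines. The sketch below is purely illustrative: the alias table stands in for a full NLP normalization model, and the incident records, field names, and timestamps are hypothetical, not the team's actual data.

```python
from datetime import datetime

# Hypothetical alias table; a real NLP model would learn these mappings
# rather than relying on a hand-maintained list.
ALIASES = {"salesforcedotcom": "Salesforce", "sfdc": "Salesforce",
           "salesforce.com": "Salesforce"}

def normalize_app(raw: str) -> str:
    """Collapse inconsistent free-text application names to one label."""
    return ALIASES.get(raw.strip().lower(), raw.strip())

def mtbf_hours(failure_times: list[datetime]) -> float:
    """Mean Time Between Failures: average gap between consecutive outages."""
    ordered = sorted(failure_times)
    gaps = [(later - earlier).total_seconds() / 3600
            for earlier, later in zip(ordered, ordered[1:])]
    return sum(gaps) / len(gaps)

# Illustrative incidents scraped from text tickets: (raw name, outage time).
incidents = [
    ("SFDC", datetime(2020, 1, 1, 9)),
    ("Salesforce.com", datetime(2020, 1, 11, 9)),
    ("SalesForceDotCom", datetime(2020, 1, 31, 9)),
]

# Group outages under the canonical application name, then compute MTBF.
by_app: dict[str, list[datetime]] = {}
for raw_name, when in incidents:
    by_app.setdefault(normalize_app(raw_name), []).append(when)

for app, times in by_app.items():
    print(app, round(mtbf_hours(times), 1))  # gaps of 240 h and 480 h -> 360.0
```

Without the normalization step, the three spellings would be counted as three different applications and no MTBF could be computed for any of them; that is why name consolidation comes before the metric.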
Use case #2: Building trust in the CMDB by prioritizing the most important asset fields
A Fortune 150 food service company wanted to be able to audit their existing assets but had an extremely incomplete and unreliable Configuration Management Database (CMDB) serving as their system of record. The gaps led to a lack of governance around tasks like discovery scanning and true-up, and the resulting lack of trust in the CMDB led, in turn, to a lack of use.
They prioritized making strategic improvements that could have the biggest impact. Accordingly, they analyzed which CMDB fields were most likely to be missing, which allowed the company to identify the fields most critical to their understanding and accounting of assets. The missing fields could be grouped by region, product model, associated department(s), and other drill-down factors.
With this analysis, the company was able to set goals for improving particular fields and make big gains quickly. At the same time, they revealed opportunities to improve CMDB use and accuracy, such as eliminating non-critical fields that tended to create issues rather than provide value.
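A field-completeness analysis like the one described above can be sketched simply. The asset records and field names below are hypothetical; a real run would operate on a CMDB export and its actual schema.

```python
# Illustrative CMDB rows; None represents an empty field in the export.
assets = [
    {"name": "srv-01", "region": "EMEA", "owner": None,  "model": "R640"},
    {"name": "srv-02", "region": "EMEA", "owner": None,  "model": None},
    {"name": "srv-03", "region": "AMER", "owner": "ops", "model": "R640"},
]

def missing_rates(rows, fields):
    """Fraction of records with each field empty, highest rates first."""
    rates = {f: sum(r.get(f) in (None, "") for r in rows) / len(rows)
             for f in fields}
    return dict(sorted(rates.items(), key=lambda kv: kv[1], reverse=True))

print(missing_rates(assets, ["owner", "model", "region"]))
# owner is empty in 2 of 3 records, model in 1 of 3, region in none
```

Ranking fields by missing rate (and then slicing the same counts by region or model) is what lets a team set concrete completeness targets for the few fields that matter most, rather than trying to fix the whole CMDB at once.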
Use case #3: Directly measuring vendor performance
The final example use case concerns a Fortune 150 industrial company that relied upon a large, complex, and varied web of vendors. None of the vendors' Service Level Agreements (SLAs) were digitized, and the company relied largely on the vendors' self-reporting to determine whether SLAs were being met. Since vendors have an obvious incentive to report full compliance, the company struggled to objectively evaluate which relationships generated the most — or the least — value.
Digitizing the SLAs allowed the company to automate the process of evaluating whether vendors were meeting their targets. The company could then enforce contractual performance-based penalties and rewards accordingly. They transformed from a reactive position where they only engaged vendors after a problem occurred to a proactive one where performance and SLA compliance were always on the radar.
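Once SLA targets are digitized, the compliance check itself is straightforward. The vendor names, targets, and ticket data below are hypothetical stand-ins; the real targets would be extracted from the contracts themselves.

```python
# Hypothetical digitized SLA targets: maximum hours to resolve a ticket.
SLA_TARGET_HOURS = {"VendorA": 8, "VendorB": 24}

# Illustrative resolved tickets: (vendor, actual hours to resolve).
tickets = [
    ("VendorA", 6.5), ("VendorA", 12.0),
    ("VendorB", 20.0), ("VendorB", 22.5),
]

def compliance_rate(tickets, targets):
    """Share of tickets each vendor resolved within its SLA target."""
    met, total = {}, {}
    for vendor, hours in tickets:
        total[vendor] = total.get(vendor, 0) + 1
        met[vendor] = met.get(vendor, 0) + (hours <= targets[vendor])
    return {vendor: met[vendor] / total[vendor] for vendor in total}

print(compliance_rate(tickets, SLA_TARGET_HOURS))
# VendorA met 1 of 2 targets (0.5); VendorB met both (1.0)
```

Computing the rate from the company's own ticket timestamps, rather than from vendor self-reports, is what makes the resulting penalties and rewards enforceable.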
To improve IT process data analysis, make the most of what you have, then focus small to win big
In all of the cases above, there was no long wind-up phase to bring the organizations to a state of data perfection before progress could be made. Instead, the specific departments engaged were given tools that could analyze the data they had right then to identify the ripest opportunities for improvement.
By focusing on specific strategic initiatives, these gains could have a large impact while acting as milestones towards more ambitious and sweeping improvements. IT leaders shouldn't start small in terms of impact, but rather start small in terms of scale of effort.
Establishing a baseline and working towards graspable goals can help organizations paint a picture of what they don't yet know and what they need to reach a fuller understanding. With each accomplishment, IT leaders gain another example of the "art of the possible" at their disposal. These demonstrations can drive ownership of proactive changes while at the same time motivating key stakeholders to want to change.
Every journey begins with a single step, after all, so start today to make your future brighter and richer with opportunity.
Learn about these strategies and see each use case in-depth in our recent webinar: "How to Analyze Your IT Process Data as it Stands TODAY"