This post is from the Collabnet VersionOne blog and has not been updated since the original publish date.
4 Basic Building Blocks Needed to Measure DevOps Flow
I recently wrote a blog post describing the importance of precisely measuring business value as it flows through the DevOps value stream. In that post, I described two different ways to measure DevOps flow, and how those insights can help dramatically reduce DevOps wait time. In this post, I'll outline the four simple but critically important building blocks that you'll need to have in place before measuring DevOps flow. Each is required, regardless of which tools you're using.

A Note About Business Value

In DevOps, we talk a lot about "flow." So what is flowing through our DevOps machines? Presumably, it's business value. Or more precisely, potential business value. It would be awesome if we could associate actual business value realized, in dollar terms, with each bit of new code that flows through DevOps, but we're not there yet. Instead, we have to assume that our agile methods are producing a continuously groomed backlog. Hopefully, this backlog represents a prioritized list of the current "most valuable" business opportunities. If that is the case, each backlog item carries with it the next most important bit of business value. So when we talk about flow, we mean business value in the form of some new software capability that should represent the next most valuable business opportunity. Now, on to the four building blocks of DevOps flow.

1. Affiliation

Affiliation is a method of correlating individual backlog items with changes in source code during development. Backlog items are often the "atomic unit of flow," representing business value, in flight, as it moves through the DevOps value stream. Connecting backlog items with source code is how we join DevOps data with the data generated in agile planning or agile lifecycle management (ALM) tools. This allows us to affiliate (or connect) each commit with its parent backlog item.
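In practice, affiliation is often implemented by embedding backlog item keys in commit messages and building an index from them. Here is a minimal sketch in Python; the key pattern, item IDs, and commit data are illustrative assumptions, not details from this post:

```python
import re

# Hypothetical backlog item key pattern, e.g. "S-10123" (a story) or
# "D-20456" (a defect); adjust the regex to match your tracker's keys.
ITEM_KEY = re.compile(r"\b[SD]-\d{4,6}\b")

def affiliate(commits):
    """Map each backlog item key to the commits that reference it.

    `commits` is an iterable of (sha, message) pairs, e.g. pulled
    from `git log`. Returns {item_key: [sha, ...]}.
    """
    index = {}
    for sha, message in commits:
        for key in ITEM_KEY.findall(message):
            index.setdefault(key, []).append(sha)
    return index

commits = [
    ("a1b2c3", "S-10123 add password reset endpoint"),
    ("d4e5f6", "refactor mailer (no story)"),
    ("0718ab", "S-10123 fix reset token expiry; closes D-20456"),
]
print(affiliate(commits))
# {'S-10123': ['a1b2c3', '0718ab'], 'D-20456': ['0718ab']}
```

Commits with no recognizable key (like the refactor above) simply drop out of the index, which is one reason teams enforce commit-message conventions when they start measuring flow.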
Next, we follow these commits through the continuous integration (CI) build process so that we can connect backlog items with the binary artifacts in which each child commit first appears. Collectively, this chain of data connects each backlog item with a) each of its child source code commits, and b) the artifacts that contain those commits.

2. Artifact Diffing

Artifact diffing is the process of comparing two compiled versions of the same code and expressing the net change between them as a list of backlog items that have been added or changed. Unlike source code files, which are text based and easily compared, binary artifacts require some external system to record and track the incremental backlog item changes contained in every new build or artifact version.

3. Delivery Mapping

Delivery mapping captures the delivery phases, or steps, that backlog items progress through as they flow from development to end users. It is critical for value stream models to include people, tools, and processes, both automated and manual. Delivery phases should also include time spent in wait states and cross-team handoffs such as compliance, change management, security, and any other internal process that must be exercised in the course of software delivery.

4. Work Item-Level Tracking

Work item-level tracking is a method of tracking individual backlog items, in real time, as they transition from source code to binary artifacts during the handoff from development to delivery. One of the natural side effects of a healthy CI/automated-build process is an explosion in the number of artifacts that a single team can and will generate. It is not uncommon for a single team to generate more than 100 artifacts per day. I recently asked a survey question to a group of 300 webinar attendees and was surprised to learn that the average enterprise team generates more than 800 builds per production deployment.
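With affiliation data in hand, the artifact diffing described in building block 2 reduces to a set comparison: the net change between two versions of an artifact is the set of backlog items affiliated with one version but not the other. A minimal sketch, assuming a hypothetical external registry (the artifact names, versions, and item keys are made up for illustration) that the build pipeline maintains, since binaries themselves cannot be diffed as text:

```python
def diff_artifacts(registry, artifact, old_version, new_version):
    """Return the backlog items added between two versions of an artifact.

    `registry` maps (artifact, version) -> set of affiliated backlog
    item keys. This is the "external system" that records which items
    each build contains.
    """
    old_items = registry[(artifact, old_version)]
    new_items = registry[(artifact, new_version)]
    return new_items - old_items

# Hypothetical record built up by the CI pipeline:
registry = {
    ("billing-service", "1.4.0"): {"S-10123", "D-20456"},
    ("billing-service", "1.5.0"): {"S-10123", "D-20456", "S-10130", "S-10142"},
}
print(sorted(diff_artifacts(registry, "billing-service", "1.4.0", "1.5.0")))
# ['S-10130', 'S-10142']
```

Running the same comparison between the last deployed build and the candidate build at each delivery phase is the basis of work item-level tracking: it tells you exactly which backlog items are in flight at that moment.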
Since only a few builds are ever active in the delivery progression at any given moment, the vast majority of builds are "discarded." This results in a complex phenomenon I call "change pooling" that severely frustrates the process of accurately tracking the flow of backlog items. Change pooling is worth its own separate blog post (coming soon!). Work item-level tracking solves the change pooling problem, providing a highly precise accounting of each backlog item as it moves through the delivery progression.

I hope this has encouraged you to take a deeper look at how your organization tracks the value being delivered to customers. These four basic building blocks for measuring DevOps flow are just the starting point, so if you want to learn more, check out the other blog posts in this series and the How to Measure DevOps Performance webinar.

1. How Measuring DevOps Performance Increases Enterprise Agility
2. Measuring DevOps Performance Using a Value-Based Approach
3. 3 Performance Measures That Can Help Reduce DevOps Wait Time