The Scaled Agile Framework® (SAFe®) from Scaled Agile, Inc. promises benefits in four areas: engagement, productivity, time to market, and quality. It also cites the improvements companies have reported after adopting SAFe. So how do you know whether you are getting these benefits? One challenge is that we rarely have a baseline set of metrics from before the transition, but once we have adopted SAFe and VersionOne we can begin to measure, and from that point we can see how our organisation is performing.
Engagement
VersionOne as a platform can provide information about three of these parameters: productivity, time to market, and quality. Engagement lies outside VersionOne and is measured through other means. Sometimes we simply ask people whether they are happy, and discuss it at the team level during retrospectives. Harder measurements are available in the form of staff turnover, improvement ideas submitted, and workplace surveys.
Productivity
Let's focus first on the productivity measure. We are concerned with team productivity here, so we are interested in the size of work delivered for a given amount of effort. I avoid a value measure at this level: once a story is ranked highest, it should also represent the highest value that can currently be delivered, and ensuring that the highest value flows through the teams' capacity is the product owner's responsibility. I therefore take productivity to be the amount of work performed over time, relative to the resources consumed. We would like to know what the productivity is and, more importantly, how it is improving over time.

The simplest measure of productivity is the velocity of the teams, and in aggregation the productivity of the Agile Release Trains (ARTs) and Value Streams: points delivered divided by the effort expended to achieve those points. This approach requires that teams have a measured velocity and a consistent meaning for a story point; note that I do not require teams within an ART or Value Stream to share a common idea of a story point. A productivity measure can also be taken at the programme and portfolio levels using the swag attribute within VersionOne Lifecycle.

To visualise these concepts, I recommend starting with the velocity report, where the planning level navigator lets you choose whether you are looking at velocity for a Value Stream, an ART, or a team. While productivity as an absolute value is not comparable across teams using these points, the improvement in productivity can and should be measured: as teams and ARTs become more productive, their velocity grows. These measurements require that the points estimated are stable and are not being gamed.
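As a minimal sketch of the "measure the improvement, not the absolute value" idea, suppose we have exported per-sprint velocities from the velocity report (the numbers below are purely illustrative, not real data):

```python
# Hypothetical sprint velocities (points per sprint), exported from the
# velocity report -- illustrative numbers only.
velocities = [21, 24, 23, 27, 26, 30]

# Relative improvement between the first and last sprint in the window.
# Points are not comparable across teams, but this ratio is meaningful
# within a team whose story-point size is consistent.
improvement = (velocities[-1] - velocities[0]) / velocities[0]
print(f"Velocity improvement over {len(velocities)} sprints: {improvement:.0%}")
```

The same arithmetic applies whether the points are aggregated at team, ART, or Value Stream level.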
Targets for productivity should be about the improvements that can be made, and they should be expressed in relative terms, for example a 10% improvement in velocity over 6 sprints. These improvement efforts form a sequence of PDSA cycles conducted as part of a learning organisation.

These measures of productivity can be seen in a number of places in VersionOne. The velocity report uses the project setting to define the scope of the work: at a Value Stream setting it shows the points delivered in each sprint across all teams, which can be used to measure the increase in productivity of Value Streams, ARTs, and so on. In addition, the detail estimate trend shows the estimates and the done work over time, that is, work measured in points delivered compared with work planned. (In one example I have seen, the planned effort had clearly been inflated by an overly ambitious product owner.) As with the velocity trend, this measure is affected by capacity and effort. To understand the effort side of the equation, VersionOne provides reports showing the effort trend: the hours consumed over time, scoped by the selected planning level.

Combining this information, we can build a report showing the estimates delivered divided by the effort expended, organised by time or by sprint. This is how we can start to use VersionOne to view the productivity of the teams. Where teams are not recording their time in tasks, I suggest using the head count of the teams in these reports instead. The important measure is the increase in productivity, not the absolute productivity itself. Points may not let us compare teams with one another, but they do let us measure improvement, as long as a point is a consistent size within a team.
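The points-divided-by-effort calculation described above can be sketched as follows, assuming hypothetical per-sprint exports of points delivered and hours logged (neither the numbers nor the export format are real VersionOne data):

```python
# Hypothetical per-sprint data: points delivered and effort hours logged.
# Where teams do not record time in tasks, head count per sprint could be
# substituted for hours.
points_done = [21, 24, 23, 27]
effort_hours = [400, 410, 395, 405]

# Productivity per sprint: points per hour of effort. The absolute value
# is not comparable across teams; only its trend within a team matters.
productivity = [p / h for p, h in zip(points_done, effort_hours)]
change = productivity[-1] / productivity[0] - 1
print(f"Productivity change over {len(points_done)} sprints: {change:+.1%}")
```

A target such as "improve velocity by 10% over 6 sprints" is then simply a threshold on this relative change.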
Time To Market
The next measure to consider is time to market. VersionOne Lifecycle can measure a portion of this cycle, and when enhanced with VersionOne Continuum™ for DevOps, more of the path can be measured; let's consider the Lifecycle-only case for now. For time to market we look at a feature: this is the item we measure from when work begins until it is implemented. Time to market can be built up from a report on the time spent in each status value. For example, we can look at the main status values for a feature and see where the time is going; the breakdown, build, and definition stages could be worth examining to help target improvements.

In addition, VersionOne Lifecycle lets us see the range of times taken for stories of various sizes. We would expect the plotted points to follow a roughly linear upward progression. Bear in mind, however, that these times are more than simple touch times; they include wait time as well. We are interested in the spread of these dates: bringing them into a narrower band would indicate better estimation and better control of the wait times in the process, and attention to these wait times is where further improvement in time to market is most likely to be found.

Another useful report type is the cumulative flow diagram, which can be used to look for variations in flow such as high Work In Progress. In one example chart, much of the work sat in the planning stage (shown with a status of None), and the flat implemented layers were an indication of something needing investigation.
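To make the time-in-status idea concrete, here is a small sketch using an invented status history for a single feature (the statuses and dates are hypothetical, standing in for whatever your Lifecycle report exports):

```python
from datetime import date

# Hypothetical status history for one feature: (status, date entered).
history = [
    ("Definition",  date(2021, 3, 1)),
    ("Breakdown",   date(2021, 3, 8)),
    ("Build",       date(2021, 3, 15)),
    ("Implemented", date(2021, 4, 12)),
]

# Days spent in each status before Implemented. Note this includes wait
# time, not just touch time.
for (status, start), (_, end) in zip(history, history[1:]):
    print(f"{status}: {(end - start).days} days")

# Time to market for this feature: from work starting to implemented.
total_days = (history[-1][1] - history[0][1]).days
print(f"Time to market: {total_days} days")
```

Aggregating these per-feature totals over many features gives the range of values whose spread we want to narrow.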
Quality
Quality in software terms is usually taken to mean the number of defects present. A number of reports within VersionOne Lifecycle can present the quality of the delivered product. The defect trend shows the number of defects in the various status values. Perhaps more interesting is the relationship between stories and defects, including the number of defects generated per story and how that trends over time. The Enterprise Dashboard report called Defects Created vs Stories Closed can provide some interesting information about this.
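The story-to-defect relationship reduces to a simple ratio per sprint; as a sketch, with invented counts in the spirit of the Defects Created vs Stories Closed report:

```python
# Hypothetical per-sprint counts, in the spirit of the "Defects Created
# vs Stories Closed" dashboard report -- illustrative numbers only.
defects_created = [12, 10, 9, 7]
stories_closed  = [20, 22, 21, 24]

# Defects generated per closed story, per sprint; a falling ratio
# suggests the quality of delivered work is improving.
ratios = [d / s for d, s in zip(defects_created, stories_closed)]
print([round(r, 2) for r in ratios])
```

As with productivity, the trend of this ratio within a team is more informative than any single sprint's value.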
Summary
While it is useful to have these reports, providing the metrics that support the promises of SAFe, there is more to it. With SAFe we need to operate the PDSA cycles described by W. Edwards Deming: design your cycles based on your measurements, observe the effect they have on your business results, and then plan again, and again … Continuum is a trademark of VersionOne Inc. Scaled Agile Framework and SAFe are registered trademarks of Scaled Agile, Inc.