Lean | Agile metrics focus on the flow of value from an organization to its customers, and on workflow and getting tasks done. Lean | Agile-specific metrics therefore focus on predictable software delivery, ensuring that Lean | Agile teams deliver maximum value to customers with every Iteration.
Lean | Agile KPIs have three major goals:

  • To measure deliverables of the Lean | Agile team and understand how much value is being delivered to customers.
  • To measure the effectiveness of the Lean | Agile team: its contribution to the business in terms of ROI, time to market, and so on.
  • To measure the Lean | Agile team itself in order to gauge its health and catch problems like team turnover, attrition and dissatisfied developers.


Measuring Deliverables

The following metrics can help measure the work done by Lean | Agile teams and value delivered to customers:


1. Iteration Goal Success

An Iteration Goal is an optional part of the Lean | Agile framework, but it answers three questions: Why are we carrying out the Iteration? How do we reach the Iteration goal? What metric tells us the goal has been met? For example, a goal might be delivering a feature, addressing a risk, or testing an assumption.
By defining Iteration goals and then measuring how many Iterations met them, you get a qualitative assessment of a Lean | Agile team’s work: not just how many story points were completed, but how frequently the objectives of the business were met.
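As a minimal sketch, the success rate can be computed from a record of whether each Iteration met its goal. The data below is hypothetical; in practice it would come from your Iteration records.

```python
# Hypothetical outcomes: did each of the last 8 Iterations meet its goal?
iteration_goals_met = [True, True, False, True, True, False, True, True]

# Percentage of Iterations whose goal was met.
success_rate = sum(iteration_goals_met) / len(iteration_goals_met) * 100
print(f"Iteration Goal success: {success_rate:.0f}%")  # 75%
```

Tracking this percentage over time shows whether the team is consistently hitting business objectives, independent of raw story-point output.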


2. Escaped Defects and Defect Density

Escaped defects is a crucial metric that shows how many bugs were experienced by users in production. Ideally, a Lean | Agile team should fully test stories and completely avoid escaped defects. In reality, this rarely happens, but the trend of escaped defects is a good signal of product quality.
Defect density measures the number of defects relative to software size, for example per thousand lines of code (KLOC). While this metric can easily be skewed, it is valuable in fast-moving deliveries for checking whether growth in defects is “normal” given the growth of the underlying codebase.
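A sketch of the calculation, using the common defects-per-KLOC convention (the release figures below are made up for illustration):

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# Hypothetical release: 18 escaped defects in a 45,000-line codebase.
print(f"{defect_density(18, 45_000):.2f} defects/KLOC")  # 0.40 defects/KLOC
```

Comparing this ratio release over release is more meaningful than comparing raw defect counts, since the codebase is growing at the same time.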


3. Team Velocity

Velocity measures how many user stories were completed by the team, on average, in previous Iterations. It assists in estimating how much work the team is able to accomplish in future Iterations.
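A minimal sketch of how average velocity feeds a forecast, assuming hypothetical story-point totals from recent Iterations:

```python
import math

# Hypothetical story points completed in the last five Iterations.
recent_velocities = [23, 30, 27, 24, 26]

# Average velocity: points the team typically completes per Iteration.
avg_velocity = sum(recent_velocities) / len(recent_velocities)

# Rough forecast: Iterations needed to burn down a 130-point backlog.
backlog_points = 130
iterations_needed = math.ceil(backlog_points / avg_velocity)
print(f"avg velocity {avg_velocity}, ~{iterations_needed} Iterations left")
```

Velocity is team-specific and should not be compared across teams; its value is in forecasting a single team's own future capacity.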


4. Iteration Burndown, Cumulative Flow Diagram, and Control Charts

The Iteration burndown chart is the classic representation of progress within an Iteration. It shows the number of hours remaining to complete the stories planned for the current Iteration, for each day during the Iteration. The Iteration burndown shows, at a glance, whether the team is on schedule to complete the Iteration scope or not.

A cumulative flow diagram (CFD) is a tool borrowed from queuing theory. It is an area graph that depicts the quantity of work in each state, showing arrivals, time in queue, quantity in queue, and departures. Cumulative flow diagrams appear throughout the agile software development and lean product development literature. Some consider a CFD a more sophisticated version of a burndown chart because it tracks changes in queue size per state, with a stronger focus on identifying and rooting out the causes of dramatic changes in throughput.

A Control Chart can show the cycle time or lead time for your product, version, or Iteration. The horizontal x-axis in a Control Chart indicates time, and the vertical y-axis indicates the number of days issues have spent in the selected statuses. A Control Chart helps you identify whether data from the current Iteration can be used to determine future performance. The less variance in the cycle time of an issue, the higher the confidence in using the mean (or median) as an indication of future performance.
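The "variance" check behind a Control Chart can be sketched with basic statistics over per-issue cycle times (the sample data is hypothetical):

```python
import statistics

# Hypothetical cycle times, in days, for issues closed this Iteration.
cycle_times = [2, 3, 2, 4, 3, 2, 5, 3]

mean_ct = statistics.mean(cycle_times)    # typical cycle time
stdev_ct = statistics.stdev(cycle_times)  # spread around the mean
print(f"mean {mean_ct:.1f} days, stdev {stdev_ct:.2f} days")
```

A small standard deviation relative to the mean suggests the mean (or median) is a trustworthy predictor of future cycle times; a large one suggests the process is too variable to forecast from.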


Measuring Effectiveness

The following metrics can help assess the effectiveness of Lean | Agile teams in meeting business goals:

1. Time to Market

Time to market is the time a delivery takes to start providing value to customers, or the time it takes to start generating revenue. The first can be calculated by counting the Iterations (multiplied by Iteration length) before a Lean | Agile team releases to production. The second could be longer, depending on the organization’s alpha and beta testing strategy.
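The first calculation reduces to simple arithmetic; a sketch with hypothetical numbers:

```python
# Hypothetical: two-week Iterations, first production release after six.
iteration_length_days = 14
iterations_to_first_release = 6

# Time to market (first definition): calendar days until production value.
time_to_market_days = iteration_length_days * iterations_to_first_release
print(f"time to market: {time_to_market_days} days")  # 84 days
```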

2. ROI

Return on Investment (ROI) for a Lean | Agile delivery calculates the total revenue generated from a product vs. the cost of the Iterations required to develop it. Lean | Agile has the potential to generate ROI much faster than traditional development methods, because working software can be delivered to customers very early on. With each Iteration, Lean | Agile teams create more features that can translate into growth in revenue.
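A sketch of the calculation, treating cost as the Iterations spent so far (all figures hypothetical):

```python
def roi(revenue: float, cost: float) -> float:
    """Return on investment as a percentage of cost."""
    return (revenue - cost) / cost * 100

# Hypothetical: 8 Iterations at $40k each, $500k revenue generated so far.
iteration_cost = 8 * 40_000
print(f"ROI: {roi(500_000, iteration_cost):.1f}%")  # ROI: 56.2%
```

Because Lean | Agile ships working software early, this figure can turn positive while the product is still being built, rather than only after a final release.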

3. Customer Satisfaction

There are several well-known metrics used to measure customer satisfaction. One is the Net Promoter Score (NPS), which measures whether users would recommend the software to others, do nothing, or recommend against it. Using a consistent customer satisfaction metric and measuring it for every release indicates whether the Lean | Agile team is meeting its end goal: to provide value to customers.
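NPS uses a 0–10 "how likely are you to recommend" survey: respondents scoring 9–10 are promoters, 0–6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A sketch with hypothetical survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return (promoters - detractors) / len(scores) * 100

# Hypothetical survey responses collected after a release.
print(nps([10, 9, 8, 7, 9, 6, 10, 4, 9, 8]))  # 30.0
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters); the trend across releases matters more than any single value.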


Monitoring the Lean | Agile Team

These metrics can help a Lean | Agile team monitor its activity and identify problems early on, before they impact development:

1. Lean | Agile Stand-ups and Iteration Retrospective

These two Lean | Agile events, if carried out regularly with well-documented conclusions, can provide an important qualitative measurement of team progress and process health.

2. Team Satisfaction

Surveying the Lean | Agile team periodically to see how satisfied they are with their work can provide warning signals about culture issues, team conflicts or process issues.

3. Team Member Turnover

Low turnover (replacement of team members) in a Lean | Agile team indicates a healthy environment, while high turnover could indicate the opposite. Also contrast this metric with overall company turnover, which can impact the Lean | Agile team.
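A common way to quantify this is turnover as a percentage of average headcount over a period; a sketch with hypothetical team numbers:

```python
def turnover_rate(departures: int, start_headcount: int, end_headcount: int) -> float:
    """Turnover as a percentage of average headcount over the period."""
    avg_headcount = (start_headcount + end_headcount) / 2
    return departures / avg_headcount * 100

# Hypothetical: a 9-person team lost 1 member during the year
# (and ended the year back at 9 after hiring a replacement).
print(f"annual turnover: {turnover_rate(1, 9, 9):.1f}%")  # 11.1%
```

Computing the same figure for the whole company gives the baseline to contrast the team's rate against.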

Which Metrics to Report to Stakeholders?

The most important thing stakeholders need to know about your Lean | Agile delivery is whether it is on track. The following metrics might help communicate this, and explain deviations from the expected delivery path:

  • Iteration and release burndown: gives stakeholders a view of your progress at a glance.
  • Iteration velocity: a historical review of how much value you have been delivering.
  • Scope change: the number of stories added to the delivery during the release, which is often a cause of delays (many agile tools can show this automatically).
  • Team capacity: how many developers are on the team full time? Has work capacity been affected by vacations or sick leave? Developers pulled off to side work?
  • Escaped defects: provides a picture of how your software is faring in production.


Software Quality

One thing has been missing from all the Lean | Agile metrics we covered: software quality. The escaped defects metric alone provides a view of quality, and it is an imperfect one, identifying quality issues only after they are released to production.

Quality is critical to Lean | Agile deliveries. Stories completed do not provide value unless they are tested and working as the customer expects. Existing tooling only provides fragmented stats, such as unit test coverage and the number of tests executed; it does not provide a good picture of the overall quality status.

At the Program Portfolio level, all of these metrics would be rolled up across all Lean | Agile Teams.

It is recommended that you hold introspections at the end of every Iteration, milestone, and quarter. Apart from reviewing the metrics, you want to capture what went well, what did not go well, and how to improve. At the team and Program Portfolio level, Speed Boat and dot voting are popular techniques for identifying issues. With either technique, you would then do root cause analysis on the top issues; Fishbone (Ishikawa) analysis and the 5 Whys are popular approaches for root cause analysis.

At the Program Portfolio level, you will need multiple facilitators, such as an APM and Servant Leaders, since the audience will be much larger. A time-box of approximately 90 minutes should be allocated. Feedback and discussion solicited in the Program Portfolio Retrospective should stay at the Program Portfolio level, not the team level; team-level issues are addressed in team-level retros. Attendees will be Program and Leadership stakeholders and representatives from the Lean | Agile Teams.

Capture and discuss what went well and what did not go well in the Release (at the Program level). With multiple Lean | Agile Teams and a tight time-box, it is suggested to solicit as much feedback as possible prior to the Program Portfolio Retro. That way, the time can be spent categorizing and ranking the top issues based on dot voting by the teams. Given the tight time-box, it is important for the Agile Program Manager to facilitate the dot voting with stakeholders and the Lean | Agile Teams. Each person gets three dots to mark on the issue cards: they can spread the dots across three issue cards, put all three on one card, or put one dot on one card and two on another.
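Tallying the dots is straightforward; a sketch with hypothetical ballots, where each attendee distributes three dots across issue cards (repeats allowed):

```python
from collections import Counter

# Hypothetical ballots: each attendee's three dots, by issue card.
ballots = [
    ["flaky tests", "flaky tests", "slow builds"],
    ["unclear stories", "slow builds", "slow builds"],
    ["flaky tests", "unclear stories", "slow builds"],
]

# Count all dots and rank the issue cards by total votes.
tally = Counter(dot for ballot in ballots for dot in ballot)
for issue, dots in tally.most_common():
    print(f"{issue}: {dots} dots")
```

The top-ranked cards then become the candidates for root cause analysis.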

The 5 Whys and Fishbone analysis help teams understand how to improve for the next Release or Quarter. We suggest having each Lean | Agile Team do the analysis for the same problem statement, then compare and discuss to reach consensus.

Proposed resolutions to the issues discussed in these introspection sessions can translate into work items for the teams to implement in upcoming Iterations and following weeks, based on prioritization of the backlog. It is also important to give demos at certain intervals, such as at the end of an Iteration, quarter, or major release. At the Program Portfolio level, we call these demos Showcases because they span all teams and the audience is typically much larger. Feedback at these sessions should be filtered and prioritized by the Product Managers and Backlog Owners.