This article is also available in French

How can agile principles help frame and measure success or failure, while learning from it?

Measure a feature

Measurement: a misused principle, yet an essential one

Indicators often used within an agile team become useless as soon as they are turned into tools for controlling human performance:

  • Must velocity be constant? It will be. Backlog items will be estimated upwards or downwards to meet the expectations of the hierarchy
  • Must velocity increase every quarter? It will increase. The same feature estimated at “3” points in sprint 1 will be estimated at “5” points in the first sprint of the following quarter, again to meet the expectations of the hierarchy
  • No technical debt allowed? It will be made invisible, again to meet the expectations of the hierarchy, as I explained a few weeks ago
  • Must the number of delivered features be constant? It will be, again to meet the expectations of the hierarchy, thanks, perhaps, to shadow-velocity (read my guide on this subject)

Indicators are essential for the team and should be used only for self-evaluation purposes.

What about the measure of impact, then?

The team’s self-assessment measures do not indicate whether a delivered feature is actually useful, however well built it may be.

To measure the real impact of a feature, the Product Owner has a reliable and accurate tool: the probe.

The probe is a technical brick added to a feature in order to collect data on its use.
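In practice, a probe can be as simple as an event recorder attached to the feature's entry point. Here is a minimal sketch in Python, assuming the feature is a function and events are kept in a local store; in a real product the events would be shipped to an analytics backend, and the names (`probe`, `USAGE_EVENTS`, `export_to_pdf`) are purely illustrative:

```python
import functools
import time

# In-memory event store standing in for a real analytics backend.
USAGE_EVENTS = []

def probe(feature_name):
    """Decorator that records one usage event per call of the feature."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            USAGE_EVENTS.append({
                "feature": feature_name,
                "timestamp": time.time(),
            })
            return func(*args, **kwargs)
        return wrapper
    return decorator

@probe("export_to_pdf")
def export_to_pdf(document):
    # The feature itself is unchanged; the probe observes it from outside.
    return f"{document}.pdf"

export_to_pdf("report")
export_to_pdf("invoice")
print(len(USAGE_EVENTS))  # 2 events collected
```

The point of wrapping the feature rather than editing it is that the probe can be added when the feature ships and removed once the question is answered, without touching the feature's own code.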


This data then points to actions to take. If a feature is unused or rarely used, there are several possible reasons:

  • Is this really a user need? If the idea was born in the head of a marketer without a prior study, it is possible that users do not yet see the utility of the feature, or lack the ability to exploit it
  • Are users aware of it? Has the feature been announced and explained?
  • Is the feature accessible to the right target? If a feature is limited to a specific type of user, the indicators must only take into account the behavior of that target
  • Is the feature usable? Do users know how to use it? Maybe it is too complicated?
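The third question above, restricting indicators to the right target, can be sketched as a small aggregation over probe events, assuming each event carries the feature name and the user's segment (the field names and segments are hypothetical):

```python
from collections import Counter

# Hypothetical probe events: feature name plus the segment of the user.
events = [
    {"feature": "export_to_pdf", "segment": "premium"},
    {"feature": "export_to_pdf", "segment": "free"},
    {"feature": "export_to_pdf", "segment": "premium"},
]

# If the feature targets premium users only, count only their behavior:
# the free-segment event must not inflate the usage indicator.
target_usage = Counter(
    e["feature"] for e in events if e["segment"] == "premium"
)
print(target_usage["export_to_pdf"])  # 2
```

Without the filter the indicator would read 3 and overstate adoption within the actual target.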

Failure is important and necessary. If we know how to learn from it, then we can act.

For example, we can set up a marketing action to announce and explain the feature. We can also organize user interviews to understand users better.

You also have to know how to kill a feature. If it brings nothing, it becomes useless code that still has to be maintained.


Thanks to these indicators, it is not only possible to improve the product, but also to improve oneself.

If the marketing campaign did not work, we must be able to question ourselves so as not to miss the next one.

If the need was not there, how can we avoid building a useless feature next time?

If the probes failed to explain why the feature was not successful, then what can we learn from that?

If you want to know more and discover other tools that complement the probes, you can look into change (an upcoming article will be dedicated to it)