When Your KPI Is Not Apples With Apples
by Stacey Barr
Performance measurement is about comparison. But we need to be very careful to compare apples with apples, not apples with oranges. What does this mean?
An apples-with-apples KPI comparison is one that gives us an accurate quantification of a difference in performance. An apples-with-oranges KPI comparison is when the difference is not due entirely to a difference in performance, but also a difference in context.
To get the truth from our KPIs, our comparisons need to be apples-with-apples. And that means we need to be aware of four types of comparison that can each fall victim to an apples-with-oranges situation:
- Comparing one measure to another different measure
- Comparing the same measure across different entities
- Comparing one measure value to another value of the same measure
- Comparing a numerator to a denominator within the value of a single measure
1. Comparing one measure to another different measure
For an apples-with-apples comparison between one measure and a different measure, we need to check that each measure relates to the same context, such as timeframe or population or other relevant factors.
Two common metrics used to track COVID-19 are Number of New Cases and Number of New Deaths, and these are often reported daily in the media. Comparing them to draw a conclusion, like "deaths are rising while cases are falling", is not an apples-with-apples comparison. That's because the timeframes are staggered: the new deaths recorded today are not the result of the new cases recorded today; the deaths came from cases that were recorded two or more weeks ago.
Before we draw conclusions from comparisons of the different measures with each other, we need to check that they relate to the same timeframe, population and any other contextual factors.
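To make the timeframe check concrete, here is a minimal Python sketch of a lag-aligned comparison. It assumes a hypothetical 14-day lag from case report to death report (the real lag varies), and the daily figures are invented for illustration:

```python
# Illustrative sketch only: align each day's new deaths with the cases
# that likely produced them, assuming a hypothetical 14-day reporting lag.

LAG_DAYS = 14

def lagged_case_fatality_ratio(daily_cases, daily_deaths, lag=LAG_DAYS):
    """Compare each day's deaths to the cases reported `lag` days earlier.

    daily_cases, daily_deaths: lists of counts indexed by day number.
    Returns a list of (day, deaths / lagged_cases) for days where the
    lagged case count exists and is non-zero.
    """
    ratios = []
    for day, deaths in enumerate(daily_deaths):
        source_day = day - lag
        if source_day >= 0 and daily_cases[source_day] > 0:
            ratios.append((day, deaths / daily_cases[source_day]))
    return ratios
```

Dividing today's deaths by today's cases would mix two different cohorts; shifting the denominator back by the lag keeps both numbers referring to (roughly) the same group of people.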
2. Comparing the same measure across different entities
For an apples-with-apples comparison when we are comparing the same measure construct across different entities, we need to check for two apples-with-oranges problems:
- that each entity is using the same definition for the measure
- that differences in the measure across each entity are normalised or standardised
For example, we might want to compare Lost Time Injury Frequency Rate (LTIFR) between teams, or between geographical regions, or with other organisations. This type of measure-to-measure comparison is common in benchmarking, too. But if each team or region or organisation has a different way of calculating LTIFR, the comparison will not be apples-with-apples.
And with COVID-19, a common comparison is the number of cases by country. On the World Health Organisation's COVID-19 dashboard, countries are compared on a colour scale based on a simple count of cases. Are these meaningful differences? No, because each country's population size influences its ranking more than its COVID performance does.
Before we draw conclusions from comparisons of the same measure between different entities, we need to check that they share the same data definitions, calculation formula, and normalisation.
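As a toy illustration of normalisation, the snippet below converts raw case counts to cases per 100,000 people. The country names and figures are invented, not real data:

```python
# Hypothetical figures for illustration only: normalising raw case
# counts by population makes entities of different sizes comparable.

def cases_per_100k(raw_cases, population):
    """Normalise a raw case count to cases per 100,000 people."""
    return raw_cases / population * 100_000

countries = {
    "Country A": {"cases": 500_000, "population": 300_000_000},
    "Country B": {"cases": 50_000, "population": 5_000_000},
}

# Raw counts rank Country A far above Country B, but per capita
# Country B's rate is several times higher.
normalised = {
    name: cases_per_100k(c["cases"], c["population"])
    for name, c in countries.items()
}
```

The same idea applies to LTIFR, which is conventionally normalised per million hours worked rather than per head.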
3. Comparing one measure value to another value of the same measure
For an apples-with-apples comparison of a measure value to another value of the same measure, we need to check that each measure value has come from the same calculation.
For example, an energy distribution company measured the reliability of its electricity supply using an industry-standard measure, SAIDI. The executive team noticed a signal that the measure was improving. But before they could celebrate, their measurement specialist quickly pointed out that the change was not due to improved performance. It was due entirely to an industry-wide agreed change in the calculation of SAIDI.
Before we draw conclusions from comparisons based on changes in a measure’s values over time, we need to check that the signal of change is not caused by a change in the data definition or calculation formula.
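One simple safeguard is to tag each measure value with the version of the calculation that produced it, and only compare values that share a version. A sketch of that idea in Python, with made-up SAIDI figures and version labels:

```python
# Sketch: compare consecutive measure values only when they were produced
# under the same calculation version. Periods, versions and values below
# are invented for illustration.

def comparable_pairs(values):
    """Yield (earlier_period, later_period, change) for consecutive
    values that share a calculation version.

    values: list of (period, version, value) tuples in time order.
    """
    for (p1, v1, x1), (p2, v2, x2) in zip(values, values[1:]):
        if v1 == v2:
            yield (p1, p2, x2 - x1)

saidi = [
    ("2019-Q4", "v1", 120.0),
    ("2020-Q1", "v1", 118.0),
    ("2020-Q2", "v2", 95.0),   # calculation changed: not comparable with Q1
    ("2020-Q3", "v2", 93.0),
]
```

The apparent drop from 118.0 to 95.0 is exactly the kind of false signal the executive team almost celebrated: it spans a version boundary, so this sketch refuses to treat it as a change in performance.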
4. Comparing a numerator to a denominator within the value of a single measure
For an apples-with-apples comparison within a measure calculated as a rate, we need to check that the numerator has come from the same context as the denominator.
For example, you might be surprised how many companies measure Sales Conversion Rate as the number of sales this month divided by the number of sales meetings this month. But that gives us an apples-with-oranges comparison: at least some of those sales in the numerator may have come from sales meetings in previous months. The apples-with-apples solution is to set the denominator as the number of sales meetings for the month, and the numerator as the number of those meetings that turned into a sale within (say) 3 months of the sales meeting. Yes, there is a lag in this measure, because sales are not always immediate.
Before we draw conclusions from measures based on a numerator and a denominator, we need to check that what is counted in the numerator is a logical subset of what is counted in the denominator.
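The cohort-style Sales Conversion Rate described above could be sketched like this in Python. The field names, the 3-month window, and the meeting records are illustrative assumptions, not a prescribed implementation:

```python
# Sketch of a cohort-based conversion rate: the denominator is one month's
# meetings, and the numerator counts only those same meetings that turned
# into a sale within a (hypothetical) 3-month window.

from datetime import date

WINDOW_MONTHS = 3

def months_between(d1, d2):
    """Whole-month difference between two dates."""
    return (d2.year - d1.year) * 12 + (d2.month - d1.month)

def conversion_rate(meetings, month):
    """Conversion rate for the cohort of meetings held in `month`.

    meetings: list of dicts with 'meeting_date' (date) and, if the
    meeting converted, 'sale_date' (date).
    month: (year, month) tuple identifying the cohort.
    Returns the converted share, or None if the cohort is empty.
    """
    cohort = [m for m in meetings
              if (m["meeting_date"].year, m["meeting_date"].month) == month]
    if not cohort:
        return None
    converted = [
        m for m in cohort
        if m.get("sale_date") is not None
        and 0 <= months_between(m["meeting_date"], m["sale_date"]) <= WINDOW_MONTHS
    ]
    return len(converted) / len(cohort)
```

Because `converted` is filtered from `cohort`, the numerator is by construction a subset of the denominator, so the rate can never be inflated by sales that came from earlier months' meetings.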
Make sure your KPI comparisons are apples-with-apples by checking that four things match: data source, calculation formula, timeframe and scope.