5 Analysis Methods That Make Us Misinterpret KPIs

November 27, 2018 by Stacey Barr

Using the wrong analysis methods makes us misinterpret our KPI signals, and leads us to make the wrong decisions.

The typical ways we misinterpret our KPIs lead us to make the wrong decisions.

We feel the need to explain every little change we can see in our measures – every time they go up or down. This variation-explanation rollercoaster ride is the consequence of making limited comparisons.

Limited comparisons make us draw the wrong conclusions from our measures.

Limited comparisons don't give us true signals, because they are based on assumptions that aren't true. They cause a lot of waste in our decision-making: we react to changes that are not there, or miss changes that are, essentially because we're ignoring the natural variability that each and every measure has.

Every measure has its own unique amount of natural variability. Reducing unwanted variation is almost always the best way to improve performance. But variation will always be there, and we need to learn how to detect real changes in performance, despite the variation.

The most common limited comparisons that we are conditioned to make when we use KPIs or performance measures are these:

  1. Calculating month to month differences.
  2. Focusing on a linear trendline.
  3. Smoothing variation with a rolling or moving average.
  4. Using only the current year’s data.
  5. Setting tolerance limits.

Are you practicing one or more of these? Maybe it’s time to rethink them…

Don’t compare month to month.

Or this month to the same month last year. Or this month to a target or a quota. All these point-to-point comparisons assume that two points of data are sufficient to conclude that a change occurred.

Two points are not sufficient to conclude a change has occurred. That’s because every measure or KPI has a unique amount of natural variation. Natural variation is simply the result of complexity in our business environments.

What affects your daily weight? It might be the amount you ate yesterday, the amount of water you’ve been drinking, the amount of inflammation in your body, the amount of sweating you did during your morning workout. You won’t ever really know if your weight is changing unless you figure out your normal weight variation, and look for consistent patterns of change over a week or more.

Instead of comparing month-to-month, get a feel for the normal amount of variability in your KPI. We get a feel for this by using at least 5, but ideally around 20, historic values in our KPI time series. Look at the up and down movement of the KPI over time. We don't want to react to that natural variation; we only want to react to real changes in performance. And we'll see any real changes when we use more historic data, because we'll see a change in the pattern of variation.
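To see why two points are never enough, here is a minimal Python sketch using a made-up, perfectly stable KPI series: nothing about the underlying performance changes, yet almost every month-to-month difference looks like a "change" worth explaining.

```python
# A hypothetical stable KPI: the underlying performance never changes,
# the values just bounce around within their natural variation.
kpi = [50, 55, 48, 53, 47, 52, 49, 54, 46, 51, 50, 53]

# Month-to-month differences
deltas = [b - a for a, b in zip(kpi, kpi[1:])]

# Treat any move bigger than 2 as "worth explaining" -- a typical reaction
flagged = sum(1 for d in deltas if abs(d) > 2)
print(f"{flagged} of {len(deltas)} month-to-month moves look like 'changes'")
```

Ten of the eleven monthly moves exceed the threshold, yet every one of them is just noise.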

Don’t focus on linear trends.

Just because Excel can fit a trend line through your KPI's time series doesn't mean there is a trend. Trend lines assume a gradual, consistent rate of change. But most KPIs or performance measures don't behave that way.

It's more realistic that a KPI shows no change for a period of time, followed by a sudden shift, then more steady state, and maybe an outlier, then a smaller shift to a new steady state. Just think about the unpredictable changes in our market or economy, the different change initiatives we execute, and the varying levels of resources we have available. All these things can cause non-linear changes in our KPIs.

A trendline cannot explain these disjointed changes in the pattern of variability of our performance measure. But these different changes really need to be explained. We want to know what impact the economy might be having. We want to know if the change initiative worked. We want to know if we need to change our resource planning. That’s what managing performance is all about.

You can see just how poorly trend lines explain KPIs by using the R-squared statistic (Excel can add this to trendlines as an option). R-squared measures the proportion of the variation in the data that the trend line explains. It varies from 0 to 100%, and the higher the value, the better the fit. Most trend lines I've seen explain less than 10% of the variation in the data, and that's just not enough to be even remotely useful.
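If you'd rather not rely on Excel, the R-squared of a least-squares trend line is easy to compute directly. This sketch uses made-up values for a KPI that simply varies around a constant level; the fitted line explains well under 10% of the variation.

```python
def r_squared(y):
    """R-squared of a least-squares straight line fitted to y over time 0..n-1."""
    n = len(y)
    x = list(range(n))
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Least-squares slope and intercept
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # R-squared = 1 - (residual sum of squares / total sum of squares)
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# A KPI with natural variation but no real trend
kpi = [50, 55, 48, 53, 47, 52, 49, 54, 46, 51, 50, 53]
print(f"R-squared: {r_squared(kpi):.1%}")
```

Here the trend line explains a fraction of one percent of the variation: there is no trend to find.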

Don’t smooth variation out of your data.

Moving averages, or rolling averages, are when each value we plot on a time series is the average of the previous X values of our KPI. For example, X is 12 for a 12-month moving average. The general effect is that the bumps in our data – which might be seasonal or chaotic – are smoothed out of our KPI's time series.

It's why rolling or moving averages are so popular, even for data that is NOT seasonal or cyclical. People fundamentally do not understand statistical variation. They assume that variation is too messy to help them interpret their KPIs. They want to smooth out that variation so they can see the trend in the data over time. But, just like trendlines, smoothing assumes that change over time is smooth. And it often isn't.

As for seasonal KPIs, there are better ways to remove cyclical variation from our KPI to make the signals of change over time easier to detect, rather than hiding all the signals. We can only see real signals of change when we can see changes in the pattern of variability. So smoothing it out is helping no one.
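Here's a rough illustration, with made-up data, of what smoothing does to a genuine signal: a sudden, sustained shift that is obvious in the raw series gets spread across a whole year by a 12-month moving average.

```python
def moving_average(series, window):
    """Average of the previous `window` values, available from month `window` on."""
    return [sum(series[i - window:i]) / window for i in range(window, len(series) + 1)]

# 12 steady months around 50, then a sudden sustained shift to around 70
kpi = [50, 51, 49, 50, 52, 48, 50, 49, 51, 50, 49, 51,
       70, 71, 69, 70, 72, 68, 70, 69, 71, 70, 69, 71]

smoothed = moving_average(kpi, 12)
print("raw jump between months 12 and 13:", kpi[12] - kpi[11])
print("smoothed values:", [round(v, 1) for v in smoothed])
```

The raw series jumps by 19 in a single month; the smoothed series never moves by more than about 2 per month, turning an abrupt shift into what looks like a gentle year-long trend.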

Don’t focus on the current year only.

At the start of each new financial or calendar year, many people ditch the prior year’s data and just start reporting the current year, as it progresses. Their reasoning is that it’s the current year they are managing, so they only need the current year’s data.

The flaw in this thinking is the assumption that none of the performance dynamics of the previous year will affect this year. The world does not start anew each year. And last year’s data (and possibly the year before that, in addition) gives context for interpreting what’s happening now.

In part it's because one or two or three points of data are not enough to understand performance. A KPI's first few months of values in a new year won't give us any insight at all. We need at least 5, and often quite a few more, points in our KPI time series before we can reliably detect any signals of change. We'll get insights more quickly if we use the KPI's historic data to set a baseline for comparison.

Don’t use tolerance limits.

When we compare this month’s KPI value with a tolerance limit or zone, we’re doing no better than the first limited comparison of month-to-month comparisons. We’re still assuming that two points of data are sufficient to conclude that a change occurred. In this case, the two points of data are this month’s actual value and the tolerance value.

It’s just another point-to-point variation explanation rollercoaster ride. We focus on the difference between two values, and then label the difference as good or bad. And then we set up silly traffic lights that alarm us into looking for the cause or explanation. But if the natural variation in our measure is larger than the tolerances we set, we’ll waste resources looking for a cause that doesn’t exist. Natural variability is often the product of randomness, not an assignable cause.

And the flipside: if our measure's natural variation is quite small relative to our tolerances, a real change in the pattern of variation can happen within the tolerance limits. So it goes unnoticed.
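A small made-up example of the first failure mode: a stable KPI whose natural variation (roughly 46 to 55 here) is wider than a ±2 tolerance band around the target, so half the months trigger alarms even though nothing has really changed.

```python
# Hypothetical tolerance band of +/-2 around a target of 50
tolerance_low, tolerance_high = 48, 52

# A stable KPI series -- same underlying performance throughout
kpi = [50, 55, 48, 53, 47, 52, 49, 54, 46, 51, 50, 53]

# Months that breach tolerance despite no real change in performance
false_alarms = [v for v in kpi if not (tolerance_low <= v <= tolerance_high)]
print(f"{len(false_alarms)} of {len(kpi)} months breach tolerance with no real change")
```

Every one of those breaches would send someone hunting for a cause that does not exist.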

When using limited comparisons, we make two big mistakes.

The two basic mistakes we make when we make limited comparisons with our KPIs cause us to make the wrong decisions from our performance information most of the time. Statistician and expert teacher of understanding variation, Donald Wheeler, describes these two mistakes as:

  1. we miss a signal that the process has changed
  2. we interpret noise as a signal

And these mistakes happen because we're ignoring the natural variability that every measure inherently has. Stop making these limited comparisons. Instead, focus on the two comparisons that matter most when monitoring performance quantitatively:

  1. How does current performance compare to the past?
  2. How does current performance compare to where we want it to be?

And the only way to do this reliably, easily and consistently is with XmR charts.
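The XmR calculation itself is simple. This is a minimal sketch (with made-up monthly values, not a full XmR implementation): the natural process limits are the series average plus and minus 2.66 times the average moving range, using the standard constant from Donald Wheeler's work.

```python
def xmr_limits(values):
    """Return (centre line, lower natural process limit, upper natural process limit)."""
    # Moving ranges: absolute differences between consecutive values
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    centre = sum(values) / len(values)
    # 2.66 is the standard XmR constant for individual values
    return centre, centre - 2.66 * avg_mr, centre + 2.66 * avg_mr

# Around 20 historic monthly values to establish the baseline
history = [50, 55, 48, 53, 47, 52, 49, 54, 46, 51,
           50, 53, 49, 52, 48, 55, 47, 51, 50, 54]
centre, lnpl, unpl = xmr_limits(history)
print(f"centre {centre:.1f}, limits {lnpl:.1f} to {unpl:.1f}")

# A new monthly value outside the limits is a signal, not noise
new_value = 66
print("signal!" if not (lnpl <= new_value <= unpl) else "just natural variation")
```

Values inside the limits are natural variation and need no explanation; a point outside them (like the 66 here) is a genuine signal worth investigating.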

Use the wrong analysis method, like month-to-month variations, trendlines and moving averages, and you’ll misinterpret your KPI signals.




