The Single Most Important Reason Why Performance Measures Should Be Scientific

by Stacey Barr

Whether you’re measuring hard facts like sales, revenue and cycle time, or softer perceptions like satisfaction and agreement, those measures need to be scientific. Here’s why.

On the radio recently, I listened to an interview with a researcher at a prominent university about a study he conducted to learn what influences children to grow into adults with drinking problems.

He learned, from scientifically designed longitudinal studies of cohorts of school children, that those children introduced to alcohol at a young age were more likely to have a drinking problem when they grew up.

When the program host opened up the phone lines to hear questions and comments from his listeners, I was appalled. Every caller declared that the researcher was wrong or his study was flawed or that he made it all up. Their evidence for such accusations was that their personal experience was different to the research findings.

The callers believed that because they could offer an exception then that meant the study had to be wrong.

Have you seen this happen with your performance measures? A measure shows that sick leave rates are increasing, and a manager refutes it because her staff were all at work last week.

Science, and measurement, help us draw conclusions about populations, not about individuals. There will always be exceptions to the rule (though scientific findings aren’t really about creating rules anyway). There will always be diversity and variability in how the results play out for individuals.

But that doesn’t mean we can’t make good use of what we learn about the population. If we have learned that children who are given even small and infrequent amounts of alcohol are much more likely to have an alcohol problem later in life, why wouldn’t we do something to change that?

Another study might identify another factor that we could act upon too.

Performance measures are no different to scientific findings. They tell us something about the big picture, something we can act on so that performance can take as big a step as possible toward the ideal.

People who refute scientifically produced findings are letting their personal biases drive their decision-making. And that’s the whole point of taking a scientific approach to measuring and learning about our world: to remove personal bias.


How do your colleagues interpret performance measures? As useful information to act on the big picture? Or do they spend too much time refuting what the measures have to say because it differs from their personal experience? Share your thoughts!

Speak Your Mind


  1. Awesome post! But first I have to ask…because I’m guessing you may be too young to have seen (or remember seeing) the original “Secret Life of Walter Mitty” starring Danny Kaye, is it as good? I have a hard time believing Ben Stiller can live up to Danny Kaye, but I would expect the special effects to be much better.

    One thought on your post (it is SO right on target). What do you think about the habit of researchers making declarative statements such as: “those children introduced to alcohol at a young age were more likely to have a drinking problem when they grew up” vs. couching it with something like, “data shows that a high percentage of children who have been introduced to alcohol at a young age end up having a drinking problem when they grew up.” I ask because, while I totally agree that people shouldn’t discount research based on their personal experiences, there is something to be said for not being determinant without enough data. This is why I like to think of all measures as indicators.

    Could there have been other factors? Let’s say our research covered the Depression era? Or if there were other factors that helped push those exposed to alcohol at a young age to abuse it later? The part that’s eating at me is the “…likely to…” Of course we’re not seeing the entire research (it would take much more than a blog post), but I’ve found abuse in both directions (those who try to discount data because of experience and those who try to promote their indicators as predictors).

    • Stacey Barr says:

      Marty, I haven’t seen the original Walter Mitty… but yes, the special effects (rather, cinematography) were fabulous.

      What you always advise about always reporting data in context to avoid misinterpretation is right. Most researchers I am aware of will do this properly, such as “data shows that children who have been allowed to consume even small amounts of alcohol are Z times more likely to have a drinking problem when they grow up than those children who have not consumed any alcohol.” Or something like that.

      When research findings are presented to non-researchers (I include myself in that category most of the time!) I see no reason why those findings can’t be articulated in statements as simple as the one I just suggested. It’s more actionable and most people won’t even understand the research design and analysis, let alone have the inclination to read it all.

  2. Rod Jacka says:

    Hi Stacey, nice post.

    Trying to combat this issue is probably one of the most important jobs that any measurement expert (and scientist, consultant, etc.) should do. The media is so full of stories that highlight the single case to prove or disprove the general theory. Daniel Kahneman and many others have done some very interesting research into this area with the availability heuristic or bias.

    It’s no wonder that this is such a thorn in our side 🙂

    Keep up the good work!

  3. Rich Torr says:

    Hi Stacey,

    Ben Goldacre’s Bad Science writing does a brilliant job of exposing these types of distortions of science in the media.

    Your post inspired me to finish a piece about cognitive bias and decision making in IT organisations.


Upcoming KPI Training

>> North America, Online Interactive, 6-10 February 2023 - SOLD OUT

>> Africa, Online Interactive, 27 February - 3 March 2023

>> UK & Europe, Online Interactive, 27 February - 3 March 2023

>> North America, In-Person, Calgary AB, 7-9 March 2023

>> Australia/NZ/Asia Pacific, Online Interactive, 15-16 & 20-22 March 2023

>> Africa, In-Person, Dubai UAE, 3-5 May 2023

>> Australia/NZ/Asia Pacific, In-Person, Wellington NZ, 9-11 May 2023

>> UK & Europe, Online Interactive, 22-26 May 2023

>> North America, Online Interactive, 29 May - 2 June 2023

Register for the next PuMP Blueprint Workshop near you
