Should You Measure the Positive or the Negative?

by Stacey Barr

People occasionally ask me whether we should measure the positive (what we want more of) rather than the negative (what we want less of). Mostly they believe it should be the positive. But when does it matter? Or does it ever matter?


The answer, as it is for most things, is that it depends. It depends on the risks either way. There are three common arguments people make for avoiding measures framed in the negative, but let's explore why following those arguments is often not worth the risk.

Where attention goes, energy flows.

One argument against measuring in the negative is that it keeps our attention on what we don’t want. This would mean that if we measure workplace accidents, then our attention is on injuring people rather than keeping them safe.

But really, why can’t our attention be on the shrinking gap between the current injury level and zero, through a trajectory of improvement targets? Our energy, then, could flow in the direction of finding the solutions that continue to shrink that gap.

Bad news is depressing.

Another argument is that it’s depressing to monitor what we don’t want. But the role of KPIs is not to manage our mood. It’s to incite action. Anyway, it’s much more depressing to suddenly have a huge system crash to deal with because we monitored the percent of system uptime within tolerance…

… and therefore failed to see the small but significant lead indication of an increase in system downtime, which we can see in this XmR chart:

Particularly with rare events or small failure rates, signals of change can be drowned out when we only monitor the positive.
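To make that concrete, here is a minimal sketch of the XmR calculation in Python. The monthly downtime figures are made up for illustration (they are not the data from the article's chart), but they show the pattern: a jump in downtime minutes falls well outside the natural process limits, while the same change expressed as an uptime percentage moves by less than a fifth of a point.

```python
# A minimal XmR (individuals and moving range) chart calculation.
# The downtime data below is illustrative, not from the article.

def xmr_limits(values):
    """Centre line and natural process limits for an XmR chart."""
    centre = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant (3 / d2, where d2 = 1.128 for n = 2)
    return centre, centre - 2.66 * avg_mr, centre + 2.66 * avg_mr

# Hypothetical downtime in minutes per month (a 30-day month has 43,200 minutes).
downtime = [38, 45, 41, 36, 44, 40, 39, 120]  # final month triples

centre, lower, upper = xmr_limits(downtime[:-1])  # baseline: first 7 months
new_point = downtime[-1]

print(f"Natural process limits: {lower:.1f} to {upper:.1f} minutes")
print(f"New month: {new_point} minutes -> signal: {new_point > upper}")

# The same change expressed as an uptime percentage barely moves:
minutes_in_month = 43_200
uptime_before = 100 * (1 - 40 / minutes_in_month)
uptime_after = 100 * (1 - 120 / minutes_in_month)
print(f"Uptime: {uptime_before:.2f}% -> {uptime_after:.2f}%")
```

Plotted as uptime, 99.91% versus 99.72% is invisible on a 0–100% axis; plotted as downtime on an XmR chart, the new point sits far above the upper natural process limit and demands attention.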

Up should always mean good.

In dashboards and reports, many people like to have all the measures framed so up means good and it’s thus faster to interpret each measure. If your philosophy is that measures should be monitored by the tick-and-flick method, and out of the way as fast as possible, then fine. Do that. But you leave the insight behind.

Measures work together like characters in a story, and each measure has its own personality. We need to spend time with each measure, appreciate its interactions with the others, and interpret it only within that context.

What story do these three call centre measures tell you?

They tell me that whatever the call centre team did to reduce call handling time, it had an adverse impact on the rate that customer issues were resolved in the first call. And we see that echo in the caller satisfaction trend. But after a few months, it seems the team has discovered how to resolve more calls more quickly, within the first call.

It’s best not to set blanket rules about how to measure.

We want our measures to focus us on improving performance, and that means they need to give us quick and accurate signals of change, and trigger us into action when action is needed. That's much more important than whether we're measuring the positive or the negative, or which direction is good or bad. Let's not treat measurement so trivially.


Is there any debate about measuring the positive or negative in your organisation? What are the arguments for and against? And do you see the risks?


Speak Your Mind


  1. P Srinivas Kumar says:

    Nice article. I agree that instead of measuring the negative or the positive, it is better to select those measures which induce action. Action succeeds on having the necessary will to perform it, and the necessary knowledge for performing it. As far as knowledge goes, the number of items known and the number of items not known may be a measure. Achievement of a target at increased risk levels can be considered a measure of will. Am I right in thinking so? Tracking the level of will would induce change in will, which in turn has an effect on action?

  2. Bill says:

    Stacey, the uptime / downtime comparison isn’t fair. Had both charts been given the same scale, the change would have been equally obvious. Had both been plotted on an (appropriately scaled) XmR chart, both would have shown the same signal.

    • Stacey Barr says:

      You’re right Bill. But when I see this measure (uptime) plotted in reality, it’s neither on a scale that shows it clearly, nor is it in an XmR chart. Mostly people want to show that there’s no problem; their focus is on what’s working. So in a case like this, measuring downtime better directs attention to when and where action is needed.
