The Risks of Short-Term Monitoring of Measures
by Stacey Barr
Lots of my new clients share the same habit (a bad one): they conclude whether performance is getting better or worse by comparing the current month’s (or week’s or quarter’s) performance measure value with last month, or with a target or standard. If this month (or week or quarter) is worse, they go digging for the cause. Trouble is, they don’t find that cause.
I’m a novice mountain bike rider, still striving to get stronger on hills and more skilful through the technical single-track trails I just love to ride. I’ve noticed when riding my mountain bike, if I look too close in front of me, about a metre or two, every rock, log, rut, and patch of loose pebbles looks like something I need to correct for.
Paying so much attention to these short-term obstacles makes me try too hard to control where my front wheel goes, and I end up correcting so much that I can get the wobbles and down I go.
But if I instead look further ahead, paying attention to the bigger picture at least 10 to 30m in front of me, the bike tracks more smoothly. When it hits one of those rocks or ruts or patches of loose pebbles, it’s no big deal. We just move through them, and keep going forward.
Continuing like this, as my riding skill increases and my fitness improves, I can absorb more of these obstacles and they have less effect on my control of the bike. Overall, my mountain bike riding moves to a higher level of performance, and interestingly, with less effort. I can more easily pinpoint the specific skills, behaviours or bike set-up attributes worth sharpening and honing.
My new clients haven’t found the causes of their so-called performance dips because the causes don’t exist. They might blame things akin to rocks or logs or ruts or patches of loose pebbles. Performance isn’t actually getting worse; there are always rocks and logs and ruts and patches of loose pebbles that make our ride less than perfectly smooth and predictable. Most of the time, when I do a proper time series analysis of their performance measure values, it proves that overall nothing has really changed at all. Not in a long time.
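One common form of this kind of time series analysis is an XmR (individuals and moving range) chart, which estimates the natural, routine variation in a measure and only flags points that fall outside it. Here is a minimal sketch of the calculation; the monthly values are made up for illustration, and the 2.66 multiplier is the standard XmR constant.

```python
# Sketch of an XmR (individuals and moving range) chart calculation:
# a common way to estimate the natural variation in a performance measure.
# The monthly values below are invented purely for illustration.

def xmr_limits(values):
    """Return (centre, lower, upper) natural process limits."""
    centre = sum(values) / len(values)
    # Moving range: absolute difference between each consecutive pair
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant (3 / d2, where d2 = 1.128 for n = 2)
    return centre, centre - 2.66 * avg_mr, centre + 2.66 * avg_mr

monthly = [52, 48, 55, 50, 47, 53, 49, 51, 46, 54, 50, 48]
centre, lower, upper = xmr_limits(monthly)
signals = [v for v in monthly if v < lower or v > upper]
print(f"centre={centre:.1f}, limits=({lower:.1f}, {upper:.1f}), signals={signals}")
```

With data like this, every point sits inside the natural process limits: the month-to-month wiggles are just routine variation, and there is no cause to go digging for.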
Here’s the real problem: When we look only at the short term, we end up over-correcting and tampering, making variability worse. Just like I do when I focus on the obstacles immediately in my path on my mountain bike. Over-correcting causes the month-to-month or week-to-week or quarter-to-quarter variation in performance to increase. Increased variation is the enemy of high performance.
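The effect of over-correcting can be demonstrated with a small simulation in the spirit of Deming's funnel experiment (this is my illustration, not an example from the source): one process is left alone, while the other is "corrected" after every period by the size of the last deviation from target. The noise level and run length are arbitrary assumptions.

```python
import random

random.seed(42)
TARGET = 100.0
NOISE = 5.0   # assumed common-cause standard deviation
N = 5000      # number of simulated periods

def run(adjust):
    """Simulate N periods; optionally 'correct' after each result."""
    setting, results = TARGET, []
    for _ in range(N):
        x = setting + random.gauss(0, NOISE)   # this period's result
        results.append(x)
        if adjust:
            # React to the deviation as if it were a real change
            setting -= (x - TARGET)
    return results

def stdev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"left alone:     sd = {stdev(run(False)):.2f}")
print(f"over-corrected: sd = {stdev(run(True)):.2f}")  # roughly sqrt(2) times larger
```

Chasing each deviation injects last period's noise into next period's result, so the variation grows by a factor of about the square root of two, even though every adjustment was made in good faith.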
The entire field of quality and process improvement – which has revolutionised many industries over the last 60 or so years – centres on the endeavour to manage the factors that cause controllable variation in performance, so we can get more control (or at least influence) over that performance.
We need to look longer term and understand the natural variation in our business processes (those work flows that produce the results we measure), and learn about the factors that have the most impact on the variation’s size. We need to keep our eyes on the big picture.
Abandon point-to-point comparisons when you look at your performance measures. You’ll just end up over-correcting. Look instead at a longer timeframe, and pay attention to patterns in variation and changes in those patterns. It’s the patterns, not the points, that hold the potential for improving performance.