CASE STUDY: 3 Common Mistakes to Avoid With Your XmR Charts (Part 2)

September 17, 2013 by Stacey Barr

Following on from Part 1, another of the legacy habits that can sneak into our XmR charts and stop them highlighting the real signals in our performance measures is focusing on the wrong performance measure!

XmR charts are based on very specific and deliberate statistical theory, even though they are quite easy to interpret and understand. They are also quite easy to create, once you have the correct knowledge and procedure to build them.

Without this correct knowledge and procedure, a few mistakes creep into our XmR charts that cause them to be invalid: they don’t highlight the right signals.

MISTAKE #2: Focusing on the wrong performance measure

Brigitte works for a UK-based software company that has, as we’d expect, a performance measure of the total count of High Priority Software Bugs logged by customers each quarter.

[Image: www.staceybarr.com/images/BrigitteXmR1.jpg]

If we just look at the XmR chart of the simple count of High Priority Software Bugs, we’d see an early suggestion that things are getting worse: there seem to be more bugs being produced. We’d be tempted to draw the conclusion that software developers were getting careless.

But is the simple count truly evidence of software developer performance? What other factors could explain an increase in the number?

Correcting this mistake…

One factor is the amount of software being developed, and another is the technical complexity of the software being developed. Both these factors could be quantified, but a quick proxy we can use is the measure of Hours Booked to Software Development.

Combining the two measures would give us a rate at which bugs are occurring that might adjust for the opportunity for bugs to occur: High Priority Software Bugs per 100 Hours.
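As a quick sketch of that calculation (the quarterly totals below are invented for illustration, not Brigitte’s actual data), the rate is just the quarter’s bug count divided by the quarter’s development hours, scaled up by 100:

```python
# Illustrative quarterly totals (invented for this sketch)
bugs_logged = 12    # High Priority Software Bugs logged in the quarter
hours_booked = 980  # Hours Booked to Software Development in the quarter

# Bugs per 100 hours: a rate that adjusts for the opportunity for bugs to occur
bugs_per_100_hours = (bugs_logged / hours_booked) * 100
print(round(bugs_per_100_hours, 2))  # prints 1.22
```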

And the XmR chart for this shows a very different story:

[Image: www.staceybarr.com/images/BrigitteXmR2.jpg]

We see a definite signal – a long run below the central line – that indicates that fewer bugs are produced per 100 hours worked. In fact, it looks like the rate of bug production has nearly halved, moving from 1.9 bugs per 100 hours down to 1.0 bugs per 100 hours.
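For readers curious about the mechanics behind that signal, here is a minimal sketch of how an XmR chart’s central line and natural process limits are computed, and how a long run below the central line can be detected. The series of rates and the eight-point run rule are assumptions for illustration only, not Brigitte’s actual data:

```python
def xmr_limits(values):
    """Central line and natural process limits for the X chart of an XmR pair."""
    centre = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR scaling factor (3 / d2, with d2 = 1.128 for n = 2)
    return centre, centre - 2.66 * avg_mr, centre + 2.66 * avg_mr

def long_run_below(values, centre, run_length=8):
    """True if run_length consecutive points sit below the central line."""
    run = 0
    for v in values:
        run = run + 1 if v < centre else 0
        if run >= run_length:
            return True
    return False

# Illustrative series resembling a step change from ~1.9 down to ~1.0
rates = [1.9, 2.0, 1.8, 1.0, 1.1, 0.9, 1.0, 1.0, 1.1, 0.9, 1.0, 1.0]
centre, lower, upper = xmr_limits(rates)
print(long_run_below(rates, centre))  # prints True: nine points in a row below centre
```

The long-run rule shown here (eight or more consecutive points on one side of the central line) is one common convention; check the rule set you use with your own charts.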

What happened in the first quarter of 2011 to produce this improvement?

It goes to show that it’s very important to be clear about the performance result you’re trying to improve, and to choose a performance measure that provides direct evidence of that result.

Perhaps High Priority Software Bugs per 100 Hours isn’t even the best measure. But until we understand the performance result that matters most, we won’t be able to know with confidence what the best measure is for us to monitor and respond to.

Coming up next is 3 Common Mistakes to Avoid With Your XmR Charts (Part 3)…

TAKE ACTION:

Do you have the right measure for your performance result? Or do you find people have lots of ‘reasons’ why performance varies, such that signals in your XmR charts are not signals about changes in performance, but signals of something else?


Speak Your Mind


  1. Prahlad Bhugra says:

    Stacey, this is a really cool method. I have an observation: when we break the number of bugs obtained in a quarter into a smaller window of bugs per 100 hours, we get 6-10 more points in the control chart for one quarter. What do you suggest we do with these points so that we again get one point for the quarter (as shown in the 2nd control chart above)? In particular, do we average all the points or use some other mathematical method?
    Second question: what is the basis for choosing a window of 100 hours for the calculations? Could choosing a smaller window of 10 hours be better than choosing a window of 200 hours, or vice versa?

    regards
    Prahlad

  2. Stacey Barr says:

    Prahlad, thanks for your question.

    The way this measure is calculated is total number of bugs in the quarter divided by the total number of hours of software development in the quarter, and then this ratio is multiplied by 100. It looks like this:

    (bugs / hours) x 100

    The only reason for multiplying by 100 is that the ratio is so small. Multiplying by 100 just brings the measure value into the realm of whole numbers.

    So the window isn’t 100 hours – it’s just an inflation factor to make the charts a bit easier to interpret. It’s hard to interpret measure values that are fractions like 0.0234 and 0.0123 – 2.34 and 1.23 are a bit easier.
