When Your KPIs Have Less Than 100% Data Integrity

December 11, 2012 by Stacey Barr

Data quality worries most users of performance measures. Far too many reported measures generate dialogue only about how unreliable the underlying data is, rather than about how performance can be improved.

One of my clients is drowning in dozens of reports collectively containing over 100 measures. Where he expects two measures from separate reports to have the same values, they don’t. Where he expects a measure’s value to be accepted by his customer, it is disputed. Where he thinks he’s looking at the right measure to answer his question, someone warns him ‘no!’.

Too many performance measures get derailed because of concerns that the data isn’t reliable enough. I’ve heard some performance measure experts proclaim that performance data must have 100% integrity.

Hogwash! Lots of things are still useful and usable, even though they’re not perfect, or anywhere near perfect.

Data will never be 100% perfect, and most data will be far from it. Here are some of the reasons why.

Performance data is almost always gathered by humans.

A vast proportion of our performance measures rely on data that has been touched at least once by human hands. People design data collection forms and processes, people fill out those forms, people enter the data from the forms into databases, people extract and manipulate data out of databases, people filter and analyse the data to produce performance measures.

So human error and misunderstanding, ambiguity or absence of clear data definitions, ad hoc data collection and analysis processes, and vague measure definitions (the calculation of measure values) all contribute some amount of error to our performance measures.

People know that performance data can bite.

Unfortunately many of our organisations are still carrying the burden of a blame culture. People can still remember (or are still experiencing) the use of data as a big stick to humiliate, take resources away from, demote or sack the so-called poor performers.

We know in this kind of environment people swing into self-preservation mode (it’s only natural) and weigh up their choices: take another whack from the data stick or sweep that nasty data under the rug?

Managers and decision-makers who use performance measures need to earn the trust of employees again; trust that data will not be used against anyone. Performance measures and data need to be seen being used to honestly assess performance of systems and processes, being used to explore root causes and learn from the past, being used to stimulate dialogue about how the future can be influenced.

Data has no meaning apart from its context.

An event must occur before data can be produced. And the data is the product of the event being observed and interpreted and coded. When people are doing the observing (as opposed to a machine such as a temperature gauge), the person unconsciously – and occasionally consciously – applies filters that affect how the event is interpreted and how it is coded.

These filters are influenced by beliefs the person has about the event, their interactions and relationships with others around them, their physical and mental health on the day, what they are thinking about at the time, their values and priorities regarding their work, and the list goes on.

Don’t just rely on technical solutions to data integrity problems.

Most data integrity problems can be discovered and dealt with through better communication among the people involved in data capture: from designing measures to developing data collection processes, to collecting data, to storing and analysing it. Don’t rely just on the technical solutions – think through what needs to change in the social systems surrounding data.

Be more concerned with how much data integrity your decisions need in order to be sound, and trust that it’s less than 100%!

TAKE ACTION:

Following is a checklist to help you stop data integrity from being an excuse for not using your performance measures. Use it to assess how much integrity your performance data has and to generate ideas for improving it. But don’t use it to aim for 100% integrity!

  1. How many of your performance measures are defined in enough detail to avoid miscalculation or use of the wrong data?
  2. How many of your data collection processes are documented consistently and ingrained into work practices?
  3. How many of your people that collect data have been trained to do it according to the documented process?
  4. Does your organisation have a data dictionary that is available outside of the IT team?
  5. How many of your managers and decision-makers look for root causes of undesirable performance in the systems and processes (as opposed to the people)?
  6. How many performance measures are supported by diagnostic measures of causal factors (as opposed to just slicing and dicing the data into smaller fragments)?
  7. Have you got an automatic improvement process that kicks in when a performance measure reveals a problem?
  8. Have you explored the context around the types of performance data you collect?
  9. Have you thought about the factors that might influence the way someone interprets and codes what they observe when they are capturing performance data?
  10. Do you have guidelines and examples in your data collection instructions to help data collectors capture quality data?

Speak Your Mind


  1. Great to see a discussion of Data Quality in a slightly different business context. I’d add to this: How many of your KPI metrics are actually undermining the quality of your KPI data? Humans like to ‘game’ the system and if they are told X is the most important goal then all that is required to do X will be done, but that can mean Y (which is needed to measure X) might get de-prioritised.

    Classic example: Call Centres that are measured on speed of answer/call handling duration and then get beaten up over customer care issues or incomplete data.

    Each of the 10 points you set out above are good practice in Information Quality Management, particularly the “look for root cause in the system” and the importance of clear operational decisions.

    I would disagree though – data might not be perfect but it can become perfect (just like any product manufactured through a process). However the real question is whether the perfect data is available at the right time/place/context to become actionable information. 100% complete and accurate sales targets that are 9 months late are good data but crummy information. However, if the requirement is to have 100% complete and accurate (or whatever other quality characteristic you choose) sales information within a month, that sets performance criteria for the process, people, and technology.

    Which can be measured. And can be made part of a scorecard.

  2. madhu singh says:

    Hi, good basic information on why data matters. I need help with data measurement methodologies in an educational setting (i.e. secondary school).
    madhu
