When Your KPIs Have Less Than 100% Data Integrity
December 11, 2012 by Stacey Barr
Data quality worries most users of performance measures. There is an obscene number of reported measures that only generate dialogue about how unreliable the underlying data is, rather than how performance can be improved.
One of my clients is drowning in dozens of reports collectively containing over 100 measures. Where he expects two measures from separate reports to have the same values, they don’t. Where he expects a measure’s value to be accepted by his customer, it is disputed. Where he thinks he’s looking at the right measure to answer his question, someone warns him ‘no!’.
Too many performance measures get derailed because of concerns that the data isn’t reliable enough. I’ve heard some performance measure experts proclaim that performance data must have 100% integrity.
Hogwash! Lots of things are still useful and usable, even though they’re not perfect, or anywhere near perfect.
Data never will be 100% perfect, and most data will be far from 100% perfect. Here are some of the reasons why.
Performance data is almost always gathered by humans.
A vast proportion of our performance measures rely on data that has been touched at least once by human hands. People design data collection forms and processes, people fill out those forms, people enter the data from the forms into databases, people extract and manipulate data out of databases, people filter and analyse the data to produce performance measures.
So human error and misunderstanding, ambiguous or absent data definitions, ad hoc data collection and analysis processes, and vague measure definitions (how measure values are calculated) all contribute some amount of error to our performance measures.
People know that performance data can bite.
Unfortunately, many of our organisations are still carrying the burden of a blame culture. People can still remember (or are still experiencing) the use of data as a big stick to humiliate, take resources away from, demote or sack the so-called poor performers.
We know that in this kind of environment, people swing into self-preservation mode (it's only natural) and weigh up their choices: take another whack from the data stick, or sweep that nasty data under the rug?
Managers and decision-makers who use performance measures need to earn back the trust of employees; trust that data will not be used against anyone. Performance measures and data need to be seen being used to honestly assess the performance of systems and processes, to explore root causes and learn from the past, and to stimulate dialogue about how the future can be influenced.
Data has no meaning apart from its context.
An event must occur before data can be produced. And the data is the product of the event being observed and interpreted and coded. When people are doing the observing (as opposed to a machine such as a temperature gauge), the person unconsciously – and occasionally consciously – applies filters that affect how the event is interpreted and how it is coded.
These filters are influenced by beliefs the person has about the event, their interactions and relationships with others around them, their physical and mental health on the day, what they are thinking about at the time, their values and priorities regarding their work, and the list goes on.
Don’t just rely on technical solutions to data integrity problems.
Most data integrity problems can be discovered and dealt with through better communication among the people involved in data capture: from designing measures to developing data collection processes, to collecting data, to storing and analysing it. Don’t rely just on the technical solutions – think through what needs to change in the social systems surrounding data.
Be more concerned with how much integrity your decisions can survive on, and trust that it's less than 100%!
Following is a checklist to help you stop data integrity from being an excuse for not using your performance measures. Use it to assess how much integrity your performance data has and to generate ideas for improving it, but not to aim for 100% integrity!
- How many of your performance measures are defined in enough detail to avoid miscalculation or use of the wrong data?
- How many of your data collection processes are documented consistently and ingrained into work practices?
- How many of your people who collect data have been trained to do it according to the documented process?
- Does your organisation have a data dictionary that is available outside of the IT team?
- How many of your managers and decision-makers look for root causes of undesirable performance in the systems and processes (as opposed to the people)?
- How many performance measures are supported by diagnostic measures of causal factors (as opposed to just slicing and dicing the data into smaller fragments)?
- Have you got an automatic improvement process that kicks in when a performance measure reveals a problem?
- Have you explored the context around the types of performance data you collect?
- Have you thought about the factors that might influence the way someone interprets and codes what they observe when they are capturing performance data?
- Do you have guidelines and examples in your data collection instructions to help data collectors capture quality data?