Digging Deeper to Understand the Customer Experience
The Dangers of Simplified Measures and Metrics
It is common in business to want to simplify everything, and many articles and books advocate managing with just a few key statistics. Averages and aggregations abound, particularly in the customer experience area. Common CX measures are aggregates like Net Promoter Score and averages like Average Turnaround Time, Average Speed of Answer and Average Revenue Per User, to name but a few. These are useful indicators that make it easy to compare periods, to benchmark against others and, at times, to measure and reward performance, but they can also mislead in two ways.
The first danger of managing with these averages is that they hide or ignore what many customers are experiencing. An average is just a single point summarising a distribution of performance, a curve of some form. By definition, some customers get performance far worse than the average, some get better, and only a few get the average itself. For example, even a measure we like to promote, contacts per X (per account, flight or order), hides the fact that some customers aren’t having to make contact at all while many are making multiple contacts. The shape of the curve, and how customers respond to different results on that curve, provides more insight than the average.
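As a minimal sketch (in Python, with invented figures), the same contacts-per-account average can be produced by very different distributions:

```python
# Minimal sketch: one headline average, two very different experiences.
# The account data is invented for illustration only.
from collections import Counter

contacts_per_account = [0, 0, 0, 1, 1, 1, 1, 2, 3, 6]  # 10 accounts

average = sum(contacts_per_account) / len(contacts_per_account)
print(f"Contacts per account: {average:.1f}")  # 1.5, the headline number

# The distribution shows what the average hides: three accounts never
# had to make contact at all, while one needed six contacts.
print(sorted(Counter(contacts_per_account).items()))
```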
The second problem is that these simplified measures can be open to manipulation. Front-line staff learn how to beat the scores by, for example, manipulating which customers are referred to satisfaction surveys and how. Managers may learn how to manipulate the work and workforce to inflate scores, and we’ve seen outsourcers manipulate their contracts and measurement to make their performance look better on over-simplistic measures. Once again, the averages “hid” the reality of how these results were being achieved.
To supplement these “aggregation measures” this paper looks at three techniques that provide greater insight:
a) How to unpack the aggregate measures
b) Using information for observation
c) Going straight for gold
1. Unpack the aggregates and the bottom of the bell curve
Making sense of the aggregate measures involves unpacking them in some form. For example, some businesses look less at their overall Net Promoter Score and more intensively at the number of “detractors” (those who score 0-6) and why they were unhappy. In statistical terms, that is a focus on the bottom of the bell curve. There are many benefits to this approach. Firstly, these customers aren’t happy with their experiences and are therefore at risk of leaving or damaging the organisation’s reputation, so action is needed. Secondly, there is more to be learnt, and more potential for improvement, in understanding these negative experiences. Thirdly, those who are unhappy are likely to generate additional work such as a complaint or repeat contact. Therefore, there is much to be gained by looking at negative sentiment and unpacking it.
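As a minimal sketch, using the standard NPS definition (promoters score 9-10, detractors 0-6) and invented scores in Python, the net score and the detractor count are very different lenses on the same data:

```python
# Minimal sketch: unpack an NPS aggregate into its distribution.
# The survey responses (0-10) are invented for illustration.
from collections import Counter

scores = [10, 9, 9, 8, 7, 6, 3, 2, 10, 5, 9, 1]

promoters = sum(1 for s in scores if s >= 9)
detractors = sum(1 for s in scores if s <= 6)
nps = 100 * (promoters - detractors) / len(scores)
print(f"NPS: {nps:.0f}")  # a single number that hides the curve's shape

# Unpacking: the full distribution shows how many customers sit at the
# bottom of the bell curve, which is where the action and learning are.
print(sorted(Counter(scores).items()))
```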
The improvement potential explains why some companies are spending more time understanding and analysing complaints. Many businesses seem so busy handling complaints and problems that they don’t get time to analyse the causes and solutions. In contrast, one major bank uses complaints data in a variety of ways. They use samples of complaints to educate executives on what goes wrong and why. They also drill down to understand root causes and then mobilise teams to address the problems. Fixing the true root cause of complaints not only prevents similar complaints but often reduces repeat contacts and “first-time” contacts that shouldn’t have been necessary (see the blue “wedge” of related contacts). Analysing and fixing complaint causes can therefore drive benefits well beyond the complaints themselves.
Unpacking other aggregates may give a far clearer picture of what customers are experiencing. Many operations report speed-of-response measures as averages, and then those in the operations learn to manipulate those averages. For example, service-level measures for calls or processes report what percentage were serviced in a certain time (80% in X time). This encourages behaviours like engineering “easy days” to lift the average and offset busy days. A different picture emerges when the organisation looks at the results period by period or process by process. We often use tools such as the “heat map” (see picture) to show service levels for every half hour of each day of the week. The number of “intervals” over a week where customers experienced long waits almost matches the number where wait times were short. The average hides the problems on the first three days of the week and doesn’t show why many customers are unhappy. It also masks the fact that on Thursdays and Fridays there appears to be surplus capacity, sitting under-used, which merely “drives up the average”. A more meaningful metric would be to report the number of intervals where targets were and weren’t achieved and then analyse how the problem periods can be addressed.
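As a minimal sketch of that interval metric (in Python, with invented half-hour figures and an assumed 80% target), counting intervals against the target tells a very different story from the average:

```python
# Minimal sketch of the "heat map" idea: judge every half-hour interval
# against the service-level target instead of reporting one average.
# `intervals` maps (day, start time) to the % of calls answered in time;
# all figures and the 80% target are invented for illustration.
intervals = {
    ("Mon", "09:00"): 55, ("Mon", "09:30"): 48,
    ("Thu", "09:00"): 95, ("Thu", "09:30"): 97,
    ("Fri", "09:00"): 96, ("Fri", "09:30"): 94,
}
TARGET = 80  # e.g. 80% answered within X seconds

met = {k: v for k, v in intervals.items() if v >= TARGET}
missed = {k: v for k, v in intervals.items() if v < TARGET}

average = sum(intervals.values()) / len(intervals)
print(f"Average service level: {average:.0f}%")  # 81%: looks acceptable
print(f"Intervals meeting target: {len(met)}/{len(intervals)}")
for (day, time), sl in sorted(missed.items()):
    print(f"  missed: {day} {time} at {sl}%")  # where action is needed
```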
Average durations of work are also of limited use. Many customer-facing businesses report their average handling time for contacts or processes. A more informative technique looks at “time bands” across the process (see diagram), which starts to show the “bell curve” of what customers are really experiencing and what is “driving the average”. In this contact example the average duration was 7.5 minutes, but 30% of customers had contacts over ten minutes, and those contacts consumed 54% of the workload. That picture shows what customers are really experiencing and what needs focus if the result is to change.
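A minimal sketch of the time-band technique, in Python with invented durations and bands, shows how the share of contacts and the share of workload diverge:

```python
# Minimal sketch of "time bands": bucket contact durations and show the
# share of contacts and of total workload in each band. The durations
# (in minutes) and band edges are invented for illustration.
bands = [(0, 5), (5, 10), (10, 20), (20, 60)]
durations = [2, 3, 4, 6, 7, 8, 12, 15, 25]

total_minutes = sum(durations)
for lo, hi in bands:
    in_band = [d for d in durations if lo <= d < hi]
    pct_contacts = 100 * len(in_band) / len(durations)
    pct_workload = 100 * sum(in_band) / total_minutes
    print(f"{lo:>2}-{hi:<2} min: {pct_contacts:4.0f}% of contacts, "
          f"{pct_workload:4.0f}% of workload")

# Here the long contacts (over 10 minutes) are a third of volume but
# well over half the workload, echoing the pattern described above.
```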
2. Information for observation
Aggregated statistics are interesting, but they should always provoke curiosity. Someone once told a LimeBridge Director that they were like a three-year-old child, “always asking why?”, but that is the point, and we aren’t alone in that behaviour. One board director told us that they always perform their own observations because they don’t trust the aggregates being reported to the board. We find that to understand aggregate information it needs to be analysed more deeply using what we call “observation”. This technique is really important in managing teams and individual performance. Team leaders get daily and weekly statistics which usually show “aggregated performance”. This reporting shows how many widgets or calls their team members processed and may break the work down into other aggregates like hold time, average duration and so on.
Unfortunately, some coaching methodologies advocate that you can “coach” and improve a team’s performance using just this aggregate data. The “information for observation” technique instead suggests that the data should tell a manager where to look and where to invest their “observation time”. Observations are about getting close enough to the work to understand why the results are occurring so that the reasons can be addressed. This level of detail is needed to understand what is driving the aggregate performance and where there are opportunities to improve. For example, by observing a team member executing their “worst” type of transaction, their manager should be able to see why they are struggling and what needs to change to lift that performance. The observation provides the “why” and the detail that may enable improvement, if the manager is good enough to understand what they can do to help their team member.
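As a rough sketch of using data only to direct observation time (in Python; the per-agent handle times and transaction types are invented), the aggregates point at where to sit in, not what to say:

```python
# Minimal sketch of "information for observation": aggregate stats
# decide where to spend observation time. Average handle times (in
# minutes) per transaction type are invented for illustration.
team_avg = {"address change": 4.0, "refund": 9.0, "new account": 12.0}
agent = {"address change": 4.5, "refund": 16.0, "new account": 12.5}

# Find the transaction type where this agent diverges most from the
# team: that is the work to observe, not a number to "coach" from.
worst = max(agent, key=lambda t: agent[t] - team_avg[t])
gap = agent[worst] - team_avg[worst]
print(f"Observe '{worst}' transactions first "
      f"({gap:.1f} min over the team average)")
```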
The same is true at a department level. If contacts or complaints are rising in a business, the organisation needs more information on what types of contact are rising. This enables detailed analysis and observation of what is driving the increases; only that detailed analysis will identify what needs fixing. For example, in one customer service business the contact centre was seen to be “underperforming” with long wait times. Management thought the problem was in the contact centre and sent in a fix team. The team did enough observations to understand the issues. These observations showed that contact volumes had increased 30% because the “claims team” had a major backlog: “where is my claim” calls had gone from 2% to 32% of contacts. The problem wasn’t in the contact centre at all, and observations and analysis exposed the real issue and where resources and focus were needed.
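A minimal sketch of that kind of contact-mix comparison, in Python with invented volumes that mirror the example above:

```python
# Minimal sketch: compare the contact-reason mix between two periods to
# see what is actually rising. The volumes are invented for illustration.
last_month = {"billing": 400, "where is my claim": 20, "other": 580}
this_month = {"billing": 410, "where is my claim": 416, "other": 474}

for reason in this_month:
    before = 100 * last_month[reason] / sum(last_month.values())
    after = 100 * this_month[reason] / sum(this_month.values())
    print(f"{reason:>18}: {before:4.0f}% -> {after:4.0f}%")
# "where is my claim" jumps from 2% to 32%: the driver sits outside
# the contact centre, in the claims backlog.
```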
3. Going straight for gold
The techniques we have explained for managing with aggregates have all required deeper and deeper analysis to make sense of them. A different approach works a little smarter by putting more attention on the likely areas of opportunity “from the get go”. This can be particularly powerful for customer experience but is often a change from “score keeping” approaches. For example, many businesses try to get feedback on all interactions and all customers. They are so keen to keep score that they want to survey happy and unhappy customers as well as all those in between. However, if the real purpose of feedback is to drive improvement, then more focus is needed on unhappy customers. Rather than random sampling and surveys, this suggests a greater focus on things like delayed flights, long contacts, repeat contacts or customers returning goods. This change in focus moves from score keeping to the areas where issues and improvement are likely.
This drastic change in measurement approach uncovers more improvement potential. It may also save money on “random” sampling and save customers’ time. One business used automation to narrow the focus. It began using analytics to assess all contacts automatically, compared to its previous approach of randomly sampling less than 2% of contacts. While the analytics wasn’t perfect, it was able to direct “manned” sampling to contacts where issues may have occurred by assessing the duration of the contact, the language used and the sentiment. The quality team then looked at fewer calls, because they had moved from random to directed sampling, but they found more issues and opportunities, and the scope for coaching and improvement expanded. This is a great example of “going for gold”.
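As a rough sketch of directed sampling (in Python; the risk signals, weights and fields are our invented illustration, not the business’s actual model), every contact is scored and only the highest scorers go to the quality team:

```python
# Minimal sketch: score every contact on simple risk signals and send
# only the highest scorers for "manned" review. All data, weights and
# field names are invented for illustration.
contacts = [
    {"id": 1, "minutes": 3.2, "negative_sentiment": 0.1, "repeat": False},
    {"id": 2, "minutes": 14.8, "negative_sentiment": 0.7, "repeat": True},
    {"id": 3, "minutes": 6.1, "negative_sentiment": 0.9, "repeat": False},
]

def risk_score(c):
    # Long contacts, negative language and repeats all suggest an issue.
    return c["minutes"] / 10 + c["negative_sentiment"] + (1 if c["repeat"] else 0)

# Directed sampling: review the riskiest contacts, not a random 2%.
for c in sorted(contacts, key=risk_score, reverse=True)[:2]:
    print(f"review contact {c['id']} (score {risk_score(c):.1f})")
```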
Another business recognised that the number of repeat contacts was a better measure of quality than any random sample. It focused its analytics team on producing a much more detailed measurement of repeat contacts and their reasons, knowing that these issues were the real gold. It used analytics to produce an accurate repeat-contact measure for every process and every staff member. This became the focus of continuous improvement and coaching and produced wins for customers and the business: customer effort dropped, advocacy improved, and costs fell. The business even cut back on CX surveys for further savings and altered its outsource contracts to focus attention on repeat contacts.
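A minimal sketch of such a repeat-contact measure, in Python, assuming a contact counts as a repeat if the same customer contacted within the previous seven days (the window, log and attribution rule are illustrative, not the business’s actual definition):

```python
# Minimal sketch: measure repeat contacts per staff member, charging
# the repeat to the agent who handled the unresolved original contact.
from datetime import date, timedelta

log = [  # (customer, contact date, handling agent) - invented data
    ("cust_a", date(2024, 5, 1), "agent_1"),
    ("cust_a", date(2024, 5, 3), "agent_2"),  # repeat of the 1 May contact
    ("cust_b", date(2024, 5, 2), "agent_1"),
]
WINDOW = timedelta(days=7)  # assumed repeat window

handled = {}  # agent -> contacts handled
caused = {}   # agent -> later repeats their contact generated
last = {}     # customer -> (date, agent) of most recent contact

for customer, day, agent in sorted(log, key=lambda r: r[1]):
    handled[agent] = handled.get(agent, 0) + 1
    prior = last.get(customer)
    if prior and day - prior[0] <= WINDOW:
        caused[prior[1]] = caused.get(prior[1], 0) + 1  # charge the original agent
    last[customer] = (day, agent)

for agent in sorted(handled):
    print(f"{agent}: {caused.get(agent, 0)} repeat(s) from {handled[agent]} contacts")
```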
Summary
In this paper, we’ve explained the dangers of managing with over-aggregated data and suggested alternatives. At the heart of the issue is a recognition that a key purpose of any measurement is to drive improvement. If you’d like more detail on any of these techniques, please feel free to get in touch at info@limebridge.com.au or call 03 9499 3550 or 0438 652 396.