Lost Voice of Customer
When Voice of Customer Matters Most – Can you Hear it?
Many organisations have measurement systems in place to try to capture the Voice of Customer, or at least give an indication of how customers perceive their experience. Most companies run some form of periodic customer satisfaction and relationship survey, and many have real-time feedback systems that provide “transactional” NPS. These measures have become common, and some of the data they produce is reported and shared at executive board level. Customer experience measurement has grown dramatically in the last decade. Despite this, there have been scandals of poor customer treatment and royal commissions examining customer issues in a range of industries. The growth in measurement seems at odds with the scandals and issues that have emerged. If the intent of all these surveys is to make companies more aware of what their customers are experiencing, then in theory they should have been well aware of many of these issues before they became scandals.
The recent commissions and enquiries concluded that many of these customer problems weren’t visible even though these measures existed. There appears to be a disconnect between the attempts to bring the customer voice into companies and the limited visibility of the problems and issues customers actually have. In this paper we’ll look at ways to ensure the organisation has methods to listen and act on problematic customer experiences. We’ve identified four key areas to address, which we’ll describe in turn: crisis busy-ness, organisational denial, managing with averages and missing the obvious. Often all four co-exist.
Crisis Busy-ness
In many of the worst problem situations, customers are too busy dealing with the situation itself to have the time or the mechanisms to complain. A customer facing lost bags on a trip, no internet access or a very late delivery of a crucial item is in the middle of their own “damage control” process. They have work to do re-arranging their life around the failure. Whilst many would like to complain or give feedback, they are often too busy dealing with the consequences of the problem to find time to respond to surveys. These bad situations may also trigger customers to switch to another organisation, and their perspectives leave with them.
Similarly, organisations are often very busy dealing with problem situations that impact many customers, such as a website outage, a system issue or a weather-driven “mass” cancellation. In these semi-crisis situations staff have limited time to pass on feedback or push issues up the line. Sometimes surveys and feedback requests are turned off or simply aren’t available in these busy periods. Big problems (for example, an extended internet outage) also create lots of “extra work” in the organisation, such as fixing up accounts or taking extra calls, so there is limited capacity to analyse what the event has meant for customers and what recovery is really required.
The answer we’d recommend for these big crisis-type scenarios is to take the time after the event to understand customers’ reactions and what form of “repair” they expect. Even the act of listening, together with an acknowledgement and apology, can help restore some trust. If a company shows interest in how a bad situation impacted customers, it can strengthen the relationship, but the company must be prepared to accept further criticism. For an event like a lost bag or misplaced parcel, getting feedback is crucial to understanding customer expectations in these situations. Organisations may need to use more costly mechanisms to get that feedback, such as outbound calls. These requests for feedback need to be handled sensitively and show empathy for the situation the customer has experienced.
The insights gleaned on what went wrong and how it was handled provide useful information on how customers expect such situations to be managed in the future. Post-event surveys can also reveal how a company’s staff and processes hold up under pressure. Feedback after a “bad event” can reveal more about the culture and processes of a company than transactional feedback on routine interactions.
Organisations that don’t handle these mistakes and problems well, or are unwilling to investigate the impact on customers, are showing that their customer centricity may be merely a veneer. Truly customer-centric businesses don’t shirk these issues. They seek out feedback on what occurred and how it could have been handled better, and are prepared to accept and act on the negative feedback they receive.
Organisational Denial
Some organisations have become so used to certain situations that cause customer issues that they have absolved themselves of responsibility. Airlines, for example, treat weather delays or even maintenance issues on their planes as “facts of life”. Some airlines seem to take no responsibility for these issues when they impact customers. We know a major airline that never surveys customers whose flights are delayed. During a major airport outage last year, when many flights were cancelled and passengers stranded, some airlines left passengers to find their own accommodation at very short notice and therefore at very high prices. Many faced delays of up to 24 hours, but there was limited acknowledgment of the issue or its impact – not even a free drink or snack on the subsequent flights for frequent flyers. The contractual small print may say “the weather isn’t our problem”, but stranded customers don’t care! This organisational “denial” also means that customer feedback isn’t sought, so there is no mechanism to feed the voice of these customers back into the organisation. The company’s “voice of customer” stops listening exactly when customers are unhappy. It’s a bit like saying “we measure customer experience when it suits us”.
To solve this problem, organisations have to get better at walking in their customers’ shoes. The first stage is to turn up, rather than turn down or turn off, the feedback systems when things have gone wrong, regardless of the cause. We would recommend a “sample bias” towards problem situations. There is far more to be learnt from how well a hotel handled a botched reservation than from a visit that went flawlessly. Companies need to seek feedback after a problem and capture the customer’s reaction to how it was managed and what it meant for them. Asking for feedback can itself show empathy for the situation the customer went through, but the request also needs to recognise that the customer is giving up more of their time to provide the feedback. It therefore needs to be couched in terms that acknowledge the history of events, e.g. “we know you recently experienced a delay that was inconvenient; we’d be grateful for your feedback on how we managed this so that we can help customers better in the future”. This can be even more powerful if there is some potential benefit for the customer in responding, or if there is a recovery element in the process, e.g. “in compensation for your delay, we are offering X, but would be grateful for your feedback”. These surveys may have to ask different questions, such as how well the company kept the customer informed or whether it provided enough alternatives to assist the customer. The surveys need careful design to be relevant to these situations.
Managing with Averages
Many current voice of customer trends suggest that companies can manage with “one number”. Fred Reichheld, who invented the Net Promoter Score, suggested it was the only number that management needed. Organisations declare their Net Promoter Score, “top box satisfaction” or “average speed of answer” as their key metrics. Sometimes this is the sole focus of senior executives and the board. However, any average or aggregate score hides the range of responses that produced it: a respectable NPS may still contain a large volume of unhappy detractors. There can also be hidden survey bias, which means some customers are less likely to respond and provide feedback. We have often seen front-line staff learn how to “game” these systems and avoid any negative feedback. Most organisations also use a sampling methodology, which may mean that most customers are never asked for feedback. It is therefore quite possible for many problems not to be exposed by these mechanisms.
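As a simple illustration of how one number can hide the spread of responses, here is a minimal sketch with two hypothetical response distributions (the figures are invented for the example, not drawn from any client data) that produce exactly the same headline NPS while containing very different volumes of unhappy detractors:

```python
# Illustrative sketch: the same Net Promoter Score can come from very
# different customer experiences. All figures below are made up.

def nps(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Sample A: mostly satisfied passives, relatively few detractors.
sample_a = [9] * 40 + [8] * 40 + [3] * 20
# Sample B: polarised - many promoters but also many very unhappy customers.
sample_b = [10] * 55 + [7] * 10 + [2] * 35

print(nps(sample_a), nps(sample_b))  # both +20, yet B hides 35% detractors
```

Both samples report “+20”, yet the second contains nearly twice as many detractors, which is exactly the kind of detail a single board-level number conceals.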
A secondary problem is that well-managed recovery and well-handled issues still produce positive feedback in post-call surveys and through NPS. We’ve seen many examples of companies getting higher NPS scores from customers who have had problems fixed than from those who had no issues at all. Despite the value of this recovery, we have also seen evidence that customers who never need to make contact are just as loyal and more profitable, precisely because no contact occurred. The danger of managing with these survey scores is that they mask the issues or problems that led to the customer contact in the first place. They can lull the organisation into thinking it is improving customer experiences when really it is just getting better at recovery. That is a good outcome, but a costly way to operate.
Some organisations recognise the danger of averages and focus more on the lessons to be learnt from contacts that shouldn’t have occurred, repeat contacts, negative feedback and complaints. They analyse things like complaints and detractors in greater detail because they recognise that there is more potential to improve if these reasons and problems can be addressed. We are passionate advocates of analysing and addressing the causes of all repeat contacts (see The Best Service is No Service, Jossey-Bass 2008, and our related white papers). If the root causes of repeats and complaints can be eliminated, the organisation gains both improved customer satisfaction and reduced cost.
We’ve also seen organisations ask staff to provide feedback on situations and processes that customers don’t like. This is a process Amazon called “WOCAS”, or “what our customers are saying”. It allows a company to tap into feedback continuously without asking customers to go to more effort. In our studies we always tap into front-line staff because they hear informal customer feedback all day, every day. To make this process systematic, companies are using speech and text analytics tools to extract feedback from all interactions, be they calls, emails or chats.
Missing the Obvious
Voice of customer in most organisations has become associated with surveys and feedback. We have written other papers about “survey fatigue” and how surveys represent effort for customers. In contrast, the sources that require no extra customer effort are often not tapped at all as a voice of customer. For example, almost every call, email or chat is telling you something. In “The Best Service is No Service” we urge organisations to analyse and address the causes of contacts because they represent issues and effort for customers.
Often 60-90% of calls, emails and chats represent hidden feedback that something isn’t working or that a customer is confused. In our diagnostics at clients we capture the detail of why customers are making contact and then classify the proportion of contacts that are irritating to the customer (see the matrix above). We have never seen a survey that asked customers “did you want to call/email/chat?”, but when a query relates to confusion or a problem it’s pretty obvious that the customer would rather not have made the contact. We call these irritants, and in some companies up to 80% of contacts can be classed as irritating. They also represent avoidable cost.
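To make the classification step concrete, here is a minimal sketch with made-up reason codes and volumes (none of them from client data) showing how the irritant share of contact demand can be estimated once contact reasons are captured:

```python
# Illustrative sketch: estimating the "irritant" share of contact demand
# from hypothetical contact-reason volumes. All reason codes and numbers
# are invented for the example.

reason_volumes = {
    "Where is my order?": 8_000,        # confusion / problem -> irritant
    "Bill doesn't match quote": 5_000,  # problem -> irritant
    "Password reset failed": 3_000,     # problem -> irritant
    "Buy an upgrade": 2_500,            # value contact
    "Change delivery address": 1_500,   # value contact
}

irritant_reasons = {
    "Where is my order?",
    "Bill doesn't match quote",
    "Password reset failed",
}

total = sum(reason_volumes.values())
irritant_share = sum(v for r, v in reason_volumes.items() if r in irritant_reasons) / total
print(f"Irritant contacts: {irritant_share:.0%} of demand")  # 80% in this example
```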
At a “macro” level, if organisations want one number to manage, we’d suggest that the rate of contact (we call it C per X) is a key indicator of how well organisations are serving customers. This could be contacts per customer, contacts per account or contacts per order. The lower the ratio, the easier an organisation is to deal with. This sounds simple, but in an era of expanding channels (calls, emails, chat, messaging) it has become harder to bring these channels together to create such a measure. It seems to us that this kind of measure is the clearest indicator of customer effort and how easy an organisation is to deal with. It also means an organisation has to ask the question “Why are our customers contacting us?”.
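As a simple illustration of how such a measure might be assembled across channels (the channel volumes and the choice of orders as the “X” are assumptions for the example, not client figures), see the sketch below:

```python
# Illustrative sketch: combining contact volumes across channels into a
# single "C per X" ratio - here contacts per order for one month.

from dataclasses import dataclass

@dataclass
class MonthlyVolumes:
    calls: int
    emails: int
    chats: int
    messages: int
    orders: int  # the "X" - could equally be customers or accounts

def contacts_per_order(v: MonthlyVolumes) -> float:
    """C per X: total assisted contacts divided by the chosen demand driver."""
    total_contacts = v.calls + v.emails + v.chats + v.messages
    return total_contacts / v.orders

# Hypothetical month: the lower the ratio, the easier the organisation is to deal with.
example = MonthlyVolumes(calls=12_000, emails=4_000, chats=3_000, messages=1_000, orders=50_000)
print(round(contacts_per_order(example), 3))  # 0.4 contacts per order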
The other “obvious” measures we see missed relate to the duration of contact. Many organisations obsess about service levels, such as how quickly they answer a call or respond to an email (ASA, or average speed of answer).
In our diagnostics we look at the “total effort” for the customer. This includes the time they spend “navigating” through an IVR, the wait time and the duration of the contact.
This starts to expose the total effort customers put into contacts, rather than just their wait time or talk time, and indicates what may be a greater priority for customers. Of course, customers don’t like waiting in a queue, but they also don’t like long, complex conversations and, most importantly, failure to resolve the call (see our other papers on the ultimate customer measures).
We also break down averages. Often we look at how many calls exceed thresholds of high effort, such as ten minutes (see diagram). In this example, 37% of customers had to invest over ten minutes in the talking component of the call, in addition to their wait time and navigation time. This sort of analysis gets executives to ask why processes are taking so long and to question how customer time (as well as staff time) is being used. Measuring effort in this way is relatively simple but provides a clear view of the true cost of the time customers spend. Better still, it shows where saving time for customers will also save money for the company. Shorter contacts, and better still no contacts, are a win for all.
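A minimal sketch of how such an effort breakdown might be computed, assuming made-up per-call timings (IVR navigation, queue wait and talk time in seconds) rather than data from any real system:

```python
# Illustrative sketch: total customer effort per call and the share of calls
# whose talk component alone exceeds a ten-minute threshold. Timings invented.

# Each tuple: (ivr_seconds, wait_seconds, talk_seconds) for one call.
calls = [
    (45, 120, 180),
    (60, 300, 720),
    (30, 90, 660),
    (50, 200, 240),
]

def total_effort_seconds(call):
    """Total time the customer invested in the contact, not just talk time."""
    ivr, wait, talk = call
    return ivr + wait + talk

avg_effort_min = sum(total_effort_seconds(c) for c in calls) / len(calls) / 60
over_10_min_talk = sum(1 for (_, _, talk) in calls if talk > 600) / len(calls)

print(f"Average total effort: {avg_effort_min:.1f} minutes")
print(f"Calls with >10 min talk time: {over_10_min_talk:.0%}")
```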
Summary
Measuring the voice of customer has become common across businesses, and it can certainly help drive improvements. We hope this paper has shown that organisations should work harder to listen in crisis situations, be wary of averages and think hard about which issues to measure. They can also tap into much simpler measures of customer effort that uncover avoidable cost at the same time. We’re happy to explain more about the solutions we have recommended. For more information email us at info@limebridge.com.au or call 03 9499 3550.