
Repeat work déjà vu!

Creating capacity by tackling the worst kind of customer effort


A number of our papers this year have looked at ways to address the “capacity crunch” affecting Australian business. With unemployment in Australia down to 3.9% at the time of writing and unfilled job ads at record numbers, most businesses need ways to reduce workloads so that they can operate with fewer staff. Our most recent white paper looked at using this mini crisis to get the whole business to help reduce the demand for contact. Our research shows that most demand is created outside the service or care teams that have to handle it. For example, customer service teams don’t cause delays in key processes like claims or package delivery, and they don’t design the bill, the portal log-in process or the web pages that customers don’t understand. Those are all examples of demand “generated” elsewhere in the business that customer service areas have to “fix”. Whether it be calls, branch visits, chat, messaging or email, most of the causes of the work sit elsewhere, and our last paper suggested ways to tackle that based on the ideas in our new book “The Frictionless Organization”.


In the book we explain that this isn’t the whole story. Customer-facing teams and operating models do cause some of their own work. Typically, we find that anywhere from 10% to 45% of workload in customer-facing service teams is some form of repeat work. These issues take longer to “sort out” than a typical first-time contact, so even a modest share of contact volume can represent a much larger share of workload. This repeat work can therefore be a high cost to the business. The chart shows the disproportionate workload in one operation where unresolved items got more and more costly to sort out: the worst problems were nearly three times as expensive to handle per contact as items handled once.
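
To make the volume-versus-workload distinction concrete, here is a minimal sketch with purely illustrative numbers (assumptions, not figures from the chart): if repeats are a fifth of contacts but each takes three times the effort, they account for well over a third of the workload.

```python
# Illustrative figures only (assumed, not taken from the chart above)
repeat_share_of_volume = 0.20    # repeats are 20% of contact volume
relative_cost_of_repeat = 3.0    # a repeat takes ~3x the effort of a first-time contact

first_time_workload = (1 - repeat_share_of_volume) * 1.0
repeat_workload = repeat_share_of_volume * relative_cost_of_repeat
repeat_share_of_workload = repeat_workload / (first_time_workload + repeat_workload)

print(f"Repeats: {repeat_share_of_volume:.0%} of volume, "
      f"{repeat_share_of_workload:.0%} of workload")
# -> Repeats: 20% of volume, 43% of workload
```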


Repeat work has impacts beyond the volumes and workload because these contacts are a major source of frustration for customers. These interactions correlate strongly with customers choosing to leave the business and therefore carry a disproportionate revenue “cost”. Many of these contacts also incur other downstream costs such as fees waived, rebates and items re-sent. They are also the most likely to result in formal complaints, which may attract other costs as well as reputational damage.


These repeat interactions are therefore a major drag on cost and capacity. We’ll look at solutions and case examples in three areas:

1. How to get a grip on the problem and causes

2. How to fix operating models that cause repeat work

3. How to measure and reward staff on their ability to prevent these issues


1. Getting a Grip on repeat work and resolution

Getting a grip on resolution, and its flip side “repeat contact”, has always been hard because many organizations have no way to track it. Some rely on simplifying assumptions like “X% of customers call back within 7 days, so they must be repeats and the rest must be resolved”. Unfortunately, the proliferation of channels has made this harder rather than easier, because customers and issues move across channels, so the effort of a repeat spans channels. A customer may appear to have made just one contact by chat or email, only to follow up with a call, or vice versa. The repeat work is harder to spot and measure now that issues and problems flow across more channels.


Our earlier paper, The Resolution Holy Grail, explains in more detail a range of mechanisms that can be used to measure resolution and repeats. The latest data analytics tools can add great sophistication and rigour by enabling a business to measure customers’ repeat work:

  • Within channels

  • Across channels

  • For the same reason

  • For related reasons e.g., non-resolution of A spawns problem B

  • Across relevant time periods

For example, these analytics mechanisms can spot that Customer A used web chat to raise an issue about his bill, then emailed two days later to try to resolve it, before calling two months later, when his next bill appeared, to raise the same issue. Even though this work spans channels and an extended time period, these tools can measure it as repeat work rather than as independent, unrelated events. When repeat work is measured right across a business, this kind of analysis may show a bigger opportunity than was first assumed. It can also highlight which types of problems have a high repeat percentage and allow an organisation to focus on problem areas with high potential for return. Analytics can also include a range of other costs and impacts, such as loss of customers, and therefore help build the case to prioritise improvements.
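
As a rough sketch of how such tooling can stitch these events together, the hypothetical code below links a customer’s contacts for the same reason within an assumed follow-up window, regardless of channel. The field names and the 90-day window are illustrative assumptions, not a description of any particular analytics product.

```python
from datetime import date, timedelta

# Hypothetical contact records for Customer A from the example above
contacts = [
    {"customer": "A", "channel": "chat",  "reason": "billing", "date": date(2022, 3, 1)},
    {"customer": "A", "channel": "email", "reason": "billing", "date": date(2022, 3, 3)},
    {"customer": "A", "channel": "phone", "reason": "billing", "date": date(2022, 5, 2)},
]

REPEAT_WINDOW = timedelta(days=90)  # assumed "relevant time period"

def flag_repeats(records):
    """Mark any contact that follows an earlier contact by the same
    customer, for the same reason, within the repeat window."""
    records = sorted(records, key=lambda r: r["date"])
    last_seen = {}  # (customer, reason) -> date of the previous contact
    for r in records:
        key = (r["customer"], r["reason"])
        prior = last_seen.get(key)
        r["is_repeat"] = prior is not None and r["date"] - prior <= REPEAT_WINDOW
        last_seen[key] = r["date"]
    return records

for r in flag_repeats(contacts):
    status = "repeat" if r["is_repeat"] else "first contact"
    print(r["date"], r["channel"], status)
```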


Automated channels can also be measured on their resolution level. The self-service “containment” rate measures the extent to which customers complete the work they want within the channel. This is a valuable measure of self-service success, but it is sometimes a guesstimate rather than a precise measure. For example, some bots count “success” as a customer abandoning a session after information is provided. Success is assumed, but of course some customers may abandon in frustration and divert to another channel. It is therefore important to validate these apparent containment rates by checking whether they substitute for contacts in other channels. If contacts increase in the new channel but don’t fall proportionately in the older channels, then no substitution has occurred. Where self-service “containment” really occurs, the rate of contact in assisted channels should be falling. Some self-service and bot designs seek to confirm resolution by asking customers to confirm success, but this runs the risk of putting the customer to more work just to provide measurement.
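
A minimal sketch of that validation, assuming you have period-on-period volumes by channel, is to compare the containment the bot claims with the fall actually observed in assisted channels (all numbers below are invented for illustration):

```python
# Assumed illustrative volumes for two comparable periods
assisted_before = 10_000       # calls, chats and emails before the bot launched
assisted_after = 9_200         # assisted contacts after launch
bot_sessions = 2_000
bot_claimed_contained = 1_500  # sessions the bot reports as "successful"

claimed_containment = bot_claimed_contained / bot_sessions
observed_substitution = assisted_before - assisted_after

print(f"Claimed containment: {claimed_containment:.0%} ({bot_claimed_contained} sessions)")
print(f"Assisted contacts actually avoided: {observed_substitution}")
# If 1,500 sessions were "contained" but assisted volumes fell by only 800,
# many customers are probably abandoning the bot and diverting to other channels.
```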


Case example:

In one wealth management business, the most common contact was “cross-channel” repeat work: customers would call when they couldn’t log onto the self-service system. Deeper analysis showed that many of the assisted contacts were secondary repeats because staff weren’t clear on the “rules” for password formats, so customers would try again after a phone call, still fail, and then call back again. The analysis led to three levels of solutions:

a) Providing customers better self-help when things did go wrong

b) Providing better knowledge and tools to front-line staff to solve problems

c) Re-evaluating the business rules and how they were applied to security and passwords.


2. Operating Models that “Cause” repeat work and how to fix them

Operating model design can often unintentionally drive repeat contact. In these instances, it’s the structure, the process, the measurement or a combination of these that causes repeat work. Two examples illustrate the problem.


Case 1: Work “request-itis”

In one company, resolution by the customer-facing teams was measured (through a sampling approach) at only 30% of interactions. In over 40% of contacts, the team had to “request” work from other areas of the business, and in another 20% they sent the customer away to complete actions. Some of that analysis is shown in the chart.

This model of non-resolution led to extensive repeat work as customers called to “chase” the work or complained that the work hadn’t been performed correctly.


The theoretical benefit of this work structure was to free customer-facing staff from repeatable tasks while capturing the labour cost savings of an offshore location. However, the consequence of this operating model design was that nearly 50% of contacts were repeats. Far from freeing up the time of front-line staff, the model had them chasing work and dealing with frustrated customers.

Correcting this model used a process we call “back to front” reengineering. Many types of work were moved from the back office to the front office so that front-line staff could resolve more. Onshore labour costs were higher, but there was so much overhead and waste in the hand-off model that the change delivered a financial benefit as well as reduced effort and work for customers. The revised model slashed the amount of repeat work, made customers happier and got them faster solutions. Workload in the system fell dramatically, and the front office ended up with less work overall even though it was handling more parts of the process.


Case 2: Reject mania

This example illustrates the problems caused by poor measures and poor processes combined. In this business, the repeat contact was customers chasing work that they had initiated. There was a series of quite complex processes that required customers to complete detailed forms and provide complex information. Often they would call first to ask how the process worked and then be sent a form. Call analysis showed that many called back to ask, “Where is my X up to?” Then they often called again saying, “Why did you send this back to me?” So there were many repeat contacts and often re-work for customers and processing teams.

There were two underlying problems. The first was form design. The forms were paper-based and often “multipurpose”, so a single form might be used for three different request types and customers had to work out which parts to complete. Customers would get confused and fill out too much, or not realise which items were mandatory. The second was measurement. When data was missing or wrong, rather than calling customers to sort out obvious issues, the admin teams rejected any incomplete forms and mailed them back. Rejecting a form and sending it back to the customer generated better productivity credits than taking the time to call the customer or look up the missing data. In effect, the measurement system rewarded rejection and re-work.

The solutions were multi-faceted. Firstly, revised form design and the use of online forms made each form simpler and easier to get right. Secondly, contact centre staff were given access to systems so they could clean up any issues for customers when they called. Lastly, the measurement system was redesigned to reward resolution rather than rejection. These changes are still work in progress but are showing signs of a 20-30% reduction in workload for these requests, and the benefits could be larger. The customer experience is also much improved.


3. Measuring and rewarding staff on their ability to prevent or solve issues

It is surprising how rarely front-line service staff are measured on things like resolution or repeats. Many organizations quality-sample 5-10 interactions a month in an attempt to measure process adherence, and sometimes this includes resolution. Customer surveys are also now used to measure customers’ perceptions of interactions, be they chat, voice or front counter. But it’s very rare to find a company that can measure the resolution achieved by its staff or the repeats that they cause. There are two mechanisms that we’ve seen work well, and technology is now enabling the second:


Amazon and Snowballs

Over twenty years ago, Amazon started measuring its staff on the repeat contacts they created. Repeats were called “snowballs”, using the analogy that a snowball rolling down a hill gets bigger, just as a repeat contact does. Staff were encouraged and trained to melt any “snowballs” they received and to log that they had done so.

The system became self-measuring. A melted snowball earned a credit for the person who solved the problem and a negative score for the person who created the initial snowball. Every staff member was measured on their “net snowball” rate and on whether they were creators or resolvers of problems. This became an important metric because it measured both the quality of initial calls and the capability of staff to fix problems. It became one of the most important measures of staff, teams and centres. It was also used to measure outsourcer performance and to penalise or reward them. It meant that Amazon needed little random quality monitoring. The snowballs recorded against staff showed team leaders where their team members needed coaching and training. In effect it became the ultimate measure of quality and outcome for the customer. The data captured was also used to spot systemic issues like training gaps or processes that everyone struggled with. It was a great measure that focused staff on solving customer problems and revolutionized coaching and feedback.
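
As a minimal sketch of the arithmetic, assuming each logged event records who created the snowball and who melted it (a hypothetical data shape, not Amazon’s actual system), the net score per person might be computed like this:

```python
from collections import Counter

# Hypothetical snowball log: each entry records who caused the repeat
# and who resolved ("melted") it
snowball_log = [
    {"created_by": "agent_1", "melted_by": "agent_2"},
    {"created_by": "agent_3", "melted_by": "agent_2"},
    {"created_by": "agent_1", "melted_by": "agent_3"},
]

melted = Counter(e["melted_by"] for e in snowball_log)    # credits
created = Counter(e["created_by"] for e in snowball_log)  # debits

for agent in sorted(set(melted) | set(created)):
    net = melted[agent] - created[agent]
    role = "net resolver" if net > 0 else "net creator" if net < 0 else "neutral"
    print(f"{agent}: melted {melted[agent]}, created {created[agent]}, net {net:+d} ({role})")
```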


Analytics-driven repeat measurement

Once an organization has analytics in place to track the reasons for contact, it can also measure repeat contacts at a detailed level (see above). Measuring repeats at the customer level also enables repeat and resolution measurement of the staff who handle the contacts. They can be measured on “repeats they caused” and “repeats they solved”, as in the Amazon case, but without the need for an additional logging process. Analytics can calculate resolution at the staff-member level as the “inverse” of the rate of repeats they cause. These mechanisms can even track when a contact with an agent in one channel results in a repeat contact in another channel. For example, if a web chat team member fails to handle a problem which then gets sorted out through a phone call, this can be measured against both the chat and the phone staff members. In effect, this automates the manual mechanism that Amazon used through snowball tracking.
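
Building on the repeat-linking sketch earlier, agent-level resolution could then be approximated, under the same assumed data model, as the share of an agent’s contacts that analytics did not link to a later repeat:

```python
from collections import defaultdict

# Hypothetical handled-contact records: each knows which agent took it and
# whether analytics linked a later repeat (in any channel) back to it
handled = [
    {"agent": "chat_agent_1",  "caused_repeat": True},
    {"agent": "chat_agent_1",  "caused_repeat": False},
    {"agent": "phone_agent_2", "caused_repeat": False},
    {"agent": "phone_agent_2", "caused_repeat": False},
]

totals = defaultdict(lambda: {"contacts": 0, "repeats_caused": 0})
for c in handled:
    totals[c["agent"]]["contacts"] += 1
    totals[c["agent"]]["repeats_caused"] += int(c["caused_repeat"])

for agent, t in totals.items():
    resolution = 1 - t["repeats_caused"] / t["contacts"]  # the "inverse" of repeats caused
    print(f"{agent}: resolution ~ {resolution:.0%} over {t['contacts']} contacts")
```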

Using this repeat data, in the example shown, agent performance was plotted on two axes showing repeats created and repeats solved. Agents who were net problem solvers appeared above the midline and net problem creators appeared below it. Such clear measurement enables alignment of rewards and incentives, and it also drives continuous improvement and coaching.

Direct and objective measurement of resolution and repeats enables a targeted approach to coaching and feedback. Instead of randomly sampling a handful of calls, team leaders and coaches can focus on the contacts that haven’t worked well and use these for coaching and feedback. They can also reward staff for contacts that are handled well. At a macro level, continuous improvement teams have the data to identify systemic problem areas and help reduce repeats further.


Summary

In this paper, we hope we have shown that another way to tackle capacity constraints is to address repeat work. We’ve described ways to measure the problem, fix operating models that cause it and build resolution into staff measurement and rewards. Some of the solutions require sophisticated technology, while others can be driven from contact sampling and existing measurement mechanisms. Fixing repeat work isn’t easy, but it produces the double win of lower costs and better experiences. If you’d like to discuss these techniques further, please feel free to get in touch at info@limebridge.com.au or call 03 9499 3550 or 0438 652 396.
