
The Great Measurement Debate and Working from Home

No measures versus the right measures


The seismic shift to working from home brings to the fore an emerging debate about different measurement models. We’ve been tracking a trend in which some organisations remove many traditional measures and claim great improvements in both culture and performance. These companies claim that if you give people the right training and the freedom to act, but take away measurement, they perform better and make sensible decisions that help customers and resolve problems. Counter to this is traditional management theory that “you get what you measure”: people respond to clarity and simplicity of measurement, and lift performance when they know how they are performing and what performance is expected.

Working from home creates new challenges for this measurement debate. In many jobs, performance and behaviours become less visible when staff are at home; it is harder to see and hear what staff are doing. A team leader sitting with their team can see and hear frustration, anger, boredom, tiredness and stress in their team members. They can see physical and emotional signals that are invisible when staff are remote. In theory this means formal measurement becomes more important, because it’s the only way managers can tell how staff are travelling. However, the counter argument runs that “measurement free” systems are less undermined by hybrid work environments where some staff are in the office and others are not. The claim is that “self-motivated” cultures and workplaces will work just as well at home.


In this paper we’ll explore the two different camps and try to assess the ingredients for success on either side of the debate. Our view is that either option can work, but in both cases the management processes and other operating model dimensions have to be correctly aligned. What we have learned is that the real issue is not whether you do or don’t measure, but how you make either “system” work well.


1. The “No measures” Camp

One of the first companies to try a measures-free approach was Semco in Brazil, whose CEO Ricardo Semler wrote a Harvard Business Review article in the late 1980s on the democratisation of the business: stripping back management and distributing decision making to locations and plants. However, the company also had significant employee share ownership, which meant that everyone was motivated for success, and it made many decisions collegially, with some taken by a company-wide vote. In other words, the company had a complete set of systems, processes and reward mechanisms that supported a measurement-free approach. One could argue that the lack of measures was almost an outworking of the other elements of the system, rather than the thing that drove success.


Start-up companies, where everyone has shares and wants the business to succeed, are often organisations with few measures. However, recent evidence suggests this may only work while start-ups are in high-growth mode. In recent months, companies where growth is slowing, such as Canva and Atlassian, have gone very public in strengthening their performance management systems because investors are pushing them to show cost control. At Facebook, many staff even write their own job descriptions, so measurement isn’t a big feature. However, in recent years there has also been stern external criticism of the culture and the issues this environment has created, with government scrutiny on several continents exposing a lack of care about issues such as privacy and data controls. The “make your own adventure” model has some drawbacks.


A UK insurance business removed many front-line measures in recent years and claimed great success in “empowering” front-line staff to take decisions and do the right thing without measures. It also claimed to have removed layers of middle management. That wasn’t the complete story. Before moving to this “way of working”, the company had defined four key principles, such as “reducing repeat work”, and 12 key planks to support the model, such as “only doing something if it adds value for the business or customer”. One of those twelve planks was to “measure against these principles”. The change moved the business from measuring individuals on outcomes and results to measuring behaviours. It worked for them, but it wasn’t really “measurement free”; it was “measurement different”.


Conclusions on “No Measurement” models

The companies that have removed traditional measures have often created either different measurement systems or different ways to motivate staff. In some cases they have provided ownership and growth incentives; in others they have defined the desired behaviours and shifted measurement to those. Perhaps the biggest lesson is that if you drop traditional individual measures, you need to be sure what will motivate staff or what you will measure instead. In a work-from-home situation, having clearly defined behaviours or incentives seems even more important, which then creates the challenge of how to monitor and measure those behaviours.


2. The “Get what you measure” Camp

Many organisations believe measurement and associated incentives are critical to driving performance, but often those measures are unpopular or ineffective. We’ll look at why that happens and the design principles that can overcome the most frequent issues so that measures achieve their desired outcomes. The issues and principles discussed here apply as much to at-home as to office settings, although working from home does add complexity by making some behaviours associated with measures less visible. Sitting with a team, it is easy to see some measures being gamed; in a remote-work setting it is often much harder.


The sales parts of organisations seem the most focused on measures, and they often attract staff who like to compete and are motivated by sales incentives. This is no guarantee of success, however, and can produce poor outcomes such as:

  • Overselling, as highlighted by the Australian Banking Royal Commission, which found that aggressive sales reward schemes were driving inappropriate sales to customers who could not understand what they were buying.

  • Result manipulation where success is claimed but isn’t sustained or real. For example, in the early days of internet banking one bank rewarded branches for signing customers up for internet banking even if these customers never logged on. It was easy to game that measure.

  • Manipulation of the way results are calculated. We have seen sales staff lift their “measured performance” by transferring calls to other areas so they don’t count as “prospects” against conversion-rate targets.
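To make that last mechanism concrete, here is a minimal sketch with purely hypothetical numbers; they are illustrative only and not drawn from any client. Written as a few lines of Python, it shows how moving calls out of the “prospect” count lifts the measured conversion rate without a single extra sale:

    # Hypothetical illustration of gaming a conversion-rate measure by
    # shrinking the denominator (the counted "prospects").
    def conversion_rate(sales: int, prospects: int) -> float:
        """Conversion rate as a percentage of counted prospects."""
        return 100.0 * sales / prospects

    calls_received = 100    # calls the agent actually handled
    sales_made = 30         # sales closed from those calls
    transferred_away = 20   # calls transferred so they are not counted as prospects

    honest = conversion_rate(sales_made, calls_received)                     # 30.0%
    gamed = conversion_rate(sales_made, calls_received - transferred_away)   # 37.5%

    print(f"Honest measure: {honest:.1f}%")
    print(f"Gamed measure:  {gamed:.1f}% (same sales, smaller denominator)")

The same 30 sales now read as a 37.5% conversion rate instead of 30%, which is why measures like this need a denominator the person being measured cannot quietly alter.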

Each of those negative outcomes can be addressed with good measurement design. For example, an organisation was concerned that poor sales behaviours could produce “unsticky” customers who lapsed within months of the sale. The company moved to rewarding sales only where customers stayed for six months or more, aligning the strategy and the measure (a small sketch of this kind of retention-gated measure follows the list below). This illustrates an important principle: align what matters to the business with the measures applied to staff. There are many examples of measurement not following this principle, such as:


  • Back-office admin staff getting almost as much productivity credit for “rejecting” a form as for processing one to completion. Rather than fixing obvious issues and getting on with the work, staff would return forms to customers (see cartoon).

  • Call-taking staff being measured on “hold time”, so using “dead air” instead because it can’t be measured.

  • Contact centre staff being measured almost exclusively on apparent productivity targets like handling time when resolution and the way the call was handled are more important to the business and customer.
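Returning to the retention example above, the “sticky customer” rule can be expressed in a few lines. This is a rough sketch only: the six-month threshold comes from the example, but the field names (sale_id, retained_months, value) and the data are invented for illustration and are not drawn from any client system:

    # Hypothetical sketch: credit a sale for commission only if the customer
    # is still active six months after the sale.
    RETENTION_THRESHOLD_MONTHS = 6

    sales = [
        {"sale_id": 1, "retained_months": 11, "value": 1200},
        {"sale_id": 2, "retained_months": 2,  "value": 900},   # lapsed early: no credit
        {"sale_id": 3, "retained_months": 7,  "value": 1500},
    ]

    credited = [s for s in sales if s["retained_months"] >= RETENTION_THRESHOLD_MONTHS]
    credited_value = sum(s["value"] for s in credited)

    print(f"Sales credited: {[s['sale_id'] for s in credited]}")   # [1, 3]
    print(f"Credited value: {credited_value}")                     # 2700

The design point is the filter, not the code: the commission base only counts the outcome the business actually wants, so overselling to customers who quickly lapse no longer pays.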

In addition to alignment with business goals, there are a range of other tests that can be applied to check that measures will work. Almost as important as “strategic alignment” is “controllability” by the person being measured. If staff can’t control the results being measured, then the measure is both unfair and demotivating. For example, aircraft crew can’t control the weather or delays driven by air traffic control, so measuring them on “on-time arrival” will be demotivating. In contrast, we find that staff can control the process they follow, provided they have been trained in it. It is then fair to measure them on compliance with the process, and up to management to make sure those processes are as good as they can be and to coach and train staff to use them in the most effective way.

A third important principle is that the measure should capture what was intended without producing unintended consequences. Too often we see “well intentioned” measures produce predictable but poor outcomes. A classic example is measuring staff on “adherence to schedule”, i.e. that they go to lunch and breaks at the planned time. That provides a perverse incentive not to serve a customer just before a scheduled break, even when customers are queuing. Unintended consequences also follow from poorly thought-through quality measures. For example, an organisation decided that it wanted front-line staff to “promote” digital capability when dealing with customers and built this into its quality measurement. Front-line staff felt they had to promote digital on every interaction to pass the quality assessment, so they would jam “digital promotion” into conversations in inappropriate ways when it made no sense to customers. It ticked the quality box but created poor experiences (and led to yet another cartoon – see “tried digital mate”).


A fourth principle is to look at the alignment of measures “up and down” the organisation. For example, if middle management are measured on factor X, they will often make their staff accountable for X even though it isn’t a formal measure. Staff also take it on themselves: once they know their boss is measured on X, they naturally treat it as important. In one bank, front-line staff had no individual sales targets. When new sales channels were created, staff were supposed to hand off sales opportunities to the new self-service sales technology. However, the branch and the branch manager had sales targets, so staff thought they were doing the right thing by hanging onto sales opportunities and trying to convert them themselves. Their behaviour undermined the new channels and processes. It didn’t matter that they didn’t have targets – they behaved as if they did, because of the targets of those they reported to.


Conclusions on measurement

There are many things to get right in measurement design, and it is very easy to get it wrong. However, if measures adhere to some key principles, and organisations think through issues like alignment and unintended consequences, measures can drive appropriate behaviours. It is striking that some organisations become so detached from actual performance and behaviours that they don’t recognise the issues their measurements are causing. With at-home working, that risk of detachment is even greater. The poor examples quoted here are all real, and many organisations didn’t fix the problems even when the issues were raised.


Summary

In this paper we’ve explained the two sides of the measurement debate. We understand the push-back on measurement, as some measures have been badly designed. However, our read of the “measurement free” environments is that they do measure things – just different things. If you’d like more detail on the principles and issues discussed here, we love designing measurement, so please feel free to get in touch at info@limebridge.com.au or call 03 9499 3550 or 0438 652 396.
