Delight Your Customers… Sometimes
Our Henley Business School report on Customer Effort
In Part One of a two-part special report into the challenges, application and advantages of using customer effort (CE) programmes to gain customer loyalty, Professor Moira Clark of Henley Business School explains why some of the received wisdom about CE is wrong. How does it really work, and why?
Customer Effort: Part One of two
‘Customer effort’ (CE) research has been around since the 1940s, but it wasn’t until 2010, when the article Stop trying to delight your customers appeared in Harvard Business Review, that the debate about it began to gain momentum. The article claimed that reducing customer effort – the work that customers must do to get their problems solved – was a better predictor of customer loyalty than trying to increase their delight at every stage of a company’s relationship with them.
The findings were certainly compelling. Among customers who experienced low effort in their dealings with a company, 94 per cent expressed an intention to repurchase, while 88 per cent said they would increase their spending. Meanwhile, among those customers who had a hard time solving their problems, 81 per cent reported an active intention to spread negative word of mouth.
Of course, thinking from your customers’ perspectives and making life less difficult for them is a sensible strategy for any organisation. But are things always as simple as the above results suggested? The short answer is no. Part of the problem is that ‘effort’ can be a tough concept to pin down, and Henley Business School research suggests that it can be better expressed in other ways.
In this two-part report, we explain why that seminal article was wrong in some key respects, and why a better approach would be ‘Delight your customers, but only where they value it’, rather than the implicit advice to ‘abandon service excellence’.
However, as both Parts One and Two of this report will explore, many of the findings in the Harvard Business Review article are correct. We explain why, and what organisations should do about it – and in particular, how they can make the concept of customer effort much easier to implement.
But first, what do we mean by customer effort?
Defining ‘customer effort’
Customer effort is a person’s perception of the amount of time and energy that they have to expend in any encounter with a brand or organisation. An encounter includes (but isn’t limited to):
• getting information about a product or service
• purchasing it
• getting a problem solved post-purchase; and
• actually using the product, be it a car, a Sky box, or an iPhone.
In this context, ‘effort’ could be defined as the non-monetary cost of consumption, which is different to an objective measure of time and energy. So, we can already see that it’s not a simple concept for customers to respond to in an online questionnaire, or in a series of options via an interactive voice response (IVR) system.
There are further problems with the question of ‘effort’: CE can be a global judgement – when a customer thinks about all of their encounters with an organisation – or a judgement about a single encounter, be it a positive or a negative one. (In the online space, customers often make rapid decisions about whether to revisit, so reducing their effort there is clearly a good strategy.)
What is more, the concept can sometimes be complicated further by customers’ expectations and by their ideas of value.
The more effort that someone puts into something, the more they typically expect in return – unless they’re doing voluntary or community work, of course. This is because effort is a cost, and so there is a trade-off between the effort that a customer puts into something and the reward they expect to receive.
The outcome of that trade-off influences their perceptions of how happy they are. Putting obstacles in the way of a commodity purchase, or making people jump through multiple hoops to access a simple service, makes people frustrated. This is because what they get in return doesn’t feel like compensation.
But this doesn’t apply to all types of human experience. For example, the satisfaction of mastering a difficult musical instrument is much greater than that of learning how to play the triangle. The effort required for the former is much greater than the latter, and so many people simply give up. Others persist and are rewarded with something that enhances their lives. For them, therefore, the effort is part of the pleasure.
On rare occasions, these broad principles might apply in business in specific niches, or at the top ends of certain markets. For example, few customers would expect to order a limited-edition Ferrari or Bugatti with a single click. So if you’re – genuinely – a company that can afford a high degree of ‘failure’, then customer effort may not be a measure that you need to give much consideration to. ‘Your’ customers are the ones who are rewarded with something exclusive or unusual; the ‘also-rans’ were never the targets to begin with.
But in business, such occasions are very much the exception rather than the rule.
Nevertheless, to some degree CE is about understanding your market. In the same way as spending more money on an item can increase a customer’s perception of its value (as the luxury end of the fashion industry has shown), customers may also value a product or service more, and evaluate its performance more highly, as the ‘effort cost’ goes up. They may also believe that more effort increases the likelihood of making the right decision, probably because they want to receive apt compensation for all of their expended energies.
Effort, therefore, can sometimes be a more complex and subjective consideration than some studies have suggested. But what are its components? Let’s consider four types.
Cognitive effort
This is the amount of mental energy that’s required to process information. If things aren’t simple, or if there is too much uncertainty or too much choice, cognitive effort can be high.
Cognitive effort has been extensively researched in economics, psychology, marketing and decision-making theory. In all of these areas, consumers are consistently described as having limited cognitive resources; they are ‘cognitive misers’ who strive to reduce the amount of mental energy associated with decision-making.
For example, research has shown that people don’t necessarily want the best answer; they will often settle for whichever one incurs the lowest ‘decision cost’ in mental work. This is particularly true when complexity is high, where there are numerous alternatives, and/or products or services are difficult to compare.
Time effort
This is the amount of time that consumers think that it takes to do something – which tends to be a perception rather than a reality. For example, studies have shown that consumers significantly overestimate the time they spend waiting, such as in a queue (be it a real or a virtual one).
This is often a design issue. Studies on the factors that influence consumers’ reactions to waiting show that service, physical environment, distractions, perceived fairness, customer state of mind, and availability of information can all be used to influence their perceptions of time in different ways.
So understanding time effort is not just about measuring the number of minutes it takes to serve someone – it’s about their perception of how long it takes. That perception can be altered by good design.
Emotional effort
Anxiety, stress, anger, fear, boredom and frustration are all psychological costs related to emotional effort. These can be the results of: a problem with staff or other customers; an inability to access the right people, processes or procedures; complaints not being properly dealt with; failures in technology; and feelings of personal risk due to safety or security concerns.
These are familiar emotions for many customers, particularly when dealing with call centres – as study after study has shown.
Physical effort
This is the amount of physical energy that needs to be exerted to do something, such as lugging bulky goods around, having to walk long distances, or having to go physically into a bank or building society when many functions could be better served online.
All of the above components of customer effort can be used to design easier customer ‘journeys’ (such as through a call centre or ecommerce site) after the customer effort score (CES) has been analysed in each case and the key action points identified.
The relationship between effort and involvement
In the real world, most purchases are low value/involvement, and high frequency/familiarity. The decision processes associated with these types of encounters, therefore, ought to be simple and straightforward and demand little in terms of time and effort.
However, if customers are trying to reach a goal that really matters to them, and/or they closely identify with a particular brand, then they have both a higher level of personal involvement and a higher perception of risk, especially if the product or service is complex (such as choosing a mortgage and buying a house), or time dependent (such as buying a gift or booking a holiday).
But it’s important to mention that the cost of an item or service is not always directly related to the perceived risk. To some consumers, choosing the correct items – tyres, pain killers, contact lenses, or hundreds of other product types – can represent a risk as huge as choosing a new television or car. An individual’s propensity to take risks (or avoid them) can influence how much effort they are willing to invest and how satisfied they are.
Customer effort in practice
There have been increasing numbers of articles recently on the merits and application of customer effort. These primarily relate it to established customer service measures, such as customer satisfaction and net promoter scores, and are largely opinion pieces rather than objective research. So rather than explore these, we’ve interviewed five major companies that have practical experience of implementing customer effort measures within their organisations, and we present the findings of this research below.
UK retail telecoms giant BT was one of the companies, while the other four asked to remain anonymous. In the findings below and in Part Two, we refer to those companies as: Company B1 (European, B2B, in the fast-moving consumer goods sector); Company B2 (European, B2B, in the technology sector); Company C2 (UK, B2C, in the holidays sector); and Company C3 (UK, B2C, in the financial services industry).
Since there were distinctly different approaches between the B2C and B2B respondents, the two categories are discussed separately in the sections that follow.
Why invest in customer effort?
Inevitably, the 2010 Harvard Business Review article, Stop trying to delight your customers, inspired some of the companies that we interviewed to explore whether CE is a better indicator of customer loyalty than customer satisfaction or net promoter scores. More importantly, perhaps, they felt that the concept of CE was much easier to understand than those other measures.
All the B2C companies we interviewed had a common objective for investing in customer effort: to improve their customer loyalty. Each also had well-established customer service measures in place, based on surveys and feedback analysis. They all felt that CE was an approach that complemented their existing programmes.
Company C2 (in the travel sector) said: “The core reason for us being in business is to provide services to customers that they would find more difficult to locate on their own. Our goal is to make it easy for customers by reducing the effort they have to expend. So it makes perfect sense to measure that effort.”
Company C2 added that its own research on what it calls ‘satisficers’ (customers who tend to select the first option that meets their needs rather than the ‘optimal’ solution) supports the approach of not attempting to delight every customer at every stage of the customer journey, as the Harvard Business Review article advised.
This behaviour is well understood in the travel sector, where customers actually enjoy the process and effort involved in researching holidays, but then want the booking process to be as simple as possible.
Company C3 (in the financial services space) had attended a conference on the merits of CE, and was impressed with what it heard. It decided to start by applying CE measurements to its telephone channel and then see if differences in customer performance emerged compared with other channels. That process is still ongoing, but the initial results are promising.
The advantages of ease
Meanwhile, the two B2B companies’ expectations were that being seen as easier to use would have a simple, positive impact on customer loyalty and, by the same token, that being seen as ‘difficult’ would have a negative effect. Both companies were already conducting continuous improvement initiatives, but lacked input from their customers. For them, the CE approach seemed like a good way to capture the voice of the customer.
Company B1 (in the FMCG sector) kicked off an internal programme across the organisation to keep the idea of being easier to do business with uppermost in the minds of its employees when dealing with customers. It had already refined processes to meet operational key performance indicators (KPIs), but wanted to shift the focus externally to the things that mattered to its customers.
Measuring CE was attractive as it provided a way of identifying whatever caused problems for those customers and, importantly, what needed to change. “We realised that customers want transactions to be as easy as possible and this was not always the case,” said Company B1.
Company B2 (technology) went through a very similar process. It already had an extensive programme for measuring customer satisfaction in place and added CE questions to it.
In both cases, the challenge was to establish a way of measuring customers’ effort, to identify where improvements were needed, and then to measure the outcomes. To achieve this, both companies found that it made more sense to phrase questions around ‘How easy is it to do business with us?’ rather than ‘How much effort is required?’, because the concept of ‘easy’ was itself much easier to understand than asking customers to quantify their effort.
How to measure customer effort
None of the companies we surveyed wanted to replace one metric with another, or simply add another metric to their customer research that failed to provide actionable insight, and so each of them spent time trialling and reviewing how a CE approach could work for them.
For all of the companies, the main CE considerations are:
• Consistent measurement to allow comparisons across multiple channels.
• Defining the scope of the CE programme: is it company-wide, applicable only to a specific function, such as customer services, or focused on a specific channel, such as the telephone?
• How it fits alongside existing customer measures, such as customer satisfaction and net promoter scores.
• Ensuring that outcomes are actionable.
• Establishing benchmarks to help assess the impact of actions.
The B2C companies we interviewed recognised the need to ask CE questions at each customer touchpoint. One, BT, has developed its own ‘Net Easy’ metric, which is similar in structure to a net promoter score scale. BT said that it can apply this across all of its contact channels, including voice, webchat, website, email, social media, ‘white mail’ and IVR.
However, there is always a trade-off on each channel between the most effective scale and the accuracy of data collection. The seven-point scale chosen by BT worked in places where the range of questions could be easily managed, such as on its website, but it proved too unwieldy to use on its IVR system. A simple three-option question – Easy, Difficult or Neither? – was used instead.
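BT’s exact formula isn’t given in our interviews, but a ‘net’ metric of this kind typically subtracts the share of negative responses from the share of positive ones. A minimal sketch, assuming a seven-point scale on which scores of 6–7 count as ‘easy’ and 1–2 as ‘difficult’ – the thresholds and function name are our illustrative assumptions, not BT’s definition:

```python
def net_easy(scores, easy_min=6, difficult_max=2):
    """Compute a Net-Easy-style score from 1-7 survey responses.

    Returns the percentage of 'easy' responses minus the percentage
    of 'difficult' responses. Thresholds are illustrative assumptions.
    """
    if not scores:
        raise ValueError("no responses to score")
    n = len(scores)
    easy = sum(1 for s in scores if s >= easy_min)
    difficult = sum(1 for s in scores if s <= difficult_max)
    return 100.0 * (easy - difficult) / n

# Example: six 'easy', two neutral and two 'difficult' responses
responses = [7, 7, 6, 6, 6, 6, 4, 3, 2, 1]
print(net_easy(responses))  # 40.0
```

The same function works for the three-option IVR variant if the options are mapped onto numbers first, which is one reason a consistent net-score structure is attractive across channels.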
BT is also shifting its metrics away from measuring internal processes to finding out what its customers are most concerned about. These are identified by simply asking its customers why they’ve given particular scores and then analysing their answers. BT reports that customer response rates to these exercises are good, with about 50 per cent of those who take the survey also leaving comments.
Company C3 introduced CE for its telephone channel contacts by asking customers to score their encounter at the end of each call. This is supported by an open question on why they’ve given a particular score. The intention is to follow up later with more data analysis in order to identify significant issues and track why changes occur.
Company C2 is still assessing how to measure CE and is looking at scoring in relation to existing survey questions.
‘How easy?’ questions
The B2B companies have incorporated ‘How easy?’ questions into their existing customer service questionnaires, using standard five-point response scales – ie, ranging from ‘Very easy’ to ‘Very hard’. In their experience, net promoter score (NPS) data were not actionable enough, but did provide external benchmarks against competitors or ‘best in class’ companies.
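As an illustration of how answers on such a five-point scale might be summarised, here is a minimal sketch. The ‘top-two-box’ share (the proportion answering ‘Very easy’ or ‘Easy’) is a common survey convention that we assume for illustration; it is not a method any interviewee described:

```python
from collections import Counter

SCALE = ["Very easy", "Easy", "Neither", "Hard", "Very hard"]

def summarise_how_easy(responses):
    """Summarise five-point 'How easy?' answers.

    Returns the response distribution and the percentage answering
    'Very easy' or 'Easy' (an illustrative 'top-two-box' summary).
    """
    counts = Counter(responses)
    n = len(responses)
    distribution = {label: counts.get(label, 0) / n for label in SCALE}
    top_two = (counts["Very easy"] + counts["Easy"]) / n
    return distribution, round(100 * top_two, 1)

answers = ["Easy", "Very easy", "Easy", "Neither", "Hard", "Easy"]
dist, pct_easy = summarise_how_easy(answers)
print(pct_easy)  # 66.7
```

Tracking the full distribution, not just the headline percentage, is what makes the results actionable: a shift from ‘Very easy’ to ‘Easy’ can be an early warning even while the top-two-box figure holds steady.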
Like their B2C counterparts, B2B companies also favoured an approach of combining NPS and ‘How easy?’ types of questions (in preference to the more difficult task of asking customers to quantify the effort involved).
Company B1 also tracked complaints as a rich source of information about any issues arising from the ‘How easy?’ questions. It had already been on a long journey to understand customer complaints by conducting root cause analysis.
At Company B1, complaints are reviewed monthly to see what problems are occurring. The company’s strategists then identify the causes and design remedies. Solutions to some complaints can have a significant impact across the company, but many are about smaller issues that can be resolved through incremental improvements.
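The monthly review described above is essentially a prioritisation exercise: tally complaints by root cause and tackle the biggest sources of friction first. A minimal sketch of such a tally – the cause labels and function name are invented for illustration, not drawn from Company B1:

```python
from collections import Counter

def rank_root_causes(complaints):
    """Rank complaint root causes by frequency, most common first,
    so remediation effort can target the biggest sources of friction."""
    return Counter(cause for _, cause in complaints).most_common()

# (complaint_id, root_cause) pairs -- labels are illustrative only
month = [
    (101, "invoice error"), (102, "late delivery"), (103, "invoice error"),
    (104, "website login"), (105, "invoice error"), (106, "late delivery"),
]
print(rank_root_causes(month))
# [('invoice error', 3), ('late delivery', 2), ('website login', 1)]
```

In practice the root-cause label comes from the analysis stage the company describes, not from the raw complaint; the ranking only tells you where to look, not what the remedy is.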
The company realised that CE programmes have cultural repercussions. In its case, it had to change the way that its employees think so that the impact of its processes on customers became a primary consideration. “We recognised that the customer effort approach provides a measurable basis, using our voice-of-the-customer insight, for driving our continuous improvement programme.”
In Part Two...