
Delight Your Customers… Sometimes (Part Two)

How to design better customer experiences using CE

 

In Part Two of our two-part special report on using customer effort (CE) programmes to gain customer loyalty, Professor Moira Clark of Henley Business School explains the practical application of CE and the conclusions of her detailed research.

In Part One of this Henley Business School report on the emerging discipline of customer effort (CE) programmes, we explored what customer effort is, including the need to consider people’s emotional and cognitive responses. We then looked at the advantages of measuring customer effort and at how five major companies have been experimenting with this.

In Part Two, we look at the practical application of CE, at how to design customer experiences, and discover the revenue and business cases for establishing CE programmes. Plus: All of Henley Business School’s conclusions on this emerging business discipline.

• The enterprises referred to in this report are consumer-focused telco BT, and four companies that asked to remain anonymous: Company B1 (European, B2B, in the fast-moving consumer goods sector); Company B2 (European, B2B, in the technology sector); Company C2 (UK, B2C, in the holidays market); and Company C3 (UK, B2C, in the financial services industry).

Some of these organisations are still figuring out how to apply CE metrics and gain meaningful insight from them, but their clear intention is to use CE data to improve customer loyalty.

All of our interviewees made the point that CE scores alone may not provide the big picture or the whole answer. Companies need to go through all of their data sources – such as single contact resolution, success rates for transactions, process failures and repeat calls – and triangulate those to ensure that they properly understand what’s going on.

“The best thing to measure is everything, and then be really intelligent about how you use what you’ve got,” said BT during the Henley study.
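
To make ‘triangulation’ concrete, here is a minimal Python sketch of what joining effort scores to operational data might look like. None of the companies described their tooling at this level, and every table, column name and value below is invented for illustration.

```python
# Hypothetical sketch only: the table layout, column names and data are invented
# and are not taken from any of the companies in the Henley study.
import pandas as pd

effort = pd.DataFrame({
    "contact_id": [101, 102, 103],
    "easy_score": [6, 2, 5],                 # e.g. 1 = very difficult, 7 = very easy
})
operations = pd.DataFrame({
    "contact_id": [101, 102, 103],
    "resolved_first_contact": [True, False, True],
    "repeat_calls_30d": [0, 3, 1],
})

# Join the data sources on a common contact identifier before drawing conclusions.
combined = effort.merge(operations, on="contact_id")

# Low 'easy' scores that coincide with unresolved contacts or repeat calls point
# at process failures rather than one-off bad experiences.
print(combined.sort_values("easy_score"))
```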

BT CEO Gavin Patterson

Beginning the CE journey

BT started its CE journey by establishing how its customer ‘journeys’ differ and how their respective effort scores compare with each other. This demanded the ability to slice and dice the results according to the types of experience being measured.

The telco analysed the paths that customers took through its interactive voice response (IVR) system. Using natural language analysis, it identified 200 distinct routes relating to what customers were trying to do – along with how easy it was to do those things. Customers’ verbatim comments were then cross-referenced with each instance of a journey through BT’s systems to produce trend insights.
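
BT did not describe its analysis tooling, but a hypothetical sketch of the general approach – journeys already tagged with a route label, grouped to surface effort trends, with verbatim comments pulled for the worst-scoring route – might look like this (all names and data are invented):

```python
# Hypothetical sketch only: route labels, column names and scores are invented.
import pandas as pd

journeys = pd.DataFrame({
    "route":      ["report_fault", "report_fault", "pay_bill", "pay_bill", "upgrade"],
    "month":      ["2014-01", "2014-02", "2014-01", "2014-02", "2014-02"],
    "easy_score": [2, 3, 6, 6, 4],            # e.g. 1 = very difficult, 7 = very easy
    "verbatim":   ["Kept repeating myself", "Long hold", "Quick", "Fine", "Confusing menu"],
})

# Average 'easy' score per route per month shows which journeys are trending up or down.
trend = journeys.groupby(["route", "month"])["easy_score"].mean().unstack("month")
print(trend)

# Pull the verbatim comments for the worst-scoring route to understand why it is hard.
worst_route = journeys.groupby("route")["easy_score"].mean().idxmin()
print(journeys.loc[journeys["route"] == worst_route, "verbatim"].tolist())
```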

Company C2, in the travel sector, was already using questionnaires to collect customer satisfaction data, including the likelihood of customers recommending its services to other people and making further bookings. It was carrying out these surveys at each step of the customer ‘journey’ (in business terms), from researching a holiday through to the post-holiday survey, and then collating the findings to give average ‘recommend’ and ‘repeat booking’ scores.

While this already provided some indication of customer intentions, Company C2 recognised that satisfaction scores alone do not provide actionable data and are not a reliable indicator of future customer behaviour.

Accordingly, it decided to explore how CE could be incorporated into its questionnaires to provide more effective insight and drive customer experience improvements across each contact channel. The same approach is being taken by Company C3 (in the financial services sector).

All the B2B companies in our survey were focused on improving their customer experience by being easier to do business with. The adoption of CE measurements (typically ‘how easy?’ types of questions) provided real insight into the areas where customers found processes difficult.

At Company B1, the ‘easy to do business’ programme was a company-wide initiative, evangelised for and led by Customer Services. The programme first identified where improvements were needed in each part of the organisation. That started with supply chain processes and then moved on through manufacturing and design so that, eventually, ‘easy’ became a key driver of all its process design decisions. Change is now driven by customer feedback, and it can originate anywhere in the company.

The key to successful change

So it appears that the key to implementing CE improvements is to ensure cross-functional support within the business.

For some organisations, this isn’t a simple matter as they operate in functional ‘silos’, each of which has its own objectives and metrics. CE requires the whole organisation to commit to a customer-centric model for continuous improvement, so that any conflicts between internal processes and CE insights can be resolved.

Henley Business School’s Professor Moira Clark

During the Henley research, Company B1 said, “Ask a simple question like ‘Why wouldn’t you want to get every customer order delivered in full?’. [For us], the initial answer was that it’s an unrealistic target while there are other initiatives around, such as inventory reduction. But now the Supply Chain Director has an ‘on time in full’ [OTIF] target. We have made ‘easy’ a philosophy that has been brought to life with ideas, responses, and cross-silo working.”

A common CE recommendation is to empower staff to resolve customer issues. However, Company B1 found that it’s not as simple as saying ‘You are now empowered’ to its staff. Changing behaviours from following a set of rules to allowing employees to think for themselves is a lengthy process that requires training and support, particularly if such a culture has not existed before within the organisation.

“We want to be effortless and easy to do business with, where every agent is empowered to say ‘yes’,” said Company B1. “We aim to get to the point where saying ‘no’ is almost a failure. I would prefer to forgive people for doing the right thing for the customer rather than not doing something.”

Company B2 didn’t undertake customer research at such a granular level as Company B1, but it too was committed to producing actionable data. Its annual customer satisfaction programme is centrally defined and managed through a third-party agency. It’s sophisticated and provides a lot of data on ‘high risk’ customers – those who are likely to reduce spending or defect to the competition. A CE question has now been introduced to that programme and the company is assessing how CE scores relate to customer loyalty.

However, in many sectors there is another class of customer: those who are ‘trapped’ and don’t have any choice about staying – for example, where a company may be the sole network or service provider in a specific location. In these situations, analysis of customers’ verbatim comments is likely to be the only way to understand their real feelings.

What is the impact of CE on loyalty?

The consensus of the B2C companies we surveyed is that their own data supports the claim made in the 2010 Harvard Business Review article, Stop trying to delight your customers (see Part One of this report), that customer effort scoring is a better indicator of customer loyalty than customer satisfaction or net promoter scores. “It’s definitely not hype,” said BT.

After BT had been asking its ‘Net Easy’ question for some time, it took all the data from the existing brand surveys that drive its net promoter score programmes. When that data was compared and analysed, CE came equal first for positive influence, alongside ‘brand warmth’, and a clear first for ‘negative advocacy’ (where customers had a poor experience – i.e. one requiring a lot of effort).

BT also tested how much better CE is as a predictor of behaviour than customer satisfaction. The results were equally clear, particularly in terms of CE being an indicator of negative behaviours: there was a close correlation between ‘satisfied’ or ‘extremely satisfied’ customers and those stating that it was easy to do business with the company.

Loyalty outcomes were similar. But when comparing ‘dissatisfied’ customers with those who found their encounter with the company ‘difficult’, BT found that the effort rating was a much better predictor of advocacy than the satisfaction rating.

In other words, a customer who assesses a company as ‘difficult’ to do business with is much more likely to defect than a customer who is simply dissatisfied with its services. This makes CE a very powerful indicator of customer behaviour.

Further analysis by BT found that CE was also a key driver of perceived value for money, which in turn is a driver of advocacy. Put simply, if customer service is easy and low effort, it bolsters the customer’s impression of value for money.

BT also discovered that among those customers who had a difficult experience of its services, only five per cent felt that the company offered good value for money.

The implication of all these findings is significant: it is almost impossible to foster positive attitudes about value for money among customers who experience high effort when dealing with a company. Those customers are also more susceptible to churn when a competing supplier comes along. Again, this makes CE a powerful metric.

BT also discovered for itself the impact of CE on loyalty. The company developed a model using six months’ worth of data to compare ‘Net Easy’ scores with the likelihood of someone still being a customer six months later. This was based on actual retention data. “The rate of customer loss for an ‘easy’ score was found to be significantly less than for other customer ratings. And it showed a 40 per cent reduction in their propensity to churn when compared to ‘difficult’ scores,” said BT.
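
As a rough illustration of the arithmetic behind such a model – not BT’s actual methodology or data – the sketch below compares six-month churn rates by effort rating and computes the relative reduction:

```python
# Hypothetical sketch only: the customer records are invented and sized purely to
# reproduce a 40 per cent figure; this is not BT's model or data.
import pandas as pd

customers = pd.DataFrame({
    "effort_rating": ["easy"] * 10 + ["difficult"] * 10,
    "still_customer_after_6m": [1] * 7 + [0] * 3 + [1] * 5 + [0] * 5,
})

# Six-month churn rate per effort rating.
churn = 1 - customers.groupby("effort_rating")["still_customer_after_6m"].mean()
print(churn)

# Relative reduction in propensity to churn for 'easy' versus 'difficult' scores.
reduction = 1 - churn["easy"] / churn["difficult"]
print(f"Churn reduction, 'easy' vs 'difficult': {reduction:.0%}")   # 40%
```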

But what of those customers who feel ‘trapped’ in a supplier relationship because no alternative service or product is available in their area? An example might be broadband or cable TV services in a particular town, street, or building.

The Strategist suggests that companies should beware of discounting customer effort scores among these groups of people. Social media offers all customers a means of sharing their thoughts about how difficult a company is, even if they are unable to switch to an alternative. Don’t assume ‘trapped’ customers are ones you don’t have to worry about.

The problem of delight

Henley Business School research reveals some controversial findings for the many organisations that are striving to delight their customers.

Travel company C2 wanted to understand the value and relevance of CE scoring as part of its pre-implementation study. It analysed historical data from its service quality surveys to identify how the ‘recommend’ score related to those customers’ subsequent re-bookings.

The findings showed that the level of re-bookings flattened out at much lower ‘recommend’ scores than the company had anticipated. For Company C2, this indicated that delivering service levels that make people strongly recommend its services to their friends wasn’t having the expected impact on increased spending – at least not for existing customers.

The research shows that customers want the basics done well and to experience low effort.

This analysis seemed to agree with the conclusion in the Harvard Business Review article (see Part One of this report) that, to win their loyalty, customers mainly want the basics done well, rather than the bells and whistles. This finding was particularly challenging for a company that was striving for ever-higher levels of service excellence.

This is a valuable lesson: there is no point in striving for excellence if some aspects of your customer service are difficult and push customers away. Get the basics right first and reduce the customer’s effort – which, as we explored in Part One, includes their mental and emotional ‘effort’, as well as the time and/or physical energy it takes for them to do something.

Company C2’s research showed that although enhanced service levels may lead to higher recommendation scores, once existing customers are ‘fairly satisfied’ the percentage of re-bookings doesn’t substantially increase.

This analysis has persuaded the company to continue looking to CE for insight on where to focus its service improvements. “It makes more sense to invest in the lower end of recommendation scores, minimising the ‘no’ and ‘unlikely’ responses, rather than moving customers from ‘probably’ to ‘definitely’,” said Company C2.
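
A minimal sketch of this kind of analysis, with invented survey data standing in for Company C2’s, would simply group historical surveys by ‘recommend’ score and look at the re-booking rate for each score:

```python
# Hypothetical sketch only: the survey responses are invented; Company C2's actual
# data and scale are not public.
import pandas as pd

surveys = pd.DataFrame({
    "recommend_score": list(range(3, 11)) * 3,     # sample of scores 3-10 on a 0-10 scale
    "rebooked": [0, 0, 1, 1, 1, 1, 1, 1,
                 0, 0, 0, 1, 1, 1, 1, 1,
                 0, 1, 1, 1, 0, 1, 1, 1],
})

# Re-booking rate by 'recommend' score: if the curve flattens beyond a middling
# score, pushing customers from 'probably' to 'definitely' buys little extra re-booking.
rebooking_rate = surveys.groupby("recommend_score")["rebooked"].mean()
print(rebooking_rate)
```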

Where Harvard Business Review got it wrong

• In hard financial and customer-retention terms, therefore, the conclusion from these findings might be expressed as: ‘Reduce customer effort where it is too high, and reduce service levels and costs where they exceed high expectations.’ This suggests that a better title for the Harvard Business Review article would have been ‘Delight your customers, but only where they value it’ rather than ‘Stop trying to delight your customers’.

The B2B companies surveyed by Henley also recognised that low effort is a driver of higher retention. However, their emphasis has been more on continuous improvement of the customer experience and less on the CE score as an indicator of future behaviour.

Company B2 (in the technology space) has been surveying 100,000 customers globally. The company has used its service quality data to test the relationship between CE and customer loyalty, where loyalty is defined as increased purchases and/or recommendations. It found that customers who rate the company as ‘easy to do business with’ have a much greater intention to spend money with it than customers who don’t share that assessment.

If this lesson applies across the whole sector – and there’s no reason to assume it doesn’t – this means that customers who expect to increase their technology spending are unlikely to spend more with ‘difficult’ companies, and would much rather do business with companies that focus on easy customer service.

Professor Leslie Willcocks of the LSE

Editor’s note: This tallies with the findings of London School of Economics’ Professor Leslie Willcocks that, as technology itself becomes more of a service (through cloud platforms) and less of a licensed, on-premise function, technology vendors need to focus more on keeping their customers happy and less on telling them what to do. Read the Strategist interview with him here.

How does CE fit with ‘right first time’?

In general, there has been a shift in customer service metrics away from quantitative measurements (how fast, how many) and towards qualitative measurements (how well).

Many companies have been running ‘right first time’ (RFT) or continuous improvement initiatives over the last decade, and there is clear evidence (not to mention regulatory mandates) to support investment in this area to ensure that products and services do what they claim to do.

Again, this is about getting the basics right, and if these are not in place then designing low-effort experiences won’t help.

But while improvements driven by these approaches have often led to improved net promoter scores, these have not been as great as expected. “We put a lot of effort and investment into ‘right first time’ and made great strides by eliminating failures and improving the customer experience, but that didn’t have as much impact as we expected on our key measure – customer advocacy [for our services],” said BT.

Another popular measure has been ‘one-contact resolution’, but this and ‘right first time’ are measures of process performance, not customer experience, and so there has always been a disconnect when attempting to use them to improve customer loyalty.

This is where CE comes in. As an alternative to the process-driven navel-gazing of RFT and similar measures, companies are beginning to recognise that CE is a better indicator of advocacy and loyalty. Put another way, they are starting to think more about how their processes affect customers’ lives and feelings, and less about the technology or whether something is responded to quickly or efficiently. The Strategist believes this is a good thing.

The other advantage of CE is its actionability. Companies can use other ‘voice of the customer’ metrics, such as value for money or customer satisfaction, but for customer service organisations none are as actionable as CE. A ‘satisfaction’ measure, for example, may correlate well with key performance indicators, but it does not tell you what to do.

“‘Net Easy’ feels like the best measure as it has a good correlation with advocacy and can be trusted in good times and bad,” said BT.

From a service perspective at Company B1, brand equity has helped its ‘easy to do business with’ programmes to be adopted across the organisation, and the next step is to get to the point where changes are signed off by the ‘easy’ team. These programmes have been linked with a corporate initiative called ‘Getting work out’, based on lean process improvements. The company says that this combination helps it to identify process changes that reduce customer effort.

Indeed, the company went a step further by inviting its customers to workshops and asking them where, and in what ways, the company was not easy to do business with. This provided it with valuable insights that enabled it to prioritise process improvements.

Company B1 said, “One example was credit control: five per cent of orders held for credit checking were delayed by one day, and this was not helping customers. A member of the credit department established a way of protecting the company by credit checking while ensuring that we remained ‘easy’. As a result, the number dropped from five per cent to a tiny figure.

“The key to resolving this was recognising that the credit-hold could jeopardise [the chances of] a consignment making its shipment date. Ensuring that that didn’t happen was achieved by looking across the organisation rather than just within a single department.”

At Company B2, the annual customer satisfaction programme now captures CE scores for the main customer-facing processes and functions. The company now analyses the data for any areas that need improvement. These could be in functions such as technical support, or in processes such as ‘time and effort to order’.

Once poor CE scores have been identified for a specific function or process, an initiative is put in place to identify the cause and implement a solution. As B2 is a global company with numerous global processes, its approach to changing processes to reduce customer effort is global too.

The benefits and the business case

None of the companies interviewed by Henley Business School had prepared a formal business case that included a rigorous investment appraisal. Investment in CE was generally considered to be within the overall strategic objective to grow revenue by retaining existing customers as well as by winning new customers.


All of the companies were already investing time and resources in improving their customer experience scores, so adopting a CE approach was seen as a helpful addition to those programmes. Some business cases were prepared in order to assess how CE could be adopted and what the implications might be, but these were not financially driven.

Company C3 did not prepare a business case as it felt that the justification was already clear, and the implementation was relatively low cost and low risk.

The availability of customer service data at all the B2C companies in our survey has enabled them to analyse customer behaviour in some detail. For example, BT can now calculate a clear ROI from its figures by applying the average lifetime value of customers to the changes in retention gained from CE improvements.
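
BT did not publish those figures, but the back-of-envelope logic it describes – applying an average customer lifetime value to the retention uplift – can be sketched as follows, with every number invented for illustration:

```python
# Hypothetical sketch only: every figure below is invented for illustration;
# BT's actual lifetime values, churn rates and programme costs are not public.

customer_base = 1_000_000        # customers exposed to the improved journeys
baseline_churn = 0.10            # annual churn before CE improvements
improved_churn = 0.08            # annual churn after CE improvements
avg_lifetime_value = 900.0       # average lifetime value per retained customer (GBP)
programme_cost = 5_000_000.0     # cost of the CE programme (GBP)

customers_saved = customer_base * (baseline_churn - improved_churn)
value_retained = customers_saved * avg_lifetime_value
roi = (value_retained - programme_cost) / programme_cost

print(f"Customers retained: {customers_saved:,.0f}")     # 20,000
print(f"Value retained: GBP {value_retained:,.0f}")      # 18,000,000
print(f"ROI on programme cost: {roi:.1f}x")              # 2.6x
```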

At the B2B companies, the CE approach was part of an ongoing corporate initiative to be easy to do business with and so there was no single business case. However, when areas for improvement are found, the remedial actions are identified and approved on a case-by-case basis.

The lessons from companies that are using CE

Each of the companies Henley interviewed was asked to share what they’d learned by implementing customer effort programmes.

For the B2C companies, the key lessons are:
• Analysis of CE metrics can be a great attention-grabber.
• CE programmes can help target practical improvements.
• Be prepared to evolve how you measure CE/‘easy’ scores and how you make your surveys more effective – even simple things like whether a three-, five- or seven-point scale works best.
• If CE is used as a measure to target individual performance, or to measure performance between teams, then it has to be seen to be fair and entirely within their control.
• In particular, there needs to be a clear distinction between how easy it is to interact with the contact centre adviser versus how easy it is to navigate through the processes to get to that point.
• Effort programmes need to be effectively communicated internally in order for everyone to understand their potential impact on the business.
• Suggesting that people should stop trying to delight their customers may be counter-productive, especially since many companies have emphasised the importance of customer service excellence over the past few years.
• The real challenge now is to identify where customers expect low- or high-effort experiences and then deliver against those expectations.

“It’s about being able to identify where to put in effort that the customer will appreciate and where it makes no difference. An example of this is an IVR experience survey that includes rating the music played while on hold. That’s the wrong question and is always scored low whatever the music!” said Company C2.

For the B2B companies, the key lessons are:
• In the B2B world, clients can interact with many different parts of a supplier. This means that functions like Accounts and Logistics need to be as easy to interact with as the account management teams.
• Effectively, the CE approach requires B2B companies to design their processes around their customers’ needs, rather than for their own internal functional needs. This is often referred to as being a ‘customer-centric’ company, although a better phrase for driving process improvements is simply ‘being easy to do business with’.
• Since ‘easy’ needs to be a cross-departmental initiative, the way that it is branded and communicated internally is really important in order to get buy-in and support. It needs to start as a change programme, but ultimately become the way the company does business.
• Reinforce ‘easy’ philosophies by marketing successes both internally and externally.
• Ensure that there is an ‘easy’ champion to provide vision and direction for the programme so that people see the value in it.
• Top-level support and buy-in is essential.
• Decision-making requires stakeholders with the authority to say ‘yes’ or ‘no’ to implementing solutions.
• Challenge any clashes that occur with other initiatives, especially where they risk running in opposite directions, such as cost-cutting. Try to ensure that these initiatives do not erode each other.
• Look at how to incorporate insight on effort into customer surveys with a particular emphasis on identifying the process changes that can reduce effort.
• Involve customers in the process.
• Use customer feedback to demonstrate the impact of the changes.
• Looking at customer comments on a global level requires analytics in order to find comments on specific topics and ensure that they are actionable.

Conclusions

The findings of the Henley Business School research point strongly towards low effort being a good indicator of customer loyalty. The companies that have applied CE techniques are finding that it supplies them with loyalty data that goes beyond customer intentions (which is what net promoter scoring measures) and into actual customer behaviour.

The data also show that the negative consequences of high-effort experiences are greater than the positive impact of low-effort ones. A customer who assesses a company as being ‘difficult’ is much more likely to defect than a customer who is simply ‘dissatisfied’ or underwhelmed by its services.

However, the positive impact of ‘low-effort’ experiences has similar prediction accuracy to customer satisfaction and net promoter score programmes. This is possibly due to the fact that customers reasonably expect ‘easy’ experiences, especially when dealing with commodity products or everyday processes.

It also suggests that ‘customer delight’ may not add a huge amount to customer loyalty. That said, telling companies to stop trying to delight their customers altogether would be counter-productive. A better strategy would be to delight your customers, but only where they value it.

The challenge, therefore, is to identify where customers expect low- or high-effort experiences and deliver against those expectations.

In general terms – with some exceptions – effort also links into perceptions of value for money. Customers are unlikely to spend more with difficult companies, and will opt to increase their spending with competitors that seem easier to do business with.

That said, ‘customer effort’ is, itself, not the easiest way of expressing this idea. The question ‘How easy is it…?’ is much more effective (and easier to answer) than ‘How much effort…?’.

Overall, the real advantage of the CE approach is that it is not prescriptive: it allows companies to identify and correct only those issues that apply to them. One size does not fit all when implementing a CE programme, which can be used to identify changes on individual channels – such as the contact centre, the website or the IVR – or as a company-wide continuous improvement programme.

The conclusion of all the companies interviewed was that customer effort programmes are themselves worth the effort and produce tangible benefits. These benefits can be seen in direct measures, such as changes in customer retention figures, and in indirect measures, such as a reduction in complaints or an increase in positive word of mouth. tS

Professor Moira Clark is Head of Marketing and Reputation, Professor of Strategic Marketing, and Director of The Henley Centre for Customer Management, at Henley Business School. Additional reporting: Chris Middleton.
Henley Business School would like to thank BT and the four other companies involved with the original research, all of which asked to remain anonymous.

 

