
Mystery Shopping the Virtual Assistant

Virtual assistants might be transforming how banks deal with customer queries, but verifying that the best advice and the right answer are given every time isn’t always as simple as it seems.

With most virtual assistants, an organisation can easily see when a problem has occurred just by looking at the unresolved queries – the ones the assistant couldn’t understand and therefore couldn’t answer. These can usually be fixed by examining the chat log files and adjusting the assistant’s parameters or updating its knowledge.

The organisation knows about these errors because the assistant wasn’t able to answer them. But what happens to the queries it does answer, only incorrectly? Detecting a wrong answer is more of a challenge.
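The easier half of the problem – spotting the queries the assistant couldn’t answer at all – is straightforward to automate. Below is a minimal sketch, assuming a hypothetical JSON-lines chat log in which each record carries a ‘query’ string and a ‘resolved’ flag (the format, field names, and file name are illustrative assumptions, not the bank’s actual system):

```python
import json
from collections import Counter

def unresolved_queries(log_path):
    """Yield the queries the assistant failed to understand and answer."""
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            record = json.loads(line)
            if not record["resolved"]:
                yield record["query"]

# Tally the most frequent failures so editors know which
# knowledge gaps to fill first.
failures = Counter(unresolved_queries("chat_log.jsonl"))
for query, count in failures.most_common(10):
    print(f"{count:>5}  {query}")
```

A ranked list like this makes the routine fixes quick; the harder case is the one the log file never flags.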

A virtual assistant typically answers thousands of queries each day. Combine this with the range of topics it covers, from current accounts to insurance cover, and it’s easy to see why Mystery Shopping your virtual assistant is no simple affair if you wish to test its entire knowledge.

Mia is The Co-operative Bank’s virtual assistant, used internally by call-centre staff. Launched a couple of years ago, she generally achieves a very high rate of answering queries correctly first time; in February, for instance, she achieved a 98.64% efficiency rating. Thomas Bacon, Virtual Adviser General Editor at The Co-operative Bank, explains how they ensure that Mia delivers the right answer straight away – most of the time.

“All of Mia’s information is sorted into categories, for example, all ISA-related content is grouped together. Any one category might have been accessed thousands of times during the course of a month, but when you look into the finer details it’s possible to see that maybe it was just two or three hundred different terms that users inputted that took them to a specific category,” says Thomas.

By comparing the triggers against the categories, Thomas and his team can double-check that users were given answers from the right category. This highlights anything that is out of place and allows it to be corrected. The technique, known as Trigger Analysis, can then be used to measure Mia’s efficiency.
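In practice, Trigger Analysis boils down to comparing the category each trigger actually routed to against the category the editors expected it to reach. The sketch below is purely illustrative – the trigger terms, category names, and data shapes are invented for the example, not drawn from Mia:

```python
from collections import defaultdict

# Hypothetical trigger log: (user input term, category the assistant served).
served = [
    ("isa transfer deadline", "ISAs"),
    ("mobile app login", "Mobile Banking"),
    ("pay in cheque by phone", "Current Accounts"),  # misrouted
]

# The category each trigger *should* reach, as judged by the editorial team.
expected = {
    "isa transfer deadline": "ISAs",
    "mobile app login": "Mobile Banking",
    "pay in cheque by phone": "Mobile Banking",
}

totals = defaultdict(int)
correct = defaultdict(int)
for term, category in served:
    target = expected[term]
    totals[target] += 1
    correct[target] += category == target  # True counts as 1

for category, n in totals.items():
    rate = 100 * correct[category] / n
    print(f"{category}: {rate:.1f}% of triggers routed correctly")
```

Note that the results are grouped by the category a trigger should have reached, so a misrouted trigger counts against the topic the user was actually asking about.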

“If we look at February’s data analysis, for example,” continues Thomas, “you can see straight away that the results in Mobile Banking were lower than the others. In this case, just one of Mia’s responses was incorrect in terms of the natural language interaction, which is demonstrated by 9% of triggers going to the wrong place. It still wasn’t too bad – the adviser was only ever one mouse-click away from the information they needed to see – but it is something that the editorial team can correct easily, continuing to improve Mia.”
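Rolling those per-category results up into a single monthly rating, and flagging any category that slips, is one more small step. The tallies below are invented for illustration – they do not reproduce Mia’s February data – though Mobile Banking is set at roughly 9% misrouted to echo the example above:

```python
# Hypothetical per-category tallies: (total triggers, triggers misrouted).
triggers = {
    "ISAs": (4200, 12),
    "Current Accounts": (6100, 31),
    "Mobile Banking": (900, 81),  # ~9% misrouted
}

total = sum(n for n, _ in triggers.values())
wrong = sum(w for _, w in triggers.values())
print(f"Overall efficiency: {100 * (total - wrong) / total:.2f}%")

# Flag any category whose routing accuracy falls below a chosen threshold.
THRESHOLD = 95.0
for category, (n, w) in triggers.items():
    rate = 100 * (n - w) / n
    if rate < THRESHOLD:
        print(f"Review {category}: {rate:.1f}% of triggers routed correctly")
```

Because the rating is volume-weighted, a small, badly-performing category barely dents the headline figure – which is exactly why the per-category breakdown is the view the editorial team acts on.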
