
Customer Service: How to Balance AI and Automation with Human Touch

If automation is operating as it should, people should not need (or want) to talk to a customer service representative. But what if something goes wrong, or the customer has a complex query?

Take the mishandling of A-level results in the UK as an example. In theory, the grading algorithm should have worked, but it didn't, and hundreds of students had to get in touch with universities directly to resolve the problem. In a banking context, a similar issue could arise if a group of customers had loan applications declined without knowing why, sparking a sudden influx of urgent customer enquiries.

As banks digitally transform, their use of AI and automation is growing rapidly. This makes AI governance all the more important, because a single mistake can now have ramifications on a huge scale – it is no longer one person making an error in one branch.

How can banks minimise the risk of algorithms going wrong?

Banks need to educate both customers and staff to scrutinise the AI more closely, and empower them to weigh in on processes in the continual pursuit of outstanding customer service. To do that, IT and business leaders need a system that is easy to adapt and transparent by design – not one where it is impossible to see what is actually going on or to understand the code.

Low-code/no-code platforms are a good way to design and implement processes, and to review and correct them easily if need be. For example, if it is clear that the AI automating a decision in a mortgage application process is biased towards one demographic group over another, employees can address that to ensure customers are treated fairly (a simple version of such a check is sketched below). For this to work, rules and settings need to be clear and transparent.
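To make that concrete, here is a minimal sketch of the kind of fairness check a review process might run over automated decisions. The column names, the 20% disparity threshold and the review-routing logic are illustrative assumptions, not features of any particular low-code platform.

```python
# Minimal sketch of a demographic-parity check on automated mortgage
# decisions. Column names and the 20% threshold are assumptions for
# illustration, not any real bank's schema or policy.
import pandas as pd

def approval_rates(decisions: pd.DataFrame) -> pd.Series:
    """Approval rate for each demographic group."""
    return decisions.groupby("group")["approved"].mean()

def needs_review(decisions: pd.DataFrame, max_gap: float = 0.20) -> bool:
    """Flag for human review if the worst-treated group's approval rate
    falls more than max_gap below the best-treated group's."""
    rates = approval_rates(decisions)
    return rates.min() < rates.max() * (1 - max_gap)

# Toy data: group B is approved far less often than group A.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

if needs_review(decisions):
    print("Approval rates diverge across groups - route to a human reviewer")
```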

Low-code software gives IT teams and business leaders a clearer understanding of each step, making it easier to collaborate on eliminating unintended bias and improving AI models. With no coding required, development times are slashed too, meaning problems can be solved and goals realised faster.

Tools that increase the transparency of any AI model used to serve customers are also helpful in spotting algorithms that treat people unfairly. Some allow organisations to predefine the acceptable transparency level on a sliding scale from one (highly opaque models whose logic cannot be fully explained) to five (highly transparent models that allow humans to fully understand the decisioning and resulting actions), as in the sketch that follows.
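As a rough illustration of how such a predefined scale could be enforced, the sketch below gates each use case on a minimum transparency level. The level names, use cases and required floors are hypothetical, not taken from any specific product.

```python
# Hypothetical transparency policy gate. The five-point scale mirrors
# the one described above; use cases and floors are invented examples.
from enum import IntEnum

class Transparency(IntEnum):
    OPAQUE = 1             # logic cannot be fully explained
    MOSTLY_OPAQUE = 2
    PARTIAL = 3
    MOSTLY_CLEAR = 4
    FULLY_EXPLAINABLE = 5  # humans can follow every decision

# Minimum transparency each use case demands (assumed policy values).
REQUIRED = {
    "mortgage_decisioning": Transparency.FULLY_EXPLAINABLE,
    "marketing_next_best_action": Transparency.PARTIAL,
}

def allowed(use_case: str, model_level: Transparency) -> bool:
    """Permit a model for a use case only if it meets that use case's floor."""
    return model_level >= REQUIRED[use_case]

print(allowed("mortgage_decisioning", Transparency.MOSTLY_CLEAR))        # False
print(allowed("marketing_next_best_action", Transparency.MOSTLY_CLEAR))  # True
```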

One example of an organisation that has got it right is the Commonwealth Bank of Australia. It deployed AI and automation successfully as part of its customer engagement engine, building over 200 machine learning models using 157 billion data points. These help the bank anticipate customer needs and decide the 'next best conversation' customer service personnel should have with each individual customer in any interaction. Because the customer engagement engine was built in a low-code, model-based design environment, the bank can easily adapt and fine-tune processes over time to improve customer service with every interaction. During the initial lockdown period, the bank saw a 500% spike in usage of its 'benefits finder', which helps customers understand what they are entitled to, and it was able to quickly adjust its communications strategies, both outbound and inbound.

In summary, just as with other parts of a bank such as loan decisioning or transaction monitoring, banks must monitor their AI to ensure mistakes don't happen, whether through unintended bias or algorithmic inconsistency. With a no-code system that uses transparent AI, and proper training of staff, they can strike the right balance between leveraging technology and retaining a human touch. If they are not proactive in reviewing their use of AI and training their staff, they could be inviting unnecessary risk, reputational damage and the loss of customers. To get the balance right, there should be checks and feedback loops involving both employees and customers. After all, there's enough evidence that machines make mistakes too…
