While the sophistication of chatbot technology has steadily increased over the past decade, the release of the Large Language Model (LLM)-based ChatGPT is set to herald the next wave of powerful, hyper-personalized and highly adaptable chatbots.
Released last November, ChatGPT – short for Chat Generative Pre-Trained Transformer – was developed by the AI research and deployment company OpenAI. Built on OpenAI's GPT-3.5 family of LLMs, ChatGPT represents a significant advance in natural language processing (NLP) due to its ability to understand complex natural language enquiries and supply highly coherent and often accurate responses.
While this newest LLM-based model is not yet available for commercial use, OpenAI President and Co-founder Greg Brockman has indicated that a professional version – and monetization – is not far off, likely through a ‘chatbot-as-a-service’ model. That trajectory is already visible: ChatGPT technology has been integrated into Microsoft services through the Bing search engine.
While much development and integration work will be needed, there is no doubt that industries with significant existing investments in contact centre technology are monitoring developments closely for opportunities.
Key use cases
With that in mind, what are some of the top use cases where contact centres can leverage such advanced language models, both in the near term and further out?
Risks and Considerations
While ChatGPT clearly advances beyond current chatbots and virtual assistants, several risk factors must be addressed. ChatGPT’s LLM-based word-prediction approach is fundamentally different from current technologies, and this can lead to the following challenges.
Misinformation – While ChatGPT sounds extremely confident in its responses, there are instances where it has supplied inaccurate information, due to the nature of its algorithms and training data. Essentially, ChatGPT does not draw from a defined knowledge base of vetted facts – instead it predicts what it should say based on its training set, and can fabricate details seemingly at random. Organizations looking to use ChatGPT to interact directly with customers must be 100% certain that any information being provided is accurate, and that sufficient content guardrails are in place.
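One minimal form such a guardrail could take is to release a model-generated answer only when it matches a vetted knowledge base, and fall back to a human hand-off otherwise. The sketch below is purely illustrative – the topics, facts, and function names are hypothetical, not part of any ChatGPT API:

```python
# Hypothetical content-guardrail sketch: release a model-generated answer
# only if it matches the organization's vetted fact for that topic;
# otherwise fall back to a safe hand-off response.

VETTED_FACTS = {
    "overdraft fee": "Our overdraft fee is $15 per occurrence.",
    "wire cutoff": "Domestic wires must be submitted by 5 p.m. ET.",
}

FALLBACK = "Let me connect you with an agent who can confirm that."

def guarded_reply(topic: str, model_answer: str) -> str:
    """Return the model's answer only if it matches the vetted fact."""
    vetted = VETTED_FACTS.get(topic)
    if vetted is not None and model_answer.strip() == vetted:
        return model_answer
    return FALLBACK
```

A production system would use semantic matching rather than exact string comparison, but the principle is the same: the model proposes, the knowledge base disposes.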
Bias and discriminatory behavior – LLMs can be used to hyper-personalize customer interactions through adjustments to tone, language, and content. However, the quality and relevance of the training data will shape responses, and there is accordingly a danger that bias will be introduced. Bias is always a risk in these types of models, but the sheer vastness of the input data used for ChatGPT poses an even greater risk.
Relevance and staying on topic – A risk inherent in the openness of the ChatGPT training set and framework is that users can ask it questions that fall far outside its intended use. Sufficient controls must be put in place to ensure that responses stay on topic and do not provide advice that could open an organization to legal liability. Users should also not be able to engage the bot on third-rail issues, e.g. around politics or society.
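Such a control could be as simple as a gate that screens queries before they ever reach the language model. The keyword lists and scope below are invented for illustration; a real deployment would use an intent classifier rather than substring matching:

```python
# Hypothetical topic-relevance gate: block queries that touch "third rail"
# subjects or fall outside the bot's intended scope before they reach the
# language model. Both lists are illustrative only.

BLOCKED_TOPICS = {"politics", "election", "religion", "investment advice"}
ALLOWED_INTENTS = {"balance", "card", "transfer", "statement", "fees"}

def should_answer(user_query: str) -> bool:
    """Return True only for in-scope queries with no blocked topics."""
    words = user_query.lower()
    if any(topic in words for topic in BLOCKED_TOPICS):
        return False
    return any(intent in words for intent in ALLOWED_INTENTS)
```

Deny-listing alone is brittle; pairing it with an allow-list of supported intents, as above, defaults the bot to silence rather than improvisation.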
Relevance of training data – Regardless of the sophistication of the underlying technology, the old saying ‘garbage in, garbage out’ still applies. Large Language Models like ChatGPT will always be trained on extremely large datasets, meaning that not all inputs can be controlled. While fine-tuning is available, it remains infeasible to restrict the inputs sufficiently to control the output.
Security – Organizations within highly regulated industries such as banking and insurance must consider how customer data is being ingested when using any third-party technology. Once access is granted to customer data – notably account balances and transaction history – any number of security issues can arise. These include the overspill of customer data to unauthenticated or unauthorized users, or the exposure of payment card data or other personally identifiable information (PII).
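One common mitigation is to redact sensitive fields before any transcript leaves the organization's boundary. The patterns below are deliberately simplified sketches – production systems would rely on dedicated data-loss-prevention tooling rather than two regular expressions:

```python
import re

# Hypothetical PII-redaction pass: mask card numbers and email addresses
# before a conversation transcript is sent to any third-party model.
# Patterns are simplified for illustration only.

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text: str) -> str:
    """Replace card numbers and email addresses with placeholder tokens."""
    text = CARD_RE.sub("[CARD REDACTED]", text)
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    return text
```

Redaction at the boundary limits what a third-party provider can ever see, which is often the only control an organization fully owns.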
Integration – The current paradigm for LLMs is to train them on vast historical datasets (not least ‘the internet’ itself), but narrower use cases will require the integration of proprietary data sources and the ability to pull real-time data in response to user queries. While technology will no doubt in time address this need, current iterations have not been built with this framework in mind.
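In practice this usually means looking up live data first and injecting it into the prompt, so the model answers from current facts rather than its static training set. The function names and account data below are invented placeholders standing in for a real core-banking API:

```python
# Hypothetical retrieval-style integration sketch: fetch live proprietary
# data and inject it into the prompt, rather than relying on the model's
# static training data. `fetch_balance` stands in for a real API call.

def fetch_balance(account_id: str) -> str:
    # Placeholder for a real-time core-banking lookup.
    return {"acct-001": "$1,250.00"}.get(account_id, "unknown")

def build_prompt(account_id: str, question: str) -> str:
    """Assemble a prompt that grounds the model in live account data."""
    context = f"Current balance for {account_id}: {fetch_balance(account_id)}"
    return (
        "Answer using ONLY the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )
```

Constraining the model to supplied context is also one of the more effective ways to reduce the misinformation risk described above.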
What’s next?
While the arrival of ChatGPT is certainly an exciting step forward in the ongoing automation of digital customer service, the technology is still far from ready for direct customer interactions without the addition of thorough checks and balances.
In the near term, we expect aspects of LLMs to enable and enhance certain functions within contact centre and wider digital interactions, providing for the first time truly conversational experiences coupled with an ability to handle ambiguous and highly varied inputs. However, the current technology is not yet fit for purpose on its own, and hybrid approaches will be needed. At this juncture, an LLM alone does not provide an immediate substitute for chatbots, interactive voice response, and other existing technologies.
However, ChatGPT has shown us that great leaps forward in the sophistication and potential application of NLP technologies are no longer just a pipe dream. We are excited to see how organizations will incorporate the technology in a fruitful yet responsible manner.
This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.