Mitigating the risks of GPTs in wealth management

Since ChatGPT dethroned TikTok as the fastest consumer app to amass 100 million users, generative pre-trained transformer (GPT) technology has been hailed as the hands-free solution to every writing need. From composing business emails to writing content articles, the common perception is that GPT technology is here to replace humans. Yet behind the public hyperbole, there are myriad serious use cases for GPT technology in financial services, including wealth management. Many financial services firms are intrigued by the productivity gains of GPTs, but are equally concerned about the risks they may pose from a legal and compliance perspective. Here we explore GPT technology in more detail, outline its limitations and risks, and explain how to mitigate them.

GPT technology in context  

GPTs are artificial intelligence (AI) models that generate natural language text through a conversational interface. By pre-training on large volumes of real-world text, they can understand natural language, recognise words and grammar, infer intent, and demonstrate strong reasoning capabilities.
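
To make the conversational interface concrete, the sketch below shows a minimal chat-style call, assuming the OpenAI Python client; the model name and the wealth-related prompt are illustrative placeholders, not a recommended configuration.

```python
# Minimal sketch of a conversational GPT call (assumes the OpenAI
# Python client and an OPENAI_API_KEY set in the environment).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a helpful financial education assistant."},
        {"role": "user",
         "content": "Explain dollar-cost averaging in plain terms."},
    ],
)
print(response.choices[0].message.content)
```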

OpenAI has led the charge with the release of ChatGPT, powered by the GPT-3.5 model. Such is the rapid pace of development that GPT-4 was released just four months later. GPT-4 is substantially more capable than its predecessor, passing the Uniform Bar Exam around the 90th percentile versus the bottom 10th for GPT-3.5. Other providers, such as Google (with Bard and DeepMind), Meta, Amazon, and Baidu, along with well-funded start-ups such as Anthropic and Inflection AI (maker of Pi), have also entered the market and are creating offerings.

Using GPTs in wealth management 

In the wealth space, the benefits of GPTs for both advisors and clients are substantial. Client engagement plans can be created with detailed prompting, while financial education and answers on a wide range of topics can be provided to end investors around the clock. Other interesting use cases include identifying a client's needs and generating the questions an advisor should ask to construct a financial plan. In using this technology, the advisor dramatically cuts down on manual work and can devote more time to engaging clients, thereby boosting productivity.
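
As a hedged illustration of the discovery-question use case, the sketch below feeds a hypothetical client profile into a detailed prompt; the profile fields and wording are invented for this example.

```python
# Hypothetical sketch: detailed prompting to draft discovery questions
# for a financial plan. The client profile below is invented.
from openai import OpenAI

client = OpenAI()

client_profile = (
    "Age 45; goals: retire at 60 and fund two college educations; "
    "risk tolerance: moderate."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You assist a wealth advisor with client discovery."},
        {"role": "user",
         "content": f"Given this client profile: {client_profile} "
                    "List five questions the advisor should ask before "
                    "constructing a financial plan."},
    ],
)
print(response.choices[0].message.content)
```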

The limitations and risks of GPTs 

While GPTs have many benefits, they also have their limitations. It is essential to recognise and understand these so that they can be factored into any effective deployment. One of the major risks is inherent bias arising from the wide range of data that GPTs are trained on. This ranges from reference content, such as Wikipedia and message boards, to news sources such as The New York Times and The Guardian, and forums such as Reddit, all of which, according to research by The Washington Post, can introduce bias. GPTs also have a limited understanding of financial terminology, which means they may not produce context-relevant language. And lastly, they do not have a human's complex decision-making capabilities or intuition. Combined, these limitations raise legitimate concerns that GPTs could create incorrect outputs, leading to poor outcomes for customers and causing legal and compliance headaches.

The competent knowledge worker principle  

Given the potential risks, it is easy to dismiss GPT technology, yet its undoubted language skills mean that it has a valuable role to play. To use GPTs safely, the overriding principle is that they should be viewed as competent knowledge workers that assist humans in their work, not as experts. Once this principle is established, there are a number of key steps that wealth managers can take to ensure their effective and safe use.

Train, test, and check 

Firstly, GPTs should be trained specifically for the institution. If they are trained solely on Internet data, the output will carry inherent biases. GPTs trained on clean, curated inputs, however, are far less likely to generate false outputs. GPTs should therefore be customised to the institution by training them on enterprise data sets, not just external ones. It is also good practice to get inputs from internal experts, such as research teams, for the GPT to draw on, and to train GPTs on financial market data. By undertaking these steps, GPTs draw on verifiable facts and use the right language context.
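
One common way to realise this in practice is retrieval-style grounding, where the model answers only from approved internal content. The sketch below is a simplified illustration with an in-memory store of hypothetical documents; production systems would typically retrieve from vetted research and market-data sources instead.

```python
# Simplified sketch of grounding answers in approved enterprise content.
# The document store and its contents are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

enterprise_docs = {
    "bond_ladders": "House view: bond ladders suit clients seeking "
                    "predictable income with laddered maturities...",
    "esg_screening": "Firm policy: ESG screens exclude issuers that...",
}

def answer_from_approved_content(question: str, doc_key: str) -> str:
    """Answer using only vetted internal content, so outputs draw on
    verifiable facts and the right language context."""
    context = enterprise_docs[doc_key]
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer strictly from the approved content "
                        f"below.\n\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```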

Secondly, GPT outputs should be tested to establish trust in the technology's capabilities. Just as any new service or tool would undergo rigorous testing, so too should any service using GPT technology. To start, wealth managers need to identify an application of the technology, for example, client support automation. They then need to select appropriate key performance indicators (KPIs), such as an increase in the resolution rate of customer queries. Finally, they need to assess the output and refine and adjust how the technology is used.
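
A minimal sketch of this test-and-refine loop is shown below: it scores a batch of answers against a resolution-rate KPI before wider rollout. The test cases, the resolved() check, and the 90% threshold are all illustrative assumptions.

```python
# Illustrative evaluation harness for a GPT-backed support service.
# Test cases, the resolved() check, and the KPI threshold are invented.
TEST_CASES = [
    ("How do I update my beneficiary?", "beneficiary"),
    ("What was my portfolio's return this year?", "return"),
]

def resolved(answer: str, expected_topic: str) -> bool:
    # Placeholder check; real evaluations would use human grading
    # or task-specific rubrics rather than keyword matching.
    return expected_topic in answer.lower()

def resolution_rate(answers: list[str]) -> float:
    hits = sum(
        resolved(answer, topic)
        for answer, (_, topic) in zip(answers, TEST_CASES)
    )
    return hits / len(TEST_CASES)

# Example gate before wider rollout:
# assert resolution_rate(answers) >= 0.90
```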

Lastly, human checks need to be built into any workflows where GPT technology is used. In the case of customer support, responses need to be reviewed, and GPTs need to hand off to humans for more complex queries. This helps to ensure that the service meets all regulatory standards and that compliance is not circumvented.
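
The sketch below illustrates one possible escalation rule: compliance-sensitive topics bypass the model and revert to a human advisor. The topic list and the escalate_to_human() handoff are hypothetical.

```python
# Sketch of a human-in-the-loop check for customer support workflows.
# COMPLEX_TOPICS and escalate_to_human() are hypothetical placeholders.
COMPLEX_TOPICS = {"tax advice", "estate planning", "complaint"}

def escalate_to_human(question: str) -> str:
    # Placeholder: queue the query for an advisor to review.
    return "An advisor will follow up on your question shortly."

def route_response(question: str, topic: str, gpt_draft: str) -> str:
    """Only release a GPT draft for straightforward topics; revert
    complex or compliance-sensitive queries to a human."""
    if topic in COMPLEX_TOPICS:
        return escalate_to_human(question)
    return gpt_draft
```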

The real risk of GPTs? Not using them! 

To conclude, the highly regulated nature of wealth management means there are natural concerns about the use of GPT technology. However, the proven language capabilities of GPTs mean that wealth management firms cannot afford to ignore them. Instead, by adopting the principle of GPTs as competent knowledge workers, and by putting in place a framework to train models effectively, test the applications, and incorporate human oversight, wealth managers can realise substantial benefits in improving productivity and providing a better customer experience. The ultimate risk in the whole GPT debate is whether wealth managers can afford not to use GPT technology if they hope to remain competitive.
