
3 Steps For Successful Transformative Financial Services GenAI Use Cases

A certain research agency, one that runs big conferences and captures a ton of CIO mindshare, recently offered guidance on (analyst jargon ahead!) a decision framework for assessing and realizing value from enterprise generative AI initiatives.

Business culture will adapt over time, of course, they state. This is not the place to reflect on hypothesized or real job losses and Luddite populism in conjunction with that adaptation. The report also highlighted that those building GenAI products and business models successfully, i.e., delivering transformative use cases well, would gain competitive advantage. On the flip side, those on the wrong side of industry disruption face prospectively higher costs, complexity and risks and, for those of us in tech, technical debt as new techniques emerge. A corporate landscape of winners and losers will ensue.

With this in mind, I’ll give my perspective on the techniques and technologies that can help you reach the right use cases successfully, and sooner, and highlight the role of the vector database in that process. I've discussed the notion of the vector database in two prior Finextra blogs, Similarity Searches: the Neurons of the Vector Database, and 3 GenAI Use Cases for Capital Markets.

Leveling the Playing Field Through Generative AI 

First, some context on the opportunity open to all. Generative AI is a great disruptive leveler. All firms and sectors can stand on the shoulders of giants to be successful. Past leaders in AI exploitation may not be the future leaders. They who dare, win.

From support and marketing chatbots to automated customer emails, most sizeable and reputable financial organizations have AI projects under way. Industry leaders in capital markets in particular have long used structured data and “discriminative AI” to observe, predict, and find anomalies. They have serviced use cases such as detecting fraudulent transactions or trades, personalizing financial product marketing recommendations, and dispensing robo-advice to investment managers and retail brokers. Natural language processing (NLP) has already transformed some back office functions significantly, as well as supporting sentiment-based trading strategy evaluations, and a few subsequent implementations, in the front office.

Insurance too. Many actuaries claim they were the original data scientists and predictive modelers, and they have a point. Reinsurance stacks are every bit as sophisticated as the smartest hedge funds, servicing more challenging pricing and risk regimes. I know, I’ve worked with both. Medical devices and automotive telematics have long been a focus of AI, and are increasingly integrated into insurance products. Social media scraping, alongside image analysis from satellites and drones, can flag loss events early, allowing remediation to begin, for example detecting and responding to flood risk events.

Whatever your starting point, I offer three tips for implementing differentiating and transformative use cases successfully, quickly, and with the least technical debt:

1. Prepare data well – It’s not enough to have data; it needs to be in the right format, organized, and engineered quickly. Make sure your tooling can expedite essential data engineering operations, for example, joining disparate datasets and running efficient pre-filtering. A minimal sketch of both operations follows.
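As an illustrative sketch only (the file paths and column names below are hypothetical), pre-filtering before a join, then joining disparate datasets, might look like this in pandas:

```python
import pandas as pd

# Hypothetical trade and counterparty extracts; column names are illustrative.
trades = pd.read_parquet("trades.parquet")            # trade_id, cpty_id, notional, ts
counterparties = pd.read_parquet("cpty.parquet")      # cpty_id, sector, region

# Pre-filter early: shrink the data before the more expensive join
# (works if "ts" is a datetime column; pandas compares it against the ISO string).
recent = trades[trades["ts"] >= "2024-01-01"]

# Join the disparate datasets on a shared key so downstream embedding and
# feature pipelines see one coherent, well-typed table.
enriched = recent.merge(counterparties, on="cpty_id", how="left")

# Basic hygiene before anything touches a model: de-duplicate, handle nulls.
enriched = enriched.drop_duplicates(subset="trade_id").dropna(subset=["sector"])
```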

2. Select the Large Language Model (LLM) that’s right for you. There’s no single right answer. “Open source” (e.g. LLaMA, Falcon) versus proprietary (e.g. GPT, Bard) gets debated online. I find the option of working with different LLMs compelling, and I personally have a sneaking regard for Amazon Bedrock on that front. My colleagues rave about other LLM-neutral developer tooling, such as LangChain, which provides environments for building LLM-centered apps; a sketch of that LLM-neutral pattern follows this list. AWS’s Eduardo Ordax’s guidance on LLMOps and what he calls FMOps (Foundational Model Ops) is also helpful:

  1. Resource appropriately for providers, fine-tuners, and consumers. Lifecycle demands differ. 
  2. Adapt a foundational model to a specific context. Consider different aspects of prompting, open source vs. proprietary models, and latency, cost, and precision.  
  3. Evaluate and monitor fine-tuned models differently. With LLMs, consider different fine-tuning techniques, RLHF [reinforcement learning from human feedback], and cover all aspects of bias, toxicity, IP, and privacy. 
  4. Know the computational requirements of your models. ChatGPT is estimated to use 500ml of water for every five to 50 prompts, and Falcon 180B was trained on a staggering 4,096 GPUs for around 7M GPU hours. You’re unlikely to train Falcon yourself, but if you use it, or anything else, know what you’re consuming.
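To make the LLM-neutral point concrete, here is a minimal sketch of the abstraction pattern such tooling provides; the class and function names are hypothetical, not any particular library’s API:

```python
from abc import ABC, abstractmethod

class LLMClient(ABC):
    """Provider-neutral interface: swap models without rewriting the app."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ProprietaryAPIClient(LLMClient):
    def complete(self, prompt: str) -> str:
        # Call a hosted proprietary model here (client code omitted).
        raise NotImplementedError

class OpenSourceLocalClient(LLMClient):
    def complete(self, prompt: str) -> str:
        # Call a self-hosted open-source model here (client code omitted).
        raise NotImplementedError

def summarize_filing(llm: LLMClient, filing_text: str) -> str:
    # Application logic depends only on the interface, so the decision on
    # which LLM to use stays reversible as models and costs evolve.
    return llm.complete(f"Summarize the key risks in:\n{filing_text}")
```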

3. Determine your optimal “taker, shaper, or maker” profile. Here's the bit I really want to focus on:

  • #taker - uses publicly available models, e.g. a general-purpose customer service chatbot with prompt engineering and text chat only, via interface or API, with little or no customization.
  • #shaper - integrates internal data and systems for customization, e.g. bringing data from HCM, CRM or ERP systems (among others) into LLM workflows, allowing fine-tuning on company data.
  • #maker - builds large proprietary models. To my mind, only specialists will adopt this trajectory.


The Role of Vector Databases in Shaper Environments

A vector database is a shaper technology that I believe offers the greatest opportunity to implement your golden transformative use cases. Think of a vector database as an auxiliary to an LLM. A vector database helps you quickly find the embeddings most similar to a given embedding: for example, if you’ve embedded the text of a search query, it can return the 10 documents most similar to that query. It manages embeddings of your own secured data while partnering with your preferred LLM, helping you manage guardrails against so-called "hallucinations". That includes storing and engaging with unstructured data, such as documents and images.
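Conceptually, that similarity search is the operation sketched below. This brute-force version assumes pre-computed embeddings; a production vector database does the same job at scale with approximate nearest-neighbor indexes (e.g. HNSW) rather than exhaustive comparison:

```python
import numpy as np

def top_k_similar(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 10):
    """Return indices of the k document embeddings closest to the query
    by cosine similarity -- the core operation a vector database serves."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    return np.argsort(-scores)[:k]      # best-scoring documents first

# Usage: embed the query with the same model used for the documents,
# then fetch the ten nearest documents, e.g.
# idx = top_k_similar(embed("query text"), all_doc_embeddings, k=10)
```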

For example, imagine searching through your documents relating to your market data consumption, legal use and pricing information. Practitioners know the minefield! By extracting and structuring key information from those documents into searchable embeddings, you can quickly search for meaning, in conjunction with public information about market data pricing, licensing and legalities, and so inform your market data vendor procurement.

In short, a vector database helps you incorporate contextual understanding alongside your LLM. Through this combination of "your world" and the pre-trained "LLM world," you can query and search in context for the differentiating and transformational use cases relevant to your organization. It saves the expense of maintaining your own models, yet gives you control over how an LLM works with your own proprietary information.
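That combination is commonly wired together as retrieval-augmented generation (RAG). As a minimal sketch, with hypothetical vector_db.search and llm.complete interfaces standing in for whichever stack you choose:

```python
def answer_with_context(question: str, vector_db, llm, k: int = 5) -> str:
    """Retrieval-augmented generation in miniature: ground the LLM's answer
    in your own documents to help guard against hallucinations."""
    # 1. Embed the question and retrieve the k most similar private documents.
    passages = vector_db.search(question, top_k=k)

    # 2. Put the retrieved "your world" context into the prompt.
    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer using only the context below. If the context is "
        f"insufficient, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )

    # 3. The pre-trained "LLM world" does the language work.
    return llm.complete(prompt)
```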

Not all vector databases are the same. Some are more flexible with datatypes, allowing combinations of generative and discriminative AI. Make sure your vector database is efficient: think of the environmental and financial costs of every search. Ensure your database matches your business needs; if your organization requires searches over to-the-minute datasets, choose one that can work with edge workflows. Choose wisely.

Most organizations have no need to “make”. Instead, stand on the shoulders of giants and “shape” for optimal impact and YOUR golden transformative use cases.

