
How to Prevent Another AI Winter

Almost every new technology cycles through phases. Initial hype about the exciting possibilities the technology could offer makes it a much discussed and debated topic, with early adopters rushing to jump on board. Then teething problems, limitations and unmet expectations come to light, trust wanes and popularity crashes.

AI has followed this pattern more than once – each unprecedented hype cycle has been swiftly followed by an AI winter. We currently find ourselves amidst an AI hype cycle – as detailed in the new Gartner Hype Cycle for AI. According to this report, GenAI is now entering the “Trough of Disillusionment” as users are not getting the value that the hype led them to expect. Responsible AI, AI engineering and edge AI are at the “Peak of Inflated Expectations”, meaning they are gaining momentum but run the risk of disappointment if they don’t deliver.

Given the sheer volume of AI hype we’ve experienced, this could be the biggest AI winter yet. But is a pullback inevitable this time, or can businesses take steps to avoid the outcome of previous cycles?

Swept up in the boundless hype around GenAI, organisations are increasingly exploring AI usage, often without understanding the core limitations of the underlying algorithms. Many are trying to apply plasters to applications of AI that are not ready for prime time. Today, fewer than 10% of organisations can operationalise AI to enable meaningful execution.

Regulation implications

Adding to the pressure, prescriptive AI regulation is being fuelled by the premature release of LLMs to the public, which was quickly followed by case after case of AI failures. These regulations specify strong responsibility and transparency requirements for AI applications, requirements that GenAI is unable to meet. Regulation will exert further pressure on companies to pull back, and this has already started.

Today about 60% of banks are prohibiting or significantly limiting GenAI usage. This is expected to become more restrictive until acceptably governed AI innovation emerges that can serve consumer-impacting use cases.

If – or when – a market pullback or collapse does occur, all enterprises would be affected, particularly by AI regulation, but some more than others. In financial services, analytic and AI technologies exist today that can withstand AI regulatory scrutiny. Forward-looking companies are ensuring that they have interpretable AI and traditional analytics on hand while they explore newer AI technologies with appropriate caution.

Many financial services organisations have already pulled back from using GenAI, both internally and in customer-facing applications; the fact that ChatGPT, for example, doesn’t give the same answer twice is a big roadblock for banks, which operate on the principle of consistency.
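To see why that consistency is hard to guarantee, consider a toy illustration of temperature-based token sampling, the mechanism most LLMs use to generate text. The vocabulary, probabilities and function below are invented for illustration; they do not describe how any particular product is configured.

```python
import random

# Toy next-token distribution (invented for illustration).
next_token_probs = {"approve": 0.5, "refer": 0.3, "decline": 0.2}

def sample_token(probs, temperature=1.0):
    """Sample one token; any temperature > 0 leaves randomness in place,
    so identical prompts can yield different outputs run to run."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights)[0]

# Five "answers" to the same prompt will often differ.
print([sample_token(next_token_probs) for _ in range(5)])
```

Lowering the temperature reduces this variance but, in practice, does not eliminate it entirely, which is why consistency-bound institutions remain wary.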

The enterprises that are likely to pull back the most on AI are the ones that have gone all-in on GenAI – especially those that have already rebranded themselves as GenAI companies, much like there were Big Data companies a few years ago.

Pullback prevention begins with transparency

To prevent a major AI pullback today, we must move beyond aspirational and boastful claims to honest discussion of the technology’s risks, and define what mature and immature AI look like. Companies need to empower their data science leadership to define what constitutes high-risk AI and to assess how prepared they are to meet responsible and trustworthy AI standards; this comes back to upcoming AI regulation.

Companies must focus on developing a Responsible AI programme, or on reviving Responsible AI practices that have atrophied during the GenAI hype cycle. They should start by reviewing how AI regulation is developing and whether they have the tools to appropriately address and pressure-test their AI applications. If they are not prepared, they need to understand the business impact of potentially having AI pulled from their repository of tools.

Next, companies should classify what is “traditional AI” versus Generative AI and pinpoint where they are using each. They should also adopt a formal “humble AI” approach: tiering down to safer technology when an AI model indicates its decisioning is not trustworthy, as sketched below.
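A minimal sketch of what such a tier-down might look like, assuming a hypothetical primary model that reports a confidence alongside its prediction and a simple, interpretable fallback rule. The names, thresholds and toy model are illustrative assumptions, not FICO’s implementation.

```python
def humble_decision(features, primary_model, fallback_rule, confidence_floor=0.8):
    """Use the primary model only when it is confident in its own
    decisioning; otherwise tier down to the safer fallback."""
    probs = primary_model.predict_proba(features)  # e.g. [p_decline, p_approve]
    confidence = max(probs)
    if confidence >= confidence_floor:
        return probs.index(confidence), "primary"
    # The model has flagged its own output as untrustworthy: fall back
    # to a transparent rule that can withstand regulatory scrutiny.
    return fallback_rule(features), "fallback"


class ToyModel:
    """Stand-in for a trained classifier (purely for illustration)."""
    def predict_proba(self, features):
        return [0.55, 0.45]  # a low-confidence prediction


if __name__ == "__main__":
    decision, source = humble_decision(
        {"utilisation": 0.9},
        ToyModel(),
        fallback_rule=lambda f: 0 if f["utilisation"] > 0.8 else 1,
    )
    print(decision, source)  # tiers down: 0 fallback
```

The design point is that the fallback path stays transparent and auditable even when the primary model is not.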

Why we still need data scientists

Too many organisations are driving AI strategy through business owners or software engineers who often have little to no knowledge of the mathematics and risks specific to AI algorithms. Stringing AI together is easy; building AI that is responsible and safe is a much harder exercise. Data scientists can help businesses find the right paths to adopting the right types of AI for different business applications, regulatory requirements, and optimal consumer outcomes.



Scott Zoldi, Chief Analytics Officer, FICO
