US 2024 election: How AI policy differences will impact financial institutions and fintechs



Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

Kamala Harris’s and Donald Trump’s positions on regulating and/or promoting artificial intelligence (AI) within the financial services and fintech arena likely won’t differ much, whoever wins this November, based on a consensus of experts in the field.

At least not initially, say expert issue-watchers from sources such as the Center for Strategic & International Studies (CSIS) and BSA – The Software Alliance, which counts among its members many leading fintechs based in the US and worldwide.

The respective candidates’ policies on AI are quite similar as of now, with few noted variations, and neither side has spoken specifically on how they would weigh in where financial institutions’ or fintechs’ deployment and expansion of AI are concerned.

In a podcast discussing the candidates’ AI policies, Gregory Allen, author and director of the Wadhwani AI Center at CSIS (and former director of strategy and policy at the US Department of Defense Joint Artificial Intelligence Center), shared his views on general AI policy positions: “Both the Trump administration and the Biden administration view competition with China in AI and national security as really foundational to the overall geopolitical competition and national security competition between the United States and China.”

Allen relayed several examples of how key experts in AI technology and utilisation have served during both the Trump and Biden administrations, leading further to what looks at this point to be an effective ‘homogenisation’ of AI views on Capitol Hill and around the Washington Beltway.

Though some of the election rhetoric might suggest otherwise, Allen doesn’t (yet) see any substantive disagreements between the two competing camps on regulating or policing AI in society, whatever the industry involved. Adding to the ‘blended positions’ held by staffers on both fronts, he noted that Kamala Harris, in her role as vice president, has been one of the administration’s leaders on AI issues and policies. This, Allen says, cements his prediction that, should she be elected to the US’s highest office, little change in tone or action should be expected from the current administration’s course.

“Vice President Harris has actually been a genuine leader in the Biden administration's AI work,” said Allen. He noted that “it happens to be the case that Vice President Harris, a former senator from California, where an awful lot of tech industry resides, really was the point person for a good chunk of the administration's AI policy.”

AI in financial services: first in the back office, now shaping client experience, and more

Financial services firms and their technology partners have been using AI in back-office functions for several years now, some for a decade or more. What began primarily with AI and related machine learning (ML) tools providing robotic process automation (RPA) of mundane tasks, or anti-money-laundering and fraud identification assistance within transaction processing, has since progressed to client service chatbots and, in some cases, far beyond these early applications.
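As a simplified illustration of the kind of statistical fraud screening described above — the transaction amounts, threshold, and flagging rule here are entirely hypothetical, not any institution’s actual system — a traditional-AI-style anomaly check on transaction amounts might be sketched as:

```python
from statistics import mean, stdev

def flag_suspicious(amounts, threshold=3.0):
    """Flag transactions whose amount deviates from the mean
    by more than `threshold` standard deviations (a z-score test)."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if sigma and abs(a - mu) / sigma > threshold]

# Hypothetical daily card transactions, with one obvious outlier
history = [42.0, 55.5, 38.9, 61.2, 47.3, 52.8, 44.1, 9500.0]
print(flag_suspicious(history, threshold=2.0))  # the 9500.0 outlier is flagged
```

Production systems use far richer features (merchant, geography, velocity) and learned models rather than a single z-score, but the observe-score-flag shape is the same.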

According to most surveys, including AI chip giant Nvidia’s own overview of 2024’s financial services AI trends, nearly half or more of larger companies in the banking and embedded finance arenas are not just using traditional AI to improve operational tasks; they have also either adopted, or plan to implement, generative or agentic AI (via any of several products and providers now available).

Companies are using these to help them improve service, support, marketing, sales, and in general to enable them to analyse large bodies of data to create greater efficiencies for a huge variety of potential use cases.

In some instances, the deployment of AI solutions is being used to justify reductions in human staff, but the landscape of specific use cases for AI rollouts seems to evolve almost daily across the financial services industry and elsewhere in the business world.

Three main ‘flavours’ of AI, from traditional to generative to agentic AI

The main difference between “traditional” AI and generative AI is that the former is designed to analyse data and make predictions for specific tasks, while generative AI goes a step further by creating new data similar to that on which it was trained, using what are called large language models (LLMs), or more widely scoped data sets. To put it another way, per one industry player, Neurond AI: “Traditional AI excels at tasks requiring logical reasoning, pattern recognition, and decision-making based on predefined rules.”

Agentic AI, as defined by another provider, UiPath, goes even further in content creation “to empower autonomous systems capable of independent decision making and actions [and] analyse situations, formulate strategies, and execute actions to achieve specific goals, all with minimal human intervention.

“Agentic systems are designed to operate independently,” says the company, “adapting to changing environments and learning from their experiences. While GenAI focuses on creating, agentic AI focuses on doing.”
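The ‘creating versus doing’ distinction can be sketched in a few lines of toy code — the goal, action, and account figures below are invented purely for illustration and correspond to no vendor’s product — showing the observe-decide-act loop that characterises agentic systems:

```python
def agent_loop(balance, target, max_steps=10):
    """A toy agentic loop: observe state, choose an action toward a goal,
    act, and repeat -- with minimal human intervention."""
    actions = []
    for _ in range(max_steps):
        if balance >= target:              # observe: goal reached?
            break
        step = min(100, target - balance)  # decide: size of next transfer
        balance += step                    # act: execute the transfer
        actions.append(step)
    return balance, actions

# Hypothetical goal: top an account up from 250 to 500
print(agent_loop(250, 500))  # → (500, [100, 100, 50])
```

A generative model, by contrast, would stop at producing content (a draft email, a summary); the agentic layer is what plans and executes the steps.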

In Finextra’s many recent webinars on AI/ML and the ongoing deployment of new forms of AI technology within financial institutions, panellists representing larger and smaller banks and their fintech partners have consistently stated that such tools and methods have helped them bolster security for transactions and back-office bank operations, while also improving speed of execution and the overall client experience on the front end of the business.

AI policy differences seem substantial on the surface, but in reality…not so much

The two opposing US political camps are not in total agreement about AI and its regulatory future. It’s true that Donald Trump and his running mate JD Vance have publicly lambasted the sitting president for issuing his executive order on AI last October, which proposed clearer, safer guidelines around AI use. Trump and the Republican party platform during this summer’s national convention described it as: “Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes radical leftwing ideas on the development of this technology.”

Vance, a former venture capitalist with his own lingering concerns about some AI applications, said that even with those reservations, he believes the Biden/Harris policies are going too far with what he calls “pre-emptive overregulation attempts that would frankly entrench the tech incumbents that we already have.”

Biden’s sweeping presidential directive issued nearly a year ago stated that AI “holds extraordinary potential for both promise and peril.” Going further, the document hailed AI’s potential upsides “to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure,” as well as warning of its potential pitfalls: “Irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”

In recent statements, Vice President Harris has elaborated on her own views on the topic, voicing general support – with some cautions – for AI’s expansion among government agencies and in business and consumer circles. These opinions have likely been forged over time through her experience and relationships with many of the Golden State-headquartered tech titans as California Attorney General and US Senator.

In one speech in particular, at last November’s Global Summit on AI Safety in London, and notably more than six months before she became a candidate for president, Harris spelled out American AI strategy for the audience. The VP declared at the time – with European and other regional and national AI regulations already in place – that she and President Biden “reject the false choice that suggests we can either protect the public or advance innovation.” And while acknowledging a need to consider existential threats to humanity, Harris emphasised in her address the importance of also understanding “the full spectrum of AI risk.”

Some prominent AI VCs supporting Trump/Vance ticket, yet most FIs and fintechs remain silent

The Biden administration’s proposed security constraints around AI’s usage have helped swing some Silicon Valley heavyweights into Trump’s corner, including Elon Musk and the powerful venture capitalists Marc Andreessen and Ben Horowitz. The latter said the Republican call to repeal Biden’s order “sounds like a good plan to me” – noting that he and Andreessen had discussed the proposals with Trump at a dinner prior to their being shared in the party’s platform and at its convention.

But not much in the way of open concerns or compliments for AI policies of either party has been voiced – at least not publicly – by notable company leaders in the financial services and fintech arenas.

CSIS’s Allen says that we shouldn’t read too much into all the bombast from both sides of the political aisle. He asserts that AI policymaking has been viewed through much of its relatively short life as a bipartisan issue. “That’s one of the reasons why […] we had Senate Majority Leader Chuck Schumer come visit CSIS in June of 2023. He [was sponsoring] a bipartisan initiative on Capitol Hill around AI.”

Might AI policy continue to be what it has been thus far – a (surprisingly) bipartisan issue?

Anyone observing the legislative discord that has frequently marked both the Trump and Biden presidential terms is aware that such cross-party agreement, on nearly any topic, is unusual, as Allen explained: “There weren’t a ton of areas that were very bipartisan, but AI policy was one of those areas. And I think there's some things that you can point to in what the first Trump administration did that also emphasise, you know, what parts of this are bipartisan now.”

Aaron Cooper, senior vice president of global policy for BSA, the software industry advocacy organisation which counts AI leaders Microsoft, Oracle, OpenAI, Salesforce, SAP, and most other major tech firms among its members, agreed. “There’s a lot of similarity” between how the Trump and Biden administrations have approached AI policy, he said. In fact, he counsels political and other observers to temper their expectations of wide disparities between the actions likely to be taken on AI usage and guidance by either side should they win the presidency next month.

“Regardless of who’s in the White House, they’ll be looking at how we can unleash the most good from AI while reducing the most harm,” Cooper added. “That sounds obvious, but it’s not an easy calculation.”

