What California's AI Safety Bill SB 1047 means for fintech



The Wall Street Journal reported this week that SB 1047, the controversial, first-of-its-kind state AI safety bill that divided Silicon Valley tech giants such as Meta, Google and OpenAI, has been vetoed by California Governor Gavin Newsom. Here’s everything you need to know.

What is SB 1047?

Also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, the bill’s intention was to make it mandatory for organisations that spend more than $100 million training AI models to put safety measures in place. Passed by the California State Assembly on 28 August 2024, the bill sought to regulate foundational AI models and to place obligations on companies that provide the resources used for AI. SB 1047 is one of the first significant pieces of AI regulation in the US that would place liability on the developers of AI models.
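For illustration only, here is a minimal sketch of how a compliance team might flag whether a planned training run falls within the bill’s scope, assuming the $100 million training-cost threshold cited above is the deciding factor; the function and field names are hypothetical and not drawn from the bill’s text.

```python
# Hypothetical applicability check based on the $100m training-cost
# threshold described in the article. Names and structure are illustrative,
# not taken from SB 1047 itself.
from dataclasses import dataclass

SB1047_TRAINING_COST_THRESHOLD_USD = 100_000_000  # figure cited above


@dataclass
class TrainingRun:
    model_name: str
    estimated_training_cost_usd: float


def requires_safety_measures(run: TrainingRun) -> bool:
    """Return True if the run's estimated cost exceeds the assumed threshold."""
    return run.estimated_training_cost_usd > SB1047_TRAINING_COST_THRESHOLD_USD


if __name__ == "__main__":
    runs = [
        TrainingRun("fraud-detection-model", 2_500_000),
        TrainingRun("frontier-foundation-model", 150_000_000),
    ]
    for run in runs:
        status = "in scope" if requires_safety_measures(run) else "out of scope"
        print(f"{run.model_name}: {status}")
```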

Earlier this month, Governor Newsom revealed some hesitance around signing the bill, which, had he signed it, would have taken effect on 1 January 2026. What sets this regulation apart is that developers of covered AI models would be required to build in what is being referred to as a ‘kill switch’, the ability to shut a model down entirely, and would be subjected to scrutiny from third-party compliance audits.

This prompted open source AI developers across the technology sector to raise concerns, on the assumption that the rules would stifle innovation and, in turn, prevent organisations from being as secure as possible through the use of AI. While Big Tech pushed back, over 100 current and former employees of leading AI companies called on the California Governor to support the legislation because “the most powerful AI models may soon pose severe risks.”

Why was SB 1047 vetoed?

SB 1047 was vetoed because:

  1. By focusing only on the most expensive, large-scale models, it establishes a regulatory framework that could give the public a false sense of security about controlling AI.
  2. Smaller, specialised models may emerge as equally or even more dangerous than those the bill targets, and the bill risks curtailing the innovation that fuels advancement in favour of the public good.

Newsom provided his view on regulating AI. “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Is the US working on AI regulation?

In his veto message, Newsom acknowledges that the US has an AI regulation gap that needs to be closed. California has a role in regulating AI, as does the US Congress, but any framework “must be based on empirical evidence and science.” Many initiatives underway are indeed grounded in evidence and science, and Newsom himself has signed “over a dozen bills regulating specific, known risks posed by AI” in the last 30 days. His point is that regulation must be done right.

With no federal legislation on AI in place, companies wanting to keep pace with peers that are leveraging AI will have to navigate the patchwork of current and proposed AI regulatory frameworks at state and local level.

What the AI safety bill would mean for fintech

Liability is a discussion that continues to permeate the fintech industry. The EU’s AI Act, for instance, establishes liability in numerous ways, and it is clear that strict liability applies to operators of high-risk AI systems, as well as to importers, distributors, and deployers, if the technology causes harm.

Had SB 1047 come into effect, AI developers within a fintech firm would have been required, before training a covered model, to publicly disclose how the company would test the likelihood of the model causing harm. A model found capable of causing critical harm would have to be shut down, and the California Attorney General notified. Violations would carry a civil penalty of up to 10% of the cost of the computing power used to train the model, rising to 30% for any subsequent violation.
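As a rough, hypothetical illustration of that penalty arithmetic, the sketch below estimates the maximum civil penalty from an assumed training-compute cost, using the 10% and 30% figures described above; the function name and example numbers are assumptions for illustration, not the bill’s wording, and none of this is legal guidance.

```python
# Illustrative estimate of the maximum civil penalty, based on the 10% / 30%
# figures described above. The compute-cost input and function are hypothetical.

def estimate_max_civil_penalty(training_compute_cost_usd: float,
                               prior_violations: int = 0) -> float:
    """Estimate the penalty cap: 10% of training compute cost for a first
    violation, 30% for any subsequent violation."""
    rate = 0.30 if prior_violations > 0 else 0.10
    return training_compute_cost_usd * rate


if __name__ == "__main__":
    # Example: a model assumed to have required $120m of compute to train.
    cost = 120_000_000
    print(f"First violation cap:      ${estimate_max_civil_penalty(cost):,.0f}")
    print(f"Subsequent violation cap: ${estimate_max_civil_penalty(cost, 1):,.0f}")
```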

It is also evident that the legislation would have changed how large AI developers behave, given the additional testing, governance, and reporting hurdles to be cleared ahead of any AI model training. SB 1047 would also have tasked the state’s Government Operations Agency with developing a framework for the creation of a public cloud computing cluster known as ‘CalCompute’, for use by developers and researchers statewide, in fintech or otherwise.

While 2024 was a busy legislative year, many more AI regulations will be pushed in 2025. Fintech organisations must take stock of the AI tools and services they are currently using, and AI developers must establish a compliance strategy that includes risk assessments and proactively harnesses the power of AI technologies.
