US AI regulation state by state: Impact and strategies for financial services

Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

AI has been a transformative force across various industries, with financial services being one of the early adopters. The integration of AI in financial services has evolved significantly over the past few decades, from basic algorithmic trading and fraud detection to more sophisticated applications like generative AI. The rise of tools such as ChatGPT has added a new dimension to AI's role in the sector, with a notable increase in usage among adults in the US. However, the lack of standardised regulations in the US poses both opportunities and challenges for the future of AI in financial services.

Here’s an overview of where each US state stands on regulating AI, deploying AI responsibly, ensuring compliance, and safeguarding against potential pitfalls. While California and Texas dominate the regulatory conversation, states like Colorado and Virginia have taken a lead in consumer data protection, while others have legislated around insurance decision-making and employer bias.

Alabama

Enacted H172 regulation in 2024

In Alabama, under H172, it is prohibited to distribute media that depicts an individual engaging in speech or conduct in which they did not engage. AI-generated media, otherwise known as deepfakes or synthetic media, that falsely represents someone is no longer permitted in the state when distributed within 90 days of an election with the intent of influencing its result.

What does this mean for financial services? Now that such deepfakes are banned in Alabama, it is illegal for fraudsters to use them to gain access to bank accounts that rely on video verification. This has a clear impact on know-your-customer (KYC) processes, making it more difficult for fraudsters to replicate remote biometric verification.
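As an illustrative sketch only: a KYC pipeline might gate remote video verification on a synthetic-media risk score from a third-party detection vendor. The `deepfake_risk_score` function and the 0.7 threshold below are hypothetical placeholders, not a reference to any specific product.

```python
from dataclasses import dataclass

# Hypothetical threshold above which a session is escalated to manual review.
DEEPFAKE_RISK_THRESHOLD = 0.7

@dataclass
class VerificationResult:
    passed: bool
    reason: str

def deepfake_risk_score(video: bytes) -> float:
    """Placeholder for a vendor liveness/synthetic-media detector.
    Returns a risk score in [0, 1]; wired to a constant here so the
    sketch runs end to end."""
    return 0.1

def verify_kyc_video(video: bytes) -> VerificationResult:
    score = deepfake_risk_score(video)
    if score >= DEEPFAKE_RISK_THRESHOLD:
        # Fail closed on suspected synthetic media and route to a human.
        return VerificationResult(False, f"deepfake risk {score:.2f}: manual review")
    return VerificationResult(True, "liveness check passed")
```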

 

Alaska

Failed to enact regulation in January and February 2024

Alaska has attempted to introduce a number of prohibitions, but none have been enacted. S117 would have banned the creation of deepfakes related to an election candidate, and H358 would have made the knowing use of deceptive synthetic media illegal. H352, which intended to revise the definition of a ‘person’ in civil actions to exclude AI, was also not passed.

 

Arizona

No regulation proposed

 

Arkansas

No regulation proposed

 

California

Enacted related regulations

Since SB 1001, the Bolstering Online Transparency (B.O.T.) Act, took effect in July 2019, it has been illegal to use a bot to communicate with a person in California to incentivise a transaction of goods or services, or to influence an election, without disclosing that the communication is via a bot.
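For illustration, a minimal sketch of how a customer-facing chatbot could satisfy a B.O.T.-style disclosure duty, assuming a simple session model; the disclosure wording below is invented, not statutory text.

```python
BOT_DISCLOSURE = "Notice: you are communicating with an automated bot, not a person."

def send_bot_message(session_history: list[str], text: str) -> list[str]:
    # Prepend the disclosure once, before the bot's first message in a
    # session, so every conversation opens with the required notice.
    if not session_history:
        session_history.append(BOT_DISCLOSURE)
    session_history.append(text)
    return session_history

# Usage: history = send_bot_message([], "Hi! Want to open a savings account?")
```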

AB 2013, introduced on 31 January 2024, requires AI developers to post on their websites the data they used to train their systems. The law applies to generative AI released on or after 1 January 2022, and developers must comply with its provisions by 1 January 2026.
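A sketch of the kind of machine-readable training-data disclosure a developer might publish under AB 2013; the field names are assumptions, since the statute specifies the documentation's content rather than its format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TrainingDataDisclosure:
    system_name: str
    dataset_sources: list[str]   # where the training data came from
    collection_period: str
    contains_personal_data: bool

disclosure = TrainingDataDisclosure(
    system_name="example-llm-v1",
    dataset_sources=["licensed news archive", "public web crawl"],
    collection_period="2019-2023",
    contains_personal_data=False,
)
# Serialise for posting on the developer's website.
print(json.dumps(asdict(disclosure), indent=2))
```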

SB 1229, in place from 15 February 2024, requires property and casualty insurers to disclose, until 1 January 2030, whether they have used AI to make decisions that affect applications and claims.

SB 942, the California AI Transparency Act, was established on 14 January 2024. It requires businesses to create an AI detection tool that allows a user to query the business about which content was created by a generative AI system. The law will go into effect on 1 January 2026.
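One way to read the ‘AI detection tool’ requirement is as a provenance lookup: the business records a fingerprint of each piece of content its generative systems produce and lets users query it. A minimal sketch under that assumption:

```python
import hashlib

_provenance: dict[str, dict] = {}  # SHA-256 digest -> provenance record

def register_generated_content(content: bytes, model: str) -> None:
    digest = hashlib.sha256(content).hexdigest()
    _provenance[digest] = {"ai_generated": True, "model": model}

def query_content(content: bytes) -> dict:
    # Returns the provenance record if the business generated this content.
    digest = hashlib.sha256(content).hexdigest()
    return _provenance.get(digest, {"ai_generated": False})

register_generated_content(b"sample marketing copy", model="gen-ai-v2")
print(query_content(b"sample marketing copy"))   # {'ai_generated': True, ...}
```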

Vetoed by Governor Newsom, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, SB 1047, would have authorised an AI developer to determine whether a model qualifies for a ‘limited duty exemption’ before training of that model begins. The exemption would require reasonable assurance that the model does not, and will not, possess a hazardous capability.

What does this mean for financial services? Had this rule come into effect, AI developers within a fintech firm would, before training a model, have been required to publicly disclose how the company would test the likelihood of the model causing harm. If a model were found to be critically harmful, it would have to be shut down and the California Attorney General notified. Violations would lead to a civil penalty of up to 10% of the cost of the computing power used to train the model, and 30% for any subsequent violation.
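The penalty scaling is easy to make concrete. A worked sketch of the formula as described above; the dollar figure is invented for illustration.

```python
def sb1047_penalty(compute_cost_usd: float, prior_violations: int) -> float:
    # 10% of the cost of the computing power used to train the model for a
    # first violation, 30% for any subsequent violation.
    rate = 0.10 if prior_violations == 0 else 0.30
    return compute_cost_usd * rate

print(sb1047_penalty(50_000_000, 0))  # $5,000,000 for a first violation
print(sb1047_penalty(50_000_000, 1))  # $15,000,000 for a repeat violation
```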

 

Colorado

Enacted HB1147 and SB24-205 in 2024

HB1147 regulates the use of deepfakes produced using generative AI in communications about election candidates. Later, on 17 May 2024, SB24-205 was signed into law, requiring both developers and deployers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination.

In 2021, Colorado enacted SB 21-169 to protect consumers from unfair discrimination when insurers use external consumer data and information sources, as well as algorithms and predictive models, based on race, colour, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. Following implementation bills in 2023, all Colorado-licensed life insurers were asked to submit a compliance progress report on 1 June 2024, and an annual compliance attestation on 1 December 2024. The Colorado Privacy Act (CPA), which went into effect on 1 July 2023, allows consumers to opt out of personal data processing, whether AI is used or not.

What does this mean for financial services? The financial services industry will need to improve AI compliance in order to keep up with the number of regulations that are being implemented. This will involve validation, testing, and tight feedback loops, in addition to transparency, disclaimers, and circuit breakers. In a community article published on Finextra, Raj Bakhru, CEO, BlueFlame AI, explained: “As finance leaders confront mounting data privacy concerns, now is the time to ensure they have a robust governance framework in place, deploy compliance-focused AI solutions, implement continuous monitoring and auditing, and invest in ongoing training and awareness programs. The key to success lies in establishing clear policies, embracing strategic foresight, and committing to responsible AI utilization to usher in a future where AI and compliance converge to redefine the norms of our industry.”
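As a sketch of the ‘circuit breakers’ mentioned above, assuming a simple failure-count policy: a wrapper that stops serving model output after a run of failed validation checks, until a human resets it. The threshold is illustrative.

```python
class AICircuitBreaker:
    def __init__(self, max_consecutive_failures: int = 5):
        self.max_failures = max_consecutive_failures
        self.failures = 0
        self.tripped = False

    def record_validation(self, passed: bool) -> None:
        # Reset on success; trip the breaker after too many failures in a row.
        if passed:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.tripped = True

    def allow_request(self) -> bool:
        return not self.tripped

    def reset(self) -> None:
        # Would require human sign-off in a real deployment.
        self.failures = 0
        self.tripped = False
```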

 

Connecticut

Enacted Connecticut Privacy Act (CTPA) in 2023

The Connecticut Privacy Act (CTPA) went into force on 1 July 2023, providing consumers the right to opt out of profiling where it involves automated decision-making that produces legal or similarly significant effects.
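In practice, a CTPA-style opt-out turns into a pre-decision check: before an automated decision with legal or similarly significant effect runs, consult the consumer's recorded preference. A minimal sketch, with an invented opt-out store:

```python
opted_out: set[str] = {"consumer-123"}   # stand-in for a preference store

def decide_credit_limit(consumer_id: str) -> str:
    # Honour the profiling opt-out by routing to manual review instead of
    # an automated decision that produces significant effects.
    if consumer_id in opted_out:
        return "manual_review"
    return "automated_decision"

print(decide_credit_limit("consumer-123"))  # manual_review
print(decide_credit_limit("consumer-456"))  # automated_decision
```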

What does this mean for financial services? As Scott Hamilton, contributing editor, Finextra, writes: “While most agree it will bring positive advances for consumer data control and protection to the customer banking experience as a whole, many industry experts are throwing up caution flags to 1033’s proponent agency, the Consumer Financial Protection Bureau (CFPB), about growing operational and compliance concerns they have with the pending regulation’s complexity and timing.”

The CTPA has similarities to the CFPB’s 1033, and as Hamilton continued: “Others around the industry are also sharing questions or objections they’re hearing regarding the new regulation. They recognise that legitimate process and authority enhancement – primarily in data handling and banking relationship portability - are coming, and in fact many welcome the business expansion opportunities that being out front of their competitors on this landmark consumer-empowering rule might bring. Yet they’re still concerned about the short period of time allocated for them to do a great deal of preparation, in light of the CFPB’s proposal for an unusually aggressive implementation timeline.”

 

Delaware

Delaware Personal Data Privacy Act to be enacted in January 2025

Similar to the CTPA, the Delaware Personal Data Privacy Act will provide consumers the right to opt out of profiling from 1 January 2025.

 

Florida

Enacted HB 919 in April 2024

HB 919 requires content created with generative AI to include a disclaimer. The bill took effect on 1 July 2024.

 

Georgia

Enacted HB 203 in May 2023

HB 203 permits an optometrist or ophthalmologist to use an ‘assessment mechanism’ to conduct an eye assessment or generate a prescription for contact lenses or spectacles, subject to medical-related conditions. Bills that would have prohibited the use of AI in insurance decision-making (HB 887) and discrimination based on age, race, colour, sex, sexual orientation, gender, gender expression, national or ethnic origin, religion, creed, familial status, marital status, disability or handicap, or genetic information (HB 890) were introduced in 2024, but both failed.

 

Hawaii

Failed to enact regulation in January 2023 and January 2024

A number of regulations have been proposed in Hawaii, but none have been enacted. This includes a potential Hawaii Consumer Data Protection Act and laws around proof of product safety, data profiling, use of algorithmic information and political content.

 

Idaho

No regulation proposed

 

Illinois

Enacted Illinois AI Video Interview Act in 2019

In 2019, Illinois became the first state to enact restrictions on the use of AI in hiring. The Illinois AI Video Interview Act requires employers to notify applicants of AI use, explain how the AI works, obtain consent, and limit how interview videos are shared. HB 3773, an amendment to the Human Rights Act signed into law on 12 August 2024, also ensures that an employer that uses predictive data analytics in its employment decisions may not consider an applicant’s protected class information, or ZIP code when used as a proxy for race, in making certain employment-related decisions.
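A sketch of the gating logic the Illinois Act implies, assuming a simple session record: AI analysis of an interview video only proceeds once notice, explanation, and consent are all on file. Field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class InterviewSession:
    applicant: str
    notified_of_ai_use: bool = False
    ai_explained: bool = False
    consent_obtained: bool = False

def may_run_ai_analysis(session: InterviewSession) -> bool:
    # All three statutory preconditions must hold before analysis begins.
    return (session.notified_of_ai_use
            and session.ai_explained
            and session.consent_obtained)

session = InterviewSession("A. Applicant", notified_of_ai_use=True,
                           ai_explained=True, consent_obtained=True)
print(may_run_ai_analysis(session))  # True
```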

 

Indiana

Enacted SB5 in January 2023

Similar to the Colorado Privacy Act, SB 5, introduced on 9 January 2023, sets out rules for profiling and automated decision-making.

 

Iowa

No regulation proposed

 

Kansas

No regulation proposed

 

Kentucky

SB 266 regulation proposed

Proposed in March 2024, SB 266 would prohibit automated online accounts, or bots, from communicating with others in a manner intended to mislead.

 

Louisiana

HB 673 and SB 118 regulation proposed

Two regulations have been proposed: HB 673, which would provide consumer protection from discrimination by AI across insurance, and SB 118, which would enforce the registration of AI foundation models.

 

Maine

No regulation enacted

 

Maryland

HB 1202 enacted in October 2020

HB 1202 prohibits an employer from using facial recognition to create a facial template during an applicant’s pre-employment interview, unless the applicant consents. This went into force on 1 October 2020.

 

Massachusetts

SD 745 and HD 2281 proposed in January 2023

The Massachusetts Data Privacy Protection Act (MDPPA), filed in the Senate as SD 745 and in the House as HD 2281, is based on the federal American Data Privacy and Protection Act and would require companies to conduct impact assessments if they use a ‘covered algorithm’ (a computational process that uses AI) in a way that poses a consequential risk of harm to individuals.
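For illustration, an impact assessment for a ‘covered algorithm’ could be tracked as a structured record alongside the model itself; the fields below are assumptions about what such an assessment might capture, not the bill's text.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmImpactAssessment:
    system_name: str
    purpose: str
    population_affected: str
    identified_harms: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    reviewed_by: str = ""

assessment = AlgorithmImpactAssessment(
    system_name="loan-pre-screen-v3",
    purpose="pre-screen consumer loan applications",
    population_affected="retail applicants",
    identified_harms=["potential disparate impact by ZIP code"],
    mitigations=["exclude ZIP code features", "quarterly fairness testing"],
    reviewed_by="model risk committee",
)
```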

What does this mean for financial services? In a community article published on Finextra, Ray Connolly, sales director, Regtick, said: “Organisations must start thinking about establishing a robust framework to navigate and comply with the evolving regulatory landscape surrounding AI and ML. This involves understanding the specific requirements set forth by upcoming regulations, conducting impact assessments, implementing governance structures, and ensuring accountability for AI systems. Organisations should also invest in regular audits, testing, and monitoring to ensure ongoing compliance and address any emerging risks effectively. Collaboration with regulatory bodies, industry peers, experts and within internal departments is crucial in developing best practices and shared standards, promoting responsible and ethical AI adoption.”

 

Michigan

No regulation proposed

 

Minnesota

Enacted HF2309 in March 2023

HF2309 creates a consumer privacy law based on the Colorado Privacy Act and Connecticut Data Privacy Act, regulating the processing of personal information and profiling with automated decision-making.

 

Mississippi

No regulation proposed

 

Missouri

No regulation proposed

 

Montana

Enacted SB384 in February 2023

SB384 is aligned with the Consumer Data Privacy Act, regulating the collection and processing of personal information as well as profiling and automated decision-making.

 

Nebraska

No regulation proposed

 

Nevada

No regulation proposed

 

New Hampshire

Enacted SB255 in January 2023

SB 255 sets out rules for profiling and automated decision-making. It was reintroduced and passed on 18 January 2024.

 

New Jersey

Enacted S332 in January 2024

Although it was initially introduced in 2022, S332 was signed into law in January 2024 and will go into effect on 15 January 2025. The act will require companies to conduct data protection assessments.

 

New Mexico

Enacted HB 182 in May 2024

HB 182 outlines how advertisements containing AI-generated media must be published with a disclaimer.

 

New York

Enacted Local Law 144 in December 2021

New York City passed the first law in the US, Local Law 144, that requires employers to conduct bias audits of AI-enabled tools used for employment decisions. After being delayed multiple times, enforcement began on 5 July 2023.
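The core calculation in a Local Law 144 bias audit is the impact ratio: each demographic category's selection rate divided by the selection rate of the most-selected category. A minimal sketch with invented numbers:

```python
def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    # Impact ratio = category selection rate / highest selection rate,
    # rounded as is typical in published audit results.
    top_rate = max(selection_rates.values())
    return {group: round(rate / top_rate, 2)
            for group, rate in selection_rates.items()}

rates = {"group_a": 0.40, "group_b": 0.30}   # invented example data
print(impact_ratios(rates))  # {'group_a': 1.0, 'group_b': 0.75}
```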

 

North Carolina

No regulation proposed

 

North Dakota

No regulation proposed

 

Ohio

SB 217 proposed in January 2024

SB 217 would require AI-generated products to have a watermark, prohibit removing such a watermark, prohibit simulated child pornography, and prohibit identity fraud using a replica of a person.

 

Oklahoma

HB 3453 proposed in February 2024

HB 3453, the Oklahoma Artificial Intelligence Bill of Rights, would give residents the right to know when they are interacting with an AI engine rather than a real person, to know when their data is being used in an AI model, to opt out, and to know when content was created by AI.

 

Oregon

Enacted SB 619 in August 2023

SB619 sets out rules for profiling and automated decision-making.

 

Pennsylvania

HB49 proposed in March 2023

HB49 would direct the Department of State to establish a registry of businesses operating AI systems in the State. There has been no further action on HB49 since 7 March 2023.

What does this mean for financial services? By maintaining an AI registry, companies can track and manage AI projects effectively, identify project ownership, and ascertain the individuals responsible for reporting on their outcomes, be they success or failure.
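A sketch of the internal registry the paragraph describes, assuming a simple record per system; the field names are illustrative, not drawn from HB49.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    owner: str      # individual or team accountable for reporting outcomes
    purpose: str
    status: str     # e.g. "pilot", "production", "retired"

registry: dict[str, AISystemRecord] = {}

def register_system(record: AISystemRecord) -> None:
    registry[record.name] = record

register_system(AISystemRecord(
    name="chargeback-triage",
    owner="ops-analytics",
    purpose="prioritise disputed card transactions",
    status="pilot",
))
```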

 

Rhode Island

No regulation enacted

 

South Carolina

No regulation enacted

 

South Dakota

No regulation enacted

 

Tennessee

Enacted HB1181 and the ELVIS Act in 2024

HB1181, the Tennessee Information Protection Act, establishes a consumer privacy law along the lines of those enacted in other states. The Ensuring Likeness Voice and Image Security Act (ELVIS Act) was signed into law on 21 March 2024, and protects the voices of songwriters, performers, and celebrities from AI and deepfakes by prohibiting the use of AI to mimic a person’s voice without their permission.

 

Texas

Drafted TRAIGA in October 2024

Released as a draft on 28 October 2024, the Texas Responsible AI Governance Act (TRAIGA) is expected to be introduced by Rep. Capriglione in the 2025 legislative session. According to BCLP Law, “Rep. Capriglione has had prior success with privacy-related bills in Texas, such as the Texas Data Privacy and Security Act, and worked with industry stakeholders to draft TRAIGA. If passed, TRAIGA would amend the Texas Data Privacy and Security Act to establish risk-based obligations in connection with the use of AI systems. TRAIGA would also establish an AI Council in Texas, and would require developers of high-risk AI systems to protect consumers from known risks of algorithmic discrimination and to provide risk assessments. TRAIGA would also establish an AI Regulatory Sandbox Programme for participating AI developers to test AI systems under a statutory exemption.”
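To make the ‘risk-based obligations’ idea concrete: obligations scale with the tier a use case falls into. A hypothetical sketch; the domains and control names below are invented, not TRAIGA's definitions.

```python
# Hypothetical high-risk domains under a TRAIGA/Colorado-style framework.
HIGH_RISK_DOMAINS = {"lending", "insurance_underwriting", "employment"}

def risk_tier(domain: str) -> str:
    return "high" if domain in HIGH_RISK_DOMAINS else "standard"

def required_controls(domain: str) -> list[str]:
    # Higher tiers carry heavier obligations, e.g. risk assessments and
    # safeguards against algorithmic discrimination.
    if risk_tier(domain) == "high":
        return ["risk_assessment", "algorithmic_discrimination_safeguards",
                "consumer_disclosure"]
    return ["basic_transparency"]

print(required_controls("lending"))
```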

What does this mean for financial institutions? The National Law Review’s editor Oliver Roberts predicts that “state legislators will copy the ‘risk-based’ approach to AI regulation, which is at the core of the European Union AI Act and the Colorado AI Act. One example of this is the draft Texas Responsible AI Governance Act (TRAIGA), which is set to be introduced by Republican Texas State Rep. Giovanni Capriglione in January 2025. If enacted, TRAIGA would become the nation’s most restrictive state AI bill. However, given Texas’s pro-business political climate, I anticipate that this bill will fail to pass. I also believe that many legislatures will come to recognize the illogical and overreaching nature of these ‘risk-based’ regulatory approaches. For instance, TRAIGA prohibits the use of AI systems for ‘social scoring,’ yet it does not ban social scoring conducted without AI. This discrepancy highlights a fundamental flaw in such frameworks: they penalize the use of AI broadly without fully addressing the underlying harmful behavior. This approach not only stifles innovation and burdens smaller businesses but also focuses on speculative risks rather than addressing actual harms, creating a fragmented and overly restrictive regulatory landscape. As AI rapidly develops, I expect legislatures to eventually ditch this approach.”

 

Utah

No regulation proposed

 

Vermont

No regulation enacted

 

Virginia

Enacted VCDPA in January 2023

The Virginia Consumer Data Protection Act (VCDPA) sets out rules for profiling and automated decision-making, enabling consumers to opt-out.

 

Washington

No regulation enacted

 

West Virginia

No regulation enacted

 

Wisconsin

No regulation proposed

 

Wyoming

No regulation proposed

