The Evolving Landscape of AI Regulation in Financial Services
Artificial intelligence (AI) is increasingly woven into financial services operations, transforming everything from consumer interactions through chatbots and targeted marketing to essential functions like underwriting, credit decisions, fraud detection, fair lending compliance, and collections. Financial institutions also rely on AI to analyze consumer complaints, manage customer relationships, and craft business strategies. But as AI adoption accelerates, the question of which agencies will regulate its use remains unsettled.
Initial Federal Oversight
When AI gained momentum in financial services, federal agencies initially took charge. The Federal Housing Finance Agency and the Consumer Financial Protection Bureau issued AI compliance directives between September 2022 and September 2023. Other federal agencies, including the Federal Trade Commission, Department of Justice, Office of the Comptroller of the Currency, Federal Reserve, and Equal Employment Opportunity Commission, quickly followed with their own AI oversight statements.
However, no consensus or binding law on AI regulation emerged at the federal level. As federal momentum faded, state regulators stepped in, passing legislation focused on bias, transparency, and compliance in AI-driven decision-making for lending and employment. Several states also clarified that discriminatory AI behavior would be assessed under their Unfair or Deceptive Acts or Practices (UDAP) laws, creating a patchwork of oversight.
Shifts in Regulation
Earlier this year, the Trump administration moved to deregulate the use of AI. President Trump signed Executive Order 14179 on January 23, 2025, revoking President Biden's comprehensive AI Executive Order, which had sought to place guardrails on AI use. Shortly thereafter, the One Big Beautiful Bill (OBBB) Act was introduced. The OBBB Act, which passed the House on May 22, 2025, seeks a 10-year moratorium on state and local AI regulation, with exceptions only for laws that encourage AI adoption or avoid imposing requirements on AI systems. If the bill passes the Senate, state regulators would be stripped of their ability to enforce AI-specific regulations — both those pending and those already enacted — for a decade, leaving only UDAP laws or other generally applicable laws as backstops.
The Importance of Understanding AI Regulation
The ongoing evolution of AI regulation is challenging to follow even for the most sophisticated compliance teams and in-house counsel, yet understanding it is critical to remaining competitive in the financial services industry today. To help clarify where AI regulation currently stands, we provide below an overview of UDAP statements and guidance related to AI, followed by enacted and pending AI legislation that could be preempted — and thus rendered unenforceable or put on a 10-year hold — if the OBBB Act passes the Senate.
State Guidance on Application of UDAP and Existing Laws to AI
State enforcement through existing consumer protection laws would remain intact under the federal moratorium. Several states have already issued guidance explicitly stating that their UDAP laws or existing consumer protection laws apply to AI:
- California issued a legal advisory on January 13, 2025, highlighting that existing consumer protection laws apply to AI-driven decisions.
- Oregon provided guidance on AI-related compliance requirements on December 24, 2024, emphasizing that AI development must prioritize consumer protection, privacy, and fairness.
- Massachusetts issued an advisory on April 16, 2024, clarifying that existing state laws and regulations apply to AI systems.
- The New York Department of Financial Services issued an industry letter on October 16, 2024, providing guidance on the risks posed by AI.
Enacted State AI-Specific Legislation Relating to Financial Services
Several states have gone beyond UDAP enforcement and enacted legislation specifically targeting AI use in financial services, employment decisions, and data privacy. However, if the OBBB Act is enacted in its current form, it would render all such enacted and pending state AI-specific legislation unenforceable.
- California enacted the Generative Artificial Intelligence: Training Data Transparency Act in the autumn of 2024, requiring developers to publicly disclose specified information related to training data.
- Colorado enacted two laws in 2024 that directly target the use of AI in consumer finance, including transparency in AI-driven lending decisions.
- Illinois amended the Consumer Fraud and Deceptive Business Practices Act in the summer of 2024, expanding oversight of predictive data analytics and AI applications.
- New York City enacted the Bias Audit Law in 2021, mandating independent audits of automated employment decision tools.
- The Texas attorney general introduced a data privacy and security initiative focused on AI risks in consumer transactions.
- Utah passed the Artificial Intelligence Policy Act in 2024, establishing an Office of AI Policy and requiring disclosure of AI interactions.
Proposed State AI-Specific Legislation Relating to Financial Services
Several states have also proposed legislation specifically targeting AI use in financial services, though these bills would likely not advance any further under the federal moratorium.
- California introduced various bills in the 2025–2026 legislative session focusing on civil immunity for developers and establishing human oversight over AI systems.
- Connecticut introduced SB 2, focusing on AI governance and transparency.
- Hawaii introduced SB 59, prohibiting discriminatory algorithmic eligibility determinations.
- Illinois introduced SB 2203, requiring annual impact assessments for automated decision tools.
Conclusion: The Future of AI Regulation
With the proposed decade-long federal moratorium and the patchwork of pending state legislation, the future of AI regulation remains uncertain. One consistent theme across all potential outcomes is an emphasis on transparency. Whether AI is used in customer-facing chatbots or in back-end decision-making processes, state AI-specific legislation and existing state consumer protection legislation alike are converging on the need for clear disclosure and accountability in AI deployment.
Despite the present uncertainty, financial institutions should take measures to ensure their AI systems comply with the basic tenets of consumer protection law. Companies should implement the following best practices to stay ahead of the shifting regulatory landscape:
- Build a robust AI governance framework. Establish oversight bodies and accountability structures for AI system outcomes.
- Prioritize transparency and explainability. Use explainable AI (xAI) in high-stakes areas and ensure traceability of model decisions.
- Align with emerging global standards. Monitor existing frameworks and consider adopting voluntary standards to stay ahead of regulation.
- Maintain data hygiene and governance. Ensure high-quality, unbiased data inputs and conduct data privacy impact assessments.