AI Regulation in Financial Services: Current Trends and Future Challenges

The Evolving Landscape of AI Regulation in Financial Services

Artificial intelligence (AI) is increasingly woven into financial services operations, transforming everything from consumer interactions through chatbots and targeted marketing to essential functions like underwriting, credit decisions, fraud detection, fair lending, and collections. Financial institutions also rely on AI to analyze consumer complaints, manage customer relationships, and craft business strategies. But as AI adoption accelerates, the question of which agencies will regulate its use remains unsettled.

Initial Federal Oversight

When AI gained momentum in financial services, federal agencies initially took charge. The Federal Housing Finance Agency and the Consumer Financial Protection Bureau issued AI compliance directives in September 2022, April 2023, and September 2023. Other federal agencies, including the Federal Trade Commission, Department of Justice, Office of the Comptroller of the Currency, Federal Reserve, and Equal Employment Opportunity Commission, quickly followed with their own AI oversight statements.

However, no federal consensus or binding AI law emerged. As federal momentum faded, state regulators stepped in, passing legislation focused on bias, transparency, and compliance in AI-driven decision-making for lending and employment. Several states also clarified that discriminatory AI behavior would be assessed under their Unfair or Deceptive Acts or Practices (UDAP) laws, creating a patchwork of oversight.

Shifts in Regulation

Earlier this year, the Trump administration moved to deregulate the use of AI. President Trump signed Executive Order 14179 on January 23, 2025, revoking President Biden’s comprehensive AI Executive Order, which had sought to place guardrails on AI use. Shortly thereafter, the One Big Beautiful Bill (OBBB) Act was introduced. The OBBB Act, which passed the House on May 22, 2025, seeks a 10-year moratorium on state and local AI regulation, with exceptions only for laws that encourage AI adoption or avoid imposing requirements on AI systems. If the Act passes the Senate, state regulators would be stripped of their ability to enforce AI-specific regulations — both those pending and those already enacted — for a decade, leaving only UDAP laws and other generally applicable laws as backstops.

The Importance of Understanding AI Regulation

The ongoing evolution of AI regulation is challenging to follow for even the most sophisticated compliance teams and in-house counsel, yet understanding it is critical to remaining competitive in the financial services industry today. To help clarify where AI regulation currently stands, we provide below an overview of AI-related UDAP statements and guidance, followed by enacted and pending AI legislation that could be preempted — and thus rendered unenforceable, or put on a 10-year hold — if the OBBB Act passes the Senate.

State Guidance on Application of UDAP and Existing Laws to AI

State enforcement through existing consumer protection laws would remain intact under the federal moratorium. Several states have already issued guidance explicitly stating that their UDAP laws or existing consumer protection laws apply to AI:

  • California issued a legal advisory on January 13, 2025, highlighting that existing consumer protection laws apply to AI-driven decisions.
  • Oregon provided guidance on AI-related compliance requirements on December 24, 2024, emphasizing that AI development must prioritize consumer protection, privacy, and fairness.
  • Massachusetts issued an advisory on April 16, 2024, clarifying that existing state laws and regulations apply to AI systems.
  • The New York Department of Financial Services issued an industry letter on October 16, 2024, providing guidance on the risks posed by AI.

Enacted State AI-Specific Legislation Relating to Financial Services

Several states have gone beyond UDAP enforcement and introduced legislation specifically targeting AI use in financial services, employment decisions, and data privacy. If enacted in its current form, however, the OBBB Act would render the AI-specific state legislation described below, both enacted and pending, unenforceable for a decade.

  • California enacted the Generative Artificial Intelligence: Training Data Transparency Act in the autumn of 2024, requiring developers to publicly disclose specified information related to training data.
  • Colorado enacted two laws in 2024 that directly target the use of AI in consumer finance, including transparency in AI-driven lending decisions.
  • Illinois amended the Consumer Fraud and Deceptive Business Practices Act in the summer of 2024, expanding oversight of predictive data analytics and AI applications.
  • New York City enacted the Bias Audit Law in 2021, mandating independent audits of automated employment decision tools.
  • The Texas attorney general introduced a data privacy and security initiative focused on AI risks in consumer transactions.
  • Utah passed the Artificial Intelligence Policy Act in 2024, establishing an Office of AI Policy and requiring disclosure of AI interactions.

Proposed State AI-Specific Legislation Relating to Financial Services

Several states have proposed legislation specifically targeting AI use in financial services, but these bills would likely advance no further under the federal moratorium.

  • California introduced various bills in the 2025–2026 legislative session focusing on civil immunity for developers and establishing human oversight over AI systems.
  • Connecticut introduced SB 2, focusing on AI governance and transparency.
  • Hawaii introduced SB 59, prohibiting discriminatory algorithmic eligibility determinations.
  • Illinois introduced SB 2203, requiring annual impact assessments for automated decision tools.

Conclusion: The Future of AI Regulation

With the proposed decade-long federal moratorium and the patchwork of pending state legislation, the future of AI regulation remains uncertain. One consistent theme across all potential outcomes is an emphasis on transparency. Whether AI is used in customer-facing chatbots or in back-end decision-making processes, state AI-specific legislation and existing state consumer protection legislation alike are converging on the need for clear disclosure and accountability in AI deployment.

Despite the present uncertainty, financial institutions should take measures to ensure their AI systems comply with the basic tenets of consumer protection law. Companies should implement the following best practices to stay ahead of the evolving regulatory landscape:

  • Build a robust AI governance framework. Establish oversight bodies and accountability structures for AI system outcomes.
  • Prioritize transparency and explainability. Use explainable AI (xAI) in high-stakes areas and ensure traceability of model decisions.
  • Align with emerging global standards. Monitor existing frameworks and consider adopting voluntary standards to stay ahead of regulation.
  • Maintain data hygiene and governance. Ensure high-quality, unbiased data inputs and conduct data privacy impact assessments.
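To make the transparency and traceability practices above concrete, the sketch below shows one minimal way a decision audit trail and consumer disclosure might be structured. It is an illustrative example only: the names (`DecisionRecord`, `log_decision`) and fields are assumptions for this sketch, not requirements drawn from any statute, guidance, or vendor API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable AI-assisted decision (illustrative structure only)."""
    model_id: str    # which model/version produced the outcome
    inputs: dict     # the features the model actually saw
    outcome: str     # e.g. "approved" / "denied"
    reasons: list    # principal factors, as might feed an adverse-action notice
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def consumer_disclosure(self) -> str:
        """Plain-language notice that an automated system was involved."""
        return (
            f"This decision ({self.outcome}) was made with the assistance of "
            f"an automated system. Principal factors: {', '.join(self.reasons)}."
        )

# In practice this would be durable, access-controlled storage, not a list.
AUDIT_LOG: list[DecisionRecord] = []

def log_decision(record: DecisionRecord) -> str:
    """Append the decision to the audit trail and return its disclosure text."""
    AUDIT_LOG.append(record)
    return record.consumer_disclosure()
```

Recording the model version, inputs, outcome, and principal reasons at decision time is what later makes explainability, bias audits, and disclosure obligations tractable; reconstructing that information after the fact is far harder.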

This article provides an overview of the current landscape of AI regulation in financial services, highlighting the importance of proactive compliance and transparency in the face of evolving legal frameworks.
