Michigan’s Financial Services Regulator and AI Compliance
On January 14, Michigan’s Department of Insurance and Financial Services (DIFS) issued a bulletin reminding all financial service providers under its regulation of their compliance obligations when using advanced analytical and computational technologies, including AI systems. The advisory emphasizes that any AI-derived decisions or actions that may affect consumers must comply with all applicable laws and regulations.
Expectations for AI Technology Governance
The bulletin delineates DIFS’ expectations for the governance of the development, acquisition, and use of AI technologies. It advises financial service providers on the types of information and documentation that may be requested during investigations or examinations, ensuring transparency and accountability in AI operations.
DIFS also pointed to the U.S. Department of the Treasury’s December 2024 report, “Artificial Intelligence in Financial Services,” as a useful resource for organizations developing and implementing AI systems.
Risks Associated with AI Systems
The bulletin outlines several risks posed by AI systems, including:
- Inaccuracy: AI systems can produce erroneous outputs if not properly monitored.
- Unfair Discrimination: There is a potential for bias in AI decision-making processes.
- Data Vulnerability: AI systems may expose sensitive consumer data to risks.
- Lack of Transparency: Understanding how AI systems reach conclusions can be challenging.
- Inability to Map Decision Processes: It can be difficult to trace how decisions are made by AI models.
To mitigate these risks, DIFS encourages financial service providers to implement verification methods aimed at identifying errors and biases within their models and systems.
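As one illustration of what such a verification method might look like in practice, the sketch below computes the gap in approval rates between consumer groups in a model’s outputs. This is a hypothetical example, not taken from the bulletin; the sample data, group labels, and flag threshold are all assumptions for demonstration.

```python
# Hypothetical verification check: measure the approval-rate disparity
# across groups in a model's decisions (a demographic parity gap).
# Nothing here reflects an actual DIFS requirement or standard.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Illustrative model outputs: (group label, approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

gap = parity_gap(sample)
FLAG_THRESHOLD = 0.2  # illustrative tolerance, not a regulatory figure
if gap > FLAG_THRESHOLD:
    print(f"Review needed: approval-rate gap {gap:.2f}")
```

A real program would pair checks like this with documentation of the data, thresholds, and remediation steps, which is the kind of material DIFS indicates it may request during an examination.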
Implementation of AI Systems Programs
All regulated financial service providers are expected to develop, implement, and maintain a formal written AI systems program to ensure the responsible use of AI technologies. Even providers that do not formally use AI systems are expected to establish employee acceptable-use policies governing how such technology may be used within the organization.
This proactive approach by DIFS aims to foster a secure and compliant environment as the financial services industry increasingly adopts AI technologies. The emphasis on governance, risk management, and compliance underscores the critical need for accountability in leveraging AI for consumer-related decisions.