Overview of Quebec’s AMF Guideline on AI Use in Financial Institutions
The Autorité des marchés financiers (AMF) of Quebec released a comprehensive guideline governing the deployment of artificial intelligence (AI) systems by authorized insurers, financial services cooperatives, trust companies, and deposit institutions. Effective from May 1, 2027, the guideline outlines governance, risk management, lifecycle, and client‑fairness requirements to ensure responsible AI usage.
Governance Structure
The guideline mandates a clear division of responsibilities between the board of directors and senior management:
- Board duties include ensuring a corporate culture that prioritizes responsible AI and confirming that board members possess sufficient competence to understand AI‑related risks.
- Senior management duties involve implementing governance mechanisms, maintaining up‑to‑date AI knowledge, and designating a senior executive accountable for all AI systems.
Risk Management and Rating
Financial institutions must adopt a risk-based classification for each AI system; the assigned risk rating drives:
- Inclusion in a centralized directory of AI systems.
- Periodic review and updating of risk ratings.
- Approval procedures and monitoring activities tailored to the rating.
This approach ensures that risk considerations remain at the core of decision‑making throughout the AI system’s lifecycle.
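The centralized directory and risk-tiered approvals described above can be sketched as a simple data model. The record fields, rating labels, and approval steps below are illustrative assumptions, not terms taken from the guideline:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a hypothetical centralized AI system directory."""
    name: str
    risk_rating: str   # assumed tiers: "low", "medium", "high"
    last_review: date  # supports periodic review of the rating

# Approval and monitoring intensity scale with the risk rating.
APPROVAL_STEPS = {
    "low": ["manager sign-off"],
    "medium": ["manager sign-off", "model validation review"],
    "high": ["manager sign-off", "model validation review",
             "senior-executive approval", "human review of outcomes"],
}

def required_approvals(record: AISystemRecord) -> list[str]:
    """Return the approval steps implied by a system's risk rating."""
    return APPROVAL_STEPS[record.risk_rating]
```

In this sketch, raising a system's rating automatically lengthens its approval chain, which is one way to keep risk considerations at the center of lifecycle decisions.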
AI System Lifecycle Requirements
The guideline defines seven lifecycle stages, each with specific expectations:
1. Choosing to use an AI system: Document the organizational needs the system addresses and reassess its suitability at each revalidation, in light of its risk rating.
2. Training data: Ensure data quality for both the training and deployment phases.
3. Procurement or development: Factor in the risk rating and explainability when selecting or building AI solutions.
4. Validation: Implement validation processes that assess explainability and cybersecurity, and include controls for bias, discrimination, dynamic adjustment, hallucinations, and intellectual property concerns.
5. Approval: Apply mitigation measures aligned with the institution's risk appetite; for high-risk AI systems, require human review of outcomes.
6. Deployment: Conduct comprehensive risk assessments, including cyber-risk and infrastructure vulnerability analyses, before going live.
7. Monitoring: Maintain ongoing performance monitoring, with particular attention to autonomous AI and dynamically adjusting models.
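One way to read the monitoring expectation: track a rolling window of a performance metric and alert when it drifts from a validated baseline. The class name, metric, and thresholds below are assumptions for illustration, not prescribed controls:

```python
from collections import deque

class DriftMonitor:
    """Flag when a monitored metric drifts below an accepted band."""

    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline = baseline          # value accepted at validation
        self.tolerance = tolerance        # allowed degradation
        self.scores = deque(maxlen=window)  # rolling window of recent scores

    def record(self, score: float) -> bool:
        """Add one observation; return True when an alert should fire."""
        self.scores.append(score)
        mean = sum(self.scores) / len(self.scores)
        return (self.baseline - mean) > self.tolerance
```

For dynamically adjusting models, a check like this would run continuously rather than at scheduled revalidations, since the model's behavior can shift between reviews.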
Sound Commercial Practices and Fair Treatment of Clients
When AI interacts directly with clients, the guideline requires:
- Integration of AI considerations into the institution’s code of ethics.
- Identification and mitigation of variables that could cause discriminatory outcomes.
- Transparent disclosure to clients that they are engaging with an AI system.
- Provision of clear mechanisms for clients to request human assistance promptly.
- Clear labeling of AI‑generated content.
- Simple, understandable explanations for AI‑influenced decisions.
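Taken together, the client-facing requirements above resemble a wrapper applied to each AI response: disclose the AI interaction, label generated content, and honor requests for human assistance. Everything here (the function name, the keyword trigger, the field names) is a hypothetical illustration:

```python
AI_DISCLOSURE = "You are interacting with an automated AI system."

def wrap_reply(client_message: str, ai_reply: str, first_turn: bool) -> dict:
    """Apply disclosure, labeling, and human-handoff rules to one AI reply.

    Sketch only: a real implementation would detect handoff requests
    far more robustly than a single keyword match.
    """
    wants_human = "human" in client_message.lower()
    return {
        # Disclose the AI interaction up front (first turn only here).
        "disclosure": AI_DISCLOSURE if first_turn else None,
        # Route to a person instead of answering when asked.
        "handoff_to_human": wants_human,
        "reply": None if wants_human else ai_reply,
        # Label machine-generated content explicitly.
        "content_label": "AI-generated",
    }
```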
Implementation Timeline
The guideline was published on April 7, 2026 after a public consultation in fall 2025 and will become binding on May 1, 2027. Institutions are encouraged to apply the principles proportionally, taking into account their size, complexity, and risk profile.
Relation to Federal Regulations
The AMF’s initiative aligns with broader Canadian efforts, notably the Office of the Superintendent of Financial Institutions’ Guideline E-23 on Model Risk Management, which takes effect on May 1, 2027 and brings AI and machine learning models within its scope.