EIOPA’s Insights on AI Governance in Insurance

EIOPA’s Draft Opinion on AI Governance and Risk Management

On February 12, 2025, the European Insurance and Occupational Pensions Authority (EIOPA) published a consultation on its draft opinion on artificial intelligence (AI) governance and risk management. The Opinion is addressed to supervisory authorities and covers the activities of both insurance undertakings and insurance intermediaries (collectively referred to as Undertakings) that use AI systems within the insurance value chain.

Objective of the Opinion

The main objective of the Opinion is to clarify how the core principles and requirements of existing insurance sectoral legislation apply to AI systems that are neither prohibited nor classified as high-risk under Regulation (EU) 2024/1689, commonly known as the AI Act. The guidance aims to assist Undertakings in applying that legislation, much of which was adopted before AI systems became prevalent, to the AI systems they use today.

Key Principles and Responsibilities

The Opinion establishes high-level supervisory expectations regarding the governance and risk management principles that Undertakings should adopt to utilize AI systems responsibly. The following key points summarize the expectations:

  • Risk Assessment: Undertakings must evaluate the risks associated with various AI use cases, acknowledging that different levels of risk exist among those that are not deemed prohibited or high-risk under the AI Act.
  • Proportionate Measures: Following risk assessment, Undertakings should implement tailored governance and risk management measures to ensure responsible AI usage.
  • Legal Compliance: In accordance with Article 41 of Directive 2009/138/EC (Solvency II) and other relevant directives, Undertakings should establish governance and risk management systems focusing on:
    • Fairness and ethics
    • Data governance
    • Documentation and record-keeping
    • Transparency and explainability
    • Human oversight
    • Accuracy, robustness, and cybersecurity
  • Policy Development: Undertakings are encouraged to define and document their AI usage policies, which should be regularly reviewed for effectiveness.
  • Accountability Frameworks: Implementing accountability frameworks is recommended, regardless of whether AI systems are developed internally or by third parties.
  • Customer-Centric Approach: EIOPA advocates for a customer-centric approach to AI governance, ensuring fair treatment of customers in line with existing regulations.
  • Data Integrity: The Opinion emphasizes the necessity of utilizing complete, accurate, and unbiased data for training AI systems, along with regular monitoring and auditing of AI outcomes.
  • Redress Mechanisms: Adequate mechanisms should be established to allow customers to seek redress if adversely affected by AI systems.
  • Internal Controls: Effective compliance and risk management programs should include:
    • Designated individuals responsible for AI system oversight
    • Compliance and audit functions
    • A data protection officer ensuring adherence to data protection regulations
    • Training for staff to enhance human oversight
  • Performance Metrics: AI systems should demonstrate consistent performance in terms of accuracy, robustness, and cybersecurity, with metrics established to measure that performance (see the illustrative sketch following this list).
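
The sketch below is purely illustrative: a minimal Python example of how the data integrity and performance metrics expectations above might translate into an Undertaking's own monitoring and record-keeping, with failed checks escalated for human oversight. It is not part of EIOPA's Opinion, and every class name, metric, and threshold in it is a hypothetical assumption.

```python
"""Illustrative sketch only: one possible way an Undertaking might record
AI system metric checks and flag them for human review. Names, metrics,
and thresholds are hypothetical and not taken from EIOPA's Opinion."""

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class MetricCheck:
    name: str         # e.g. "accuracy" or "robustness_under_noise"
    value: float      # latest measured value for the AI system
    threshold: float  # minimum acceptable value set by the Undertaking

    def passed(self) -> bool:
        return self.value >= self.threshold


@dataclass
class AIMonitoringRecord:
    """Record-keeping entry supporting documentation and human oversight."""
    system_name: str
    use_case: str
    checks: list[MetricCheck]
    reviewed_by: str  # designated individual responsible for oversight
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def requires_escalation(self) -> bool:
        # Any failed check is flagged for human review and possible remediation.
        return any(not check.passed() for check in self.checks)


if __name__ == "__main__":
    record = AIMonitoringRecord(
        system_name="claims-triage-model",       # hypothetical AI use case
        use_case="prioritising motor claims",
        checks=[
            MetricCheck("accuracy", value=0.91, threshold=0.90),
            MetricCheck("robustness_under_noise", value=0.84, threshold=0.85),
        ],
        reviewed_by="model-risk-officer",
    )
    print("Escalate to human oversight:", record.requires_escalation())
```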

Conclusion

The Opinion does not propose new legislation or amendments to existing laws; rather, it provides sector-specific guidance on how AI systems should operate under current EU regulation. EIOPA is collaborating with the European Commission’s AI Office, and further commentary may follow.

Responses to the draft Opinion must be submitted by May 12, 2025, after which EIOPA will consider feedback and revise the Opinion as necessary.
