Sample EU AI Act Checklist
This checklist provides a structured guide to EU AI Act compliance, focusing on risk management, governance, and accountability across the AI system lifecycle.
1. Risk Identification & Classification
- [ ] Determine whether the system falls into the unacceptable, high, limited, or minimal risk category (see the triage sketch after this list).
- [ ] Check whether it qualifies as general-purpose AI (GPAI), including GPAI with systemic risk, or as an agentic system with significant autonomy.
- [ ] Map jurisdictional scope (EU AI Act, GDPR, national laws, global markets).
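The sketch below shows how such a triage might be encoded for internal screening. It is a minimal sketch, not a legal determination: the category sets are illustrative excerpts, not the Act's full Article 5 and Annex III lists, and the `classify` helper is a hypothetical starting point.

```python
# Minimal risk-tier triage sketch. The category sets below are
# illustrative excerpts, not the full Article 5 / Annex III text;
# always confirm classification against the Regulation itself.

PROHIBITED_PRACTICES = {          # Article 5 (excerpt)
    "social_scoring",
    "subliminal_manipulation",
    "realtime_remote_biometric_id_public",
}
HIGH_RISK_DOMAINS = {             # Annex III (excerpt)
    "biometric_identification",
    "critical_infrastructure",
    "education_scoring",
    "employment_screening",
    "essential_services_access",
    "law_enforcement",
    "migration_border_control",
    "administration_of_justice",
}

def classify(practice: str, domain: str, interacts_with_humans: bool) -> str:
    """Return a provisional EU AI Act risk tier for a described use case."""
    if practice in PROHIBITED_PRACTICES:
        return "unacceptable"
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if interacts_with_humans:     # e.g. chatbots: transparency duties apply
        return "limited"
    return "minimal"

print(classify("none", "employment_screening", True))  # -> "high"
```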
2. Governance & Accountability
- [ ] Assign a clear accountable owner for AI compliance.
- [ ] Establish an AI governance framework (policies, committees, escalation paths).
- [ ] Define your role (provider, deployer, distributor, or importer) as set out in the EU AI Act, and document the obligations attached to it.
3. Data Management & Quality
- [ ] Ensure datasets are representative, relevant, and documented.
- [ ] Conduct bias and fairness audits during data preparation (a sample parity probe follows this list).
- [ ] Apply data protection by design (minimization, anonymization, lawful basis).
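As one concrete probe, the following sketch computes a demographic parity gap on a binary selection outcome. The column names, sample data, and the 0.10 threshold are assumptions; a real audit would span multiple metrics and protected attributes.

```python
# Hedged sketch of one fairness probe: the demographic parity gap on a
# binary outcome. Column names and the 0.10 threshold are assumptions.

from collections import defaultdict

def demographic_parity_gap(records, group_key="group", label_key="selected"):
    """Max difference in positive-outcome rate across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[label_key]
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

data = [  # toy sample; substitute your prepared dataset
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "A", "selected": 0}, {"group": "B", "selected": 1},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]
gap, rates = demographic_parity_gap(data)
print(rates, f"gap={gap:.2f}")
if gap > 0.10:  # assumed audit threshold
    print("Parity gap exceeds threshold; flag dataset for review")
```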
4. Design & Development
- [ ] Perform risk assessments at each development stage (see the risk-register sketch after this list).
- [ ] Document model design, training, and limitations.
- [ ] Implement security by design (adversarial robustness, penetration testing).
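A lightweight risk register makes per-stage assessments auditable. The sketch below is a hypothetical structure: the field names and the 5x5 likelihood-impact scale are assumptions, not requirements of the Act.

```python
# Hypothetical per-stage risk register entry. Field names and the
# 5x5 likelihood-impact scoring scale are assumptions.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    stage: str            # e.g. "data collection", "training", "deployment"
    description: str
    likelihood: int       # 1 (rare) .. 5 (almost certain)
    impact: int           # 1 (negligible) .. 5 (severe)
    mitigation: str
    owner: str
    reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskEntry("training", "label noise skews minority-class recall",
              likelihood=3, impact=4,
              mitigation="relabel sample; stratified evaluation",
              owner="ml-lead"),
]
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"[{entry.score:>2}] {entry.stage}: {entry.description}")
```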
5. Transparency & Documentation
- [ ] Maintain technical documentation (model cards, data sheets, intended use); a model-card sketch follows this list.
- [ ] Provide instructions for use to downstream deployers.
- [ ] Clearly state capabilities, limitations, and error rates to users.
- [ ] Log training data sources, model changes, and decision flows.
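A minimal, machine-readable model card might look like the sketch below. The schema is an assumption modeled loosely on common model-card templates, not a format the Act prescribes; for high-risk systems, Annex IV defines the actual technical-documentation content.

```python
# Minimal model-card sketch serialized to JSON. The schema is an
# assumption based on common model-card templates; every value below
# is an illustrative placeholder for your system's real documentation.

import json

model_card = {
    "model": "credit-screen-v2",          # hypothetical system name
    "version": "2.3.1",
    "intended_use": "pre-screening of loan applications; human review required",
    "out_of_scope": ["automated final decisions", "use on minors"],
    "training_data": "internal applications 2019-2023, see datasheet DS-7",
    "metrics": {"accuracy": 0.91, "false_positive_rate": 0.06},
    "known_limitations": ["degraded accuracy for thin-file applicants"],
    "error_rates_by_group": {"group_A": 0.05, "group_B": 0.08},
}
with open("model_card.json", "w") as fh:
    json.dump(model_card, fh, indent=2)
```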
6. Human Oversight & Control
- [ ] Ensure human-in-the-loop (HITL) or human-on-the-loop (HOTL) mechanisms are in place.
- [ ] Provide means to override or safely shut down the system (see the kill-switch sketch after this list).
- [ ] Train users in effective oversight and decision review.
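The following sketch illustrates one way to wire a confidence-gated human review queue together with an operator kill switch. The threshold value and the in-memory queue are assumptions; a production system would need durable queues and authenticated controls.

```python
# Sketch of a human-in-the-loop gate with a kill switch. The 0.85
# confidence threshold and in-memory queue are assumptions; the point
# is that the system can always be halted and that low-confidence
# outputs route to a person.

import threading

KILL_SWITCH = threading.Event()      # any authorized operator may set this
REVIEW_THRESHOLD = 0.85              # assumed; tune per risk assessment

def decide(prediction: str, confidence: float, review_queue: list) -> str | None:
    if KILL_SWITCH.is_set():
        raise RuntimeError("system halted by operator")
    if confidence < REVIEW_THRESHOLD:
        review_queue.append((prediction, confidence))  # defer to a human
        return None
    return prediction

queue: list = []
print(decide("approve", 0.97, queue))   # -> "approve"
print(decide("deny", 0.60, queue))      # -> None, queued for human review
KILL_SWITCH.set()                       # emergency stop
# any further call to decide(...) now raises RuntimeError
```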
7. Testing & Validation
- [ ] Conduct pre-deployment testing for accuracy, robustness, and safety.
- [ ] Simulate adversarial and misuse scenarios (a perturbation smoke test follows this list).
- [ ] Validate against compliance and ethical standards.
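One inexpensive pre-deployment check is a perturbation smoke test: nudge inputs slightly and verify predictions stay stable. In the sketch, `predict` is a toy stand-in for the real model, and the noise scale and 95% stability bar are assumptions.

```python
# Robustness smoke-test sketch: perturb inputs slightly and check that
# predictions stay stable. `predict` is a toy stand-in for your model;
# the noise scale and the 95% stability bar are assumptions.

import random

def predict(x: list[float]) -> int:
    return int(sum(x) > 0)           # toy stand-in for the real model

def stability_under_noise(inputs, scale=0.01, trials=100) -> float:
    stable = 0
    for x in inputs:
        base = predict(x)
        if all(predict([v + random.uniform(-scale, scale) for v in x]) == base
               for _ in range(trials)):
            stable += 1
    return stable / len(inputs)

test_inputs = [[0.5, -0.2], [1.0, 1.0], [-0.3, -0.4]]
rate = stability_under_noise(test_inputs)
print(f"stable on {rate:.0%} of inputs")
assert rate >= 0.95, "robustness below pre-deployment bar"
```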
8. Deployment & Monitoring
- [ ] Continuously monitor performance, data drift, and anomalies (a drift-check sketch follows this list).
- [ ] Log significant events for traceability and accountability.
- [ ] Collect user feedback and incident reports systematically.
- [ ] Establish a decommissioning process when systems are retired.
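For drift, one common heuristic is the Population Stability Index (PSI) between a training baseline and live scores, sketched below. The bucket count and the 0.2 alert threshold are conventional rules of thumb, not regulatory values.

```python
# Drift-check sketch using the Population Stability Index (PSI) between
# a training baseline and live traffic. The 10 buckets and 0.2 alert
# threshold are common rules of thumb, not regulatory requirements.

import math

def psi(expected: list[float], actual: list[float], buckets: int = 10) -> float:
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / buckets or 1.0
    def hist(xs):
        counts = [0] * buckets
        for x in xs:
            i = min(int((x - lo) / width), buckets - 1)
            counts[max(i, 0)] += 1
        # Laplace-smoothed proportions so the log is always defined
        return [(c + 1) / (len(xs) + buckets) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]       # training-time distribution
live = [0.3 + i / 200 for i in range(100)]     # shifted live scores
score = psi(baseline, live)
print(f"PSI = {score:.3f}")
if score > 0.2:                                # assumed alert threshold
    print("significant drift: trigger review and retraining assessment")
```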
9. Impact & Rights Assessment
- [ ] Conduct a Fundamental Rights Impact Assessment (FRIA) where the Act requires one (certain deployers of high-risk systems) or wherever the risk to rights is non-trivial.
- [ ] Map risks to privacy, equality, safety, freedom of expression, and employment.
- [ ] Document mitigation strategies for identified harms (a tracking-record sketch follows this list).
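Keeping the FRIA's risk-to-rights mapping in a machine-readable record makes mitigations trackable over time. The sketch below is a hypothetical internal format; it supports, but does not replace, the assessment itself.

```python
# Hypothetical internal FRIA record: maps identified risks to affected
# rights and mitigations. The fields and example entries are
# assumptions for internal tracking, not a format defined by the Act.

fria_records = [
    {
        "risk": "biased screening disadvantages protected groups",
        "affected_rights": ["equality", "non-discrimination"],
        "severity": "high",
        "mitigation": "quarterly bias audit; human review of all rejections",
        "residual_risk": "medium",
    },
    {
        "risk": "profiling reveals sensitive attributes",
        "affected_rights": ["privacy", "data protection"],
        "severity": "medium",
        "mitigation": "feature minimization; DPIA alignment with the DPO",
        "residual_risk": "low",
    },
]

unmitigated = [r for r in fria_records if r["residual_risk"] == "high"]
assert not unmitigated, "high residual rights risks must block deployment"
```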
10. Regulatory Compliance
- [ ] Verify your obligations under the EU AI Act, which scale with the system's risk tier.
- [ ] Ensure compliance with the GDPR, cybersecurity legislation, and consumer protection law.
- [ ] For high-risk systems, prepare conformity assessment documentation, the EU declaration of conformity, and CE marking.
- [ ] Track timelines for phased compliance obligations (a milestone tracker follows this list).
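A simple milestone tracker, sketched below, helps surface upcoming deadlines. The dates reflect the Act's published phased application schedule (Regulation (EU) 2024/1689); verify them against the Official Journal text before relying on them, and add your own internal readiness deadlines.

```python
# Milestone tracker sketch. Dates reflect the Act's published phased
# application schedule; verify against the Official Journal before
# relying on them.

from datetime import date

MILESTONES = {
    date(2025, 2, 2): "Prohibited-practice and AI-literacy provisions apply",
    date(2025, 8, 2): "GPAI model obligations and governance rules apply",
    date(2026, 8, 2): "Bulk of the Act, incl. Annex III high-risk duties, applies",
    date(2027, 8, 2): "High-risk rules for AI in regulated products (Annex I) apply",
}

today = date.today()
for deadline, duty in sorted(MILESTONES.items()):
    status = "PAST DUE" if deadline <= today else f"{(deadline - today).days} days"
    print(f"{deadline}  [{status:>9}]  {duty}")
```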
11. Security & Cyber-resilience
- [ ] Secure the model against data poisoning, adversarial inputs, and model extraction (see the rate-limiter sketch after this list).
- [ ] Protect infrastructure from cyber-attacks.
- [ ] Monitor for misuse and malicious repurposing of outputs.
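Per-client rate limiting is one inexpensive control against model extraction via bulk querying, sketched below. The window and quota values are assumptions; pair it with authentication, anomaly detection, and output monitoring.

```python
# Sketch of a per-client sliding-window rate limiter, one cheap control
# against model extraction via bulk querying. Window and quota values
# are assumptions; tune them to observed legitimate traffic.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 100                    # assumed per-client quota

_history: dict[str, deque] = defaultdict(deque)

def allow(client_id: str, now: float | None = None) -> bool:
    now = time.monotonic() if now is None else now
    q = _history[client_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                   # drop requests outside the window
    if len(q) >= MAX_REQUESTS:
        return False                  # throttle: possible extraction attempt
    q.append(now)
    return True

for i in range(105):
    ok = allow("client-42", now=float(i) * 0.1)
print(ok)  # False once the 100-request quota is hit within the window
```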
12. Culture & Training
- [ ] Provide responsible-AI training to developers, managers, and deployers.
- [ ] Build a culture of responsibility, questioning, and escalation.
- [ ] Encourage reporting of ethical or compliance concerns.