Are You Ready to Comply With the EU AI Act?
The first set of rules under the European Union AI Act, banning AI practices deemed to pose an unacceptable risk, took effect on Sunday. As of this week, companies are barred from deploying AI-driven emotion recognition systems in workplaces and schools.
The ban is part of the EU's phased rollout of the AI Act, the first comprehensive, binding regulation of AI development and deployment. Non-compliance can trigger fines of up to 35 million euros or 7% of a company's global annual turnover, whichever is higher.
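To see what that penalty structure means in practice, here is a minimal sketch in Python. The "whichever is higher" rule and the headline figures come from the Act; the function name and the sample turnover are illustrative assumptions, not legal advice.

```python
def max_eu_ai_act_fine(annual_turnover_eur: float) -> float:
    """Upper bound on a fine for prohibited-practice violations:
    the greater of a fixed cap or a share of worldwide annual turnover.
    Illustration only; actual fines are set case by case."""
    FIXED_CAP_EUR = 35_000_000   # EUR 35 million
    TURNOVER_SHARE = 0.07        # 7% of global annual turnover
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70M)
# exceeds the EUR 35M fixed cap, so the percentage figure applies.
print(f"{max_eu_ai_act_fine(1_000_000_000):,.0f}")  # 70,000,000
```

For smaller companies the fixed cap dominates, which is why the percentage-based ceiling matters most to large multinationals.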
The Phased Approach
The phased rollout gives companies time to prepare for compliance, a design that has been received positively in the industry. The strategy is also expected to ripple outward, shaping AI regulation in other jurisdictions much as the rollout of the General Data Protection Regulation (GDPR) did for data protection law.
Implications for Companies
The EU's risk-based approach is already influencing new legislation elsewhere, and it requires companies to be far more specific about the intended purpose and outcomes of their AI systems. That requirement places significant weight on AI governance and accountability.
Key Challenges
As companies navigate the implications of the EU AI Act, they face several challenges, including:
- Understanding the nuances of the Act's phased implementation timeline.
- Mapping which of their AI systems fall into the banned and high-risk categories, and meeting the compliance obligations that follow (a minimal triage sketch appears after this list).
- Preparing for regulatory scrutiny and ensuring transparency in how AI systems operate.
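To make that triage concrete, here is a minimal, hypothetical sketch of how a compliance team might tag systems by risk tier during an internal AI inventory. The tiers mirror the Act's risk categories; the `AISystem` record and the keyword-based rules are illustrative assumptions and no substitute for legal review of the Act's annexes.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright (e.g., workplace emotion recognition)
    HIGH = "high"              # allowed, but subject to strict obligations
    LIMITED = "limited"        # transparency duties apply
    MINIMAL = "minimal"        # largely unregulated

@dataclass
class AISystem:
    name: str
    purpose: str               # free-text description of intended outcome
    used_in_workplace: bool = False
    used_in_education: bool = False

def triage(system: AISystem) -> RiskTier:
    """Rough first-pass classification for an internal inventory.
    Real classification requires legal analysis, not keyword matching."""
    purpose = system.purpose.lower()
    if "emotion recognition" in purpose and (
        system.used_in_workplace or system.used_in_education
    ):
        return RiskTier.PROHIBITED
    if any(kw in purpose for kw in ("hiring", "credit scoring", "biometric")):
        return RiskTier.HIGH
    if "chatbot" in purpose:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage(AISystem("MoodCam", "emotion recognition for staff monitoring",
                      used_in_workplace=True)))  # RiskTier.PROHIBITED
```

An inventory pass like this is only a starting point, but it forces teams to document each system's intended purpose, which is exactly the specificity the Act demands.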
In conclusion, as the EU AI Act takes effect, organizations must prioritize compliance and understand the broader implications for AI governance. These regulations mark a significant step toward ensuring that AI systems are developed and deployed responsibly, with risks to society kept to a minimum.