The EU AI Act: Key Developments and Insights
The EU AI Act continues to evolve, marking significant progress in the regulation of artificial intelligence within the European Union. This article examines recent developments, including the establishment of an expert scientific panel, the first anniversary of the EU AI Office, and international responses to the EU's regulatory framework.
Commission Seeks Experts for AI Scientific Panel
The European Commission is actively seeking to form a scientific panel composed of independent experts to assist in the implementation and enforcement of the AI Act. The panel will focus on general-purpose AI (GPAI) models and systems, providing advice on systemic risks, model classification, evaluation methods, and cross-border market surveillance. A total of 60 members will be selected for a renewable two-year term, with an emphasis on gender balance and representation from EU and EEA/EFTA countries. Candidates are required to hold a PhD or have equivalent experience in relevant fields such as AI impacts, risk assessment, and cybersecurity. Applications are due by September 14.
Celebrating the EU AI Office’s First Anniversary
As the EU AI Office celebrates its first anniversary, it has expanded to over 100 experts specializing in AI policy, research, regulation, and international cooperation. Key achievements include:
- Implementing the AI Act with practical guidelines and governance structures.
- Issuing definitions and prohibitions regarding AI systems.
- Establishing an AI literacy repository.
Looking ahead, the office plans to publish a Code of Practice for GPAI, developed with input from over 1,000 experts, which is to be assessed by August 2025.
International Reactions to the EU AI Act
A coalition of AI researchers and industry representatives, including Nobel laureates, has urged EU leaders to maintain stringent GPAI rules that serve the interests of European businesses and citizens. Their letter emphasizes that the EU can foster innovation without compromising on health and safety. They recommend:
- Mandatory third-party testing for systemic risk models.
- Robust review mechanisms to adapt to emerging risks.
- Strengthening the AI Office’s enforcement capabilities.
In the U.S., the regulatory environment is becoming increasingly complex, with nearly 700 AI-related bills introduced in 2024 alone. States like Colorado and Texas are adopting comprehensive approaches similar to the EU AI Act, while major tech companies are lobbying for federal regulation that would harmonize state laws.
Kazakhstan’s AI Law Inspired by the EU
Kazakhstan is making strides toward becoming Central Asia’s first nation to regulate AI comprehensively, drawing inspiration from the EU AI Act. The draft ‘Law on Artificial Intelligence’ aims to develop a human-centric regulatory framework but faces challenges, including inadequate algorithmic transparency and insufficient enforcement institutions.
Generative AI and Its Implications
The European Commission’s Joint Research Centre has published a report on the impact of generative AI (GenAI) within the EU. Many GenAI systems fall under the limited risk category, which requires providers to ensure users are aware they are interacting with machines. Prohibited practices include harmful AI-based manipulation, such as chatbots impersonating loved ones. Because current GenAI models exhibit the capabilities of general-purpose AI, they are also subject to the relevant GPAI obligations outlined in the AI Act.
In conclusion, the EU AI Act represents a significant step forward in AI regulation, fostering a balanced approach to innovation and safety. The establishment of expert panels, the achievements of the EU AI Office, and international responses highlight the ongoing dialogue and collaboration required to navigate the complexities of AI governance.