The EU AI Act Newsletter #78: Cutting Red Tape
This edition of the EU AI Act Newsletter covers recent developments and analyses of the EU's artificial intelligence law, with a particular focus on the ongoing legislative process and stakeholder feedback.
Legislative Process
Stakeholder feedback on AI definitions and prohibited practices: The European Commission has published a report analyzing feedback from two public consultations on the AI Act's regulatory obligations. These consultations aimed to clarify the definition of AI systems and the prohibited practices that have applied since February 2, 2025. The report reveals that industry stakeholders accounted for 47.2% of the nearly 400 replies, while citizen engagement was notably low at 5.74%. Participants called for clearer definitions of terms such as “adaptiveness” and “autonomy” and raised concerns that conventional software could inadvertently fall within the law's scope. Key issues identified included the prohibition of practices such as emotion recognition, social scoring, and real-time biometric identification.
AI literacy questions and answers: In response to the requirements under Article 4 of the AI Act, which became applicable on February 2, 2025, the European Commission released an extensive Q&A on AI literacy. Article 4 requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and other persons handling AI systems on their behalf. The guidance emphasizes assessing individuals' technical knowledge, experience, and education in the context of how the AI systems will be used.
No lead scientific advisor on AI yet despite dozens of applications: Reports indicate that the European Commission has not yet appointed a lead scientific advisor for its AI Office, despite numerous applications. The role is seen as crucial for ensuring a high level of scientific understanding in AI regulation, especially as the rules on general-purpose AI take effect on August 2, 2025. The recruitment process continues, with a preference for candidates from European countries.
Analyses
The EU should cut actual red tape, not AI safeguards: An op-ed argues that Europe's competitiveness in AI depends on reducing bureaucratic inefficiencies rather than loosening AI safety regulations. The article suggests that independent experts working with the EU AI Office can help convert regulatory principles into actionable practices, and contends that weakening safeguards would primarily benefit large tech companies rather than European startups, which may struggle under existing compliance burdens. The commentary stresses that such safety assessments are essential to prevent systemic risks, akin to established practices in sectors such as pharmaceuticals and aviation.
The value of the Code of Practice safety and security framework: A newsletter highlights the importance of the Code of Practice, which translates the AI Act’s essential requirements into actionable guidance for AI providers. The Code compiles best practices from leading AI companies, aiming to establish industry standards. The European Commission encourages adoption of this Code, which offers benefits like increased trust and streamlined enforcement for signatories while imposing additional scrutiny on non-signatories.
ABBA legend warns against diluted rights in the EU AI code: Björn Ulvaeus, a member of ABBA, has voiced concerns to MEPs regarding proposals from Big Tech that may weaken creative rights under the AI Act. He criticized the voluntary Code of Practice on General Purpose AI for failing to address transparency demands from the creative sector, advocating for a regulatory approach that protects original principles rather than compromising them.
Feedback
The newsletter concludes with a request for two minutes of reader feedback on the AI Act website, emphasizing the importance of user input in shaping resources that best meet stakeholder needs. The site aims to provide objective information on the EU AI Act and attracts over 150,000 users monthly, underscoring its role in the ongoing discourse on AI regulation.