Experts Needed: Join the EU’s AI Scientific Panel

The EU AI Act: Key Developments and Insights

The EU AI Act continues to evolve, marking significant progress in the regulation of artificial intelligence within the European Union. This article examines recent developments, including the establishment of a scientific expert panel, the first anniversary of the EU AI Office, and international responses to the EU’s regulatory framework.

Commission Seeks Experts for AI Scientific Panel

The European Commission is seeking to form a scientific panel of independent experts to assist in the implementation and enforcement of the AI Act. The panel will focus on general-purpose AI (GPAI) models and systems, providing advice on systemic risks, model classification, evaluation methods, and cross-border market surveillance. A total of 60 members will be selected for a renewable two-year term, with an emphasis on gender balance and representation from EU and EEA/EFTA countries. Candidates must hold a PhD or have equivalent experience in relevant fields such as AI impacts, risk assessment, and cybersecurity. Applications are due by September 14.

Celebrating the EU AI Office’s First Anniversary

As the EU AI Office celebrates its first anniversary, it has expanded to over 100 experts specializing in AI policy, research, regulation, and international cooperation. Key achievements include:

  • Implementing the AI Act with practical guidelines and governance structures.
  • Issuing definitions and prohibitions regarding AI systems.
  • Establishing an AI literacy repository.

Looking ahead, the office plans to publish a Code of Practice for GPAI, developed with input from over 1,000 experts, to be assessed by August 2025.

International Reactions to the EU AI Act

A coalition of AI researchers and industry representatives, including Nobel laureates, has urged EU leaders to maintain stringent GPAI rules that serve the interests of European businesses and citizens. Their letter emphasizes that the EU can foster innovation without compromising on health and safety. They recommend:

  • Mandatory third-party testing for systemic risk models.
  • Robust review mechanisms to adapt to emerging risks.
  • Strengthening the AI Office’s enforcement capabilities.

In the U.S., the regulatory environment is becoming increasingly complex, with nearly 700 AI-related bills introduced in 2024 alone. States like Colorado and Texas are adopting comprehensive approaches similar to the EU AI Act, while major tech companies are lobbying for federal regulations to unify state laws.

Kazakhstan’s AI Law Inspired by the EU

Kazakhstan is making strides toward becoming Central Asia’s first nation to regulate AI comprehensively, drawing inspiration from the EU AI Act. The draft ‘Law on Artificial Intelligence’ aims to develop a human-centric regulatory framework but faces challenges, including inadequate algorithmic transparency and insufficient enforcement institutions.

Generative AI and Its Implications

The European Commission’s Joint Research Centre has published a report on the impact of generative AI (GenAI) within the EU. Many GenAI systems fall under the limited risk category, requiring providers to ensure users are aware they are interacting with machines. Prohibited practices include harmful AI-based manipulation, such as chatbots impersonating loved ones. Because current GenAI models exhibit general-purpose AI capabilities, they are also subject to the relevant GPAI obligations set out in the AI Act.

In conclusion, the EU AI Act represents a significant step forward in AI regulation, fostering a balanced approach to innovation and safety. The establishment of expert panels, the achievements of the EU AI Office, and international responses highlight the ongoing dialogue and collaboration required to navigate the complexities of AI governance.

More Insights

G7 Summit Fails to Address Urgent AI Governance Needs

At the recent G7 summit in Canada, discussions primarily focused on economic opportunities related to AI, while governance issues for AI systems were notably overlooked. This shift towards...

Africa’s Bold Move Towards Sovereign AI Governance

At the Internet Governance Forum (IGF) 2025 in Oslo, African leaders called for urgent action to develop sovereign and ethical AI systems tailored to local needs, emphasizing the necessity for...

Top 10 Compliance Challenges in AI Regulations

As AI technology advances, the challenge of establishing effective regulations becomes increasingly complex, with different countries adopting varying approaches. This regulatory divergence poses...

China’s Unique Approach to Embodied AI

China's approach to artificial intelligence emphasizes the development of "embodied AI," which interacts with the physical environment, leveraging the country's strengths in manufacturing and...

Workday Sets New Standards in Responsible AI Governance

Workday has recently received dual third-party accreditations for its AI Governance Program, highlighting its commitment to responsible and transparent AI. Dr. Kelly Trindle, Chief Responsible AI...

AI Adoption in UK Finance: Balancing Innovation and Compliance

A recent survey by Smarsh reveals that while UK finance workers are increasingly adopting AI tools, there are significant concerns regarding compliance and oversight. Many employees express a desire...

AI Ethics Amid US-China Tensions: A Call for Global Standards

As the US-China tech rivalry intensifies, a UN agency is advocating for global AI ethics standards, highlighted during UNESCO's Global Forum on the Ethics of Artificial Intelligence in Bangkok...

Mastering Compliance with the EU AI Act Through Advanced DSPM Solutions

The EU AI Act emphasizes the importance of compliance for organizations deploying AI technologies, with Zscaler’s Data Security Posture Management (DSPM) playing a crucial role in ensuring data...

US Lawmakers Push to Ban Adversarial AI Amid National Security Concerns

A bipartisan group of U.S. lawmakers has introduced the "No Adversarial AI Act," aiming to ban the use of artificial intelligence tools from countries like China, Russia, Iran, and North Korea in...