Impacts of the EU AI Act on Video Game Development

This article provides a brief overview of the impact on video game developers of Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonized rules on artificial intelligence (the "AI Act"). As AI systems become increasingly integrated into video games, from generating backgrounds to driving non-player characters (NPCs), it is essential for developers to understand the obligations this regulation imposes.

The AI Act entered into force on 1 August 2024 and applies in stages: the prohibitions and the AI literacy obligation apply from 2 February 2025, and most other provisions from 2 August 2026. The application of the provisions of the AI Act predominantly depends on two factors: the role of the video game developer and the AI risk level.

The Role of the Video Game Developer

Article 2 of the AI Act delineates the scope of the regulation, specifying who may be subject to it. Video game developers can fall under two categories:

  • Providers of AI systems: These are developers who place their AI systems on the EU market or put them into service under their own name or trademark, whether for payment or free of charge (Article 3(3) AI Act).
  • Deployers of AI systems: These are persons using AI systems under their authority in the course of a professional activity (Article 3(4) AI Act). The AI Act reaches deployers established or located in the EU, as well as third-country deployers where the output of the AI system is used in the EU (Article 2(1) AI Act).

Thus, video game developers will be considered (i) providers if they develop their own AI system and (ii) deployers if they integrate an existing third-party AI system into their video games. Note, however, that a deployer that markets a high-risk AI system under its own name or trademark, or substantially modifies it, may itself be treated as a provider (Article 25 AI Act).

The AI Risk Level and Related Obligations

The AI Act follows a risk-based approach, classifying AI systems into four categories. Obligations on economic operators vary depending on the level of risk posed by the AI systems used:

  • AI systems with unacceptable risks: These are prohibited (Article 5 AI Act). In the video game sector, notable prohibitions include the provision or use of AI systems that deploy manipulative techniques or exploit a person's vulnerabilities in a way that causes significant harm. For example, using AI-driven NPCs to manipulate players into increased in-game spending would be prohibited.
  • High-risk AI systems: These trigger strict obligations for providers and, to a lesser extent, for deployers (Articles 6, 7 and Annex III AI Act). Relevant high-risk AI systems in video games include those posing a significant risk to the health, safety, or fundamental rights of natural persons, in particular AI systems used for emotion recognition (Annex III(1)(c) AI Act). Such systems could enhance interactions between players and NPCs, eliciting genuine emotions such as empathy, compassion, or anger.
  • The obligations for providers of high-risk AI systems include implementing quality and risk management systems, maintaining appropriate data governance, ensuring transparency, and cooperating with the competent authorities. Deployers of high-risk systems must operate the system in accordance with the provider's instructions, ensure human oversight, and monitor its operation (a minimal logging sketch follows this list).
  • AI systems with specific transparency risk: This category includes chatbots, content-generating AI, and emotion recognition systems, and triggers limited obligations (Article 50 AI Act). Providers must ensure players are informed that they are interacting with an AI system, while deployers must disclose that content has been artificially generated or manipulated, particularly in the case of deep fakes (a disclosure sketch also follows this list).
  • AI systems with minimal risk: These are not regulated under the AI Act and include all other AI systems that do not fall into the aforementioned categories.
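
By way of illustration of the deployer duties above, the TypeScript sketch below wraps a hypothetical third-party emotion recognition model with an operations log and a simple human-oversight gate. Everything here (the vendorEmotionModel stub, the 0.8 confidence threshold, the escalation rule) is an assumption made for the example; the AI Act prescribes the duties, not a particular implementation.

```typescript
// Hypothetical deployer-side wrapper around a third-party emotion
// recognition model: logs every operation and gates sensitive or
// low-confidence outputs behind human review. Illustrative only.

interface EmotionResult {
  emotion: "empathy" | "compassion" | "anger" | "neutral";
  confidence: number; // 0..1, as reported by the model
}

interface OperationLogEntry {
  timestamp: string;
  playerId: string;
  result: EmotionResult;
  escalatedToHuman: boolean;
}

const operationLog: OperationLogEntry[] = [];

// Stand-in for the vendor's model call; replace with the real API.
function vendorEmotionModel(frame: Uint8Array): EmotionResult {
  return { emotion: "neutral", confidence: 0.9 };
}

function classifyWithOversight(
  playerId: string,
  frame: Uint8Array,
): EmotionResult | null {
  const result = vendorEmotionModel(frame);

  // One possible oversight rule: defer to a human reviewer when the
  // model is unsure or reports a sensitive emotion.
  const escalate = result.confidence < 0.8 || result.emotion === "anger";

  operationLog.push({
    timestamp: new Date().toISOString(),
    playerId,
    result,
    escalatedToHuman: escalate,
  });

  return escalate ? null : result; // null => await human review
}

console.log(classifyWithOversight("player-42", new Uint8Array(0)));
```

In practice, the provider's instructions for use, a dedicated review queue, and a retention policy for the log would shape how such a wrapper is actually built.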
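In the same spirit, here is a minimal sketch of the Article 50 transparency duties: telling a player that an NPC is AI-driven and labelling AI-generated dialogue. The notice wording and data shapes are likewise assumptions; the regulation sets the disclosure goal, not the mechanism.

```typescript
// Hypothetical Article 50-style disclosures in a game client.
// Notice text and data shapes are illustrative assumptions.

interface DialogueLine {
  speaker: string;
  text: string;
  aiGenerated: boolean;
}

// Shown before a player's first interaction with an AI-driven NPC.
function aiInteractionNotice(npcName: string): string {
  return `${npcName} is powered by an AI system; you are not talking to a human.`;
}

// Tag generated content so the disclosure travels with the asset.
function labelGeneratedContent(line: DialogueLine): string {
  return line.aiGenerated ? `${line.text} [AI-generated]` : line.text;
}

const line: DialogueLine = {
  speaker: "Innkeeper",
  text: "Welcome, traveller!",
  aiGenerated: true,
};
console.log(aiInteractionNotice(line.speaker));
console.log(labelGeneratedContent(line));
```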

The European Commission has stated that AI-enabled video games generally face no obligations under the AI Act, although companies may voluntarily adopt additional codes of conduct. In specific cases such as those outlined above, however, the AI Act does apply. Moreover, the AI literacy obligation applies irrespective of the risk level, including minimal risk.

The AI Literacy Obligation

The AI literacy obligation applies from 2 February 2025 (Article 113(a) AI Act) to both providers and deployers (Article 4 AI Act). AI literacy encompasses the skills, knowledge, and understanding needed to deploy AI systems in an informed manner and to be aware of their opportunities and risks.

The goal is to ensure that video game developers’ staff can make informed decisions regarding AI, considering their technical knowledge, experience, education, and the context in which the AI system is utilized.

While the AI Act does not specify how compliance with the AI literacy obligation should be achieved, several practical steps can be taken, including the following (a short gap-assessment sketch follows the list):

  • Identifying which employees currently use AI, or plan to use or develop it in the near future.
  • Assessing employees' current AI knowledge through surveys or quizzes to identify gaps.
  • Providing training activities and materials on AI basics, emphasizing relevant concepts, rules, and obligations.
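
As an illustration of the assessment step, the sketch below aggregates quiz scores per topic and flags employees who fall below a pass threshold. The topics, the threshold, and the data shapes are assumptions; the AI Act does not mandate any particular assessment format.

```typescript
// Hypothetical sketch: flag AI-literacy gaps from quiz results.
// Topics, threshold, and data shapes are illustrative assumptions.

type Topic = "AI basics" | "AI Act obligations" | "transparency rules";

interface QuizResult {
  employee: string;
  topic: Topic;
  score: number; // 0..100
}

const PASS_THRESHOLD = 70;

function findGaps(results: QuizResult[]): Map<string, Topic[]> {
  const gaps = new Map<string, Topic[]>();
  for (const r of results) {
    if (r.score < PASS_THRESHOLD) {
      const topics = gaps.get(r.employee) ?? [];
      topics.push(r.topic);
      gaps.set(r.employee, topics);
    }
  }
  return gaps; // employee -> topics needing training
}

const demo: QuizResult[] = [
  { employee: "alice", topic: "AI basics", score: 85 },
  { employee: "alice", topic: "AI Act obligations", score: 55 },
  { employee: "bob", topic: "transparency rules", score: 62 },
];
console.log(findGaps(demo)); // Map { alice => [...], bob => [...] }
```

The output of such an assessment can then be used to target the training activities mentioned in the last step.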

Conclusion

The regulation of AI systems in the EU has potentially significant implications for video game developers, depending on how AI is used within a given game. As the AI Act evolves to keep pace with new technologies, developers should stay informed and take a proactive approach to understanding and complying with their obligations.
