Impacts of the EU AI Act on Video Game Development

This article provides a brief overview of how Regulation (EU) 2024/1689 of 13 June 2024, laying down harmonised rules on artificial intelligence (the "AI Act"), affects video game developers. As AI systems become increasingly integrated into video games, from generating backgrounds to driving non-player characters (NPCs), it is essential for developers to understand the obligations the regulation imposes.

The AI Act entered into force on 1 August 2024, and its provisions become applicable in stages, with most obligations taking effect by 2 August 2026. Which provisions apply to a given video game developer depends predominantly on two factors: the developer's role under the regulation and the risk level of the AI system.

The Role of the Video Game Developer

Article 2 of the AI Act delineates the scope of the regulation, specifying who may be subject to it. Video game developers can fall under two categories:

  • Providers of AI systems: These are developers who place their AI systems on the EU market or put them into service under their own name or trademark, whether for payment or free of charge (Article 3(3) AI Act).
  • Deployers of AI systems: These are users of AI systems in the course of a professional activity (Article 3(4) AI Act), where they are established in the EU or the output produced by the AI system is used in the EU (Article 2(1) AI Act).

Thus, video game developers will be considered (i) providers if they develop their own AI system and (ii) deployers if they integrate existing AI systems made by a third party into their video games.

The AI Risk Level and Related Obligations

The AI Act follows a risk-based approach, classifying AI systems into four categories. The obligations on economic operators vary with the level of risk posed by the AI systems used:

  • AI systems with unacceptable risks: These are prohibited (Article 5 AI Act). In the video game sector, notable prohibitions include the provision or use of AI systems that deploy manipulative techniques or exploit people’s vulnerabilities, causing significant harm. For example, it is prohibited to use AI-generated NPCs to manipulate players towards increased spending in a game.
  • High-risk AI systems: These trigger strict obligations for providers and, to a lesser extent, for deployers (Articles 6, 7 and Annex III AI Act). Relevant high-risk AI systems in video games include those posing a significant risk to the health, safety, or fundamental rights of natural persons, particularly AI systems used for emotion recognition (Annex III(1)(c) AI Act). Such systems could enhance interactions between players and NPCs, eliciting genuine emotions like empathy, compassion, or anger. Providers of high-risk AI systems must implement quality and risk management systems, maintain appropriate data governance, ensure transparency, and cooperate with the relevant authorities; deployers must operate the system per the provider's instructions, ensure human oversight, and monitor its operation.
  • AI systems with specific transparency risk: This category includes chatbots, content-generating AI, and emotion recognition systems, and triggers limited obligations (Article 50 AI Act). Providers must ensure players are informed that they are interacting with an AI system, while deployers must disclose that content is AI-generated, particularly in the case of deepfakes (see the sketch after this list).
  • AI systems with minimal risk: All other AI systems fall into this category and face no specific obligations under the AI Act.
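
To make the transparency obligation concrete, the sketch below shows one way a studio might surface a one-time notice when a player first interacts with an AI-driven NPC. It is a minimal, hypothetical illustration in TypeScript: the NpcSession type, the discloseIfNeeded function, and the notice wording are assumptions of this article, not requirements prescribed by the AI Act or by any engine API.

    // Hypothetical sketch of an Article 50-style disclosure for an AI-driven NPC.
    // All names and the notice text are illustrative assumptions.
    interface NpcSession {
      npcId: string;
      aiDriven: boolean;        // dialogue produced by a generative AI system
      disclosureShown: boolean; // has the player been informed yet?
    }

    // Show a one-time notice the first time the player talks to an AI-driven NPC.
    function discloseIfNeeded(session: NpcSession, showNotice: (msg: string) => void): void {
      if (session.aiDriven && !session.disclosureShown) {
        showNotice("This character's dialogue is generated by an AI system.");
        session.disclosureShown = true;
      }
    }

    // Example usage: wire the disclosure into the dialogue loop.
    const session: NpcSession = { npcId: "innkeeper_01", aiDriven: true, disclosureShown: false };
    discloseIfNeeded(session, (msg) => console.log(`[NOTICE] ${msg}`));

How and where such a notice appears is a design decision; the legal requirement is only that the information is provided in a clear and distinguishable manner, at the latest at the time of the first interaction (Article 50(5) AI Act).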

The European Commission has indicated that AI-enabled video games generally face no obligations under the AI Act, although companies may voluntarily adopt additional codes of conduct. In the specific cases outlined above, however, the AI Act does apply. Moreover, the AI literacy obligation applies irrespective of the risk level, including to minimal-risk systems.

The AI Literacy Obligation

The AI literacy obligation applies from 2 February 2025 (Article 113(a) AI Act) to both providers and deployers (Article 4 AI Act). AI literacy encompasses the skills, knowledge, and understanding necessary to deploy AI systems in an informed manner and to be aware of their opportunities and risks.

The goal is to ensure that video game developers’ staff can make informed decisions regarding AI, considering their technical knowledge, experience, education, and the context in which the AI system is utilized.

While the AI Act does not specify how compliance with the AI literacy obligation should be achieved, several practical steps can be taken (a simple tracking sketch follows the list):

  • Determining which employees currently use AI, or plan to use or develop it in the near future.
  • Assessing employees’ current AI knowledge to identify gaps through surveys or quizzes.
  • Providing training activities and materials on AI basics, emphasizing relevant concepts, rules, and obligations.
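
As a purely illustrative example, the sketch below shows how the three steps above might be tracked in practice. The LiteracyRecord type, the 0-100 quiz scale, and the pass mark of 70 are assumptions of this article, not thresholds drawn from the AI Act.

    // Hypothetical tracking of the AI literacy steps listed above.
    // Field names, the score scale, and the pass mark are illustrative assumptions.
    interface LiteracyRecord {
      employee: string;
      usesAI: boolean;       // step 1: uses or develops AI (now or soon)
      quizScore: number;     // step 2: assessed knowledge, 0-100
      trainingDone: boolean; // step 3: completed AI basics training
    }

    // Flag employees who work with AI but have not yet closed their knowledge gap.
    function trainingBacklog(records: LiteracyRecord[], passMark: number = 70): LiteracyRecord[] {
      return records.filter(r => r.usesAI && r.quizScore < passMark && !r.trainingDone);
    }

    const staff: LiteracyRecord[] = [
      { employee: "ana", usesAI: true, quizScore: 55, trainingDone: false },
      { employee: "ben", usesAI: false, quizScore: 90, trainingDone: true },
    ];
    console.log(trainingBacklog(staff).map(r => r.employee)); // ["ana"]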

Conclusion

The regulation of AI systems in the EU has potentially significant implications for video game developers, depending on how AI is used within a given game. As the AI Act evolves to keep pace with new technologies, developers should stay informed and remain proactive in understanding and complying with their obligations.
