Impacts of the EU AI Act on Video Game Development

This article provides a brief overview of how Regulation (EU) 2024/1689 of 13 June 2024, which lays down harmonised rules on artificial intelligence (the “AI Act”), affects video game developers. As AI systems become increasingly integrated into video games—ranging from generating backgrounds to creating non-player characters (NPCs)—it is essential for developers to understand the obligations imposed by this regulation.

The AI Act entered into force on 1 August 2024, and its provisions will apply gradually over the following two years. Which provisions apply to a given video game developer depends predominantly on two factors: the developer’s role and the risk level of the AI system concerned.

The Role of the Video Game Developer

Article 2 of the AI Act delineates the scope of the regulation, specifying who may be subject to it. Video game developers can fall under two categories:

  • Providers of AI systems: These are developers who place their AI systems on the EU market or put them into service under their own name or trademark, whether for payment or free of charge (Article 3(3) AI Act).
  • Deployers of AI systems: These are users of AI systems in the course of a professional activity, provided they are established in the EU or have users of the AI system based in the EU (Article 3(4) AI Act).

Thus, video game developers will be considered (i) providers if they develop their own AI system and (ii) deployers if they integrate existing AI systems made by a third party into their video games.

The AI Risk Level and Related Obligations

The AI Act defines AI systems in Article 3(1) and classifies them into four categories based on the associated risk. Obligations on economic operators vary depending on the level of risk posed by the AI systems used:

  • AI systems posing unacceptable risk: These are prohibited (Article 5 AI Act). In the video game sector, notable prohibitions include providing or using AI systems that deploy manipulative techniques or exploit people’s vulnerabilities in a way that causes significant harm. For example, it is prohibited to use AI-generated NPCs to manipulate players into increased in-game spending.
  • High-risk AI systems: These trigger strict obligations for providers and, to a lesser extent, for deployers (Articles 6, 7 and Annex III AI Act). Relevant high-risk AI systems in video games include those posing a significant risk to the health, safety, or fundamental rights of natural persons, in particular AI systems used for emotion recognition (Annex III(1)(c) AI Act). Such systems could enhance interactions between players and NPCs by eliciting genuine emotions like empathy, compassion, or anger. Providers of high-risk AI systems must, among other things, implement quality and risk management systems, maintain appropriate data governance, ensure transparency, and cooperate with the relevant authorities. Deployers must operate the system in accordance with the provider’s instructions, ensure human oversight, and monitor its operation.
  • AI systems with specific transparency risk: This category includes chatbots, content-generating AI, and emotion recognition systems, and triggers limited obligations (Article 50 AI Act). Providers must ensure players are informed that they are interacting with an AI system, while deployers must disclose that content is artificially generated, particularly in the case of deep fakes (a brief sketch follows this list).
  • AI systems with minimal risk: These are not regulated under the AI Act and include all other AI systems that do not fall into the aforementioned categories.
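
To make the transparency duty more concrete, here is a minimal TypeScript sketch of how a developer might surface an AI-interaction notice in a dialogue system. All names (NpcDialogueLine, aiInteractionNotice, and so on) are hypothetical and not part of any engine or SDK, and the wording of the notice is illustrative only; what counts as adequate disclosure under Article 50 remains a legal question.

```typescript
// Hypothetical sketch: surfacing an Article 50-style disclosure before
// AI-driven NPC dialogue. Names and notice wording are illustrative only.

type GeneratedBy = "human-authored" | "ai-generated";

interface NpcDialogueLine {
  speaker: string;
  text: string;
  generatedBy: GeneratedBy; // tracked at content-creation time
}

// Returns a disclosure notice for AI-generated dialogue, or null if none is needed.
function aiInteractionNotice(line: NpcDialogueLine): string | null {
  if (line.generatedBy === "ai-generated") {
    return `You are interacting with an AI system. ${line.speaker}'s responses are generated automatically.`;
  }
  return null; // human-authored content needs no AI-interaction notice
}

// Example: show the notice before rendering AI-driven dialogue.
const line: NpcDialogueLine = {
  speaker: "Innkeeper",
  text: "Welcome, traveller! What brings you here?",
  generatedBy: "ai-generated",
};

const notice = aiInteractionNotice(line);
if (notice !== null) {
  console.log(`[DISCLOSURE] ${notice}`);
}
console.log(`${line.speaker}: ${line.text}`);
```

The design point is simply that provenance (human-authored versus AI-generated) must travel with the content through the pipeline so the game can disclose it at the point of interaction.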

The European Commission has stated that, as a general matter, AI-enabled video games face no obligations under the AI Act, although companies may voluntarily adopt codes of conduct. It is important to note, however, that in specific cases such as those outlined above, the AI Act will apply. Moreover, the AI literacy obligation applies irrespective of the risk level, including to minimal-risk systems.

The AI Literacy Obligation

The AI literacy obligation applies from 2 February 2025 (Article 113(a) AI Act) to both providers and deployers (Article 4 AI Act). AI literacy encompasses the skills, knowledge, and understanding needed to deploy AI systems in an informed manner and to be aware of their opportunities and risks.

The goal is to ensure that video game developers’ staff can make informed decisions regarding AI, considering their technical knowledge, experience, education, and the context in which the AI system is utilized.

While the AI Act does not specify how compliance with the AI literacy obligation should be achieved, several practical steps can be taken, including:

  • Identifying which employees currently use AI, or plan to use or develop it in the near future.
  • Assessing employees’ current AI knowledge through surveys or quizzes to identify gaps (see the sketch after this list).
  • Providing training activities and materials on AI basics, emphasizing relevant concepts, rules, and obligations.
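
As an illustration of the gap-analysis step, the following is a minimal TypeScript sketch that records quiz results and flags employees who work with AI but scored below an assumed training threshold. The roles, field names, and 70-point threshold are illustrative assumptions; the AI Act prescribes none of them.

```typescript
// Hypothetical sketch of an AI literacy gap analysis. All structures and
// thresholds are illustrative assumptions, not requirements of the AI Act.

interface LiteracyAssessment {
  employee: string;
  role: "uses-ai" | "develops-ai" | "no-ai-exposure";
  quizScore: number; // 0-100, from an internal AI-basics quiz
}

// Flags employees who work with AI but scored below the (assumed) threshold.
function trainingCandidates(
  results: LiteracyAssessment[],
  threshold = 70,
): LiteracyAssessment[] {
  return results.filter(
    (r) => r.role !== "no-ai-exposure" && r.quizScore < threshold,
  );
}

const results: LiteracyAssessment[] = [
  { employee: "Ana", role: "develops-ai", quizScore: 55 },
  { employee: "Ben", role: "uses-ai", quizScore: 82 },
  { employee: "Caro", role: "no-ai-exposure", quizScore: 40 },
];

for (const r of trainingCandidates(results)) {
  console.log(`${r.employee} (${r.role}) should receive AI-basics training.`);
}
```

Keeping assessment results in a structured form like this also produces a record that can later evidence compliance efforts, should a regulator ask.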

Conclusion

The regulation of AI systems in the EU has potentially significant implications for video game developers, depending on how AI is used within a specific game. As the interpretation of the AI Act evolves alongside new technologies, developers should remain informed and proactive in understanding and complying with their obligations.
