AI and Regulatory Challenges in the Gambling Industry

AI in Gambling: A Race Between Opportunity, Regulatory Compliance, and Potential Liability

In the evolving digital landscape, the deployment of Artificial Intelligence (AI) in gambling presents bold promises, including enhanced operational efficiency, real-time risk management, and personalized user experiences. However, these advantages come with rising regulatory expectations and potential liability risks.

I. Tech Failures Trigger Enforcement

Technical failures in gambling operations have escalated from minor glitches into multimillion-euro liabilities. In France, Unibet was fined €800,000 after a software malfunction allowed self-excluded users to access its platform. Australian regulators imposed an AUD 1 million fine for a comparable self-exclusion failure, and in the UK, Bet365 was fined over GBP 500,000 for deficiencies in its responsible gambling software.

These incidents illustrate an intensifying enforcement climate: regulators issued over USD 184 million in fines globally in 2024. Operators deploying AI-driven systems face several layers of regulatory exposure at once, spanning gambling supervision, the upcoming EU AI Act, the GDPR, and AML/CTF rules. A single system failure can trigger enforcement under several of these frameworks simultaneously, so systems must be transparent, auditable, and compliant by design.

II. AI in Gambling Operations

AI systems can perform various operational and compliance functions in the gambling sector, including:

  • Biometric identity verification of players
  • Risk scoring for players
  • Player segmentation for targeted marketing
  • Early detection of problematic gambling behavior
  • Automated transaction monitoring for AML/CTF purposes
  • Dynamic game adaptations based on player ability
  • AI-driven customer service via chatbots
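
As an illustration of automated transaction monitoring from the list above, a simple rules-plus-score monitor can be sketched as follows. The thresholds and field names are invented for the example, not drawn from any regulation:

```python
from dataclasses import dataclass

@dataclass
class Txn:
    amount: float        # deposit amount in EUR
    hour: int            # hour of day, 0-23
    deposits_24h: int    # deposits by this player in the last 24 h

# Illustrative thresholds only; real AML/CTF programmes calibrate these
# against regulator guidance and historical alert outcomes.
def aml_score(t: Txn) -> int:
    score = 0
    if t.amount >= 2000:
        score += 2       # large single deposit
    if t.deposits_24h >= 5:
        score += 2       # possible structuring pattern
    if t.hour < 6:
        score += 1       # unusual-hours activity
    return score

def needs_review(t: Txn) -> bool:
    # Scores at or above the alert threshold go to a human analyst;
    # the system only flags, it does not auto-block.
    return aml_score(t) >= 3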

While these systems are meant to support compliance and player protection, faulty outputs can themselves create compliance risks. AI decisions are probabilistic, shaped by their training data, and often opaque, which can lead to overreach, misclassification, or legally problematic outcomes. Ensuring the integrity of training data is therefore essential to avoid bias and potential discrimination.

III. The EU AI Act: A New Compliance Frontier

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) introduces serious liability risks for operators. It establishes obligations across the entire AI lifecycle, from design and training through deployment and post-market monitoring. For gambling operators, three of the Act's risk categories are most relevant: prohibited practices, high-risk systems, and limited-risk systems.

1. Prohibited AI Practices

Prohibited systems are banned outright because of their unacceptable potential for harm. While online gambling offerings are unlikely to fall into these categories, engaging in a prohibited practice carries the Act's highest fines: up to €35 million or 7% of global annual turnover, whichever is higher.

2. High-Risk AI

More relevant to the gambling sector is the classification of high-risk AI systems. Compliance-relevant AI systems for financial scoring, particularly in affordability assessments, are likely to be classified as high-risk. These systems must adhere to a comprehensive set of legal requirements, including:

  • Development within a robust risk management framework
  • Incorporation of accurate training data
  • Human oversight mechanisms
  • Transparency protocols
  • Cybersecurity safeguards

Non-compliance can result in administrative fines of up to €15 million or 3% of global turnover, whichever is higher.
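
Two of the requirements above, human oversight and auditable transparency, can be wired together in a short sketch. The model call is a stub and every name is hypothetical; the point is that borderline scores are routed to a person and every decision leaves a timestamped record:

```python
import json
from datetime import datetime, timezone

def affordability_model(player: dict) -> float:
    """Stand-in for a trained model; returns a risk score in [0, 1]."""
    return min(1.0, player["monthly_deposits"] / max(player["declared_income"], 1))

def assess(player: dict, audit_log: list[str],
           review_band: tuple[float, float] = (0.4, 0.7)) -> str:
    score = affordability_model(player)
    if review_band[0] <= score < review_band[1]:
        decision = "HUMAN_REVIEW"   # borderline scores are never auto-decided
    elif score >= review_band[1]:
        decision = "RESTRICT"
    else:
        decision = "ALLOW"
    # Append-only record: inputs, score, and outcome, timestamped.
    audit_log.append(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "player_id": player["id"],
        "score": round(score, 3),
        "decision": decision,
    }))
    return decision
```

The band boundaries here are arbitrary; what matters for the Act is that the escalation rule and the log exist and can be inspected.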

3. Limited Risk AI

AI systems outside the high-risk category, such as biometric verification tools or early-warning systems, are classified as limited-risk. These face lighter obligations, chiefly ensuring AI literacy among staff and meeting transparency requirements toward users.

IV. No Stand-Alone Compliance Frontier

The AI Act mandates that AI systems be auditable and explainable by design, requiring operators to maintain risk logs and monitor data inputs for drift or bias. Compliance with the AI Act is not standalone; it must be coordinated with existing data protection requirements under the GDPR and other regulatory frameworks.
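
Monitoring inputs for drift, as required above, is often done with a simple distribution-comparison statistic. A minimal sketch using the population stability index (a common choice in scoring practice; the implementation and thresholds here are illustrative, not mandated by the Act):

```python
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population Stability Index between a training-time feature sample
    and a live sample; a simple, widely used drift signal."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def hist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        n = len(xs)
        # Floor at a tiny value so the log term stays defined.
        return [max(c / n, 1e-6) for c in counts]

    e, o = hist(expected), hist(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

# Rule of thumb often cited in scoring practice:
# PSI < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate and retrain.
```

A scheduled job comparing live feature distributions against the training baseline, with results written to the risk log, would satisfy the "monitor for drift" step in a documented, auditable way.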

V. Gaining Competitive Advantage

Forward-thinking gambling operators are conducting AI audits, mapping their systems, and aligning with AI risk frameworks to prepare for compliance with the AI Act. Recognizing AI as not just an efficiency tool but also a regulated system is crucial. Operators should focus on mapping AI systems, developing compliance procedures, and ensuring meaningful human oversight of AI-supported decisions.

VI. Conclusion

AI in gambling is more than a compliance challenge; it is a powerful market differentiator. Operators who proactively address compliance issues related to their AI systems will not only reduce regulatory risks but also gain a competitive edge. Embracing regulatory requirements and best practices can facilitate new partnerships and enhance customer retention in an industry where speed and agility define success.
