AI and Regulatory Challenges in the Gambling Industry

AI in Gambling: A Race Between Opportunity, Regulatory Compliance, and Potential Liability

In an evolving digital landscape, the deployment of Artificial Intelligence (AI) in gambling holds considerable promise: enhanced operational efficiency, real-time risk management, and personalized user experiences. These advantages, however, come with rising regulatory expectations and potential liability risks.

I. Tech Failures Trigger Enforcement

Technical failures in gambling operations have escalated from minor issues into potential multimillion-dollar liabilities. In France, Unibet was fined €800,000 after a software malfunction allowed self-excluded users to access its platform. Australian regulators imposed a fine of AUD 1 million for a similar failure, and in the UK, Bet365 was fined more than GBP 500,000 for deficiencies in its responsible gambling software.

These incidents reflect a broader enforcement trend: regulators issued over USD 184 million in fines globally in 2024. Operators deploying AI-driven systems face multiple layers of regulatory exposure, spanning gambling supervisory law, the incoming EU AI Act, the GDPR, and AML/CTF rules. A single system failure can trigger enforcement under several legal frameworks at once, which makes it essential that systems are transparent, auditable, and compliant by design.

II. AI in Gambling Operations

AI systems can perform various operational and compliance functions in the gambling sector, including:

  • Biometric identity verification of players
  • Risk scoring for players
  • Player segmentation for targeted marketing
  • Early detection of problematic gambling behavior
  • Automated transaction monitoring for AML/CTF purposes
  • Dynamic game adaptations based on player ability
  • AI-driven customer service via chatbots

While these systems are intended to support compliance and player protection, flawed outputs can themselves create compliance risks. AI decisions are probabilistic, shaped by their training data, and often lack transparency, which can lead to overreach, misclassification, or legally problematic outcomes. Ensuring data integrity in AI systems is therefore crucial to avoid bias and potential discrimination.
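
To illustrate one way these data-integrity and bias concerns can be made operational, the minimal Python sketch below compares the flag rates of a hypothetical problematic-gambling detection model across player age groups and escalates large disparities for human review. The group labels, the 0.8 ratio threshold, and the sample data are assumptions for illustration, not regulatory requirements.

    from collections import defaultdict

    def flag_rate_by_group(decisions):
        """decisions: iterable of (group_label, was_flagged) pairs."""
        flagged, total = defaultdict(int), defaultdict(int)
        for group, was_flagged in decisions:
            total[group] += 1
            flagged[group] += int(was_flagged)
        return {g: flagged[g] / total[g] for g in total}

    def disparity_check(decisions, min_ratio=0.8):
        """Return per-group flag rates and whether the spread warrants review."""
        rates = flag_rate_by_group(decisions)
        lowest, highest = min(rates.values()), max(rates.values())
        ratio = lowest / highest if highest else 1.0
        return {"rates": rates, "ratio": ratio, "needs_review": ratio < min_ratio}

    # Synthetic example: outputs of an assumed problematic-gambling detection model.
    sample = [("18-25", True), ("18-25", False), ("26-40", False),
              ("26-40", False), ("41+", True), ("41+", False)]
    print(disparity_check(sample))

A check of this kind does not prove or disprove discrimination, but it gives compliance teams a documented trigger for deeper review when one player group is flagged far more often than another.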

III. The EU AI Act: A New Compliance Frontier

The adoption of Regulation (EU) 2024/1689, the EU Artificial Intelligence Act, introduces serious liability risks for operators. It establishes obligations across the entire AI lifecycle, from design and training to deployment and post-market monitoring. For operators, the most relevant categories under the Act are prohibited practices, high-risk AI systems, and limited-risk AI systems.

1. Prohibited AI Practices

Prohibited systems are banned outright because of their unacceptable potential for harm. While online gambling providers are unlikely to fall into these categories, engaging in a prohibited practice carries the Act's highest fines: up to €35 million or 7% of total worldwide annual turnover, whichever is higher.

2. High-Risk AI

More relevant to the gambling sector is the high-risk category. Compliance-relevant AI systems used for financial scoring, particularly affordability assessments, are likely to be classified as high-risk. Such systems must meet a comprehensive set of legal requirements, including:

  • Development within a robust risk management framework
  • High-quality, representative training data
  • Human oversight mechanisms
  • Transparency protocols
  • Cybersecurity safeguards

Non-compliance can result in administrative fines of up to €15 million or 3% of global turnover, whichever is higher.
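
By way of illustration only, the sketch below shows one possible human-oversight mechanism for a high-risk affordability model: borderline scores are routed to a compliance officer and every decision is written to an audit log. The thresholds, field names, and routing policy are assumptions; the AI Act requires effective human oversight and record-keeping but does not prescribe this design.

    import json, time

    # Scores inside this band are treated as uncertain and routed to a human
    # reviewer; the band itself is an assumed policy choice, not an AI Act value.
    REVIEW_BAND = (0.4, 0.7)

    def route_affordability_decision(player_id: str, score: float, audit_log: list) -> str:
        """Apply a human-in-the-loop gate to a model's affordability score and log it."""
        if score >= REVIEW_BAND[1]:
            outcome = "auto_restrict"   # clear affordability concern signalled by the model
        elif score <= REVIEW_BAND[0]:
            outcome = "auto_allow"      # no concern signalled by the model
        else:
            outcome = "human_review"    # borderline case: defer to a compliance officer
        audit_log.append({"ts": time.time(), "player_id": player_id,
                          "model_score": score, "outcome": outcome})
        return outcome

    audit_log: list = []
    print(route_affordability_decision("player-123", 0.55, audit_log))  # -> human_review
    print(json.dumps(audit_log, indent=2))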

3. Limited Risk AI

AI systems that do not fall under the high-risk category, such as biometric verification tools or early-warning systems, are treated as limited-risk systems. These face lighter obligations, primarily transparency requirements, AI literacy among staff, and compliance with other applicable regulations.
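
As a minimal illustration of the transparency side of these obligations, a customer-service chatbot (listed among the AI use cases above) should make clear to players that they are talking to a machine. The wording and the greet() helper below are assumptions for illustration only.

    def greet(player_name: str) -> str:
        """Open a support conversation with an explicit AI disclosure."""
        disclosure = "You are chatting with an automated AI assistant."
        return f"Hi {player_name}! {disclosure} How can I help you today?"

    print(greet("Alex"))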

IV. No Stand-Alone Compliance Frontier

The AI Act requires AI systems to be auditable and explainable by design, obliging operators to maintain risk logs and to monitor data inputs for drift or bias. Compliance with the AI Act is not a stand-alone exercise; it must be coordinated with existing data protection requirements under the GDPR and with other regulatory frameworks.
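
The Act leaves the monitoring mechanics to operators. As a hedged sketch of what that might look like, the code below computes a population stability index (a common drift heuristic, not an AI Act requirement) between a model's baseline and live input scores and turns the result into a risk-log entry. The thresholds, system name, and sample data are assumptions for illustration.

    import math

    def population_stability_index(expected, actual, buckets=10):
        """Compare two score distributions (lists of floats in [0, 1])."""
        def share(values, lo, hi):
            count = sum(lo <= v < hi for v in values)
            return max(count / len(values), 1e-6)   # floor to avoid log(0)
        psi = 0.0
        for i in range(buckets):
            lo = i / buckets
            hi = (i + 1) / buckets if i < buckets - 1 else 1.0 + 1e-9
            e, a = share(expected, lo, hi), share(actual, lo, hi)
            psi += (a - e) * math.log(a / e)
        return psi

    def drift_log_entry(system_id, psi, warn=0.1, alert=0.25):
        """Turn a PSI value into a risk-log entry using commonly cited thresholds."""
        status = "alert" if psi >= alert else "warn" if psi >= warn else "ok"
        return {"system": system_id, "psi": round(psi, 4), "status": status}

    baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
    live     = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.9, 0.95, 0.99]
    print(drift_log_entry("risk-scoring-v2", population_stability_index(baseline, live)))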

V. Gaining Competitive Advantage

Forward-thinking gambling operators are already conducting AI audits, mapping their systems, and aligning with AI risk frameworks in preparation for the AI Act. It is crucial to recognize AI not merely as an efficiency tool but as a regulated system. Operators should focus on mapping their AI systems, developing compliance procedures, and ensuring meaningful human oversight of AI-supported decisions.
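
To make "mapping AI systems" more concrete, the sketch below shows one possible shape for an internal AI system register. The field names, risk labels, and the sample record are invented for illustration; the AI Act does not mandate this particular format.

    from dataclasses import dataclass, field, asdict

    @dataclass
    class AISystemRecord:
        """One row of an internal AI system register (fields are illustrative)."""
        name: str
        purpose: str
        risk_class: str                  # e.g. "high-risk" or "limited-risk"
        data_sources: list = field(default_factory=list)
        human_oversight: str = ""        # who reviews the system's outputs, and when
        last_audit: str = ""             # date of the most recent internal audit

    register = [
        AISystemRecord(
            name="affordability-scoring",
            purpose="Financial risk scoring for affordability assessments",
            risk_class="high-risk",
            data_sources=["deposit history", "declared income"],
            human_oversight="Compliance officer reviews borderline scores",
            last_audit="2025-01-15",  # placeholder date, assumed for the example
        ),
    ]
    print([asdict(r) for r in register])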

VI. Conclusion

AI in gambling is more than a compliance challenge; it is a powerful market differentiator. Operators who proactively address compliance issues related to their AI systems will not only reduce regulatory risks but also gain a competitive edge. Embracing regulatory requirements and best practices can facilitate new partnerships and enhance customer retention in an industry where speed and agility define success.
