Kazakhstan’s Bold Step Towards Human-Centric AI Regulation

With its draft ‘Law on Artificial Intelligence’ approved in May, Kazakhstan is setting an ambitious course to regulate AI with a focus on human-centric principles. This initiative comes as the Central Asian nation seeks to establish a framework that not only aligns with global trends but also reflects its own national values.

Reflecting Global Trends

Sholpan Saimova, head of the Centre for Public Legislation and Public Administration, stated, “The bill reflects major global trends in AI regulation.” While the EU’s AI Act serves as a reference point, Kazakhstan aims to lead rather than follow, creating a system that builds trust between humans and algorithms while safeguarding public interests.

Broad Consultation for Development

The development of the law involved extensive consultation with stakeholders, including lawmakers, tech experts, and industry representatives. This collaborative approach reflects robust cross-sector backing for an initiative intended to regulate AI across a wide range of societal domains.

Core Principles of the Legislation

The draft legislation is built on fundamental principles such as:

  • Fairness
  • Legality
  • Accountability
  • Human well-being

Notably, the law prohibits unauthorized data collection and envisages criminal liability for serious misuse of AI systems that endangers the public.

Impact on Workforce and Training

The legislation could reshape Kazakhstan’s workforce. Experts stress the urgent need to retrain IT professionals, particularly in AI system design and digital ethics. The government recognizes that a responsible approach to AI can foster economic growth while keeping human rights at the forefront.

Legal Framework and Human Rights Concerns

Igor Rogov, the head of Kazakhstan’s human rights commission, has raised important questions regarding accountability in AI systems. Key concerns include:

  • Who is responsible when AI causes harm?
  • Who owns content generated by AI?
  • How can we prevent AI from being utilized for fraud or deception?

The judiciary in Kazakhstan has begun integrating AI to draft decisions in civil cases, indicating the early stages of AI adoption in public institutions while ensuring judges retain final authority.

Challenges and Regulatory Gaps

A recent academic study by five Kazakh scholars offers a comparative legal analysis of Kazakhstan’s draft AI legislation and the EU’s AI Act. While Kazakhstan adopts several elements of a risk-based approach, the study finds the framework lacking in:

  • Clear risk classification systems
  • Requirements for algorithmic transparency
  • Robust personal data protections
  • Strong institutional enforcement mechanisms

The authors of the study suggest that Kazakhstan should selectively adopt components of the EU model, adjusting them to fit the national legal framework. Establishing clear regulatory standards and enhancing institutional capacity are deemed necessary for ensuring compliance.

Transparency and Public Education

Kazakhstan’s media landscape and multilingual environment pose distinct challenges: transparency and accountability efforts must be backed by public education and adequate technical infrastructure to reach all audiences.

Conclusion: Towards a Responsible AI Ecosystem

While the legal efforts to regulate AI in Kazakhstan are promising, further work is needed. By drawing insights from the EU’s experiences, Kazakhstan can strive to establish a responsible and trusted AI ecosystem—one that not only protects citizens’ rights but also attracts global partnerships and investment.
