Kazakhstan’s Bold Step Towards Human-Centric AI Regulation

With its draft ‘Law on Artificial Intelligence’ approved in May, Kazakhstan is setting an ambitious course to regulate AI with a focus on human-centric principles. This initiative comes as the Central Asian nation seeks to establish a framework that not only aligns with global trends but also reflects its own national values.

Reflecting Global Trends

Sholpan Saimova, head of the Centre for Public Legislation and Public Administration, stated, “The bill reflects major global trends in AI regulation.” The EU’s AI Act serves as a reference point, but Kazakhstan aims to lead rather than simply follow, creating a system that builds trust between humans and algorithms while safeguarding public interests.

Broad Consultation for Development

The development of the law involved extensive consultation with stakeholders, including lawmakers, tech experts, and industry representatives. This collaborative approach reflects robust cross-sector backing for an initiative intended to regulate AI across a wide range of societal domains.

Core Principles of the Legislation

The draft legislation is built on fundamental principles such as:

  • Fairness
  • Legality
  • Accountability
  • Human well-being

Notably, the draft prohibits unauthorized data collection and contemplates criminal liability for serious misuse of AI systems that puts the public at risk.

Impact on Workforce and Training

The legislation could reshape Kazakhstan’s workforce. Experts stress the urgent need to retrain IT professionals, particularly in areas such as AI system design and digital ethics. The government recognizes that a responsible approach to AI can foster economic growth while prioritizing human rights.

Legal Framework and Human Rights Concerns

Igor Rogov, the head of Kazakhstan’s human rights commission, has raised important questions regarding accountability in AI systems. Key concerns include:

  • Who is responsible when AI causes harm?
  • Who owns content generated by AI?
  • How can AI be prevented from being used for fraud or deception?

Kazakhstan’s judiciary has begun using AI to draft decisions in civil cases, an early step in AI adoption within public institutions, with judges retaining final authority over rulings.

Challenges and Regulatory Gaps

A recent academic study by five Kazakh scholars offers a comparative legal analysis of Kazakhstan’s draft AI legislation and the EU’s AI Act. While Kazakhstan adopts several elements of a risk-based approach, the study finds the framework lacking in:

  • Clear risk classification systems
  • Requirements for algorithmic transparency
  • Robust personal data protections
  • Strong institutional enforcement mechanisms

The study’s authors suggest that Kazakhstan selectively adopt components of the EU model, adapting them to the national legal framework. They consider clear regulatory standards and stronger institutional capacity necessary to ensure compliance.

Transparency and Public Education

Kazakhstan’s media landscape and multilingual environment pose unique challenges, so transparency and accountability efforts must be supported by public education and adequate technical infrastructure.

Conclusion: Towards a Responsible AI Ecosystem

While the legal efforts to regulate AI in Kazakhstan are promising, further work is needed. By drawing insights from the EU’s experiences, Kazakhstan can strive to establish a responsible and trusted AI ecosystem—one that not only protects citizens’ rights but also attracts global partnerships and investment.

