AI Regulation in Nepal: Addressing Ethical Concerns and Implementation Challenges

The deployment of artificial intelligence (AI) tools in law enforcement raises significant ethical concerns, particularly in the context of Nepal. If the Nepal Police were to use AI in issuing search warrants or executing arrests, what ethical safeguards would be in place? Would human oversight of the decision-making process be guaranteed? These questions are becoming critical as AI is increasingly integrated into public safety operations.

The EU AI Act: A Model for Regulation

In August 2024, the European Union’s AI Act entered into force as the world’s first comprehensive legislation governing AI. The Act takes a risk-based approach to AI deployment, categorizing systems into four risk levels and banning applications deemed to pose unacceptable risk, such as real-time facial recognition in public spaces, with narrow exceptions for law enforcement. While the framework has been criticized for its loopholes, it represents a significant attempt to balance effective AI use with ethical considerations.

As Nepal navigates the complexities of AI regulation, the EU’s experience underscores the need for robust governance structures that prioritize ethical considerations. The Nepal Police’s recent acquisition of AI software, along with staff training in its use, makes comprehensive policy development all the more urgent.

The Draft AI Policy: A Step Forward but Lacking Specificity

The Nepalese government has issued a draft AI Policy that outlines several objectives aimed at ensuring AI development serves the wider society. Its implementation details, however, lack specificity. For instance, the draft emphasizes the importance of a strong data protection framework, yet provides no clear timeline for establishing one.

The draft’s generalized approach is a significant shortcoming, particularly given the complexity of legislating data protection. Furthermore, the envisioned governance structure, which includes an AI Regulatory Council led by the Minister for Communications and Information Technology, raises questions about effectiveness: high-level councils often struggle to maintain engagement and accountability.

The Role of the National AI Center

In contrast, empowering a dedicated institution like the National AI Center may be more beneficial. This center could serve as a guardian of AI legislation, ensuring its proper implementation and oversight. The EU AI Act’s establishment of a European AI Office, equipped with enforcement powers, illustrates the potential benefits of a well-structured regulatory body.

Global Perspectives on AI Governance

Recent discussions at RightsCon 2025 emphasized the necessity of inclusivity, ethics, and accountability in AI governance. Key topics included the role of civil society in AI policy-making and the integration of diverse perspectives to shape responsible AI systems. Such considerations are vital for Nepal as it finalizes its draft AI Policy.

Urgent Need for a Holistic Framework

For Nepal to emerge as a viable IT hub, it must accelerate its efforts in AI regulation. A holistic AI framework grounded in strong data and privacy rights is essential. By adopting ethical guidelines and learning from global best practices, Nepal can build an agile and responsible AI governance regime that aligns with human rights.

In conclusion, any deployment of AI tools by state agencies in Nepal should be reconsidered until a robust regulatory framework is established. The journey towards ethical AI governance is complex but necessary for the future of technology in Nepal.
