AI Regulation in India: Current Landscape and Future Directions

The Artificial Intelligence (AI) landscape in India is growing rapidly. The country is now OpenAI’s second-largest market by users, underscoring the widespread adoption of and interest in AI technologies.

But with rapid adoption comes pressing concerns—ethics, privacy, and security are at the forefront, making a strong regulatory framework essential.

In this article, we’ll look at how AI and digital advancements are shaping India’s regulatory landscape, the key challenges involved, and the steps being taken to ensure AI is deployed responsibly.

The Current AI Regulatory Landscape in India

As of February 2025, India does not have a dedicated law exclusively governing AI. Instead, the regulatory environment consists of policies, guidelines, and sector-specific regulations that collectively address different aspects of AI deployment.

In 2018, the National Institution for Transforming India (NITI Aayog) released the National Strategy for Artificial Intelligence, aiming to position India as a global leader in AI. This strategy emphasizes the adoption of AI in key sectors such as healthcare, agriculture, education, smart cities, and smart mobility. It also highlights the importance of research and development, workforce reskilling, and the establishment of infrastructure to support AI innovation.

In 2021, NITI Aayog published the Principles for Responsible AI, outlining ethical standards such as safety, inclusivity, privacy, and accountability. This document provides guidelines focusing on transparency, accountability, privacy, and security in AI applications, serving as a foundational framework for organizations developing or deploying AI systems.

Recognizing the critical role of data in AI systems, the Indian government enacted the Digital Personal Data Protection Act in 2023. This legislation provides a comprehensive framework for the processing of personal data, emphasizing individual rights, consent mechanisms, and obligations for data fiduciaries.

The government has also introduced sector-specific guidelines to regulate AI applications pertinent to their domains. These guidelines aim to mitigate industry-specific risks while ensuring AI-driven innovations comply with ethical and legal standards.

For example, in the finance sector, the Securities and Exchange Board of India (SEBI) issued a circular in January 2019 mandating reporting requirements for AI and machine learning (ML) applications used by market participants, enhancing transparency and managing AI’s impact on financial markets.

In the healthcare sector, the National Digital Health Mission has set standards to ensure the reliability and safety of AI-driven healthcare systems, including protocols for data handling, patient consent, and the validation of AI-powered diagnostic tools.

India’s approach to AI regulation is often described as pro-innovation, aiming to unlock AI’s potential while addressing anticipated risks. The government is striking a balance between a hands-off approach and more direct intervention, favoring policies and guidelines that address ethical concerns and risks over binding AI-specific laws.

Key AI Regulation Challenges in India

The integration of AI into India’s socio-economic landscape brings several regulatory challenges that demand careful attention. Addressing these issues requires collaboration between policymakers, industry leaders, and the public to develop regulatory frameworks that prioritize ethics and transparency.

1. Ethical and Bias Concerns: AI systems can inherit biases from the data they are trained on, leading to discriminatory outcomes in critical areas such as hiring, lending, and law enforcement. Ensuring fairness and minimizing bias requires clear guidelines for data collection, algorithm design, and ongoing monitoring to detect and correct any disparities.

2. Data Privacy and Security: AI’s reliance on vast amounts of data raises serious concerns about privacy and security. To maintain public trust and comply with legal standards, it’s essential to implement strong data protection measures, establish clear data usage policies, and ensure users give informed consent when their data is collected and processed.

3. Lack of Standardization: The absence of standardized protocols for AI development and deployment leads to inconsistencies in quality, safety, and ethical practices across industries. Establishing common standards and best practices is key to improving interoperability, reliability, and responsible AI usage.

4. Skill Gap: India faces a shortage of professionals skilled in AI technologies. Bridging this gap requires investments in education, specialized training programs, and stronger collaboration between academia and industry.

5. Regulatory Overreach vs. Innovation: Striking the right balance between regulation and innovation is challenging. Overregulation can stifle progress, while underregulation may lead to ethical concerns and public harm. Policymakers must craft regulations that encourage innovation while upholding ethical standards.
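The ongoing-monitoring step mentioned in point 1 can be made concrete with a small sketch. The "disparate impact" ratio below is a common fairness heuristic from the broader responsible-AI literature, not a metric mandated by any Indian guideline; the group data and the 0.8 threshold are illustrative assumptions.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of group A's favorable-outcome rate to group B's.

    Values well below 1.0 suggest group A is selected less often;
    a common rule of thumb flags ratios under 0.8 for review.
    """
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical hiring-model outcomes (1 = offer, 0 = reject) for two groups.
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # selection rate 0.2
group_b = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]   # selection rate 0.5

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))   # 0.4
print(ratio < 0.8)       # True -> flag for human review
```

Running a check like this periodically, rather than only at deployment, is what "ongoing monitoring" amounts to in practice: disparities can emerge as the input data drifts even if the model itself is unchanged.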

How India’s AI Regulation Compares to Global Standards

India’s AI regulatory framework shares common ground with global principles while addressing its own national priorities. Unlike the EU’s AI Act, which enforces strict accountability and uniformity across industries, India’s approach emphasizes adaptability—balancing innovation with compliance.

At the core of this approach are India’s Principles for Responsible AI, which align with international guidelines by prioritizing transparency, accountability, and fairness. This alignment fosters international collaboration and strengthens India’s position as a responsible player in the global AI landscape.

A key element of responsible AI governance is data protection. The Digital Personal Data Protection Act of 2023 mirrors the EU’s General Data Protection Regulation (GDPR) in its focus on individual consent and data rights but is tailored to India’s needs, addressing challenges like digital literacy and accessibility.

Upcoming AI Regulations in India: What’s Changing?

India’s focus on Sovereign AI reflects its push for self-sufficiency in AI development. By building indigenous AI capabilities, India aims to create locally relevant AI solutions while reducing reliance on foreign technologies.

To support responsible AI adoption, the Indian government is actively redefining its regulatory framework. In January 2025, the Ministry of Electronics and Information Technology announced the IndiaAI Safety Institute, which will establish AI safety standards in collaboration with academic institutions and industry partners.

Similarly, the upcoming Digital India Act is set to replace the Information Technology Act of 2000, introducing AI-specific provisions related to algorithmic accountability, consumer rights, and regulatory oversight.

Recognizing that AI impacts different sectors in distinct ways, the government is also working on sector-specific AI policies for industries such as banking, healthcare, and education. These policies aim to mitigate industry-specific risks while fostering innovation.

How to Prepare for Regulatory Changes

Organizations operating in AI-driven industries must proactively prepare for regulatory changes to achieve compliance and mitigate risks. Here are strategies organizations can adopt:

  • Think Ethically from the Start: Build AI systems that align with India’s responsible AI principles, focusing on fairness and transparency.
  • Stay on Top of Compliance: Create dedicated teams and use AI governance tools to track evolving regulatory requirements.
  • Get Data Practices in Order: Strengthen security measures, limit unnecessary data collection, and ensure proper consent mechanisms.
  • Engage with Policymakers: Actively participate in discussions with regulators, industry bodies, and AI research institutions to help shape fair policies.
  • Train Your People: Provide ongoing education on AI regulations so employees understand the legal and ethical landscape.

Conclusion

AI regulation is a constantly evolving field, and staying informed is key. Whether you’re a business leader, developer, or policymaker, understanding AI laws and ethical guidelines can help you navigate the landscape effectively.
