Focused Regulation for AI Advancement in India

India is at a pivotal juncture in the regulation of artificial intelligence (AI), with a pressing need for a unified regulatory framework that aligns with its robust services-based economy, which spans diverse sectors including IT services, telecommunications, e-commerce, healthcare, and financial services. This breadth of digital activity positions India as a significant data repository, well placed to drive the development of AI, and of generative AI (GenAI) in particular.

The Indian government has recognized AI's transformative economic potential, launching programs such as the IndiaAI Mission, the IndiaAI Dataset Platform, and AIKosha. These initiatives aim to improve the delivery of essential services and capitalize on AI's growth potential.

Current Regulatory Landscape

Despite these initiatives, the national approach remains fragmented, with the government attempting to balance innovation and regulatory oversight. While there is a push for AI-driven economic growth, the regulatory framework is still evolving, struggling to keep pace with the rapid advancements in technology.

Unlike jurisdictions such as the European Union, which have established prescriptive AI-specific laws, India’s regulatory approach has been somewhat reactive. The country has yet to establish a comprehensive legal framework tailored specifically to AI governance, relying instead on existing laws that are often interpreted through various institutional lenses.

Challenges in AI Regulation

Several critical legal issues remain unresolved in the context of AI regulation:

  • AI Bias and Algorithmic Accountability: AI systems have been criticized for exhibiting bias, particularly in sectors like hiring, lending, law enforcement, and healthcare. The current legal framework lacks provisions to ensure fairness, transparency, and accountability in AI systems.
  • Data Privacy and AI Training: The Digital Personal Data Protection Act, 2023 (DPDP Act) has indirect implications for AI development, particularly concerning personal data usage. The lack of clarity regarding public data and data holders’ rights poses challenges for AI training methodologies.
  • Copyright Issues: The use of copyrighted materials for AI training raises concerns regarding derivative works and potential infringement actions. The current legal stance on the copyrightability of AI-generated content remains ambiguous, complicating matters for businesses and creators.
  • Intermediary Liability: The classification of AI models as intermediaries requires careful legal scrutiny. Current regulations may not adequately cover the realities of AI systems, necessitating updates to reflect their unique characteristics.
  • Responsibility Allocation: Determining liability in the deployment of AI systems poses significant challenges, with ambiguity surrounding the responsibilities of developers, deployers, and users.

Conclusion

India has made significant strides in AI policy development, yet definitive legislation remains uncertain. Although regulatory challenges have been widely discussed, concrete measures may take time to materialize. The proposed Digital India Act (DIA), intended to regulate high-risk AI systems, reflects the government’s acknowledgment of these challenges but remains in the drafting stage.

Moving forward, a balanced and thoughtful approach to AI-specific legislation is crucial. Such measures will foster business certainty, support user rights, and enable responsible innovation in a rapidly evolving technological landscape.
