Call for Focused Approach to AI Regulation in India

India is at a pivotal juncture in the regulation of artificial intelligence (AI), with a pressing need for a unified regulatory framework that aligns with its robust services-based economy. This economy spans diverse sectors, including IT services, telecommunications, e-commerce, healthcare, and financial services. The scale and diversity of these sectors make India a significant data repository, capable of driving the development of AI, specifically Generative AI (GenAI).

The Indian government has recognized the transformative and economic potential of AI, initiating various programs such as the IndiaAI Mission, IndiaAI Dataset Platform, and AIKosha. These initiatives aim to enhance the delivery of essential services and capitalize on AI’s growth potential.

Current Regulatory Landscape

Despite these initiatives, the national approach to AI governance remains fragmented, with the government attempting to balance innovation against regulatory oversight. While there is a clear push for AI-driven economic growth, the regulatory framework is still evolving and struggles to keep pace with rapid advances in the technology.

Unlike jurisdictions such as the European Union, which have established prescriptive AI-specific laws, India’s regulatory approach has been somewhat reactive. The country has yet to establish a comprehensive legal framework tailored specifically to AI governance, relying instead on existing laws that are often interpreted through various institutional lenses.

Challenges in AI Regulation

Several critical legal issues remain unresolved in the context of AI regulation:

  • AI Bias and Algorithmic Accountability: AI systems have been criticized for exhibiting bias, particularly in sectors like hiring, lending, law enforcement, and healthcare. The current legal framework lacks provisions to ensure fairness, transparency, and accountability in AI systems.
  • Data Privacy and AI Training: The Digital Personal Data Protection Act, 2023 (DPDP Act) has indirect implications for AI development, particularly concerning personal data usage. The lack of clarity regarding public data and data holders’ rights poses challenges for AI training methodologies.
  • Copyright Issues: The use of copyrighted materials for AI training raises concerns regarding derivative works and potential infringement actions. The current legal stance on the copyrightability of AI-generated content remains ambiguous, complicating matters for businesses and creators.
  • Intermediary Liability: Whether AI models can be classified as intermediaries requires careful legal scrutiny. The existing intermediary framework under the Information Technology Act, 2000 may not adequately reflect how AI systems operate, necessitating updates to capture their unique characteristics.
  • Responsibility Allocation: Determining liability in the deployment of AI systems poses significant challenges, with ambiguity surrounding the responsibilities of developers, deployers, and users.

Conclusion

India has made significant strides in AI policy development, yet definitive legislation remains elusive. Although regulatory challenges have been widely discussed, concrete measures may take time to materialize. The proposed Digital India Act (DIA), which is expected to address high-risk AI systems, reflects the government’s acknowledgment of these challenges but remains in the drafting phase.

Moving forward, a balanced and thoughtful approach to AI-specific legislation is crucial. Such legislation would foster business certainty, protect user rights, and enable responsible innovation in a rapidly evolving technological landscape.
