India Could Lead a Third Way in AI Geo-governance
In what has become a geo-technological race, countries with skilled human capital and abundant resources are striving to expand their technological capabilities. The fast-evolving landscape of artificial intelligence (AI) sits at the center of this competition. However, the race has its own fault lines: ethical dilemmas, regulatory gaps, widening inequalities, and the risk of misuse or unintended consequences.
Countries such as China, the United States, and India have entered an unprecedented AI arms race. The UN Secretary-General has warned that the threat posed by AI is on a par with that of nuclear war, emphasizing the need for a shared, ethical global approach. Multilateral cooperation is necessary to curb irresponsible uses of AI.
The Regulatory Landscape
The regulatory frameworks surrounding AI vary significantly across countries. The U.S. relies on a patchwork of state laws and corporate self-regulation, while China has developed a tightly centralized, state-led regulatory model. The European Union (EU) is emerging with a risk-based legal framework. This fragmentation raises questions about how large technology companies will uphold moral and ethical standards amid private competition.
China’s approach includes pre-deployment scrutiny, algorithmic registration, and traceability requirements, embedding AI governance within its administrative priorities. In contrast, the U.S. has struggled to adopt a cohesive national policy framework because of discordant policies across its 50 states.
EU as a Norm-setter
The EU aims to position itself as a global norm-setter through a rights-centric framework. The EU’s AI Act categorizes AI applications by risk, imposing strict obligations only on high-risk uses, such as biometric surveillance and welfare allocation. This model emphasizes human rights and data protection, even at the cost of slower innovation.
Complementing this, the AI Liability Directive clarifies accountability for harm caused by AI systems. Together, these instruments aim to promote innovation by embedding market development within a clear legal structure, with stringent compliance reserved for high-risk applications.
Global South and India’s Position
The Global South finds itself in a precarious position in the AI era, often becoming a norm-taker rather than a norm-maker. Countries such as Brazil, South Africa, and Kenya rely on fragmented rules and risk becoming unregulated testing grounds for AI systems.
India’s current reliance on the Information Technology Act, 2000 and the Digital Personal Data Protection (DPDP) Rules, 2025 is insufficient for addressing core generative-AI risks, such as model safety and algorithmic bias. The absence of a comprehensive framework limits India’s ability to chart a course between China’s control-heavy model and the deregulated U.S. approach.
A Third Path for India
India must pursue a third path of regulated openness, combining innovation with credible safeguards. This would involve a risk-tiered regulatory framework, adapted from the EU model but tailored to Indian realities. High-risk domains, such as elections and biometric surveillance, should face stringent rules, while low-risk applications can innovate with limited regulatory burdens.
India’s strategic advantage lies in its Digital Public Infrastructure (DPI), which can serve as a testbed for AI systems. Platforms such as Aadhaar and UPI enable inclusive deployment of AI technologies, shifting the focus from purely commercial innovation to the public good.
Ultimately, AI regulation is about who wields digital power over data and algorithms. If India delays regulation in the name of innovation, it risks ceding control to foreign platforms. The emergence of initiatives such as the World Artificial Intelligence Cooperation Organization (WAICO) places India at a strategic crossroads, necessitating active engagement in shaping global AI norms.
As the defining technology of this era, AI presents significant risks. India must therefore engage urgently in global AI governance on its own terms, without ceding control to external rule-makers.