How India’s New AI Framework Targets Risks, Bias, and Misuse
Ahead of the five-day AI Impact Summit 2026, the Indian government has unveiled its first set of comprehensive artificial intelligence (AI) governance guidelines.
The framework rests on a set of guiding principles and existing laws, and introduces new oversight bodies to balance innovation with safeguards.
The move signals India’s commitment to responsible AI governance without enacting a standalone law: it addresses issues such as bias, misuse, and lack of transparency in AI systems while avoiding barriers to technological adoption.
Guidelines Overview
The newly released guidelines detail how AI should be developed and deployed in sectors like healthcare, education, agriculture, finance, and governance.
The framework is based on seven broad principles, or sutras:
- Trust as the foundation
- People First
- Innovation over Restraint
- Fairness and Equity
- Accountability
- Understandable by Design
- Safety, Resilience, and Sustainability
These principles emphasize that AI systems should support human decision-making, operate transparently, avoid discrimination, and include clear safeguards.
Legal Framework
A key aspect of the guidelines is their reliance on existing laws. Officials have indicated that many AI-related risks are already covered under current legal provisions, including IT rules, data protection laws, and criminal statutes.
Instead of enacting a separate AI law at this time, the government has opted for periodic reviews and targeted amendments as technology evolves.
Proposed National Oversight Bodies
The framework proposes the establishment of national-level bodies to oversee AI governance. These include:
- An AI Governance Group for policy coordination across ministries
- A Technology and Policy Expert Committee for specialist advice
- An AI Safety Institute focusing on testing standards, safety research, and risk assessment
Responsibilities of AI Developers and Deployers
The guidelines also define responsibilities for AI developers and deployers, such as:
- Transparency reports
- Clear disclosures when using AI-generated content
- Grievance redressal mechanisms for people affected by AI systems
High-risk applications, especially those impacting safety, rights, or livelihoods, are expected to follow stronger safeguards with human oversight.
Conclusion
The guidelines reflect India’s belief that AI should not be limited to a few companies or countries but should be widely deployed to address real-world problems while remaining trustworthy.
By balancing innovation with safeguards, the government hopes to position India as not just a major user of AI but also a global leader in responsible and inclusive governance, aligned with the vision of ‘Viksit Bharat 2047’.