IndiaAI Unveils Five Innovative Projects for AI Safety and Trust

The Central government’s artificial intelligence initiative, IndiaAI, has selected five groundbreaking projects aimed at reinforcing its “Safe and Trusted AI” framework. These projects focus on critical areas such as deepfake detection, bias mitigation, and generative AI security.

Background and Purpose

This announcement follows the second round of Expressions of Interest (EoI), launched in December 2024 under the IndiaAI Mission. The mission's primary goal is to ensure the ethical and transparent deployment of AI systems within India.

According to the Ministry of Electronics and Information Technology (MeitY), the initiative received over 400 proposals from a diverse range of stakeholders, including academic institutions, startups, research organizations, and civil society. A multi-stakeholder committee meticulously reviewed these submissions, ultimately shortlisting five innovative projects.

Selected Projects for Safe and Trusted AI Development

The five projects selected for further development include:

  • Saakshya: A multi-agent framework for deepfake detection and governance, developed by IIT Jodhpur and IIT Madras.
  • AI Vishleshak: A tool developed by IIT Mandi and the Directorate of Forensic Services, Himachal Pradesh, which enhances audio-visual deepfake and signature forgery detection using explainable AI.
  • Real-Time Voice Deepfake Detection System: Developed by IIT Kharagpur.
  • Evaluating Gender Bias in Agriculture LLMs: A project by Digital Futures Lab and Karya that aims to create digital public goods for benchmarking and fair data work.
  • Anvil: A penetration testing and evaluation tool for generative AI systems, led by Globals ITES Pvt Ltd and IIIT Dharwad.

Significance of the Selected Projects

IndiaAI has emphasized that these initiatives will bolster its commitment to establishing a secure, transparent, and inclusive AI ecosystem. The projects combine real-time forensics, resilience testing, and bias audits, helping ensure that AI models deployed in India are accountable and reliable.

A senior official associated with the IndiaAI Mission stated, “These projects represent India’s drive to not only adopt AI but to shape it responsibly. By developing indigenous tools for detection, testing, and fairness evaluation, India aims to lead globally in safe AI innovation.”

Broader Mission of IndiaAI

IndiaAI operates as an Independent Business Division (IBD) under MeitY's Digital India Corporation and serves as the implementation agency for the national AI strategy. Its objectives include improving access to artificial intelligence, promoting ethical AI practices, and fostering technological self-reliance.

The Safe and Trusted AI pillar is part of a multi-dimensional approach that encompasses building foundational models, advancing AI compute infrastructure, and fostering responsible AI adoption across various sectors, including agriculture, healthcare, and public safety.

Recently, IndiaAI has partnered with leading research institutes and private sector players to advance AI governance and data ethics frameworks, highlighting a growing emphasis on balancing innovation with responsibility in India’s rapidly expanding digital ecosystem.

The IndiaAI Mission has been approved with a budget outlay of ₹10,371.92 crore over five years.

Conclusion

The selected projects illustrate a proactive approach towards managing the challenges posed by AI technologies, ensuring their safe deployment while addressing critical issues such as bias and misinformation. As these initiatives progress, they will play a pivotal role in shaping a responsible AI landscape in India.
