Tightening AI Regulations and the Ongoing Challenge of Deepfake Content

New MeitY Rules Tighten AI Oversight, But Obscene Deepfakes Continue to Haunt Advertisers

Just days after the Union Ministry of Electronics and Information Technology (MeitY) notified sweeping amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to rein in harmful synthetically generated content, industry experts say regulation alone will not neutralise the growing flood of AI-generated obscenity online.

The legal scaffolding is now sharper, mandating traceability, labelling, and rapid takedowns. Even so, sexually explicit, non-consensual, and borderline offensive AI content continues to circulate across social platforms, creating what experts describe as a “persistent brand safety and governance crisis” for advertisers.

Revised Rules and Their Implications

Under the revised rules, intermediaries must clearly label AI-generated material, embed traceable metadata, and remove unlawful content within three hours of being flagged by authorities, a significant tightening from the earlier 36-hour window. The amendments explicitly target deepfakes, non-consensual sexual imagery, and synthetic manipulation, placing strict due diligence obligations on platforms and AI service providers.
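
What the labelling and metadata obligations could look like in practice is easiest to see in code. Below is a minimal Python sketch that stamps a generated PNG with a visible disclosure and machine-readable provenance fields; the metadata keys and the PNG text-chunk approach are illustrative assumptions, since the amendments do not prescribe a specific technical standard.

```python
# Minimal sketch: visible label plus machine-readable provenance for an
# AI-generated image. Requires Pillow (pip install pillow). The metadata
# schema ("ai_generated", "generator", "created_utc") is hypothetical,
# not a MeitY-prescribed format.
from datetime import datetime, timezone

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src: str, dst: str, generator: str) -> None:
    img = Image.open(src).convert("RGB")

    # Visible disclosure: a simple text stamp in the bottom-left corner.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 24), "AI-GENERATED", fill="white")

    # Machine-readable provenance, stored as PNG text chunks so that
    # downstream platforms can detect and surface the disclosure.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    meta.add_text("created_utc", datetime.now(timezone.utc).isoformat())

    img.save(dst, "PNG", pnginfo=meta)

# Example usage (assumes render.png exists in the working directory):
label_synthetic_image("render.png", "render_labelled.png", "example-model-v1")
```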

Yet enforcement remains the central challenge. MeitY recently issued notices to Elon Musk-owned X over the alleged misuse of its AI chatbot Grok, which allowed users to generate and share sexually explicit or revealing AI images of women, and in some cases minors, through simple prompt inputs. Officials termed the issue a violation of women’s dignity and privacy, demanding audits and stronger safety guardrails.

The Grey Zone of AI-Generated Content

The episode underscores a deeper concern: even as norms for synthetically generated information (SGI) and intermediary rules are strengthened, generative AI tools are making the creation of explicit and exploitative content faster, cheaper, and more scalable. Dhruv Garg, Partner at IGAP, stated, “The more difficult area is the grey zone of sexualised, provocative, or not kid-appropriate AI influencer content that may be distasteful or ethically questionable but doesn’t clearly meet the legal test for obscenity.”

This is less a criminal law issue and more about platform governance, advertising standards, and social norms. Influencers have long used sexuality and visual appeal as marketing tools; AI models are now a cheaper, more scalable version of the same attention economy. In this space, the pressure points are demonetisation, age-gating, content labelling, and advertiser expectations rather than police action.
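
To make those pressure points concrete, here is a rough, hypothetical sketch of how a platform might map moderation signals for borderline synthetic content onto governance actions short of removal. The signal names and thresholds are assumptions, not any platform’s actual policy.

```python
# Hypothetical sketch: routing borderline synthetic content to
# governance actions (labelling, age-gating, demonetisation) rather
# than takedown. Thresholds and signal names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ContentSignals:
    sexual_suggestiveness: float  # assumed classifier score in [0, 1]
    is_synthetic: bool            # e.g. read from embedded provenance metadata

@dataclass
class GovernanceDecision:
    actions: list[str] = field(default_factory=list)

def govern(signals: ContentSignals) -> GovernanceDecision:
    decision = GovernanceDecision()
    if signals.is_synthetic:
        decision.actions.append("show_ai_label")
    if signals.sexual_suggestiveness >= 0.8:
        # Borderline but not clearly unlawful: restrict, don't remove.
        decision.actions += ["age_gate", "demonetise", "exclude_from_ads"]
    elif signals.sexual_suggestiveness >= 0.5:
        decision.actions.append("exclude_from_ads")
    return decision

# Example: a suggestive, declared-synthetic post is labelled, age-gated,
# and pulled from ad inventory, but stays up.
print(govern(ContentSignals(sexual_suggestiveness=0.85, is_synthetic=True)).actions)
```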

The distinction is critical for brands. While criminally unlawful content invites regulatory scrutiny and takedown mandates, the more prevalent challenge lies in algorithmically amplified “borderline” material that may not violate penal provisions but risks reputational backlash.

Accountability of Platforms

“Platforms, however, cannot hide behind neutrality if they are algorithmically amplifying and monetising borderline or harmful content, especially where young audiences are involved,” Garg added. “If content is illegal, platforms risk losing legal protections if they do not act. If it is legal but controversial, the accountability becomes regulatory and reputational.”

For advertisers, the threat is twofold: brand adjacency to explicit synthetic content and the misuse of brand ambassadors’ likeness through deepfakes. Influencer marketing, now central to digital brand strategy, is particularly exposed. A 2024 Humanise AI study found that a significant proportion of AI deepfake tools produce not-safe-for-work (NSFW) content, with many influencers globally reporting that their likeness has been manipulated without consent.

The Stance of Industry Leaders

Sahil Chopra, Co-Founder and CEO at iCubesWire, remarked, “Even though platforms are businesses, they don’t get a free pass to profit from content that breaks guidelines.” He emphasised that platforms are part of the problem if their systems intentionally boost inappropriate AI material just to keep people glued to screens and watching ads. This interpretation, if adopted more widely by regulators, could redefine safe harbour protections for intermediaries.

The new MeitY rules attempt to close these gaps by requiring embedded metadata in AI-generated material and mandating user declarations when synthetic tools are used. Automated safeguards against illegal AI content are now an explicit compliance requirement. However, industry observers note that detection technology often lags behind generation tools, making real-time moderation complex.
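
As a rough illustration of why that lag matters, the sketch below gates uploads behind a detector score: near-certain violations are blocked automatically, while ambiguous cases fall back to human review. The detector, thresholds, and function names are assumptions, not a prescribed or real platform design.

```python
# Hypothetical sketch: an automated pre-publication safeguard with a
# human-review fallback for the cases detection models get wrong.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool  # the declaration the new rules mandate

def detector_score(upload: Upload) -> float:
    """Stand-in for a real NSFW/deepfake classifier (hypothetical)."""
    return 0.3  # placeholder score in [0, 1]

BLOCK_THRESHOLD = 0.9   # assumed: near-certain violations are auto-blocked
REVIEW_THRESHOLD = 0.5  # assumed: ambiguous cases need a human moderator

def gate(upload: Upload) -> str:
    score = detector_score(upload)
    if score >= BLOCK_THRESHOLD:
        return "blocked"
    if score >= REVIEW_THRESHOLD:
        return "held_for_human_review"
    # Publish, surfacing the user's declaration as a visible label.
    return "published_with_ai_label" if upload.user_declared_synthetic else "published"

print(gate(Upload("vid_001", user_declared_synthetic=True)))
```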

Brands Navigating AI Influencers

For brands experimenting with AI influencers or synthetic creative assets, caution appears to be the prevailing strategy. Pradeep Patteti, Co-Founder & CEO of Flutch, stated, “AI-generated content is a powerful tool for creativity and connection, but we must use it responsibly.” He suggested that brands can turn this challenge into an opportunity by partnering with creators and platforms that prioritise transparency, consent, and community standards.

“At Flutch, we believe AI should enable smarter, safer, and more meaningful brand–creator collaborations. When we focus on contextual relevance, ethical AI use, and strong moderation, we not only protect our brands but also build trust with our audiences while embracing the innovation AI offers,” he added.

Conclusion: The Road Ahead

The regulatory tightening also signals a broader policy shift. By compressing takedown timelines to three hours and explicitly banning AI-generated sexual and non-consensual imagery, the government has indicated that synthetic harm will be treated with urgency comparable to other forms of unlawful content.

However, experts caution that compliance will depend heavily on platform-level implementation. Automated filters, human moderation, advertiser pressure, and public scrutiny will collectively determine whether the new framework curbs misuse or merely raises paperwork standards.

Ultimately, as AI blurs the line between authentic and fabricated imagery, the reputational stakes for brands have never been higher. In a digital ecosystem driven by virality and monetised attention, SGI rules may define the boundaries, but sustained cooperation between regulators, platforms, and industry will determine whether those boundaries hold.
