New MeitY Rules Tighten AI Oversight, But Obscene Deepfakes Continue to Haunt Advertisers
Just days after the Union Ministry of Electronics and Information Technology (MeitY) notified sweeping amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to rein in harmful synthetically generated content, industry experts say regulation alone will not neutralise the growing flood of AI-generated obscenity online.
The legal scaffolding is now sharper, mandating traceability, labelling, and rapid takedowns. Even so, sexually explicit, non-consensual, and borderline offensive AI content continues to circulate across social platforms, creating what experts describe as a “persistent brand safety and governance crisis” for advertisers.
Revised Rules and Their Implications
Under the revised rules, intermediaries must clearly label AI-generated material, embed traceable metadata, and remove unlawful content within three hours of being flagged by authorities, a significant tightening from the earlier 36-hour window. The amendments explicitly target deepfakes, non-consensual sexual imagery, and synthetic manipulation, placing strict due diligence obligations on platforms and AI service providers.
Yet enforcement remains the central challenge. MeitY recently issued notices to Elon Musk-owned X over the alleged misuse of its AI chatbot Grok, which allowed users to generate and share sexually explicit or revealing AI images of women, sometimes even minors, through simple prompt inputs. Officials termed the issue a violation of women’s dignity and privacy, demanding audits and stronger safety guardrails.
The Grey Zone of AI-Generated Content
The episode underscores a deeper concern: even as norms for synthetically generated information (SGI) and intermediary rules are strengthened, generative AI tools are making the creation of explicit and exploitative content faster, cheaper, and more scalable. Dhruv Garg, Partner at IGAP, stated, “The more difficult area is the grey zone of sexualised, provocative, or not kid-appropriate AI influencer content that may be distasteful or ethically questionable but doesn’t clearly meet the legal test for obscenity.”
This is less a criminal law issue and more a matter of platform governance, advertising standards, and social norms. Influencers have long used sexuality and visual appeal as marketing tools; AI models are now a cheaper, more scalable version of the same attention economy. In this space, the pressure points are demonetisation, age-gating, content labelling, and advertiser expectations rather than police action.
The distinction is critical for brands. While criminally unlawful content invites regulatory scrutiny and takedown mandates, the more prevalent challenge lies in algorithmically amplified “borderline” material that may not violate penal provisions but risks reputational backlash.
Accountability of Platforms
“Platforms, however, cannot hide behind neutrality if they are algorithmically amplifying and monetising borderline or harmful content, especially where young audiences are involved,” Garg added. “If content is illegal, platforms risk losing legal protections if they do not act. If it is legal but controversial, the accountability becomes regulatory and reputational.”
For advertisers, the threat is twofold: brand adjacency to explicit synthetic content and the misuse of brand ambassadors’ likeness through deepfakes. Influencer marketing, now central to digital brand strategy, is particularly exposed. A 2024 Humanise AI study found that a significant proportion of AI deepfake tools produce not-safe-for-work (NSFW) content, with many influencers globally reporting that their likeness has been manipulated without consent.
The Stance of Industry Leaders
Sahil Chopra, Co-Founder and CEO at iCubesWire, remarked, “Even though platforms are businesses, they don’t get a free pass to profit from content that breaks guidelines.” He emphasised that platforms are part of the problem if their systems intentionally boost inappropriate AI material just to keep people glued to screens and watching ads. This interpretation, if adopted more widely by regulators, could redefine safe harbour protections for intermediaries.
The new MeitY rules attempt to close these gaps by requiring embedded metadata in AI-generated material and mandating user declarations when synthetic tools are used. Automated safeguards against illegal AI content are now an explicit compliance requirement. However, industry observers note that detection technology often lags behind generation tools, making real-time moderation complex.
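To make the labelling idea concrete, here is a minimal sketch of what attaching a machine-readable synthetic-content declaration to an image could look like, using PNG text chunks via the Python Pillow library. This is an illustrative assumption, not the mechanism prescribed by the rules: the field names (ai_generated, generator) are hypothetical and do not come from MeitY's text or from any provenance standard.

```python
# Minimal sketch: embedding a machine-readable "synthetically generated"
# label into a PNG via Pillow text chunks. Field names are illustrative
# assumptions, not terms defined by the IT Rules or a provenance standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save an image with metadata declaring it AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # hypothetical label key
    meta.add_text("generator", generator)  # e.g. the model or tool name
    img.save(dst_path, pnginfo=meta)

def is_labelled_synthetic(path: str) -> bool:
    """Check whether the hypothetical label is present."""
    img = Image.open(path)
    # PNG text chunks are exposed via the .text mapping on PNG images;
    # other formats may not have this attribute at all.
    return getattr(img, "text", {}).get("ai_generated") == "true"

if __name__ == "__main__":
    # Assumes a local file "generated.png" exists.
    label_as_synthetic("generated.png", "labelled.png", "example-model-v1")
    print(is_labelled_synthetic("labelled.png"))  # True
```

The weakness of such an approach is also why observers are sceptical: plain metadata of this kind is stripped by a simple re-encode or screenshot, so a label that is not cryptographically bound to the content offers little protection once material is reshared.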
Brands Navigating AI Influencers
For brands experimenting with AI influencers or synthetic creative assets, caution appears to be the prevailing strategy. Pradeep Patteti, Co-Founder & CEO of Flutch, stated, “AI-generated content is a powerful tool for creativity and connection, but we must use it responsibly.” He suggested that brands can turn this challenge into an opportunity by partnering with creators and platforms that prioritise transparency, consent, and community standards.
“At Flutch, we believe AI should enable smarter, safer, and more meaningful brand–creator collaborations. When we focus on contextual relevance, ethical AI use, and strong moderation, we not only protect our brands but also build trust with our audiences while embracing the innovation AI offers,” he added.
Conclusion: The Road Ahead
The regulatory tightening also signals a broader policy shift. By compressing takedown timelines to three hours and explicitly banning AI-generated sexual and non-consensual imagery, the government has indicated that synthetic harm will be treated with urgency comparable to other forms of unlawful content.
However, experts caution that compliance will depend heavily on platform-level implementation. Automated filters, human moderation, advertiser pressure, and public scrutiny will collectively determine whether the new framework curbs misuse or merely raises paperwork standards.
Ultimately, as AI blurs the line between authentic and fabricated imagery, the reputational stakes for brands have never been higher. In a digital ecosystem driven by virality and monetised attention, SGI rules may define the boundaries, but sustained cooperation between regulators, platforms, and industry will determine whether those boundaries hold.