Explained: As Government Tightens AI Content Rules, What Must Social Media Platforms & Others Do
The central government has issued new guidelines requiring social media platforms and other entities to clearly label all artificial intelligence–generated or modified content. The change comes through amendments made by the Ministry of Electronics and Information Technology (MeitY) to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Which Rules Have Changed?
India's intermediary framework was first established in 2011 and overhauled in 2021, when new rules expanded the due diligence obligations of significant social media intermediaries and introduced regulations for digital news publishers and curated audio-visual content. The latest amendments tighten these rules further, particularly for synthetically generated information (SGI), which includes deepfakes.
What Has Changed?
Under the new regulations, intermediaries are required to ensure that AI-generated or modified content is clearly labeled or identifiable. This can be achieved through:
- Visible disclosures
- Embedded metadata
The amendments allow technical measures such as embedded metadata to serve as identifiers, making compliance practical while ensuring traceability. Once applied, these identifiers cannot be removed or altered.
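As a rough illustration of what an embedded identifier could look like, the sketch below attaches a synthetic-content disclosure to a PNG image's metadata using Pillow. The field names and values are hypothetical; the rules do not prescribe a schema, and a plain metadata tag like this is easy to strip, so real deployments would need more robust watermarking or provenance credentials to meet the non-removability requirement.

```python
# Minimal sketch, assuming a PNG output and Pillow installed.
# Field names ("ai_label", "ai_generator") are hypothetical examples,
# not a format prescribed by the amended rules.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str) -> None:
    """Embed a synthetic-content disclosure in a PNG's metadata."""
    image = Image.open(src_path)

    meta = PngInfo()
    # Human-readable disclosure plus machine-readable provenance hints.
    meta.add_text("ai_label", "synthetically-generated")
    meta.add_text("ai_generator", "example-model-v1")  # assumed value
    meta.add_text("ai_label_version", "1")

    image.save(dst_path, pnginfo=meta)

if __name__ == "__main__":
    label_ai_image("output.png", "output_labelled.png")
```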
Furthermore, platforms must warn users about the potential consequences of AI misuse at least once every three months. The government has also mandated the deployment of automated tools to detect and prevent the dissemination of illegal, sexually exploitative, or deceptive AI-generated content.
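The rules do not specify how such automated tools must work. One common building block is matching uploads against a database of known prohibited material; the sketch below illustrates that idea with a simple file-hash check. It is a hypothetical example only: real platforms combine many signals, including perceptual hashing, machine-learning classifiers, and human review.

```python
# Hypothetical sketch of one building block of automated screening:
# comparing an upload's hash against a blocklist of known prohibited files.
# Nothing here is mandated by the rules.
import hashlib

KNOWN_PROHIBITED_HASHES = {
    # Placeholder entry; in practice these come from a curated database.
    "3f786850e387550fdab836ed7e6dc881de23001b",
}

def sha1_of_file(path: str) -> str:
    """Return the SHA-1 digest of a file, read in chunks."""
    digest = hashlib.sha1()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(path: str) -> bool:
    """Flag an upload whose exact contents match a known prohibited file."""
    return sha1_of_file(path) in KNOWN_PROHIBITED_HASHES
```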
New Enforcement Measures
Previously, intermediaries had a 36-hour window to comply with takedown orders. Under the new enforcement measures, platforms must remove or disable access to flagged AI-generated content within three hours of receiving an order from a court or a government authority.
Who Are Intermediaries?
Intermediaries are entities that store or transmit data on behalf of end users. This category includes:
- Telecom service providers (e.g., Jio)
- Online marketplaces (e.g., Amazon)
- Search engines (e.g., Google)
- Social media platforms (e.g., Meta)
How Will the Rules Be Enforced?
The initial phase of enforcement will focus on significant social media intermediaries with five million or more registered users in India. Consequently, the rules will predominantly affect foreign players such as Meta and X (formerly Twitter).
Why Now?
These measures come in the wake of the recent Grok controversy, in which an AI chatbot generated non-consensual explicit deepfakes. The changes follow consultations between the government and industry bodies such as IAMAI and Nasscom.
Ultimately, these rules aim to ensure that platforms inform users about SGI and identify those involved in producing such content, thus enhancing accountability in the digital landscape.