India’s Tougher Rules on AI in Social Media Spur Censorship Fears
The Modi government in India is poised to implement tougher regulations on the use of artificial intelligence in social media, aiming to combat the rising tide of disinformation. The proposed measures also include a potential ban on children’s access to social media platforms.
Combating Disinformation
The initial move seeks to address the increasing prevalence of fake videos and other fraudulent content, yet it raises alarms about potential censorship and the erosion of digital freedoms.
These regulations are set to take effect on February 20, coinciding with the conclusion of an international AI summit in New Delhi featuring prominent global tech figures. Social media platforms will now have three hours (down from 36) to comply with government takedown orders, in an effort to prevent harmful content from spreading rapidly.
Challenges for Social Media Giants
With over a billion internet users, India is struggling with the overwhelming presence of AI-generated disinformation on social media. Platforms such as Instagram, Facebook, and X face heightened scrutiny as they navigate growing public anxiety regarding the misuse of AI, including the dissemination of misinformation and sexualized imagery of children.
However, rights groups caution that stringent oversight of AI could jeopardize freedom of speech. Under Prime Minister Narendra Modi, accusations have emerged regarding the curtailing of freedom of expression, particularly aimed at activists and opposition figures, which the government has denied.
Compressed Timeframes for Compliance
The Internet Freedom Foundation (IFF), a digital rights organization, has expressed concerns regarding the rapid compliance requirements placed on social media platforms, warning that the shortened timeframe for responding to takedown notices may compel them to act as “rapid-fire censors.”
Labelling Requirements
New regulations mandate that platforms must label any content that is “created, generated, modified, or altered” using computer resources, except for material modified during standard editing processes. This means that synthetic media must be clearly and permanently marked, raising questions about the effectiveness of such labels.
Apar Gupta, head of the IFF, said the stringent timelines make meaningful human review impractical, shifting control significantly away from users.
Automated Censorship Concerns
Critics argue that the rules represent a form of automated censorship. Many internet users are unaware of governmental orders to delete their content, raising alarms about transparency and user rights.
Responsibility Shift to Platforms
The new rules place responsibility for content monitoring on the platforms themselves. Users must declare if their content is synthetic, while platforms must verify and label such material prior to publication. However, the broad parameters for takedown requests leave much open to interpretation, potentially affecting satire, parody, and political commentary.
This shift to upstream responsibility has sparked concerns about collateral censorship, as platforms may err on the side of caution and over-remove content.
Age Restrictions Under Discussion
In addition to the AI regulations, India is exploring age-based restrictions for social media users. IT Minister Ashwini Vaishnaw has indicated that discussions are ongoing regarding limitations similar to those implemented in countries like Australia and France, where young teens are banned from popular platforms.
Vaishnaw emphasized the need for more robust regulations on deepfakes and the protection of children in the digital landscape, acknowledging the growing challenges posed by such technologies.
The government aims to address these issues, asserting that stronger regulation is essential to safeguard society.