Meta Flags Operational Challenges in India’s 3-Hour Takedown Rule
Meta has raised concerns about the operational feasibility of India’s new three-hour deadline for taking down unlawful content. Despite these reservations, the company says it remains committed to complying with the country’s evolving digital and AI regulations.
Operational Feasibility Concerns
Speaking at the India AI Impact Summit, Rob Sherman, vice president of policy and deputy chief privacy officer at Meta, said the company is philosophically aligned with the government’s safety objectives but cautioned that the compressed timelines pose practical challenges. Sherman stated:
“Philosophically, we’re very aligned with the goals. But operationally, three hours is going to be really challenging.”
He explained that every government request must be reviewed and validated before the company acts, a process that may not always fit within a three-hour window.
Amendments to IT Rules
The Indian government recently amended the Information Technology rules, mandating that platforms remove certain unlawful content within three hours of notification, a significant reduction from the previous 36-hour deadline. Additionally, the resolution time for user-reported grievances has been cut from 15 days to seven days, with non-consensual intimate imagery required to be removed within two hours.
India: Meta’s Largest AI Market
Sherman described India as the largest market for Meta AI, citing strong adoption of Llama, the company’s open-source family of large language models. Indian developers have built local variants of the models, and businesses using Meta’s AI tools reportedly see substantial returns on their advertising spend.
Personal Super Intelligence Vision
Sherman outlined CEO Mark Zuckerberg’s vision of “personal super intelligence”: AI systems that perform tasks at or beyond human capability, tailored to individual users. The aim is to democratize access to personalized AI assistants in domains such as health coaching and career advice.
AI Regulation and Consultative Approach
Sherman commended the Indian government’s consultative approach to AI policymaking and stressed the need to balance innovation with user safety. He cautioned against locking in rules that rapid technological change could render outdated, contrasting this approach with the European Union’s experience with the Artificial Intelligence Act.
Data Protection and Localisation Challenges
Regarding India’s Digital Personal Data Protection (DPDP) Act, Sherman acknowledged Meta’s established procedures for compliance but noted that India’s shorter timelines present unique challenges. He also discussed the complexities of data localisation, particularly for platforms like Facebook and Instagram that rely on cross-border communication.
Teen Safety and Age-Based Regulations
On age-based social media regulations, Sherman said that while the intent to protect teenagers is valid, blanket bans risk pushing them toward less protected platforms. He advocated instead for protections differentiated by age, an approach Meta already implements.
AI in Content Moderation
Sherman noted that AI is increasingly used to detect harmful content, including child sexual abuse material (CSAM), and has significantly accelerated identification compared with human review alone.
Backing IndiaAI Mission
Looking forward to the IndiaAI Mission 2.0, Sherman praised the government’s focus on data availability and infrastructure development as essential for scaling AI responsibly. He expressed enthusiasm for initiatives like curated public datasets, which enable local innovation without duplicating foundational research.
In summary, while Meta supports India’s regulatory goals, the company is pressing for practical timelines and continued consultation to ensure these initiatives can be implemented effectively.