Superficial Regulation of Artificial Intelligence
Earlier this year, discussions surrounding the need for ethical AI governance gained momentum at an international conference. This conversation came on the heels of recent amendments to the Prevention of Electronic Crimes Act, 2016, which were perceived as infringing upon human rights, and the implementation of the Digital Nation Pakistan Act. The latter aims to advance digital public infrastructure without requisite safeguards for data privacy or against exclusion.
The call for a framework governing AI was notably vague, lacking specifics on what such governance would entail. The surge of buzzwords surrounding AI and digital governance often obscures the underlying motivations for regulation; frequently, these motives revolve around exerting greater control, particularly over media.
The Threat of AI to Media and Information Ecosystems
The emergence of generative AI has intensified concerns about the potential threats it poses to media and our information ecosystem. Alarmist predictions regarding the unchecked spread of disinformation and the amplification of harmful content have led to panicked calls for additional restrictions on freedom of expression and access to information online.
However, many regulatory proposals focus on superficially regulating AI outputs rather than fostering substantive governance centered on transparency and accountability.
Concerns Over AI and Freedom of Expression
While the dangers posed by AI to media and freedom of expression are undeniable, the regulatory responses have often missed the mark. Many journalists have sounded the alarm about AI’s capacity to generate inauthentic content at scale, particularly in regions where digital literacy is low. Users in these contexts are particularly vulnerable to manipulated information.
Most regulatory proposals tend to focus on content regulation without addressing the complexities involved in governing AI itself.
Model Regulation and Human Rights
Effective regulation of AI should prioritize human rights, transparency, and accountability. Emerging human rights standards demand that the development, deployment, and use of AI be conducted in a transparent manner, with clear pathways for accountability regarding harms resulting from AI use. The potential harms of AI extend far beyond the current discourse on misinformation, encompassing issues like facial recognition, predictive policing, and the exacerbation of discriminatory systems, particularly when AI is used in essential services.
Furthermore, the conversation around AI regulation often neglects the invisible labor that underpins AI technology, much of which comes from the global majority. Investigations reveal that the development of most AI tools relies heavily on manual data labeling performed by outsourced labor from countries like Pakistan, where workers face low wages and exposure to harmful content.
Regulatory Challenges and Censorship
Current laws and proposed legislation often aim to regulate AI-generated content based on vague criteria such as “fake news” or “national security”, which frequently serve as pretexts for suppressing dissent. The primary intention appears to be less about mitigating harm and more about regulating speech that challenges the powerful.
Despite legitimate government concerns regarding generative AI, particularly the frequency of fabricated accusations it enables, sweeping regulations framed as AI governance often serve as convenient excuses to impose restrictions on freedom of expression. Such measures can be seen as attempts to regulate a technology that lawmakers neither fully comprehend nor have made adequate effort to understand.
A Hopeful Outlook
There is a glimmer of hope: in its most recent discussion of the AI bill, the relevant Senate Committee recommended a cautious approach. Committee members acknowledged that it may be premature to establish a dedicated AI regulator, given that the ecosystem is still evolving. This call for caution, and the recognition of the complexities involved, is reassuring.
Moving forward, a regulatory framework that emphasizes equity, transparency, and accountability—rather than censorship—is essential. AI regulation must prioritize the protection of people and their rights, rather than silencing them.