AI Regulation: Balancing Control and Freedom

Superficial Regulation of Artificial Intelligence

Earlier this year, discussions surrounding the need for ethical AI governance gained momentum at an international conference. This conversation came on the heels of recent amendments to the Prevention of Electronic Crimes Act, 2016, which were perceived as infringing upon human rights, and the implementation of the Digital Nation Pakistan Act. The latter aims to advance digital public infrastructure without requisite safeguards for data privacy or protections against digital exclusion.

The call for a framework governing AI was notably vague, lacking specifics on what such governance would entail. The surge of buzzwords surrounding AI and digital governance often obscures the underlying motivations for regulation; frequently, these motives revolve around exerting greater control, particularly over media.

The Threat of AI to Media and Information Ecosystems

The emergence of generative AI has intensified concerns about the potential threats it poses to media and our information ecosystem. Alarmist predictions regarding the unchecked spread of disinformation and the amplification of harmful content have led to panicked calls for additional restrictions on freedom of expression and access to information online.

However, many regulatory proposals focus on superficially regulating AI outputs rather than fostering substantive governance centered on transparency and accountability.

Concerns Over AI and Freedom of Expression

While the dangers posed by AI to media and freedom of expression are undeniable, the regulatory responses have often missed the mark. Many journalists have sounded the alarm about AI’s capacity to generate inauthentic content at scale, especially in regions where digital literacy is low and users are particularly vulnerable to manipulated information.

Most regulatory proposals tend to focus on content regulation without addressing the complexities involved in governing AI itself.

Model Regulation and Human Rights

Effective regulation of AI should prioritize human rights, transparency, and accountability. Emerging human rights standards demand that the development, deployment, and use of AI be conducted in a transparent manner, with clear pathways for accountability regarding harms resulting from AI use. The potential harms of AI extend far beyond the current discourse on misinformation, encompassing issues like facial recognition, predictive policing, and the exacerbation of discriminatory systems, particularly when AI is used in essential services.

Furthermore, the conversation around AI regulation often neglects the invisible labor that underpins AI technology, much of which comes from the global majority. Investigations reveal that the development of most AI tools relies heavily on manual data labeling performed by outsourced labor from countries like Pakistan, where workers face low wages and exposure to harmful content.

Regulatory Challenges and Censorship

Current laws and proposed legislation often aim to regulate AI-generated content based on vague criteria such as “fake news” or “national security”, which frequently serve as pretexts for suppressing dissent. The primary intention appears to be less about mitigating harm and more about regulating speech that challenges the powerful.

Despite legitimate government concerns about generative AI, particularly its use in fabricating false accusations, sweeping regulations framed as AI governance often serve as convenient pretexts for restricting freedom of expression. Such measures amount to attempts to regulate a technology that lawmakers neither fully comprehend nor have made adequate effort to understand.

A Hopeful Outlook

There is a glimmer of hope, as the last discussion of the AI bill by the relevant Senate Committee recommended a cautious approach. Committee members acknowledged that it may be premature to establish a dedicated AI regulator, given that the ecosystem is still evolving. This call for caution and recognition of the complexities involved is reassuring.

Moving forward, a regulatory framework that emphasizes equity, transparency, and accountability—rather than censorship—is essential. AI regulation must prioritize the protection of people and their rights, rather than silencing them.
