AI Regulation: Balancing Control and Freedom

Superficial Regulation of Artificial Intelligence

Earlier this year, discussions surrounding the need for ethical AI governance gained momentum at an international conference. The conversation came on the heels of recent amendments to the Prevention of Electronic Crimes Act, 2016, which were perceived as infringing upon human rights, and the implementation of the Digital Nation Pakistan Act. The latter aims to advance digital public infrastructure without requisite safeguards for data privacy or against exclusion.

The call for a framework governing AI was notably vague, lacking specifics on what such governance would entail. The surge of buzzwords surrounding AI and digital governance often obscures the underlying motivations for regulation; frequently, these motives revolve around exerting greater control, particularly over media.

The Threat of AI to Media and Information Ecosystems

The emergence of generative AI has intensified concerns about the potential threats it poses to media and our information ecosystem. Alarmist predictions regarding the unchecked spread of disinformation and the amplification of harmful content have led to panicked calls for additional restrictions on freedom of expression and access to information online.

However, many regulatory proposals regulate AI outputs only superficially, rather than fostering substantive governance centered on transparency and accountability.

Concerns Over AI and Freedom of Expression

While the dangers posed by AI to media and freedom of expression are undeniable, the regulatory responses have often missed the mark. Many journalists have sounded the alarm about AI’s capacity to generate inauthentic content at scale, particularly in regions where digital literacy is low and users are especially vulnerable to manipulated information.

Most regulatory proposals tend to focus on content regulation without addressing the complexities involved in governing AI itself.

Model Regulation and Human Rights

Effective regulation of AI should prioritize human rights, transparency, and accountability. Emerging human rights standards demand that the development, deployment, and use of AI be conducted in a transparent manner, with clear pathways for accountability regarding harms resulting from AI use. The potential harms of AI extend far beyond the current discourse on misinformation, encompassing issues like facial recognition, predictive policing, and the exacerbation of discriminatory systems, particularly when AI is used in essential services.

Furthermore, the conversation around AI regulation often neglects the invisible labor that underpins AI technology, much of which comes from the global majority. Investigations reveal that the development of most AI tools relies heavily on manual data labeling performed by outsourced labor from countries like Pakistan, where workers face low wages and exposure to harmful content.

Regulatory Challenges and Censorship

Current laws and proposed legislation often aim to regulate AI-generated content based on vague criteria such as “fake news” or “national security”, which frequently serve as pretexts for suppressing dissent. The primary intention appears to be less about mitigating harm and more about regulating speech that challenges the powerful.

Despite legitimate government concerns about generative AI, particularly the false accusations it can fuel, sweeping regulations framed as AI governance often serve as convenient excuses to impose restrictions on freedom of expression. Such measures amount to attempts to regulate a technology that lawmakers neither fully comprehend nor have made adequate effort to understand.

A Hopeful Outlook

There is a glimmer of hope: in its most recent discussion of the AI bill, the relevant Senate Committee recommended a cautious approach. Committee members acknowledged that it may be premature to establish a dedicated AI regulator while the ecosystem is still evolving. This call for caution, and the recognition of the complexities involved, is reassuring.

Moving forward, a regulatory framework that emphasizes equity, transparency, and accountability—rather than censorship—is essential. AI regulation must prioritize the protection of people and their rights, rather than silencing them.
