AI Regulation: Balancing Control and Freedom

Superficial Regulation of Artificial Intelligence

Earlier this year, discussions surrounding the need for ethical AI governance gained momentum at an international conference. This conversation came on the heels of recent amendments to the Prevention of Electronic Crimes Act, 2016, which were perceived as infringing upon human rights, and the implementation of the Digital Nation Pakistan Act. The latter aims to advance digital public infrastructure without the requisite safeguards for data privacy and against exclusion.

The call for a framework governing AI was notably vague, lacking specifics on what such governance would entail. The surge of buzzwords surrounding AI and digital governance often obscures the underlying motivations for regulation; frequently, these motives revolve around exerting greater control, particularly over media.

The Threat of AI to Media and Information Ecosystems

The emergence of generative AI has intensified concerns about the potential threats it poses to media and our information ecosystem. Alarmist predictions regarding the unchecked spread of disinformation and the amplification of harmful content have led to panicked calls for additional restrictions on freedom of expression and access to information online.

However, many regulatory proposals focus on superficially regulating AI outputs rather than fostering substantive governance centered on transparency and accountability.

Concerns Over AI and Freedom of Expression

While the dangers posed by AI to media and freedom of expression are undeniable, the regulatory responses have often missed the mark. Many journalists have sounded the alarm about AI’s capacity to generate inauthentic content at scale, particularly in regions where digital literacy is low. Users in these contexts are particularly vulnerable to manipulated information.

Most regulatory proposals tend to focus on content regulation without addressing the complexities involved in governing AI itself.

Model Regulation and Human Rights

Effective regulation of AI should prioritize human rights, transparency, and accountability. Emerging human rights standards demand that the development, deployment, and use of AI be conducted in a transparent manner, with clear pathways for accountability regarding harms resulting from AI use. The potential harms of AI extend far beyond the current discourse on misinformation, encompassing issues like facial recognition, predictive policing, and the exacerbation of discriminatory systems, particularly when AI is used in essential services.

Furthermore, the conversation around AI regulation often neglects the invisible labor that underpins AI technology, much of which comes from the global majority. Investigations reveal that the development of most AI tools relies heavily on manual data labeling performed by outsourced labor from countries like Pakistan, where workers face low wages and exposure to harmful content.

Regulatory Challenges and Censorship

Current laws and proposed legislation often aim to regulate AI-generated content based on vague criteria such as “fake news” or “national security”, which frequently serve as pretexts for suppressing dissent. The primary intention appears to be less about mitigating harm and more about regulating speech that challenges the powerful.

While governments have legitimate concerns about generative AI, particularly its capacity to fuel false accusations, sweeping regulations framed as AI governance often serve as convenient excuses to impose restrictions on freedom of expression. Such measures can be seen as attempts to regulate a technology that lawmakers neither fully comprehend nor have invested adequate effort to understand.

A Hopeful Outlook

There is a glimmer of hope, as the last discussion of the AI bill by the relevant Senate Committee recommended a cautious approach. Committee members acknowledged that it may be premature to establish a dedicated AI regulator, given that the ecosystem is still evolving. This call for caution and recognition of the complexities involved is reassuring.

Moving forward, a regulatory framework that emphasizes equity, transparency, and accountability—rather than censorship—is essential. AI regulation must prioritize the protection of people and their rights, rather than silencing them.
