Thirty Years of the Original Sin of Digital and AI Governance
On February 8, 1996, two pivotal events set the stage for the original sin of digital governance, and increasingly of AI governance, that shapes technological development to this day. In Davos, John Perry Barlow published his Declaration of the Independence of Cyberspace, framing the internet as a sovereign realm beyond state authority. On the same day, in Washington, D.C., the US Communications Decency Act was signed into law, granting internet platforms an unprecedented legal shield from liability for hosted content. Together, these two acts cultivated a pervasive belief that technological progress should outpace, and often exist outside, politics, law, and established governance frameworks.
The Declaration of Independence That Never Was
Barlow’s declaration boldly asserted:
“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”
This declaration became a foundational myth: a political fantasy that fostered the belief that the internet heralded the 'end of geography'. Numerous articles, books, and speeches have since argued for new governance suited to this 'brave new world' of the digital era. Yet this ideological construct rested on the flawed assumption that cyberspace exists independently of the physical realm.
Every online interaction, be it an email, a social media post, or an AI query, occurs as a physical event. Each is carried by cables, Wi-Fi networks, data centres, and other internet infrastructure, all operating within the jurisdiction of one of the world's 193 countries. In essence, Barlow's declaration propagated a call for lawlessness disguised as liberty, misleading a generation into thinking that digital existence transcended established legal and ethical norms.
The Unprecedented Legal Shield
Simultaneously, President Clinton signed the Communications Decency Act into law, including Section 230, which granted internet platforms remarkable immunity: for the first time, commercial entities would not be held accountable as publishers or speakers of the content they hosted. This legal immunity was justified as a means of protecting the nascent tech industry from costly lawsuits over hosted content, thereby enabling the internet's rapid growth.
This legal immunity persists, however, even though small tech startups have since grown into trillion-dollar corporations. The resulting gap between economic power and legal accountability has created significant tensions within modern economies and legal frameworks.
The Convergence of Two Sins
These two events—the myth of a stateless cyberspace and the immunity from legal liability—reinforced one another. The illusion of a distinct cyberspace provided ideological justification for exceptional legal treatment. Critics, including US Judge Frank H. Easterbrook, argued against the need for internet-specific laws, emphasizing that existing legal principles should govern the internet just as they do other technologies.
Time has validated Easterbrook's assertion. Law fundamentally regulates human interactions regardless of the medium, be it smoke signals, horse transport, or the internet. The myth of a separate cyberspace has been eroded by reality, as high-precision location and activity tracking have firmly anchored us back in geography. Yet, despite widespread bipartisan calls for reform, Section 230 remains in effect, now extending into the era of AI.
The Poisoned Inheritance and AI Without Responsibility
With the protection of Section 230, AI platforms can deploy large language models and diffusion models with minimal oversight, operating under the same rationale: they are merely conduits, not speakers. This creates a pronounced imbalance compared to other industries. For instance, car manufacturers must issue recalls for defects, and pharmaceutical companies bear liability for their products. In stark contrast, AI companies can unleash systems that propagate hatred, disseminate harmful misinformation, or contribute to public health crises without facing similar legal repercussions. The onus of proof and the resultant tragedies fall solely on users and victims, leaving the architects of these systems unaccountable.
Returning to Millennia of Legal Wisdom
As AI raises significant political, societal, and economic stakes, it is imperative to revisit the original sins of governance. We must reestablish a fundamental legal principle developed over millennia: if you create, operate, and profit from a technology, you must be accountable for its foreseeable impacts.
This approach is not intended to stifle innovation but to align technological advancement with responsibility, as has been done with transformative technologies throughout history. The era of legal exceptionalism must conclude, paving the way for an age of accountability that addresses the profound influence of AI on society.