Phantom Power: Rethinking Rights in the Age of AI

A Challenge for a Primavera Digitale: The Phantom Influence of Artificial Intelligence Systems

A New Question?

How do fundamental and human rights emerge, and what conditions shape their emergence in the digital age? This exploration reflects on a puzzling social phenomenon: the widespread sense of comfort – or at least resignation – that individuals display toward phone applications, AI systems, and algorithmic decision-making tools that process personal data and shape life opportunities.

This comfort stands in sharp contrast to reactions toward interference in the physical world. Being watched by a person or a camera in public typically provokes discomfort. Yet, comparable forms of monitoring and profiling through digital systems often pass without scrutiny. Users rarely question how applications process data, whether they manipulate behavior, or how persuasive digital environments constrain autonomy.

The Disparity of Tolerance

This disparity raises foundational questions: Why do we tolerate digital interference so readily? What does this tolerance reveal about evolving expectations and vulnerabilities surrounding our rights? Existing human rights frameworks may not adequately capture the nature of power in the digital realm.

It is argued that the emergence of human rights in the digital environment is structurally constrained by what is termed the phantom influence of digital technologies. Drawing on an Arendtian understanding of rights, it is suggested that the architecture of advanced AI systems restricts the socio-political conditions under which rights traditionally emerge. This calls for renewed conceptual and institutional responses.

Understanding Human Rights in the Digital Realm

Before delving into the Arendtian perspective, it is necessary to situate the discussion within broader debates on the nature of human rights. Noted theorist John Tasioulas identifies three significant approaches:

  • Reductive view: Grounds human rights in human interests.
  • Orthodox view: Treats human rights as moral rights grounded in humanity itself.
  • Political view: Defines human rights through their function within international legal and political practice.

This discussion seeks not to critique these theories directly but to explore whether something distinctive about the digital realm challenges their assumptions. Specifically, can the rule of the algorithm open space for new digital human rights to emerge, or does it erode the socio-political foundations from which rights have traditionally arisen?

An Arendtian Account of Rights

To address this, one turns to the legal and political philosophy of Hannah Arendt. She argued that rights arise neither from mere humanity nor from formal legal recognition alone. Instead, rights emerge in the space between law and lawlessness, through the capacity to act in a shared public realm.

In Arendt’s view, freedom is a political practice: to act is to initiate something new, to speak and appear before others within human relationships. This understanding has profound implications for digital environments. If rights depend on visibility and collective action, then the conditions under which digital power operates become legally and politically significant.

Features of Digital Systems

The architecture of advanced digital technologies systematically restricts the capacity to act in the Arendtian sense. This restriction arises not from overt coercion but from structural features of digital systems that obscure power relations and suppress political contestation. Three such features are identified:

  • Virtuality: Digital platforms and AI systems are inherently virtual and non-physical. This virtuality weakens the conditions necessary for recognizing concrete interference with fundamental rights.
  • Complexity: Many AI applications function as “black boxes,” producing outcomes without intelligible explanations. This complexity is compounded by deliberate design choices that increase opacity, making it challenging for users to contest outcomes.
  • Dynamism: Digital technologies evolve rapidly, destabilizing legal categories and rendering regulatory responses perpetually reactive.

The Phantom Influence

These features render the tension between individual liberty and private power less visible. This phenomenon is described as AI’s phantom influence: a form of power that operates through virtuality, complexity, and constant transformation, and that erodes the very socio-political conditions under which that tension would otherwise become visible and contestable.

Digital technologies can also empower rights: social platforms create alternative public spaces that enable expression, association, and political mobilization. Yet these platforms are not neutral; their commercial nature ties them to the exploitation of networked communication.

Strengthening the Conditions for Rights

If the phantom influence weakens the socio-political conditions for rights, how might its effects be mitigated? Two promising avenues include:

  • Civil Society Involvement: NGOs play a crucial role in documenting AI-related abuses and advocating for public interests.
  • Judicial Interpretation: Courts can adapt existing rights to new realities, ensuring that rights remain “living” norms.

Conclusion: Toward a Primavera Digitale

Reflecting on the phantom influence of digital technologies reveals the need for deliberate action to cultivate a digital environment in which genuine political engagement is possible. Strengthening civil society participation and cultivating rights-sensitive judicial interpretation can help restore the conditions under which rights may emerge.

This envisioned space is a Primavera Digitale: a digital spring where individuals are active political agents, reclaiming the internet as a domain of speech, creativity, and action.
