Who Will Govern the AI of the Future?
Amid the rapid expansion of artificial intelligence and the debate on how it should be regulated, a recent study examines a key question: who sets the rules and through what infrastructure?
Understanding the Governance Models
The research delves into how certain technological initiatives not only provide services but also promote governance models based on private digital identity and biometric data systems.
Case Study: World
A notable case study in the research is World (formerly Worldcoin), a project co-founded by OpenAI CEO Sam Altman. This initiative proposes verifying that a user is human by scanning their iris and issuing a digital identity certificate in return. The study investigates how such initiatives connect narratives about future risks, including bots, fraud, and impersonation, with promises of security and inclusion.
The Implications of Technological Narratives
As the research highlights, the debate surrounding AI is not merely technological; it is also a discourse on the futures these technologies create and who will govern them. Projects like World do not simply offer a tool; they propose a model of governance that can erode the legitimacy of democratic institutions while presenting a private alternative.
Sociotechnical Fictions
The article introduces the concept of "sociotechnical fictions" to describe future narratives that, when framed as inevitable, can influence decisions about technological design and rollout, leading to significant political consequences.
According to the analysis, when future scenarios are depicted as unavoidable, technical decisions with policy implications become legitimized. The research examines how these narratives contribute to a project that emerged in the 1980s: one that rejects democracy, embraces radical individualism, and holds that engineering and the free market can replace politics as the means of solving social problems. Ironically, many of these technologies were developed with substantial public funding.
Factors Contributing to Narrative Traction
The study identifies several factors that enhance the traction of these narratives:
- Presenting future scenarios as inevitable and urgent.
- Making technology attractive through design.
- Triggering emotions such as fear and hope to garner social support.
- Normalizing the idea that identity and governance functions depend on private systems.
Conclusion
While the study does not assess the empirical impact of the project on users, it provides valuable tools for understanding how certain imaginaries of the future may ultimately shape digital infrastructure and public discourse on identity, biometrics, and AI governance.