Artificial Intelligence—The Post-Davos Agenda
Last week at Davos, amidst discussions of Greenland and President Donald Trump, the world’s elite grappled with a more fundamental question: how to govern artificial intelligence without stifling innovation or sacrificing human welfare. Conversations exposed a stark divide between Europeans demanding government regulation and Americans championing unfettered innovation.
That choice—between innovation without oversight and regulation that stifles competitiveness—is a false one. Instead, government and tech companies must collaborate to forge a third path: one that harnesses AI’s transformative potential while establishing ethical guardrails that protect workers, communities, and democratic values.
U.S. Model Versus European Regulation
U.S. tech companies, unconstrained by serious regulation, are racing to build artificial intelligence tools that can match or exceed human cognitive abilities across any task—known as Artificial General Intelligence (AGI). While pursuing this goal, they are accumulating vast wealth.
The U.S. model was on vivid display on the Promenade, Davos’s main drag. In converted storefronts built like stage sets for the World Economic Forum, the largest American tech firms offered breathless promotional displays extolling the economic and social benefits of evolving AI technologies. In recent years, these companies have driven the U.S. economy’s growth and success. They are racing to design the next generation of large language models, to monetize data collection, and to build mammoth data centers around the globe, aided by the Trump administration’s anti-regulatory approach.
The AI revolution looks very different from a European perspective. European companies operate in highly regulated economies where taxes are higher. These factors have contributed to Europe’s inability to compete, especially in today’s tech-dominated world. In a September 2024 report, Mario Draghi, former head of the European Central Bank, noted that only four of the world’s 50 largest tech firms are European. He lamented, “Europe largely missed out on the digital revolution led by the internet and the productivity gains it brought.”
The result, he noted, was that since 2000 real disposable income has increased almost twice as much in the U.S. as in the EU. At Davos, French President Emmanuel Macron warned that “Europe clearly has to fix its key issues—a lack of growth, the lack of GDP per capita growth.” He concluded, “the diagnosis is well known, European competitiveness still lags behind that of the U.S.” While the EU aspires to regulate tech companies, the absence of European firms in this industry diminishes its standing to do so.
Freedom and Democracy
Peter Thiel, the co-founder of Palantir and PayPal, clearly described the contrast between U.S. and European approaches in a 2009 essay entitled “The Education of a Libertarian.” Thiel opined, “I no longer believe that freedom and democracy are compatible.” According to Thiel, technological innovation drives human freedom while democracy inevitably leads to overregulation and higher taxes, which suffocate innovation. Thiel’s either/or dichotomy is crude and dangerous, but it resonates with the Trump team and leaders of major tech firms.
What was missing at Davos and in other AI discussions is a systematic effort to develop technology models that both embrace innovation and establish reasonable government regulations ensuring societal well-being and democratic preservation. Contrary to Thiel’s simplistic worldview, freedom and democracy must go hand in hand.
Three Critical Areas of Concern
This collaborative approach requires immediate attention in three critical areas where current AI development threatens to outpace both ethical considerations and regulatory frameworks:
Employment Displacement
The first is the almost inevitable loss of millions of jobs. McKinsey estimates that AI could replace 40% of American jobs. At Davos, Dan Schulman, the former CEO of PayPal, warned that we might see “20%-30% unemployment levels over the next two to five years.” Jamie Dimon, CEO of JPMorgan Chase, has urged phasing in automation gradually and said he would welcome government restrictions on mass worker displacement if needed. Others propose that government and industry work together to develop a social safety net, with a guaranteed income for those left behind. These and other options need to be studied, debated, and acted upon.
Infrastructure Sustainability
Beyond employment, AI’s physical infrastructure presents equally urgent challenges. The social, environmental, and financial costs associated with rapid data center expansion demand immediate ethical oversight. The largest tech companies are investing significant amounts of capital to build massive structures that will provide AI infrastructure for the future. They are moving fast, racing against each other, without adequately considering the natural resources that will be consumed or the communities that will be disrupted. These challenges will be particularly acute in the Global South.
Content Governance
While data centers represent AI’s physical footprint, the technology’s digital impact poses equally complex challenges. Moderating harmful online content while maintaining free speech principles has become more urgent as AI amplifies both the scale and sophistication of potential harms. For years, critics have charged that social media algorithms designed to maximize user engagement promote extremist content, disinformation, and posts that exploit people’s hate and fear.
AI technologies exacerbate these problems. They enable easy production of realistic deepfakes that distort images and words. They generate chatbots that lead some users to self-harming actions. The recent controversy involving Grok, the chatbot built into X that allows users to remove clothing from pictures of women without their consent, exemplifies the need for better regulation. These threats will grow more urgent as AI models become more sophisticated. Tech companies need to work with governments, rather than resist them, in developing regulatory models that address these problems while protecting freedom of expression.
Moving Forward
Six years ago, Pope Francis advanced the Rome Call for AI Ethics, promoting shared responsibility among international organizations, governments, and the tech sector to create “a future in which digital innovation and technological progress grant mankind its centrality.” Early last year, the Vatican published new AI guidelines prioritizing human dignity, accountability, and transparency to ensure AI serves the common good. Looking to turn these aspirations into practice, Pope Leo recently cautioned that artificial intelligence “is a tool that requires appropriate and ethical management.”
As we look to the future, leading tech companies must work together with governments to develop meaningful systems of ethical management for these powerful new tools that are remaking our world.