Governance in the Age of AI Hype

No, the Human-Robot Singularity Isn’t Here. But We Must Take Action to Govern AI

On a recent trip to the San Francisco Bay Area, the prevalence of alarming advertisements about artificial intelligence was striking. Billboards proclaimed, “The singularity is here,” while others suggested, “Humanity had a good run.” These statements reflect a trend where tech firms make outrageous claims about AI capabilities, often rife with hype and sensationalism.

High-profile tech leaders have contributed to this narrative. For instance, Sam Altman, CEO of OpenAI, declared, “We basically have built AGI, or very close to it,” though he later described this assertion as “spiritual.” Elon Musk has taken this further, claiming, “We have entered the singularity.”

Enter Moltbook

Moltbook, a social media platform designed for AI agents, has become a focal point of concern. The spectacle of bots conversing with one another has triggered a flurry of doom-laden news articles and opinion pieces. Authors worry about these bots discussing religion, allegedly misusing their creators’ funds, and even plotting against humanity. Such narratives echo the sensational billboard claims, suggesting that machines have achieved artificial general intelligence (AGI) and are surpassing human intelligence, a concept referred to as the singularity.

However, based on extensive research into bots and AI, two clear conclusions emerge:

  1. Moltbook is not new. Humans have been building bots for decades that can communicate with one another and with humans, and such bots have long prompted exaggerated claims.
  2. The singularity is not here. Current AI advancements are constrained by tangible factors such as mathematics, data access, and business costs. Claims of AGI or the singularity are not supported by empirical research.

The Role of Big Tech and Government

As tech companies continue to promote their AI capabilities, it becomes evident that big tech is no longer the counterforce it once was during previous political administrations. Silicon Valley’s inflated claims about AI are now intertwined with the U.S. government’s nationalistic agenda to “win” the AI race. For example, the Immigration and Customs Enforcement (ICE) agency is funding Palantir with $30 million for AI-enabled surveillance software, while tech executives like Musk support far-right causes. Google and Apple have even removed apps that track ICE due to political pressure.

While the singularity may not yet be a concern, there is an urgent need to confront this alliance of big tech and government. When these entities collaborate, it is essential that the public exert its own influence over AI’s future.

The Power of Collective Action

Many believe that effective regulation of technology for social benefit is impossible in the current political landscape. However, recent protests in Minneapolis have demonstrated the power of collective action. This public display of strength has forced both the Trump administration and its corporate supporters to reconsider their positions. Historical instances show that public pressure can lead to significant changes in privacy, safety, and user wellbeing within big tech.

These protests illustrate that power structures operate at the behest of the people. Neither politicians nor business leaders are unassailable; both are accountable to public sentiment. AI, as experts have pointed out, is not an uncontrollable force but a “normal technology” whose impact is ultimately shaped by human decisions.

The Path Forward

While generative AI and large language models (LLMs) are already altering communication and daily life, platforms like Moltbook do not provide proof of genuine intelligence. Recent investigations into these bots reveal them as “a crude rehashing of sci-fi fantasies.” Many posts attributed to bots actually originate from humans, highlighting that these AI entities reflect human ideas and biases, being trained on human-generated data.

To navigate the challenges posed by AI, governance must be informed and focused, neither hostile to technological progress nor to democratic rights. The risks AI poses to society, such as deepening inequality and accelerating misinformation, are real but manageable, which makes the case for demanding responsible governance now, not later.

As AI continues to evolve and politicians exacerbate chaos, the responsibility to shape the future of this technology remains firmly in human hands.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...
Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...