No, the Human-Robot Singularity Isn’t Here. But We Must Take Action to Govern AI
On a recent trip to the San Francisco Bay Area, the prevalence of alarming advertisements about artificial intelligence was striking. Billboards proclaimed, “The singularity is here,” while others suggested, “Humanity had a good run.” These statements reflect a broader trend of tech firms making sensational, hype-laden claims about AI capabilities.
High-profile tech leaders have contributed to this narrative. For instance, Sam Altman, CEO of OpenAI, declared, “We basically have built AGI, or very close to it,” though he later described this assertion as “spiritual.” Elon Musk has taken this further, claiming, “We have entered the singularity.”
Enter Moltbook
Moltbook, a social media platform designed for AI agents, has become a focal point of concern. Bots communicating with one another have led to a flurry of doom-laden news articles and opinion pieces. Authors worry about these bots discussing religion, allegedly misusing their creators’ funds, and even plotting against humanity. Such narratives echo the sensational billboard claims, suggesting that machines have achieved artificial general intelligence (AGI) and are surpassing human intelligence, a concept referred to as the singularity.
However, based on extensive research into bots and AI, two clear conclusions emerge:
- Moltbook is not new. Humans have been building bots that communicate with each other and with humans for decades, and exaggerated claims about such bots are nothing new either.
- The singularity is not here. Current AI advancements are constrained by tangible factors such as mathematics, data access, and business costs. Claims of AGI or the singularity are not supported by empirical research.
The Role of Big Tech and Government
As tech companies continue to promote their AI capabilities, it becomes evident that big tech is no longer the counterforce it once was during previous political administrations. Silicon Valley’s inflated claims about AI are now intertwined with the U.S. government’s nationalistic agenda to “win” the AI race. For example, the Immigration and Customs Enforcement (ICE) agency is funding Palantir with $30 million for AI-enabled surveillance software, while tech executives like Musk support far-right causes. Google and Apple have even removed apps that track ICE due to political pressure.
While the singularity may not yet be a concern, there is an urgent need to counter this alliance of big tech and government. When these entities collaborate, the public must exert its own influence over AI’s future.
The Power of Collective Action
Many believe that effective regulation of technology for social benefit is impossible in the current political landscape. However, recent protests in Minneapolis have demonstrated the power of collective action. This public display of strength has forced both the Trump administration and its corporate supporters to reconsider their positions. Historical instances show that public pressure can lead to significant changes in privacy, safety, and user wellbeing within big tech.
These protests illustrate that power structures operate at the behest of the people. Neither politicians nor business leaders are unassailable; both remain accountable to public sentiment. AI, as experts have pointed out, is not an uncontrollable force but a “normal technology” whose impact is ultimately shaped by human decisions.
The Path Forward
While generative AI and large language models (LLMs) are already altering communication and daily life, platforms like Moltbook do not provide proof of genuine intelligence. Recent investigations into these bots reveal them as “a crude rehashing of sci-fi fantasies.” Many posts attributed to bots actually originate from humans, and because these systems are trained on human-generated data, they inevitably reflect human ideas and biases.
To navigate the challenges posed by AI, governance must be informed and focused, not opposed to technological progress or democratic rights. The need for effective AI governance is urgent, as the risks it poses to society—such as deepening inequality and misinformation—are real but manageable. It is crucial to demand that AI be governed responsibly and without delay.
As AI continues to evolve and politicians exacerbate chaos, the responsibility to shape the future of this technology remains firmly in human hands.