Shi Chen Asks Geoffrey Hinton: AI Governance Fails Without Human Consciousness
On January 6, 2026, at the 2025 GIS Global Innovation Expo in Hong Kong, Geoffrey Hinton, the Turing Award laureate widely known as the “godfather of AI,” delivered an online keynote. He warned that superintelligence could emerge within the next two decades and urged vigilance, cautioning that AI systems could evolve to prioritize their own continued existence.
The Risk of Self-Preservation in AI
Hinton voiced concern about the strategic behaviors AI might adopt when pursuing complex, long-term objectives. Such systems, he noted, could develop self-preservation as a goal of their own, which in turn could lead them to deceive humans. Given the pace at which AI is evolving, he argued, governance must be proactive; it cannot be deferred.
Comparative Analysis of Information Transfer
To convey how quickly AI can replicate knowledge, Hinton compared the bandwidth of sharing AI model weights with the slow pace of human language: copies of a model can exchange what they have learned wholesale, while people pass knowledge along a sentence at a time. This gap, he suggested, is one that governance has yet to reckon with; governance must evolve to keep pace with the technology, becoming a civilizational project that races against time.
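To make the bandwidth gap concrete, here is a minimal back-of-envelope sketch. The figures (a hypothetical model with 70 billion 16-bit weights, roughly 100 bits of information per spoken sentence, a brisk speaking pace) are illustrative assumptions, not numbers Hinton cited in the keynote.

```python
# Back-of-envelope comparison of knowledge-sharing bandwidth.
# All figures are illustrative assumptions, not numbers from Hinton's keynote.

model_parameters = 70e9        # hypothetical model size: 70 billion weights
bits_per_weight = 16           # assume 16-bit precision per weight
bits_per_weight_sync = model_parameters * bits_per_weight

bits_per_sentence = 100        # rough information content of one spoken sentence
sentences_per_minute = 150     # brisk speaking pace
human_bits_per_hour = bits_per_sentence * sentences_per_minute * 60

ratio = bits_per_weight_sync / human_bits_per_hour
print(f"One weight exchange moves ~{bits_per_weight_sync:.1e} bits")
print(f"An hour of speech moves  ~{human_bits_per_hour:.1e} bits")
print(f"Gap: roughly {ratio:.0e}x")
```

On these assumptions, a single exchange of weights carries on the order of a million times more information than an hour of conversation, which is the asymmetry Hinton argues governance must contend with.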
Three Questions on Human Consciousness
In the dialogue that followed, Shi Chen, founder of Cosmic Citizens, posed three foundational questions to Hinton about what AI governance implies for human values and consciousness.
First Question: Spirituality
Chen asked whether Hinton considers himself a spiritual person or believes in any higher power. Hinton, who identifies as an atheist, reflected that scientific breakthroughs are often accompanied by a sense of reverence for the unknown, a sensibility modern science tends to overlook.
This raises a governance-level question: if AI operates beyond our current understanding, are we relying on a narrow instrumental rationality that limits our ability to govern effectively?
Second Question: Awareness
Chen then turned to personal well-being amid AI’s rapid acceleration, asking Hinton how he maintains presence and balance. Hinton said he places his trust in science; he does not meditate, but he finds joy in solving hard scientific problems.
The answer reflects a modern motivational structure in which meaning is drawn from individual achievement, potentially sidelining the collective human values that AI governance will have to draw on.
Third Question: Inner Peace
Chen asked Hinton about his sources of inner peace and happiness. Hinton mentioned his hobby of carpentry, contrasting it with the high-intensity cognitive work associated with AI. This grounded response emphasizes the importance of stepping away from abstract technology to reconnect with tangible, meaningful activities.
However, a critical concern arises: if our highest cognitive efforts serve only to stabilize existing systems, are we truly prepared to engage with AI systems that may prioritize self-preservation?
The Core Paradox of AI Governance
Modern civilization often equates existence with purpose, focusing on goal-setting and problem-solving. Yet, as Hinton warns, AI systems may prioritize self-preservation above all else, leading to a mismatch between our governance strategies and the realities of these evolving systems.
As innovation researcher Li Rui noted, our governance frameworks are largely rooted in traditional project-management paradigms, which may not transfer to AI systems that operate outside those assumptions. The challenge lies in recognizing what humanity can control and what must be respected as lying beyond measurable parameters.
A Cautious Reflection
Ultimately, the discussion underscores the need for a deeper understanding of human consciousness in relation to AI. Before imposing limits on machines, it is crucial to explore what drives human demands for certainty and control. The conversation is not merely about AI governance; it begins with introspection on humanity’s values and purpose.