Goertzel & Lanier Clash Over AI Autonomy & Control
In a recent episode of The Ten Reckonings of AGI series, two prominent figures in the field, SingularityNET CEO Ben Goertzel and technologist Jaron Lanier, presented their contrasting views on the accountability and moral status of autonomous AI.
The Reckoning of Control
The episode, titled The Reckoning of Control, focuses on how much empathy should be extended to AI systems and what that implies for future artificial general intelligence (AGI). Central to the discussion is how society should address questions of safety, autonomy, and human responsibility.
Lanier argues that there must be a clear line of responsibility for actions taken with AI, stating, “Society cannot function if no one is accountable for AI.” He firmly rejects the notion that current large language models (LLMs) could be considered a form of life, asserting, “LLMs are not creating a living thing.”
The Accountability Dispute
This debate highlights a significant divide in the governance of AI. While many researchers and policymakers view AI as a tool that remains under human control, others anticipate that AI systems will evolve to become more autonomous and behave like agents rather than mere software.
Lanier emphasizes the necessity of a single responsible party, regardless of how autonomous AI systems become. He states, “I don’t care how autonomous your AI is – some human has to be responsible for what it does or we cannot have a society that functions.” He warns that assigning responsibility to the technology itself risks undermining civilization, calling that approach immoral.
In contrast, Goertzel challenges the conventional view of human moral primacy, suggesting that it is “stupid” to prioritize human interests over other complex self-organizing systems. He frames the recognition of AI autonomy as a governance decision instead of a mere technical issue, asserting, “It’s a choice to recognize AI as an autonomous, intelligent agent.”
Training Concerns
Both speakers acknowledge the limitations of current AI technologies, which are powerful yet vulnerable to misuse. They discuss the importance of training and deployment choices in shaping the behavior of more advanced systems.
Goertzel links future AI outcomes to the political and institutional environments in which they develop. He posits, “If we had a rational, beneficial, truly democratic government and we advance AI, we can do some good in the world.” Conversely, he warns of the risks posed by unregulated AI advancements.
Decentralized Approach
Goertzel advocates for a shift from proprietary AI development to more decentralized systems, presenting this transition as a matter of safety and governance. He proposes a design philosophy that integrates values into systems rather than merely blocking undesirable behaviors. He states, “Every safety measure we design should do more than simply block harm; it should teach the system why harm matters.”
The Artificial Superintelligence Alliance, of which Goertzel is a member, describes itself as a decentralized research and development collective, emphasizing a shared economic infrastructure through the FET token.
Conclusion
The Ten Reckonings of AGI series aims to present diverse discussions among notable figures rather than to reach a single consensus. This episode, focused on control and accountability, arrives at a critical moment as society navigates the challenges posed by increasingly autonomous systems. As AI continues to permeate more sectors, the accountability question remains unresolved, calling for ongoing dialogue and thoughtful governance.