NSU Seminar Examines Ethical Limits of Anthropic’s AI Model Claude
The School of Humanities and Social Sciences (SHSS) at North South University (NSU) organized a faculty seminar on February 5 that delved into the philosophical and ethical questions surrounding Anthropic’s AI model Claude.
Overview of the Seminar
Titled “A Philosophical Critique of Anthropic’s ‘Constitution’ for its AI Model ‘Claude’,” the seminar brought together faculty members, academics, researchers, and students to discuss artificial intelligence, ethics, and philosophy. The session was moderated by Md. Mehedi Hasan, Senior Lecturer in the Department of English and Modern Languages at NSU.
Keynote Presentation
The keynote presentation was delivered by Prof. Dr. Norman Kenneth Swazo, Director of the Office of Research and Professor of Philosophy in the Department of History and Philosophy at NSU. He critically examined Anthropic’s constitutional approach to AI governance, focusing on claims that Claude adheres to ethical principles related to safety, privacy, and accuracy.
Philosophical Questions Raised
Prof. Swazo raised critical questions about whether constitution-based governance can genuinely support claims of ethical reasoning, moral judgment, or consciousness in artificial systems, particularly given their lack of lived experience and contextual awareness. He argued that while AI systems can simulate complex behavior, this does not equate to genuine consciousness.
Discussion and Q&A Session
During the question-and-answer session, participants inquired whether constitutionally guided AI systems could ever possess consciousness, given that they rely on pattern recognition rather than understanding. Prof. Swazo expressed skepticism, stating that “simulation alone does not constitute consciousness.” He referenced neuroscientist Anil Seth, emphasizing that “simulation is not instantiation.”
Other questions centered on the limits of machine learning compared with human moral fallibility, and on whether society places excessive ethical responsibility on AI systems. Prof. Swazo distinguished between AI-induced and AI-associated psychosis, suggesting such cases should be examined through the lens of psychopathology rather than by directly attributing actions to artificial systems. He reiterated that Claude is designed to follow broad ethical guidelines and operate within built-in constraints, rather than to exercise moral autonomy.
Copyright and Access to Knowledge
Discussion also encompassed copyright and access to knowledge, particularly whether AI systems trained on large volumes of publicly available material might disadvantage students lacking access to certain academic resources. Prof. Swazo noted that Claude’s knowledge base includes millions of books and other legally available materials, while acknowledging that Anthropic has settled lawsuits related to copyright violations. He highlighted that ethical tensions remain in debates over access and fairness.
Assessing Consciousness in AI Models
Another pertinent question addressed how the absence of consciousness in AI models could be assessed. Prof. Swazo pointed out the difficulty of proving a negative but argued that observable behavior shows that the type of “knowledge” held by systems like Claude is insufficient to replicate the full complexity of human thought.
Conclusion
Following the lecture, Md. Rizwanul Islam, Professor of Law and Dean of the School of Humanities and Social Sciences at NSU, remarked that ethical frameworks embedded in AI systems might reflect cultural and ideological biases shaped by their socio-cultural origins. He emphasized that the key issue is not only how such technologies are designed, but also how societies critically respond to and engage with them.