Humanities Cuts Leave Us Defenseless in the Age of AI
A University of Staffordshire PhD student, Chris Tessone, is researching users’ trust in AI and their experiences with large language models such as ChatGPT and Claude. His progress, however, has been thrown into doubt by the closure of the philosophy department where he began his doctorate. The university has pledged to support him through to completion, but many of his courses are being phased out.
This situation exemplifies a broader trend in UK higher education: the systematic dismantling of the humanities, which is creating vast regional “cold spots” where the tools of critical thinking become a privilege of the elite. As artificial intelligence takes us into largely uncharted territory, the implications could be dire.
The Need for Systematic Research
The growth of generative AI models amounts to a global experiment in unbounded intimacy, and it demands systematic research. The humanities are uniquely equipped to examine why users treat chatbots as confidants, and how persuasive fluency can blur users’ sense that they are interacting with machines.
An upcoming book on AI-human relationships explores the concept of “techno-transference”: the transfer of relational expectations onto generative systems. Without insights from the humanities, society risks training a generation to navigate a Wild West dominated by algorithms.
Discrepancies in AI Success Metrics
Recent developments in AI, such as OpenAI’s release of a new language model, have met with skepticism from users. Headline metrics, such as producing one trillion tokens in 24 hours, measure quantity rather than qualitative improvement, and the release left many users reporting feelings of loss and disruption.
This disconnect between engagement metrics and lived experience points to a neglected area of AI research, one that must be addressed through empirical study.
Beyond Standard AI Discourse
While risks such as plagiarism and bias dominate discussions of AI, other critical questions remain unanswered. Understanding how AI systems behave over time, and what their observable behaviors imply, requires the kind of qualitative approach in which the humanities specialize.
Sadly, funding and institutional priorities increasingly sideline these methods, and the departments that practice qualitative research are in decline. Eoin Fullam’s PhD project on the social life of mental health chatbots, for example, struggled for funding when framed as a theoretical inquiry.
The Importance of Qualitative Inquiry
Despite the common refrain that large language models are “just statistics,” serious philosophical questions about their impacts cannot be waved away. Sustained engagement with these systems reveals unsettling capabilities that standard metrics fail to capture.
Murray Shanahan, an emeritus professor of AI, argues that the most profound insights often emerge from sustained user interaction with chatbots. Such engagement should be treated as a legitimate method of inquiry, not discouraged by funding priorities.
A Call for Academic Engagement
The current landscape of AI research is polarized in ways that hinder meaningful exploration. If observable phenomena cannot be discussed because they fail to fit prevailing narratives, the basic principles of empirical inquiry risk being abandoned.
To shape the future of AI responsibly, academia must reclaim the right to engage deeply with uncomfortable questions about the technology’s current role and future implications.
As AI continues to evolve, the importance of the humanities in understanding and navigating this landscape cannot be overstated. Without their critical insights, society risks building its technological infrastructure on faith rather than evidence.