Humanities Cuts Leave Us Defenseless in the Age of AI

Chris Tessone, a PhD student at the University of Staffordshire, is researching users’ trust in AI and their experiences with large language models such as ChatGPT and Claude. His work, however, has been disrupted by the closure of the philosophy department where he began his PhD: although the university will support him through completion, many of the courses around him are being phased out.

This situation exemplifies a broader trend in UK higher education: the systematic dismantling of the humanities. This trend is creating vast regional “cold spots” where critical thinking tools become a privilege of the elite. As we enter largely uncharted territory with artificial intelligence, the implications could be dire.

The Need for Systematic Research

The spread of generative AI models amounts to a global, uncontrolled experiment in human–machine intimacy, and it demands systematic research. The humanities are uniquely equipped to examine why users treat chatbots as confidants, and how persuasive fluency can distort users’ sense of what it means to interact with a machine.

An upcoming book on AI–human relationships explores “techno-transference”: the transfer of relational expectations onto generative systems. Without insights from the humanities, society risks raising a generation left to navigate an algorithm-dominated Wild West on its own.

Discrepancies in AI Success Metrics

Recent developments in AI, such as OpenAI’s release of a new language model, have drawn skepticism from users. Headline metrics, like producing a trillion tokens in 24 hours, measure throughput rather than qualitative improvement, and they say nothing about the feelings of loss and disruption that model changes have provoked among users.

These qualitative experiences point to a gap in AI research that empirical study must address: the disconnect between engagement metrics and users’ lived experience marks out territory the field has so far neglected.

Beyond Standard AI Discourse

While discussions about risks in AI, such as plagiarism and bias, are prevalent, other critical questions remain unanswered. Understanding how AI systems behave over time and the implications of their observable behaviors requires a qualitative approach that the humanities specialize in.

Sadly, funding and institutional priorities increasingly sideline these methods, leading to a decline in departments focused on qualitative research. For example, Eoin Fullam’s PhD project on the social life of mental health chatbots faced funding challenges when framed as a theoretical inquiry.

The Importance of Qualitative Inquiry

Despite the narrative that large language models are “just statistics,” serious philosophical questions about their impacts cannot be ignored. Engaging with these systems over time reveals unsettling capabilities that standard metrics fail to capture.

Murray Shanahan, an emeritus professor of AI, emphasizes that the most profound insights often emerge from sustained user interactions with chatbots. This engagement should be regarded as a legitimate method of inquiry rather than discouraged by funding priorities.

A Call for Academic Engagement

The current landscape in AI research is polarized in ways that hinder meaningful exploration. If researchers cannot discuss observable phenomena because those phenomena fit neither camp’s narrative, the basic principles of empirical inquiry risk being abandoned.

To shape the future of AI responsibly, academia must reclaim the right to engage deeply with uncomfortable questions about the technology’s current role and future implications.

In conclusion, as AI continues to evolve, the importance of the humanities in understanding and navigating this landscape cannot be overstated. Without their critical insights, society risks building its technological infrastructure on faith rather than evidence.
