Rethinking AI: The Human Impact Beyond Technology

HACKED: Humanities Scholars Explore AI Ethics in New Discussion Series

Artificial intelligence (AI) has rapidly integrated into higher education, performing tasks such as drafting papers, summarizing readings, generating images, and assisting with research. While discussions primarily focus on efficiency and capability, a parallel conversation concerns the costs, consequences, and human impact of these technologies.

Introduction to HACKED: The HUMAN OS

This spring, Rice University’s School of Humanities and Arts is launching a series titled HACKED: The HUMAN OS — a four-part discussion that reframes AI as a cultural, ethical, and historical issue rather than merely a computational one. Nicole Waligora-Davis, associate dean of undergraduate programs and special projects, emphasizes the importance of addressing AI’s implications for human interaction and judgment.

AI’s Role in Education

The classroom has emerged as a crucial testing ground for AI, where students encounter it daily. Faculty members see it as a transformative force reshaping assignments and assessments. This raises the question: what is the purpose of education in an era where content generation is inexpensive, yet human judgment remains invaluable?

Concerns of Rapid Technological Adoption

Waligora-Davis cautions that technologies often gain legitimacy before they receive proper scrutiny, becoming normalized before their consequences are fully understood. The series responds to the university’s ambition to become a leading voice in responsible AI and to address the pervasive role of AI across many sectors of life.

Exploring the Humanities’ Value

Timothy Morton, a professor of English and Creative Writing, highlights how faculty use AI to underscore the importance of critical thinking and slow analysis. The series invites discussion of how the humanities can inform the ethical development of technology, arguing that to innovate without considering the human experience is to navigate without guidance.

Historical Context and Future Implications

Understanding AI through a historical lens can offer insights into how societies have previously interacted with emerging technologies. Kirsten Ostherr, director of the Medical Humanities Research Institute, suggests that this historical perspective can help identify unintended consequences and prepare us for the present challenges posed by AI.

Creativity and Automation

One of the most debated topics in the AI conversation is creativity. While text generators can produce poetry and image models can mimic artistic styles, the challenge remains in understanding what may be lost when creative practices become automated. Ostherr advocates for protecting creative domains in order to preserve the benefits they offer humanity.

Moving Beyond Debate

The HACKED series aims to foster shared practices and educate participants about AI and generative technologies. It emphasizes critical thinking, informed judgments, and transparency regarding the hidden costs of these technologies. Future sessions will include a prompt lab for faculty to critique AI outputs and discussions on the datasets powering contemporary AI systems.

Conclusion

In summary, the HACKED series treats AI not as a solved problem but as an ongoing social experiment. This initiative seeks to equip educators and students to engage thoughtfully with AI’s transformative potential while addressing its inherent challenges.
