Academic Publisher Faces Backlash Over Fabricated Citations in AI Ethics Book

One of the world’s largest academic publishers is facing scrutiny for selling a book on the ethics of artificial intelligence research that appears to be riddled with fake citations, including references to journals that do not exist.

Concerns Over Academic Integrity

Academic publishing has recently come under fire for accepting fraudulent, AI-generated papers that have slipped through peer-review processes intended to maintain high standards. A report by The Times revealed that a book published by the German-British publishing giant Springer Nature contains dozens of citations that seem to have been invented—a common indication of AI-generated material.

The Problematic Book

The book, titled Social, Ethical and Legal Aspects of Generative AI, is marketed as an authoritative review of the ethical dilemmas posed by the technology and is priced at £125. Disturbingly, at least two chapters include footnotes citing scientific publications that cannot be verified; in one chapter, eight of the ten citations could not be confirmed and may have been fabricated.

AI Hallucination in Academia

There is growing anxiety within academic circles regarding citations and entire research papers being generated by AI tools that aim to mimic authentic scholarly work. In April, Springer Nature had to withdraw another title, Mastering Machine Learning: From Basics to Advanced, after it was discovered to contain numerous fictitious references.

Expert Analysis

In the book analyzed by The Times, one citation claims to refer to a paper published in the “Harvard AI Journal,” a journal that Harvard Business Review has confirmed does not exist. Guillaume Cabanac, an associate professor of computer science at the University of Toulouse, employed a tool called BibCheck to scrutinize two chapters. His analysis revealed that at least 11 of 21 citations in the first chapter could not be matched to known academic papers, and similarly, 8 of the 10 citations in chapter four were untraceable.

“This is research misconduct: falsification and fabrication of references,” Cabanac stated, noting a steady increase in AI “hallucinated” citations across academic literature. He emphasized the critical need for reliable references, stating, “When [these studies] are fragile or rotten, we can’t build anything robust on top of that.”
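The Times and Cabanac have not published how BibCheck works internally. As a rough illustration only (this is not BibCheck, and the queried title below is hypothetical), the sketch shows one way a cited title can be checked against the public Crossref index, the kind of lookup such tools rely on.

```python
# Rough illustration only: not BibCheck; the queried title below is hypothetical.
# Checks whether a cited title can be matched to a record in the public Crossref index.
from typing import Optional

import requests


def find_crossref_match(cited_title: str) -> Optional[str]:
    """Return the DOI of the closest Crossref record, or None if no plausible match exists."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return None
    top = items[0]
    matched_title = (top.get("title") or [""])[0]
    # Crude similarity check: accept the hit only if most words of the cited title appear in it.
    cited_words = set(cited_title.lower().split())
    overlap = len(cited_words & set(matched_title.lower().split())) / max(len(cited_words), 1)
    return top.get("DOI") if overlap > 0.6 else None


if __name__ == "__main__":
    doi = find_crossref_match("Ethical governance of generative AI in scholarly publishing")
    print(doi if doi else "No plausible match - citation could not be verified")
```

Real citation checkers also verify authors, years, and DOIs, and a failed lookup flags a reference for manual review rather than proving fabrication outright.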

Additional Findings

A separate review by Dr. Nathan Camp of New Mexico State University corroborated Cabanac’s findings, reporting numerous erroneous, mismatched, or entirely fabricated references in the AI ethics book. Some citations combined details from genuine papers, and six of the book’s chapters appeared to be accurate. Each chapter was written by different contributors.

Camp noted, “While it is difficult to definitively ascertain whether or not the citations used are AI-generated, they are certainly erroneous at best, likely fabricated, and the simplest way to fabricate citations is with AI.”

Publisher’s Response

James Finlay, vice-president for applied sciences books at Springer Nature, stated, “We take any concerns about the integrity of our published content seriously. Our specialist research integrity team is prioritizing this investigation.” He added that while their integrity team works diligently with editors and utilizes specialist expertise and detection tools to maintain standards, a small number of issues may still slip through the cracks.
