AI Ethics in Research: Navigating Challenges and Responsibilities

UW Experts Discuss AI Research Ethics

Researchers at the University of Wisconsin-Madison convened on January 30 to address the ethical implications of generative artificial intelligence in academia and research. The panel included specialists from the UW-Madison Data Science Institute, Libraries, and the Institutional Review Boards (IRB).

Ethical Concerns in AI Research

During the discussion, Dr. Anna Haensch of the Data Science Institute highlighted that AI complicates research practices. She noted that AI tools asked to generate lengthy research papers can "hallucinate" — fabricate plausible-sounding but false information — ultimately undermining the integrity of scholarly work.

Recommendations for Researchers

Jennifer Patiño, the digital scholarship coordinator at the UW-Madison Libraries, recommended that researchers "make a plan" before integrating AI into their work. She emphasized the importance of understanding how AI tools function, their terms of use, and any licensing considerations involved.

Patiño pointed out the role of the Research Data Services center at UW-Madison, which aids researchers in making their data citable, reproducible, and publicly accessible. She stated, “We help researchers with data management plans and also to share their data, ensuring compliance with funders and their publications.”

Legal Implications of AI in Research

Concerns were raised regarding AI tools that harvest data from privately owned publications, leading to complex legal questions about intellectual property. Patiño noted, “Some publishers and journals completely ban AI use, while others allow it in limited capacities, such as improving language.”

Understanding these policies is crucial, as Patiño advised, “It’s really important to take a look at what both the publisher and the journal are saying.”

Human-Subject Research and AI

The panel also featured insights from Casey Pellien, the IRB's associate director, and Lisa Wilson, the IRB's director. They discussed the ethical considerations of using AI in human-subject research, which they defined as any study that analyzes information from human subjects to draw conclusions about behavior.

Wilson urged researchers to thoroughly “read” and “understand” the terms of use of AI tools, emphasizing the risks associated with data reidentification and the subjects’ rights regarding their data.

University-Level AI Policies

The discussion shifted to institutional policies on AI use. At UW-Madison, information classified as public may be entered into generative AI tools, but entering sensitive or restricted data, such as passwords and confidential documents, is strictly prohibited under guidance from the university's Chief Information Security Officer.

Legislative Impacts on AI

Haensch noted that AI is reshaping various career landscapes beyond research labs. She referenced federal legislation, such as the AI-Related Job Impacts Clarity Act and the PREPARE Act, which aim to regulate AI interference in workforce dynamics, including hiring and training practices. Additionally, she highlighted the December executive order from the Trump Administration aimed at promoting AI use across the U.S.

The panel concluded with a call for researchers to navigate the evolving landscape of AI responsibly, ensuring ethical practices while leveraging the potential of this transformative technology.
