UW Experts Discuss AI Research Ethics
Researchers at the University of Wisconsin-Madison convened on January 30 to address the ethical implications of generative artificial intelligence in academia and research. The panel included specialists from the UW-Madison Data Science Institute, the Libraries, and the Institutional Review Board (IRB).
Ethical Concerns in AI Research
During the discussion, Dr. Anna Haensch of the Data Science Institute highlighted how generative AI complicates research practices. She noted that while AI can produce lengthy research papers, it can also "hallucinate," generating fabricated information that undermines the integrity of scholarly work.
Recommendations for Researchers
Jennifer Patiño, the digital scholarship coordinator at the UW-Madison Libraries, recommended that researchers "make a plan" when integrating AI into their work. She emphasized the importance of understanding how AI tools function, their terms of use, and any licensing considerations involved.
Patiño pointed out the role of the Research Data Services center at UW-Madison, which aids researchers in making their data citable, reproducible, and publicly accessible. She stated, “We help researchers with data management plans and also to share their data, ensuring compliance with funders and their publications.”
Legal Implications of AI in Research
Concerns were raised regarding AI tools that harvest data from privately owned publications, leading to complex legal questions about intellectual property. Patiño noted, “Some publishers and journals completely ban AI use, while others allow it in limited capacities, such as improving language.”
Understanding these policies is crucial, as Patiño advised, “It’s really important to take a look at what both the publisher and the journal are saying.”
Human-Subject Research and AI
The panel also featured insights from Casey Pellien, the IRB’s associate director, and Lisa Wilson, the IRB’s director. They discussed the ethical considerations of using AI in human-subject research, defining it as any study that analyzes information from human subjects to draw conclusions about behavior.
Wilson urged researchers to thoroughly “read” and “understand” the terms of use of AI tools, emphasizing the risks associated with data reidentification and the subjects’ rights regarding their data.
University-Level AI Policies
The discussion shifted to institutional policies on AI use. According to the university's Chief Information Security Officer, information deemed public may be entered into generative AI tools at UW-Madison, while sharing sensitive or restricted data, such as passwords and confidential documents, is strictly prohibited.
Legislative Impacts on AI
Haensch noted that AI is reshaping career landscapes well beyond research labs. She referenced federal legislation, such as the AI-Related Job Impacts Clarity Act and the PREPARE Act, which seek to address AI's effects on the workforce, including hiring and training practices. She also highlighted the December executive order from the Trump Administration aimed at promoting AI use across the U.S.
The panel concluded with a call for researchers to navigate the evolving landscape of AI responsibly, ensuring ethical practices while leveraging the potential of this transformative technology.