Expertise in the Age of Algorithms: Trust and Transparency in AI-Assisted Litigation

The Brief

Legal advisors should read this article to:

  • Spot legal and ethical risks of AI-generated evidence, including recent court sanctions and due diligence needs.
  • Stay updated on proposed Rule 707 and its potential impact on the admissibility of machine-generated evidence in litigation.
  • Apply best practices for supervising AI-using expert witnesses to ensure reliability and transparency in testimony.
  • Prepare for cross-examination and challenges to AI-assisted evidence; develop strategies to defend or contest credibility.

Insurance professionals should read this article to:

  • Identify ethical and legal risks of AI-generated evidence, including recent sanctions.
  • Understand proposed Rule 707’s potential impact on insurance claims litigation involving machine-generated evidence.
  • Apply best practices for integrating AI tools while balancing transparency and compliance.
  • Safeguard expert testimony through disclosure, verification, and reliance on human expertise.

Executive Summary

Synthetic technologies are reshaping the legal landscape, presenting expert witnesses with a pivotal challenge: integrating the innovation that artificial intelligence (AI) provides without compromising ethical standards or credibility. Amid rising distrust, the legal system must redefine the role of human expertise and establish transparent frameworks for AI-assisted analysis. Ethical risks posed by opaque algorithms, deepfakes, and hallucinated AI-generated evidence are making expert witnesses’ work harder to defend. These new, nuanced risks and responsibilities confronting attorneys; forensic, scientific, and technical specialists; insurance professionals; and financial experts highlight the urgent need for defensible, accountable usage methodologies and ethical frameworks.

Introduction

Expert witnesses play a crucial role in influencing the outcome of complex litigation and arbitrations, making it essential that they bring the requisite knowledge and insight to each case – whether authoring a report, being deposed, or testifying in court or before a tribunal. While experts are not advocates and do not provide legal advice, their early involvement in a matter can help build effective case strategies and identify the need for additional expert analyses and opinions.

Increasingly, however, expert witnesses are being confronted with the question of whether to leverage artificial intelligence and machine learning to support their work product. AI continues to make extraordinary advances at a rapid rate. Generative AI (Gen-AI), a type of machine learning that includes large language models (LLMs), produces original content, such as text, through programs like OpenAI’s GPT series, Google’s Gemini, and Anthropic’s Claude. Other Gen-AI programs can create images, videos, life-like audio and music, and even code to automate repetitive tasks. Agentic AI, which is more autonomous and can learn and adapt, can conduct more complex tasks, such as automating corporate insurance claims processing, conducting legal document reviews, and supporting supply chain optimization.

This article explores the nuanced risks and responsibilities confronting attorneys, forensic, scientific, technical, and financial experts, as well as insurance professionals, highlighting the urgent need for defensible, accountable usage methodologies and ethical frameworks. This article also discusses the proposed Federal Rule of Evidence 707 and its potential impact on the admissibility of machine-generated evidence in legal proceedings and insurance claims litigation.

Risks for Experts in the Use of AI

For all its promise, AI also poses significant risks to the legal industry, particularly with respect to how testifying experts can or cannot use this new technology in their work product, such as in processing relevant data. While both lawyers and experts have faced issues with AI-generated inaccuracies, experts must be especially vigilant, as attorneys rely on their specialized knowledge and analysis. In the US federal court system, Rule 702 of the Federal Rules of Evidence governs the admissibility of expert testimony. The rule states that an expert witness may testify if several conditions are met, including that the “testimony is the product of reliable principles and methods.” We are left with the question: How reliable are the conclusions provided by testifying experts when AI tools are utilized in their research?

Reliability is just one of the key characteristics that courts look for when evaluating potential evidence. The lack of transparency and repeatability in AI processes is another major challenge. Because many AI systems operate as a “black box,” their workings may not be clearly examined or explained. The opacity that accompanies proprietary AI systems often means that the method by which a conclusion was reached is unclear and may conceal errors or biases.

Human expertise traditionally demonstrates its ethics and defensibility by following the requirements set forth in Rule 702. Humans can present all the facts, data, analysis, and results in a way that eliminates the “black box,” allowing an expert’s work to be reviewed and questioned. The expert can elaborate on and defend his or her work upon request.

One of the biggest challenges for experts seeking to utilize AI in their reports to clients and in litigation is the technology’s susceptibility to hallucinations, including the fabrication of false references and citations. In fact, several courts have sanctioned lawyers for relying on AI-generated citations to cases and other authorities that did not exist.

AI-generated reports often lack the nuanced understanding that expert analysis provides; they cannot substitute for expert judgment or provide insight into the reasoning behind AI’s conclusions. Without the benefit of an expert’s specialized knowledge and experience, prompts can result in incomplete or misleading outputs that fail to address the specific complexities of the subject matter.

The best experts have an earned reputation in their industry, technical knowledge, the skills to digest and retain large quantities of documents and data, an unmatched work ethic, a keen eye for detail, the ability to effectively describe complex technical matters, and the capacity to withstand cross-examination.

If a report uses sources developed through AI, a best practice is to require a hard copy of all of those sources before publishing the report. This ensures that the sources are in the possession of the expert, who can then verify that the information is relevant, reliable, and unbiased. One way to make that verification step systematic is sketched below.
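The following is a minimal sketch of how such a pre-publication check might be tracked in software. It is an illustration under stated assumptions, not a prescribed workflow: the Source record, its fields, and the release_blockers helper are hypothetical names invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Source:
    """One source cited in a draft expert report (hypothetical record)."""
    citation: str
    ai_suggested: bool = False       # True if an AI tool proposed this source
    hard_copy_on_file: bool = False  # True once the expert holds the original
    verified_by_expert: bool = False # True once relevance and reliability are confirmed

def release_blockers(sources: list[Source]) -> list[str]:
    """Return citations that must be resolved before the report is published.

    Any AI-suggested source without a hard copy in the expert's possession,
    or not yet verified by the expert, blocks release.
    """
    return [
        s.citation
        for s in sources
        if s.ai_suggested and not (s.hard_copy_on_file and s.verified_by_expert)
    ]

if __name__ == "__main__":
    draft = [
        Source("Source A (AI-suggested, fully verified)", ai_suggested=True,
               hard_copy_on_file=True, verified_by_expert=True),
        Source("Source B (AI-suggested, unverified)", ai_suggested=True),
    ]
    for citation in release_blockers(draft):
        print(f"BLOCKED: obtain and verify a hard copy of {citation}")
```

However the check is implemented, the substance of the practice is the same: the expert, not the tool, vouches for every source that reaches the final report.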

Currently, there are no federal or state rules that categorically prohibit attorneys from using AI to support live depositions, provided they comply with professional conduct obligations, including competence and supervision, and ensure that AI use does not compromise client confidentiality or privilege.

AI is increasingly used during depositions and trials to challenge the validity of expert opinions. Attorneys utilize AI tools to identify inconsistencies and guide questioning in real time. As AI becomes more integrated into legal proceedings, it is reshaping how expert testimony is scrutinized.

Proposed Rule of Evidence 707

Considering AI’s vulnerabilities, from deepfakes to a lack of transparency, the Committee on Rules of Practice and Procedure of the Judicial Conference of the United States has proposed Rule of Evidence 707 to address the admissibility of “machine-generated evidence.” Under the proposed rule, the party offering such evidence must show that the AI output is based on sufficient facts or data, is the product of reliable principles and methods, and reflects a reliable application of those principles and methods to the facts. Public comment on the rule is open until February 16, 2026.

Proposed Rule 707, in its current form, states:

“When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d). This rule does not apply to the output of basic scientific instruments.”

The committee noted that the rule is not intended to encourage parties to choose machine-generated evidence over live expert witnesses; rather, its goal is reliability.

Intersection of AI and Intellectual Property

“AI is a tool, not a solution,” says one Chief Intellectual Property Officer. While AI contributes to the work product of the Intellectual Property (IP) practice, every item is ultimately reviewed by a managing director and sourced back to the original, reliable evidence.

The IP practice is developing models that start within its own private sandbox, ensuring that any data placed in the AI engine will not be disclosed to third parties. In this protected environment, the model is trained using industry data, such as patent analytics, economics, or the IP group’s own prior work experience. The model is then queried for specific outputs for document review, data requests, and summarization. At the same time, the practice double-checks all analyses using traditional methods to determine whether the AI is providing better answers, the same answers, or lower-quality answers. The IP group has amended its standard form engagement letter to inform all clients that it will utilize AI tools in its work unless clients opt out and decline their use.
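A minimal sketch of that double-check step appears below, assuming a hypothetical workflow in which each AI-derived figure is compared against a value derived independently by traditional methods. The function name, tolerance, and example figures are invented for illustration and are not the practice’s actual method.

```python
def classify_ai_answer(ai_value: float, baseline: float,
                       rel_tolerance: float = 0.01) -> str:
    """Compare an AI-derived figure with one produced by traditional methods.

    Returns 'match' when the two agree within a relative tolerance;
    otherwise 'divergent', so the item is routed back for human review.
    """
    if baseline == 0:
        return "match" if abs(ai_value) <= rel_tolerance else "divergent"
    return ("match"
            if abs(ai_value - baseline) / abs(baseline) <= rel_tolerance
            else "divergent")

# Hypothetical patent-damages cross-check: each AI figure is paired with
# the value a reviewer derived independently (AI value, traditional value).
checks = {
    "royalty_base_usd": (1_250_000.0, 1_250_000.0),
    "royalty_rate_pct": (5.2, 4.8),
}
for item, (ai_value, baseline) in checks.items():
    print(item, "->", classify_ai_answer(ai_value, baseline))
```

The point of the comparison is not the tolerance itself but the routing: any divergence sends the item back to a human reviewer rather than into the client deliverable.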

“What’s unique about our practice is that 70% of our work is associated with patents, whether it be litigation, valuation, or transaction,” says the Chief Intellectual Property Officer. “Patents are complex documents that form part of a comprehensive collection of other patents. Therefore, when studying a patent for any purpose, we review it in detail and examine a small subset of related patents, which may be those of the same inventor, the same technology, or similar transactions. With AI, we can greatly expand the set of data that we’re reviewing, which would be impossible to do on a time- or cost-efficient basis without it. It becomes a valuable search tool for enhancing the quality of our practice.”

Conclusion: The Imminent Future of AI and Expert Testimony

The path forward for expert testimony in the age of AI requires a careful balance between technological innovation and the rigorous standards of reliability and ethical responsibility demanded by the courts.

AI is unlikely to replace experts, at least not in the near future. Litigation is replete with nuanced issues that AI, at its current level of sophistication, cannot fully comprehend. Contextual matters, such as motive in fraud cases, are difficult to uncover even for experienced experts.

Additionally, if proposed Rule 707 is adopted, opposing counsel will likely bring motions challenging any expert who uses AI, arguing that the AI-assisted analysis was not conducted with sufficient control or quality. Even so, AI is likely to become a common part of every expert’s practice in the near future. Its net effect should be to substantially increase the quality of the work, both because AI can contribute directly and, equally if not more importantly, because an opposing expert can use AI to identify flaws or errors. In response, experts will need to conduct preemptive AI critical reviews of their own work, as sketched below, and should expect to be asked about their use of AI during legal proceedings.
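One way such a preemptive review might look is sketched below. It assumes the openai Python package, an OPENAI_API_KEY in the environment, and an illustrative model choice; none of this is a prescribed method, and, consistent with the confidentiality concerns discussed above, a real deployment would require a private, sandboxed environment and human review of every flagged item.

```python
# A minimal sketch of a preemptive AI critical review: before a report is
# finalized, an LLM is asked to flag weaknesses that an opposing expert
# armed with AI might exploit. The model name and prompt are illustrative
# assumptions; every flagged item still requires the expert's own judgment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def critique_draft(draft_text: str) -> str:
    """Ask an LLM to list unsupported assertions, methodological gaps,
    and citations that should be independently verified."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("You are reviewing a draft expert report. List any "
                         "unsupported assertions, methodological gaps, or "
                         "citations that should be independently verified.")},
            {"role": "user", "content": draft_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(critique_draft("Draft report text goes here."))
```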

About Our Contributors

We would like to thank our colleagues for their expertise and insights, which greatly assisted this research.
