
Trends in Expert Witnessing: AI Disclosure 

April 16, 2026

By Noah Bolmer

“Anything that is fed into the system—not only is it machine learning, it’s also possibly discoverable.”
– Attorney Carl Taylor 

Reminder: Always check with your attorney before using AI in any capacity during an engagement.  

Experts preparing reports and testimony are increasingly encountering scrutiny when their analysis has been shaped by digital systems, including generative AI. What once remained in the background, such as running models through specialized software or using automated routines to process data, has become far more sophisticated, and is now a point of contention in litigation. Judges are asking whether conclusions can be traced back through transparent, reproducible methods, and whether the role of AI has been made explicit. This shift in expectation is redefining how experts prepare their reports and how their testimony is received. 

Background 

In the nineteenth and early twentieth centuries, courts demanded that experts explain their reasoning in ways that could be scrutinized by judges and opposing counsel. The emphasis was on professional qualifications and adherence to recognized scientific methods. Reliability meant showing that conclusions rested on accepted practice rather than personal opinion. 

As technology entered the courtroom, expectations shifted. The rise of statistical packages, forensic software, and specialized modeling tools in the late twentieth century introduced new layers of complexity. Experts were expected not only to present results but also to explain how those tools functioned and why they were appropriate. Disclosure of methodology became central to admissibility, reinforced by standards such as Daubert v. Merrell Dow Pharmaceuticals, which required courts to act as gatekeepers of scientific reliability. 

Generative AI represents the latest stage in this progression. Unlike traditional software, which applies defined algorithms, generative systems produce outputs without a transparent record of how they were derived. This lack of traceability raises questions about whether reliance on such tools undermines the reproducibility that courts have long demanded. While federal judges already screen expert evidence under the Daubert framework, which requires experts to show that their methods are reliable and reproducible, that gatekeeping role is now focused on how digital systems, including generative AI, shape expert conclusions. 

Rules in Flux 

The United States is moving rapidly from abstract debate to concrete rules on how experts must disclose the use of generative AI. Federal judges already apply Rule 702 to screen expert evidence, but the rise of AI has prompted proposals for new evidentiary rules and has produced case law that tests disclosure obligations directly. 

Proposed Federal Rules of Evidence 

The Advisory Committee on Evidence Rules has circulated draft amendments that would explicitly address AI. Most notable is proposed Federal Rule of Evidence 707, “Machine-Generated Evidence,” which would require parties to disclose when evidence or analysis is produced by an algorithmic system. The draft emphasizes transparency: experts would need to identify the system used, describe its inputs, and explain how its outputs shaped their conclusions. The Committee’s 2025 meeting notes highlighted concerns over Rule 707: 

Bias in AI outputs 

  • Committee members worried that AI systems may replicate or amplify biases present in training data. 
  • They noted that without disclosure of inputs and design choices, courts cannot evaluate whether outputs are skewed or discriminatory. 

Reproducibility challenges 

  • AI systems often generate outputs that cannot be exactly replicated, especially generative models. 
  • The Committee emphasized that reproducibility is central to Rule 702 reliability, and lack of reproducibility undermines adversarial testing. 

Interpretability and auditability 

  • Complex machine-learning processes may be “black boxes,” making it difficult for experts or judges to explain how conclusions were reached. 
  • The Committee highlighted that interpretability is essential for cross-examination and judicial gatekeeping. 

Risk of unreliable evidence 

  • Members noted that AI-generated content (such as deepfake audio or video) poses heightened risks of inaccuracy and authenticity problems. 
  • They stressed that Rule 707 must ensure courts can distinguish between reliable machine-generated evidence and deceptive or unverifiable outputs. 

The Advisory Committee will reconvene in mid-2026 to consider public comment, with a timeline for adoption in 2027 at the earliest. If adopted, this would be the first explicit rule requiring disclosure of AI in federal proceedings. States, meanwhile, would remain free to adopt Rule 707 in whole or in part, or to implement disclosure rules of their own. 

Case Law 

Kohls v. Ellison arose from a constitutional challenge to Minnesota’s “deepfake” law. The plaintiffs’ expert submitted a report that included passages drafted with generative AI, but the role of the tool was not disclosed. The court insisted on transparency, requiring the expert to identify what the AI produced, explain how it was reviewed and edited, and preserve prompts or logs. 

The significance of Kohls lies in how the court treated nondisclosure. It did not ban AI use outright; instead, it treated the omission as a methodological flaw that undermined traceability. The ruling made clear that opposing parties must be able to test whether the expert’s reasoning is independent or whether it rests on machine-generated text. In practical terms, Kohls warns experts that undisclosed AI use will be treated as a reliability problem, exposing testimony to exclusion or expanded discovery. The case is already being cited in commentary as a turning point: the first federal decision to demand explicit disclosure of generative AI in expert reports. Bioengineering expert Cris Daft puts it simply: “Experts should assume they’ll be asked in deposition whether AI was used.” 

Ferlito v. Harbor Freight Tools USA presented the opposite scenario. In this product liability case involving a defective axe, the plaintiff’s expert consulted ChatGPT to support an alternative design theory for the axe-head attachment. The defendant moved to exclude the testimony, arguing that reliance on ChatGPT rendered it unreliable. The court denied the motion, holding that disclosure of the AI’s role and the expert’s oversight satisfied Rule 702. 

The Ferlito ruling is important because it shows that courts are not treating generative AI as inherently inadmissible. Instead, the emphasis is on transparency and integration into professional judgment. The court accepted that the expert had disclosed the use of ChatGPT and had exercised independent reasoning in evaluating its output. In doing so, the court signaled that AI can be part of expert methodology if it is openly acknowledged and subjected to the same scrutiny as any other tool. This decision provides a roadmap for experts: disclose the AI’s role, explain how its output was vetted, and demonstrate that the ultimate opinion rests on professional expertise. 

Taken together, Kohls and Ferlito illustrate the spectrum of judicial response. Nondisclosure triggers skepticism and demands for documentation, while disclosure and integration into professional reasoning can sustain admissibility. Draft Rule 707 would formalize this expectation, making disclosure a matter of rule rather than judicial discretion. Professional associations are already echoing this approach, recommending that experts retain prompts, logs, or code to document tool use. 

Europe: Regulatory Framework and Practice 

The European Union Artificial Intelligence Act, adopted in 2024, is the world’s first comprehensive legal framework for AI. It applies across the EU and sets out obligations based on risk categories. High-risk AI systems, which include those used in judicial and law-enforcement contexts, must meet strict requirements for transparency, documentation, and human oversight. Certain practices deemed “unacceptable risk” were banned outright beginning in February 2025, while broader governance and transparency provisions began phasing in during August 2025, with full enforcement scheduled for August 2026. If an expert relies on an AI system to generate or process evidence, that system may fall into the high-risk category. The Act requires disclosure of the system’s characteristics, training data, and safeguards. 

Commission Guidelines and Templates 

In July 2025, the European Commission published Guidelines on General Purpose AI (GPAI) obligations, along with a training data disclosure template. These documents clarify how providers must summarize the content used to train their models and how users should disclose reliance on GPAI systems. The Commission also issued a Code of Practice for GPAI, emphasizing transparency and accountability in professional use. 

For experts, this means that reliance on tools like ChatGPT or other GPAI systems must be disclosed not only in court but also in compliance with EU regulatory obligations. The guidelines make disclosure a matter of statutory compliance, not just evidentiary prudence. 

Case Law and Judicial Practice 

Unlike the U.S., Europe has not yet produced headline cases directly excluding or admitting expert testimony based on AI use. Instead, courts are beginning to apply the AI Act’s transparency requirements in related contexts, such as administrative proceedings and regulatory enforcement. Early commentary suggests that once the Act’s provisions are fully in force, European judges will treat nondisclosure of AI use as a statutory violation, potentially rendering testimony inadmissible. Experts working in Europe must anticipate regulatory as well as judicial scrutiny. Reports that rely on AI must identify the system, disclose training data summaries where applicable, and demonstrate human oversight. 

Conclusion 

In the United States, courts are already demanding disclosure of generative AI use, and the Advisory Committee’s draft Rule 707 would make that obligation explicit. In Europe, the AI Act embeds transparency into statutory law, requiring experts to document and disclose reliance on algorithmic systems as part of regulatory compliance. Across jurisdictions, the principle is converging: AI is not necessarily forbidden, but it must be visible, documented, and subject to human oversight. 

For experts, this means the era of casual or unacknowledged reliance on AI tools is over. Reports that quietly incorporate machine-generated text or analysis may be treated as unreliable, while testimony that openly acknowledges AI use and demonstrates independent professional judgment will stand a stronger chance of surviving admissibility challenges. The practical demands are straightforward, though they vary by jurisdiction. Always check with your engaging attorney before using AI in any capacity. 

  • Identify the tool. Name the system used, whether generative AI or scripted processing. 
  • Document the process. Retain prompts, logs, code, or training data summaries where applicable (a minimal logging sketch follows this list). 
  • Explain the oversight. Show how the expert reviewed, edited, or validated the AI output. 
  • Separate reasoning from automation. Make clear that the ultimate opinion rests on professional expertise, not on the machine. 
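
For experts comfortable with scripting, the first two items can become a routine habit. The following Python sketch is illustrative only, not a court-endorsed or rule-prescribed method: the file name, record fields, and the example tool and model labels are assumptions, and counsel should approve any logging practice before it is adopted, since the log itself may be discoverable.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_disclosure_log.jsonl")  # hypothetical file name

def log_ai_use(tool: str, model: str, prompt: str, output: str) -> dict:
    """Append one AI interaction to an append-only JSONL audit log."""
    record = {
        "tool": tool,          # e.g., "ChatGPT"
        "model": model,        # model label as reported by the tool
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,      # exact text submitted to the system
        "output": output,      # verbatim response, before any editing
        # A SHA-256 digest over prompt + output lets a reviewer verify
        # that an entry has not been altered after the fact.
        "sha256": hashlib.sha256((prompt + output).encode("utf-8")).hexdigest(),
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record

# Example: record a drafting query before its output informs a report.
log_ai_use(
    tool="ChatGPT",
    model="gpt-4o",  # assumption; record whatever the tool reports
    prompt="Summarize common attachment-failure modes for axe heads.",
    output="(paste the tool's verbatim response here)",
)

An append-only record of this kind speaks directly to the traceability concern at the heart of Kohls: each entry ties a dated prompt to a verbatim output, so the expert can later show exactly what the machine contributed and what rested on independent judgment. Remember Attorney Taylor’s caution above: anything fed into the system, and anything recorded about it, is possibly discoverable.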

Experts who embrace disclosure will not only protect admissibility but also strengthen credibility in a legal environment increasingly skeptical of opaque methods. Those who resist or conceal AI use risk exclusion, regulatory sanction, and reputational damage. 

If you’d like to be considered for expert witness opportunities, join Round Table Group. For over 30 years, we’ve connected litigators with the most qualified experts across disciplines. Call us at 202-908-4500 or sign up today to learn more. 

 
