Guest Column | April 16, 2026

AI Amplifies Capabilities But Also Risks: The Legal Consequences Of AI In Clinical Research

By Brittnie Panetta, Matthews & Associates


The pharmaceutical industry has always operated at the intersection of innovation and risk, but the rapid integration of AI and digital monitoring into clinical research is changing that balance. What was once a tightly controlled, site-based process is now increasingly decentralized, data-driven, and automated. These advances promise efficiency, broader patient access, and faster time to market. But in the courtroom, they are creating a new generation of legal vulnerabilities, many of which the industry has yet to fully appreciate.

Having handled mass toxic tort cases against some of the largest pharmaceutical manufacturers, I have seen one reality become clear: AI redistributes and amplifies liability. The core legal questions remain unchanged — what the manufacturer knew, when it knew it, and what it did about it — but the path to answering those questions is becoming much more complex.

When More Data Creates More Risk

One of the most common assumptions I hear from defense counsel is that more data leads to better outcomes and stronger legal defenses. In theory, AI-driven trials generate massive data sets from wearables, mobile apps, and remote monitoring platforms, capturing real-time patient information at a scale never before possible.

But in litigation, volume is not the same as clarity.

The explosion of data creates new attack surfaces. We are digging into raw data sets, algorithmic outputs, flagged anomalies, and internal communications surrounding those data points. When AI systems generate thousands of alerts, the question becomes: which ones were acted on, which were ignored, and why?

It is common to uncover situations where:

  • Early signals of adverse events (AEs) were flagged but deprioritized.
  • Algorithms “smoothed” irregular data patterns, masking outliers.
  • Internal teams disagreed on the significance of certain findings.

In a traditional trial, those inconsistencies might be limited, but in an AI-driven environment, they multiply. And every inconsistency is an opportunity to argue that the manufacturer either missed or ignored a warning sign.

Data Privacy Can Be A Litigation Weapon

Data privacy has long been treated as a compliance issue, governed by statutes like HIPAA. But in the context of AI-driven clinical trials, privacy is also about preserving the integrity and admissibility of evidence.

Decentralized trials rely on continuous data streams from patients’ homes, usually collected through third-party platforms and stored in cloud environments that may span multiple jurisdictions. This raises a host of legal concerns:

  • Were patients fully informed about how their data would be used and shared?
  • Did cross-border transfers comply with international data protection laws?
  • Can the company account for every entity that had access to the data?

When the answer to any of these questions is unclear, it creates leverage for plaintiffs. Data that is improperly handled or insufficiently protected can be challenged as unreliable or incomplete. It can also support broader claims that the company prioritized speed and convenience over patient safeguards.

I have seen cases where data governance failures became the linchpin of the entire litigation strategy. Once you undermine the credibility of the data, you begin to undermine the conclusions drawn from it, and from there liability begins to crystallize.

The U.S. Patchwork: State Laws Are Catching Up Quickly

While the U.S. does not yet have a single comprehensive federal privacy law comparable to the EU's GDPR, individual states are rapidly stepping in to fill that gap, often in inconsistent ways. California has taken a leading role with the California Consumer Privacy Act and its expansion under the California Privacy Rights Act, both of which impose obligations that closely mirror many GDPR principles. These laws grant consumers, including clinical trial participants, significant rights over their personal data, such as the ability to access their information, request deletion, limit how it is shared or sold, and gain greater transparency into how it is being used.

However, California is no longer an outlier. Nearly half of U.S. states have enacted or are in the process of implementing their own privacy frameworks, each with its own nuances around consent requirements, definitions of sensitive health data, and enforcement mechanisms. For pharmaceutical companies conducting multistate or nationwide clinical trials, this creates a complex and often underestimated compliance challenge. A single decentralized trial may involve participants in California, where disclosure requirements are particularly strict; Texas, where sector-specific rules and biometric data considerations come into play; and states like Virginia or Colorado, which have their own consumer data rights frameworks. The result is that a single data set may be subject to multiple, and sometimes conflicting, legal obligations depending on where participants are located, significantly increasing both compliance burdens and litigation risk.

Algorithmic Accountability: The Liability Problem No One Owns

The most unsettled issue in this space is liability when AI systems fail. Clinical trial sponsors are increasingly relying on algorithms to:

  • identify adverse events
  • monitor patient compliance
  • detect safety signals across large populations.

But when those systems misinterpret data or fail to detect a developing risk, responsibility becomes diffuse. Pharmaceutical companies may argue that:

  • The algorithm was developed by a third-party vendor.
  • Data inputs were incomplete or corrupted.
  • Human oversight mechanisms were in place.

None of these defenses are particularly persuasive in a mass tort context.

Courts and juries tend to focus on control and responsibility. If a manufacturer chooses to integrate AI into its clinical trial process, it assumes responsibility for ensuring that the technology is reliable, validated, and appropriately supervised. Delegating critical safety functions to an opaque system does not reduce liability — it may increase it, particularly if the company cannot explain how decisions were made.

This is where the concept of the black box becomes legally dangerous. If an algorithm flags — or fails to flag — an adverse event, and no one can clearly articulate why, it creates a narrative that the company was operating without meaningful oversight. In front of a jury, that narrative is difficult to overcome.

DCTs: Convenience Vs. Control

The rise of decentralized clinical trials (DCTs) is framed as a democratization of research, allowing patients to participate without geographic constraints. While that is undoubtedly true, it comes at a cost: loss of control over the data environment.

In traditional trials, data collection occurs in controlled clinical settings, under the supervision of trained professionals. In decentralized models, data is generated in the real world through wearable devices, mobile apps, and patient self-reporting. Each of these introduces variability:

  • Devices may be improperly calibrated or inconsistently used.
  • Patients may misreport or fail to report symptoms.
  • Connectivity issues may delay or disrupt data transmission.

These variables are fertile ground for challenge in litigation. Defense teams can argue that data inconsistencies are inherent to real-world collection methods. Plaintiffs will then argue that those inconsistencies should have been anticipated and mitigated.

Decentralized trials can also create gaps in the evidentiary record. If an AE occurs but is not captured in real time, or is recorded inaccurately, it raises questions about whether the trial design itself was adequate to detect risk. And if the trial cannot reliably detect risk, its conclusions become suspect.

What Happens When You Can’t Show Your Work

Discovery is where the battle is won or lost in every mass tort case. We scrutinize the process behind each outcome. That includes:

  • internal emails and communications
  • draft reports and revisions
  • raw data and analytical methodologies.

AI complicates this process because it does not always produce a clear, linear record of decision-making. Instead, it generates outputs based on complex, and sometimes opaque, calculations.

This creates what I refer to as the “audit trail gap.”

If a company cannot produce a clear record showing:

  • how data was processed,
  • how AEs were evaluated, or
  • why certain decisions were made,

it opens the door to allegations that critical information was overlooked or concealed.
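
What a defensible record looks like is far less exotic than the technology that makes it necessary. The sketch below (in Python, using hypothetical names like SignalDecision and AuditTrail; it illustrates the record-keeping concept, not any vendor's actual system) shows the minimum a sponsor should be able to produce for every automated signal: what fired, which model produced it, where the underlying data lives, who reviewed it, what they decided, and why.

# A minimal sketch of decision logging for automated safety signals.
# All names here (SignalDecision, AuditTrail, the file path) are
# hypothetical, not a real vendor API; the point is the shape of
# the record, not the tool.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass(frozen=True)
class SignalDecision:
    """One record per automated signal: what fired, who looked, what they decided, and why."""
    signal_id: str       # identifier of the algorithmic alert
    model_version: str   # which version of the model produced it
    raw_inputs_ref: str  # pointer to the underlying data, never just the output
    reviewer: str        # the human who evaluated the signal
    decision: str        # e.g., "escalated", "deprioritized"
    rationale: str       # the "why": the piece most often missing in discovery
    decided_at: str      # UTC timestamp


class AuditTrail:
    """Append-only log; entries are written once and never edited in place."""

    def __init__(self, path: str):
        self.path = path

    def record(self, entry: SignalDecision) -> None:
        # One JSON line per decision, appended in order received.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(entry)) + "\n")


# Usage: every alert gets a record, including the ones that were deprioritized.
trail = AuditTrail("signal_decisions.jsonl")
trail.record(SignalDecision(
    signal_id="AE-2024-0117",
    model_version="safety-model-3.2",
    raw_inputs_ref="trial-data/subject-442/week-6",
    reviewer="j.smith",
    decision="deprioritized",
    rationale="Isolated reading; device calibration error confirmed per site log.",
    decided_at=datetime.now(timezone.utc).isoformat(),
))

Even a log this simple answers the three questions discovery always asks: which alerts were acted on, which were deprioritized, and on what rationale. In my experience, the records that hurt companies are rarely the ones that exist; they are the ones that were never written.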

In one case I handled, a manufacturer relied heavily on automated data analysis tools. When we requested documentation explaining how certain safety signals were evaluated, the responses were incomplete and inconsistent. That gap became a central theme at trial — the idea that the company itself did not fully understand the tools it was using to assess risk.

Jurors do not respond well to that kind of uncertainty.

Regulatory Lag Is A Breeding Ground For Liability

Regulators, including the FDA, have begun issuing guidance on the use of AI and digital tools in clinical research. But guidance is not the same as comprehensive regulation, and there remains a significant gap between technological capability and legal standards.

This gap creates both risk and opportunity.

On one hand, pharmaceutical companies must operate without clear, uniform rules governing AI validation, data integrity, and oversight. On the other hand, that lack of clarity allows plaintiffs to argue that companies failed to meet evolving standards of care — even if those standards were not formally codified.

We often frame this as a question of reasonableness. Did the company take reasonable steps to ensure that its AI systems were accurate, reliable, and properly monitored? Did it implement safeguards to detect and address potential failures?

Without clear regulatory benchmarks, those questions are answered by juries. And juries tend to evaluate reasonableness through a common-sense lens, not a technical one.

The Future Of AI Accountability

What we are seeing now is the early formation of legal standards that will govern AI in clinical research for years to come. These standards are not being written solely by regulators — they are being shaped in courtrooms, through litigation and jury verdicts.

Several themes are beginning to emerge:

  • Transparency is becoming a legal expectation. Companies must be able to explain how their AI systems work and how decisions are made.
  • Oversight cannot be outsourced. Reliance on third-party vendors does not absolve sponsors of responsibility.
  • Data integrity is paramount. Inconsistent or poorly documented data will be viewed with skepticism.
  • Patient safety remains the central obligation. Technological innovation does not excuse failures to detect or respond to risk.

These principles are likely to evolve into more formal legal doctrines, particularly as more cases involving AI-driven trials make their way through the courts.

Innovation Without Accountability Is A Litigation Strategy — For Plaintiffs

The pharmaceutical industry is not going to slow its adoption of AI and digital monitoring. These technologies have the potential to improve patient outcomes and accelerate medical breakthroughs.

But from a mass tort perspective, they also introduce new and powerful avenues for liability.

Every algorithm, every data stream, and every decentralized interaction becomes part of the evidentiary record. Any inconsistency in that record strengthens the plaintiff's case.

The companies that will succeed in this new environment are those that recognize a fundamental truth: technology magnifies accountability. Any company that fails to internalize that lesson could find itself defending the very systems it relied on to bring its products to market.

About The Author:

Since joining Matthews & Associates, Brittnie Panetta has been involved in complex civil litigation against some of the largest drug and pesticide manufacturers, while also assisting victims of the California wildfires in securing compensation.

Brittnie earned her Master's Degree in international policy studies, with a focus on trade, investment, and development, and her Bachelor's Degree in international studies from the Middlebury Institute of International Studies, a graduate school of Middlebury College. She earned her Doctor of Jurisprudence at Santa Clara University School of Law.

Prior to her legal career, Brittnie co-founded a non-profit organization to improve social and economic conditions for the people of Santa Catarina, Mexico. Her decade-plus involvement with this non-profit and the Mexican government solidified her fluency in Spanish.