Guest Column | May 15, 2026

Where AI Appears In Clinical Trials — And Why Contracts Need To Catch Up (Part 1)

By Katherine Leibowitz, Leibowitz Law


AI is increasingly embedded in the daily operations of clinical trials, yet most contracts lack standardized language addressing AI. This makes diligence and contract drafting critical. Without asking the right questions — of your contracting partners, vendors, and internal teams — organizations expose themselves to unnecessary and potentially significant risk.

In many cases, AI enters clinical trial operations not through deliberate deployment but through tools already embedded in trial systems, vendor platforms, and everyday productivity software.

This three-part series examines how AI is being used in clinical trial operations and the contractual and operational risks that follow. Part 1 outlines where AI appears in clinical trial operations and supporting technologies, and the questions companies and organizations should ask when AI touches data (defined below). Parts 2 and 3 address the contract provisions that respond to those risks, including intellectual property, data rights, regulatory compliance, cybersecurity, monitoring and validation, and risk allocation.

As we discussed in a post on how technology is shaping clinical trial agreements, contracts have already been evolving to address increasingly complex data flows and technology-enabled services. The introduction of AI builds on those same pressures, but with additional considerations around data use, accountability, and oversight.

Many of the governance and contracting issues addressed here also arise in digital health platforms used in clinical care and healthcare operations. However, AI incorporated into regulated medical products themselves, such as software as a medical device (SaMD), raises additional product and regulatory issues beyond the scope of this discussion.

For purposes of this post, “data” refers broadly to data, documents, communications, and other information relating to the clinical trial, including outputs generated by AI systems using such information.

Where AI Is Appearing In Clinical Trial Operations

AI appears in study design and recruitment, day-to-day operational support, functionality embedded within trial platforms, regulatory and reporting activities, and everyday productivity tools.

Not all AI use in clinical trial operations carries the same level of risk. The level of concern increases, for example, where AI used for operational efficiencies affects patient safety, drug quality, or the reliability of clinical study results. This distinction is reflected in FDA's January 2025 draft guidance, which focuses on AI used to produce information or data intended to support regulatory decision-making.

Examples of AI in clinical trial operations include:

Study Design and Recruitment Analytics

  • Protocol drafting and summarization
  • Site feasibility and enrollment prediction
  • Patient eligibility screening, including analysis of EHR/EMR data

Operational Support During Trial Execution

  • Site communications and drafting
  • Ambient listening or AI scribes during visits
  • Visit summaries and clinical notes within EHR systems
  • Adverse event narrative drafting
  • Contract and document review
  • Informed consent translation

AI Embedded in Clinical Trial Platforms

  • AI functionality integrated into CTMS, EDC, and eTMF platforms
  • Analytics embedded within the platforms, including risk-based monitoring analytics

Regulatory and Reporting Assistance

  • Regulatory document drafting
  • Clinical study report preparation

Everyday Productivity Tools

  • AI-powered research, drafting, summarization, and document review used in daily work

Who Is Using AI, And Who Is Accountable?

Sponsors, sites, CROs, and service providers must understand where AI is embedded in trial operations and who remains accountable for its use.

Even where AI is deployed by sites, CROs, or vendors, sponsors remain responsible for the data and analyses submitted to regulators. This creates direct regulatory exposure for sponsors: FDA may scrutinize how AI is used in relation to trial data during inspections or submission reviews, regardless of which party in the trial ecosystem deployed the AI.

In practice, AI can enter clinical trial operations without intentional deployment, through software updates, vendor platform features, or everyday productivity tools used by trial personnel.

Organizations need to review their relationships with entities across the trial ecosystem for AI usage and accountability, including:

  • sponsor relationships with sites, CROs, and technology and service providers
  • site relationships with sponsors, CROs, SMOs, and technology and service providers
  • relationships with IRBs, decentralized trial providers, and digital health technology vendors

Personnel may use everyday productivity tools to summarize protocols, draft narratives, or translate consent forms, which can result in data being processed outside of controlled systems.

Accountability does not stop with the immediate contracting partner. Contracting parties may rely on downstream technology providers, creating third- and fourth-party AI risk that should be addressed through diligence and contractual controls.

How Are Third- And Fourth-Party AI Vendors Governed?

AI functionality is often delivered through layered vendor relationships, with technology providers relying on downstream AI developers or cloud services.

For example, a sponsor may access an EDC platform through a CRO. The CRO may license the platform from an EDC vendor, which embeds AI functionality developed by another provider. In this structure, data may pass through multiple organizations before the output is delivered, making it difficult to understand how data is processed, used, or stored.

If AI functionality is embedded within vendor platforms, sponsors and sites may have limited visibility into how data is processed or flows through the vendor’s technology stack, including downstream AI providers. This may limit their ability to conduct diligence or exercise oversight over how data is impacted by AI.

A related risk arises when vendors deploy AI tools without the sponsor’s or site’s knowledge or approval. In these situations, the AI provider may not be subject to the sponsor’s or site’s security review, data governance policy, or contractual controls.

For this reason, organizations should review the entire vendor stack for AI and ensure that contracts address downstream providers that may access or process data.

What AI Is Used, And Why?

To assess AI risk, organizations must understand what AI tools are being used — by themselves, their vendors, and their contracting partners — and how those tools interact with data.

Key questions include:

  • What AI is embedded in the technology?
    Identify the AI functionality within the tools or platforms used in connection with the trial, including systems that may access or process data, even if not traditionally viewed as part of trial operations.
  • What data does the AI access, process, or generate?
    Identify the types of data involved, including (by way of example) source data, reported trial data, and operational documentation.
  • What is the AI designed to do?
    Understand the AI’s capabilities, purpose, and outputs.
  • Is the AI use transparent?
    Determine whether the service provider clearly discloses the AI functionality and provides documentation explaining how it operates.
  • Has the AI been appropriately evaluated?
    • Technical evaluation: Determine whether the AI has undergone appropriate technical review, including evaluation of training data sources and transparency regarding the model, the data it uses, and how its outputs are generated.
    • Governance and safeguards: Confirm that appropriate controls are in place, including safeguards against bias, errors, and model drift; oversight (including human review) and monitoring; escalation procedures; and model updates.

AI use in clinical trial operations is expanding rapidly, and these examples represent only a subset of current use cases. As AI becomes more deeply embedded across trial operations and supporting technologies, organizations must understand where it is used and how it interacts with data.

In Part 2, we examine how contracts address these risks.

A version of this article first appeared on Leibowitz Law's blog. It is republished here with permission.

About The Author:

Katherine Leibowitz has supported the clinical trials enterprise for over 25 years. She cofounded Leibowitz Law in 2013 after spending 17 years at a top global law firm. Her boutique life sciences regulatory and transactional law firm is laser-focused on clinical trials and technology commercialization, serving sponsors/manufacturers, technology service providers, research institutions, CROs, and digital health companies.

Katherine handles the full clinical trial operations contracting process from CTAs and budgets to HIPAA authorizations, informed consent forms, EDC vendor agreements, CRO MSAs, committee membership, physician consulting, and more. In today’s fast-evolving world of electronic databases, decentralized trials, AI, cyber risk, secondary research, and biobanking, she excels at modernizing contract templates and negotiations to align with the shifting landscape and move deals forward efficiently.

A frequent speaker and author, Katherine enjoys synthesizing regulatory, legal, and industry norms to provide integrated, practical guidance to the life sciences community.