Contracting For AI In Clinical Trials: Cybersecurity, Monitoring, And Risk Allocation (Part 3)
By Katherine Leibowitz, Leibowitz Law

This is the third installment of a three-part series on AI in clinical trial operations. Part 1 describes where AI appears in clinical trial operations and the questions organizations should be asking. Part 2 focuses on the contract provisions that address the resulting risks, particularly intellectual property, data rights, and regulatory compliance. Part 3 picks up with cybersecurity, monitoring and validation, and risk allocation. Each section lists key contract clauses to review. Even where not listed separately, indemnification and limitation of liability should be considered throughout.
Many agreements now include AI-specific provisions. There is no standard form, and their content varies depending on how AI is used and the risks involved. The issues below span multiple contract provisions and should be considered in that context.
For purposes of this series, “data” refers broadly to data, documents, communications, and other information relating to the clinical trial, including outputs generated by AI systems using such information.
Cybersecurity
Contract Clauses: Security, Indemnification, Limitation of Liability, and Insurance
- Increased Vulnerability: AI platforms expand the attack surface for clinical trial data. Vulnerabilities in AI systems may also create indirect entry points into sponsor or site environments if security architecture and access controls are not properly implemented.
- Data Leakage: AI systems may also introduce data leakage risks, particularly where platforms retain inputs, transmit data externally, or operate within multi-tenant environments. Similar risks arise when personnel enter trial materials into consumer AI tools outside of controlled systems.
- Security Controls: Organizations should assess vendor security standards, certifications, audit rights, and incident response procedures, including breach notification obligations.
Monitoring And Validation
Contract Clauses: AI, Oversight and Monitoring, Representations and Warranties, and Subject Injury
Organizations should evaluate:
- whether the AI tool has been validated for accuracy and reliability
- mechanisms to detect and remediate bias or performance issues
- whether and when human review is required
- what ongoing oversight exists for AI use, including monitoring, validation, and escalation procedures
- whether contracts require periodic revalidation and updates to the AI tool.
These considerations are particularly important where AI outputs are incorporated into trial records or relied upon in analyses, as failures in validation or oversight may not become apparent until later stages of monitoring or inspection and in some cases may affect subject safety.
Operational risks include:
- hallucinated or inaccurate content affecting clinical trial operations or incorporated into study records, safety narratives, or other trial documentation
- AI-generated content lacking audit trails
- errors propagating into regulatory submissions if outputs are not appropriately reviewed.
Where addressed, AI clauses may require human review or validation at a high level, but these obligations should be supported by specific monitoring, validation, and audit provisions.
Risk Allocation
Contract Clauses: Representations and Warranties, Indemnification, Subject Injury, Insurance, Limitation of Liability, and Cybersecurity
Contracts should address how AI-related risks are allocated between the parties, including careful consideration of:
- Representations and warranties relating to model performance, data provenance, intellectual property, and vendor oversight
- Responsibility for errors in AI-generated output: Investigators, monitors, or other personnel who sign or approve AI-generated narratives, deviation assessments, or monitoring reports remain responsible for the accuracy of those documents. Approval implies verification, so relying on AI output without adequate independent review may itself constitute a failure of oversight, compounding liability for any inaccuracies in the underlying record. Contracts should clearly allocate responsibility for AI-generated errors among sponsors, sites, and vendors.
- Indemnification for data privacy and security violations, bias claims, use of data by the contracting party and its vendors, and intellectual property infringement, as well as, where appropriate, claims arising from AI-related impacts on trial conduct or subject safety
- Subject injury: Contracts should address whether and to what extent injuries arising from the use of AI in trial operations are covered under subject injury provisions, including where AI-related errors or failures contribute to protocol deviations, operational decisions, or other conduct affecting subject safety.
- Insurance: Coverage should support the indemnification obligations, including AI-specific risks and potential subject injury exposure from operational AI use. Organizations should also review their cyber policies for AI exclusions.
- Liability exclusions and caps: These should account for AI-related risks, including HIPAA and state law violations, competitive injury, intellectual property, hallucinations, bias, and subject injury. Separate caps may be appropriate for certain AI-related risks.
- Cross-clause and contract coordination: Indemnification, insurance, and liability provisions should align internally and with upstream and downstream contracts. AI, data use, and cybersecurity should also be drafted with these relationships in mind.
Takeaway: AI does not shift responsibility. Contracts must clearly allocate risk for AI-generated outputs, including who is responsible for inaccuracies and any resulting effects on trial conduct or subject safety, and how those risks are supported through indemnification, subject injury, insurance, and liability provisions.
Conclusion
AI is already embedded in many tools used to run clinical trials, often without organizations realizing it. As a result, contracts, diligence practices, and governance frameworks must evolve to address the risks created by AI-enabled technologies.
The regulatory framework governing clinical trial conduct, industry norms, and technology law are only beginning to address AI-specific risks, making contracts, diligence, and governance frameworks the primary tools available to manage them today. FDA’s draft guidance on AI used to support regulatory decision-making is a meaningful step, and FDA’s DHT guidance reinforces existing expectations for tools that capture clinical trial data. However, neither guidance fully resolves how AI embedded in trial operations — particularly at site and vendor levels — should be evaluated where its outputs may affect patient safety or the reliability of clinical trial results, or how the expectations will be applied in practice.
The issues outlined in this series are not exhaustive, and the landscape will continue to evolve. Organizations that ask the right questions — of their contracting partners, their vendors, and themselves — will be better positioned to deploy AI responsibly, protect the integrity of trial data, and meet their regulatory obligations.
Ultimately, deploying AI does not transfer responsibility. Sponsors, sites, and vendors remain accountable for how AI systems interact with clinical trial data and for the integrity of the records those systems help produce.
A version of this article first appeared on Leibowitz Law's blog. It is republished here with permission.
About The Author:
Katherine Leibowitz has supported the clinical trials enterprise for over 25 years. She cofounded Leibowitz Law in 2013 after spending 17 years at a top global law firm. Her boutique life sciences regulatory and transactional law firm is laser-focused on clinical trials and technology commercialization, serving sponsors/manufacturers, technology service providers, research institutions, CROs, and digital health companies.
Katherine handles the full clinical trial operations contracting process from CTAs and budgets to HIPAA authorizations, informed consent forms, EDC vendor agreements, CRO MSAs, committee membership, physician consulting, and more. In today’s fast-evolving world of electronic databases, decentralized trials, AI, cyber risk, secondary research, and biobanking, she excels at modernizing contract templates and negotiations to align with the shifting landscape and move deals forward efficiently.
A frequent speaker and author, Katherine enjoys synthesizing multiple regulatory, legal, and industry norms to provide integrated, practical guidance to the life sciences community.