The Uncomfortable Conversation: AI And Data Use In Clinical Trials
By Robert S. Goldman, global head of clinical operations, Contraline, Inc.

I want to start with a reality most of us already know but have not fully processed yet: artificial intelligence is already operating inside clinical trials. Not in controlled sponsor environments. Not someday. Not in pilot programs. Right now, at the site level, every day, often without the sponsor knowing it.
Over the past year I have watched a rapid rise in AI tools marketed directly to research sites with promises of efficiency. They draft emails, summarize visits, interpret protocols, translate eligibility criteria into plain English, and help coordinators keep pace with overwhelming workloads; the list goes on. For a coordinator juggling multiple studies, data entry, queries, regulatory files, pharmacy tasks, and clinic schedules, the appeal is obvious. Saving even one hour over the course of a day matters.
The issue is not that sites want and need efficiency. The issue is what they are feeding into these AI tools to get it: protocols, investigator brochures, site initiation slides, source notes, participant information, internal correspondence, and sometimes even contracts and budgets.
At that moment operational convenience intersects with regulatory obligation, confidentiality agreements, and sponsor intellectual property ownership. Most people involved do not realize that intersection exists.
Access Versus Permitted Use
Clinical trials depend on controlled access. Sites receive sponsor materials so they can conduct a study. That access exists under confidentiality agreements, clinical trial agreements, and federal privacy obligations.
Access allows a site to perform study procedures. Access does not grant permission to upload, process, analyze, or externally store those materials in unrelated third-party systems. Those are fundamentally different actions.
When a coordinator copies the protocol or Investigator's Brochure into an external tool to simplify language or build a quick-reference checklist, sponsor-owned intellectual property leaves sponsor custody. When visit documentation containing participant information is uploaded for drafting assistance, protected health information may exist outside the regulated research record. When an Investigator's Brochure is summarized externally, confidential development strategy may be disclosed.
There is rarely malicious intent. Most of the time it is someone trying to save time before the first participant arrives in the morning. Intent, however, does not determine compliance.
The Vendor Assurance Problem
A growing number of vendors now market AI solutions directly to sites and research networks, claiming they are secure or compliant. Some sponsors even encourage adoption without fully validating what those claims mean in a regulated research context.
Security and regulatory compliance are not interchangeable concepts. Encryption does not equal regulatory adherence. A privacy policy does not equal validation. A marketing claim does not equal alignment with federal research obligations.
Few sites, and even few small sponsors, formally audit or qualify these vendors, and fewer still have the expertise to do so. Most coordinators understandably assume that if a tool is sold into clinical research and site leadership adopts it, it must already meet the standards needed to hold up in an audit and stand up to contractual obligations.
In multiple conversations and podcasts I have had with Darshan Kulkarni, Pharm.D., JD, of the Kulkarni Law Firm, one consistent theme emerges: every stakeholder assumes someone else performed the legal and regulatory diligence. Sponsors assume sites understand confidentiality restrictions. Sites assume vendors built the tool specifically for regulated research. Vendors assume enterprise safeguards are sufficient.
Meanwhile the study proceeds.
The Practical Risk Sponsors Cannot See
Consider a realistic scenario.
A coordinator uploads a section of a protocol into a free external tool to help explain procedures to a participant. No one reports it. No one documents it. No one intends harm.
Months later, similar language appears elsewhere in an unrelated context or data set.
Was there intent to disclose proprietary information? No.
Was sponsor intellectual property potentially exposed? Possibly.
Was there a breach of confidentiality obligations? Arguably, yes.
Could the sponsor detect it? Almost certainly not.
This is the new operational risk surface in clinical research. Not deliberate misconduct but invisible distribution created by convenience.
Sponsors have historically focused on the integrity of data inside the trial record. They must now consider where trial information travels outside the trial record.
Regulations Have Not Changed
Artificial intelligence feels new. Regulatory responsibility is not.
Clinical research still operates under obligations governing participant privacy, confidentiality, and controlled documentation of systems affecting study conduct. Uploading study information into unvalidated external environments conflicts with each of those expectations.
Responsibility does not sit with a single party. If sponsors provide no guidance, sites improvise. When sites improvise, vendors fill the gap. If vendors are not evaluated, risk distributes across the ecosystem.
This is not a site problem.
This is not a vendor problem.
This is a shared governance gap.
A Necessary Distinction
The industry needs a common understanding. Being entrusted with study materials does not grant permission to externalize them into independent technology environments.
Sites act as custodians of sponsor intellectual property for the purpose of running the study. They are not authorized processors of that property in unrelated systems unless explicitly permitted in writing. At the same time, sponsors cannot assume silence equals compliance. Expectations that are not written down, trained on, and reinforced might as well not exist.
Vendors entering clinical research must also recognize they are operating adjacent to regulated activity. Generic enterprise safeguards are not enough when the information originates from human subject research.
Right to access is not consent to use.
Moving Forward Without Halting Innovation
This is not an argument to ban artificial intelligence from clinical trials. The efficiency benefits are real and necessary. It is a call to govern its use before informal habits become standard practice.
Sponsors should establish written policies addressing external AI tools, incorporate those expectations into agreements and training, evaluate vendors before recommending them, and provide practical examples of acceptable workflows.
Sites should treat protocol and participant information as non-exportable unless explicitly authorized and seek clarification before adopting new AI tools that interact with study content.
Vendors should provide transparent documentation of data handling and align development with regulated research expectations rather than general enterprise assumptions.
None of these steps slow innovation. They prevent disputes after confidential information has already traveled beyond recovery.
The Conversation The Industry Needs
Clinical research depends on trust between sponsors and sites. Artificial intelligence does not remove that trust, but it tests it. Operational tools are evolving faster than governance frameworks, and that gap will eventually close through either collaboration or conflict.
Many coordinators believe they are using a digital assistant. Many sponsors believe their information remains contained within the study environment. Both assumptions cannot simultaneously be true.
The industry does not need panic or alarm. It needs clarity.
Until we define the boundaries of AI within clinical trials, we are operating on assumptions rather than agreements. Addressing that now is far easier than reconstructing responsibility later.
Clinical research has always adapted to new technology, but adoption has historically followed governance, often at a glacial pace. With AI, that order is reversed: adoption is arriving first and policy is catching up. We should not wait for a confidentiality dispute, a regulatory question, or a sponsor-site conflict to define the boundaries for us after the fact. The right next step for the industry is simple: conversation and shared expectations. Sponsors, sites, vendors, and regulators all benefit from clarity before convenience becomes custom. If we can agree on where responsible use begins and ends, AI can strengthen clinical trials rather than quietly erode trust.
Right to access does not mean consent to use.
About The Author:
Robert S. Goldman is global head of clinical operations, Contraline, Inc., where he oversees the performance of all global clinical trials, ensuring delivery to scope, timeline, budget, and quality standards. With more than 16 years of experience across sites, CROs, and sponsors, he brings deep therapeutic expertise spanning analgesia, gastroenterology, pulmonology, hepatology, cardiovascular disease, dermatology, endocrinology, rare disease, oncology, women’s health, men’s health, infectious disease, and CNS disorders.