Clinical Trials Get A Council For AI Oversight

By Dan Schell, Chief Editor, Clinical Leader

You can’t throw a rock in clinical research right now without hitting someone pitching some new AI-enabled app, solution, process … whatever. Need to speed things up, automate your processes, or become more efficient? AI has you covered. “There’s this panacea around AI,” Advarra CEO Gadi Saarony told me. “Everybody talks about it, everybody claims they’re using it. But in clinical trials, there’s very little focus on the ethics of it, or the transparency or oversight. How do we know these tools make sense not just for sponsors, but for participants and sites?”
That’s the gap the new Council for the Responsible Use of AI in Clinical Trials hopes to fill. In addition to Advarra, the fledgling group includes founding members Sanofi, Recursion, and Velocity Clinical Research. Saarony explains it’s a noncommercial attempt to put guardrails around a technology that is already burrowing deep into trial design, feasibility, and enrollment but is largely ungoverned.
Why These Companies, And Why Now?
When I first heard about this new council, I wondered why these specific companies were the founders. Advarra makes sense: it touches almost every corner of the research ecosystem through its IRB, technology, and consulting services, so it hears everyone’s frustrations. “We sit at the intersection of operations and oversight,” Saarony said. Over the years, he has had discussions with stakeholders across the industry about the need for this kind of AI governance, whether you call it guidelines, guardrails, or frameworks. So, to keep the first cohort nimble, Advarra reached out to a Big Pharma, a biotech (or a techbio, as Recursion prefers to be called), and a large site network. Sanofi brings enterprise‑level AI governance and real‑world experience using machine learning in protocol design. Recursion embodies the AI‑first biotech mindset, while Velocity represents the trial sites that must live with whatever plans sponsors sketch out.
When I asked Velocity CTO Raghu Punnamraju what he most hopes the council will accomplish, he said, “I think we would love to have a set of shared standards that pertain to participant safety and scientific integrity but also represent operational reality. I think those are the three key areas where AI can be used or embraced, in general, across the industry.” Accordingly, he sees the council as an enabling force rather than a constraining or controlling entity.
Getting back to the composition of the council, Saarony acknowledged that CROs are conspicuously absent but said the omission was intentional. “CROs absolutely play a vital role, and they’ll be included in the next phase. We felt it was most important to start with the entities that design the protocol and those that execute it. The CRO often lives in between, helping implement someone else’s plan.”
What “Responsible” Looks Like
The Council plans to meet quarterly, twice in person, with working groups focused on AI use cases, ethics, regulatory questions, and real‑world pilots. Early deliverables could include a shared AI glossary, a typology of use cases, and reference models for validating AI tools — the same way IT systems are validated today.
Saarony wants measurable outcomes, not philosophical manifestos. “We need benchmark KPIs embedded in tools and workflows, not just retrospective analyses that ask whether it worked,” he said. Think time‑to‑site activation, fewer protocol amendments, enrollment timelines, and data‑quality indicators. Ethical alignment matters just as much: transparency of algorithmic decisions, bias checks, and a clear line of sight to how each model affects participants.
Jain sees transparency as the linchpin. “We want to be as transparent as possible as we think about our solutions.”
Walk Before You Run
The Council is deliberately starting small. Too many industry initiatives drown in bureaucracy before they ship anything useful. “Some bodies fail because they try to be too prescriptive,” Saarony said. “We don’t have the authority to regulate, and frankly, AI is evolving too fast to regulate that way.”
Initial outputs are targeted for late 2025, with a broader framework in early 2026. Real‑world pilot data will follow to make sure the guidance survives contact with actual studies. Additional members — including CROs, regulators, and ethicists from outside Advarra — will be added as the Council proves it can deliver.
Collaboration, not empire‑building, is the goal. Saarony wants to work with TransCelerate, CTTI, ACRO, and the FDA, none of which has focused squarely on AI ethics in trials. “We know what we don’t know,” he admitted. “The question is how to bring it all together without boiling the ocean.”
The Bigger Picture
For Saarony, the stakes stretch beyond any single technology. Drug development timelines haven’t kept pace with scientific discovery for decades. “The science is evolving fast, but trials are still slow, complex, and disconnected,” he said. “It’s the same problems we had 25 years ago.”
AI could be the inflection point that finally changes that — if the industry resists the urge to chase speed at the expense of trust. “We need guardrails,” Saarony said, “or we’ll be flying blind.” Jain and Punnamraju agree, saying that a council that balances oversight with enablement might be the best chance to get there. “We’re trying to create enough of a halo effect that the assumption becomes, ‘Of course this was done responsibly,’ instead of people wondering if corners were cut,” Saarony said.
If that sounds ambitious, it is. But without a serious effort to define responsible AI now, clinical research risks lurching forward on shaky ground. The Council’s first order of business is to make sure that doesn’t happen.