My SCOPE Groundhog Day Experience
By Dan Schell, Chief Editor, Clinical Leader

As I moved from session to session at my third SCOPE Summit in Orlando, I felt an uneasy sense of déjà vu. I don’t mean that feeling of familiarity that comes with having the event in the same venue (which I love, BTW) with close to the same schedule each year. I actually applaud Micah Lieberman and the whole SCOPE staff for running an entertaining and informative event. Heck, there were 5,000+ people there! That’s impressive!
No, the part where I felt like Bill Murray in Groundhog Day was when I kept hearing the same clinical industry problems pop up over and over and over again. Different speakers, different sessions, different tracks — same old problems with little promise of measurable progress.
Two sessions featuring Ken Getz of the Tufts Center for the Study of Drug Development brought that feeling into focus. One centered on patient centricity, public trust, and AI. The other zeroed in on feasibility, site selection, and the operational burden placed on investigative sites. On the surface, they seemed like separate discussions. In reality, they were reflections of the same underlying issue: growing complexity and limited accountability for changing it.
Complexity Is Still Winning
Ken opened the feasibility session with data that should concern anyone responsible for timelines — and it’s not new data. Trial durations are not shrinking. In fact, in some cases, they are getting longer, with startup often bearing the brunt of the blame. That means site identification, site selection, contract negotiation, and activation remain stubborn bottlenecks.
Protocol complexity is also climbing across nearly every measurable variable. More endpoints. More procedures. More countries. More data points. More customization. I know, none of this is likely surprising to you. But that’s kind of my point. The fact that all of this continues to trend upward despite years of industry conversations about simplification should give us pause.
Feasibility, in particular, looks like a system stuck in neutral. The average site manages roughly 15 feasibility assessments annually. The typical feasibility cycle runs about a month, and the majority of that time is spent waiting for sponsor feedback. Many sites are asked to complete questionnaires without access to a near-final protocol. They are expected to estimate enrollment and operational feasibility based on partial information, then wait for a decision that may or may not come with useful feedback.
Digital adoption is higher at sites than many assume. Electronic systems and remote capabilities are widely in use. But sites report that supporting sponsor-mandated technologies often requires extra training, troubleshooting, and help desk responsibilities. For some, that translates into financial strain rather than operational relief.
So while we talk about innovation, sites are juggling overlapping tech platforms, repetitive surveys, and increasingly complex protocols. The burden accumulates.
Patient Centricity Cannot Be Symbolic
In an earlier keynote, Ken was joined by Eliav Barr of Merck and shifted the conversation to patient enrollment and public trust. Barr was candid about enrollment realities, especially in U.S. oncology. Very few patients participate in clinical trials. Access to care is uneven. Many patients never make it to large academic centers. Community oncology practices carry much of the real-world patient load, yet trial infrastructure does not always fit seamlessly into their workflows.
Barr emphasized the importance of designing trials that mirror standard of care as closely as possible. If patients are already navigating serious illness, we cannot layer on unnecessary visits and procedures simply because it is a study. Activities that can be performed at home should be performed there (i.e., decentralized). In-clinic time should be limited to what is essential.
He also made an important point that resonated with the feasibility discussion: Sites are evaluating sponsors as much as sponsors are evaluating sites. The same dynamic applies to patients. Particularly in the United States, patients often have viable treatment alternatives outside of trials. If participation feels too burdensome or misaligned with real-world care, they can opt out.
Barr’s closing message was simple but powerful. The trials that ultimately move the needle for patients are not necessarily the most ornate or statistically optimized. They are the ones that can be implemented in the real world. Overly complex designs that look elegant on paper may struggle to translate into population-level impact. Amen, brother!
Trust Is Not An Abstraction
Public trust in science also surfaced as a pressing issue. Awareness of clinical research rose during the pandemic, but trust has not followed a straight upward trajectory. Barr’s response was grounded in humility. Meet people where they are. Accept that communities consume information through very different channels. Acknowledge fallibility. Explain why trial and error is not a flaw in science but a feature of how it advances. Really, now that I think about it, a lot of what he said is just common sense. But, of course, we ignore it.
That theme of trust extends beyond the public. It applies to sponsor-site relationships as well. During the feasibility panel, one site leader openly acknowledged that, historically, sites have sometimes overpromised enrollment to secure business (What?!). Sponsors, in turn, often discount site projections accordingly. The result is a cycle of guarded optimism and quiet skepticism on both sides.
Another sponsor panelist suggested eliminating repetitive surveys and relying more heavily on existing data and long-term partnerships. It was framed as a practical fix, but it pointed to a deeper need: transparency and shared accountability. If sponsors and sites do not trust each other’s data, no algorithm will solve that.
AI As A Tool, Not A Savior
I know you’ll be shocked to hear this: AI was discussed in both sessions, but with a notable shift in tone compared to some previous conferences. The emphasis was less on disruption and more on incremental, practical gains.
Barr highlighted internal uses of AI to reduce manual workload, summarize meetings, and centralize data visibility across large teams. Improving the day-to-day experience of clinical research professionals by even small margins can produce meaningful cumulative impact when multiplied across thousands of individuals.
More ambitious applications, such as drafting consent forms or predicting site performance, are on the horizon. But these require disciplined implementation and ongoing refinement.
On the feasibility panel, sponsors were urged to ask a simple question of technology vendors: How many study coordinators actually tested this tool? If digital solutions are not co-developed with the people expected to use them, they risk becoming another layer of friction.
AI cannot fix a misaligned process. It can only accelerate whatever structure is already in place. If the underlying process is bloated or mistrusted, AI may simply help us move faster in the wrong direction.
Same Old Same Old
What struck me most across both sessions was not a lack of awareness. Everyone in those rooms understood the core issues: protocol complexity, startup delays, feasibility inefficiencies, site burden, patient access, and eroding trust.
We have data. We have case studies. We have panels that articulate the problems clearly. And yet, the metrics Ken shared show little sustained improvement over decades.
Why?
Part of the answer may lie in incentives. Complexity often stems from legitimate scientific ambition. Every endpoint serves a purpose. Every data point has a rationale. But collectively, they produce trials that are difficult to operationalize.
Another factor is fragmentation. Sponsors, CROs, technology vendors, and sites each optimize within their own spheres. True system-wide simplification requires coordination that is difficult to achieve across organizational boundaries.
But we really can’t leave out culture. It is easier to discuss simplification than to remove an endpoint. Easier to launch a new tool than to retire an old one. Easier to add a layer of oversight than to streamline a process.
If I had to distill the takeaway from these sessions, it would not be a call for more innovation. It would be a call for subtraction. Fewer redundant surveys. Fewer unnecessary procedures. Fewer tech platforms that require separate logins and training modules. Fewer assumptions that more data automatically equals more insight.
None of this requires a breakthrough technology (much to the chagrin of all those AI vendors in the exhibit hall); it requires discipline.
Still, for me, the SCOPE Summit once again delivered a huge opportunity for thoughtful conversation and candid dialogue both in and out of the sessions. The challenge now is whether those conversations translate into action that changes the trajectory of our metrics. At some point, progress has to show up in the data, not just in the rhetoric on the stage.