From The Editor | June 2, 2025

Why Perfect Clinical Trial Data Is A Dangerous Myth

By Dan Schell, Chief Editor, Clinical Leader

Kara Harrison

I’ve interviewed former FDA investigators before, and each time I come away equal parts entertained and alarmed. Talking to Kara Harrison was no exception. Her résumé runs from grading orange juice at the USDA to sharp-eyed quality consulting after a decade with the FDA, and I knew this conversation would be another one of those “I learned something today” moments.

Recently, I spoke with a site inspection veteran who once walked into a facility with a mold problem and a spider infestation. I also talked with a mock inspection expert who joked about giving hugs at the end of an inspection that also included a Form 483. Harrison didn’t bring up arachnids or emotional support, but she did deliver a clear-eyed look at what still goes wrong in trial execution, and how little the root causes have changed over the years.

Leaving The FDA

Harrison joined the FDA in the late '80s and was soon conducting bioresearch monitoring (BIMO) inspections around Houston, which she called “a gold mine” of clinical activity. Over time, she worked with institutional review boards (IRBs), bioanalytical labs, and clinical sites, and even saw cell and gene therapy begin to take shape. But despite her enthusiasm for the science, she grew frustrated with the bureaucracy.

She described her years there as a time of “eating, sleeping, and breathing the regs,” but after topping out in her role and seeing promotion opportunities stall, she left. More than that, she was tired of coming in years after the fact and seeing problems that could have been easily avoided — problems that were now tanking submissions and wasting years of work. “I hated to see data wasted,” she told me, “and I’m not someone who can just skate through a career.” That desire to make a real impact led her into the industry, where she could play offense instead of just being the cleanup crew.

Monitoring The Wrong Things

One of the more sobering parts of our conversation was how easily entire studies can collapse over basic operational missteps. Harrison told me about a trial where weight-adjusted dosing was mishandled by the pharmacy, and no one — not even the clinical research associates (CRAs) — caught it, because it wasn’t in the monitoring plan. Doses were wrong, the data was unreliable, and the study was scrapped.

It wasn’t fraud or sabotage — just a massive oversight due to checklist-driven thinking. “We’ve dumbed down monitoring and oversight to the point where if it’s not on the checklist, the CRA’s not going to look at it,” she said.

Additionally, she noted that too much monitoring still chases perfection, something the FDA doesn’t even expect. Instead of focusing on the critical variables, sponsors overload protocols with exploratory endpoints and drown sites in unnecessary data entry work. The result is confusion, inconsistency, and burnout. “This obsession with checking every data point causes teams to lose sight of what really matters. They’re missing the forest for the trees,” she said, noting that even the FDA has clarified it doesn’t require perfect data or 100% source data verification (SDV).

In its guidance titled Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring, the FDA emphasizes that sponsors should focus on critical data and processes that are essential to human subject protection and data integrity. The guidance states:

“Monitoring activities should focus on preventing or mitigating important and likely sources of error in the conduct, collection, and reporting of critical data and processes necessary for human subject protection and trial integrity.”

This approach allows for flexibility in determining the extent of SDV based on the specific risks associated with a trial, rather than mandating 100% verification of all data points. Or, as the FDA's A Risk-Based Approach to Monitoring of Clinical Investigations: Questions and Answers further clarifies:

“Focusing more monitoring activities on risks to the most critical data elements and processes should enable sponsors to achieve the objective of conducting a quality clinical investigation, including human subject protection and data integrity, without necessarily having to conduct frequent routine visits to all clinical sites and extensive SDV.”
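
To make that concrete, here is a minimal sketch, in Python, of how a sponsor might operationalize risk-based SDV. The risk factors, scoring formula, tier thresholds, and coverage levels below are hypothetical illustrations of the principle, not anything prescribed by the FDA guidance.

# Hypothetical sketch of risk-based SDV planning. The factors, scores,
# and tiers below are illustrative assumptions, not FDA requirements.
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    impact: int         # effect on subject safety / key endpoints, 1 (low) to 3 (high)
    likelihood: int     # how prone this step is to error, 1 to 3
    detectability: int  # 3 = hard to catch downstream, 1 = easy to catch

def risk_score(e: DataElement) -> int:
    # Simple composite score in the spirit of a risk assessment and
    # categorization tool: higher score means more monitoring attention.
    return e.impact * e.likelihood * e.detectability

def sdv_plan(score: int) -> str:
    # Hypothetical tiers mapping risk to the extent of source data verification.
    if score >= 18:
        return "100% SDV plus targeted on-site review"
    if score >= 8:
        return "sampled SDV plus centralized monitoring"
    return "centralized/remote review only"

elements = [
    DataElement("weight-adjusted dose calculation", impact=3, likelihood=2, detectability=3),
    DataElement("primary efficacy endpoint", impact=3, likelihood=2, detectability=2),
    DataElement("exploratory questionnaire item", impact=1, likelihood=2, detectability=1),
]

for e in sorted(elements, key=risk_score, reverse=True):
    print(f"{e.name}: score {risk_score(e)} -> {sdv_plan(risk_score(e))}")

The specific numbers don’t matter; what matters is that monitoring effort follows risk, so a step like the weight-adjusted dosing in Harrison’s story would sit at the top of the plan instead of buried in a checklist.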

More Data, More Complexity, More Room For Error

Harrison believes the problem starts much earlier than the monitoring plan. Trial design itself is often where things begin to go sideways.

She’s concerned that many clinical trials are now simply trying to do too much too soon — thousands of patients, multiple cohorts, and bloated endpoints that try to serve both exploratory and confirmatory aims at once. “We are introducing so many opportunities for error and interpretation,” she told me, and it’s hard to argue with that when so many studies fail because of executional messiness rather than flawed science.

Her solution is one we’ve all heard over and over: Simplify the trial. Validate feasibility before scaling up. “Maybe have more focused trials before we go to that big, huge, thousands-of-people, seven-cohort-long trial.” That doesn’t just protect the data; it protects patients and resources, too.

Only 4 SOPs?

At a recent conference session I attended, someone said that a site really needs only four standard operating procedures (SOPs). It was a mic-drop moment that made the room gasp, but when I told Harrison this, she said she could understand where they were coming from, listing informed consent, IRB review, data entry, and delegation of authority as her four choices.

In fact, she said the sites that consistently perform well — the ones that breeze through inspections — are often the ones with a principal investigator (PI) who is actively involved, not just rubber-stamping everything the coordinator hands over. At the good sites, the PI is engaged and shows up during inspections to take ownership. “Yes, I am responsible” is what Harrison wants to hear. The bad sites? Think weekly fly-bys from the PI and coordinators quietly falsifying records because they know nobody’s really watching.

When issues arise (and they always do), she emphasized that successful teams are the ones who collaborate, communicate, and focus on critical risks. “There’s no perfect clinical trial; things inevitably are going to go wrong. So let’s figure out what’s most likely going to tank the trial and make sure it doesn’t happen.”

QA Is No Longer The Enemy

One of the more encouraging shifts Harrison has seen over the past 20 years is how organizations perceive quality assurance. In the early days, QA was seen as the police: someone you called only when there was a mess to clean up or a crisis brewing. But that thinking, she said, is outdated.

“Quality is a collaborator,” she told me. “Quality is the person who’s going to keep you out of jail.”

And while she meant it partly as a joke, Harrison wasn’t exaggerating the stakes. Regulatory violations don’t just threaten trial timelines; they can derail careers. And more importantly, they can harm patients. The best organizations, in her view, are those that bring QA into the conversation early and often. They don’t treat it as a hurdle to get over at the end. They treat it like a co-pilot.

But there’s still a catch. Many teams remain siloed. CRAs often don’t understand how their decisions influence data management downstream. Data managers put unclear assumptions in writing. Site staff are left trying to interpret poorly worded protocols. Without a shared understanding of roles and risks, it’s no wonder things slip through the cracks.

That’s why Harrison pushes so hard for cross-functional training, better planning, and clear communication. Not because she’s chasing perfection, but because she’s seen firsthand what happens when nobody’s steering the ship.

Harrison also made a nuanced but important point about how we think about GCP. For all the procedures and policies, she said, GCP is not a checklist — it’s a framework. “That’s good clinical practice,” she told me. “It’s all interpretation. Everything depends on the context.”

That’s not license to cut corners, but a reminder that GCP demands critical thinking. It's about intent, risk management, and making context-based decisions, not blindly following a protocol written a year ago by a team that may never have met the sites executing it. Until we recognize that, we’ll keep creating overdesigned trials that burden sites with ambiguity and bury sponsors under data that doesn’t move the needle.