By Dean Gittleman
The issue of clinical trial costs, both in terms of money and time, is big, complicated, and not at all new. In this article, I will not attempt to address or solve all of the problems that contribute to these high costs, but I will highlight some continuing root causes that I believe persist needlessly.
I’ll focus on execution, leaving issues surrounding study design complexity and lack of discipline to another day. Tradition and inertia are poor excuses to continue perpetuating some of these practices — conducting 21st-century trials with a 20th-century mindset is bound to fail.
One of the factors that remains widespread throughout our industry has to do with having the wrong incentives for one or more of the clinical development functions. One clear example of this is the unwarranted weight given to first patient in (FPI). Anyone who has real-world clinical development experience must recognize that FPI is a poor predictor of second patient in or last patient in (LPI), and it receives far more weight than it merits.
FPI can be useful in communicating to investors and other interested parties that a company is launching a clinical study or program. It’s a hard data point. Beyond that, though, it serves no useful business purpose. It is no harbinger that a study is up and running, and it inspires no confidence that the team will reach decision points as fast as possible. Indeed, basing compensation, recognition, bonuses, or other rewards on achieving a specific “aggressive” FPI is often a wasteful distraction from the hard work of getting study sites activated, subjects enrolled, and data collected, analyzed, and reported.
Getting to decision points as quickly as possible needs to be a shared objective of the entire study team, so we need alignment and common incentives across all clinical development functions. Rewarding one function for achieving FPI is a persistent, counterproductive practice that we can and should stop. Team members need to understand, tangibly, that they will succeed or fail as a team.
A LACK OF ACCOUNTABILITY
Lack of accountability also drives costly, lengthy trials. Too often, accountability is fractured across functions and organizations (e.g., sponsors, CROs, labs, sites). This lack of accountability is often compounded by unhealthy sponsor/site relationships, where sponsors or sponsors’ agents operate in fear of sites.
Too often, sponsors seem overly concerned about sites liking them and not concerned enough that sites live up to their contracts. So, it is common for sites to ignore contracted performance agreements with impunity. We routinely see this with subject recruitment rates and data capture timeliness, completeness, and quality.
Some companies still use a metric of five days from date of visit to data entry as a standard. In the 21st century, this is an absurd starting point, something I’ll get to a little later in this article. But even with this agreed performance goal in the site contracts, for too many studies, there is simply no downside to sites ignoring this agreement. We behave as if getting subjects into trials is the goal, and we are too willing to ignore the timeliness or quality of the data collected.
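A contracted visit-to-entry window only matters if someone actually checks it. As a minimal sketch — the site names, dates, and the five-day threshold are illustrative assumptions, not data from any real study — a per-site lag check could look like this:

```python
from datetime import date

# Hypothetical visit records: (site_id, visit_date, data_entry_date).
VISITS = [
    ("Site-01", date(2023, 3, 1), date(2023, 3, 3)),
    ("Site-01", date(2023, 3, 8), date(2023, 3, 20)),
    ("Site-02", date(2023, 3, 2), date(2023, 3, 2)),
    ("Site-02", date(2023, 3, 9), date(2023, 3, 10)),
]

MAX_LAG_DAYS = 5  # contracted visit-to-entry window (illustrative)

def lag_report(visits, max_lag=MAX_LAG_DAYS):
    """Return each site's worst lag in days and whether it breached the window."""
    worst = {}
    for site, visit, entered in visits:
        lag = (entered - visit).days
        worst[site] = max(lag, worst.get(site, 0))
    return {site: (lag, lag > max_lag) for site, lag in worst.items()}

report = lag_report(VISITS)
# In this sample data, Site-01's 12-day lag breaches the window; Site-02 does not.
```

The point is not the code itself but that the contract metric becomes an automatic, ongoing check rather than something noticed at database lock.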
We also act as though site commitments around subject enrollment don’t matter. Too often, sites deliver fewer subjects than promised, or none at all. We are slow to take remedial action, if we take any action at all. Underperforming sites dramatically lengthen study timelines. We need to have data-driven action plans for managing underperforming sites in order to keep the overall trial on track.
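A data-driven action plan starts with something as simple as comparing actual enrollment against the contracted commitment. A minimal sketch — the site figures and the 50 percent flagging threshold are hypothetical assumptions, chosen only for illustration:

```python
# Hypothetical per-site data: site_id -> (contracted_subjects, enrolled_subjects).
SITES = {
    "Site-01": (20, 22),
    "Site-02": (20, 7),
    "Site-03": (15, 0),
}

FLAG_BELOW = 0.5  # flag sites below 50% of their contracted commitment (illustrative)

def underperformers(sites, threshold=FLAG_BELOW):
    """Return sites whose enrollment ratio falls below the threshold."""
    flagged = {}
    for site, (contracted, enrolled) in sites.items():
        ratio = enrolled / contracted if contracted else 0.0
        if ratio < threshold:
            flagged[site] = round(ratio, 2)
    return flagged

flags = underperformers(SITES)
# Flagged sites are candidates for remedial action: retraining,
# enrollment-boosting support, or closure and reallocation.
```

Whatever the thresholds, the essential discipline is reviewing such a report on a regular cadence and acting on it, rather than letting underperformance accumulate silently.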
"We need not be condemned by the past to continue having 20th-century mindsets driving the way we conduct clinical trials today."
BLAMING SITES FOR EVERYTHING
You should know that in my experience, sites, in general, are not the problem. Often, the resistance to whatever novel approach you have in mind lies not with the sites but within your own organization. In fact, it is possible to adopt new approaches that are better for both sponsors and sites. Consider real-time direct data capture. When implemented properly, sponsors receive higher-quality data more rapidly, and sites streamline their own operations, minimizing the need to manage paper for subsequent transcription. Sites have reported greatly improved operational efficiency and less work overall, with no reduction in compensation.
THE COMMODITIZATION OF MONITORING
Monitoring continues to represent a major clinical trial expense, and it can also lengthen trials unnecessarily. While risk-based monitoring (RBM) is now commonplace, it has too often been implemented in name only, without a clear sense of its objectives.
Over the past several decades, many sponsors have treated monitoring as a commodity that can be easily outsourced. Like sponsor organizations, monitoring organizations are in business to make a profit, and it is in their business interest to turn their least expensive (i.e., newest, least experienced) staff members into monitors. Over time, these business pressures have turned monitoring into a mechanical set of activities.
Too often, when sponsors, CROs, technology vendors, etc. talk about RBM, they immediately jump to talking about the tools they have developed or implemented. This misses the main purpose of introducing RBM; it’s not about having the “right” tool, it’s about having the right people and right mindset. RBM is about reintroducing thinking into the monitoring process.
Source data verification (SDV) is one of the most mechanical, and still all too common, ongoing monitoring practices. This is the process by which a monitor looks at a data point in the clinical database and searches for a source document that confirms its correctness. SDV addresses the quality of data transcription, nothing else. Numerous studies have demonstrated that SDV is mostly a waste of time and money. Certainly, it is not possible to defend the use of 100 percent SDV for the vast majority of studies, and yet many in our industry continue to describe it as some kind of gold standard.
SDV is a wasteful practice and incredibly expensive in terms of both time and money.
Collecting data slowly and reviewing that data as late as possible remains a common failed practice. We make the situation worse by failing to review data against an agreed-upon action plan.
This leads to learning important things about a study’s conduct late in the process. By collecting data in real time or near real time, and reviewing it on an ongoing basis, we can learn a lot about early enrolling sites and subjects. For example, we might learn that we have unnecessarily complicated the protocol or data collection instruments, or we may discover that we have confused the sites in some way and need to alter our training materials.
We should always remember why we run interventional clinical trials in the first place: to collect data from which we can draw defensible inferences about how people in the real world will benefit from the drug or medical device being studied. Speed matters, for both ethical and business reasons. It is inherently in our interests as businesses, and in our subjects’ interests, to run these trials as quickly as humanly possible. Yet 20 years into the 21st century, we continue running trials the way we ran them a generation ago — way too slowly and far too expensively.
We need not be condemned by the past to continue having 20th-century mindsets driving the way we conduct clinical trials today. The factors discussed in this article are all within our power to change. We have no justifiable excuses to continue with the old practices. Speed matters a great deal in clinical development. We need to stop acting as if it doesn’t. Getting there first matters.
DEAN GITTLEMAN has been involved in the execution of clinical trials for both drugs and medical devices for over 30 years on the sponsor and CRO sides. Since retiring from Genentech/Roche, he continues to consult in clinical development.