Sites Are 3X More Likely To Die In Their First Year Than New Businesses
A conversation with Andrew Friedson, Ph.D., head of research, Milken Institute Health

Opening and running a new business is no small feat. We know it colloquially, and we know it statistically. About 20 percent of new businesses in the U.S. close in their first year, according to the U.S. Bureau of Labor Statistics.
What, then, might we make of that data as we think about the survivability of clinical research sites? Anecdotally, we understand the uphill battle of engaging and retaining principal investigators after their first year in clinical research, but does that challenge also exist for sites themselves? Each year, how many sites are new, how many are still operating, and how many are now defunct? That information was not easy to come by. Until now.
Andrew Friedson and Bumyang Kim, director and associate director of health economics at the Milken Institute, respectively, recently detailed the supply of clinical research sites in their new report, “The Supply of Clinical Trial Sites: Openings, Closings, and Longevity.”
Much in the way the U.S. Bureau of Labor Statistics tracks new business survival rates, Friedson and Kim leveraged the Clinical Trial Transformation Initiative’s Aggregate Analysis of ClinicalTrials.gov database to track the survival rates of clinical research sites.
Here, Friedson discusses their findings and explains how available population and local economic conditions factor into survival rates.
Clinical Leader: Tell us about the impetus for this research. What were you hearing or seeing – or not – to make you think, “We need to explore this”?
Andrew Friedson, Ph.D.: When I think about access or utilization, I think of it as the intersection of the supply of something and the demand for something. So, we have a supply of clinical trials and clinical trial sites, and we have a demand for clinical trials and clinical trial participation.
If you want to be thinking about access, you need to be thinking about the demand side with regard to patient engagement, and you also need to be thinking about the supply side with regard to where the sites you are trying to recruit for are, how many there are, and how big they are. So, I started digging into the supply side.
We actually have really good statistics in the United States for how often businesses are opening and how often they're closing. If you want to get a sense of the scope of an industry or the geographic footprint of an industry, you can go right to the Bureau of Labor Statistics, and you can pull these data. And this just didn't exist for clinical trials. Given public records on ClinicalTrials.gov, you can back this out and follow clinical trial activity.
Given that no previous data on the life, death, and longevity of sites exist, how did you react when your study revealed that between 45 and 60 percent of sites “die” in the same year they are born?
My guide to doing empirical research is very much Sherlock Holmes – that is, you twist your theories to fit your facts and not your facts to fit your theories. This is a new fact, and we want to think about what explains this fact. So, we're looking at the number of businesses or the percent of businesses that start up in a given year that die – they start up and then close that same year. And we're looking at the number of clinical trial sites that start up and then take no further trials in that same year. And if you compare those two death rates, clinical trial sites die much more frequently. And the question then is: Why is that? What are some theories that fit the facts? And we don't have a definitive answer.
But when you think about it, it makes a lot of sense. Clinical trial sites and businesses are two very different entities. The objective of a business is to never close. The objective of a business is to open and to make profit, and then to continue to do those activities. And businesses fail. In the literal sense, they fail.
However, it is entirely possible that you could have a successful clinical trial site that dies in the first year that it's born. Our term “die” just means it doesn't open any new clinical trials; it’s by design one-and-done. You're opening up a more remote location, or you're opening up a temporary location with the purpose of running this one trial for this one pharmaceutical. You get the results you need, meet your endpoints, and get the answer to "Is this safe? Is this effective?” The design was never to continue past that point, and that is a death. If we're looking at churn, we're interested in how often these things are opening and closing from an access standpoint, but from a clinical success standpoint, that could be a very different answer.
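To make that measure concrete, here is a minimal sketch in Python – not the report’s actual code – of how the same-year “death” share might be computed from ClinicalTrials.gov-derived records. The DataFrame and its columns (site_id, start_year) are illustrative assumptions rather than the real AACT schema.

import pandas as pd

# Minimal sketch (not the report's code) of the same-year "death" measure.
# Assumes a hypothetical DataFrame with one row per trial-site pair and
# illustrative columns: site_id, start_year (year the trial began at that site).
def same_year_death_rate(trials: pd.DataFrame) -> float:
    """Share of sites whose first and last trials start in the same calendar year."""
    spans = trials.groupby("site_id")["start_year"].agg(birth="min", death="max")
    return (spans["birth"] == spans["death"]).mean()

# Toy example: sites B and C are "one-and-done," so the rate is 2/3.
toy = pd.DataFrame({
    "site_id":    ["A", "A", "B", "C", "C"],
    "start_year": [2018, 2020, 2019, 2021, 2021],
})
print(same_year_death_rate(toy))  # 0.666...

A real analysis would also have to handle the reporting lag Friedson raises below: sites observed near the end of the data window may simply not have posted their next trial yet.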
There is some very basic opening and closing, churn-type data that we have for businesses that we use to understand business dynamics. Some people on the ground absolutely knew this about clinical research sites, but from a macro top-down perspective, this wasn't an easily accessible industry insight until now.
From 2017 to 2021, the net change in the number of sites – births minus deaths – was positive. But in 2022 and 2023, the number of sites decreased. Have you attributed that drop – either with data or colloquially – to any particular factors, such as the COVID-19 pandemic?
It's not the only theory that fits the facts, but it does seem to me to be an incredibly likely theory. We know that COVID-19 closed a lot of things, including putting some clinical trials on hiatus, and we know that it moved some new trials toward what we'd call site-less alternatives.
Another thing is that data capture on ClinicalTrials.gov is imperfect. There's a window in which you have to upload your information, so some of what we are seeing could just be trials that have completed but have not yet uploaded their information to ClinicalTrials.gov. Some of it's a data lag, and some of it certainly fits with COVID-19.
Your research found that the available population (large metro to rural areas) and local economic conditions (poverty rate, median household income, and health insurance coverage) have minimal impact on site survival. Before this research, what did you or the wider industry assume about these factors? And how has that changed or remained the same?
This is one of those things that we felt could go either way. Let's look at one of the variables: the rate at which people have health insurance. One story we were telling was, if you have health insurance, you are more connected to the healthcare system. We know that having health insurance makes you far more likely to have a regular source of care. It makes you far more likely to see a physician in a given year. So, if we view access to the healthcare system as one of the key ingredients to getting recruited into a clinical trial, then we might think that health insurance penetration is going to be strongly related to clinical trial sites being successful because you've got to be able to recruit enough people.
Another story is that people can't afford healthcare when they don't have health insurance. But when patients are in a clinical trial, they typically don't pay out of pocket for the trial itself, which means they don't necessarily have to have health insurance. They're not buying the drug, but they do pay in other ways, such as travel time, and they may have to take time off work. In terms of the large cost of paying for medical care, though, that's not a material component for the person who is participating in the trial.
Your research revealed that sites with at least one government-sponsored trial in their first year had the best survival rates, compared with trials funded by industry or a medical entity. Understanding that most government funding comes from the NIH, what impact can we expect on site survivability in 2025 in the midst of layoffs and the current government shutdown?
I would be super interested to see the answer to that. And I don't know the answer. I can tell you a few different theories that fit these facts. One theory goes like this: It is possible that government funding itself makes sites more stable. So, when you remove government funding, the sites are less stable, and you're less likely to have those sites. However, another possibility is that the government is more attracted to sites that are inherently stable themselves, so if you are a more stable site, you are pulling in government funding. So, we have a bit of a chicken-and-egg problem.
What you can do is exactly what you suggested. You can continue to track these numbers to get a more complete picture. But with a single-year snapshot, all we can see is that these two things are related.
We know that federal investment in biomedical research is a big part of this engine of U.S. competitiveness. But from a strictly social science analytics perspective, this is one of the questions I'd really like to track over the next couple of years. One of the nice things about this report is that we show you where the public data are. Anyone can now start tracking this themselves.
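For readers who want to take him up on that, a hedged sketch of the year-by-year tally might look like the following, building on the toy example above. Again, the column names are assumptions rather than the actual AACT schema, and the most recent years will overstate deaths because a site’s latest trial to date is not necessarily its last.

import pandas as pd

# Illustrative year-by-year tally of site "births" and "deaths" (not the report's code).
# Reuses the hypothetical trial-site DataFrame from the earlier sketch.
def annual_births_deaths(trials: pd.DataFrame) -> pd.DataFrame:
    spans = trials.groupby("site_id")["start_year"].agg(birth="min", death="max")
    births = spans["birth"].value_counts().rename("births")
    deaths = spans["death"].value_counts().rename("deaths")
    out = pd.concat([births, deaths], axis=1).fillna(0).astype(int).sort_index()
    out["net_change"] = out["births"] - out["deaths"]  # positive = more openings than closings
    return out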
Finally, how do you envision various audiences using this information – from sites to sponsors to other supporting partners?
This is what I would call foundational social science. It's similar to how, if you want to generate a breakthrough drug, you need to have an understanding of the basic biologic processes. And if you want to have an understanding of how to improve access for clinical trials, you need to have an understanding of the basic social science processes by which these sites and the people who go to these sites operate.
The idea here is I want to provide intelligence to people who are operating in this space, so when they're trying to figure out all the different things that matter, they're able to do that – to just grab some results and run with them. If you're worried about the survivability of a clinical trial site and you're looking at places that are historically disadvantaged for clinical trials – for example, lower-income rural areas – it seems that those factors don't have a huge impact on site survivability.
This might provide some early evidence that conventional wisdom isn't necessarily correct, and that some of these locations that have been underserved are actually not bad places to invest. At least from the point of view of keeping a site operational for multiple years, those places appear to be no more or less risky than anywhere else.
About The Expert:
Andrew Friedson, Ph.D., is the head of research for Milken Institute Health. He is an economist with expertise in health care and related sectors. Before joining the Milken Institute, he spent over a decade in academia, where he served as an associate professor of economics at the University of Colorado Denver, with a secondary appointment at the Colorado School of Public Health. He is the author of the textbook “Economics of Healthcare: A Brief Introduction,” published by Cambridge University Press and used in classrooms around the country.