From The Editor | March 18, 2015

When Measuring Clinical Quality, Beware Of The Metrics Trap

By Ed Miseta, Chief Editor, Clinical Leader

When discussing the topic of measuring quality in clinical trials, Mike Howley advises sponsors and CROs not to place too much focus on metrics. That might sound strange coming from a business professor with an MBA and a Ph.D. But Howley's problem is not with the metrics themselves; it's with how companies use them.

Howley, an associate clinical professor at the LeBow College of Business at Drexel University, has long had an interest in measuring clinical quality. He chose Arizona State University for his doctorate because of the school's focus on service quality and how to measure and manage it, and he subsequently accepted a position at Drexel. In 2010, when Peter Malamis, a pharma industry veteran, came up with the idea of measuring quality in a trial, he approached Drexel, which directed him to Howley.

Although Howley has done similar research in other industries, he now spends over 80 percent of his research time focused on clinical trials. According to Howley, “I am not a clinical industry guy, but I do know how to measure service quality. That is what I bring to the table.”

Ed Miseta: You often make reference to the "Metrics Trap." How do you define that, and why is it an issue for pharma?

Michael Howley: In the pharma industry we find there is an emphasis on metrics. Operational metrics are things we can easily define and measure. Common metrics in clinical trials are how many patients you recruited, how long it took to recruit them, or how many days elapsed from first patient in to last patient in. Of course, as soon as you have that data, you want to benchmark it. For example, if a CRO took 100 days to recruit patients for a trial, you would want to know whether that is good or bad. Unfortunately, in many cases the answer is: it depends. For one trial that number might be very good; for another it might be very bad. When a metric's interpretation depends on the specific trial you are running, it is not a valid metric.

Then once you have the data, you will want to benchmark it. Benchmarking basically means comparing to averages. So if the average recruitment time is 120 days and you did it in 100, you might be led to believe you are doing well. But that may not be the case if the whole industry is performing poorly. This is what we call the Metrics Trap.
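Howley's point can be made concrete with a small numeric sketch. The figures below (an industry benchmark of 120 days and a trial-specific target of 60 days) are hypothetical, chosen only to show how beating an average can still mean underperforming:

```python
# Hypothetical illustration of the "Metrics Trap": beating the industry
# average on a metric says nothing about absolute performance.
industry_recruitment_days = [130, 115, 125, 110, 120]  # hypothetical benchmark data
our_recruitment_days = 100

industry_avg = sum(industry_recruitment_days) / len(industry_recruitment_days)
print(f"Industry average: {industry_avg:.0f} days")  # prints "Industry average: 120 days"

# The benchmark says we "beat the average"...
beats_average = our_recruitment_days < industry_avg
print(f"Beats average: {beats_average}")             # prints "Beats average: True"

# ...but if this trial actually needed patients enrolled within 60 days,
# both we and the rest of the industry are performing poorly.
trial_target_days = 60  # hypothetical trial-specific requirement
meets_trial_need = our_recruitment_days <= trial_target_days
print(f"Meets this trial's need: {meets_trial_need}")  # prints "Meets this trial's need: False"
```

The trap is in the second comparison: the average is a statement about the industry, not about what the trial required.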

Miseta: Is there an alternative method of determining if a CRO is performing well?    

Howley: What my colleague (Peter Malamis, CEO of CRO Analytics) and I have been working on is a predictive model. Instead of looking at hundreds of individual metrics and weighting each one equally, we look at the factors that drive performance and are important to sponsors, and weight each one based on its contribution to overall performance. Once we have all of that information for a multitude of CROs, we can predict which ones will have the highest probability of success on a given trial.

What we are not doing is ranking CROs. We are looking at past data to see how they performed on different performance factors in the context of different trials. This should help us to determine those trials in which they will have the highest probability of success in the future.  
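A minimal sketch of the weighted-factor idea described above might look like the following. The factor names, weights, and scores are entirely hypothetical and are not taken from CRO Analytics' actual model; the point is only that factors contribute in proportion to their importance for a given trial rather than equally:

```python
# Sketch: score each CRO as a weighted average of performance factors,
# where the weights reflect each factor's importance to this trial.
def weighted_performance(scores: dict, weights: dict) -> float:
    """Weighted average of factor scores (each score on a 0-1 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[f] * weights[f] for f in weights) / total_weight

# Hypothetical trial where site management matters most.
weights = {"site_management": 0.5, "data_quality": 0.3, "communication": 0.2}

cro_a = {"site_management": 0.9, "data_quality": 0.6, "communication": 0.7}
cro_b = {"site_management": 0.6, "data_quality": 0.9, "communication": 0.9}

# CRO A comes out ahead for this trial, even though CRO B has better
# raw scores on two of the three factors.
print(round(weighted_performance(cro_a, weights), 2))  # prints 0.77
print(round(weighted_performance(cro_b, weights), 2))  # prints 0.75
```

With different weights (a different trial context), the ranking can flip, which is why this approach predicts fit to a specific trial rather than producing a single league table of CROs.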

Miseta: Many industry groups are working to bring standardization to trials and improve quality and efficiency while bringing down costs. Do you see these efforts working?

Howley: I think many of these groups have the potential to make big improvements in both efficiency and quality by using operational metrics. And what we are doing with predictive modeling is actually complementary to those efforts. The kind of quality assessment we are arguing for takes a more global perspective: it is comprehensive, and it looks everywhere.

For example, a project manager might say they need very specific, granular details. If there is a problem in their area, they need details in order to drill down into the issue. But if they are looking at 400 operational metrics, where do they start? Intuition or expertise might tell you to look here or there, but what you are really doing is guessing, and in that case you are flying blind.

Predictive modeling will provide a more comprehensive view of quality and help companies keep an eye on their blind spots. Once you have that view, the operational metrics are complementary because they allow you to drill down on some of the granular issues that we would not cover.

Miseta: One of the trends we are seeing today is pharma companies partnering with other pharma companies, often in the area of clinical trials. Does this add more complexity to the problem of measuring quality?

Howley: That certainly adds a level of difficulty to the process. Everyone focuses on quality: what should we measure, how should we measure it, and so on. We can debate that all day, but that is the easy part. There are established quality measures, and we know how to use them; we basically have it down to a science. The really hard part is the sampling. And it is not random sampling. It is sampling where you have to identify that a particular person witnessed a particular part of the clinical trial, so you can get them the right assessment at the right time. That is what drives the complexity.

What companies need is a technology solution to help them manage that. The problem in assessing quality is not deciding what to measure; the hard part is the sampling. It is complex enough when you work with one CRO. In a partnering arrangement, you could have two sponsors working with two different CROs. And what happens when your CRO starts subcontracting part of the work, such as patient recruitment or data management? Suddenly you have to account for multiple subcontractors, and you end up creating what is almost a social network of who is doing what within the trial. It can really blow up the complexity of what you are trying to do.
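The "social network" Howley describes can be sketched as a simple mapping of who contracted whom and who performed which function, so that assessments can be routed to the organizations that actually witnessed the work. All organization names and functions below are hypothetical:

```python
# Toy sketch of a trial's organizational network: which organization
# contracted which, and which trial function each one performed.
trial_network = {
    "SponsorA": ["CRO-1"],
    "SponsorB": ["CRO-2"],
    "CRO-1": ["RecruitCo"],    # subcontracted patient recruitment
    "CRO-2": ["DataVendor"],   # subcontracted data management
}

responsibilities = {
    "CRO-1": "site monitoring",
    "CRO-2": "project management",
    "RecruitCo": "patient recruitment",
    "DataVendor": "data management",
}

def assessors_for(function: str) -> list:
    """Find every organization that performed a given trial function,
    i.e., who should receive the quality assessment for that function."""
    return [org for org, role in responsibilities.items() if role == function]

print(assessors_for("patient recruitment"))  # prints ['RecruitCo']
```

Even this toy version shows why subcontracting blows up the sampling problem: each new edge in the network adds another organization whose output must be assessed by the right people at the right time.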

I advise companies to remember that we are not measuring the performance of people here. We are measuring the performance of organizations. Our focus has to be on the quality of the output. In this kind of a dynamic and complex environment, it is important that everyone keep their eye on the ball.