Guest Column | February 20, 2025

Out Of The (Black) Box Thinking For Patient Recruitment

By Ross Jackson


For the fourth in the series “Inspired Patient Recruitment – Taking Inspiration from Business Advice Bestsellers,” I’ve looked at Matthew Syed’s book, Black Box Thinking.

The book explores how embracing failure and learning from mistakes are essential to success. Its central metaphor is the aviation industry's black box system: meticulously analyzing data from flight recorders (black boxes) after accidents and fostering a culture of accountability and continuous improvement so that the same mistakes are not repeated.

As with previous articles, I’ve framed this one around two candidates interviewing for a job managing clinical trial patient recruitment and retention activities: one is on board with black-box thinking, while the other adopts a more traditional approach.

Candidate Interviews: Thinking Out Of The (Black) Box, Or More Of The Same?

How could you gather feedback from patients who choose not to participate or drop out of your trial?

Candidate One: I’d implement a robust feedback system to capture the reasons why patients decline to participate or drop out. For non-participants, I’d send follow-up surveys that ask about their concerns or barriers to joining. For those who drop out mid-trial, I would gather insights through exit interviews conducted by neutral third-party staff. In previous trials, I’ve found participants are more candid when speaking to someone outside of the immediate trial team. This data is crucial for adapting our strategies — whether through addressing logistical concerns, clarifying trial procedures, or offering better incentives.

Candidate Two: I’ve never seen the point of gathering detailed feedback from non-participants or dropouts. If someone decides not to join or leaves mid-trial, I’d assume it’s either due to external factors beyond our control or that they just weren’t a good fit. Our team is already stretched thin, so following up on every participant who leaves would be too time-consuming. I’d prefer to focus on recruiting new participants instead.

If a recruitment campaign fails to attract sufficient participants, how would you analyze what went wrong?

Candidate One: I treat every recruitment phase as a learning experience. After each campaign, we’d perform a comprehensive post-mortem, examining where we lost potential participants. We’d collect data from each touchpoint, from initial contact to consent, identifying any drop-off points, and we’d analyze the effectiveness of the messaging, the channels used, and the demographics reached. This process often reveals patterns, such as ineffective messaging or underutilized outreach channels, that we could then address in subsequent campaigns.
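By way of illustration, here is a minimal sketch of the kind of funnel drop-off analysis Candidate One describes. The stage names and counts are hypothetical placeholders invented for the example; a real post-mortem would pull these figures from your recruitment platform or CTMS.

```python
# Minimal recruitment-funnel drop-off analysis (illustrative only).
# Stage names and counts below are hypothetical placeholders.
funnel = [
    ("Saw ad / initial contact", 5000),
    ("Completed pre-screener", 1200),
    ("Passed phone screen", 480),
    ("Attended screening visit", 300),
    ("Signed informed consent", 210),
]

def report_dropoffs(stages):
    """Print conversion between consecutive stages to spot the biggest leaks."""
    for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:]):
        rate = n_b / n_a if n_a else 0.0
        print(f"{name_a} -> {name_b}: {n_b}/{n_a} "
              f"({rate:.0%} converted, {1 - rate:.0%} lost)")

report_dropoffs(funnel)
```

Even a crude table like this makes the biggest leak in the funnel obvious at a glance, which is where a post-mortem discussion would naturally start.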

Candidate Two: I don’t anticipate needing a formal process for analyzing failed recruitment campaigns. If the numbers don’t come in, we can safely assume it’s because we didn’t have the right participants available or the trial wasn’t appealing enough. If that does happen, I’ll suggest we move on to the next campaign without dwelling too much on what went wrong in the previous one. It’s more efficient and helps minimize time spent navel-gazing or overanalyzing things we can’t change.

Would you foster a culture of openness in the organization when discussing recruitment or retention failures, or do you sense there will always be an aversion to admitting mistakes?

Candidate One: I’d love to actively work toward fostering a culture of openness. Initially, of course, there may be some resistance to discussing failures, as people may feel we’re focusing on their personal shortcomings. But I believe we could make strides by framing these conversations around continuous improvement rather than blame. We could hold monthly review meetings where every team member, whether involved in recruitment, data management, patient engagement, or any other function, contributes to analyzing what didn’t work and brainstorming solutions. By focusing on learning and improvement, we should be able to make incremental changes that lead to better outcomes over time.

Candidate Two: Honestly, I wouldn’t want to talk about recruitment failures. People understandably have an aversion to discussing mistakes because no one wants to be blamed for things that went wrong. If something doesn’t work, it’s easier to keep quiet and avoid drawing attention to it. Our focus would be more on hitting recruitment targets, and we wouldn’t want to spend too much time talking about setbacks. Discussing failures is uncomfortable, and it doesn’t help team morale.

Do you think it’s worth exploring the possibility of small, incremental improvements, or marginal gains, that could enhance your recruitment or retention strategy?

Candidate One: Absolutely. I’ve always adopted a marginal gains approach, which has been surprisingly impactful in my previous trials. For example, in one trial we made small but meaningful adjustments — such as tweaking the tone of patient outreach emails to make them more empathetic and offering additional support for questions about the trial. We also fine-tuned our retention efforts by offering more personalized follow-ups and flexibility in our scheduling, which noticeably reduced dropouts. These small changes accumulate, and over time, they can significantly improve the patient experience and likelihood of success for a trial.

Candidate Two: I don’t think it’s worth focusing much on small improvements, so I stick to the tried-and-tested methods for recruitment and retention. If something doesn’t work, there’s no guarantee it won’t work next time. And if it doesn’t, we can simply try a different approach. It seems more efficient to scrap what doesn’t work and go for something new, rather than investing time in small adjustments that may or may not make a difference anyway.

What might be your method for addressing patient concerns throughout the trial, particularly when participants express frustration or show signs of disengagement?

Candidate One: I’d like to develop a proactive communication strategy where we touch base with participants regularly, not just at the mandated check-ins. Every participant would be assigned a dedicated trial coordinator who is easily accessible via whichever method the participant prefers (e.g., phone, text, or email). This coordinator would monitor for signs of frustration or disengagement, such as missed appointments or decreased responsiveness, and use those signals as a trigger to reach out, understand the issue, and find solutions. That might mean offering transportation assistance, arranging more flexible scheduling, or clarifying trial processes, all with the aim of keeping participants engaged.
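To make the trigger idea concrete, here is a minimal rule-based sketch of the kind of disengagement check a coordinator’s tooling might run. The thresholds and field names are assumptions made up for the example, not taken from any specific system; a real protocol team would set and validate its own criteria.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    participant_id: str
    missed_appointments: int    # consecutive missed visits
    days_since_last_reply: int  # days since last response to outreach

# Hypothetical thresholds; tune these to the trial's visit schedule.
MAX_MISSED = 1
MAX_SILENT_DAYS = 14

def needs_outreach(p: Participant) -> bool:
    """Flag a participant for a personal check-in by their coordinator."""
    return (p.missed_appointments > MAX_MISSED
            or p.days_since_last_reply > MAX_SILENT_DAYS)

roster = [
    Participant("P-001", missed_appointments=0, days_since_last_reply=3),
    Participant("P-002", missed_appointments=2, days_since_last_reply=21),
]

for p in roster:
    if needs_outreach(p):
        print(f"{p.participant_id}: flag for coordinator outreach")
```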

Candidate Two: When managing a clinical trial, I don’t think we have time to handle patient frustrations. If someone is unhappy or misses appointments, they’ll either get back on track or drop out. Obviously, I would expect us to send the usual reminders and follow the protocol. But if a participant isn’t engaged, there’s not much we can do. Addressing every individual concern would be a drain on resources we can ill afford.

How often might you run post-trial analyses to improve recruitment and retention strategies for future trials?

Candidate One: I’d recommend we conduct full post-trial reviews after every study, focusing not just on clinical outcomes but also on recruitment and retention metrics. We should look at what worked well and where we fell short, documenting every lesson learned. This review should include feedback from all stakeholders, including patients, coordinators, and other members of the study team. We should then compile this into a lessons-learned document that’s referenced every time we plan future trials. I’d also suggest we create an internal database of these reviews so that everyone in the organization could benefit from the insights, even if they aren’t directly involved in particular trials.

Candidate Two: Given how busy we always are, post-trial analysis isn’t something I’d prioritize. Once the trial is over, we should be more focused on reporting results and moving on to the next project. We wouldn’t want to waste our time reviewing what could have been done better in recruitment or retention, as there’s very little we could have changed anyway. The main concern is completing the trial on time and within budget. As you know, any problems that might have occurred along the way are part of the cost of doing business, so I don’t think we should dwell on them.

What do you think about collaborating with other clinical trial organizations to share recruitment and retention data and learn from each other’s failures?

Candidate One: In one of my previous roles, we started collaborating more actively with other clinical trial organizations. We were part of a regional consortium of research teams that shared anonymized recruitment data and lessons learned. This collective approach gave us access to a wider range of strategies that we could adapt and apply to our own trials. For instance, one team shared a particularly effective patient referral program that we adapted for our own recruitment, and it significantly boosted our numbers. This collaborative learning approach is invaluable, and I’d love to incorporate it here.

Candidate Two: Sharing data seems risky and could reveal too much about our internal processes. Besides, every trial is different, so what works for someone else might not work for us. I think we’d be better off operating independently and figuring out our own solutions, without giving away any secret sauce that might give us an advantage. Equally, there’s no point in exposing our failures to other companies.

How could you use technology to help predict potential recruitment failures in real time?

Candidate One: I’m very excited about the possibilities for integrating AI-driven analytics into our recruitment process. We could use a system that tracks patient engagement from the first point of contact, analyzing patterns in open rates, clicks, and responses to predict potential drop-off points. This real-time feedback would allow us to make immediate adjustments, such as changing messaging or shifting outreach channels. We could also apply the same kind of predictive analytics to retention, flagging participants who are showing early signs of disengagement so that we could intervene before they drop out.
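As a toy illustration of the kind of predictive analytics Candidate One has in mind, the sketch below fits a simple logistic regression on hypothetical engagement signals (email open rate, click rate, replies per month) to score dropout risk. It assumes scikit-learn is available and that labeled data from past trials exists; all numbers are invented, and this is a sketch of the idea, not a production model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: one row per participant from past trials.
# Columns: email open rate, click rate, replies per month.
X_train = np.array([
    [0.9, 0.4, 3.0],
    [0.8, 0.3, 2.5],
    [0.2, 0.0, 0.5],
    [0.1, 0.0, 0.0],
    [0.7, 0.2, 2.0],
    [0.3, 0.1, 0.5],
])
# Labels: 1 = dropped out, 0 = completed the trial.
y_train = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Score current participants and flag the riskiest for early intervention.
current = np.array([
    [0.85, 0.35, 2.8],
    [0.15, 0.05, 0.2],
])
risk = model.predict_proba(current)[:, 1]
for i, r in enumerate(risk):
    action = "flag for outreach" if r > 0.5 else "ok"
    print(f"participant {i}: dropout risk {r:.2f} -> {action}")
```

The point is less the particular model than the loop it enables: engagement signals come in continuously, risk scores update, and the team intervenes before a drop-off becomes a dropout.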

Candidate Two: I must admit I’m not convinced by all these shiny objects that seem to be introduced into the field on a daily basis. If a recruitment campaign isn’t going well, it’s almost always due to external factors like timing or potential participants’ lack of interest in the trial, so I wouldn’t recommend investing in tools to track engagement or predict dropouts in real time. Instead, we should just focus on getting through the campaign as best we can and delivering the necessary data.

Boxed-In Or Boxing Clever?

As usual, Candidate Two is voicing their perfectly legitimate concerns about changing the way things are done. Candidate One, on the other hand, is expressing their enthusiasm for a new approach that incorporates the lessons of Black Box Thinking for potentially better outcomes. I know which way of thinking I would be recommending. And hopefully, you can find some tips and ideas for yourself from this article and Matthew Syed’s book.

About The Author:

Ross Jackson is a patient recruitment specialist and author of the industry-standard books The Patient Recruitment Conundrum and Patient Recruitment for Clinical Trials using Facebook Ads.

Having started with digital marketing in 1998, Ross quickly developed a specialty in the healthcare niche, evolving into a focus on clinical trials and the problems of patient recruitment and retention.

Over the years, Ross branched out from purely digital work and now operates in an advisory capacity, helping sponsors, CROs, sites, solutions providers, and others in the industry to improve their patient recruitment and retention capabilities. He has advised and consulted on over 100 successful projects.