Will New AI Health Assistants Suggest Clinical Trials To Inquiring Patients?
By Ross Jackson, consultant

Recent announcements around AI health assistants, including OpenAI’s new health-focused ChatGPT experience, offer an interesting signal for the clinical trials ecosystem.
The real significance is not the specific product release itself but what it reflects about how people might increasingly engage with health information.
Patients are already using generative AI tools to help them make sense of symptoms, diagnoses, and treatment options. In many cases, these tools may no longer act simply as gateways to websites but as synthesis engines — summarizing information, prioritizing options, and framing possible next steps in plain language.
That shift has important implications for clinical trials — particularly for how patients first become aware that trials even exist.
Historically, interested patients have discovered trials through search engines. (Though, it has to be said, search accounts for only a small proportion of the patients who ultimately enroll in trials.) Searchers would enter a query, scan results, click through multiple sources, and gradually piece together an understanding of their situation.
More commonly, patients are seeking information about their condition, rather than specifically about clinical trials. Relevant trials, if they appear at all, sit awkwardly at the margins of that process, often framed as transactional recruitment initiatives rather than as part of a broader care or decision-making landscape.
Generative AI changes that dynamic.
When a patient asks an AI assistant about a condition, progression, or available treatment options, the response is increasingly a single, coherent narrative rather than a list of links. In that context, whether — and how — clinical trials are mentioned becomes a question of discoverability, not marketing.
This is not about training AI systems to promote trials nor about automating recruitment. It is about understanding how large language models draw on the public information ecosystem and how that ecosystem shapes what patients are likely to hear when they ask AI for guidance.
As AI health assistants become more capable and trustworthy, the way clinical trials are represented in these AI-generated responses will matter — for patient understanding, for health literacy, and ultimately for trial awareness itself.
So, How Do AI Assistants Actually "Know" Things?
Large language models (LLMs) — the underlying systems that generative AI tools such as ChatGPT use to produce responses — draw on the public ecosystem of information rather than on any single database or proprietary source. Their outputs are shaped by information that is most visible, most consistently expressed, and most clearly framed across that ecosystem.
In practical terms, LLMs synthesize information based on patterns. Ideas that appear repeatedly, across multiple credible sources, and in accessible language are more likely to be included in AI-generated responses. Conversely, concepts that are fragmented, inconsistently described, or narrowly framed are less likely to surface, even if they are technically accurate or scientifically important.
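This is not how a language model works internally, but the underlying intuition can be illustrated with a toy sketch: if we treat a handful of hypothetical patient-facing snippets as a stand-in for the public information ecosystem, simply counting how many snippets mention each concept gives a crude proxy for which ideas a synthesis engine is most likely to surface. The snippets and phrases below are invented for illustration only.

```python
from collections import Counter
import re

# Hypothetical patient-facing snippets standing in for the public ecosystem.
snippets = [
    "Treatment options include medication, lifestyle changes, and surgery.",
    "Ask your doctor about medication and lifestyle changes for this condition.",
    "Surgery and medication are the main treatment options discussed with patients.",
    "A clinical trial may be an option for some patients with this condition.",
]

def phrase_frequency(corpus, phrases):
    """Count how many snippets mention each phrase (case-insensitive)."""
    counts = Counter()
    for phrase in phrases:
        pattern = re.compile(re.escape(phrase), re.IGNORECASE)
        counts[phrase] = sum(1 for s in corpus if pattern.search(s))
    return counts

freq = phrase_frequency(snippets, ["medication", "surgery", "clinical trial"])
# "medication" turns up in most snippets; "clinical trial" in only one --
# a rough stand-in for which concepts are visible enough to be synthesized.
print(freq)
```

In this toy corpus, "clinical trial" appears in a single snippet while "medication" appears in three, mirroring the imbalance described above: a concept that is rarely mentioned in patient-facing discussion gives the model little basis for including it.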
Credibility, from the LLM’s perspective, is not determined by editorial judgement in the human sense. Instead, it is inferred statistically through the frequency of references, the consistency of framing, the presence of citations, and the apparent authority of the sources in which the information appears. Content that is widely referenced and broadly contextualized tends to carry more weight than content that exists in isolation or is tied closely to a single organizational voice.
One important implication of this is that absence matters as much as presence. If a particular idea — such as the role of clinical trials in managing or treating a condition — is rarely mentioned in patient-facing discussions of that condition, the model has little basis on which to include it in its responses. Even when a user asks a broad question about available options, the AI can only draw on what the information ecosystem has already made visible.
In other words, AI assistants do not independently decide whether clinical trials are relevant. They reflect how the wider health information landscape has positioned trials up to that point. As a result, if clinical trials are peripheral in public discourse around a condition, they are likely to remain peripheral in AI-generated guidance as well.
Why Clinical Trials Are Currently Poorly Represented
Several factors contribute to this underrepresentation.
One is that information about clinical trials is often fragmented and closely tied to individual pharmaceutical or biotechnology companies. When references to a trial appear predominantly within sponsor-controlled content, the information may be interpreted by AI systems as partisan rather than broadly authoritative. Even when the underlying science is sound, a lack of independent or widely distributed references can reduce the perceived general relevance of that information.
Another factor is that much of the core trial information that does exist publicly is written primarily for regulatory or scientific audiences rather than for patients. Databases such as ClinicalTrials.gov play a critical role in transparency and credibility, but they are not designed to support general understanding (to say the least!). The language is technical, the structure is complex, and the context required to interpret the information is often assumed rather than explained. As a result, this content can be treated by AI models as niche or specialist, rather than as material suitable for inclusion in a broad, patient-facing narrative.
Another issue is that trial information is frequently buried within sponsor websites or registry pages, where it appears alongside compliance documentation rather than within wider discussions of care pathways or treatment decisions. From an AI perspective, this positioning can signal that the information is not intended for general consideration but is for a limited or highly specific audience.
And perhaps most importantly, clinical trials are most often framed through the lens of recruitment and enrollment rather than as a legitimate option to be considered alongside other aspects of disease management. Trial participation is typically presented as something patients are asked to do, rather than as something they might reasonably choose to explore as part of understanding their situation. That framing influences not only how patients perceive trials, but also how AI systems interpret their relevance within a broader informational context.
The cumulative effect of these factors is that information about clinical trials lacks the natural triggers that would prompt AI assistants to surface it spontaneously. Trials are not absent from the public ecosystem, but they are insufficiently integrated into the narratives that shape how conditions, options, and next steps are commonly discussed. As AI-driven health assistants increasingly rely on those narratives to guide their responses, this structural invisibility becomes more consequential.
Discoverability, Not Promotion
What we are ultimately trying to achieve is for AI health assistants to routinely introduce the idea of clinical trial participation when someone engages with them in a relevant way, such as when seeking information about a newly diagnosed condition, disease progression, or available management options.
This is not about persuading AI systems to "push" trials, nor is it about turning health assistants into recruitment tools. Rather, it is about ensuring that clinical trials are represented clearly and appropriately within the broader information landscape that AI systems draw upon when forming responses.
The term generative engine optimization (GEO) has begun to emerge as a way of describing this shift, echoing the earlier development of search engine optimization (SEO). Just as SEO evolved to help content appear in response to search queries, GEO reflects the principles that influence whether information is surfaced within AI-generated narratives.
Applying GEO in the context of clinical trials is not about paid advertising, algorithmic manipulation, or engineering thousands of inbound links. Instead, it centers on clarity and context and on integrating clinical trials into how conditions, options, and next steps are commonly discussed and understood.
Importantly, this is not only relevant for patients. Healthcare professionals are also part of this evolving information ecosystem. Clinicians have long relied on search engines to quickly contextualize unfamiliar conditions or treatment approaches, and it is reasonable to expect that AI assistants will increasingly play a similar role in clinical workflows. How trials are framed in AI-generated responses will therefore influence professional awareness and consideration, as well as patient understanding.
My own background in digital marketing (having started with SEO in the late 1990s) makes the parallels and the differences between these eras particularly striking. Early SEO best practices focused less on tactics and more on fundamentals — clear explanations, credible citations, relevant context, and information written with the end user in mind. Many of those same principles appear to apply in the emerging GEO-based, synthesis-driven environment.
Where the disconnect currently arises is between the transactional framing that dominates much patient recruitment activity and the informational relevance that large language models are designed to surface. Recruitment materials are often created to prompt action, whereas AI systems are optimized to support understanding. If information about clinical trials is primarily presented as a call to enroll rather than as part of a broader discussion about options, it is less likely to be included naturally in AI-generated guidance.
In this sense, discoverability is not necessarily a marketing problem to be solved but a representational one. It depends on whether clinical trials are positioned as a credible, comprehensible, and contextually appropriate element of the health information environment. The goal is for trials to be visible not because they are promoted but because they genuinely belong in the conversation.
What Sponsors Should Be Thinking About Now
If AI-driven health assistants are likely to influence how and when people first encounter the idea of clinical trials, then sponsors need to rethink how trials are represented within the public information ecosystem. This does not require a wholesale change in recruitment strategy, but it does call for attention to a few foundational questions.
Firstly, sponsors should consider how their trials are described publicly, across all outward-facing touchpoints. This is not simply about language choice but about conceptual framing. Many trial descriptions focus almost exclusively on operational details — e.g., inclusion criteria, endpoints, study duration — without clearly explaining why the trial exists, what uncertainty it is addressing, or how it fits within the broader understanding of the condition. AI systems are more likely to surface information that is framed as meaningful and explanatory, rather than purely procedural.
Related to this, but distinct, is the question of whether genuinely patient-facing language exists at all. Scientific and regulatory descriptions of trials are essential, but if these are the only representations available publicly, AI assistants may treat trial information as specialist material rather than content suitable for a general audience. Patient-oriented explanations — written in plain language, with appropriate context and caveats — help signal that trial information is accessible and relevant to non-expert audiences, increasing the likelihood that it will be incorporated into AI-generated responses to broader health queries.
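One rough way sponsors can sanity-check whether a public description is genuinely patient-facing is a standard readability score. The sketch below uses the well-known Flesch reading-ease formula with a deliberately crude syllable heuristic (counting vowel groups); the two trial descriptions are invented examples, not real registry text, and the scores should be treated as indicative rather than precise.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels (minimum one)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch reading ease: higher scores mean easier text (60-70 ~ plain English)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical registry-style vs patient-facing descriptions of the same trial.
registry = ("A randomized, double-blind, placebo-controlled study evaluating "
            "the efficacy and tolerability of an investigational therapeutic "
            "in participants meeting protocol-specified eligibility criteria.")
patient = ("This study compares a new medicine with a placebo. "
           "It checks how well the medicine works and how safe it is. "
           "Your doctor can help you decide if it might suit you.")

print(flesch_reading_ease(registry))  # low score: dense, specialist phrasing
print(flesch_reading_ease(patient))   # higher score: plainer language
```

The point is not the exact numbers but the gap between them: registry-style phrasing scores far below plain-language rewrites of the same content, which is one signal (among others) that AI systems may treat the former as specialist material.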
Finally, sponsors may want to reflect on how trial participation is positioned within that information. At present, much of the language patients encounter frames clinical trials primarily as recruitment opportunities — an invitation to enroll, often presented separately from discussions about care pathways or treatment decision-making. By contrast, positioning trial participation as one possible option among others, something to be explored, discussed, and weighed, aligns more closely with how AI assistants are designed to support understanding rather than action. This reframing can make trial information more contextually appropriate for inclusion in responses that aim to inform rather than persuade.
Of course, when a user asks an AI assistant directly for information about clinical trials, the system is likely to respond as expected, typically by drawing on established sources such as ClinicalTrials.gov. However, such direct queries represent only a small fraction of the health-related questions people ask. The greater opportunity lies in ensuring that clinical trials are surfaced naturally when users are seeking information about conditions, disease progression, or available options, rather than only when trials are explicitly requested.
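For readers curious what such a direct registry lookup involves, the sketch below constructs (but does not send) a search URL against the ClinicalTrials.gov v2 API. The endpoint and parameter names (`query.cond`, `filter.overallStatus`, `pageSize`) are assumptions based on the publicly documented v2 API and should be verified against the current API reference; the condition used is an arbitrary example.

```python
from urllib.parse import urlencode

# Assumed v2 endpoint; verify against the current ClinicalTrials.gov API docs.
BASE = "https://clinicaltrials.gov/api/v2/studies"

def build_trial_search_url(condition, status="RECRUITING", page_size=10):
    """Construct (without sending) a registry search URL for a condition."""
    params = {
        "query.cond": condition,            # free-text condition query (assumed name)
        "filter.overallStatus": status,     # e.g., RECRUITING (assumed name)
        "pageSize": page_size,              # results per page (assumed name)
    }
    return f"{BASE}?{urlencode(params)}"

url = build_trial_search_url("type 2 diabetes")
print(url)
```

Assembling this kind of query is trivial for an AI assistant; the harder problem, as argued above, is whether trials surface when the user never asks for them directly.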
Taken together, the considerations outlined above point to a shift in emphasis from promoting trials as discrete opportunities to ensuring they are visible, comprehensible, and appropriately contextualized within the broader health information environment.
AI health assistants may not change the fundamental principles of patient recruitment, but they do have the potential to influence when and how awareness of clinical trials is first formed. And that, in turn, may shape who is likely to consider participation in the first place.
About The Author:
Ross Jackson is a patient recruitment specialist and author of the books The Patient Recruitment Conundrum and Patient Recruitment for Clinical Trials using Facebook Ads.
Having started out with digital marketing in 1998, Ross quickly developed a specialty in the healthcare niche, evolving into a focus on clinical trials and the problems of patient recruitment and retention.
Over the years Ross branched out from the purely digital and now operates in an advisory capacity helping sponsors, CROs, sites, solutions providers, and others in the industry to improve their patient recruitment and retention capabilities — having advised and consulted on over 100 successful projects.