Guest Column | July 15, 2025

How To Avoid Hazards And Map A Safer, Smarter Path For AI, Part 2

By Vincent Puglia and Tanisha Patel

In part one of this series, we explored the rocky road of incomplete data, the ethical sinkholes, and the moral, legal, and regulatory considerations pharma must grapple with. We also covered the trap of underestimating the non-linear costs of licensing, storing, and maintaining AI, as well as the ownership risks of IP liability, insurance, and diligence, among other topics.

Now, in part two, we’re continuing the conversation, covering the hazards of training bias and attempts at transparency, as well as the importance of right-sizing AI ambitions and keeping patients close at hand. 

Wrong Way: AI Training Bias That Reinforces Flawed Patterns

The promise of AI and generative AI in clinical trials is immense, particularly in the ability to analyze vast data sets from past and ongoing studies to reveal trends, biomarkers, and patterns that humans might never detect unaided. However, this power comes with risks. AI models trained on narrow, unbalanced, or unrepresentative data sets can unintentionally reinforce existing biases or overlook key population differences.

For example, if an AI model is trained primarily on Japanese population data and then used to interpret real-world data from North American sources, it may produce inaccurate or misleading conclusions. These kinds of mismatches can compromise scientific validity, raise equity concerns, and lead to poor patient outcomes or misguided recruitment strategies. For sponsors and CROs operating globally, these risks are not hypothetical; they directly affect regulatory trust, payer engagement, and patient safety.
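
To make this failure mode concrete, here is a minimal sketch in Python that compares the demographic mix of a training set against the population a model will actually serve. The categories, the 10-point threshold, and the toy data are illustrative assumptions, not a validated fairness test.

```python
# Minimal sketch: flag demographic mismatch between an AI model's
# training data and the population it will be deployed against.
# The categories and the 10-point threshold are illustrative
# assumptions, not a validated fairness standard.
from collections import Counter

def proportion_gaps(training_values, target_values):
    """Return per-category percentage-point gaps between two samples."""
    train = Counter(training_values)
    target = Counter(target_values)
    n_train, n_target = sum(train.values()), sum(target.values())
    categories = set(train) | set(target)
    return {
        c: 100 * (train[c] / n_train - target[c] / n_target)
        for c in sorted(categories)
    }

# Toy example: a model trained mostly on Japanese data, deployed
# against North American real-world data.
training = ["JP"] * 85 + ["NA"] * 15
deployment = ["JP"] * 10 + ["NA"] * 90

for region, gap in proportion_gaps(training, deployment).items():
    flag = "  <-- investigate before deployment" if abs(gap) > 10 else ""
    print(f"{region}: {gap:+.1f} percentage points{flag}")
```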

A better approach requires deliberate, cross-functional design. AI models must be trained on data that is not only large in volume but also diverse in scope. This includes integrating data from various geographies, demographics, and disease areas, and periodically retraining models as new data becomes available. Human oversight must remain central at every stage. One practical model is to use AI to ingest and standardize messy or raw clinical trial data, then apply traditional biostatistics tools for quality review and analytics, followed by additional AI-driven exploration. Each layer in this process presents opportunities for bias, whether from the training set, human error in curation, or misunderstanding of the outputs. This is where clear roles, standard operating procedures, and transparent governance become essential. Companies like BioNTech have pioneered this approach in personalized medicine by developing algorithms to identify patient-specific mutations and predict immune responses in silico, with AI accelerating development cycles while human experts remain involved in oversight and validation.
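
As a rough illustration of that layered model, the sketch below chains three stand-in stages, AI standardization, biostatistical quality review, and AI-driven exploration, and records an explicit human sign-off between each. Every function is a hypothetical placeholder for real tooling, not a reference implementation.

```python
# Illustrative sketch of the layered workflow described above:
# AI ingestion/standardization -> biostatistics QC -> AI exploration,
# with a human sign-off recorded between stages. All stage functions
# are hypothetical stand-ins for real tooling.
from datetime import datetime, timezone

def ai_standardize(raw):
    # Stand-in for an AI step that maps messy inputs to one schema.
    return [{"subject": r.get("subj") or r.get("subject"),
             "value": r["val"]} for r in raw]

def biostat_qc(records):
    # Stand-in for traditional biostatistics review: drop values
    # outside an assumed plausible range.
    return [r for r in records if 0 <= r["value"] <= 300]

def ai_explore(records):
    # Stand-in for an AI-driven exploratory pass over clean data.
    return {"n": len(records),
            "mean": sum(r["value"] for r in records) / len(records)}

def signed_off(stage, output, reviewer):
    # Governance gate: record who approved each stage, and when.
    stamp = datetime.now(timezone.utc).isoformat()
    print(f"[{stamp}] {stage} approved by {reviewer}")
    return output

raw = [{"subj": "001", "val": 120}, {"subject": "002", "val": 999}]
std = signed_off("standardization", ai_standardize(raw), "data manager")
qc = signed_off("biostat QC", biostat_qc(std), "biostatistician")
print(signed_off("exploration", ai_explore(qc), "clinical scientist"))
```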

Ultimately, the life sciences industry must confront a series of hard questions as it moves toward AI-enabled drug development and personalized healthcare. What are the obligations when AI reveals a previously unknown biomarker tied to a serious health condition outside the trial’s intended purpose? Who must be informed, and how? Do existing consent frameworks cover this, or is new language required that addresses secondary use, data lineage, and recontact? As AI systems become self-learning and increasingly integrated into therapeutic discovery, companies must reflect these uncertainties in their contracts and operating models. Liability insurance, for instance, may need to account for a broader spectrum of unknowns, not only based on the direct deliverables of a supplier but also the unpredictable outcomes of generative processes. Procurement multipliers, traditionally set at 10 to 15 percent of service cost for coverage, may need to flex higher depending on model complexity, data sensitivity, and the potential for unintended insights. The industry must not only manage these risks reactively but also build proactive accountability into legal frameworks, model design, and stakeholder training. This is the only path toward responsible, ethical, and scalable AI in clinical development.
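
As a back-of-the-envelope illustration of how such a multiplier might flex, the snippet below starts at the traditional 10 percent floor and scales upward with the three factors named above. The scoring scale, weights, and contract value are hypothetical assumptions, not an actuarial model.

```python
# Illustrative only: flexing a procurement coverage multiplier above
# the traditional 10 to 15 percent band using the three risk factors
# named in the text. Scoring and weights are hypothetical assumptions.
BASE_MULTIPLIER = 0.10   # traditional floor: 10 percent of service cost
MAX_UPLIFT = 0.10        # assumed headroom above the floor

def coverage_multiplier(model_complexity, data_sensitivity, insight_risk):
    """Each factor is scored 0.0 (low) to 1.0 (high)."""
    avg_risk = (model_complexity + data_sensitivity + insight_risk) / 3
    return BASE_MULTIPLIER + MAX_UPLIFT * avg_risk

service_cost = 2_000_000  # hypothetical supplier contract value
m = coverage_multiplier(model_complexity=0.8,
                        data_sensitivity=0.9,
                        insight_risk=0.6)
print(f"Multiplier: {m:.1%}; coverage budget: ${service_cost * m:,.0f}")
```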

The Transparency Speed Bump 

As AI becomes more deeply embedded in clinical trials, regulatory expectations are evolving quickly and often unpredictably. Sponsors may begin a study under one framework, only to face new expectations around transparency, auditability, and validation before completion. Without careful planning, data or AI models used in the trial may later be deemed unsuitable for regulatory submission if they lack proper documentation or traceability. This is especially risky in high-stakes environments where missing audit trails or unclear model provenance can result in rework, rejection, or even the need for additional trials. For commercial teams, these delays can compress launch timelines and disrupt market readiness. Often, clinical teams scramble to recreate or reconstruct data trails after the fact, increasing workload and contributing to operational fatigue.

A more resilient approach starts with investing in platforms that are built for compliance from the ground up. This includes embedded data lineage, version control, and the ability to produce regulatory-grade documentation automatically. AI tools used in clinical settings should meet or exceed standards for traceability and validation, not only for internal QA but to satisfy emerging expectations from regulators such as the FDA, EMA, and ICH. Guidance is shifting, with recent statements from regulatory bodies emphasizing the need for explainability and transparency in AI systems used for decision-making. Organizations that proactively monitor these changes and adjust their frameworks early will reduce the risk of costly retrofits later.
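
To give a sense of what embedded data lineage can mean in practice, the sketch below stores each AI-assisted output alongside the model version, cryptographic fingerprints of its exact inputs and outputs, and a UTC timestamp, so the result stays traceable under audit. The field names are illustrative, not a regulatory schema.

```python
# Minimal sketch of embedded data lineage for an AI-assisted step:
# every output is stored with the model version, fingerprints of the
# exact inputs and outputs, and a UTC timestamp, so the result can be
# traced during an audit. Field names are illustrative only.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class LineageRecord:
    model_name: str
    model_version: str
    input_sha256: str      # fingerprint of the exact input payload
    output_sha256: str     # fingerprint of the produced output
    recorded_at_utc: str

def fingerprint(payload) -> str:
    raw = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(raw).hexdigest()

def record_lineage(model_name, model_version, inputs, outputs):
    return LineageRecord(
        model_name=model_name,
        model_version=model_version,
        input_sha256=fingerprint(inputs),
        output_sha256=fingerprint(outputs),
        recorded_at_utc=datetime.now(timezone.utc).isoformat(),
    )

rec = record_lineage(
    "query-suggester", "1.4.2",
    inputs={"subject": "001", "hr": 212},
    outputs={"query": "Heart rate 212 exceeds expected range"},
)
print(json.dumps(asdict(rec), indent=2))
```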

Beyond compliance, the conversation around transparency must also include the question of mutual visibility. Sponsors are increasingly faced with tough decisions about how much of their data they should allow AI vendors to use for model improvement and learning. Sharing data can benefit the entire industry and lead to better-performing tools, but it also introduces concerns about IP protection and competitive advantage. On the other side, sponsors must consider how much insight they require into the AI models themselves. Understanding how a model was trained, how it makes predictions, and how those predictions evolve is no longer a nice-to-have. Balancing the openness that regulatory and scientific rigor demand against the caution that business and legal protection require is one of the defining challenges of AI maturity in clinical development.

Small Sponsor Forks In The Road: How To Avoid Overextension

For small biotechs and midsize sponsors, the appeal of AI is often tied to promises of speed, efficiency, and a competitive edge. These companies may view AI as a way to level the playing field against larger, better-resourced competitors. However, without a solid internal structure, jumping too quickly into AI adoption can create more risk than reward. A single poor vendor selection or misjudged implementation can consume limited budgets and jeopardize study timelines. When the clinical, commercial, and technical foundations are not in place to support AI integration, the result is often confusion, inefficiency, and lost momentum.

Executive and procurement teams in small organizations face particular pressure. With lean resources, they may lack the time or expertise to conduct thorough due diligence. This opens the door to solutions that are not fit for purpose or require more overhead than anticipated. Commercial leaders, eager to show innovation and attract investor interest, may unintentionally set expectations around AI-driven acceleration that are not aligned with operational capacity. On the ground, clinical teams often find themselves overwhelmed, forced to integrate new AI tools while still managing legacy systems and core study operations. These teams may lack the training, bandwidth, or IT support to make AI tools effective, leading to underutilization or tool abandonment.

The better path begins with restraint and focus. Instead of deploying AI broadly across all functions, smaller sponsors should start with narrow, high-impact applications such as automated data cleaning, query generation, or metadata curation. Choosing modular and flexible platforms allows for scaling when ready, without requiring a complete overhaul of systems. Strategic partnerships can also be a powerful accelerator. Working with experienced CROs or AI-native vendors offers not only technical capabilities but also operational maturity and shared risk. These partners can provide ready-built workflows, regulatory familiarity, and hands-on guidance that reduces the burden on small internal teams. For sponsors with limited margins for error, thoughtful AI adoption is less about moving fast and more about moving smart.
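
As one example of such a narrow starting point, the sketch below drafts data-clarification queries from simple range checks. The column names and plausible ranges are assumptions chosen for illustration, and a human data manager would still review every query before it is sent.

```python
# Hedged sketch of one narrow, high-impact starting point: automated
# range checks that draft data-clarification queries for human review.
# The ranges and column names are illustrative assumptions.
import pandas as pd

RANGES = {"systolic_bp": (70, 200), "heart_rate": (40, 180)}

def draft_queries(df: pd.DataFrame) -> list[str]:
    queries = []
    for col, (lo, hi) in RANGES.items():
        flagged = df[(df[col] < lo) | (df[col] > hi)]
        for _, row in flagged.iterrows():
            queries.append(
                f"Subject {row['subject_id']}: {col}={row[col]} outside "
                f"expected range [{lo}, {hi}]. Please confirm or correct."
            )
    return queries  # a data manager reviews these before sending

data = pd.DataFrame({
    "subject_id": ["001", "002", "003"],
    "systolic_bp": [118, 250, 95],
    "heart_rate": [72, 64, 30],
})
for q in draft_queries(data):
    print(q)
```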

Don't Forget The Passengers: How People Experience The Journey

In the rush to adopt AI across clinical development, organizations often concentrate on the technology itself while overlooking the people who will actually use it. This disconnect can undermine even the most promising tools. Executives may devote time to evaluating technical capabilities without planning how the new systems will integrate into existing workflows or impact teams doing the daily work. Procurement and the business may select tools that are not fully calibrated to the regulatory and contractual rigor of clinical trials. Without clear alignment to real-world processes and stakeholder needs, the result is often poor adoption, limited return on investment, and resistance from the very teams expected to drive success.

Commercial and clinical groups often face the greatest friction when people are not properly prepared. There is often no structured plan to upskill teams or give them the context to understand where AI fits into their work. Clinical professionals are increasingly expected to apply AI to areas like protocol development, risk monitoring, or data cleaning, yet few have the training or established frameworks to do so effectively. At the same time, commercial teams may be measured on performance gains from AI without clear strategies for how to implement or promote the tools internally. This disconnect slows progress and places undue burden on already stretched teams.

A more effective approach starts by aligning people and purpose. Teams need support not only in how to use AI tools but also in understanding what questions to ask and how to interpret outputs within the broader goals of the trial. Successful onboarding programs include incentives for early adopters, such as recognition, pilot ownership, or professional development tracks that tie directly to AI-enabled roles. Organizations should also build formal channels for clinical teams to offer feedback on AI tools and their effects on participant experience and trial delivery. This input ensures that the technology evolves in line with frontline needs and fosters a culture of shared learning. In clinical research, the success of AI is not just defined by algorithms and analytics but by the people who shape its application, challenge its limits, and ultimately turn innovation into outcomes.

Navigating Smart, Not Fast

The promise of AI in clinical trials is real, but only for those who navigate the journey with care. The road is full of unseen hazards: biased data, shifting regulations, ethical dilemmas, runaway costs, evolving technology, and people not yet ready for the ride.

To reach the destination of faster trials, better outcomes, and smarter science, stakeholders across the ecosystem must:

  • Map the terrain.
  • Choose reliable vehicles (tools, processes, and partners).
  • Watch for signs of trouble.
  • Course-correct quickly when necessary.

Those who drive with foresight, flexibility, and ethics at the wheel will arrive not only faster but far more safely and successfully.

About The Authors:

Tanisha Patel  is a global R&D clinical procurement professional with experience in sourcing, governance, project management, and category management in mid-sized and large pharma and biotechs. As an SME and superuser for procurement platforms, she has led the implementation and testing of automation in key modules of procurement platforms. In her category management roles, she has negotiated contracts with clinical technology suppliers and governed the partnerships. In collaboration with regulatory authorities, legal, quality, governance, and due diligence teams, she has critically analyzed AI capabilities and legal and regulatory requirements, as well as developed appropriate contractual language and guidance to support clinical trial teams throughout the study lifecycle to submission.

Vincent Puglia is a consultant and expert in clinical trial technology and operations, with over 20 years of experience in regulated environments. His extensive career has provided him with a comprehensive perspective, having worked across sponsors, providers, and sites to harmonize business, clinical, and technical priorities. With a strong focus on innovation and the goal of creating a more unified continuum of user and stakeholder experience, he has led tech-driven initiatives involving AI and automation, IRT and logistics systems, and integrations with pharmacy platforms and enterprise applications. Vincent has held a range of roles, spanning leadership positions to hands-on contributions, at leading sponsor and technology provider organizations, and earned his Bachelor of Science degree from DeSales University.
