Guest Column | July 15, 2025

How To Avoid Hazards And Map A Safer, Smarter Path For AI, Part 1

By Vincent Puglia and Tanisha Patel


Whether you find it innately compelling or are compelled to get there, adopting AI for clinical trials is a journey. On this path, the business must plan for eventualities, manage risk, and help trial participants, sites, sponsors, and providers arrive at the benefits and treatments on or ahead of schedule.

Even with a clear destination and real rewards in sight, the path is full of hidden potholes, unexpected detours, and costly wrong turns. AI offers the promise of faster, smarter, and more inclusive clinical trials, but organizations that don’t prepare risk falling into traps that delay progress, inflate costs, or even derail programs. The key is anticipating obstacles, understanding risk, and choosing smarter routes. Each decision point presents opportunities to smooth the ride or stumble into setbacks.

Let us identify some of the obstacles and detours that might lie ahead and consider ways to move forward.

The Rocky Road Of Immature Tools And Incomplete Data

AI, automation, and machine learning offer the potential to transform clinical trials by accelerating timelines, enabling deeper insights, and supporting broader patient access. However, these benefits are not automatic. Many organizations enter the AI landscape without sufficient preparation, leading to avoidable setbacks. Challenges such as poor data quality, inconsistent standards, and unstructured formats often cause these advanced tools to fail before they gain traction.

Across the organization, different stakeholders face specific risks. Executives may assume AI tools are ready to use out of the box, which results in disappointment and eroded trust. Procurement teams might move forward without validating whether the organization’s data or the vendor’s solution is ready. Commercial timelines can be thrown off by unrealistic expectations of AI-driven acceleration, and clinical teams often struggle when AI tools are introduced without proper workflow integration.

A proper approach to AI integration involves early investment in data cleaning and standardization, selecting narrow but high-value AI use cases, and conducting cross-functional readiness assessments before scaling. Foundational agreements, such as MSAs, should address evolving questions around intellectual property, learning cycles, and how platform improvements are shared or protected. As the roles of sponsors and providers converge, especially around data ownership and flow, clear responsibilities and governance models become essential. Success with AI is less about the tools themselves and more about designing a well-prepared, collaborative path forward.
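To make the data readiness step concrete, the sketch below flags missing fields, incomplete values, and nonstandard units in a trial extract before any AI tooling is layered on top. It is a minimal illustration only; the column names, expected unit, and threshold are assumptions for this example, not a prescribed standard.

import pandas as pd

# Minimal sketch of a pre-AI data readiness check (hypothetical columns and thresholds).
REQUIRED_COLUMNS = ["subject_id", "visit_date", "biomarker_value", "biomarker_unit"]
EXPECTED_UNIT = "ng/mL"          # assumed standard unit for this illustration
MAX_MISSING_FRACTION = 0.05      # readiness threshold chosen for illustration only

def assess_readiness(df: pd.DataFrame) -> dict:
    """Return simple readiness metrics before committing to an AI use case."""
    report = {}
    report["missing_columns"] = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    present = [c for c in REQUIRED_COLUMNS if c in df.columns]
    report["missing_fraction"] = float(df[present].isna().mean().max()) if present else 1.0
    if "biomarker_unit" in df.columns:
        report["nonstandard_units"] = sorted(set(df["biomarker_unit"].dropna()) - {EXPECTED_UNIT})
    report["ready"] = (
        not report["missing_columns"]
        and report["missing_fraction"] <= MAX_MISSING_FRACTION
        and not report.get("nonstandard_units")
    )
    return report

# Example: readiness = assess_readiness(pd.read_csv("trial_extract.csv"))

A readiness report like this is useful precisely because it forces the cross-functional conversation early: if the extract fails the check, the gap belongs to data standardization, not to the AI vendor.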

Ethical Sinkholes: What Happens When AI Finds Something Unexpected?

As AI tools advance, they accelerate processes as well as uncover insights that were not originally anticipated. In some cases, post-trial analysis may reveal significant health-related findings, such as a correlation between a biomarker and a disease. These discoveries can raise complex ethical and legal questions. What should an organization do if AI uncovers something important after a trial has ended? Who is responsible for determining whether to inform participants, and how should that be managed? Without clear policies, organizations risk falling into ethical and operational uncertainty where the cost of inaction can include legal liability, regulatory attention, or reputational damage.

Each stakeholder group faces specific challenges. Executives may be held accountable if post-study findings are withheld or mishandled. Procurement teams often miss the opportunity to include contract language that governs suppliers’ secondary data use and post-study obligations. Commercial leaders may underestimate the impact of disclosure policies on public trust, brand positioning, and even regulatory scrutiny. Clinical teams, without guidance or standard operating procedures, may be unsure how to handle participant recontact or assess new risk signals.

To navigate this terrain more effectively, organizations should establish clear ethical policies, including defined recontact protocols. They should also implement metadata repositories to preserve data linkages beyond the trial period and ensure that contracts include clauses that address responsibilities for follow-up findings. Building this readiness helps ensure that the benefits of AI are realized without falling into avoidable ethical pitfalls.
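One way to picture the metadata repository idea is a minimal record that preserves the linkage between a pseudonymized participant, the frozen dataset an analysis ran against, and the finding that may trigger a recontact decision. The fields below are illustrative assumptions, not a regulatory or CDISC schema.

from dataclasses import dataclass, field
from datetime import date

# Minimal sketch of a post-trial metadata record preserving data linkages
# (field names are illustrative, not a regulatory schema).
@dataclass
class PostTrialFinding:
    study_id: str
    participant_pseudonym: str      # links back to site-held identity, never the identity itself
    dataset_version: str            # which locked dataset the AI analysis ran against
    finding_summary: str            # e.g., "biomarker X correlated with condition Y"
    identified_on: date
    recontact_required: bool = False
    recontact_rationale: str = ""
    reviewed_by: list[str] = field(default_factory=list)  # ethics and medical reviewers

# Example record showing how a linkage might be preserved beyond study close-out:
example = PostTrialFinding(
    study_id="STUDY-001",
    participant_pseudonym="SUBJ-0042",
    dataset_version="locked-2024-10-01",
    finding_summary="Elevated biomarker flagged by post-hoc AI analysis",
    identified_on=date(2025, 3, 12),
    recontact_required=True,
    recontact_rationale="Meets predefined clinical significance threshold",
    reviewed_by=["medical monitor", "ethics committee liaison"],
)

The point is not the specific fields but the discipline: if the linkage and the review trail are captured at the time of the finding, the recontact decision rests on policy rather than on reconstruction after the fact.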

The Toll Booth Trap: Underestimating Usage Costs And Ownership Risks

The “toll booth” trap of AI usage costs can catch organizations by surprise. Many AI vendors use pay-per-use models, where you pay for each API call or token processed, and this initially looks manageable. In reality, those small tolls add up quickly. Costs don’t rise in a neat linear fashion; they often spike faster than expected as usage grows. In fact, industry surveys have found that introducing AI workloads can drive cloud bills roughly 30% higher than anticipated. For example, hosting a single large language model in the cloud (with continuous use) can cost up to $20,000 per month. Moreover, many advanced LLMs require cloud access, meaning each use not only incurs fees but also sends sensitive data off-site. That poses a compliance and privacy risk for clinical trial systems if patient data is involved. What seemed like a simple SaaS tool can thus turn into unpredictable bills and new data governance headaches, undermining the expected ROI of the AI initiative. These hidden costs and risks can trip up stakeholders across the board.

Executives may underestimate the long-term total cost of ownership, focusing on up-front subscription fees and neglecting how ongoing usage charges and stringent data safeguards will compound over time. Procurement teams might choose a vendor without negotiating scalability safeguards, only to find that once they exceed a certain number of API calls or transactions, extra fees kick in unexpectedly. For commercial managers, such volatility makes budgeting a nightmare, as AI-driven features introduce variable costs that complicate launch planning and marketing spend. Even clinical project teams feel the pinch: If the contract or platform imposes limits on API calls or compute hours to control expenses, the tool’s usage may be throttled, reducing its effectiveness in practice.

To avoid this trap, organizations need a better route. Forecasting realistic AI usage patterns before deployment is crucial. For instance, projecting how many queries or analyses a clinical trial might generate can help you anticipate worst-case costs. Armed with these estimates, companies should negotiate flexible pricing tiers with hard caps or volume discounts to prevent runaway bills. In other words, ensure the vendor agreement has cost ceilings or tiered rates so the meter effectively stops or prices drop once you hit a certain usage level. It’s also wise to weigh the build-vs.-buy decision with an eye on control: an in-house or on-premises LLM can demand more up-front investment, but it offers more predictable costs over time (no surprise token fees) and keeps sensitive data in-house, benefits that can pay off if usage is heavy and continuous.
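To make the forecasting step concrete, here is a small sketch that projects monthly token spend from assumed usage and compares it against a negotiated cost ceiling. Every rate and volume below is a placeholder assumption; actual vendor pricing and trial workloads will differ.

# Minimal sketch of AI usage-cost forecasting (all rates and volumes are assumptions).
PRICE_PER_1K_TOKENS = 0.01        # hypothetical blended input/output rate, USD
TOKENS_PER_QUERY = 3_000          # assumed average prompt plus completion size
QUERIES_PER_USER_PER_DAY = 40     # assumed usage by a clinical data reviewer
NEGOTIATED_MONTHLY_CAP = 15_000   # hard cost ceiling agreed with the vendor, USD

def projected_monthly_cost(users: int, working_days: int = 21) -> float:
    """Project monthly spend under the assumed usage pattern."""
    monthly_tokens = users * QUERIES_PER_USER_PER_DAY * working_days * TOKENS_PER_QUERY
    return monthly_tokens / 1_000 * PRICE_PER_1K_TOKENS

for users in (25, 200, 800):  # pilot, scaled rollout, worst case
    cost = projected_monthly_cost(users)
    status = "within cap" if cost <= NEGOTIATED_MONTHLY_CAP else "EXCEEDS CAP"
    print(f"{users:>4} users: ~${cost:,.0f}/month ({status})")

Even a back-of-the-envelope model like this shows why the cap matters: the pilot may cost a few hundred dollars a month while the worst-case rollout blows past the ceiling, and it is far easier to negotiate that ceiling before the contract is signed.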

By planning for scalability and securing the right pricing model, organizations can harness AI in clinical trials without falling into an open-ended toll booth of usage fees or exposing themselves to undue data risks.

The Detour Of Human Behavior: Resistance To Change And AI Misuse

Resistance to AI integration in the life sciences sector often stems from human behavior rather than technical limitations. Industry reports show that only about 24 percent of proof-of-concept AI projects in pharmaceutical companies reach production, compared to 35 percent among healthcare providers. This discrepancy highlights that despite strong pilot activity, final adoption frequently falls short. As reported in the Pharmacy Times, a recent Gartner survey found that while 72 percent of life sciences organizations have at least one generative AI use case in production and 30 percent deploy six or more, over half of pharmacy groups abandon AI programs due to insufficient budgets and lack of user readiness.

When executives do not allocate sufficient resources to change management, and procurement fails to require enablement in vendor SLAs, friction arises. Commercial teams may misinterpret AI-driven insights, reducing their strategic value, while clinical staff may misuse tools or rely on them excessively. Such behavior could lead to regulatory compliance breaches or compromised data integrity, which are particularly critical in regulated clinical settings.

To avoid this detour, life sciences organizations must treat AI literacy and change management as central to implementation rather than optional. Training must be contextual and woven into workflows. For example, leading pharmaceutical companies like Johnson & Johnson, Merck, and Eli Lilly have mandated generative AI training for tens of thousands of staff, ensuring consistent and responsible adoption. Embedding human-in-the-loop design ensures that final decisions remain with trained professionals, aligning with documented best practices for clinical safety. It is also essential to cultivate change champions across functions who can model correct usage, troubleshoot early challenges, and sustain momentum. By combining foundational training investments, thoughtful system design, and peer leadership, life sciences companies can transform AI from a theoretical innovation into a practical, trusted tool for clinical research.
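As a simple illustration of human-in-the-loop design, the sketch below shows an AI-generated suggestion that is only actioned after explicit sign-off by a trained reviewer, with the decision attributed to that person. The record type, function, and workflow are assumptions for illustration, not a reference implementation.

from dataclasses import dataclass

# Minimal sketch of a human-in-the-loop gate: AI output is a suggestion,
# and a trained professional makes the final decision (names are illustrative).
@dataclass
class AISuggestion:
    record_id: str
    proposed_action: str
    model_confidence: float

def apply_with_human_review(suggestion: AISuggestion, reviewer: str, approved: bool) -> dict:
    """Only approved suggestions are actioned; every decision is attributed to a person."""
    return {
        "record_id": suggestion.record_id,
        "action_taken": suggestion.proposed_action if approved else "no action",
        "decided_by": reviewer,            # accountability stays with the professional
        "model_confidence": suggestion.model_confidence,
        "ai_assisted": True,
    }

# Example: a data manager reviews a model's query-flag suggestion before it is applied.
decision = apply_with_human_review(
    AISuggestion("VISIT-1093", "flag adverse event record for query", 0.82),
    reviewer="clinical data manager",
    approved=True,
)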

The AI Road Trip Continues

Investing in the organization, its tools, and its people is what makes AI implementation succeed; veer off that route, and the journey toward the final destination becomes treacherous. Join us in part two, "How To Avoid Hazards And Map A Safer, Smarter Path For AI, Part 2," to dive into the dangers of biased AI models, the need for transparency, clarity of AI regulations and IP ownership in sponsor-AI supplier partnerships, the art of decision-making when investing in AI, and the important role people play in harnessing the power of AI appropriately.

About The Authors:

Tanisha Patel is an established global R&D clinical procurement professional with experience in sourcing, governance, project management, and category management in mid-sized and large pharma and biotechs. As an SME and superuser for procurement platforms, she has led the implementation and testing of automation in key modules of procurement platforms. In her category management roles, she has negotiated contracts with clinical technology suppliers and governed the partnerships. In collaboration with regulatory authorities, legal, quality, governance, and due diligence teams, she has critically analyzed AI capabilities and legal and regulatory requirements, as well as developed appropriate contractual language and guidance to support clinical trial teams throughout the study lifecycle to submission.

Vincent Puglia is a consultant and expert in clinical trial technology and operations, with over 20 years of experience in regulated environments. His extensive career has provided him with a comprehensive perspective, having worked across sponsors, providers, and sites to harmonize business, clinical, and technical priorities. With a strong focus on innovation and the goal of creating a more unified continuum of user and stakeholder experience, he has led tech-driven initiatives involving AI and automation, IRT and logistics systems, and integrations with pharmacy platforms and enterprise applications. Vincent has held a range of roles, spanning leadership positions to hands-on contributions, at leading sponsor and technology provider organizations, and earned his Bachelor of Science degree from DeSales University.