The Risk And Reward Of Clinical Investigators Integrating AI
By Kimberly Chew, Esq., Colleen Pert, Esq., and Kathleen Snyder, Esq., Husch Blackwell LLP

AI — whether that’s machine learning (ML), deep learning (DL), natural language processing (NLP), computer vision (CV), or generative AI — has become an increasingly popular tool to facilitate clinical trials. According to the FDA, AI1 has been used to make inferences regarding the safety and effectiveness of drugs, inform the design and efficiency of clinical trials, and extract and organize information from electronic health records to identify good clinical trial candidates.2
AI will continue to play a significant role in the modernization of clinical trials. For example, the FDA is using AI to analyze data in decentralized clinical trials (DCTs), digital health technologies (DHTs), and more.3 Clinical trials are shifting toward decentralization to improve patient accessibility and engagement by utilizing technologies like telemedicine, wearable devices, and remote monitoring, which collect data automatically and allow participants to take part from their homes rather than traveling to centralized trial sites. With the ability to gather extensive data remotely, DCTs and DHTs hold “a great potential” to streamline clinical trials, expand the reach of trials, and reduce the burden on participants.4 The resulting high-quality, structured data will facilitate the use of AI tools. Clinicians in DCTs are increasingly leveraging AI tools to address the unique challenges of remote trial management and patient monitoring. AI-driven technologies enable real-time data collection and analysis from wearable devices, mobile apps, and other remote monitoring tools, allowing clinicians to track patient health and protocol adherence without requiring frequent in-person visits.5 In short, AI tools can improve overall trial efficiency, allowing for wider patient access and faster trial completion while maintaining data quality.6
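To make the remote-monitoring point concrete, the Python sketch below shows one simple way a monitoring pipeline might flag out-of-range wearable readings for clinician review. It is a minimal illustration only: the rolling z-score rule, the thresholds, and the heart-rate values are our own assumptions, not any vendor's method.

```python
import statistics
from collections import deque

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Flag readings that deviate sharply from a rolling baseline.

    `readings` is an iterable of numeric values (e.g., heart rate in bpm)
    streamed from a wearable device. Returns (index, value) pairs that fall
    more than `z_threshold` standard deviations from the trailing window.
    """
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.stdev(history)
            if stdev > 0 and abs(value - mean) / stdev > z_threshold:
                flagged.append((i, value))
        history.append(value)
    return flagged

# Example: an invented resting heart-rate stream with one abrupt spike.
stream = [62, 64, 63, 61, 65, 63, 62, 64, 63, 62] * 3 + [118]
print(flag_anomalies(stream, window=10))  # -> [(30, 118)]
```

In a real deployment, a flag like this would route the reading to a clinician for review rather than trigger any automated clinical decision.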
However, clinical investigators who lack proper support services for administering clinical trials may increasingly depend on AI-enabled tools to manage and conduct these trials efficiently. As a result, private physicians and hospitals may increasingly use AI tools to analyze large data sets, improve diagnostic accuracy, streamline operations, and automate routine tasks.7 Challenges, however, include ensuring that the AI is safe, transparent, and effective. Currently, the “lack of ubiquitous, uniform standards for medical data and algorithms impedes system interoperability and data sharing.”8
As AI becomes increasingly integrated into clinical trials, clinical investigators must navigate a complex regulatory landscape. As part one of this series on clinical investigators’ use of AI, this article surveys the relevant regulatory frameworks and explores the risks and liabilities of using AI. Part two offers tips for clinical investigators on overseeing AI use and securing informed consent.
State, Federal, And International Impacts On AI Use In Clinical Trials
A variety of evolving frameworks regulate AI at the state, federal, and international levels.
State Frameworks
Many states have pending legislation that could affect the use of AI in certain healthcare settings, such as clinical trials. As these legislative efforts progress, some states have already enacted AI-specific regulations, such as California’s Artificial Intelligence in Healthcare Services Bill (AB 3030),9 Colorado’s Consumer Protections for Artificial Intelligence (SB24-205),10 and Utah’s Artificial Intelligence Policy Act, enacted in May 2024. AB 3030 requires that any AI-generated communications involving patient clinical information in California healthcare facilities include a disclaimer about the use of generative AI and instructions for contacting a human healthcare provider. AB 3030 may therefore impact clinical trials that use AI-enabled tools by necessitating additional compliance measures, such as including those disclaimers in AI-generated communications. Colorado’s SB24-205 introduces a robust framework for regulating high-risk AI systems, including those used in clinical trials. Utah’s law requires disclosures when consumers interact with AI systems in regulated occupations; healthcare professionals are among those subject to the legislation.11
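As a loose illustration of the kind of compliance measure AB 3030 contemplates, the hypothetical Python sketch below appends a generative-AI disclaimer and a human contact point to an AI-drafted patient message. The disclaimer wording, function names, and contact details are placeholders, not the statutory language, and actual compliance turns on the bill's text.

```python
# Illustrative only: the disclaimer text and contact details below are
# placeholders, not the language required by AB 3030.
GENAI_DISCLAIMER = (
    "This message was generated by artificial intelligence. "
    "To speak with a human health care provider, call {contact}."
)

def wrap_patient_message(ai_text: str, human_contact: str) -> str:
    """Append a generative-AI disclaimer to an AI-drafted patient message."""
    return f"{ai_text}\n\n{GENAI_DISCLAIMER.format(contact=human_contact)}"

print(wrap_patient_message(
    "Your enrollment visit is confirmed for Tuesday at 10 a.m.",
    human_contact="(555) 010-0199",  # hypothetical site phone number
))
```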
Federal Framework
On the federal level, the FDA’s January 2025 draft guidance offers considerations for AI across the drug product life cycle via a proposed risk-based credibility assessment framework.12 This guidance focuses on the use of AI models to produce information intended to support regulatory decision-making regarding the safety, effectiveness, or quality of drugs.13 Despite the recency of this draft guidance, however, the federal regulatory picture and its enforceability remain fluid, influenced by ongoing developments within the FDA and potential shifts in priorities under the current administration. These changes may lead to further refinements or updates to regulatory approaches as the agency adapts to the evolving landscape of AI and its integration into healthcare.
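As a rough sketch of the risk-based idea, the snippet below scores an AI model using the two factors the draft guidance discusses: model influence (how much the AI output drives the decision) and decision consequence (how serious an error would be). The numeric scales, thresholds, and tier descriptions are our own simplification for illustration, not FDA's framework.

```python
# Simplified triage inspired by the draft guidance's two risk factors.
# The three-level scales and the tier guidance strings are our own
# illustrative assumptions, not FDA language.
def model_risk(influence: str, consequence: str) -> str:
    levels = {"low": 1, "medium": 2, "high": 3}
    score = levels[influence] * levels[consequence]
    if score >= 6:
        return "high risk: plan rigorous credibility activities"
    if score >= 3:
        return "medium risk: targeted credibility evidence"
    return "low risk: document context of use and rationale"

# An AI model whose output is the sole basis for a patient-safety decision:
print(model_risk(influence="high", consequence="high"))
```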
International Framework
Alongside U.S. laws regulating AI in healthcare, the World Health Organization (WHO) issued guidance on the ethics and governance of artificial intelligence for health on January 18, 2024.14 The guidance offers recommendations, grounded in guiding principles, for governance by companies and governments and through international collaboration. These principles and recommendations consider the unique ways humans can utilize generative AI for health.
Arguably, the most significant international regulation of AI is the European Union’s Artificial Intelligence Act (EU AI Act).15 Much like the GDPR, the EU AI Act will affect U.S. companies doing business in the EU. The act provides a framework that categorizes AI applications by risk level and requires transparency about the use of AI. This risk-based approach is designed to protect safety and fundamental rights while promoting innovation. The categories are unacceptable, high, limited, and minimal risk; unacceptable-risk use cases are banned in the EU.16,17 For high-risk AI classifications related to healthcare, there may be regulatory crossover with software regulated as a medical device. With regard to clinical trials, researchers and sponsors should assess whether the AI deployed in a trial is high or low risk, and they will need to be transparent about the use of AI in their trials.
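By way of illustration, the short Python sketch below maps the act's four tiers to one-line summaries of their obligations. The summaries are simplified paraphrases, the tool name is invented, and assigning a real tool to a tier requires legal analysis of the act and its annexes.

```python
from enum import Enum

class AIActTier(Enum):
    # One-line paraphrases of the obligations; not the act's own language.
    UNACCEPTABLE = "prohibited in the EU"
    HIGH = "conformity assessment, risk management, human oversight"
    LIMITED = "transparency duties (e.g., disclose AI interaction)"
    MINIMAL = "no specific obligations under the act"

# Hypothetical triage record for an AI tool used in a trial; the tier
# assignment itself is a legal judgment, not a lookup.
trial_tool = {"name": "adverse-event triage model", "tier": AIActTier.HIGH}
print(f"{trial_tool['name']}: {trial_tool['tier'].value}")
```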
Identifying And Resolving Potential Liability With AI
In addition to regulations governing AI use, clinical investigators must adhere to data protection regimes such as HIPAA and the GDPR to protect sensitive patient data. AI systems handle large data sets, which are attractive targets for unauthorized access because of the valuable personal or proprietary information they contain. The sheer volume of data can also make it challenging to monitor and secure, increasing the risk of breaches. While AI-enabled tools can enhance efficiency, data analysis, and decision-making, they can also introduce or amplify vulnerabilities if not properly secured. If an AI-enabled tool processes, stores, or transmits sensitive clinical trial data (e.g., patient information, trial results), it must be protected by robust data security measures.
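As one example of such a measure, the sketch below encrypts a trial record at rest using the third-party Python `cryptography` package. The record contents are invented, and a real deployment would draw the key from a managed secrets store rather than generating it inline as this minimal example does.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key must come from a managed secrets store and must
# never be generated and held alongside the data as it is here.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"participant_id": "SUBJ-0042", "hr_avg": 63}'  # invented data
token = cipher.encrypt(record)    # ciphertext is safe to store at rest
restored = cipher.decrypt(token)  # authorized read path
assert restored == record
```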
If an AI security incident occurs, the clinical investigator should take immediate and appropriate steps to address it, including assessing the scope of the incident, isolating the affected systems where possible, documenting the incident, and informing the sponsor, IRB, and regulatory authorities. Establishing a comprehensive data breach response plan that outlines the steps to be taken during a security incident is advised. Investigators must also coordinate with AI tool providers to clearly understand the providers’ role in data security and the specific assistance they will offer in the event of a breach; this collaboration ensures that all parties are aligned on their responsibilities before an incident occurs. Regular drills or simulations should be considered to test the plan’s effectiveness, allowing organizations to identify weaknesses and make necessary adjustments. By proactively preparing for potential data breaches, organizations can mitigate risks and minimize the impact of such incidents on their operations and reputation.
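A response plan can be operationalized in tooling as well as on paper. The hypothetical Python sketch below encodes the steps above as a checklist attached to an incident record; the step wording, class names, and example incident are illustrative assumptions, not a prescribed workflow.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentLog:
    """Minimal incident record mirroring the response steps above."""
    description: str
    opened_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    steps_completed: list[str] = field(default_factory=list)

    # Plain class attribute (not a dataclass field); illustrative wording.
    RESPONSE_STEPS = (
        "assess scope of the incident",
        "isolate affected systems where possible",
        "document findings and actions",
        "notify sponsor, IRB, and regulatory authorities",
        "engage AI tool provider per support agreement",
    )

    def complete(self, step: str) -> None:
        assert step in self.RESPONSE_STEPS, f"unknown step: {step}"
        self.steps_completed.append(step)

log = IncidentLog("Unauthorized access alert on remote-monitoring database")
log.complete("assess scope of the incident")
print(log.steps_completed)
```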
Data management practices that comply with regulatory standards require the clinical investigator to understand the types of data being collected, whether they constitute protected health information (PHI), and the processes for anonymization. It is important to understand that, despite anonymization efforts, a risk of de-anonymization (re-identification) remains, which could lead to liability. Investigators should implement strategies to minimize these risks, ensure participants are informed, and adopt data governance frameworks.
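To illustrate both the technique and its limits, the sketch below pseudonymizes a direct identifier with a keyed hash using only the Python standard library. As the comments note, anyone holding both the key and the data set can re-link pseudonyms to identities, which is one concrete reason "anonymized" data can still carry re-identification risk. The identifier and key are placeholders.

```python
import hashlib
import hmac

# The secret key must be stored separately from the data set; anyone
# holding both can re-link pseudonyms to identities, one reason
# "anonymized" data still carries re-identification risk.
SECRET_KEY = b"replace-with-key-from-a-secrets-manager"  # placeholder

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym."""
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

print(pseudonymize("jane.doe@example.com"))  # same input -> same pseudonym
```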
Conclusion
The integration of AI technologies into clinical trials presents transformative opportunities to enhance efficiency, accessibility, and data quality. From enabling decentralized clinical trials to improving patient monitoring, AI tools are reshaping the landscape of clinical research. However, as these technologies become more prevalent, clinical investigators must navigate a complex web of regulatory frameworks, data security challenges, and ethical considerations. State, federal, and international regulations, such as California’s AB 3030, the FDA’s draft guidance, and the EU AI Act, highlight the need for compliance and transparency in AI use. Additionally, the importance of robust data governance and proactive strategies to address potential liabilities cannot be overstated.
Part 2 of this series, “7 Steps for Sites and PIs to Implement a Robust AI Governance System,” will provide practical guidance for clinical investigators on how to oversee the use of AI in clinical trials. It will delve into strategies for securing informed consent, managing AI-driven tools responsibly, and fostering collaboration with AI providers with a focus on compliance and ethical use. Stay tuned for actionable insights to help navigate the evolving AI landscape in clinical research.
References:
- “The Role of Artificial Intelligence in Clinical Trial Design and Research with Dr. El Zarrad,” Food and Drug Administration, Q&A with FDA Podcast, May 30, 2024, https://www.fda.gov/drugs/news-events-human-drugs/role-artificial-intelligence-clinical-trial-design-and-research-dr-elzarrad
- Id.
- Id.
- Id.
- Askin S, Burkhalter D, Calado G, El Dakrouni S. Artificial Intelligence Applied to clinical trials: opportunities and challenges. Health Technol (Berl). 2023;13(2):203-213. doi: 10.1007/s12553-023-00738-2. Epub 2023 Feb 28. PMID: 36923325; PMCID: PMC9974218.
- Goldberg JM, Amin NP, Zachariah KA, Bhatt AB. The Introduction of AI Into Decentralized Clinical Trials: Preparing for a Paradigm Shift. JACC Adv. 2024 Jul 5;3(8):101094. doi: 10.1016/j.jacadv.2024.101094. PMID: 39070092; PMCID: PMC11277430.
- “Bipartisan House Task Force Report on Artificial Intelligence,” US House of Representatives, December 2024, P. 19, https://republicans-science.house.gov/index.cfm?a=Files.Serve&File_id=AA2EE12F-8F0C-46A3-8FF8-8E4215D6A72B
- Id.
- “AB-3030 Health Care Services: Artificial Intelligence,” https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB3030
- “SB24-205 Consumer Protections for Artificial Intelligence,” https://leg.colorado.gov/bills/sb24-205
- U.C.A. 1953 § 13-11-4
- “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products Guidance for Industry and Other Interested Parties,” Food and Drug Administration, January 2025, P. 8, https://www.fda.gov/media/184830/download
- Id.
- “Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models,” World Health Organization, January 18, 2024, https://iris.who.int/bitstream/handle/10665/375579/9789240084759-eng.pdf?sequence=1
- Regulation (EU) 2024/1689 (EU Artificial Intelligence Act), https://eur-lex.europa.eu/eli/reg/2024/1689
- “EU AI Act: First Regulation on Artificial Intelligence,” European Parliament, https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- “Artificial Intelligence Act,” European Parliamentary Research Service, 2021, https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf
About The Experts:
Kimberly Chew is senior counsel in Husch Blackwell LLP’s virtual office, The Link. Chew is a seasoned professional with a rich background in biotech research, leveraging her extensive experience to guide clients through the intricate landscape of clinical trials, FDA regulations, and academic research compliance. As the co-founder and co-lead of the firm’s Psychedelic and Emerging Therapies practice group, Kimberly is particularly inspired by the potential of psychedelic therapeutics to address mental health conditions like PTSD. Her practice encompasses regulatory due diligence and intellectual property enforcement, particularly in patent infringement and validity.
Colleen Pert is an associate at Husch Blackwell LLP in the Houston, Texas office. Before attending law school, Colleen earned a master’s degree in healthcare administration from Texas Tech University. With family members in the healthcare industry, the practice area was a natural fit. Internships with Baylor College of Medicine’s Office of Risk Management and United Regional Health Care System solidified her understanding that clients seek clear, concise communication and value-driven performance. Colleen focuses her practice on healthcare regulatory counseling.
Kathleen Snyder is a senior counsel at Husch Blackwell where she runs the firm’s AI in Healthcare Working Group. Based in Boston, Kathleen practices at the intersection of healthcare and technology, providing clients with practical legal advice on AI Governance, strategic technology and commercial contracts, data strategies, intellectual property, and regulatory interpretation. With 20+ years of experience in the healthcare industry, Kathleen has an intrinsic understanding of the healthcare landscape. Her technology-focused transactional practice, coupled with her regulatory experience, gives her a unique perspective that allows her to provide holistic legal advice to clients ranging from seed-stage start-ups to large academic health centers.