Guest Column | May 1, 2025

7 Steps For Clinical Investigators To Implement A Robust AI Governance System

By Kimberly Chew, Esq., Colleen Pert, Esq., and Kathleen Snyder, Esq., Husch Blackwell LLP


In part one of this series on clinical investigators' use of AI, “The Risk And Reward Of Clinical Investigators Integrating AI,” we discussed the relevant regulatory frameworks and the risks and liabilities of using AI.

With those challenges and considerations in mind, clinical investigators must implement a robust AI governance system to mitigate risks such as data security and privacy breaches. This involves a combination of technical, procedural, and organizational measures that safeguard sensitive patient information and ensure the AI tools used in clinical trials operate within regulatory guidelines.

If a clinical investigator is using an AI tool and relying on the company that provides it for much of the AI governance, there are still several important steps the investigator should take to ensure responsible, ethical, and compliant use of the tool, to protect patient data, and to maintain trust. These steps include:

1. Understand the AI Tool and Its Capabilities

  • Conduct a thorough review of the AI tool(s): Understand the intended use, limitations, and risk categories. This includes reviewing documentation provided by the company, such as user manuals, risk assessments, and performance metrics.
  • Confirm regulatory compliance: Verify that the AI tool complies with relevant healthcare regulations and standards (e.g., HIPAA in the U.S.; the EU AI Act and GDPR in the EU; and FDA or EMA guidance on AI in medical devices).
  • Assess bias and fairness: Understand the data the AI tool was trained on and whether it reflects diverse populations, to avoid biased outcomes (a minimal subgroup check is sketched after this list).
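
For sites with technical support staff, a first-pass fairness check can be as simple as comparing the tool's agreement with clinician judgment across patient subgroups. The Python sketch below illustrates the idea only; the subgroup labels, field names, and records are hypothetical placeholders, not the output format of any particular tool.

    # Illustrative sketch: compare an AI tool's agreement with clinician
    # judgment across patient subgroups. All names and records are hypothetical.
    from collections import defaultdict

    # Each record pairs the AI tool's output with the clinician's adjudication.
    records = [
        {"subgroup": "female_18_40", "ai_output": "eligible", "clinician": "eligible"},
        {"subgroup": "female_18_40", "ai_output": "ineligible", "clinician": "eligible"},
        {"subgroup": "male_65_plus", "ai_output": "eligible", "clinician": "eligible"},
        {"subgroup": "male_65_plus", "ai_output": "eligible", "clinician": "eligible"},
    ]

    totals, agree = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["subgroup"]] += 1
        agree[r["subgroup"]] += int(r["ai_output"] == r["clinician"])

    # A large gap in agreement rates between subgroups is a signal to revisit
    # the tool's training data before relying on its outputs.
    for g in sorted(totals):
        print(f"{g}: {agree[g] / totals[g]:.0%} agreement across {totals[g]} cases")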

2. Develop and Implement AI-Specific Policies

  • Define which AI tools are allowed: Specify which AI tools can be used for the trial and the scope of their use (see the registry sketch after this list).
  • Define roles and responsibilities: Clearly outline who is responsible for overseeing the use of the AI tool, interpreting its outputs, and making final clinical decisions.
  • Establish accountability: Ensure that clinical decisions remain the responsibility of the physician and are not solely reliant on AI outputs.
  • Set usage guidelines: Define when and how the AI tool should be used in clinical practice, including any specific scenarios where it should not be relied upon.
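
One practical way to make such a policy enforceable is a machine-readable registry of approved tools, versions, and permitted uses that staff or site software can check before use. The Python sketch below is a minimal illustration under assumed names; the tool entry, fields, and helper function are hypothetical, and a real registry would need to mirror the protocol and site policy.

    # Illustrative sketch: a registry of AI tools approved for the trial,
    # with permitted and prohibited uses. All entries are hypothetical.
    APPROVED_AI_TOOLS = {
        "eligibility-screener": {
            "approved_version": "2.1",
            "permitted_uses": ["pre-screening chart review"],
            "prohibited_uses": ["final eligibility determination"],
            "responsible_role": "principal investigator",
        },
    }

    def is_use_permitted(tool: str, version: str, use: str) -> bool:
        """Check a proposed use against the trial's approved-tool registry."""
        entry = APPROVED_AI_TOOLS.get(tool)
        if entry is None:
            return False  # tool not approved for this trial at all
        return version == entry["approved_version"] and use in entry["permitted_uses"]

    assert is_use_permitted("eligibility-screener", "2.1", "pre-screening chart review")
    assert not is_use_permitted("eligibility-screener", "2.1", "final eligibility determination")
    assert not is_use_permitted("chatbot-scribe", "1.0", "visit notes")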

3. Train Employees and Staff

  • Provide AI-specific training: Train all relevant staff on how to properly use the AI tool, interpret its outputs, and understand its limitations.
  • Educate on ethical use: Include training on ethical considerations, such as avoiding overreliance on AI, identifying potential biases, and maintaining patient-centric care.
  • Raise awareness of data privacy: Ensure staff are trained to handle patient data securely, follow data security best practices, and comply with applicable privacy laws.

4. Monitor and Audit AI Use

  • Track performance: Regularly monitor the AI tool's performance in clinical practice to ensure it is providing accurate and reliable outputs (a simple logging sketch follows this list).
  • Audit compliance: Periodically audit the use of the AI tool to ensure adherence to established policies and guidelines.
  • Report adverse events: Establish a system for documenting and reporting any adverse events or errors associated with the AI tool.
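
As a concrete illustration of what performance tracking and auditability can look like, the sketch below appends each AI-assisted decision, alongside the clinician's final determination, to a reviewable log. The field names and file location are assumptions for illustration only; any identifier recorded in such a log should be a coded study ID, never a direct patient identifier.

    # Illustrative sketch: an append-only audit log for AI-assisted decisions.
    # Field names and the file path are hypothetical, for illustration only.
    import json
    from datetime import datetime, timezone

    AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical path; use site-approved storage

    def log_ai_decision(tool, tool_version, study_id, ai_output, clinician_decision):
        """Append one AI-assisted decision to the audit log for later review.
        study_id must be a coded identifier, never a direct patient identifier."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "tool_version": tool_version,  # version matters when auditing drift
            "study_id": study_id,
            "ai_output": ai_output,
            "clinician_decision": clinician_decision,
            "clinician_overrode_ai": ai_output != clinician_decision,
        }
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(entry) + "\n")

    # Hypothetical usage: the clinician disagreed with the tool, and that
    # disagreement is preserved for periodic compliance audits.
    log_ai_decision("eligibility-screener", "2.1", "S-1042", "eligible", "ineligible")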

5. Maintain Data Privacy and Security

  • Ensure secure data handling: Verify that the AI tool and the associated company have robust data protection measures in place, such as encryption and access controls.
  • Minimize data sharing: Share only the minimum necessary patient data with the AI tool (see the allowlist sketch after this list).
  • Obtain informed consent: Inform patients about the use of AI in their care and obtain their consent (see the section below on informed consent).
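
A simple way to operationalize "minimum necessary" sharing is an allowlist: only fields expressly approved for the trial ever leave the site, and everything else, including direct identifiers, is dropped before any call to the AI tool. The Python sketch below illustrates the pattern; the field names and allowlist are hypothetical and would need to reflect the protocol and the vendor data-sharing agreement.

    # Illustrative sketch: allowlist-based minimization before any AI tool call.
    # Field names are hypothetical and must mirror the protocol and the
    # data-sharing terms agreed with the AI vendor.
    ALLOWED_FIELDS = {"study_id", "age_band", "lab_value", "visit_number"}

    def minimize(record: dict) -> dict:
        """Return only approved fields; drop everything else, including
        direct identifiers such as name, MRN, or date of birth."""
        return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

    raw = {
        "study_id": "S-1042",
        "name": "Jane Doe",  # direct identifier: never shared
        "mrn": "00912345",   # direct identifier: never shared
        "age_band": "40-49",
        "lab_value": 6.2,
        "visit_number": 3,
    }
    print(minimize(raw))
    # -> {'study_id': 'S-1042', 'age_band': '40-49', 'lab_value': 6.2, 'visit_number': 3}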

6. Establish a Feedback Loop

  • Gather user feedback: Encourage staff to provide feedback on the AI tool's usability and performance.
  • Report issues to the AI provider: Communicate any technical problems, inaccuracies, or unexpected outcomes to the company for resolution.
  • Update policies as needed: Revise policies and procedures based on lessons learned and updates to the AI tool.

7. Ensure Ethical and Transparent Use

  • Avoid overreliance: Use the AI tool as a supplement to, not a replacement for, clinical judgment and expertise.
  • Address biases: Be vigilant about identifying and mitigating potential biases in AI outputs that could negatively impact patient care or clinical trial outcomes.

By implementing these steps, the clinical investigator can ensure that the use of the AI tool is safe, ethical, and compliant with relevant regulations, while also fostering trust among patients and staff.

Informed Consent: Beyond The Signature

Obtaining informed consent is a critical regulatory requirement. It involves more than just securing a signature; it requires ensuring that participants fully understand the AI tools being used. Clinical investigators should clearly explain the scope of the AI's functions, the type of data generated, and how it will be maintained. This includes discussing whether any data collected will be considered protected health information (PHI) and whether it will be anonymized.

Below is an outline of information to include in the informed consent relating to AI use in the clinical trial:

  1. Educate participants: Provide clear, accessible information about the AI tools being used, including what the technology is, how it functions, why it is being used, and its potential benefits.
  2. Clarify data use: Explain what data will be collected; how it will be used, stored, and protected; and the safeguards in place.
  3. Discuss anonymization: Inform participants about data anonymization processes and the potential risks of de-anonymization.
  4. Address liability: Make participants aware of any potential liabilities and limitations, ensuring they understand the implications of AI use in the trial.
  5. Voluntary participation: Make it clear that participation is optional and does not affect standard care.
  6. Continuous engagement: Maintain open communication with participants throughout the trial to address any concerns or questions, and provide details of whom to contact.
  7. Right to withdraw consent: Inform patients of their right to withdraw consent at any time, while explaining that data already collected to that point, whether through an AI-enabled tool or not, will not be withdrawn from the study.

Final Advice For Investigators Using AI

While AI tools are transforming clinical trials, the legal framework is still catching up to these cutting-edge developments. Recent sources offer a glimpse into how these AI tools may be further regulated.

By focusing on transparency, regulatory compliance, and informed consent, clinical investigators can effectively integrate AI into clinical trials. Given the fluid regulatory landscape at the state, federal, and international levels, clinical investigators should consult with a regulatory professional to implement best practices and ensure compliance with current standards and practices. Understanding the regulatory requirements and maintaining transparency with participants will help safeguard patient data and uphold ethical standards. Through these practices, clinical investigators can leverage AI's potential while minimizing risks.

About The Experts:

Kimberly Chew is senior counsel in Husch Blackwell LLP’s virtual office, The Link. Chew is a seasoned professional with a rich background in biotech research, leveraging her extensive experience to guide clients through the intricate landscape of clinical trials, FDA regulations, and academic research compliance. As the co-founder and co-lead of the firm’s Psychedelic and Emerging Therapies practice group, Kimberly is particularly inspired by the potential of psychedelic therapeutics to address mental health conditions like PTSD. Her practice encompasses regulatory due diligence and intellectual property enforcement, particularly in patent infringement and validity.

Colleen Pert is an associate at Husch Blackwell LLP in the Houston, Texas office. Before attending law school, Colleen earned a master’s degree in healthcare administration from Texas Tech University. With family members in the healthcare industry, the practice area was a natural fit. Internships with Baylor College of Medicine’s Office of Risk Management and United Regional Health Care System solidified her understanding that clients seek clear, concise communication and value-driven performance. Colleen focuses her practice on healthcare regulatory counseling.

Kathleen Snyder is a senior counsel at Husch Blackwell where she runs the firm’s AI in Healthcare Working Group. Based in Boston, Kathleen practices at the intersection of healthcare and technology, providing clients with practical legal advice on AI Governance, strategic technology and commercial contracts, data strategies, intellectual property, and regulatory interpretation. With 20+ years of experience in the healthcare industry, Kathleen has an intrinsic understanding of the healthcare landscape. Her technology-focused transactional practice, coupled with her regulatory experience, gives her a unique perspective that allows her to provide holistic legal advice to clients ranging from seed-stage start-ups to large academic health centers.