Can't-Miss Advice On Selecting Your First AI-Enabled Vendor
A conversation with Meghan O’Connor and Simone Colgan Dunlap, partners at Quarles & Brady

Choosing the right AI vendor for clinical research is a complex, high-stakes process requiring expert oversight from legal, clinical, IT, and privacy leaders. In this conversation, Meghan O’Connor, an attorney specializing in healthcare privacy and AI regulation, and Simone Colgan Dunlap, an advisor with deep experience in clinical data governance and technology strategy, unpack the must-know terminology, foundational diligence, and regulatory expectations organizations should master before partnering with an AI technology provider.
In this Q&A, Colgan Dunlap and O’Connor offer actionable guidance to help sponsor companies make informed, strategic choices in an evolving AI market.
Clinical Leader: To start, what are some must-know terms or concepts when it comes to researching and selecting an AI vendor for clinical research?
Meghan O’Connor: Key concepts include:
Algorithm Transparency. The vendor should disclose how its models are trained, validated, and updated. This does not require the vendor to disclose proprietary IP, but the healthcare entity should be generally aware of how the algorithm will process data, how outputs are created, and whether the vendor developed its own algorithm or relies on open source software.
Algorithm Diligence. Entities should be prepared to ask vendors for sufficient information regarding the vendor’s processes for detecting and correcting algorithmic bias, its data governance, the explainability of the model and the ability to interpret AI outputs for clinical decision-making, and clinical validation (i.e., evidence that the AI tool performs accurately in real-world settings). If a vendor is unwilling to provide general information on these foundational concepts, that is cause for concern.
Regulatory Compliance. Companies should be familiar with frameworks like HIPAA, state consumer protection laws, state healthcare-specific privacy laws (including sensitive information privacy laws and consumer health data privacy laws), and emerging AI-specific regulations (e.g., the HHS AI Transparency Rule if certified health IT is involved, state requirements), and be prepared to ask questions designed to assess compliance with applicable regulatory schemes. For example, understanding how the vendor will process and store input data, including whether the vendor will segregate the customer's input data from vendor and third-party data, will be critical to ensuring that study subject consents accurately describe the extent to which their information will be confidential. In our experience, vendors are not proactively offering contract terms representing compliance with regulatory requirements, so it is up to entities to know which laws apply to the intended use cases and update contracts accordingly. A knowledgeable healthcare privacy and AI attorney can help companies understand the scope of applicable requirements and market trends.
What steps should companies take — such as defining the scope of the work needed — before contacting a potential vendor?
Simone Colgan Dunlap: Vendors are quick to market AI tools as solutions to almost everything, and while AI can provide fantastic ROI, AI tools are not the solution to every problem. Many of the studies on AI implementation so far suggest that ROI on AI investments is often limited, and many companies lured by the possibility of high returns stumble in implementation.
Successful AI implementation often comes down to good planning. Companies should define the scope, objectives, and measurable goals of a desired use case. Planning should include engaged stakeholders, including compliance, IT, privacy, legal, and clinical teams, who will follow the implementation of an AI tool through to decommissioning. Companies should assess their data readiness to ensure data is accurate, complete, and structured and labeled for AI use. This can take time, and most companies don’t have the budget to support outsourcing this work to vendors.
Companies should also prepare vendor diligence expectations, understand the company’s risk tolerance and appetite, and align on expected performance metrics, integration needs, and privacy/security safeguards. There is often tension within an enterprise between those pushing for robust processes to be in place before engaging a vendor and those eager to embrace innovation quickly.
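To make the data-readiness step Colgan Dunlap describes more concrete, the following is a minimal illustrative sketch in Python (using pandas). The column names, required schema, and missing-value tolerance are hypothetical assumptions for illustration only, not part of any particular study or vendor requirement; an actual readiness review would be defined by the clinical, data, and compliance stakeholders.

import pandas as pd

# Hypothetical schema and tolerance; adapt to the actual study dataset.
REQUIRED_COLUMNS = ["subject_id", "visit_date", "lab_value", "outcome_label"]
MAX_MISSING_FRACTION = 0.05

def assess_readiness(df: pd.DataFrame) -> list[str]:
    """Return a list of readiness issues; an empty list means no issues were found."""
    issues = []
    # Structure: all expected columns are present
    missing_cols = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing_cols:
        issues.append(f"Missing columns: {missing_cols}")
    # Completeness: per-column missing-value rate stays within tolerance
    for col in df.columns:
        frac = df[col].isna().mean()
        if frac > MAX_MISSING_FRACTION:
            issues.append(f"{col}: {frac:.1%} missing exceeds tolerance")
    # Labeling: every record carries an outcome label if the tool will be used for supervised tasks
    if "outcome_label" in df.columns and df["outcome_label"].isna().any():
        issues.append("Some records lack an outcome label")
    return issues

Even a simple check like this surfaces the gaps (missing fields, incomplete labels, excessive missingness) that otherwise emerge mid-implementation and erode the expected ROI.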
Understanding risk is paramount to any new partnership. What are the risks of contracting with an AI vendor? How can companies mitigate them?
O’Connor: At the foundational level, AI tools are software, so the risk profile is not a completely new set of considerations. Just like any other software agreement, parties need to consider issues like infringement and failure to meet service levels. Risks may be heightened in the AI context, however. They include privacy and security breaches involving personal data or company proprietary or confidential data; because AI processing often requires far more data than other use cases, these risks are amplified. Regulatory non-compliance in how AI tools are deployed is a key consideration, including privacy, Common Rule, and clinical deployment requirements. Additional risks include operational disruption, performance issues, and research-related implications of unchecked bias.
To mitigate risk in AI vendor contracts, parties can use traditional contractual risk-shifting mechanisms focused on indemnity, insurance, security and incident response, and service level agreement terms. Risk mitigation should also include strong vendor diligence and disclosures regarding algorithm transparency, vendor security assessments, and ongoing monitoring. As technology and AI regulations evolve, companies should be prepared to update contracts and statements of work as use cases, the AI itself, and the regulatory landscape change. AI vendor relationships are not set-it-and-forget-it relationships, which is one of the reasons engaged stakeholders are key to successful implementation.
Oversights happen. What are some common mistakes companies make in vetting and choosing an AI vendor? And what are the hazards if they’re not addressed or remediated?
Colgan Dunlap: Common mistakes in vetting an AI vendor include overlooking clinical validation and integration considerations, underestimating change management for clinician training and workflow alignment, failing to demand transparency on core AI diligence terms, and not defining what success means in quantifiable terms. The period before the contract is inked is when vendors will be most forthcoming about functionality and issues. Bottom line: if you are not comfortable with the vendor’s transparency and sophistication, a signed contract is very rarely going to improve the relationship.
Hazards more specific to the research context include increased compliance risk to the company from deploying a biased or insecure tool, concerns about research validity if the tool does not perform as expected, wasted investment, reputational harm if publication is involved, and potential patient harm.
Data is king for clinical researchers. What data privacy, sharing, and ownership parameters should be in place?
O’Connor: If personal information (e.g., patient protected health information or sensitive clinical information) is involved, contracts with AI vendors must clearly specify data ownership with regard to inputs and AI tool outputs. The contract must clearly address vendor data use rights, including any authorized de-identification or anonymization of data, use of company data for product improvement, and any other data processing outside the specific services provided to the company. Data de-identification and anonymization standards should be clearly defined, and companies should consider whether they want approval rights over the specific de-identification methodology.
Robust data segregation protocols and post-termination handling of data should be addressed. The privacy team cannot swoop in after or in parallel with commercial contract negotiation to “fix gaps.” Because data is such a central issue in clinical research and AI use, the privacy and data handling considerations need to be addressed early.
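As one narrow illustration of why de-identification standards should be spelled out rather than assumed, the sketch below removes direct-identifier columns and coarsens dates to the year. The field names are hypothetical, and this covers only a fraction of what HIPAA's Safe Harbor method requires (removal of 18 identifier categories plus a no-actual-knowledge condition), so it is not a complete de-identification method; that gap is exactly why the contract should name the methodology and who approves it.

import pandas as pd

# Hypothetical direct-identifier fields; a contractually defined methodology would enumerate these.
DIRECT_IDENTIFIERS = ["name", "mrn", "ssn", "email", "phone", "street_address"]

def strip_direct_identifiers(df: pd.DataFrame) -> pd.DataFrame:
    """Drop direct-identifier columns and coarsen dates to year; one step in a larger de-identification workflow."""
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    if "visit_date" in out.columns:
        out["visit_year"] = pd.to_datetime(out["visit_date"]).dt.year
        out = out.drop(columns=["visit_date"])
    return out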
Specifically, how should companies approach the ownership, use, and security of protected health information (PHI)?
Colgan Dunlap: Companies should set specific standards for encryption and de-identification of protected health information. HIPAA-permitted uses of PHI (e.g., treatment, payment, and healthcare operations) must guide all of the vendor’s authorized uses of PHI, including proposed secondary uses. If PHI will be de-identified, companies need to consider whether state law requirements (e.g., the California Consumer Privacy Act) are triggered by any secondary use or sale of the de-identified data. A standard business associate agreement is typically not sufficient for the deployment of AI, and companies should be prepared to address stronger privacy and security safeguards with their AI vendors. Further, if a HIPAA authorization is utilized, the authorization should be vetted to ensure compliance with HIPAA.
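On the encryption point, the fragment below is a minimal sketch of encrypting a single record at rest using the Python cryptography package's Fernet recipe. Treat the tooling choice and the payload as assumptions for illustration; the hard questions (key management, rotation, who holds the keys, transport security) are precisely what the contract and the vendor security assessment should pin down.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, generated and held in a key management service
cipher = Fernet(key)

record = b'{"subject_id": 123, "lab_value": 4.2}'  # hypothetical PHI payload
token = cipher.encrypt(record)     # ciphertext is safe to persist
assert cipher.decrypt(token) == record  # decryption requires the key; otherwise InvalidToken is raised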
To what extent should a company understand a vendor’s algorithms, data sources, and operational logic? And how does that play into risk?
O’Connor: Companies should expect to understand their vendor’s algorithms, data sources, and operational logic. A vendor that is unwilling to provide basic information about algorithm transparency, validation, detecting and correcting bias, data governance, and explainability is going to expose the company to increased risk. Regardless of whether certified health IT is involved, healthcare companies can look to the HHS AI Transparency Rule for the types of questions that vendors should be willing to answer for companies considering deployment of the vendor’s AI tools.
Understanding how an AI tool works is key to selecting a trustworthy vendor. In addition, demonstrating appropriate diligence will go a long way toward creating good evidence should the company find itself defending its choice of vendor, or its decision to deploy a specific AI tool, before a regulator or judge Monday-morning quarterbacking the company’s decisions after a poor outcome. When clinical decision-making or patient safety is involved (vs. a more administrative AI tool), the stakes are higher and the diligence obligations on companies are heightened. It is not sufficient to assume a vendor has appropriately developed and tested its AI tool. Companies must take an active role in understanding the AI tools they will deploy into their workstreams and clinical research settings.
What’s one final word of advice for companies looking to bring an AI vendor into the fold?
Colgan Dunlap: Although AI tools are software, companies should treat AI vendor selection as a strategic partnership and not a technology purchase. Companies should prioritize vendors with proven clinical track records, commitment to transparency and continuous improvement, appreciation of regulatory and risk factors affecting the company, and strong governance and compliance frameworks. Companies should consider starting small with pilot programs to validate outcomes and performance and scale responsibly before locking in a vendor with an expensive integration investment. In sum, focus on solutions that will drive real value, select the right vendor through appropriate vetting, follow through with effective implementation, monitor progress, and scale accordingly.
About The Experts:
Meghan O'Connor is a Milwaukee-based health & life sciences partner at Quarles & Brady as well as the co-chair of the data privacy & cybersecurity team and the AI team. She provides counsel to a wide range of companies on matters including data privacy and cybersecurity, regulatory compliance, information governance, commercial contracting, and transactions. A significant portion of her work involves advising companies that manage health and other sensitive data. She can be reached at meghan.oconnor@quarles.com and on LinkedIn.
Simone Colgan Dunlap is a Phoenix-based partner and national vice chair of Quarles & Brady’s health & life sciences practice group. She advises clients on regulatory compliance, related risk management, and corporate/contracting matters. Her work spans a diverse group of clients, including drug/device manufacturers, pharmacies, pharmacy benefit managers (PBMs), and specialty pharmacy hubs. She can be reached at simone.colgandunlap@quarles.com and on LinkedIn.