Guest Column | April 22, 2026

Navigating Elsa's AI Transition: Practical Guidance To Safeguard Confidential Information (Part 3)

By Kimberly Chew, Esq., and Michael Yang, Esq.


The FDA’s migration from Anthropic’s Claude to Google’s Gemini as the model for its Elsa AI tool1 is not a controlled, incremental upgrade but a rapid, politically driven shift2 with immediate consequences for sponsors. As detailed in previous articles,3 this transition introduces several acute risks: data security and confidentiality may be compromised as information moves through new, less-tested systems; compliance gaps can emerge if cloud environments, certifications, or retention policies change;4 and the integrity of regulatory records is threatened if submissions are reviewed with different AI models midstream. Elsa’s operational reliability was already under scrutiny before this migration, with reports of hallucinated citations and inconsistent outputs.5 Now, with the transition already underway and Gemini active within Elsa,6 sponsors cannot assume their data is being handled with the same rigor or protections as before. In this environment, proactive risk management is essential to safeguard trade secrets and regulatory outcomes.

To get up to speed on the series thus far, start with our first article, in which we described how Elsa is migrating from Anthropic’s Claude model to Google’s Gemini — and potentially OpenAI’s ChatGPT. In article two, we provided a technical analysis of the risks sponsors face, focusing on compliance, data residency, and the integrity of the regulatory record. In this final installment, we offer practical strategies for navigating the change.

Immediate Safeguards: Protecting Confidential And Trade Secret Information

Given the uncertainty and heightened risk during Elsa’s migration, sponsors must take immediate steps to protect their confidential information and trade secrets.

Reassess and Minimize Disclosures

Share only what is strictly necessary in your FDA communications. Clearly mark all sensitive materials as “Confidential Commercial Information” (CCI) and/or “Trade Secret,” and explicitly reference statutory protections such as the Freedom of Information Act (FOIA) and the Federal Food, Drug, and Cosmetic Act in your cover letters and correspondence.7 This helps reinforce your expectations for how proprietary data should be handled, regardless of the underlying technology.

Seek Written Assurances and Clarifications from the FDA

Proactively request written confirmation from the FDA about which AI model is being used to process your submissions.8 Ask for documentation on data handling, retention, and security controls in the new environment,9 and request to be notified if the AI model changes during the review of your application. These steps establish a record of your concerns and expectations, which could be important if questions arise later.

Document Everything

Maintain contemporaneous, timestamped records of all submissions, FDA responses, and queries. Note any indication of AI-generated content or model transitions. A clear audit trail is essential in case you need to challenge regulatory decisions or address data integrity concerns in the future.
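For sponsors who want to operationalize this audit-trail practice, the sketch below shows one way to keep a tamper-evident, timestamped log of submission files using only the Python standard library. It is a hypothetical illustration: the file names, directory layout, and JSON-lines log format are our assumptions, not any FDA requirement or prescribed method.

```python
# Sketch: build a timestamped, hash-based audit log of submission files.
# File names, paths, and the log format here are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large submission packages don't load into memory at once.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def log_submission(doc_path: Path, log_path: Path, note: str = "") -> dict:
    """Append a timestamped (UTC) record for one document to a JSON-lines log."""
    record = {
        "file": doc_path.name,
        "sha256": sha256_of(doc_path),
        "logged_at_utc": datetime.now(timezone.utc).isoformat(),
        "note": note,  # e.g., "marked CCI/Trade Secret in cover letter"
    }
    with log_path.open("a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record
```

Because each entry pairs a content hash with a UTC timestamp, a sponsor can later demonstrate exactly what was submitted and when — the kind of contemporaneous record described above — without relying on any party's AI tooling.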

Transition-Specific, Advanced Risk Mitigation Actions

As Elsa’s migration unfolds, sponsors should adopt risk mitigation strategies tailored to this period of instability.

Determine Where and How Your Data is Processed

Ask the FDA to clarify whether your submissions are being processed within AWS GovCloud,10 Google Cloud,11 or another environment. If you notice any change in cloud provider or infrastructure, treat it as a material increase in risk, as each environment brings different compliance standards and data handling protocols.12 Knowing where your data resides is foundational to assessing your exposure.

Assess Timing and Submission Strategies

If you have flexibility with pending or planned submissions, carefully weigh the risks of submitting during this transition period versus waiting until the new system stabilizes. Submissions made now could be subject to review delays, inconsistent AI-generated analysis,13 or an uptick in information requests as the FDA adapts to new technology. A brief delay may reduce exposure to these transition-related risks.

Prepare for Potential Resubmission or Supplementation

Given the likelihood of AI-generated errors, hallucinations, or inconsistent outputs during migration,14 be prepared to re-brief or supplement your submissions if needed. Have key documents and clarifications ready in advance so you can respond quickly if the FDA requests additional information or if discrepancies arise in the review process.

Preserve Your Own Regulatory Record

Meticulously retain full timestamped copies of every submission, data set, and communication with the FDA. This is critical for defending your position if you need to contest an adverse finding or reconstruct the administrative record in the event of review inconsistencies or legal challenges.15

Proactive Engagement And Industry Coordination

Sponsors should not face these uncertainties alone. Collaborate with industry peers and trade associations such as BIO, PhRMA, or AdvaMed to collectively press the FDA for greater transparency about Elsa’s migration. Joint industry requests for disclosure on validation status, migration timelines, and data handling practices carry more weight and can prompt more detailed responses from the agency.

Additionally, stay alert to ongoing policy or legal developments, including Anthropic’s legal challenges to its supply chain risk designation.16 Regulatory technology requirements could shift rapidly if the migration is paused or reversed. Being part of a coordinated industry response ensures sponsors are informed and better positioned to adapt to sudden changes.

Technical And Compliance Due Diligence

As Elsa’s underlying technology changes, sponsors must actively assess whether the FDA’s deployment continues to meet their risk tolerance and compliance needs. Review and compare the compliance certifications (such as FedRAMP, SOC 2, and ISO 27001) of the AI providers involved — Gemini and ChatGPT have different approval statuses and security postures.17 Confirm that the FDA’s chosen environment aligns with your own organization’s regulatory requirements and expectations for data protection.

Scrutinize data retention and metadata policies in the new environment. Ask the FDA for specifics on how long prompts and responses are stored, who can access them, and how metadata about your submissions is handled.18 Policies and defaults can vary significantly between providers, and even subtle differences could affect the confidentiality of your sensitive information.

Finally, be aware that FDA reviewers may bypass Elsa if they find its new AI model inadequate, turning instead to ChatGPT or other tools outside the validated, controlled environment.19 Ask the FDA to confirm that all sponsor data will be processed exclusively within Elsa’s compliant infrastructure. This step helps prevent inadvertent exposure of confidential data and ensures your information remains protected throughout the transition.

Preparing For Administrative And Legal Challenges

During this transition, sponsors should remain vigilant for signs of inconsistent or flawed regulatory records. If FDA feedback or queries appear to reflect conflicting interpretations, hallucinated citations,20 or abrupt shifts in analysis, these may be symptoms of the AI model migration. Such inconsistencies could undermine the integrity of the administrative record and, ultimately, the defensibility of FDA decisions.21

If you suspect your submission was reviewed using different AI models or during a period of system instability, consult legal counsel about how to preserve your rights under the Administrative Procedure Act (APA). Inconsistencies in the regulatory record can serve as grounds to challenge adverse decisions. Consider formally requesting that the FDA disclose which AI model(s) were used to review your specific submission and to notify you of any changes during the review process. Establishing this documentation now can be critical if you need to contest a regulatory outcome or address data integrity issues after the fact. Proactive legal preparation will help sponsors respond effectively to any disputes that may arise from this unprecedented transition.

Long-Term Strategic Considerations

Sponsors should plan for continued uncertainty, as further changes in FDA’s AI infrastructure are possible, especially if legal challenges or policy shifts force another migration or reversal. Building flexibility into your regulatory planning and data governance protocols will help your organization adapt quickly to future disruptions.

Additionally, assess your exposure to vendor lock-in as the FDA becomes dependent on specific AI providers. Consider advocating, individually or through industry groups, for greater transparency, independent validation, and portability in the FDA’s regulatory technology choices. Proactively addressing these strategic issues now will position your organization to better navigate both the current transition and any future regulatory technology shifts.

Sponsor Risk Management Checklist

  • Limit disclosures to only what is necessary; mark sensitive content as CCI/Trade Secret.
  • Request written confirmation of which AI model is used for your submissions.
  • Ask for details on data handling, retention, and security controls.
  • Retain timestamped records of all submissions and FDA responses.
  • Monitor for inconsistent or flawed regulatory feedback.
  • Coordinate with industry groups for collective transparency efforts.
  • Prepare to supplement or re-brief if AI-related errors arise.
  • Build flexibility into regulatory planning for future transitions.
  • Assess and address exposure to vendor lock-in.

Conclusion

The FDA’s Elsa migration represents an unprecedented high-risk shift with direct consequences for sponsor data security and regulatory outcomes. In this period of rapid change and heightened uncertainty, sponsors cannot rely on past assurances or assume business as usual. Proactive, well-documented steps are essential to protect confidential information and preserve regulatory rights. Ongoing vigilance, engagement with both the FDA and industry peers, and flexibility in regulatory strategy will be crucial as the situation continues to evolve. Sponsors who act decisively now will be best positioned to navigate whatever comes next.

References:

  1. Nicholas Florko, HHS starts phasing out Anthropic's Claude, STAT News (March 3, 2026), https://www.statnews.com/2026/03/03/hhs-starts-phasing-out-anthropic-claude-health-tech/; Margaret Manto, HHS Tells Employees to Stop Using Anthropic's Claude, NOTUS (March 2, 2026, 1:41 PM), https://www.notus.org/trump-white-house/hhs-employees-stop-anthropic-claude-ai-platform
  2. Brendan Bordelon, Trump Orders All Federal Agencies to Cease Using Anthropic, POLITICO (Feb. 27, 2026, 4:11 PM), https://www.politico.com/news/2026/02/27/trump-orders-all-federal-agencies-to-stop-using-anthropic-00804517
  3. Kimberly Chew, Esq., & Michael Yang, Esq., FDA's Elsa AI Switches From Claude To Gemini: What Sponsors Need To Know, Clinical Leader (March 12, 2026), https://www.clinicalleader.com/doc/fda-s-elsa-ai-switches-from-claude-to-gemini-what-sponsors-need-to-know-0001; Kimberly Chew, Esq., & Michael Yang, Esq., Part 2 – Elsa's AI Model Migration: Technical, Compliance, and Regulatory Risks for Sponsors
  4. Alice Rison & Steven Hin, Gemini in Workspace Apps and the Gemini App Are First to Achieve FedRAMP High Authorization, Google Cloud Blog (March 17, 2025), https://cloud.google.com/blog/topics/public-sector/gemini-in-workspace-apps-and-the-gemini-app-are-first-to-achieve-fedramp-high-authorization; OpenAI, Providing ChatGPT to the Entire U.S. Federal Workforce: First-of-Its-Kind Partnership with General Services Administration Will Give Federal Agencies Access to ChatGPT Enterprise for $1 for the Next Year (Aug. 6, 2025), https://openai.com/index/providing-chatgpt-to-the-entire-us-federal-workforce/
  5. Chris Mazzolini & Mike Hollan, FDA's Elsa AI Tool Raises Accuracy and Oversight Concerns, Applied Clinical Trials (July 23, 2025), https://www.appliedclinicaltrialsonline.com/view/fda-elsa-ai-tool-raises-accuracy-and-oversight-concerns; Sarah Owermohle, FDA's artificial intelligence is supposed to revolutionize drug approvals. It's making up studies, CNN (July 23, 2025), https://www.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary; Phie Jacobs, Trump Officials Downplay Fake Citations in High-Profile Report on Children's Health: References to Phantom Studies Comes After White House Pledge to Practice "Gold Standard" Science, SCIENCE (May 30, 2025, 4:50 PM ET), https://www.science.org/content/article/trump-officials-downplay-fake-citations-high-profile-report-children-s-health
  6. Margaret Manto, HHS Tells Employees to Stop Using Anthropic's Claude, NOTUS (March 2, 2026, 1:41 PM), https://www.notus.org/trump-white-house/hhs-employees-stop-anthropic-claude-ai-platform
  7. Kimberly Chew, Esq., & Michael Yang, Esq., FDA's Elsa AI Switches From Claude To Gemini: What Sponsors Need To Know, Clinical Leader (March 12, 2026), https://www.clinicalleader.com/doc/fda-s-elsa-ai-switches-from-claude-to-gemini-what-sponsors-need-to-know-0001
  8. Id.
  9. OpenAI, How We're Responding to The New York Times' Data Demands in Order to Protect User Privacy (June 5, 2025), https://openai.com/index/response-to-nyt-data-demands/; Google, Gemini Apps Privacy Hub, last updated March 10, 2026, https://support.google.com/gemini/answer/13594961?hl=en
  10. Brittany Trang, FDA Rolls Out AI Tool Agency-Wide, Weeks Ahead of Schedule, STAT+ (June 2, 2025), https://www.statnews.com/2025/06/02/fda-artificial-intelligence-implementation-plans-makary/
  11. Alice Rison & Steven Hin, Gemini in Workspace Apps and the Gemini App Are First to Achieve FedRAMP High Authorization, Google Cloud Blog (March 17, 2025), https://cloud.google.com/blog/topics/public-sector/gemini-in-workspace-apps-and-the-gemini-app-are-first-to-achieve-fedramp-high-authorization
  12. Google Cloud, FedRAMP Compliance, https://cloud.google.com/security/compliance/fedramp
  13. Chris Mazzolini & Mike Hollan, FDA's Elsa AI Tool Raises Accuracy and Oversight Concerns, Applied Clinical Trials (July 23, 2025), https://www.appliedclinicaltrialsonline.com/view/fda-elsa-ai-tool-raises-accuracy-and-oversight-concerns; Sarah Owermohle, FDA's artificial intelligence is supposed to revolutionize drug approvals. It's making up studies, CNN (July 23, 2025), https://www.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
  14. Phie Jacobs, Trump Officials Downplay Fake Citations in High-Profile Report on Children's Health: References to Phantom Studies Comes After White House Pledge to Practice "Gold Standard" Science, SCIENCE (May 30, 2025, 4:50 PM ET), https://www.science.org/content/article/trump-officials-downplay-fake-citations-high-profile-report-children-s-health; Chris Mazzolini & Mike Hollan, FDA's Elsa AI Tool Raises Accuracy and Oversight Concerns, Applied Clinical Trials (July 23, 2025), https://www.appliedclinicaltrialsonline.com/view/fda-elsa-ai-tool-raises-accuracy-and-oversight-concerns
  15. Kimberly Chew, Odette Hauke, and Kathleen Snyder, AI At The FDA: Legal Implications And Strategic Considerations For Drug Developers, Clinical Leader (Jan. 19, 2026), https://www.clinicalleader.com/doc/ai-at-the-fda-legal-implications-and-strategic-considerations-for-drug-developers-0001; Kimberly Chew, Odette Hauke, and Kathleen Snyder, Navigating FDA's New AI Systems: Practical Tips For Regulatory Success, Clinical Leader (Jan. 19, 2026), https://www.clinicalleader.com/doc/navigating-fda-s-new-ai-systems-practical-tips-for-regulatory-success-0001
  16. Alex Anteau, Consequences Are 'Enormous': Anthropic Sues Department of War, Alleging 'Retaliation', Law.com (March 9, 2026), https://www.law.com/2026/03/09/consequences-are-enormous-anthropic-sues-department-of-war-alleging-retaliation/
  17. Alice Rison & Steven Hin, Gemini in Workspace Apps and the Gemini App Are First to Achieve FedRAMP High Authorization, Google Cloud Blog (March 17, 2025), https://cloud.google.com/blog/topics/public-sector/gemini-in-workspace-apps-and-the-gemini-app-are-first-to-achieve-fedramp-high-authorization; Nihal Krishan, Microsoft Launches Generative AI Service for Government Agencies, FEDSCOOP (June 7, 2023), https://fedscoop.com/microsoft-launches-azure-openai-service-for-government/; OpenAI, Providing ChatGPT to the Entire U.S. Federal Workforce: First-of-Its-Kind Partnership with General Services Administration Will Give Federal Agencies Access to ChatGPT Enterprise for $1 for the Next Year (Aug. 6, 2025), https://openai.com/index/providing-chatgpt-to-the-entire-us-federal-workforce/
  18. OpenAI, How We're Responding to The New York Times' Data Demands in Order to Protect User Privacy (June 5, 2025), https://openai.com/index/response-to-nyt-data-demands/; Google, Gemini Apps Privacy Hub, last updated March 10, 2026, https://support.google.com/gemini/answer/13594961?hl=en
  19. Margaret Manto, HHS Tells Employees to Stop Using Anthropic's Claude, NOTUS (March 2, 2026, 1:41 PM), https://www.notus.org/trump-white-house/hhs-employees-stop-anthropic-claude-ai-platform
  20. Phie Jacobs, Trump Officials Downplay Fake Citations in High-Profile Report on Children's Health: References to Phantom Studies Comes After White House Pledge to Practice "Gold Standard" Science, SCIENCE (May 30, 2025, 4:50 PM ET), https://www.science.org/content/article/trump-officials-downplay-fake-citations-high-profile-report-children-s-health; Chris Mazzolini & Mike Hollan, FDA's Elsa AI Tool Raises Accuracy and Oversight Concerns, Applied Clinical Trials (July 23, 2025), https://www.appliedclinicaltrialsonline.com/view/fda-elsa-ai-tool-raises-accuracy-and-oversight-concerns
  21. Kimberly Chew, Odette Hauke, and Kathleen Snyder, AI At The FDA: Legal Implications And Strategic Considerations For Drug Developers, Clinical Leader (Jan. 19, 2026), https://www.clinicalleader.com/doc/ai-at-the-fda-legal-implications-and-strategic-considerations-for-drug-developers-0001; Kimberly Chew, Odette Hauke, and Kathleen Snyder, Navigating FDA's New AI Systems: Practical Tips For Regulatory Success, Clinical Leader (Jan. 19, 2026), https://www.clinicalleader.com/doc/navigating-fda-s-new-ai-systems-practical-tips-for-regulatory-success-0001

About The Authors:

Kimberly Chew is senior counsel in Husch Blackwell LLP’s virtual office, The Link. Chew is a seasoned professional with a background in biotech research, leveraging her experience to guide clients through the intricate landscape of clinical trials, FDA regulations, and academic research compliance. As the cofounder and co-lead of the firm’s Psychedelic and Emerging Therapies practice group, Kimberly is inspired by the potential of psychedelic therapeutics to address mental health conditions like PTSD. Her practice encompasses regulatory due diligence and intellectual property enforcement, particularly in patent infringement and validity. She can be reached at kimberly.chew@huschblackwell.com.

Michael Yang is a principal in Husch Blackwell Consulting’s AI Advisory Services practice, where he helps organizations navigate the practical, legal, and governance challenges of adopting AI. He works closely with executive leadership, legal and compliance teams, and technical stakeholders to ensure AI initiatives are effective, defensible, and responsibly deployed. Michael brings over 25 years of experience as a technology-focused attorney to his advisory work, providing a legal foundation for HBC AI’s consulting services. His background includes extensive work with AI and generative AI technologies, advising product and engineering teams on development, deployment, and risk management. He can be reached at michael.yang@hbconsulting.com.