Guest Column | March 26, 2026

Elsa's AI Model Migration: Technical, Compliance, And Regulatory Risks For Sponsors (Part 2)

By Kimberly Chew, Esq., and Michael Yang, Esq.


In our first article,1 we described how the FDA’s generative AI assistant, Elsa, is undergoing a sudden, politically mandated migration from Anthropic’s Claude model to Google’s Gemini2 — and potentially OpenAI’s ChatGPT. President Trump directed this in February 2026 following a high-profile dispute between Anthropic and the Pentagon,3 resulting in Anthropic’s designation as a national security supply chain risk.4 Unlike a routine technology upgrade, this shift is happening rapidly, with little transparency and under significant political pressure.

For sponsors, this is not a controlled, agency-led technology refresh but a politically driven upheaval with immediate implications for the security and reliability of regulatory review. Internal FDA communications confirm that Gemini is already live within Elsa and will soon be its primary model, while ChatGPT Enterprise remains available for other HHS tasks, raising new questions about data handling and compliance.

In this second installment, we provide a technical analysis of the risks sponsors face as Elsa migrates to a new AI foundation, focusing on compliance, data residency, and the integrity of the regulatory record.

The True Nature Of Elsa: Architecture And Migration Complexity

Elsa is not a generic chatbot; it’s a custom-built retrieval-augmented generation (RAG) system developed by Deloitte.5 Deployed within AWS GovCloud, developers originally optimized Elsa specifically for Anthropic’s Claude model,6 with its entire architecture, including embedding models, vector databases, retrieval logic, and prompt templates, tuned to Claude’s unique behavior. This design allowed Elsa to draw on FDA’s internal document stores and deliver regulatory insights tailored to the agency’s needs.7

The current migration is far more complicated than simply swapping one AI model for another. Developers engineered every component of Elsa’s RAG pipeline to work with Claude; moving to a system built on a different family of machine learning models, such as Gemini (or potentially ChatGPT), requires re-engineering and re-validating the entire pipeline. Each new model processes information differently, which can affect how documents are retrieved, interpreted, and summarized. This raises the risk of degraded reliability, inconsistent outputs, or even new vulnerabilities in regulatory review.
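To make this coupling concrete, consider a minimal sketch of how a RAG pipeline’s prompt templates and retrieval settings end up tuned to one model family. Everything below is invented for illustration — the template text, configuration values, and names are our assumptions, not Elsa’s actual implementation:

```python
# Illustrative sketch of model-coupled RAG components. All names and
# values here are hypothetical, not Elsa's actual design.

# Each model family typically gets its own prompt template, tuned to the
# instruction format that family responds to best during testing.
PROMPT_TEMPLATES = {
    "claude": (
        "You are a regulatory document assistant.\n"
        "<documents>\n{context}\n</documents>\n"
        "Question: {question}\n"
        "Answer using only the documents above."
    ),
    "gemini": (
        "Role: regulatory document assistant.\n"
        "CONTEXT:\n{context}\n\n"
        "QUESTION: {question}\n"
        "Answer strictly from CONTEXT."
    ),
}

def build_prompt(model_family: str, context: str, question: str) -> str:
    """Assemble the final prompt; swapping models silently changes it."""
    return PROMPT_TEMPLATES[model_family].format(
        context=context, question=question
    )

# Retrieval settings (chunk size, number of passages returned) are tuned
# the same way: values validated against one model's context window and
# summarization behavior may not transfer to another.
RETRIEVAL_CONFIG = {
    "claude": {"chunk_tokens": 1000, "top_k": 8},
    "gemini": {"chunk_tokens": 600, "top_k": 12},  # hypothetical retuning
}
```

The point of the sketch is that the prompt and retrieval layers are part of the validated system, not interchangeable plumbing: changing the model entry changes what every downstream component receives.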

Deloitte, as Elsa’s system integrator,8 is responsible for executing this technical migration. However, there is little public information about Deloitte’s timeline, testing plans, or criteria for validating the new system, though we do know that it took Deloitte over 18 months to develop and deliver the first iteration of Elsa.9 For sponsors, this lack of transparency and accountability means heightened uncertainty about how and how well their confidential data will be handled during and after the transition.

Cloud And Infrastructure Risks: AWS GovCloud Vs. Google Cloud

A critical but underappreciated risk in Elsa’s migration is the potential move from AWS GovCloud (where Elsa was originally deployed10) to Google’s FedRAMP-High cloud, should Gemini be unable to run within the existing AWS environment. This is not a minor technical detail; shifting cloud providers should trigger new requirements for data residency, access controls, and security validation. Each provider maintains different compliance standards, audit trails, and risk profiles, meaning sponsor data could be subject to unfamiliar or less-tested infrastructure.

For sponsors, this migration could expose confidential submissions to new pathways and oversight regimes before the new environment is fully validated. The lack of clear public information about where Elsa’s data will reside or how it will be protected during and after migration increases uncertainty. Sponsors should recognize that a cloud migration, even if temporary, fundamentally alters the security and compliance landscape for their proprietary information.

Compliance And Certification: What Really Changes?

The compliance landscape for Elsa is shifting along with its underlying AI model. Google’s Gemini has achieved FedRAMP-High authorization for Workspace and Vertex AI, along with other key certifications like SOC 1/2/3 and ISO 27001/17/18.11 In contrast, ChatGPT Enterprise is still “in process” for FedRAMP and, for government use, currently requires agency self-hosting on Azure Government rather than a provider-hosted solution.

Why does this matter for sponsors? Provider-hosted FedRAMP environments, like Google’s, offer more standardized and transparent security controls than agency-managed, self-hosted deployments.12 Each provider’s compliance posture directly affects how sponsor data is protected, audited, and accessed. During migration, any gaps or inconsistencies in certification or implementation could expose sponsor information to heightened risk. Sponsors should not assume that all “approved” models offer equivalent security or compliance; understanding these distinctions is essential for evaluating the real-world risks to confidential submissions.

Data Handling, Retention, And Metadata Risks

A key reassurance from the FDA has been that Elsa would not train AI models on sponsor data.13 However, this is an agency policy, not a technical guarantee. Enterprise versions of Gemini and ChatGPT can likewise be configured not to train on user data, but the real risk lies in how each provider manages data retention, prompt logging, and metadata.

When Elsa migrates to a new model, default settings for how prompts and responses are stored may change or fail to carry over. For example, Google and OpenAI differ in how long they retain prompt data and what administrative controls are available for deletion or access.14 If these defaults are not carefully aligned with the FDA’s requirements before the transition, there may be a window in which sponsor data is retained longer than intended or accessible to more personnel than before.
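One way to reason about this alignment problem is as a gap analysis between agency requirements and a provider’s out-of-the-box defaults. The sketch below is purely hypothetical — the setting names and values are invented for illustration and do not reflect the FDA’s actual requirements or any provider’s actual defaults — but it shows the kind of check that would need to be completed before cutover:

```python
# Hypothetical pre-migration policy gap check. The requirement names and
# values below are illustrative placeholders, not real configurations.

AGENCY_REQUIREMENTS = {
    "prompt_retention_days": 0,       # no retention beyond the session
    "train_on_user_data": False,
    "admin_deletion_available": True,
}

def find_policy_gaps(provider_defaults: dict) -> list:
    """Return each requirement the provider's defaults fail to meet."""
    gaps = []
    for setting, required in AGENCY_REQUIREMENTS.items():
        actual = provider_defaults.get(setting)
        if setting.endswith("_days"):
            # Retention windows: the default must not exceed the requirement.
            if actual is None or actual > required:
                gaps.append(f"{setting}: required <= {required}, default is {actual}")
        elif actual != required:
            gaps.append(f"{setting}: required {required}, default is {actual}")
    return gaps

# A hypothetical new provider whose defaults differ on retention:
new_provider_defaults = {
    "prompt_retention_days": 30,
    "train_on_user_data": False,
    "admin_deletion_available": True,
}

for gap in find_policy_gaps(new_provider_defaults):
    print(gap)
```

Any nonempty result from a check like this represents exactly the window of risk described above: a period when the new environment’s behavior diverges from what sponsors were promised.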

Beyond raw data, both Gemini and ChatGPT use automated tools for safety and abuse detection, which generate metadata about user queries, even if the text itself isn’t stored.15 This metadata can reveal sensitive information about the nature of sponsor submissions and is often overlooked as a risk. For sponsors, this means that even with “no training” assurances, confidential information may still be exposed or inferred through new infrastructure during migration.

Administrative Record Integrity And Legal Exposure

A critical but under-discussed risk is the integrity of the regulatory record during Elsa’s migration. If a sponsor’s submission is reviewed partly with Claude-powered Elsa and partly with Gemini, the resulting administrative record could contain inconsistent or even conflicting AI-generated analysis. Under the Administrative Procedure Act (APA), such internal inconsistencies may expose the FDA’s decisions to legal challenge, potentially undermining the defensibility of regulatory outcomes.16

Recent events underscore the risk: The HHS Make America Healthy Again report was criticized for referencing hallucinated studies generated by AI, highlighting how real-world consequences can arise from unreliable outputs.17 For sponsors, this means that any model transition period increases the chance of flawed or contradictory regulatory records, making it even more important to document submissions carefully and seek clarity from the FDA about which AI model was used in their review.18

Ongoing Operational And Performance Risks

Elsa’s performance was already a concern before the migration, with FDA reviewers reporting hallucinated citations, inconsistent outputs, and a clunky user experience, even after months of development, testing, and fine-tuning on Claude.19 The forced, accelerated switch to Gemini (or another model) increases the risk that Elsa’s reliability will degrade further, at least in the short term, especially if there is little time for thorough validation.

Additionally, much of Elsa’s effectiveness depended on model-specific prompt engineering and workflow optimization. Changing the underlying AI model means this tuning is lost, requiring new rounds of adjustment and testing.20 For sponsors, this translates to heightened uncertainty about the accuracy and consistency of AI-driven regulatory analysis during the transition and a greater need to monitor for errors or misinterpretations in FDA responses.

Gemini Vs. ChatGPT: Platform-Specific Risks

While Elsa’s primary migration is to Google’s Gemini, confirmed by internal FDA communications, ChatGPT Enterprise remains available for other HHS tasks.21 This distinction matters for sponsors because the risks to regulatory submissions are now closely tied to Gemini’s integration within Elsa. However, if FDA reviewers find Gemini-powered Elsa inadequate for complex scientific analysis, they may bypass Elsa and use ChatGPT or other tools outside the platform’s controlled, validated environment. Such workarounds could expose sponsor data to less secure or less compliant systems, increasing the risk of data leakage or inconsistent handling. Sponsors should be alert to the possibility of their information being processed outside Elsa’s intended safeguards.

Legal And Regulatory Uncertainty

The regulatory landscape remains unsettled as Anthropic has announced it will challenge its supply chain risk designation in court.22 If this litigation succeeds, the FDA’s migration to Gemini could be paused or even reversed, forcing sponsors to navigate yet another abrupt change in regulatory technology. Additionally, the shift to Gemini (or potentially ChatGPT) creates new dependencies on a single provider’s infrastructure, increasing the risk of vendor lock-in and strategic vulnerability if future policy or technology shifts occur. For sponsors, this means planning for regulatory uncertainty and being prepared for further disruptions that could affect the handling and review of confidential submissions.

Sponsor-Focused Risk Assessment Checklist

Given the complexity and uncertainty of Elsa’s migration, sponsors should proactively assess their risk exposure. Consider these key questions:

  • Where is your data being processed — AWS GovCloud or Google Cloud?
  • Has the new AI model been fully validated for your specific use case?
  • What are the current data retention and prompt logging policies, and have they changed?
  • Can you confirm which AI model was used to review your submission?
  • What contingency plans are in place if the migration is paused, reversed, or further changes occur?

Asking these questions can help sponsors protect their confidential information during this transition.
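Sponsors who want to track these questions systematically could keep a simple structured log per submission. The sketch below is illustrative only; the field names and example values are our assumptions, not any required or official format:

```python
# Illustrative sponsor-side record for tracking how each submission was
# handled during the transition. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class SubmissionRiskRecord:
    submission_id: str
    submitted_on: date
    ai_model_confirmed: Optional[str] = None   # e.g., "Claude", "Gemini"
    cloud_environment: Optional[str] = None    # e.g., "AWS GovCloud"
    retention_policy_reviewed: bool = False
    open_questions: List[str] = field(default_factory=list)

# Usage: record what is known, and log each checklist item still open.
record = SubmissionRiskRecord("IND-0001", date(2026, 3, 20))
if record.ai_model_confirmed is None:
    record.open_questions.append("Which AI model reviewed this submission?")
if not record.retention_policy_reviewed:
    record.open_questions.append("Have retention/logging defaults changed?")
```

Even a lightweight log like this creates the documentation trail that becomes valuable if the administrative record is later challenged.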

Conclusion And Preview

The FDA’s migration from Claude to Gemini is not a routine technical change but a high-risk, multilayered transformation with immediate consequences for the security, reliability, and defensibility of sponsor data and regulatory outcomes. Sponsors must remain vigilant, ask the right questions, and document interactions during this period of flux. In our next article, we will provide practical guidance on how sponsors can safeguard confidential information and adapt regulatory strategies as this unprecedented transition continues to unfold.

References:

  1. Kimberly Chew, Esq., & Michael Yang, Esq., “FDA’s Elsa AI Switches From Claude To Gemini: What Sponsors Need To Know,” Clinical Leader (March 12, 2026), https://www.clinicalleader.com/doc/fda-s-elsa-ai-switches-from-claude-to-gemini-what-sponsors-need-to-know-0001.
  2. Nicholas Florko, HHS starts phasing out Anthropic’s Claude, STAT News (March 3, 2026), https://www.statnews.com/2026/03/03/hhs-starts-phasing-out-anthropic-claude-health-tech/
  3. Brendan Bordelon, Trump Orders All Federal Agencies to Cease Using Anthropic, POLITICO (Feb. 27, 2026, 4:11 PM), https://www.politico.com/news/2026/02/27/trump-orders-all-federal-agencies-to-stop-using-anthropic-00804517
  4. Alex Anteau, “Consequences Are ‘Enormous’: Anthropic Sues Department of War, Alleging ‘Retaliation’,” Law.com (March 9, 2026), https://www.law.com/2026/03/09/consequences-are-enormous-anthropic-sues-department-of-war-alleging-retaliation/.
  5. OpenAI and the FDA Are Holding Talks About Using AI in Drug Evaluation, WIRED (May 7, 2025, 3:59 PM), https://www.wired.com/story/openai-fda-doge-ai-drug-evaluation/
  6. FDA Press Release, FDA Launches Agency-Wide AI Tool to Optimize Performance for the American People (June 2, 2025), https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people and FDA News Release, FDA Expands Artificial Intelligence Capabilities with Agentic AI Deployment (Dec. 1, 2025) https://www.fda.gov/news-events/press-announcements/fda-expands-artificial-intelligence-capabilities-agentic-ai-deployment
  7. Id.
  8. Natalia Mesa, FDA’s AI Rollout Raises Questions Around Readiness, Legality, BIOSPACE (June 30, 2025), https://www.biospace.com/fda/fdas-ai-rollout-raises-questions-around-readiness-legality
  9. Margaret Manto, “HHS Tells Employees to Stop Using Anthropic’s Claude,” NOTUS (March 2, 2026), https://www.notus.org/trump-white-house/hhs-employees-stop-anthropic-claude-ai-platform.
  10. Brittany Trang, FDA Rolls Out AI Tool Agency-Wide, Weeks Ahead of Schedule, STAT+ (June 2, 2025), https://www.statnews.com/2025/06/02/fda-artificial-intelligence-implementation-plans-makary/
  11. Alice Rison & Steven Hin, “Gemini in Workspace Apps and the Gemini App Are First to Achieve FedRAMP High Authorization,” Google Cloud Blog (March 17, 2025), https://cloud.google.com/blog/topics/public-sector/gemini-in-workspace-apps-and-the-gemini-app-are-first-to-achieve-fedramp-high-authorization.
  12. Google Cloud, “FedRAMP Compliance,” https://cloud.google.com/security/compliance/fedramp.
  13. FDA Press Release, FDA Launches Agency-Wide AI Tool to Optimize Performance for the American People (June 2, 2025), https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people and FDA News Release, FDA Expands Artificial Intelligence Capabilities with Agentic AI Deployment (December 1, 2025) https://www.fda.gov/news-events/press-announcements/fda-expands-artificial-intelligence-capabilities-agentic-ai-deployment
  14. OpenAI, “How We’re Responding to The New York Times’ Data Demands in Order to Protect User Privacy,” (June 5, 2025), https://openai.com/index/response-to-nyt-data-demands/. Google, “Gemini Apps Privacy Hub,” last updated March 10, 2026, https://support.google.com/gemini/answer/13594961?hl=en.
  15. OpenAI, “Keeping Users Safe in the Age of AI,” (October 2025), https://cdn.openai.com/global-affairs/keeping-users-safe-in-the-age-of-ai-oct25.pdf.
  16. For more background, see Kimberly Chew, Odette Hauke, and Kathleen Snyder, “AI At The FDA: Legal Implications And Strategic Considerations For Drug Developers,” Clinical Leader, Jan. 19, 2026, https://www.clinicalleader.com/doc/ai-at-the-fda-legal-implications-and-strategic-considerations-for-drug-developers-0001 and Kimberly Chew, Odette Hauke, and Kathleen Snyder, “Navigating FDA's New AI Systems: Practical Tips For Regulatory Success,” Clinical Leader, Jan. 19, 2026, https://www.clinicalleader.com/doc/navigating-fda-s-new-ai-systems-practical-tips-for-regulatory-success-0001
  17. Phie Jacobs, Trump Officials Downplay Fake Citations in High-Profile Report on Children’s Health: References to Phantom Studies Comes After White House Pledge to Practice “Gold Standard” Science, SCIENCE (May 30, 2025, 4:50 PM ET), https://www.science.org/content/article/trump-officials-downplay-fake-citations-high-profile-report-children-s-health
  18. See Kimberly Chew, Esq., & Michael Yang, Esq., “FDA’s Elsa AI Switches From Claude To Gemini: What Sponsors Need To Know,” Clinical Leader (March 12, 2026), https://www.clinicalleader.com/doc/fda-s-elsa-ai-switches-from-claude-to-gemini-what-sponsors-need-to-know-0001.
  19. Chris Mazzolini & Mike Hollan, “FDA’s Elsa AI Tool Raises Accuracy and Oversight Concerns,” Applied Clinical Trials (July 23, 2025), https://www.appliedclinicaltrialsonline.com/view/fda-elsa-ai-tool-raises-accuracy-and-oversight-concerns. Sarah Owermohle, “FDA’s Artificial Intelligence Is Supposed to Revolutionize Drug Approvals. It’s Making Up Studies,” CNN (July 23, 2025), https://www.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
  20. See OpenAI, “Prompt Engineering,” accessed March 16, 2026, https://developers.openai.com/api/docs/guides/prompt-engineering (noting that different model types may need to be prompted differently and that even model snapshots within the same family can behave differently); Tianxiang Sun et al., “On Transferability of Prompt Tuning for Natural Language Processing,” NAACL 2022 (finding that directly reusing prompts in cross-model transfer is intractable), https://aclanthology.org/2022.naacl-main.290.pdf
  21. Margaret Manto, HHS Tells Employees to Stop Using Anthropic’s Claude, NOTUS (March 2, 2026, 1:41 PM), https://www.notus.org/trump-white-house/hhs-employees-stop-anthropic-claude-ai-platform
  22. Alex Anteau, “Consequences Are ‘Enormous’: Anthropic Sues Department of War, Alleging ‘Retaliation’,” Law.com (March 9, 2026), https://www.law.com/2026/03/09/consequences-are-enormous-anthropic-sues-department-of-war-alleging-retaliation/

About The Authors:

Kimberly Chew is senior counsel in Husch Blackwell LLP’s virtual office, The Link. Chew is a seasoned professional with a background in biotech research, leveraging her experience to guide clients through the intricate landscape of clinical trials, FDA regulations, and academic research compliance. As the cofounder and co-lead of the firm’s Psychedelic and Emerging Therapies practice group, Kimberly is inspired by the potential of psychedelic therapeutics to address mental health conditions like PTSD. Her practice encompasses regulatory due diligence and intellectual property enforcement, particularly in patent infringement and validity. She can be reached at kimberly.chew@huschblackwell.com.

Michael Yang is a principal in Husch Blackwell Consulting’s AI Advisory Services practice, where he helps organizations navigate the practical, legal, and governance challenges of adopting artificial intelligence. He works closely with executive leadership, legal and compliance teams, and technical stakeholders to ensure AI initiatives are effective, defensible, and responsibly deployed. Michael brings over 25 years of experience as a technology-focused attorney to his advisory work, providing a strong legal foundation for HBC AI’s consulting services. His background includes extensive work with artificial intelligence and generative AI technologies, advising product and engineering teams on development, deployment, and risk management.