FDA's Elsa AI Switches From Claude To Gemini: What Sponsors Need To Know
By Kimberly Chew, Esq., and Michael Yang, Esq.

The FDA has been integrating AI into regulatory review with its generative AI assistant Elsa playing a central role in how the agency’s scientific reviewers analyze submissions, identify data gaps, and set regulatory priorities.1 Until now, Elsa’s foundation was Anthropic’s Claude model,2 deployed within a tightly controlled, government-approved cloud environment (FedRAMP-High AWS GovCloud).
That foundation is now shifting, abruptly and under circumstances that demand close attention from life sciences sponsors.
What Happened: The Political And Institutional Backdrop
This is not a routine technology upgrade. On February 27, 2026, President Trump issued a directive requiring all federal agencies to cease using Anthropic’s Claude, following a sharp dispute between the Pentagon and Anthropic over the use of AI for autonomous weapons and mass surveillance.3 Defense Secretary Pete Hegseth designated Anthropic a national security supply chain risk — a move that is being challenged by the company.4
This politically forced migration is likely to be faster, less methodical, and less transparent than a planned, agency-led transition. The FDA and HHS have responded by directing staff to discontinue Claude and transition to Google’s Gemini or, in some contexts, OpenAI’s ChatGPT Enterprise.5 Internal FDA communications indicate that Gemini is already available within Elsa and is set to become the primary model.6 There is no public indication that ChatGPT has been directly integrated into Elsa, though it remains an approved alternative for other HHS use cases.7
This context is critical: Sponsors must not assume that Elsa’s migration is a controlled, fully validated process. The speed and opacity of the transition, combined with Elsa’s complex architecture and the agency’s prior struggles with the platform, create a period of elevated risk for sponsors’ confidential data and regulatory submissions.
Elsa’s Technical Architecture: Why This Transition Is Not Just A “Model Swap”
Elsa is not a generic chatbot.8 Built by Deloitte, Elsa is a custom retrieval-augmented generation (RAG) system,9 originally evolved from CDER-GPT,10 and deployed in AWS GovCloud.11 It integrates FDA-specific document stores, embedding models, vector databases, and custom prompt templates — all tuned to Claude’s behavior.12
The practical upshot:
- Swapping out Claude for Gemini (or, hypothetically, ChatGPT) is not simply exchanging one API (application programming interface, the interface through which software applications communicate) for another.
- The entire RAG pipeline — how documents are retrieved, interpreted, and summarized — must be reengineered and revalidated.
- If Gemini cannot run within AWS GovCloud, the migration may also entail a move to Google’s FedRAMP-High cloud, raising new data residency and security validation issues.
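To make the "not just a model swap" point concrete, the toy sketch below (hypothetical names only; it is not FDA or Deloitte code) shows the model-coupled pieces of a minimal RAG pipeline: the embedding and retrieval step, and per-model prompt templates. Changing the generator model means re-tuning and re-validating each of these components, not just pointing the system at a new API endpoint.

```python
# Illustrative-only RAG sketch. Real systems use learned embedding models and
# vector databases; this toy version uses bag-of-words vectors to show the
# same structural dependencies.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'. In a production RAG system, the embedding
    model is itself a separately validated component with its own vector space."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Prompt templates are tuned to each model's behavior. A forced migration must
# re-tune and re-validate these templates, not merely swap the endpoint.
PROMPT_TEMPLATES = {
    "model_a": "Context:\n{context}\n\nAnswer concisely: {question}",
    "model_b": "<context>{context}</context>\nQuestion: {question}",
}

def build_prompt(model: str, question: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(question, docs))
    return PROMPT_TEMPLATES[model].format(context=context, question=question)

docs = [
    "The submission lists three clinical endpoints.",
    "Stability data cover 24 months at 25C.",
]
print(build_prompt("model_a", "How many endpoints?", docs))
```

The two templates above produce different prompts from identical inputs, which is the crux: output quality, hallucination rates, and even output parsing can shift when the underlying model changes, so every stage downstream of retrieval needs revalidation.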
Compounding the risk: Elsa’s performance on Claude was already criticized internally for hallucinations and inconsistent outputs, even after extensive development.13 The forced, accelerated migration increases the likelihood of degraded reliability and new vulnerabilities.
Legal, Compliance, And Record Integrity Risks
- Data Handling and Training: The FDA’s no-training-on-sponsor-data policy was a deliberate agency decision, not a technical feature of Claude. Both Gemini and ChatGPT Enterprise offer comparable non-training defaults. However, the technical implementation and the controls on data retention, prompt logging, and metadata generation must be revalidated for the new environment.
- Compliance Differences: Gemini has achieved FedRAMP-High authorization for Google Workspace and Vertex AI.14 ChatGPT Enterprise is “in process” and, for government use, currently requires self-hosting on Azure Government.15 These are not cosmetic distinctions; they fundamentally affect data security and sponsor risk.
- Administrative Record Integrity: If a submission is reviewed using both Claude- and Gemini-powered Elsa, the administrative record may contain inconsistent AI-generated analysis. Under the federal Administrative Procedure Act (APA), such inconsistencies could expose the FDA’s decisions to legal challenge.
- Ongoing Legal Uncertainty: Anthropic has announced it will challenge its supply chain risk designation in court. If successful, the transition could be paused or reversed, compounding uncertainty for sponsors.
What Sponsors Need To Do Now: High-Level Safeguards
Given the above, sponsors should treat this transition period as one of heightened risk. While a detailed technical and operational risk analysis will follow in our next article, here are immediate high-level safeguards to consider:
- Reevaluate Disclosure Practices: Disclose only what is strictly necessary in FDA communications. Mark all sensitive content as “Confidential Commercial Information” or “Trade Secret,” and reference statutory protections.
- Seek Updated Assurances: Request written clarification from the FDA about which model is being used for your submissions, how data is handled, and what security controls are in place post-migration.
- Document Everything: Maintain a contemporaneous log of all submissions, queries, and responses. Note any indication of AI-generated queries and model transitions.
- Request Model Disclosure: For pending or active submissions, ask the FDA to notify you if the underlying AI model changes mid-review.
- Coordinate with Industry Peers: Engage with industry associations to share information and press for collective transparency from the FDA.16
Looking Ahead
The FDA’s forced migration from Claude to Gemini (and potentially other models) is a watershed moment for AI in regulatory review. The risks are not theoretical; they are immediate, multilayered, and, in some cases, already materializing. In our next article, we will provide a deep technical analysis of the migration risks, including compliance, data residency, and administrative record issues. The third article will offer detailed practical guidance for sponsors navigating this evolving landscape.
For now, clinical trial sponsors should assume that Elsa’s processes are changing and take extra care to protect their confidential information until the new system is fully validated.
References:
- FDA Press Release, FDA Launches Agency-Wide AI Tool to Optimize Performance for the American People (June 2, 2025), https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people and FDA News Release, FDA Expands Artificial Intelligence Capabilities with Agentic AI Deployment (December 1, 2025), https://www.fda.gov/news-events/press-announcements/fda-expands-artificial-intelligence-capabilities-agentic-ai-deployment
- Nicole Witowski, How AI tool Elsa could shape the FDA review process, Definitive Healthcare Blog, July 15, 2025, https://www.definitivehc.com/blog/fda-releases-ai-tool-elsa.
- Brendan Bordelon, Trump Orders All Federal Agencies to Cease Using Anthropic, POLITICO (Feb. 27, 2026, 4:11 PM), https://www.politico.com/news/2026/02/27/trump-orders-all-federal-agencies-to-stop-using-anthropic-00804517
- Sheera Frenkel, Cade Metz & Julian E. Barnes, How Talks Between Anthropic and the Defense Dept. Fell Apart, N.Y. TIMES (Mar. 1, 2026), https://www.nytimes.com/2026/03/01/technology/anthropic-defense-dept-openai-talks.html. See also Alex Anteau, Consequences Are “Enormous”: Anthropic Sues Department of War, Alleging “Retaliation,” LAW.COM (Mar. 9, 2026), https://www.law.com/2026/03/09/consequences-are-enormous-anthropic-sues-department-of-war-alleging-retaliation/.
- Nicholas Florko, HHS starts phasing out Anthropic’s Claude, STAT News (Mar. 3, 2026), https://www.statnews.com/2026/03/03/hhs-starts-phasing-out-anthropic-claude-health-tech/
- Margaret Manto, HHS Tells Employees to Stop Using Anthropic’s Claude, NOTUS (Mar. 2, 2026, 1:41 PM), https://www.notus.org/trump-white-house/hhs-employees-stop-anthropic-claude-ai-platform
- U.S. General Services Administration, GSA Announces New Partnership with OpenAI, Delivering Deep Discount to ChatGPT Gov-Wide Through MAS (Aug. 6, 2025), https://www.gsa.gov/about-us/newsroom/news-releases/gsa-announces-new-partnership-with-openai-delivering-deep-discount-to-chatgpt-08062025
- U.S. Food & Drug Admin., Press Release, FDA Launches Agency-Wide AI Tool to Optimize Performance for the American People (June 2, 2025), https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people
- Retrieval-augmented generation (RAG) is a technique used with large language models (LLMs) that enables the model to access and use specific, up-to-date information from selected data sources, giving users more control over what information the AI draws on when generating responses. See Patrick Lewis et al., “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks” (2020), https://ui.adsabs.harvard.edu/abs/2020arXiv200511401L
- OpenAI and the FDA Are Holding Talks About Using AI in Drug Evaluation, WIRED (May 7, 2025, 3:59 PM), https://www.wired.com/story/openai-fda-doge-ai-drug-evaluation/
- Brittany Trang, FDA Rolls Out AI Tool Agency-Wide, Weeks Ahead of Schedule, STAT+ (June 2, 2025), https://www.statnews.com/2025/06/02/fda-artificial-intelligence-implementation-plans-makary/
- Natalia Mesa, FDA’s AI Rollout Raises Questions Around Readiness, Legality, BIOSPACE (June 30, 2025), https://www.biospace.com/fda/fdas-ai-rollout-raises-questions-around-readiness-legality
- Phie Jacobs, Trump Officials Downplay Fake Citations in High-Profile Report on Children’s Health: References to Phantom Studies Comes After White House Pledge to Practice “Gold Standard” Science, SCIENCE (May 30, 2025, 4:50 PM ET), https://www.science.org/content/article/trump-officials-downplay-fake-citations-high-profile-report-children-s-health
- Rajat Gupta & Alice Rison, Vertex AI Search and Generative AI on Vertex AI Achieve FedRAMP High Authorization, GOOGLE CLOUD BLOG (Mar. 20, 2025), https://cloud.google.com/blog/topics/public-sector/vertex-ai-search-and-generative-ai-with-gemini-achieve-fedramp-high
- See Nihal Krishan, Microsoft Launches Generative AI Service for Government Agencies, FEDSCOOP (June 7, 2023), https://fedscoop.com/microsoft-launches-azure-openai-service-for-government/ and OpenAI, Providing ChatGPT to the Entire U.S. Federal Workforce: First-of-Its-Kind Partnership with General Services Administration Will Give Federal Agencies Access to ChatGPT Enterprise for $1 for the Next Year (Aug. 6, 2025), https://openai.com/index/providing-chatgpt-to-the-entire-us-federal-workforce/
- For more background, see Kimberly Chew, Odette Hauke, and Kathleen Snyder, “AI At The FDA: Legal Implications And Strategic Considerations For Drug Developers,” Clinical Leader, January 19, 2026, https://www.clinicalleader.com/doc/ai-at-the-fda-legal-implications-and-strategic-considerations-for-drug-developers-0001 and Kimberly Chew, Odette Hauke, and Kathleen Snyder, “Navigating FDA's New AI Systems: Practical Tips For Regulatory Success,” Clinical Leader, January 19, 2026, https://www.clinicalleader.com/doc/navigating-fda-s-new-ai-systems-practical-tips-for-regulatory-success-0001.
About The Authors:
Kimberly Chew is senior counsel in Husch Blackwell LLP’s virtual office, The Link. Chew is a seasoned professional with a background in biotech research, leveraging her experience to guide clients through the intricate landscape of clinical trials, FDA regulations, and academic research compliance. As the cofounder and co-lead of the firm’s Psychedelic and Emerging Therapies practice group, Kimberly is inspired by the potential of psychedelic therapeutics to address mental health conditions like PTSD. Her practice encompasses regulatory due diligence and intellectual property enforcement, particularly in patent infringement and validity. She can be reached at kimberly.chew@huschblackwell.com.
Michael Yang is a principal in Husch Blackwell Consulting’s AI Advisory Services practice, where he helps organizations navigate the practical, legal, and governance challenges of adopting AI. He works closely with executive leadership, legal and compliance teams, and technical stakeholders to ensure AI initiatives are effective, defensible, and responsibly deployed. Michael brings over 25 years of experience as a technology-focused attorney to his advisory work, providing a legal foundation for HBC AI’s consulting services. His background includes extensive work with AI and generative AI technologies, advising product and engineering teams on development, deployment, and risk management. He can be reached at michael.yang@hbconsulting.com.