AI At The FDA: Legal Implications And Strategic Considerations For Drug Developers
By Kimberly Chew, Esq., Odette Hauke, and Kathleen Snyder, Esq.

On December 1, 2025, the FDA announced the deployment of agentic AI capabilities for all agency employees. As part of that deployment, the agency is launching a two-month Agentic AI Challenge for FDA staff to build agentic AI solutions and demonstrate them at the FDA Scientific Computing Day in January 2026.1 This follows the FDA’s June 2, 2025, launch of Elsa, an agency‑wide generative‑AI assistant for employees, from scientific reviewers to investigators. Elsa, short for Electronic Language System Assistant, was built using Anthropic’s Claude model.2 Per the FDA’s announcements, both the agentic AI solutions and Elsa run in a high‑security GovCloud environment, and their models do not train on data submitted by regulated industry.3 The FDA describes current uses that include summarizing adverse events, performing label comparisons, assisting with clinical protocol review, generating code to improve internal databases, and helping identify high‑priority inspection targets, while keeping human decision‑making firmly in the loop.
For drug developers and their counsel, the significance is not simply speed. AI‑assisted review changes how the administrative record is created, how queries are framed, and how internal expectations are set during scientific review. Those shifts have legal and operational consequences. They intersect with trade secret and confidential commercial information (CCI) protections4, information security obligations (FISMA, OMB A‑130, NIST frameworks), records management duties under the Federal Records Act, and foundational administrative law requirements of reasoned, reviewable decision‑making (APA; Motor Vehicle Mfrs. Ass’n v. State Farm, 463 U.S. 29 (1983); Camp v. Pitts, 411 U.S. 138 (1973)).5
This two-part series provides a practical, solution-focused guide to help sponsors realize efficiency gains while designing for legal robustness. Part one addresses:
- Data confidentiality & trade secrets: What the statutes and regulations already protect, and where AI‑mediated workflows can still create exposure
- Security of prompts and exchanges: Why prompts themselves may contain CCI/trade secrets, and how to minimize spillover risk
- Due process & transparency: Preserving a reviewable administrative record when AI‑assisted prompts or outputs influence staff judgments
- Operational readiness: Documentation discipline, AI‑literacy, and “AI‑readiness” reviews to preempt misinterpretations
- Novel modalities: How precedent‑leaning tools may underrepresent psychedelic therapies, CGT, and digital biomarkers, and what that means for strategy.
Part two will address:
- Clarity and completeness in AI-era submissions
- Writing for both human and AI reviewers
- Proactive engagement with the FDA
- Trade secret and data protection in practice
- Meticulous recordkeeping and traceability
- Identifying and addressing AI limitations
- Building internal AI literacy
Our goal is to help teams capture the efficiency upside while designing for legal robustness, in support of fair, secure, and reviewable decisions.
The FDA’s AI Turn: Elsa And The Modernization Of Drug Review
Elsa’s deployment at the FDA introduces new review workflows: summarizing adverse events, surfacing data gaps, and triaging submissions. For legal and regulatory teams, understanding these technical capabilities is essential: AI-generated queries may anchor expectations or create administrative record artifacts that differ from traditional human-led reviews. Legal counsel should work closely with scientific teams to anticipate how Elsa might interpret novel endpoints or data modalities, ensuring that both the submission and the record are robust and reviewable.
What Elsa Does Today (As Publicly Described)
The FDA’s deployment of Elsa marks a pivotal modernization in regulatory review.6 Publicly, the agency describes Elsa as a secure, internal-only AI assistant designed to enhance efficiency and analytical consistency. Its capabilities include summarizing adverse event trends, comparing product labels, and identifying inconsistencies across submissions. Elsa also supports protocol review by flagging anomalies and surfacing precedent-based questions, helping to triage data for inspection prioritization, and generating code snippets to streamline internal databases. Because Elsa operates in a FedRAMP‑certified GovCloud environment7 and does not train on sponsor data, the tool’s use is framed as risk-controlled and focused on augmenting, not replacing, human scientific judgment. For sponsors, understanding these technical contours is essential: AI‑generated prompts may anchor internal expectations, influence reviewer focus, or introduce artifacts into the administrative record that differ from traditional review processes.
Where Risk Surfaces For Sponsors
The adoption of AI in regulatory review introduces novel exposure points for sponsors. First, regulatory signaling risk arises when AI‑framed queries inadvertently shape reviewer expectations. A mis‑scoped or pattern‑driven prompt could imply a deficiency where none exists. Sponsors should counter this by providing precise dossier citations, offering concise rationales, and documenting clarifications in writing. Second, record integrity becomes critical. If Elsa‑generated prompts or summaries influence regulatory correspondence, those artifacts and the human rationale for adopting them should be preserved in the administrative record to ensure reviewability under State Farm and Camp.8 Third, trade secret hygiene requires vigilance: Prompts and responses may themselves contain CCI. Sponsors should avoid embedding proprietary details unnecessarily, tag sensitive content clearly, and route it through secure internal channels. Finally, even within secure GovCloud systems, AI‑assisted workflows create new intra‑agency sharing vectors.9 Counsel should account for this in risk assessments and reinforce the FDA’s confidentiality obligations under 21 CFR 20.61, 314.430, and 601.51.10
Novel Or Underrepresented Modalities
In areas such as psychedelic drug development, where precedent is sparse and endpoints are subjective, Elsa may over‑weight historical comparators or misinterpret novel measures. Sponsors can mitigate this by including concise bridging rationales for each novel endpoint, cross‑referencing statistical analysis plans to show bias controls, and explicitly requesting human expert adjudication when AI‑framed queries appear modality‑inapt. Documenting these exchanges contemporaneously ensures a transparent and reviewable administrative record.
Practical Implications For Sponsors
Sponsors should institutionalize an AI‑readiness review before major submissions to identify ambiguities that an algorithm might surface. Maintaining a contemporaneous log of data submitted, queries received, and responses provided, with timestamps and dossier citations, will strengthen record integrity. Cross‑functional AI literacy programs should train regulatory, clinical, and legal teams to recognize AI‑generated language and respond appropriately. Finally, sponsors should establish formal clarification pathways to address AI‑framed requests that appear immaterial or mis‑scoped and map all AI‑related correspondence to internal records retention schedules under the Federal Records Act.
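The contemporaneous-log discipline described above can be sketched as a minimal data structure. The following Python sketch is purely illustrative: the field names, dossier citations, and exchange content are hypothetical, and a real system would sit inside a validated records-management platform rather than an in-memory list.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass(frozen=True)
class CorrespondenceEntry:
    """One FDA exchange: what was sent or received, when, and where it maps in the dossier."""
    direction: str            # "submitted" or "received"
    summary: str              # brief description of the exchange
    dossier_citation: str     # e.g., a CTD module/section reference (hypothetical here)
    content_sha256: str       # fingerprint of the exact artifact exchanged
    timestamp_utc: str        # ISO 8601, recorded contemporaneously

def log_exchange(log: list, direction: str, summary: str,
                 dossier_citation: str, artifact_text: str) -> CorrespondenceEntry:
    """Append a timestamped, hash-fingerprinted entry to the log."""
    entry = CorrespondenceEntry(
        direction=direction,
        summary=summary,
        dossier_citation=dossier_citation,
        content_sha256=hashlib.sha256(artifact_text.encode("utf-8")).hexdigest(),
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
    )
    log.append(entry)
    return entry

# Example: record a query received and the response provided (hypothetical content).
audit_log: list = []
log_exchange(audit_log, "received",
             "Agency query on secondary endpoint definition",
             "Module 2.7.3, Section 4.1",
             "Please clarify the derivation of the responder analysis.")
log_exchange(audit_log, "submitted",
             "Written clarification with SAP cross-reference",
             "Module 5.3.5.1, SAP v2.0 Section 6",
             "The responder threshold is prespecified in SAP Section 6.2.")
print(json.dumps([asdict(e) for e in audit_log], indent=2))
```

The content hash is the key design choice: it ties each log entry to the exact artifact exchanged, so the record can later be reconciled against what the agency actually received.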
Key Legal Issues And Concerns For Life Sciences Companies
Data Confidentiality And Trade Secret Protection
The deployment of AI tools like Elsa at the FDA introduces new efficiencies, but also new complexities, for protecting the confidentiality of sponsor data, trade secrets, and CCI. While foundational statutes and regulations provide robust protections, the operational realities of AI-assisted review require sponsors to revisit and reinforce their data protection strategies. Legal, regulatory, and scientific teams must work together to ensure that both the substance and the process of FDA review remain compliant and defensible.
A comprehensive framework of federal statutes and FDA regulations governs the protection of trade secrets and CCI submitted in support of NDAs, BLAs, and other regulatory filings. The FD&C Act § 301(j) (21 U.S.C. § 331(j)) prohibits the FDA from revealing “any method or process which is entitled to protection as a trade secret,” a statutory bar that applies regardless of whether data is accessed by human or AI reviewers. Complementing this, the Trade Secrets Act11 criminalizes unauthorized disclosure by federal employees of trade secrets or confidential commercial or financial information obtained in the course of their duties. FOIA Exemption 412 further shields from public disclosure all “trade secrets and commercial or financial information obtained from a person and privileged or confidential,” with the Supreme Court’s decision in Food Marketing Institute v. Argus Leader Media clarifying that information is “confidential” if it is customarily kept private and provided to the government under an assurance of privacy.13
FDA regulations at 21 CFR 314.430 (for NDAs) and 21 CFR 601.51 (for BLAs) specify what information may be disclosed: the existence of an NDA is not disclosed prior to approval14, and manufacturing methods, processes, and other trade secrets or CCI are protected unless previously made public15, with parallel protections for BLAs. Additionally, 21 CFR 20.61 defines the agency’s treatment of trade secrets and CCI, reinforcing the FDA’s obligation to withhold such information from public disclosure, while 21 CFR 20.85 allows for intra-agency sharing of otherwise exempt records but does not override statutory bars on trade secret or CCI disclosure. Together, these provisions establish a robust legal foundation for the protection of sponsor data throughout the FDA review process, including in the context of AI-assisted workflows.
While Elsa’s design mitigates many traditional security risks by operating within a FedRAMP-certified environment and not training on sponsor data, its use still raises important questions about the management and oversight of AI-generated regulatory artifacts. Prompts and exchanges with Elsa, including clarifications, queries, or responses, may themselves contain CCI or trade secrets. Unlike traditional review, where disclosure is generally limited to formal submissions, AI-assisted processes can generate new artifacts such as prompt logs or AI-generated summaries that may be stored or shared internally within the agency.
To address this, sponsors should adopt a “minimum necessary disclosure” approach in all written communications with the FDA, particularly when responding to agency queries that may have been informed by AI tools. It is critical to avoid including unnecessary proprietary detail in prompts, emails, or uploaded artifacts and to clearly tag CCI while ensuring that sensitive content is routed through appropriate internal channels. Because any prompt or exchange with Elsa may be logged or become part of the administrative record, sponsors should document what information constitutes CCI, apply consistent tagging, and maintain meticulous records of what was submitted, when, and to whom. Remedies for inadvertent disclosure of trade secrets or CCI, especially if such disclosure results from AI system logs or internal sharing, may be limited and uncertain. For this reason, prevention through disciplined disclosure and recordkeeping is paramount. Finally, sponsors should reinforce the FDA’s confidentiality obligations in their correspondence by referencing the relevant statutory and regulatory provisions, such as 21 CFR 20.61, 21 CFR 314.430, 21 CFR 601.51, and FOIA Exemption 4.16 Doing so helps set clear expectations and signals heightened sensitivity regarding AI-mediated exchanges.
The Security Of Prompts And Data Submissions
The introduction of AI tools like Elsa into the FDA’s regulatory workflow brings new security considerations, especially regarding how prompts, queries, and data submissions are handled within digital systems. While Elsa’s FedRAMP-certified environment and the FDA’s existing confidentiality obligations provide a strong baseline of protection,17 AI-assisted review can generate new types of internal records, such as prompt logs or AI-generated outputs, that may contain proprietary information.
Unlike traditional review processes, AI systems can create additional artifacts, such as logs of prompts and responses, that may be stored, accessed, or shared more widely within the agency than intended. This increases the risk of inadvertent disclosure or access to sensitive information, particularly if these records are integrated with other internal tools or made available beyond the immediate review team.
To address these evolving risks, sponsors should:
- Limit detail in prompts and submissions: Avoid unnecessary disclosure of proprietary information in all communications, including responses to queries that may have originated from AI tools.
- Clearly mark CCI and trade secrets: Consistently tag and route sensitive content through secure channels, as outlined in existing legal and regulatory frameworks.
- Coordinate with IT and security teams: Ensure internal protocols and technical safeguards are updated to reflect the unique risks of AI-mediated review, such as the potential for broader internal sharing or retention of prompt logs.
- Clarify data handling expectations: In communications with the FDA, sponsors should seek confirmation on how sensitive data, including AI-generated artifacts, will be protected and request transparency about data retention and access within the agency.
Data security concerns extend beyond the risk of unauthorized access or disclosure within the FDA. Cyber threats, vulnerabilities in cloud-based systems, and the possibility of unintended sharing with other federal agencies all underscore the need for vigilance. Although 21 CFR 20.85 allows for intra-agency sharing of exempt records, it does not override statutory bars on trade secret or CCI disclosure. Sponsors should clarify these boundaries in communications with the FDA and request confirmation of how sensitive data, including prompts and AI-generated artifacts, will be protected throughout the review process. By practicing disciplined disclosure, reinforcing FDA’s obligations in correspondence, and collaborating with information security experts, sponsors can help ensure that their proprietary data remains protected in the evolving landscape of AI-enabled regulatory review.
Due Process And Transparency
The integration of AI tools like Elsa into the FDA’s regulatory review process introduces new challenges for due process and transparency, cornerstones of administrative law that are essential to fair, reasoned, and reviewable agency decision-making. The Administrative Procedure Act (APA)18, together with the Fifth Amendment Due Process Clause, requires that agency actions be grounded in a reasoned explanation and supported by an adequate administrative record. These requirements are further underscored by Supreme Court precedents19 that emphasize that agency decisions must be reviewable and based on the record before the agency at the time of the decision.
The use of AI-generated prompts, summaries, or analyses in FDA review does not alter these legal standards, but it does raise important questions about how the administrative record is created and maintained. When Elsa informs regulatory decisions or triggers requests for additional information, it is essential that both the AI-generated artifacts and the human reviewer’s rationale for relying on them are preserved in the administrative record. This dual documentation is critical to ensuring that sponsors can understand, respond to, and, if necessary, challenge the basis for agency actions. Sponsors should proactively request that the FDA retain both the AI-generated materials and the associated human judgment as part of the official record, as required for judicial review under the APA and relevant case law.
Transparency also demands that sponsors be able to discern the basis for any queries or requests generated by Elsa. If the rationale for an AI-driven request is unclear or appears inconsistent with the submitted data, sponsors should seek clarification in writing and, where appropriate, request an explanation of the underlying reasoning. This approach not only supports sponsors’ ability to provide effective responses but also helps ensure that the administrative record accurately reflects the decision-making process. Sponsors can further protect their procedural rights by referencing relevant FDA administrative regulations, including 21 CFR Part 10 (administrative practices and procedures) and 21 CFR 314.200 (NDA hearing procedures), which provide mechanisms for sponsors to submit additional information, request clarification, and formally respond to agency findings.
In practice, sponsors should maintain a contemporaneous log of all communications with the FDA, including timestamps, references to specific dossier sections, and records of both AI-generated and human-authored materials. This documentation will be essential for reconstructing the administrative record and supporting any future challenge or clarification of agency actions. In summary, the adoption of AI-assisted review at the FDA makes it more important than ever for sponsors to actively safeguard due process and transparency.
Practical Tips For Navigating AI-Assisted FDA Review
As discussed above, the FDA is rapidly modernizing its regulatory review processes with advanced AI systems, notably Elsa, and is developing other agentic AI capabilities. These technologies promise greater speed, consistency, and data-driven insights, but they also introduce new complexities for sponsors. As the FDA integrates AI into its regulatory workflows, sponsors should anticipate new communication patterns and data‑driven inquiries, and should consider the following as they develop their submission strategies, documentation practices, and internal capabilities. Meticulously document all data, prompts, and responses exchanged with the agency. Internal AI‑readiness simulations can help identify ambiguities before submission, reducing the likelihood of unexpected AI‑generated queries. When a suspected or identified AI‑generated rationale is unclear, request clarification in writing and explicitly reference the corresponding dossier sections. Develop AI literacy within teams to recognize and respond to AI‑framed language, and apply trade secret strategies, such as redaction, anonymization, and CCI tagging, to limit unnecessary disclosure. Finally, engage early with the FDA to discuss how Elsa will be used in your review, particularly for novel modalities, and proactively address anticipated challenges in your filings.
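The redaction and CCI-tagging strategy mentioned above can be sketched in a few lines of Python. This is a hypothetical illustration, assuming the sponsor maintains an internal glossary of terms it has designated as CCI or trade secret; the terms, placeholders, and function below are invented for the example, not an established tool or standard.

```python
import re

# Hypothetical sponsor-maintained glossary mapping designated CCI/trade-secret
# terms to labeled placeholders for outbound drafts.
CCI_TERMS = {
    "XY-1234 synthesis route": "[CCI-REDACTED: manufacturing process]",
    "proprietary lipid formulation": "[CCI-REDACTED: formulation]",
}

def redact_cci(text: str, glossary: dict) -> tuple:
    """Replace designated CCI terms with labeled placeholders; return (text, hit count)."""
    hits = 0
    for term, placeholder in glossary.items():
        # Case-insensitive literal match; re.escape neutralizes regex metacharacters.
        pattern = re.compile(re.escape(term), flags=re.IGNORECASE)
        text, n = pattern.subn(placeholder, text)
        hits += n
    return text, hits

draft = ("Our response references the XY-1234 synthesis route and the "
         "proprietary lipid formulation described in Module 3.")
clean, n_redacted = redact_cci(draft, CCI_TERMS)
print(clean)       # placeholders appear in place of the two designated terms
print(n_redacted)  # 2
```

A hit count of zero on a draft that discusses sensitive subject matter is itself a signal worth reviewing: it may mean the glossary is incomplete rather than that the draft is clean, so automated tagging should supplement, never replace, human legal review.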
Conclusion
The FDA’s adoption of advanced AI tools such as Elsa marks a significant shift in regulatory review, offering new opportunities for efficiency while introducing fresh legal and operational challenges for drug developers. As AI becomes further embedded in agency workflows, sponsors must be proactive in safeguarding data confidentiality, maintaining the integrity of the administrative record, and ensuring transparency and due process. By understanding the evolving risk landscape and reinforcing robust documentation, security, and compliance practices, organizations can position themselves for regulatory success in this new era.
In part two of this series, we will move from legal foundations to practical strategies. We’ll provide actionable tips for structuring submissions, engaging with the FDA, protecting trade secrets in practice, and building internal AI literacy — empowering sponsors to navigate AI-enabled review processes with confidence and agility.
Endnotes:
1. https://www.fda.gov/news-events/press-announcements/fda-expands-artificial-intelligence-capabilities-agentic-ai-deployment
2. https://www.definitivehc.com/blog/fda-releases-ai-tool-elsa
3. FDA Press Release, FDA Launches Agency-Wide AI Tool to Optimize Performance for the American People (June 2, 2025), https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people; FDA News Release, FDA Expands Artificial Intelligence Capabilities with Agentic AI Deployment (Dec. 1, 2025), https://www.fda.gov/news-events/press-announcements/fda-expands-artificial-intelligence-capabilities-agentic-ai-deployment.
4. FD&C Act § 301(j); 21 CFR 20.61; 21 CFR 314.430 / 601.51; FOIA Exemption 4.
5. While Loper Bright significantly altered the field of administrative law, it did not affect the holding in State Farm.
6. Reuters, U.S. FDA Launches AI Tool to Reduce Time Taken for Scientific Reviews (June 2, 2025).
7. FDA Press Release, FDA Launches Agency-Wide AI Tool to Optimize Performance for the American People (June 2, 2025), https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people.
8. Motor Vehicle Mfrs. Ass’n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29 (1983); Camp v. Pitts, 411 U.S. 138 (1973).
9. 21 C.F.R. § 20.85 (2024).
10. 21 C.F.R. §§ 20.61, 314.430, 601.51 (2024); 21 U.S.C. § 331(j); 18 U.S.C. § 1905.
11. 18 U.S.C. § 1905.
12. 5 U.S.C. § 552(b)(4).
13. Food Marketing Institute v. Argus Leader Media, 139 S. Ct. 2356 (2019).
14. 21 CFR 314.430(b).
15. 21 CFR 314.430(d).
16. U.S. Dep’t of Justice, Office of Information Policy, FOIA Guide: Exemption 4 (Oct. 2019), https://www.justice.gov/oip.
17. 21 U.S.C. § 331(j); 5 U.S.C. § 552(b)(4); 21 CFR 20.61, 314.430, 601.51; 44 U.S.C. § 3551 et seq.; OMB Circular A-130, “Managing Information as a Strategic Resource” (rev. 2016), https://www.cio.gov/policies-and-priorities/circular-a-130/; NIST SP 800-53 and related frameworks, https://www.nist.gov/privacy-framework/nist-privacy-framework-and-cybersecurity-framework-nist-special-publication-800-53; FDA Press Release, FDA Launches Agency-Wide AI Tool to Optimize Performance for the American People (June 2, 2025), https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people.
18. 5 U.S.C. §§ 551–559, 701–706.
19. Motor Vehicle Mfrs. Ass’n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29 (1983); Camp v. Pitts, 411 U.S. 138 (1973).
About The Authors:
Kimberly Chew is senior counsel in Husch Blackwell LLP’s virtual office, The Link. Chew is a seasoned professional with a rich background in biotech research, leveraging her extensive experience to guide clients through the intricate landscape of clinical trials, FDA regulations, and academic research compliance. As the co-founder and co-lead of the firm’s psychedelic and emerging therapies practice group, Kimberly is particularly inspired by the potential of psychedelic therapeutics to address mental health conditions like PTSD. Her practice encompasses regulatory due diligence and intellectual property enforcement, particularly in patent infringement and validity.
Odette Hauke is a global regulatory affairs consultant supporting regulatory strategy across clinical development and registration, with an emphasis on clear regulatory narratives and submission strategies that meet heightened evidentiary expectations. She has 12+ years’ experience directing IND/CTA/NDA/BLA/MAA work across the U.S., EU, UK, Japan, Canada, APAC, and Latin America, and is experienced in integrating AI/ML-enabled regulatory intelligence into decision-making. Previously, she served as associate director of regulatory affairs at AtaiBeckley, leading global regulatory strategy for first-in-class psychedelic and neuropsychiatric programs including VLS-01 (DMT) and EMP-01 (MDMA), navigating novel endpoints, complex trial operations, Schedule I requirements, and evolving global guidance. Earlier, at Memorial Sloan Kettering Cancer Center, she managed 200+ oncology IND submissions and maintained regulatory documentation for 30+ clinical trials, including pediatric research. She holds an M.S. in Regulatory Affairs and a B.S. in Epidemiology.
Based in Boston, Kathleen Snyder practices at the intersection of healthcare and technology, providing clients with practical legal advice on AI governance, strategic technology and commercial contracts, data strategies, intellectual property, and regulatory interpretation. With 20+ years of experience in the healthcare industry, Kathleen has an intrinsic understanding of the healthcare landscape. Her technology-focused transactional practice, coupled with her regulatory experience, gives her a unique perspective that allows her to provide holistic legal advice.