FDA's Elsa May Prompt Pharma To Rethink Regulatory Filings
By Pradeepta Mishra, VP, AI Innovation, Beghou Consulting

For many years, pharmaceutical regulatory filings have been characterized by their complexity, high document volume, and manual preparation. Even with the implementation of the electronic Common Technical Document (eCTD) filing standard, the process remained labor-intensive, dependent on human expertise for content generation, inter-document coherence, and quality assurance.
The FDA's recent decision to pilot generative AI marks a foundational change: regulators themselves are beginning to adopt the same disruptive technologies that have reshaped clinical development and manufacturing analytics. This is more than an incremental efficiency gain; it signals a paradigm shift in regulatory science, potentially accelerating an environment in which data-centric, AI-interpretable, and automation-validated submissions become the industry standard. Pharmaceutical organizations will therefore need to evolve their regulatory strategies, authoring procedures, and governance frameworks quickly.
What Exactly Is Changing Inside the FDA?
Historically, the FDA has been cautious about adopting automation within regulatory review, focusing instead on structured data formats (e.g., eCTD v4.0), structured product labeling (SPL), and initiatives such as KASA (knowledge-aided assessment & structured application) for chemistry, manufacturing, and controls (CMC). These efforts aimed to enhance data structure but still relied heavily on human reviewers.
Generative AI fundamentally changes this paradigm in several ways:
- Narrative analytics and summarization: FDA pilots are using large language models (LLMs) to synthesize clinical trial narratives, detect discrepancies across clinical summaries (Modules 2 and 5), and cross-reference risk narratives with raw study data.
- Automated consistency checks: AI systems can flag cross-document inconsistencies (e.g., efficacy endpoints reported differently in clinical study report (CSR) text vs. tabular data sets) at scale, something previously reliant on manual quality control by FDA staff.
- Pattern recognition across submissions: By analyzing thousands of historical submissions, AI models can highlight areas with high regulatory query frequency, allowing FDA reviewers to focus on high-risk dossier sections proactively.
These capabilities suggest a shift from procedural review to intelligent, data-driven assessment, one where variability or lack of structure in submissions will be more visible — and less tolerated.
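To make the consistency-check idea concrete, the sketch below compares percentages quoted in narrative text against values from tabular data sets. The endpoint names, narrative phrasing, and matching heuristic are illustrative assumptions, not actual FDA review logic:

```python
import re

def flag_endpoint_mismatches(narrative: str, reported: dict[str, float]) -> list[str]:
    """Compare percentages quoted in narrative text against tabular values.

    `reported` maps endpoint names to values from the statistical data sets;
    both the names and the format are illustrative, not a real eCTD schema.
    """
    findings = []
    for endpoint, table_value in reported.items():
        # Look for e.g. "ORR of 42.3%" shortly after the endpoint's name.
        pattern = rf"{re.escape(endpoint)}\D{{0,40}}?(\d+(?:\.\d+)?)\s*%"
        match = re.search(pattern, narrative, flags=re.IGNORECASE)
        if match is None:
            findings.append(f"{endpoint}: not found in narrative")
        elif abs(float(match.group(1)) - table_value) > 1e-9:
            findings.append(
                f"{endpoint}: narrative says {match.group(1)}%, tables say {table_value}%"
            )
    return findings

narrative = "The study met its primary endpoint, with an ORR of 42.3% in the treatment arm."
print(flag_endpoint_mismatches(narrative, {"ORR": 41.8}))
```

A production check would parse CDISC data sets and handle confidence intervals, units, and rounding conventions, but even this toy version shows why unstructured or inconsistent narratives become machine-visible.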
Regulatory Ramifications
The agency’s adoption of a generative AI tool, dubbed Elsa, will have important regulatory ramifications. First, it will raise the bar for submission quality.
Historically, subtle narrative inconsistencies or minor structural errors might not have materially impacted regulatory review, given manual reviewers’ discretion and focus on clinical data integrity. With AI-driven checks, any inconsistency becomes machine-detectable, pushing sponsors toward precision authoring and machine-readable metadata tagging.
Next, it compresses regulatory timelines. If FDA review cycles accelerate (e.g., pre-NDA meetings informed by AI risk assessments or faster Day 74 filing reviews), sponsors can no longer rely on prolonged query resolution cycles. Up-front dossier quality and pre-validation will become strategically critical.
All of this signals a broader move toward continuous submissions. Generative AI is complementary to the FDA’s real-time review (RTR) and rolling submission pathways, as seen with COVID-19 and oncology fast tracks. AI-enabled review processes can support incremental dossier ingestion rather than episodic full submissions, nudging the industry toward continuous data-flow models.
Implications For Pharma
The FDA’s AI rollout means biopharma and medtech companies, many of which are already embedding AI into internal workflows and data ecosystems, will have a regulatory partner able to match their technological progress.
One implication for the pharma industry is that the adoption of Elsa signals a shift from document-heavy to data-first submissions. Regulatory content has historically emphasized narrative packaging of data outputs. With regulators using AI tools, raw, structured data sets and metadata will carry greater importance than narrative volume.
That means sponsors will need to implement ontologies (e.g., aligning CDISC SDTM/ADaM data sets with narrative text in Module 2), as well as embed metadata layers (e.g., tagging study endpoints and populations) so AI can cross-validate across formats.
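A metadata layer of the sort described above might link each narrative claim to the data set variable that backs it, so an AI tool can cross-validate formats mechanically. The schema, data set names, and controlled terms below are hypothetical illustrations, not published CDISC or FDA structures:

```python
import json

# Illustrative metadata tag linking a Module 2 narrative claim to the ADaM
# variable that supports it; the keys and terms are invented for this sketch.
endpoint_tag = {
    "claim_id": "M2.7.3-eff-001",
    "narrative_text": "Overall response rate was 42.3% (95% CI: 35.1 to 49.5).",
    "source": {
        "dataset": "ADRS",        # hypothetical ADaM response analysis data set
        "parameter": "ORR",
        "population": "ITT",
    },
    "value": {"estimate": 42.3, "unit": "%"},
}

def resolve_tag(tag: dict, adam_tables: dict) -> float:
    """Return the tabular value a narrative tag points at; KeyError if the link is broken."""
    src = tag["source"]
    return adam_tables[src["dataset"]][(src["parameter"], src["population"])]

adam_tables = {"ADRS": {("ORR", "ITT"): 42.3}}
assert resolve_tag(endpoint_tag, adam_tables) == endpoint_tag["value"]["estimate"]
print(json.dumps(endpoint_tag, indent=2))
```

The design point is that every number in the narrative resolves to exactly one addressable cell in the structured data, which is what makes machine cross-validation possible.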
A second implication for industry involves AI-driven authoring and compilation. Pharma companies are already experimenting with AI-driven CSR summarization, CMC section drafting, and label text generation.
FDA’s own generative AI pilots will legitimize, and likely accelerate, adoption of two capabilities: automated template population, in which AI-based natural language processing (NLP) tools extract content directly from validated clinical and manufacturing data systems, and automated QC scripts, pre-submission validation engines that replicate likely AI review checks (e.g., cross-verifying numerical data in Module 5 appendices against statistical outputs).
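A pre-submission QC engine along these lines can be sketched as a set of rule functions run over a dossier object. The rule ID and dossier structure here are invented for illustration; a real engine would read eCTD content and CDISC data sets:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    rule: str
    message: str

def run_qc(dossier: dict, rules: list[Callable[[dict], list[Finding]]]) -> list[Finding]:
    """Apply every QC rule to the dossier and collect the findings."""
    findings: list[Finding] = []
    for rule in rules:
        findings.extend(rule(dossier))
    return findings

def subjects_match(dossier: dict) -> list[Finding]:
    # Enrollment quoted in the Module 2 summary must equal the subject-level
    # data set's row count (field names are hypothetical).
    quoted = dossier["module2"]["enrolled"]
    actual = dossier["adsl_subject_count"]
    if quoted != actual:
        return [Finding("SUBJ-001", f"Module 2 reports {quoted} subjects; ADSL has {actual}")]
    return []

dossier = {"module2": {"enrolled": 480}, "adsl_subject_count": 478}
for f in run_qc(dossier, [subjects_match]):
    print(f.rule, f.message)
```

Keeping each check as an independent rule makes it easy to grow the rule set as sponsors learn which discrepancies an AI reviewer is likely to flag.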
Drugmakers may take this as a cue to transform their regulatory affairs workforce. Regulatory professionals will need data and AI literacy to complement traditional submission expertise.
With those skills in mind, executive search firms may be looking to fill several additional regulatory roles, like AI model validation specialists, who ensure AI-generated content meets good machine learning practices (GMLP); structured content engineers, tasked with building modular content repositories aligned with SPL and IDMP (identification of medicinal products) standards; and metadata curators, who manage controlled terminology, dictionaries, and ontologies feeding into AI-authoring tools.
Of course, any change this significant is sure to surface new compliance risks as well. Using AI internally (e.g., generative AI to generate Module 2 summaries) introduces considerations like traceability, bias/hallucination, and version control, to name a few.
To address traceability, sponsors must document how AI-generated content is verified and approved, aligning with GMLP. If a generative AI model hallucinates efficacy claims or misrepresents population subsets, this creates regulatory liability. And AI-generated content may vary between runs, so strict version locking and validation are mandatory.
What Happens Next
We anticipate that such changes will precipitate an evolution in regulatory frameworks. In the near term, look for the FDA to issue AI-specific submission guidance influenced by its own internal experiences.
Potential areas include machine-readable dossier expectations, whereby sponsors may be required to submit structured data packages (e.g., JSON/XML metadata layers) alongside PDFs to allow AI ingestion.
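One way such a structured data package might look is a JSON manifest shipped alongside each PDF, pointing AI reviewers at the underlying data sets. The schema below is purely an assumption for illustration, not an FDA specification:

```python
import json

# Hypothetical machine-readable manifest accompanying a dossier PDF; paths,
# formats, and field names are assumptions, not a published standard.
manifest = {
    "document": "m2/27-clin-sum/summary-clin-efficacy.pdf",
    "structured_sources": [
        {"path": "m5/datasets/adam/adeff.xpt", "format": "SAS XPORT", "role": "efficacy analysis"},
        {"path": "m5/datasets/adam/define.xml", "format": "Define-XML", "role": "metadata"},
    ],
    "endpoints": [{"id": "ORR", "population": "ITT", "estimate": 42.3, "unit": "%"}],
}

def validate_manifest(m: dict) -> list[str]:
    """Cheap structural checks a sponsor could run before packaging."""
    errors = []
    if not m.get("document", "").endswith(".pdf"):
        errors.append("document must be a PDF path")
    if not m.get("structured_sources"):
        errors.append("at least one structured source is required")
    return errors

assert validate_manifest(manifest) == []
print(json.dumps(manifest, indent=2))
```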
Companies may also be expected to adopt AI-augmented authoring transparency. Those using AI for submission generation may need to submit audit trails and model performance reports as part of the quality documentation.
They’ll need real-time monitoring, too. Post-approval surveillance could increasingly rely on AI-driven ingestion of real-world evidence (RWE), which means submissions may need to include AI-interpretable safety signals.
How Pharma Should Respond
- Invest in regulatory tech infrastructure. Implement structured authoring platforms capable of modular content management. Build regulatory data lakes that unify clinical, safety, and manufacturing data with metadata tagging for AI-readiness.
- Mirror the FDA’s AI review process. Develop internal AI-based QC engines that simulate regulator checks, ensuring discrepancies or inconsistencies are caught internally. Benchmark submission quality metrics against known FDA query patterns and train internal generative AI tools accordingly.
- Establish AI governance. Adopt GMLP and document all AI usage in regulatory submissions, including validation steps and human-in-the-loop sign-off. Create AI governance committees within regulatory affairs to oversee compliance risk associated with generative AI.
- Proactively engage regulators. Participate in FDA emerging-technology programs or pilot initiatives for AI-enabled submissions. Share learning from internal AI adoption to influence and align with evolving regulatory guidance.
The Payoff
FDA’s generative AI adoption is not simply a technology upgrade. It represents a strategic signal of how regulatory review will evolve in the next five years: faster, more data-centric, and less tolerant of variability.
Pharma companies that move early — embedding AI-first regulatory operations, investing in data-driven submission architecture, and building robust AI governance — will not only adapt but also gain a competitive advantage by:
- reducing time-to-approval via cleaner, AI-ready dossiers;
- lowering regulatory risk through automated QC and early error detection; and
- enhancing their reputation as tech-forward, regulator-aligned innovators.
In conclusion, the FDA’s embrace of generative AI represents a fundamental change in how regulatory reviews are conducted. For the industry, this is a clear signal: Regulatory strategy can no longer be document-centric and reactive. Instead, companies must transition to data-first, AI-validated, and continuously auditable submission ecosystems.
The shift will require technology investment, workforce re-skilling, and proactive regulator engagement. But the payoff, in the form of faster approvals and better compliance, is significant.
About The Author:
Pradeepta Mishra is VP, AI Innovation, Beghou Consulting. He is an AI and data science expert with nearly two decades of experience in AI, technology, and consulting. He has spent more than 12 years specializing in AI-driven product development across multiple industries, including life sciences, pharma, and enterprise solutions. He has authored 11 books and is a sought-after speaker and thought leader in AI, deep learning, and machine learning. He is also an inventor, with 17 patents (6 granted) in AI, deep learning, NLP, and machine learning.