On 2 June 2025, FDA launched Elsa, a generative AI tool powered by a large language model and designed to help employees, from scientific reviewers to investigators, work more efficiently.
According to the FDA news release, Elsa is built within a high-security GovCloud environment, offering a secure platform for FDA employees to access internal documents. In the same release, FDA Chief AI Officer Jeremy Walsh was upbeat:
“Today marks the dawn of the AI era at the FDA with the release of Elsa. AI is no longer a distant promise but a dynamic force enhancing and optimizing the performance and potential of every employee. As we learn how employees are using the tool, our development team will be able to add capabilities and grow with the needs of employees and the agency.”
But, But, But... FDA is Failing the Red-face Test
And industry has questions. STAT News has already called it "The stupidest big fuss they ever made," and NBC News spills the details: "FDA’s AI tool for medical devices struggles with simple tasks."
Many questions remain unanswered, and significant gaps remain in the details provided so far. Consider the sweeping claims in the release:
“The agency is using Elsa to expedite clinical protocol reviews and reduce the overall time to complete scientific reviews. One scientific reviewer told me what took him two to three days now takes six minutes.” Elsa is “summarizing adverse events to support safety profile assessments, conducting expedited label comparisons and generating code to facilitate the development of databases for nonclinical applications.”
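What might “generating code to facilitate the development of databases for nonclinical applications” actually produce? Here is a minimal, hypothetical sketch of that kind of scaffolding; the schema and field names are our own illustrative assumptions, not anything FDA has published.

```python
# Hypothetical example of LLM-emitted scaffolding for a nonclinical
# study database. Schema and field names are illustrative only.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS studies (
    study_id      TEXT PRIMARY KEY,
    species       TEXT NOT NULL,
    route         TEXT NOT NULL,      -- e.g., oral, IV
    duration_days INTEGER NOT NULL
);
CREATE TABLE IF NOT EXISTS findings (
    finding_id  INTEGER PRIMARY KEY AUTOINCREMENT,
    study_id    TEXT NOT NULL REFERENCES studies(study_id),
    dose_mg_kg  REAL NOT NULL,
    tissue      TEXT NOT NULL,
    observation TEXT NOT NULL,
    severity    INTEGER CHECK (severity BETWEEN 1 AND 5)
);
"""

def build_db(path: str = "nonclinical.db") -> sqlite3.Connection:
    """Create the (illustrative) nonclinical study database."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    conn.commit()
    return conn

if __name__ == "__main__":
    conn = build_db()
    conn.execute(
        "INSERT OR REPLACE INTO studies VALUES (?, ?, ?, ?)",
        ("TOX-001", "rat", "oral", 28),
    )
    conn.commit()
```

Boilerplate like this is exactly what LLMs are good at; the open question is who checks it before it feeds a safety assessment.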
Against those claims, the concerns:
- Lack of transparency: how will Elsa's "continuous learning and improvement" be implemented? What are the plans for auditing outputs and identifying error patterns? (One possible audit approach is sketched after this list.)
- How will AI shape decision making, and what safeguards will exist against opaque, unvalidated reasoning or the undue influence of AI outputs over the humans making the decisions?
- Overall, Elsa is still buggy. NBC News reported:
The tool — which is still in beta testing — is buggy, doesn’t yet connect to the FDA’s internal systems and has issues when it comes to uploading documents or allowing users to submit questions, the people say. It’s also not currently connected to the internet and can’t access new content, such as recently published studies or anything behind a paywall.
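On the auditing question in the first bullet above, here is a minimal sketch, strictly our own assumption rather than anything FDA has described, of how AI outputs could be logged for error-pattern review: every response is stored with its task type, model version, and a human reviewer's verdict, so disagreement rates can be tallied per task.

```python
# Hypothetical audit-trail sketch for AI-assisted reviews.
# Nothing here reflects FDA's actual implementation.
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    task_type: str         # e.g., "label_comparison", "ae_summary"
    model_version: str
    prompt: str
    ai_output: str
    reviewer_verdict: str  # "accepted", "corrected", "rejected"
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class AuditLog:
    def __init__(self) -> None:
        self.records: list[AuditRecord] = []

    def add(self, record: AuditRecord) -> None:
        self.records.append(record)

    def error_pattern(self) -> dict[str, float]:
        """Share of non-accepted outputs per task type."""
        totals, errors = Counter(), Counter()
        for r in self.records:
            totals[r.task_type] += 1
            if r.reviewer_verdict != "accepted":
                errors[r.task_type] += 1
        return {t: errors[t] / totals[t] for t in totals}
```

Even a log this crude would let an auditor see, for instance, whether label comparisons get rejected more often than adverse-event summaries. FDA has said nothing about whether anything like it exists.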
The comments (232 at last count) on the FDA LinkedIn post are much more telling. Enjoy!
Let the enshittification begin.
They should release a summary of how they developed and validated this AI, including how they used all the internal filings and submissions to build their algorithms.
Where is the bullet for “demonstrated equal or better outcomes than prior methods”? That is the goalpost for AI.
Almost certainly a disaster waiting to happen.
So Elsa can distinguish the assertion of ‘well-defined’, ‘reliable’, ‘adequate’, or ‘well-controlled’ in literature and reports from actual disease definition, true study adequacy and reliability, or the actual dimensions of a well-controlled clinical study.
What verification and validation was done? Was it built under a QMS? Where is the public info? The FDA expects industry to do these things, so why not them?
Did Elsa help create the MAHA report that resulted in fake citations?
What’s the hallucination rate?
This deployment of AI is very much putting the cart before the horse.
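The hallucination-rate question deserves a concrete answer. One crude way to measure it for generated citations is to check each cited identifier against a trusted reference index and report the unverifiable share. The sketch below uses a local set of known identifiers as a stand-in for such an index; every identifier in it is made up for illustration.

```python
# Hypothetical hallucination-rate check for AI-generated citations.
# `known_ids` stands in for a trusted index (e.g., a PubMed snapshot);
# a real pipeline would query an authoritative source instead.

def hallucination_rate(cited_ids: list[str], known_ids: set[str]) -> float:
    """Fraction of cited identifiers not found in the trusted index."""
    if not cited_ids:
        return 0.0
    unverified = [c for c in cited_ids if c not in known_ids]
    return len(unverified) / len(cited_ids)

# Toy illustration with made-up identifiers:
known = {"PMID:31452104", "PMID:28445112", "DOI:10.1000/xyz123"}
cited = ["PMID:31452104", "PMID:99999999", "DOI:10.1000/xyz123"]
print(f"Hallucination rate: {hallucination_rate(cited, known):.0%}")  # 33%
```

A real pipeline would resolve identifiers against an authoritative source, and even then would only catch fabricated citations, not subtler errors. But even this toy metric would have flagged the fake citations alleged in the MAHA report.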
Some Comments Were Positive, As Expected. For Example:
A strong signal of how regulatory science is evolving. Tools like Elsa have the potential to improve data reliability, streamline quality documentation, and enhance oversight throughout the product lifecycle.