Rise Of The Machines: FDA Artificial Intelligence Guidance Is Coming
As artificial intelligence and machine learning offer new opportunities to improve patient care, US FDA says it hopes to encourage innovation by developing a draft guidance on the issue for sponsors. The agency also released a discussion paper outlining key issues on which it wants feedback from industry and other stakeholders.
To keep up with advances in artificial intelligence and machine learning in the health-care sector, US FDA is working to develop a draft guidance that would help sponsors gain clarity on what the agency expects from them.
FDA Commissioner Scott Gottlieb issued a statement on April 2 touting the potential of AI to change the face of health care. He timed it with the release of a discussion paper that highlights questions and considerations the agency wants feedback on from industry and other stakeholders as it develops its AI draft guidance.
While interest in the use of artificial intelligence in the health-care sector has grown over the past decade, activity has accelerated in just the past year. In April 2018, FDA approved IDx Inc.’s IDx-DR, a device that uses AI to detect diabetic retinopathy, and in September of the same year it granted two de novo authorizations for Apple Inc.’s Apple Watch, which includes software that detects irregular heart rhythms associated with stroke risk. (Also see "Global Device Approvals, Weekly Snapshot: April 9-15, 2018" - Medtech Insight, 16 Apr, 2018.) (Also see "FDA: Apple De Novo Approvals Signal Innovation To Digital Health Firms" - Medtech Insight, 12 Sep, 2018.)
“The authorization of these technologies was a harbinger of progress that the FDA expects to see as more medical devices incorporate advanced artificial intelligence algorithms to improve their performance and safety,” Gottlieb said.
He noted that AI could have a “profound and positive” impact on health care in the same way it has changed other major industries, such as finance and manufacturing. And Gottlieb isn’t alone in his belief in the technology. A wide array of experts have voiced excitement about the potential benefits AI could bring to personalized medicine and patient care in general. (Also see "The AI Touch: Artificial Intelligence Could Boost Quality Systems, Cut FDA Inspections – But Is Industry Ready?" - Medtech Insight, 25 Oct, 2017.)
“I can envision a world where, one day, artificial intelligence can help detect and treat challenging health problems, for example, by recognizing the signs of disease well in advance of what we can do today,” Gottlieb said. “These tools can provide more time for intervention, identifying effective therapies and ultimately saving lives.”
Gottlieb said FDA’s objective is to develop a guidance document that allows the agency to keep pace with the rapidly evolving nature of AI innovation.
He pointed out that the AI devices that have been greenlit by the agency so far use “locked” algorithms where the software does not continually adapt or learn based on how the product is used. Such AI devices are periodically modified by the manufacturers as they learn newer and better ways to improve their device, rather than letting the device itself learn how to improve.
However, looking beyond locked algorithms, Gottlieb said there is a great deal of promise in machine learning algorithms, also known as “adaptive” or “continuously learning” algorithms, that can update themselves based on what they learn over time.
“Adaptive algorithms can learn from new user data presented to the algorithm through real-world use,” he noted. “For example, an algorithm that detects breast cancer lesions on mammograms could learn to improve the confidence with which it identifies lesions as cancerous or may learn to identify specific subtypes of breast cancer by continually learning from real-world use and feedback.”
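The locked-versus-adaptive distinction can be illustrated with a minimal sketch. The code below is purely hypothetical and not drawn from any FDA-cleared device: a locked model's decision threshold is frozen at the version that was cleared, while an adaptive model nudges its threshold as real-world cases and feedback arrive.

```python
# Hypothetical sketch contrasting a "locked" and an "adaptive" classifier.
# Class names, thresholds, and the update rule are illustrative only.

class LockedModel:
    """Parameters are frozen at clearance; any change requires a
    manufacturer-issued update (and potentially regulatory review)."""
    def __init__(self, threshold):
        self.threshold = threshold  # fixed decision threshold

    def predict(self, score):
        return score >= self.threshold


class AdaptiveModel:
    """Parameters shift as real-world feedback arrives."""
    def __init__(self, threshold, learning_rate=0.05):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def predict(self, score):
        return score >= self.threshold

    def update(self, score, true_label):
        # Nudge the threshold toward reducing the observed error:
        # lower it after a missed positive, raise it after a false alarm.
        predicted = self.predict(score)
        if predicted and not true_label:
            self.threshold += self.learning_rate
        elif not predicted and true_label:
            self.threshold -= self.learning_rate


locked = LockedModel(threshold=0.8)
adaptive = AdaptiveModel(threshold=0.8)

# A missed positive (score 0.75, truly positive) changes only the adaptive model.
adaptive.update(0.75, true_label=True)
print(locked.threshold)    # stays at 0.8
print(adaptive.threshold)  # drifts down to 0.75
```

The sketch shows why the regulatory question arises: after real-world use, the adaptive model no longer behaves like the version that was originally reviewed, while the locked model is unchanged until its manufacturer ships an update.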
But AI that can update itself creates a conundrum for FDA, because such updates would typically need to go through a 510(k) review process – yet using traditional review pathways could be a significant barrier to getting such products to patients.
FDA does have a risk-based framework for software as a medical device (SaMD) products and is also developing a precertification program that would allow the agency to streamline marketing review of certain SaMD products based on its confidence in the companies producing them.
AI and machine learning software is somewhat different, however, and the agency is exploring a new review framework for such devices that would allow modifications to the algorithms based on real-world learning and adaptation while still ensuring patients are kept safe.
“A new approach to these technologies would address the need for the algorithms to learn and adapt when used in the real world. It would be a more tailored fit than our existing regulatory paradigm for software as a medical device,” Gottlieb said.
As with other devices, FDA is looking to use a total product lifecycle approach with AI in order to oversee the iterative nature of these products and ensure that oversight is consistent with the agency’s public-health mission.
“This first step in developing our approach outlines information specific to devices that include artificial intelligence algorithms that make real-world modifications that the agency might require for pre-market review,” Gottlieb noted. “They include the algorithm’s performance, the manufacturer’s plan for modifications and the ability of the manufacturer to manage and control risks of the modifications.”
FDA says it may also decide to review the predetermined change-control plan of an AI device, which would include detailed information about the types of anticipated modifications the AI is capable of making based on its retraining and update strategy, and the methods used to implement those changes.
“The goal of the framework is to assure that ongoing algorithm changes follow prespecified performance objectives and change-control plans, use a validation process that ensures improvements to the performance, safety and effectiveness of the artificial intelligence software, and includes real-world monitoring of performance once the device is on the market to ensure safety and effectiveness are maintained,” Gottlieb said.
“We’re exploring this approach because we believe that it will enable beneficial and innovative artificial intelligence software to come to market, while still ensuring the device’s benefits continue to outweigh its risks.”
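The logic of prespecified performance objectives acting as a gate on each algorithm update could be sketched as follows. This is an illustrative example only; the metric names and thresholds are hypothetical and not taken from FDA's discussion paper.

```python
# Illustrative sketch of a prespecified change-control gate: an algorithm
# update ships only if its validation metrics meet objectives that were
# fixed in advance. All names and numbers here are hypothetical.

PRESPECIFIED_OBJECTIVES = {
    "sensitivity": 0.90,   # floors the manufacturer committed to up front
    "specificity": 0.85,
}

def meets_objectives(validation_metrics, objectives=PRESPECIFIED_OBJECTIVES):
    """Return True only if every prespecified objective is satisfied."""
    return all(
        validation_metrics.get(name, 0.0) >= floor
        for name, floor in objectives.items()
    )

# An update that improves both metrics passes the gate...
print(meets_objectives({"sensitivity": 0.93, "specificity": 0.88}))  # True
# ...one that trades specificity away fails, even if sensitivity rises.
print(meets_objectives({"sensitivity": 0.97, "specificity": 0.80}))  # False
```

Under the framework Gottlieb describes, a gate like this would be paired with ongoing real-world performance monitoring after the update is deployed, so that safety and effectiveness are maintained rather than merely verified once.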
Stakeholders can comment on the discussion paper through June 3 at www.Regulations.gov under docket No. FDA-2019-N-1185.