Source – https://www.bbntimes.com/
The FDA is the oldest consumer protection agency in the United States and is part of the U.S. Department of Health and Human Services. Its charter is to protect public health by regulating a broad spectrum of products, including vaccines, prescription medications, over-the-counter drugs, dietary supplements, bottled water, food additives, infant formulas, blood products, cellular and gene therapy products, tissue products, medical devices, dental devices, implants, prosthetics, radiation-emitting electronics (e.g., microwave ovens, X-ray equipment, laser products, ultrasonic devices, mercury vapor lamps, sunlamps), cosmetics, livestock feeds, pet foods, veterinary drugs and devices, cigarettes, and tobacco, among other products.
In April 2019, the FDA released a discussion paper and request for feedback on its proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD). Examples of SaMD include AI-assisted retinal scanners, smartwatch electrocardiogram (ECG) apps that measure heart rhythm, CT diagnostic scans for hemorrhages, ECG-gated CT scan diagnostics for arterial defects, computer-aided detection (CAD) for post-imaging cancer diagnostics, echocardiogram diagnostics for calculating left ventricular ejection fraction (EF), and smartphone apps for viewing diagnostic magnetic resonance imaging (MRI).
The newly released plan is a response to the comments received from stakeholders regarding the April 2019 discussion paper. The plan covers five areas: 1) a tailored regulatory framework for AI/ML-based SaMD, 2) good machine learning practices (GMLP), 3) a patient-centered approach incorporating transparency to users, 4) regulatory science methods related to algorithm bias and robustness, and 5) real-world performance.
This year, the FDA plans to update the framework for AI/ML-based SaMD by publishing draft guidance on the “predetermined change control plan.” The FDA has already cleared and approved AI/ML-based software as a medical device. To date, these approvals have typically been for “algorithms that are ‘locked’ prior to marketing, where algorithm changes likely require FDA premarket review for changes beyond the original market authorization.”
How should regulators handle machine learning algorithms that evolve over time? Such evolving algorithms are not uncommon in machine learning. Real-world data is often used to improve algorithms that were initially trained on existing data sets or, in some cases, computer-simulated training data. Incorporating real-world data to fine-tune an algorithm may change its output. The goal of such evolving algorithms is to improve predictions, pattern recognition, and decisions based on actual data over time. Nonetheless, even if these algorithms do perform better over time, it remains important, for the sake of transparency and clarity, to communicate to the medical device user exactly what to expect.
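The distinction between a "locked" algorithm and one that is fine-tuned on real-world data can be sketched with a deliberately simplified example. The classifier, its threshold rule, and the data below are all hypothetical and are not drawn from any actual FDA-cleared device; the point is only that the same input can produce different output once post-market data shifts the model:

```python
# Hypothetical sketch: a "locked" model vs. one fine-tuned on
# real-world data after deployment. All names and data are illustrative.

class ThresholdClassifier:
    """Flags a reading as abnormal when it exceeds a learned threshold."""

    def __init__(self, training_data):
        # "Lock" the threshold at the mean of the pre-market training set.
        self.threshold = sum(training_data) / len(training_data)

    def predict(self, reading):
        return reading > self.threshold

    def fine_tune(self, real_world_data):
        # Evolving variant: shift the threshold toward post-market data.
        # Under the current approach, a change like this could require a
        # new premarket review unless it falls within a predetermined
        # change control plan.
        combined = [self.threshold] + list(real_world_data)
        self.threshold = sum(combined) / len(combined)


locked = ThresholdClassifier([1.0, 2.0, 3.0])     # threshold locked at 2.0
adaptive = ThresholdClassifier([1.0, 2.0, 3.0])
adaptive.fine_tune([4.0, 6.0])                    # threshold shifts to 4.0

# Same input, different output once the algorithm has evolved.
print(locked.predict(3.0), adaptive.predict(3.0))  # True False
```

This is exactly the behavior the "predetermined change control plan" is meant to govern: the manufacturer would declare, in advance, how the algorithm is allowed to change after deployment.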
In the area of establishing and defining good machine learning practices (GMLP), the FDA is “committing to deepening its work in these communities in order to encourage consensus outcomes that will be most useful for the development and oversight of AI/ML based technologies,” and aims to provide “a robust approach to cybersecurity for medical devices.”
In 2021, the FDA plans to hold a public workshop on “how device labeling supports transparency to users and enhances trust in AI/ML-based devices” in efforts to promote transparency, an important part of a patient-centered approach.
To address algorithm bias and robustness, the FDA plans to support regulatory science efforts to develop methods to identify and eliminate bias. “The Agency recognizes the crucial importance for medical devices to be well suited for a racially and ethnically diverse intended patient population and the need for improved methodologies for the identification and improvement of machine learning algorithms,” wrote the FDA.
The FDA is supporting collaborative regulatory science research at various institutions to develop methods to evaluate AI machine learning-based medical software. These research partners include the FDA Centers for Excellence in Regulatory Science and Innovation (CERSIs) at the University of California San Francisco (UCSF), Stanford University, and Johns Hopkins University.
The final part of the plan aims to provide clarity on real-world performance monitoring for AI machine learning-based software as a medical device. The FDA plans to “support the piloting of real-world performance monitoring by working with stakeholders on a voluntary basis” and engaging with the public in order to assist in creating a framework for collecting and validating real-world performance metrics and parameters.
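In practice, real-world performance monitoring amounts to comparing a deployed model's predictions against confirmed outcomes and flagging when a metric drifts from its premarket baseline. The following is a minimal sketch of that idea; the function name, baseline, and tolerance are assumptions for illustration, not metrics the FDA has specified:

```python
# Hypothetical sketch of real-world performance monitoring: compare a
# deployed model's predictions against confirmed outcomes and flag when
# accuracy drifts below a baseline. Thresholds and names are illustrative.

def monitor_performance(records, baseline_accuracy=0.90, tolerance=0.05):
    """records: list of (predicted, actual) outcome pairs from field use."""
    correct = sum(1 for predicted, actual in records if predicted == actual)
    accuracy = correct / len(records)
    # Flag drift when field accuracy falls below baseline minus tolerance.
    drifted = accuracy < baseline_accuracy - tolerance
    return accuracy, drifted


# Example: 8 of 10 field predictions matched the confirmed diagnosis.
field_records = [(1, 1)] * 8 + [(1, 0)] * 2
accuracy, drifted = monitor_performance(field_records)
print(f"accuracy={accuracy:.2f}, drift flagged={drifted}")  # accuracy=0.80, drift flagged=True
```

A real monitoring framework would track richer metrics (sensitivity, specificity, performance by subgroup), but the voluntary pilots described above would need to settle exactly these kinds of parameters: which metrics to collect, against what baseline, and at what threshold to act.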
“The FDA welcomes continued feedback in this area and looks forward to engaging with stakeholders on these efforts,” wrote the FDA.
Artificial intelligence and machine learning are gaining traction across many industries, including health care, life sciences, biotech, and pharmaceuticals. With this newly released plan, the FDA has advanced its ongoing discussion with stakeholders in an effort to provide regulations that ensure the safety and security of AI/ML-based software as a medical device and protect public health.