<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>FDA Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/fda/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/fda/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 10 Jun 2021 05:31:39 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>FDA HAS NEW REGULATORY PLANS FOR AI MACHINE LEARNING</title>
		<link>https://www.aiuniverse.xyz/fda-has-new-regulatory-plans-for-ai-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/fda-has-new-regulatory-plans-for-ai-machine-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 10 Jun 2021 05:31:37 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[FDA]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[plans]]></category>
		<category><![CDATA[REGULATORY]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14151</guid>

					<description><![CDATA[<p>Source &#8211; https://www.bbntimes.com/ The FDA is the oldest consumer protection agency, and is a part of the U.S. Department of Health and Human Services. Its charter is <a class="read-more-link" href="https://www.aiuniverse.xyz/fda-has-new-regulatory-plans-for-ai-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/fda-has-new-regulatory-plans-for-ai-machine-learning/">FDA HAS NEW REGULATORY PLANS FOR AI MACHINE LEARNING</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.bbntimes.com/</p>



<p>The FDA is the oldest consumer protection agency, and is a part of the U.S. Department of Health and Human Services. Its charter is to protect public health by regulating a broad spectrum of products, such as vaccines, prescription&nbsp;medication, over-the-counter drugs, dietary supplements, bottled water, food additives, infant formulas, blood products, cellular and&nbsp;gene&nbsp;therapy&nbsp;products, tissue products, medical devices, dental devices, implants, prosthetics, electronics that radiate (e.g., microwave ovens, X-ray equipment, laser products, ultrasonic devices, mercury vapor lamps, sunlamps), cosmetics, livestock feeds, pet foods, veterinary drugs and devices, cigarettes, tobacco, and more products.&nbsp;</p>



<p>In April 2019, the FDA released a discussion paper requesting feedback on its proposed regulatory framework for modifications to AI/machine learning-based software as a medical device (SaMD). Examples of SaMD include AI-assisted retinal scanners, smartwatch ECGs that measure heart rhythm, CT diagnostic scans for hemorrhages, ECG-gated CT scan diagnostics for arterial defects, computer-aided detection (CAD) for post-imaging cancer diagnostics, echocardiogram diagnostics for calculating left ventricular ejection fraction (EF), and smartphone viewers for diagnostic magnetic resonance imaging (MRI).</p>



<p>The newly released plan responds to the comments stakeholders submitted on the April 2019 discussion paper. The plan covers five areas: 1) a custom regulatory framework for AI machine learning-based SaMD, 2) good machine learning practices (GMLP), 3) a patient-centered approach incorporating transparency to users, 4) regulatory science methods related to algorithm bias and robustness, and 5) real-world performance.</p>



<p>This year the FDA plans to update the framework for AI machine learning-based SaMD by publishing a draft guidance on the “predetermined change control plan.” The FDA has already cleared and approved AI machine learning-based software as a medical device. Usually these approvals were for “algorithms that are &#8216;locked&#8217; prior to marketing, where algorithm changes likely require FDA premarket review for changes beyond the original market authorization.”</p>



<p>How should regulators handle machine learning algorithms that evolve over time? Such evolving algorithms are not uncommon: real-world data is often used to improve algorithms that were originally trained on existing data sets or, in some cases, on computer-simulated training data, and incorporating real-world data to fine-tune an algorithm may change its output. The goal of these evolving algorithms is to improve predictions, pattern recognition, and decisions based on actual data over time. Even if such algorithms do perform better over time, however, it is still important, for the sake of transparency and clarity, to communicate to the medical device user exactly what to expect.</p>



<p>In the area of establishing and defining good machine learning practices (GMLP), the FDA is “committing to deepening its work in these communities in order to encourage consensus outcomes that will be most useful for the development and oversight of AI/ML based technologies,” and aims to provide “a robust approach to cybersecurity for medical devices.”&nbsp;</p>



<p>In 2021, the FDA plans to hold a public workshop on “how device labeling supports transparency to users and enhances trust in AI/ML-based devices” in efforts to promote transparency, an important part of a patient-centered approach.</p>



<p>To address algorithm bias and robustness, the FDA plans to support regulatory science efforts to develop methods to identify and eliminate bias. “The Agency recognizes the crucial importance for medical devices to be well suited for a racially and ethnically diverse intended patient population and the need for improved methodologies for the identification and improvement of machine learning algorithms,&#8221; wrote the FDA.</p>



<p>The FDA is supporting collaborative regulatory science research at various institutions to develop methods to evaluate AI machine learning-based medical software. These research partners include the FDA Centers for Excellence in Regulatory Science and&nbsp;Innovation&nbsp;(CERSIs) at the University of California San Francisco (UCSF), Stanford University, and Johns Hopkins University.&nbsp;</p>



<p>The final part of the plan aims to provide clarity on real-world performance monitoring for AI machine learning-based software as a medical device. The FDA plans to “support the piloting of real-world performance monitoring by working with stakeholders on a voluntary basis” and engaging with the public in order to assist in creating a framework for collecting and validating real-world performance metrics and parameters.</p>



<p>“The FDA welcomes continued feedback in this area and looks forward to engaging with stakeholders on these efforts,” wrote the FDA.</p>



<p>Artificial intelligence machine learning is gaining traction across many industries, including the areas of health care, life sciences, biotech, and pharmaceutical sectors. With this newly released plan, the FDA has advanced its ongoing discussion with its stakeholders in efforts to provide regulations that ensure the safety and security of AI machine learning-based software as a medical device in order to protect public health.</p>



<p>The post <a href="https://www.aiuniverse.xyz/fda-has-new-regulatory-plans-for-ai-machine-learning/">FDA HAS NEW REGULATORY PLANS FOR AI MACHINE LEARNING</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/fda-has-new-regulatory-plans-for-ai-machine-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>FDA authorizes machine learning software to help diagnose autism</title>
		<link>https://www.aiuniverse.xyz/fda-authorizes-machine-learning-software-to-help-diagnose-autism/</link>
					<comments>https://www.aiuniverse.xyz/fda-authorizes-machine-learning-software-to-help-diagnose-autism/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 05 Jun 2021 05:08:18 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Autism]]></category>
		<category><![CDATA[Diagnose]]></category>
		<category><![CDATA[FDA]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14018</guid>

					<description><![CDATA[<p>Source &#8211; https://medcitynews.com/ The system, developed by digital health startup Cognoa, uses information from questionnaires and videos to help pediatricians diagnose autism. It received marketing authorization from <a class="read-more-link" href="https://www.aiuniverse.xyz/fda-authorizes-machine-learning-software-to-help-diagnose-autism/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/fda-authorizes-machine-learning-software-to-help-diagnose-autism/">FDA authorizes machine learning software to help diagnose autism</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://medcitynews.com/</p>



<p>The system, developed by digital health startup Cognoa, uses information from questionnaires and videos to help pediatricians diagnose autism. It received marketing authorization from the FDA on Wednesday.</p>



<p>In a first, the Food and Drug Administration gave the green light to an algorithm designed to help clinicians diagnose autism in young children. Developed by Palo Alto-based startup Cognoa, the software uses questionnaires completed by parents and clinicians, along with home videos, to make a recommendation that assists pediatricians with diagnosis.&nbsp;</p>



<p>The goal is to identify autism spectrum disorder (ASD) earlier. In the U.S., children are diagnosed at around age 4, on average. </p>



<p>“Many of these children are waiting for long periods of time before they get in (to a specialist),” Cognoa CMO Dr. Sharief Taraman, a pediatric neurologist, said in a Zoom interview. “This is a really big deal. We have not had a diagnostic of this kind getting market authorization.”&nbsp;</p>



<p>Taraman said the software uses machine learning to identify “maximally predictive” features from the questionnaires and two short home videos.</p>



<p>Of course, asking people to provide videos of their kids is very personal. He said families have to give permission for videos to be reviewed by video analysts and the physicians involved in their care.</p>



<p>The FDA’s authorization was based on results from a prospective, double-blinded study that compared the software’s performance in helping diagnose autism against that of a panel of clinicians making a diagnosis based on DSM-5 criteria. Cognoa went through the FDA’s de novo pathway for low- or moderate-risk devices that don’t have a predicate. </p>



<p>It was evaluated on 425 kids ages 18 months through five years, across 14 different sites. Taraman said the company also made a point to recruit a diverse group of patients for the trial, in terms of race, ethnicity, gender, education and socioeconomic status. Currently, girls and minorities are often diagnosed with ASD at a later age. </p>



<p>According to the FDA, Cognoa’s test yielded a false positive result in 15 out of 303 kids in the trial without ASD. Meanwhile, it yielded a false negative in just one of the 122 kids with ASD. </p>
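<p>As a quick arithmetic check, the counts above imply the test&#8217;s approximate specificity and sensitivity among children who received a determinate result. The short sketch below is ours, not Cognoa&#8217;s or the FDA&#8217;s; it simply recomputes the rates from the reported numbers:</p>

```python
# Counts reported by the FDA for Cognoa's trial:
# 15 false positives among 303 children without ASD,
# 1 false negative among 122 children with ASD.
fp, without_asd = 15, 303
fn, with_asd = 1, 122

# Specificity: fraction of children without ASD correctly ruled out.
specificity = (without_asd - fp) / without_asd
# Sensitivity: fraction of children with ASD correctly identified.
sensitivity = (with_asd - fn) / with_asd

print(f"specificity ~ {specificity:.1%}")  # ~95.0%
print(f"sensitivity ~ {sensitivity:.1%}")  # ~99.2%
```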



<p>In cases where there wasn’t a clear diagnosis or a rule-out, the algorithm gave an indeterminate result. In total, it provided a diagnosis for about 32% of patients in the trial.&nbsp;</p>



<p>The ability to give an indeterminate result was important, Taraman said, so that the algorithm would not yield too many false positives or overlook kids who have other neurodevelopmental conditions that need to be addressed.&nbsp;</p>



<p>“Technology’s always a tool. It should never be a replacement for a clinician,” he said. “The test is not meant to be a standalone.”</p>



<p>Cognoa plans to begin marketing the software, called Canvas Dx, later this year.&nbsp;</p>



<p>“Autism actually is a beautiful thing,” Taraman said. “Our goal is not to ‘turn off’ autism; our goal is to address challenges that come with autism.”&nbsp;</p>
<p>The post <a href="https://www.aiuniverse.xyz/fda-authorizes-machine-learning-software-to-help-diagnose-autism/">FDA authorizes machine learning software to help diagnose autism</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/fda-authorizes-machine-learning-software-to-help-diagnose-autism/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Patient Safety, Data Privacy Key for Use of AI-Powered Chatbots</title>
		<link>https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/</link>
					<comments>https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 29 Jul 2020 07:40:06 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[coronavirus]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[FDA]]></category>
		<category><![CDATA[Natural language processing]]></category>
		<category><![CDATA[patient]]></category>
		<category><![CDATA[Safety]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10570</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com Patient safety, data privacy, and health equity are key considerations for the use of chatbots powered by artificial intelligence in healthcare, according to a viewpoint piece published <a class="read-more-link" href="https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/">Patient Safety, Data Privacy Key for Use of AI-Powered Chatbots</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>Patient safety, data privacy, and health equity are key considerations for the use of chatbots powered by artificial intelligence in healthcare, according to a viewpoint piece published in JAMA.</p>



<p>With the emergence of COVID-19 and social distancing guidelines, more healthcare systems are exploring and deploying automated chatbots, the authors noted. However, there are several key considerations organizations should keep in mind before implementing these tools.</p>



<p>“We need to recognize that this is relatively new technology and even for the older systems that were in place, the data are limited,” said the viewpoint&#8217;s lead author, John D. McGreevey III, MD, an associate professor of Medicine in the Perelman School of Medicine at the University of Pennsylvania.</p>



<p>“Any efforts also need to realize that much of the data we have comes from research, not widespread clinical implementation. Knowing that, evaluation of these systems must be robust when they enter the clinical space, and those operating them should be nimble enough to adapt quickly to feedback.”</p>



<p>The authors outlined 12 focus areas that leaders should consider when planning to implement a chatbot or conversational agent (CA) in clinical care. For chatbots that use natural language processing, the messages these agents send to patients are extremely significant, as are patients&#8217; reactions to them.</p>



<p>“It is important to recognize the potential, as noted in the NAM report, that CAs will raise questions of trust and may change patient-clinician relationships. A most basic question is to what extent CAs should extend the capabilities of clinicians (augmented intelligence) or replace them (artificial intelligence),” the authors said.</p>



<p>“Likewise, determining the scope of the authority of CAs requires examination of appropriate clinical scenarios and the latitude for patient engagement.”</p>



<p>The authors considered the example of someone telling a chatbot something as serious as “I want to hurt myself.” In this case, the patient safety element is brought to the forefront, as someone would need to be monitoring the chatbot often.</p>



<p>This hypothetical situation also raises the question of whether patients would take a response from a chatbot seriously, as well as who is responsible if the chatbot fails in its task.</p>



<p>“Even though technologies to determine mood, tone, and intent are becoming more sophisticated, they are not yet universally deployed in CAs nor validated for most populations,” the authors said.</p>



<p>“Moreover, there is no mention of CAs in the US Food and Drug Administration’s (FDA) proposed regulatory framework for AI or machine learning for software as a medical device nor is there a user’s guide for deploying these platforms in clinical settings.”</p>



<p>The authors also noted that regulatory organizations like the FDA should develop frameworks for appropriate classification and oversight of CAs in healthcare. For example, policymakers could classify CAs as low risk versus higher risk.</p>



<p>“Low-risk CAs might be less automated, structured for a specialized task, and have relatively minor consequences if they fail. A CA that guides patients to appointments might be one such example,” the authors wrote.</p>



<p>“In contrast, higher-risk CAs would involve more automation (natural language processing, machine learning), unstructured, open-ended dialogue with patients, and have potentially serious patient consequences in the event of system failure. Examples of higher-risk CAs might be those that advise patients after hospital discharge or offer recommendations to patients about titrating medications.”</p>



<p>Additionally, the authors noted that in partnerships between vendors and healthcare organizations to use CAs, all should be mindful of converging incentives and work to balance these goals with attention to each of the domains.</p>



<p>“Given the potential of CAs to benefit patients and clinicians, continued innovation should be supported. However, hacking of CA systems (as with other medical systems) represents a cybersecurity threat, perhaps allowing individuals with malicious intent to manipulate patient-CA interactions and even offer harmful recommendations, such as quadrupling an anticoagulant dose,” the authors stated.</p>



<p>The authors stated that ultimately, the successful and effective deployment of chatbots in healthcare will depend on the industry’s ability to assess these tools.</p>



<p>“Conversational agents are just beginning in clinical practice settings, with COVID-19 spurring greater interest in this field. The use of CAs may improve health outcomes and lower costs. Researchers and developers, in partnership with patients and clinicians, should rigorously evaluate these programs,” the authors concluded.</p>



<p>“Further consideration and investigation involving CAs and related technologies will be necessary, not only to determine their potential benefits but also to establish transparency, appropriate oversight, and safety.”</p>



<p>Healthcare leaders will need to ensure they continually evaluate the capacity of these tools to improve care delivery.</p>



<p>“It&#8217;s our belief that the work is not done when the conversational agent is deployed,” McGreevey said. “These are going to be increasingly impactful technologies that deserve to be monitored not just before they are launched, but continuously throughout the life cycle of their work with patients.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/">Patient Safety, Data Privacy Key for Use of AI-Powered Chatbots</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Where Can Machine Learning Drive Efficiencies in Drug Development?</title>
		<link>https://www.aiuniverse.xyz/where-can-machine-learning-drive-efficiencies-in-drug-development/</link>
					<comments>https://www.aiuniverse.xyz/where-can-machine-learning-drive-efficiencies-in-drug-development/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 07 Feb 2020 05:40:29 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[Drug Development]]></category>
		<category><![CDATA[FDA]]></category>
		<category><![CDATA[GAO]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Research]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6601</guid>

					<description><![CDATA[<p>Source: governmentciomedia.com The Government Accountability Office identified to lawmakers potential use cases for machine learning in drug development to help drive efficiencies and cost savings. The Food and Drug <a class="read-more-link" href="https://www.aiuniverse.xyz/where-can-machine-learning-drive-efficiencies-in-drug-development/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/where-can-machine-learning-drive-efficiencies-in-drug-development/">Where Can Machine Learning Drive Efficiencies in Drug Development?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: governmentciomedia.com</p>



<p>The Government Accountability Office identified for lawmakers potential use cases for machine learning in drug development that could help drive efficiencies and cost savings.</p>



<p>The Food and Drug Administration has been tracking the use of artificial intelligence in drug development. Although the FDA does not currently have a regulatory policy around machine learning in drug development, GAO assembled the report at the request of Reps. Greg Walden, Michael Burgess and Brett Guthrie and Sen. Lamar Alexander, said Timothy Persons, chief scientist and managing director of GAO&#8217;s science, technology assessment and analytics team.</p>



<p>GAO identified overall benefits ranging from improving research and development to expediting preclinical and clinical trials, especially as preexisting technologies continue to make the health care industry data-rich, a key element of successful machine learning.</p>



<p>“Machine learning can make drug development more efficient and effective, decreasing the time and cost required to bring potentially more effective drugs to market,” the GAO report said. “Both of these improvements could save lives and reduce suffering by getting drugs to patients in need more quickly. Lower R&amp;D costs could also allow researchers to invest more resources in disease areas that are currently not considered profitable to pursue, such as rare or orphan diseases.”</p>



<p>More specifically in drug discovery, for instance, researchers can use machine learning to identify new drug targets, screen known compounds for new therapeutic applications and design new drug candidates.</p>



<p>To help regulate and support growth of its&nbsp;use in drug development, GAO proposed six policy recommendations for lawmakers to consider:</p>



<ul class="wp-block-list"><li><strong>Research</strong>: “Policymakers could promote basic research to generate more and better data and improve understanding of machine learning in drug development.” This area, GAO added, can result in the production of higher quality data for machine learning.</li><li><strong>Data Access</strong>: “Policymakers could create mechanisms or incentives for increased sharing of high-quality data held by public or private actors, while also ensuring protection of patient data.”</li><li><strong>Standardization</strong>: “Policymakers could collaborate with relevant stakeholders to establish uniform standards for data algorithms.”</li><li><strong>Human Capital</strong>: “Policymakers could create opportunities for more public and private sector workers to” learn interdisciplinary skills required to apply machine learning&nbsp;in drug development.</li><li><strong>Regulatory Uncertainty</strong>: “Policymakers could collaborate with relevant stakeholders to develop a clear and consistent message regarding regulation of machine learning in drug development” so that drug companies can better apply it&nbsp;if they know how regulators will review its&nbsp;algorithms.</li><li><strong>Status Quo</strong>: Policymakers should maintain current efforts — such as the 2018 Strategic Plan for Data Science — that commit to improved leveraging of data and machine learning in health care areas.</li></ul>



<p>These policy recommendations also address a number of obstacles that may hinder machine learning adoption in drug development. GAO found, for instance, that research gaps remain in areas such as biology, chemistry and machine learning, including how to develop more effective models for drug development and how to represent molecules in machine-learning algorithms.</p>



<p>There is also a shortage of high-quality data for effective machine learning in drug development, as well as difficulty sharing and accessing data because of cost and privacy laws that inhibit sharing practices. GAO also found a shortage of workers skilled in both data science and biomedical science, which makes progress on machine learning in drug development difficult.</p>



<p>Finally, drug companies remain uncertain about how the government will regulate machine learning in the near future, which limits their willingness to invest deeply in machine learning for drug development.</p>



<p>In forming the final draft and publication of the report, its findings and recommendations, GAO consulted with the National Institute of Standards and Technology, as well as the FDA, to incorporate those agencies’ comments and concerns into the report.</p>
<p>The post <a href="https://www.aiuniverse.xyz/where-can-machine-learning-drive-efficiencies-in-drug-development/">Where Can Machine Learning Drive Efficiencies in Drug Development?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/where-can-machine-learning-drive-efficiencies-in-drug-development/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How Can We Be Sure Artificial Intelligence Is Safe For Medical Use?</title>
		<link>https://www.aiuniverse.xyz/how-can-we-be-sure-artificial-intelligence-is-safe-for-medical-use/</link>
					<comments>https://www.aiuniverse.xyz/how-can-we-be-sure-artificial-intelligence-is-safe-for-medical-use/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 15 Apr 2019 05:32:43 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[blindness]]></category>
		<category><![CDATA[diabetes]]></category>
		<category><![CDATA[Diabetic retinopathy]]></category>
		<category><![CDATA[FDA]]></category>
		<category><![CDATA[Medical]]></category>
		<category><![CDATA[medical devices]]></category>
		<category><![CDATA[vision loss]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3426</guid>

					<description><![CDATA[<p>Source:- npr.org When Merdis Wells visited the diabetes clinic at the University Medical Center in New Orleans about a year ago, a nurse practitioner checked her eyes to <a class="read-more-link" href="https://www.aiuniverse.xyz/how-can-we-be-sure-artificial-intelligence-is-safe-for-medical-use/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-can-we-be-sure-artificial-intelligence-is-safe-for-medical-use/">How Can We Be Sure Artificial Intelligence Is Safe For Medical Use?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- npr.org</p>
<p>When Merdis Wells visited the diabetes clinic at the University Medical Center in New Orleans about a year ago, a nurse practitioner checked her eyes to look for signs of diabetic retinopathy, the most common cause of blindness.</p>
<p>At her next visit, in February of this year, artificial intelligence software made the call.</p>
<p>The clinic had just installed a system that&#8217;s designed to identify patients who need follow-up attention.</p>
<p>The Food and Drug Administration cleared the system — called IDx-DR — for use in 2018. The agency said it was the first time it had authorized the marketing of a device that makes a screening decision without a clinician having to get involved in the interpretation.</p>
<p>It&#8217;s a harbinger of things to come. Companies are rapidly developing software to supplement or even replace doctors for certain tasks. And the FDA, accustomed to approving drugs and clearing medical devices, is now figuring out how to make sure computer algorithms are safe and effective.</p>
<p>Wells was one of the first patients at the clinic in early February to be tested with the new device, which can be run by someone without medical training. The system produces a simple report that identifies whether there are signs that a patient&#8217;s vision is starting to erode.</p>
<p>Wells had no problem with the computer making the call. &#8220;I think that&#8217;s lovely!&#8221; she says.</p>
<p>&#8220;Do I still get to see the pictures?&#8221; Wells asks nurse practitioner Debra Brown. Yes, Brown replies.</p>
<p>&#8220;I like seeing me because I want to take care of me, so I want to know as much as possible about me,&#8221; Wells says.</p>
<p>The 60-year-old resident of nearby Algiers, La., leans into the camera, which has an eyepiece for each eye.</p>
<p>&#8220;It&#8217;s just going to be like a regular picture,&#8221; Brown explains. &#8220;But when we flash, the light will be a little bright.&#8221;</p>
<p>Once Wells is in position, Brown adjusts the camera.</p>
<p>&#8220;Don&#8217;t blink!&#8221; she says. &#8220;3-2-1-0!&#8221; The camera flashes and captures the image. Three more flashes and the exam is done.</p>
<p>She says she still plans to examine the images and backstop the computer&#8217;s conclusion. That reassures Wells.</p>
<p>The test is quick and easy, which is by design. People with diabetes are supposed to get this screening test every year, but many don&#8217;t. Brown says the new system could allow the clinic to screen a lot more patients for diabetic retinopathy.</p>
<p>That&#8217;s the hope of the system&#8217;s inventor, Michael Abramoff, an ophthalmologist at the University of Iowa and company founder.</p>
<p>&#8220;The problem is many people with diabetes only go to an eye-care provider like me when they have symptoms,&#8221; he says. &#8220;And we need to find [retinopathy] before then. So that&#8217;s why early detection is really important.&#8221;</p>
<p>Abramoff spent years developing a computer algorithm that could scan retina images and automatically pick up early signs of diabetic retinopathy. And he wanted it to work in clinics, like the one in New Orleans, rather than in ophthalmologists&#8217; offices.</p>
<p>Developing the computer algorithm wasn&#8217;t the hard part.</p>
<p>&#8220;It turns out the biggest hurdle, if you care about patient safety, is the FDA,&#8221; he says.</p>
<p>That hurdle is essential for public safety, but not an easy one for a brand-new technology — especially one that makes a medical call without an expert on hand.</p>
<p>Medical software often has an easier road to market than drugs do. Software is handled through the generally less rigorous pathway for medical devices. For most devices, the evaluation involves a comparison with something already on the market.</p>
<p>But this technology for detecting diabetic retinopathy was unique, and a patient&#8217;s vision is potentially on the line.</p>
<p>When Abramoff approached the FDA, &#8220;of course they were uncomfortable at first,&#8221; he says, &#8220;and so we started working together on how can we prove that this can be safe.&#8221;</p>
<p>Abramoff needed to show that the technology was not just safe and effective but that it would work on a very diverse population since all sorts of people get diabetes. That ultimately meant testing the machine on 900 people at 10 different sites.</p>
<p>&#8220;We went into inner cities, we went into southern New Mexico to make sure we captured all those people that needed to be represented,&#8221; he says.</p>
<p>All the sites were primary care clinics, because the company wanted to demonstrate that the technology would work well without having an ophthalmologist on hand.</p>
<p>That extensive test satisfied the FDA that the system would be broadly usable and reasonably accurate. IDx-DR surpassed the FDA&#8217;s requirements: test results that indicated eye disease needed to be correct at least 85 percent of the time, while those finding no significant eye damage needed to be correct at least 82.5 percent of the time.</p>
<p>&#8220;It&#8217;s better than me, and I&#8217;m a very experienced retinal specialist,&#8221; Abramoff says.</p>
<p>The FDA helped guide the company&#8217;s software through its regulatory process, which is evolving to accommodate inventions flowing out of artificial intelligence labs.</p>
<p>Bakul Patel, associate director for digital health at the FDA, says that in general, the FDA expects more evidence and assurances for technologies that have a greater potential to cause harm if they fail.</p>
<p>Some software is completely exempt from the FDA process. A simple tweak in a routine piece of software may not require any FDA review at all. The rules get tighter for a change that could substantially alter the performance of an artificial intelligence algorithm.</p>
<p>The agency has years of experience approving software that is part of medical devices, but new algorithms are creating new challenges.</p>
<p>For one thing, the agency needs to be wary of approving an algorithm that&#8217;s based on a particular set of patients, if it&#8217;s not clear that it will be effective in different groups. An algorithm to identify skin cancer may be developed primarily on white patients and may not work on patients with darker skin.</p>
<p>And many algorithms, once on the market, will continue to gather data that can be used to improve their performance. Some programs outside of health science continually update themselves to accomplish that. That raises questions about how and when updated software needs another round of review.</p>
<p>&#8220;We realize that we have to re-imagine how we look at these things, and allow for the changes that go on, especially in this space,&#8221; Patel says.</p>
<p>To do that, the FDA is testing out a whole new approach to clearing algorithms. The agency is experimenting with a system called precertification that puts more emphasis on examining the process that companies use to develop their products, and less emphasis on examining each new tweak. Continued monitoring is another element of this strategy.</p>
<p>&#8220;We&#8217;re going to take this concept and take it on a test run,&#8221; Patel says.</p>
<p>Because many algorithms will likely be in a state of continual evolution, &#8220;it&#8217;s really important when a system is deployed in the real world that we monitor those systems to make sure that they&#8217;re performing the way we expect,&#8221; says Christina Silcox, a researcher at the Duke-Margolis Center for Health Policy.</p>
<p>She&#8217;s enthusiastic about the prospects of AI in medicine, while alert to some of the challenges the FDA will face.</p>
<p>&#8220;Right now we might see an update to a medical <em>device</em> every 18 months,&#8221; she says. &#8220;In software you might expect to see one every two weeks or every month.&#8221;</p>
<p>Seemingly minor software glitches can occasionally have serious unintended consequences. One of the worst cases involved a radiation therapy machine that, in the 1980s, gave huge overdoses of radiation to some patients because of a software bug.</p>
<p>Researchers looking at more recent incidents identified 627 software recalls by the FDA from 2011 through 2015. Those included 12 &#8220;high risk&#8221; devices such as ventilators and a defibrillator.</p>
<p>Patel certainly doesn&#8217;t want to see a high-profile failure, because that could set back a promising and rapidly growing industry.</p>
<p>One challenge that&#8217;s beyond the FDA&#8217;s scope is figuring out how to resolve conflicting conclusions from rival devices. Genetic tests that are used to guide cancer treatment, for example, already provide conflicting treatment recommendations, says Isaac Kohane, a pediatrician who heads the biomedical informatics department at Harvard Medical School. &#8220;Guess what,&#8221; he says, &#8220;the same thing is going to happen with these AI programs.&#8221;</p>
<p>&#8220;We&#8217;re going to have built-in disagreements and no doctor and no patient will know what is right,&#8221; he says.</p>
<p>Indeed, IDx isn&#8217;t the only company interested in using an algorithm to identify early signs of diabetic retinopathy. Among its competitors is Verily, one of Google&#8217;s sister companies, which is currently deploying its technology in India. (Google is among NPR&#8217;s financial supporters.)</p>
<p>&#8220;Actually I&#8217;m quite bullish in the long term,&#8221; Kohane says, as he looks out on the burgeoning field of AI. &#8220;In the short term, it&#8217;s a wild land grab.&#8221;</p>
<p>He says we need the equivalent of <em>Consumer Reports</em> in this area to help resolve these disagreements and identify superior technologies. He would also like reviews to examine not simply whether a technology performs as expected, but if it&#8217;s an improvement for patients. &#8220;What you really want is to get healthy,&#8221; he says.</p>
<p>The cost of the camera and set-up for the IDx-DR systems is around $20,000, a company spokesperson said in an email. There are options to rent or lease-to-own the camera that can reduce the upfront costs.</p>
<p>The list price for each exam is $34, the spokesperson said. But it varies depending on factors including patient volume.</p>
<p>A technically accurate piece of software doesn&#8217;t automatically lead to better health.</p>
<p>At the diabetes clinic in New Orleans, for example, the system replaced a service that also checked for another cause of blindness, glaucoma.</p>
<p>Nurse practitioner Brown visually scans Wells&#8217; images for signs of glaucoma, but that wouldn&#8217;t happen when the work is handed off to someone who lacks her expertise. Instead, the diabetes clinic staff will refer patients to get another appointment for that test.</p>
<p>Wells also got something that future patients might not – a review of her retina images, so she could see for herself any suspected issues. That interaction with a health care professional was also an important moment to talk about her diet and what she can do to stay healthy.</p>
<p>Chevelle Parker, another nurse practitioner, points to some silvery lines inside the eye&#8217;s blood vessels.</p>
<p>&#8220;That happens when your sugar levels are high,&#8221; Parker explains. &#8220;It can also be an indication of diabetic retinopathy. So we&#8217;re going to do a referral and send you on for complete testing.&#8221;</p>
<p>The software did its intended job. While Wells seemed a bit upset by the news, at least she has found out about this concern early, while there&#8217;s still time to protect her vision.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-can-we-be-sure-artificial-intelligence-is-safe-for-medical-use/">How Can We Be Sure Artificial Intelligence Is Safe For Medical Use?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-can-we-be-sure-artificial-intelligence-is-safe-for-medical-use/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
