<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Safety Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/safety/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/safety/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 12 Sep 2020 10:56:36 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Developing software for safety in medical robotics</title>
		<link>https://www.aiuniverse.xyz/developing-software-for-safety-in-medical-robotics/</link>
					<comments>https://www.aiuniverse.xyz/developing-software-for-safety-in-medical-robotics/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 12 Sep 2020 10:56:21 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Developing]]></category>
		<category><![CDATA[Medical]]></category>
		<category><![CDATA[Safety]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11550</guid>

					<description><![CDATA[<p>Source: medicaldesignandoutsourcing.com The use of robotics in medtech continues to grow. Whether it’s a cobot working alongside humans to automate manufacturing or a surgical robot in the OR, a single point of failure can cause serious harm. The incorporated software systems must take safety into account. IEC 61508-3 offers several techniques for developing software for <a class="read-more-link" href="https://www.aiuniverse.xyz/developing-software-for-safety-in-medical-robotics/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/developing-software-for-safety-in-medical-robotics/">Developing software for safety in medical robotics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: medicaldesignandoutsourcing.com</p>



<p>The use of robotics in medtech continues to grow. Whether it’s a cobot working alongside humans to automate manufacturing or a surgical robot in the OR, a single point of failure can cause serious harm. The incorporated software systems must take safety into account.</p>



<p>IEC 61508-3 offers several techniques for developing software for safety-related systems, which the medical device software development community can draw on when designing and implementing risk-control measures as required by ISO 14971.</p>



<p>Developing “safe” software begins with establishing a software coding standard. IEC 61508-3 promotes using well-known techniques, including:</p>



<ul class="wp-block-list"><li>Using modular code.</li><li>Using preferred design patterns.</li><li>Avoiding reentrance and recursion.</li><li>Avoiding dynamic memory allocations and global data objects.</li><li>Minimizing the use of interrupt service routines and locking mechanisms.</li><li>Avoiding dead wait loops.</li><li>Using deterministic timing patterns.</li></ul>
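<p>As an illustrative sketch only (Python here for brevity, though such coding standards are usually applied to C/C++ firmware), two of the rules above might look like this in practice:</p>

```python
# Sketch of two coding-standard rules: avoid recursion, and avoid
# dynamic memory allocation during operation.

def factorial_iterative(n: int) -> int:
    """Iterative form: bounded stack depth and deterministic timing,
    unlike a recursive implementation."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Avoiding dynamic allocation: preallocate a fixed-size buffer at
# startup instead of growing a structure while the device is running.
SAMPLE_CAPACITY = 64
samples = [0] * SAMPLE_CAPACITY  # fixed capacity, allocated up front

def record_sample(index: int, value: int) -> None:
    if not 0 <= index < SAMPLE_CAPACITY:
        # Fail loudly rather than silently growing the buffer.
        raise IndexError("sample index out of range")
    samples[index] = value
```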



<h2 class="wp-block-heading">Keep it simple</h2>



<p>Other suggestions fall under the &#8220;keep it simple&#8221; principle: limiting the use of pointers, unions and type casting; avoiding automatic type conversions; and using parentheses and brackets to clarify intended syntax.</p>



<p>A hazard analysis might identify that your code or data spaces can become corrupted. Well-known risk-control measures for maintaining code and memory integrity can be adopted easily. Running code from read-only memory, protected with a cyclic redundancy check (CRC-32) that is verified at boot time and periodically during runtime, prevents errant changes to the code space and provides a mechanism to detect such failures.</p>
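<p>As a minimal sketch (Python for illustration; on a real device the CRC would be computed over flash contents), a boot-time and periodic code-space check can be as simple as:</p>

```python
import zlib

# A bytes blob stands in for read-only program memory here; the
# reference CRC would normally be computed at build/flash time.
CODE_IMAGE = b"\x01\x02\x03 firmware bytes stand-in"
REFERENCE_CRC = zlib.crc32(CODE_IMAGE)

def code_space_intact(image: bytes, expected_crc: int) -> bool:
    """Recompute CRC-32 over the code space and compare against the
    stored reference; call at boot and periodically at runtime."""
    return zlib.crc32(image) == expected_crc
```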



<p>Segregating data into different memory regions that can be protected through the virtual memory system, applying CRC-32 over blocks of those regions, or even adding a checksum to each item stored in memory allows these CRCs/checksums to be checked periodically.</p>



<p>CRC/checksums can be verified on each read access to a stored item and updated atomically on every write access to these protected items. Building tests into the software is an important tool as well. It’s a good idea to perform a power-on self-test (POST) at power-up to make sure the hardware is working and to check that your code and data spaces are consistent and not corrupt.</p>
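<p>A minimal sketch of per-item checksum protection, verified on read and updated together with the payload on write (Python for illustration; the class and names are hypothetical):</p>

```python
import zlib

class ProtectedStore:
    """Each stored item carries its own CRC-32: reads verify it,
    writes update payload and CRC together."""

    def __init__(self):
        self._items = {}  # name -> (payload bytes, crc32)

    def write(self, name: str, payload: bytes) -> None:
        # Store the payload and its CRC as one tuple in a single
        # assignment, so the pair is never observed half-updated.
        self._items[name] = (payload, zlib.crc32(payload))

    def read(self, name: str) -> bytes:
        payload, crc = self._items[name]
        if zlib.crc32(payload) != crc:
            raise ValueError(f"corruption detected in {name!r}")
        return payload
```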



<h2 class="wp-block-heading">What else can happen?</h2>



<p>Another hazardous situation arises when controlling and monitoring are performed on the same processor or in the same process. What happens to your safety system if your process gets hung up in a loop? Techniques that separate the monitor from the controlling function introduce some complexity to the software system, but this complexity can be offset by ensuring the controlling function implements the minimum safety requirements while the monitor handles the fault and error recovery.</p>
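<p>One way to sketch that separation (an assumption for illustration, not the standard&#8217;s prescribed design): the controlling function only records a heartbeat, while an independent monitor decides whether the controller has hung.</p>

```python
import time

class HeartbeatMonitor:
    """Monitor kept separate from the controlling function: the
    controller only 'kicks' a timestamp; the monitor, running
    elsewhere, judges whether the controller is still alive."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_kick = time.monotonic()

    def kick(self) -> None:
        """Called by the controlling function on each loop iteration."""
        self.last_kick = time.monotonic()

    def controller_alive(self) -> bool:
        """Called by the monitoring function; False means the
        controller missed its deadline and recovery should start."""
        return (time.monotonic() - self.last_kick) < self.timeout_s
```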



<p>Fault detection systems and error recovery mechanisms are much easier to implement when designed from the start. Poorly designed software can experience unexpected, inconsistent timing, which results in unexpected failures. It’s possible to avoid these failures by controlling latency in the software. State machines, software watchdogs and timer-driven events are common design elements to control this.</p>
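<p>As a sketch of the state-machine element (states and events are hypothetical), making every transition explicit keeps behaviour, and therefore timing, predictable:</p>

```python
# Explicit transition table: each (state, event) pair maps to exactly
# one next state, so there are no hidden paths through the logic.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "stop"): "idle",
    ("running", "fault"): "safe_stop",
    ("safe_stop", "reset"): "idle",
}

def step(state: str, event: str) -> str:
    """Advance the machine; an event not defined for the current
    state leaves the state unchanged rather than guessing."""
    return TRANSITIONS.get((state, event), state)
```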



<h2 class="wp-block-heading">Keep an eye on communications</h2>



<p>Inter-device and inter-process communications are another area of concern for safety-related systems. The integrity of these communications must be monitored to ensure they are robust. Applying CRC-32 to any protocol between two entities is recommended; separate CRC-32 values on the header and the payload help detect corruption of these messages. Protocols should be designed with the assumption that the system could reboot at any time due to a fault, so building in retry attempts and stateless protocols is recommended.</p>
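<p>A minimal sketch of such framing (the frame layout here is a hypothetical example, not a real protocol): the header and the payload each carry their own CRC-32, so corruption in either part is detected independently.</p>

```python
import struct
import zlib

# Hypothetical frame layout:
#   header  = "!HI" (msg_type, payload_len)  -> 6 bytes
#   header CRC-32                            -> 4 bytes
#   payload bytes                            -> payload_len bytes
#   payload CRC-32                           -> 4 bytes

def encode(msg_type: int, payload: bytes) -> bytes:
    header = struct.pack("!HI", msg_type, len(payload))
    return (header + struct.pack("!I", zlib.crc32(header))
            + payload + struct.pack("!I", zlib.crc32(payload)))

def decode(frame: bytes):
    header = frame[:6]
    (hcrc,) = struct.unpack("!I", frame[6:10])
    if zlib.crc32(header) != hcrc:
        raise ValueError("header corrupted")
    msg_type, plen = struct.unpack("!HI", header)
    payload = frame[10:10 + plen]
    (pcrc,) = struct.unpack("!I", frame[10 + plen:14 + plen])
    if zlib.crc32(payload) != pcrc:
        raise ValueError("payload corrupted")
    return msg_type, payload
```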



<p>Safe operational software verifies the ranges of all inputs at the interface where they are encountered; checks internal variables for consistency; and defines default settings to help recover from an inconsistent setting or to support a factory reset. Software watchdog processes can be put in place to watch the watcher and ensure that processes are running as they should.</p>
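<p>A minimal sketch of range-checking with safe defaults (the setting names and ranges here are hypothetical, chosen only to illustrate the pattern):</p>

```python
# Hypothetical settings: safe defaults plus allowed (lo, hi) ranges.
DEFAULTS = {"motor_speed_rpm": 0, "temp_limit_c": 40}
RANGES = {"motor_speed_rpm": (0, 3000), "temp_limit_c": (10, 60)}

def validate_setting(name: str, value: int) -> int:
    """Range-check an input at the interface where it arrives; an
    out-of-range value falls back to the safe default, which also
    supports recovery from inconsistent settings or a factory reset."""
    lo, hi = RANGES[name]
    if lo <= value <= hi:
        return value
    return DEFAULTS[name]
```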



<p>By taking these techniques into account, software developers working on medical robotic devices can better address the concerns of safety-related systems.</p>
<p>The post <a href="https://www.aiuniverse.xyz/developing-software-for-safety-in-medical-robotics/">Developing software for safety in medical robotics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/developing-software-for-safety-in-medical-robotics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Patient Safety, Data Privacy Key for Use of AI-Powered Chatbots</title>
		<link>https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/</link>
					<comments>https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 29 Jul 2020 07:40:06 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[coronavirus]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[FDA]]></category>
		<category><![CDATA[Natural language processing]]></category>
		<category><![CDATA[patient]]></category>
		<category><![CDATA[Safety]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10570</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com Patient safety, data privacy, and health equity are key considerations for the use of chatbots powered by artificial intelligence in healthcare, according to a viewpoint piece published in JAMA. With the emergence of COVID-19 and social distancing guidelines, more healthcare systems are exploring and deploying automated chatbots, the authors noted. However, there are several key considerations organizations <a class="read-more-link" href="https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/">Patient Safety, Data Privacy Key for Use of AI-Powered Chatbots</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>Patient safety, data privacy, and health equity are key considerations for the use of chatbots powered by artificial intelligence in healthcare, according to a viewpoint piece published in JAMA.</p>



<p>With the emergence of COVID-19 and social distancing guidelines, more healthcare systems are exploring and deploying automated chatbots, the authors noted. However, there are several key considerations organizations should keep in mind before implementing these tools.</p>



<p>“We need to recognize that this is relatively new technology and even for the older systems that were in place, the data are limited,” said the viewpoint&#8217;s lead author, John D. McGreevey III, MD, an associate professor of Medicine in the Perelman School of Medicine at the University of Pennsylvania.</p>



<p>“Any efforts also need to realize that much of the data we have comes from research, not widespread clinical implementation. Knowing that, evaluation of these systems must be robust when they enter the clinical space, and those operating them should be nimble enough to adapt quickly to feedback.”</p>



<p>The authors outlined 12 different focus areas that leaders should consider when planning to implement a chatbot or conversational agent (CA) in clinical care. For chatbots that use natural language processing, the messages these agents send to patients are extremely significant, as are patients&#8217; reactions to them.</p>



<p>“It is important to recognize the potential, as noted in the NAM report, that CAs will raise questions of trust and may change patient-clinician relationships. A most basic question is to what extent CAs should extend the capabilities of clinicians (augmented intelligence) or replace them (artificial intelligence),” the authors said.</p>



<p>“Likewise, determining the scope of the authority of CAs requires examination of appropriate clinical scenarios and the latitude for patient engagement.”</p>



<p>The authors considered the example of someone telling a chatbot something as serious as “I want to hurt myself.” In this case, the patient safety element is brought to the forefront, as someone would need to be monitoring the chatbot often.</p>



<p>This hypothetical situation also raises the question of whether patients would take a response from a chatbot seriously, as well as who is responsible if the chatbot fails in its task.</p>



<p>“Even though technologies to determine mood, tone, and intent are becoming more sophisticated, they are not yet universally deployed in CAs nor validated for most populations,” the authors said.</p>



<p>“Moreover, there is no mention of CAs in the US Food and Drug Administration’s (FDA) proposed regulatory framework for AI or machine learning for software as a medical device nor is there a user’s guide for deploying these platforms in clinical settings.”</p>



<p>The authors also noted that regulatory organizations like the FDA should develop frameworks for appropriate classification and oversight of CAs in healthcare. For example, policymakers could classify CAs as low risk versus higher risk.</p>



<p>“Low-risk CAs might be less automated, structured for a specialized task, and have relatively minor consequences if they fail. A CA that guides patients to appointments might be one such example,” the authors wrote.</p>



<p>&#8220;In contrast, higher-risk CAs would involve more automation (natural language processing, machine learning) and unstructured, open-ended dialogue with patients, and have potentially serious patient consequences in the event of system failure. Examples of higher-risk CAs might be those that advise patients after hospital discharge or offer recommendations to patients about titrating medications.&#8221;</p>



<p>Additionally, the authors noted that in partnerships between vendors and healthcare organizations to use CAs, all should be mindful of converging incentives and work to balance these goals with attention to each of the domains.</p>



<p>“Given the potential of CAs to benefit patients and clinicians, continued innovation should be supported. However, hacking of CA systems (as with other medical systems) represents a cybersecurity threat, perhaps allowing individuals with malicious intent to manipulate patient-CA interactions and even offer harmful recommendations, such as quadrupling an anticoagulant dose,” the authors stated.</p>



<p>The authors stated that ultimately, the successful and effective deployment of chatbots in healthcare will depend on the industry’s ability to assess these tools.</p>



<p>“Conversational agents are just beginning in clinical practice settings, with COVID-19 spurring greater interest in this field. The use of CAs may improve health outcomes and lower costs. Researchers and developers, in partnership with patients and clinicians, should rigorously evaluate these programs,” the authors concluded.</p>



<p>“Further consideration and investigation involving CAs and related technologies will be necessary, not only to determine their potential benefits but also to establish transparency, appropriate oversight, and safety.”</p>



<p>Healthcare leaders will need to ensure they continually evaluate the capacity of these tools to improve care delivery.</p>



<p>“It&#8217;s our belief that the work is not done when the conversational agent is deployed,” McGreevey said. “These are going to be increasingly impactful technologies that deserve to be monitored not just before they are launched, but continuously throughout the life cycle of their work with patients.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/">Patient Safety, Data Privacy Key for Use of AI-Powered Chatbots</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/patient-safety-data-privacy-key-for-use-of-ai-powered-chatbots/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
