<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Medical Research Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/medical-research/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/medical-research/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 17 Dec 2020 05:48:04 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>SwRI is developing machine vision tool to improve military medical training</title>
		<link>https://www.aiuniverse.xyz/swri-is-developing-machine-vision-tool-to-improve-military-medical-training/</link>
					<comments>https://www.aiuniverse.xyz/swri-is-developing-machine-vision-tool-to-improve-military-medical-training/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 17 Dec 2020 05:48:03 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Healthcare]]></category>
		<category><![CDATA[hospital]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Research]]></category>
		<category><![CDATA[Research]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12440</guid>

					<description><![CDATA[<p>Source: news-medical.net Southwest Research Institute is developing a machine vision tool to help the U.S. Department of Defense assess the biomechanical movements of military medical personnel during <a class="read-more-link" href="https://www.aiuniverse.xyz/swri-is-developing-machine-vision-tool-to-improve-military-medical-training/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/swri-is-developing-machine-vision-tool-to-improve-military-medical-training/">SwRI is developing machine vision tool to improve military medical training</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: news-medical.net</p>



<p>Southwest Research Institute is developing a machine vision tool to help the U.S. Department of Defense assess the biomechanical movements of military medical personnel during training exercises.</p>



<p>The simulation-based training system will compare medical trainee performance to that of experts whose physical motions, or kinematics, have been pre-recorded and analyzed in 3D with artificial intelligence.</p>



<p>&#8220;Military medical training relies on subjective human evaluations where feedback may vary among trainers,&#8221; said Dr. Dan Nicolella, of SwRI&#8217;s Mechanical Engineering Division, who co-leads the Institute&#8217;s Human Performance Initiative with Kase Saylor, an Intelligent Systems Division manager. &#8220;SwRI&#8217;s research will help both instructors and trainees objectively observe how well they are performing a specific task, providing both a quantitative score, based on expert task performance, and task-specific feedback to improve performance.&#8221;</p>



<p>The $1.25 million project, known as Investigating Methods for Performance Overdrive (IMPROVE), is funded by the DOD&#8217;s Congressionally Directed Medical Research Programs (CDMRP), Joint Program Committee-1/Medical Simulation and Information Sciences.</p>



<p>The SwRI project is part of a greater DOD effort to improve patient safety and quality of care through strategic over-the-horizon research by transitioning more capable medical simulation technologies.</p>



<p>SwRI is adapting its markerless motion capture technology, used to assess the biomechanics of athletes as well as for clinical applications. Markerless motion capture, or MoCap, leverages computer vision algorithms to circumvent the tedious process of attaching physical body markers to a human subject when capturing motion in 3D data for biomechanical analysis in research, clinical, and sport science applications.</p>



<p>SwRI&#8217;s MoCap system uses standard cameras to capture video of human subjects and custom machine learning algorithms to quantify individual biomechanical performance related to walking, running, physical screening, sports, and other precise physical movements.</p>



<p>Applying SwRI&#8217;s technology to DOD medical training will allow complex assessments of 3D kinematic performance. The project will assess the detailed performance of trainees when they suture wounds and provide other combat and hospital care requiring precise hand movements or physical orientations.</p>



<p>The automated assessments will be based on SwRI-developed machine learning trained using actual data collected from ideal physical performance while completing specific medical tasks. The Uniformed Services University, a project team member, will contribute to the research with surveys and other data gathered from several military medical training programs and training best practices.</p>



<p>The project will focus on training custom artificial intelligence systems in a 3D biomechanical model space. This methodology will result in a biomechanically informed machine learning system to measure 3D spatial temporal biomechanics directly from 2D video data.</p>
<p>The post <a href="https://www.aiuniverse.xyz/swri-is-developing-machine-vision-tool-to-improve-military-medical-training/">SwRI is developing machine vision tool to improve military medical training</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/swri-is-developing-machine-vision-tool-to-improve-military-medical-training/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>WHAT IS DEEP LEARNING? A SIMPLE GUIDE WITH EXAMPLES</title>
		<link>https://www.aiuniverse.xyz/what-is-deep-learning-a-simple-guide-with-examples/</link>
					<comments>https://www.aiuniverse.xyz/what-is-deep-learning-a-simple-guide-with-examples/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 12 Oct 2020 06:59:12 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[ANN]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[human brain]]></category>
		<category><![CDATA[Medical Research]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12134</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net Deep learning is an imitation of actual human brain neurons and its functions. Unlike any other time, the past decade has seen unprecedented development in <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-deep-learning-a-simple-guide-with-examples/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-deep-learning-a-simple-guide-with-examples/">WHAT IS DEEP LEARNING? A SIMPLE GUIDE WITH EXAMPLES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<h3 class="wp-block-heading">Deep learning imitates the neurons of the human brain and their functions.</h3>



<p>Unlike any other time, the past decade has seen unprecedented development in the field of Artificial Intelligence (AI). There is much talk of machine learning taking on tasks that humans currently do in the workplace, and deep learning is leading several of the fronts where machine learning is making practical changes.</p>



<p>Deep learning is an artificial intelligence function that imitates the workings of the human brain in processing data and creating patterns for use in decision making. It is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks (ANNs), whose networks are capable of learning from unsupervised or unstructured data. Deep learning is often known as deep neural learning or deep neural networks.</p>



<p>Deep learning is often compared with actual human brain function. The human brain can recognise a friend&#8217;s face or voice even after a long gap, and can pick out our mother among many people in a crowded marketplace; it has learned to execute complex day-to-day activities. The mechanism behind this is a system of roughly 100 billion cells called neurons, which form massively parallel and distributed networks through which humans learn to carry out complex activities. Deep learning systems are inspired by this biological neural system: scientists and researchers began building artificial neural networks so that computers could eventually learn and exhibit intelligence like humans.</p>



<p>Two types of neural network model are commonly used in deep learning:</p>



<ul class="wp-block-list"><li>Convolutional Neural Network (CNN): used in image-related applications such as autonomous driving and robot vision.</li><li>Recurrent Neural Network (RNN): used in most Natural Language Processing (NLP) text and voice applications, such as chatbots and virtual assistants.</li></ul>



<h4 class="wp-block-heading"><strong>Functions of deep learning</strong></h4>



<p>Deep learning is fuelled by an explosion of data in all forms and from across the globe. This large body of data, called big data, is collected from user interactions on social media, internet search engines, e-commerce platforms, and similar sources. Such data is considered a data asset when it holds the details of an organisation or a company, and big data can be shared through applications like cloud computing.</p>



<p>Big data is mostly unstructured and contains files from diverse sources such as video, images and documents. It is so vast that it could take humans decades to comprehend it and extract relevant information. Using AI and its applications, organisations make use of this data to increase revenue and improve their operations. Here are some use cases of deep learning at work.</p>



<p><strong>Self-driving technology:</strong> Self-driving technology is one of the most important problems researchers are trying to solve in the coming years. Automotive researchers are using deep learning to automatically detect objects such as stop signs and traffic lights, and, to reduce accidents, to detect pedestrians on the road.</p>



<p><strong>Aerospace and defence:</strong> Defence operations require constant navigation, and it is a great advantage if the navigation system can distinguish safe and unsafe zones from a long distance. Deep learning deployed on satellites helps identify objects and locate areas of interest.</p>



<p><strong>Medical research:</strong> Deep learning is a major component in detecting cancer cells. Cancer researchers at UCLA have built an advanced microscope that yields a high-dimensional data set used to train a deep learning application to accurately identify cancer cells.</p>



<p><strong>Industry automation:</strong> In industry, deep learning is used to automatically detect unsafe machinery and alert people to move away from the location, helping ensure the safety of workers in heavy-machinery environments.</p>



<h4 class="wp-block-heading"><strong>Required tools</strong></h4>



<p>Deep learning demands sophisticated tools, some of which, such as TensorFlow, PyTorch and Keras, are free, while others are highly expensive. Deep learning deals with enormous data and complex algorithms that need costly hardware infrastructure. Cloud-hosted deep learning tools are offered as Machine Learning as a Service (MLaaS) solutions; Amazon AWS, Microsoft Azure and Google Cloud are some of the platforms that provide them.</p>



<h4 class="wp-block-heading"><strong>Advantages of deep learning</strong></h4>



<ul class="wp-block-list"><li>In deep learning, neurons are trained to perform conceptual tasks such as finding edges in a photo or facial features within a face.</li><li>Unlike most other machine learning approaches, deep learning removes the worry of manually trimming down the number of features used.</li></ul>



<h4 class="wp-block-heading"><strong>Disadvantages of deep learning</strong></h4>



<ul class="wp-block-list"><li>Deep learning networks may require hundreds of thousands, or even millions, of hand-labelled examples.</li><li>Training deep learning models quickly is very expensive, as it requires commercial-grade GPUs.</li></ul>



<p>Deep learning is sometimes regarded as a ‘black box’ because its working model is complex and extremely difficult to understand.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-deep-learning-a-simple-guide-with-examples/">WHAT IS DEEP LEARNING? A SIMPLE GUIDE WITH EXAMPLES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-deep-learning-a-simple-guide-with-examples/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence Tool Diagnoses Alzheimer’s with 95% Accuracy</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 01 Sep 2020 08:22:09 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[data quality]]></category>
		<category><![CDATA[Medical Research]]></category>
		<category><![CDATA[neural networks]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11356</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com A team from Stevens Institute of Technology has developed an artificial intelligence tool that can diagnose Alzheimer’s disease with more than 95 percent accuracy, eliminating the need <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/">Artificial Intelligence Tool Diagnoses Alzheimer’s with 95% Accuracy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>A team from Stevens Institute of Technology has developed an artificial intelligence tool that can diagnose Alzheimer’s disease with more than 95 percent accuracy, eliminating the need for expensive scans or in-person testing.</p>



<p>In addition, the algorithm is also able to explain its conclusions, enabling human experts to check the accuracy of its diagnosis.</p>



<p>Alzheimer’s disease can impact a person’s use of language, the researchers noted. For example, people with Alzheimer’s tend to replace nouns with pronouns, and they can express themselves in a very roundabout, awkward way.</p>



<p>The team designed an explainable AI tool that uses attention mechanisms and a convolutional neural network to accurately identify well-known signs of Alzheimer’s, as well as subtle linguistic patterns that were previously overlooked.</p>



<p>Researchers trained the algorithm using texts composed by both healthy subjects and known Alzheimer’s sufferers describing a drawing of children stealing cookies from a jar. The team converted each individual sentence into a unique numerical sequence, or vector, representing a specific point in a 512-dimensional space.</p>



<p>This kind of approach allows even complex sentences to be assigned a concrete numerical value, making it easier to analyze structural and thematic relationships between sentences.</p>



<p>Using those vectors along with handcrafted features, the AI gradually learned to spot differences between sentences composed by healthy or unhealthy individuals, and was able to determine with significant accuracy how likely any given text was to have been produced by a person with Alzheimer’s.</p>
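<p>The pipeline described above, embedding each sentence as a fixed-length vector and then classifying, can be sketched in plain Python. This is a toy illustration, not the Stevens team's code: the hash-based "embedding" and the nearest-centroid comparison stand in for the real 512-dimensional sentence encoder, handcrafted features, and attention-based neural network, and the example sentences are hypothetical.</p>

```python
import hashlib
import math

DIM = 512  # mirrors the 512-dimensional sentence space described in the article

def embed(sentence: str) -> list[float]:
    """Toy sentence embedding: hash word bigrams into a fixed 512-dim unit vector."""
    vec = [0.0] * DIM
    words = sentence.lower().split()
    for ngram in zip(words, words[1:] + ["</s>"]):
        bucket = int(hashlib.md5(" ".join(ngram).encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def centroid(vectors: list[list[float]]) -> list[float]:
    """Mean vector of a class of sentence embeddings."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Hypothetical training texts standing in for the picture-description corpus.
healthy = [embed(s) for s in [
    "the boy is taking cookies from the jar",
    "the girl reaches for a cookie while the stool tips",
]]
impaired = [embed(s) for s in [
    "he is doing something with the thing over there",
    "it is the one that he that he wants from it",
]]
c_healthy, c_impaired = centroid(healthy), centroid(impaired)

def classify(sentence: str) -> str:
    """Assign a sentence to whichever class centroid it is closer to."""
    v = embed(sentence)
    return "healthy" if cosine(v, c_healthy) >= cosine(v, c_impaired) else "at-risk"
```

<p>The centroid comparison only illustrates the geometric idea: once sentences live in a fixed-dimensional space, "healthy" and "impaired" texts become separable by distance, which is what the actual neural classifier exploits with far richer features.</p>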



<p>“This is a real breakthrough,” said the tool’s creator, K.P. Subbalakshmi, founding director of Stevens Institute of Artificial Intelligence and&nbsp;professor of electrical and computer engineering at the Charles V. Schaefer School of Engineering &amp; Science.</p>



<p>“We’re opening an exciting new field of research, and making it far easier to explain to patients why the AI came to the conclusion that it did, while diagnosing patients. This addresses the important question of trustability of AI systems in the medical field.”&nbsp;&nbsp;</p>



<p>The AI system can also incorporate new criteria that may be identified by other research teams in the future, making the algorithm increasingly more accurate over time.</p>



<p>“We designed our system to be both modular and transparent,” Subbalakshmi explained. “If other researchers identify new markers of Alzheimer’s, we can simply plug those into our architecture to generate even better results.”</p>



<p>In the future, AI tools may be able to diagnose Alzheimer’s using any text, from emails to social media posts. However, to develop such an algorithm, researchers would need to train it on many different kinds of texts produced by known Alzheimer’s sufferers instead of just picture descriptions.</p>



<p>While this kind of data is not yet available, increasing access to this kind of information could lead to the development of accurate, comprehensive AI tools.</p>



<p>“The algorithm itself is incredibly powerful,” Subbalakshmi said. “We’re only constrained by the data available to us.”</p>



<p>The researchers’ next steps will be gathering new data that will help the algorithm diagnose patients with Alzheimer’s disease based on speech in languages other than English. The team is also uncovering ways in which other neurological conditions, such as aphasia, stroke, traumatic brain injuries, and depression, can impact language use.</p>



<p>“This method is definitely generalizable to other diseases,” said Subbalakshmi. “As we acquire more and better data, we’ll be able to create streamlined, accurate diagnostic tools for many other illnesses too.”&nbsp;</p>



<p>Researchers expect that providers can use this AI tool to more accurately diagnose Alzheimer’s, leading to earlier treatment and reduced healthcare costs.</p>



<p>“This is absolutely state-of-the-art,” said Subbalakshmi. “Our AI software is the most accurate diagnostic tool currently available while also being explainable.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/">Artificial Intelligence Tool Diagnoses Alzheimer’s with 95% Accuracy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning Tools Can Kickstart Cancer Radiation Therapy</title>
		<link>https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 29 Jan 2020 07:59:07 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Imaging Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Medical Research]]></category>
		<category><![CDATA[Preventive Care]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6436</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com January 28, 2020 &#8211; New research from UT Southwestern has shown that deep learning technology could help providers quickly develop optimal treatment plans for cancer patients, decreasing the odds <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/">Deep Learning Tools Can Kickstart Cancer Radiation Therapy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>January 28, 2020 &#8211; New research from UT Southwestern has shown that deep learning technology could help providers quickly develop optimal treatment plans for cancer patients, decreasing the odds that the disease will spread.</p>



<p>Patients usually have to wait several days to a week to begin therapy while doctors manually develop treatment plans, which can be a tedious, time-consuming process. Providers must carefully review a patient’s imaging data and conduct several phases of feedback within the medical team.</p>



<p>Delaying radiation therapy for even a week can increase the chance of some cancers recurring or spreading by 12 to 14 percent, researchers noted.</p>



<p>“Some of these patients need radiation therapy immediately, but doctors often have to tell them to go home and wait,” said&nbsp;Steve Jiang, PhD, who directs UT&nbsp;Southwestern’s&nbsp;Medical Artificial Intelligence and Automation (MAIA) Lab. “Achieving optimal treatment plans in near real time is important and part of our broader mission to use AI to improve all aspects of cancer care.”</p>



<p>The team explored how AI and deep learning tools could improve multiple aspects of radiation therapy, from initial dosage plans required before the treatment can begin, to the dose recalculations that occur as the plan progresses.</p>



<p>Researchers used data from 70 prostate cancer patients to train four deep learning models. The tools learned to develop 3D renderings of how to best distribute the radiation in each patient.</p>



<p>Each model accurately predicted the treatment plans developed by the medical team, and the technology was able to produce optimal treatment plans within five-hundredths of a second after receiving clinical data for patients.</p>



<p>“Our AI can cut out much of the back and forth that happens between the doctor and the dosage planner,” Jiang said. “This improves the efficiency dramatically.”</p>



<p>Jiang also led a second study that showed how AI can quickly and accurately recalculate dosages before each radiation session, taking into account how a patient’s anatomy may have changed since the last therapy. A traditional, accurate recalculation can require patients to wait up to ten minutes or more, in addition to the time needed to conduct anatomy imaging before each session.</p>



<p>Jiang and his team developed an AI model that combined two conventional models used for dose calculation: a simple, fast model that lacked accuracy, and a complex one that was accurate but required more time.</p>



<p>The newly developed AI technology assessed the differences between the models, and learned how to utilize both speed and accuracy to produce calculations within one second.</p>



<p>UT Southwestern plans to use these new deep learning and AI capabilities in clinical care after implementing a patient interface. The MAIA Lab is also currently developing deep learning tools for several other purposes, including enhanced medical imaging and image processing, automated medical procedures, and improved disease diagnosis and outcome prediction.</p>



<p>Researchers have taken an interest in using AI to improve radiation therapy for patients. A team from the University of Texas MD Anderson Cancer Center recently developed a machine learning tool that could accurately predict two of the most challenging side effects of radiation therapy for patients with head and neck cancers: significant weight loss or the need for a feeding tube.</p>



<p>The technology could help providers deliver more proactive care for patients with cancer.</p>



<p>“Being able to identify which patients are at greatest risk would allow radiation oncologists to take steps to prevent or mitigate these possible side effects,” said Jay Reddy, MD, PhD, an assistant professor of radiation oncology at The University of Texas MD Anderson Cancer Center and lead author on the study.&nbsp;</p>



<p>“If the patient has an intermediate risk, and they might get through treatment without needing a feeding tube, we could take precautions such as setting them up with a nutritionist and providing them with nutritional supplements. If we know their risk for feeding tube placement is extremely high, we could place it ahead of time so they wouldn’t have to be admitted to the hospital after treatment. We’d know to keep a closer eye on that patient.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/">Deep Learning Tools Can Kickstart Cancer Radiation Therapy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine Learning Tool Accurately Diagnoses Esophageal Cancer</title>
		<link>https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 09 Nov 2019 08:05:21 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Imaging Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Medical Research]]></category>
		<category><![CDATA[Predictive Analytics]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5076</guid>

					<description><![CDATA[<p>Source: dqindia.com November 08, 2019 &#8211; Machine learning methods could accurately identify cancerous esophagus tissue on microscopy images without the time-consuming manual data input that is required for current <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/">Machine Learning Tool Accurately Diagnoses Esophageal Cancer</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: dqindia.com</p>



<p>November 08, 2019 &#8211; Machine learning methods could accurately identify cancerous esophagus tissue on microscopy images without the time-consuming manual data input that is required for current methods, according to a study published in <em>JAMA Network Open</em>.</p>



<p>Researchers at Dartmouth and Dartmouth-Hitchcock Norris Cotton Cancer Center have developed an innovative machine learning approach that automatically learns clinically important regions on whole-slide images to classify them.</p>



<p>Histopathology image analysis requires a manual annotation process that outlines the regions of interest on a high-resolution whole slide image to train the computer model. Although the method is advanced, the process is still tedious.</p>



<p>“Data annotation is the most time-consuming and laborious bottleneck in developing modern deep learning methods,” said Saeed Hassanpour, PhD, lead author of the study.</p>



<p>“Our study shows that deep learning models for histopathology slides analysis can be trained with labels only at the tissue level, thus removing the need for high-cost data annotation and creating new opportunities for expanding the application of deep learning in digital pathology.”</p>



<p>The team tested their method for identifying cancerous and precancerous esophagus tissue on high-resolution microscopy images without training on region-of-interest annotations. Researchers then applied the network to Barrett esophagus and esophageal adenocarcinoma detection and found that their method achieved better results than the traditional method.</p>



<p>“Our new approach outperformed the current state-of-the-art approach that requires these detailed annotations for its training,” said Hassanpour.</p>



<p>“The result is significant because our method is based solely on tissue-level annotations, unlike existing methods that are based on manually annotated regions.”</p>



<p>Machine learning technology has consistently demonstrated its potential to improve diagnostics and care management. Recently, a team of researchers used machine learning tools to accurately predict patients with cancer who were at high risk of six-month mortality, which could help clinicians engage in timely conversations with their patients.</p>



<p>“Our findings demonstrated that machine learning algorithms can predict a patient’s risk of short-term mortality with good discrimination and PPV. Such a tool could be very useful in aiding clinicians’ risk assessments for patients with cancer as well as serving as a point-of-care prompt to consider discussions about goals and end-of-life preferences,” the researchers stated.</p>



<p>“Machine learning algorithms can be relatively easily retrained to account for emerging cancer survival patterns. As computational capacity and the availability of structured genetic and molecular information increase, we expect that predictive performance will increase and there may be a further impetus to implement similar tools in practice.”</p>



<p>The research team on the esophageal study believes that this new machine learning approach could improve cancer diagnosis and care.</p>



<p>“Our method would facilitate a more extensive range of research on analyzing histopathology images that were previously not possible due to the lack of detailed annotations,” Hassanpour concluded.</p>



<p>“Clinical deployment of such systems could assist pathologists in reading histopathology slides more accurately and efficiently, which is a critical task for the cancer diagnosis, predicting prognosis, and treatment of cancer patients.”</p>



<p>In future work, the team plans to further validate the model by testing it on data from other institutions and running prospective clinical trials. Additionally, the group will apply the method to histological images of other types of tumors and lesions that have limited training data.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/">Machine Learning Tool Accurately Diagnoses Esophageal Cancer</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI equal with human experts in medical diagnosis, study finds</title>
		<link>https://www.aiuniverse.xyz/ai-equal-with-human-experts-in-medical-diagnosis-study-finds/</link>
					<comments>https://www.aiuniverse.xyz/ai-equal-with-human-experts-in-medical-diagnosis-study-finds/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 25 Sep 2019 12:19:40 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Health]]></category>
		<category><![CDATA[Human skills]]></category>
		<category><![CDATA[Medical Research]]></category>
		<category><![CDATA[NHS]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4588</guid>

					<description><![CDATA[<p>Source: theguardian.com Artificial intelligence is on a par with human experts when it comes to making medical diagnoses based on images, a review has found. The potential <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-equal-with-human-experts-in-medical-diagnosis-study-finds/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-equal-with-human-experts-in-medical-diagnosis-study-finds/">AI equal with human experts in medical diagnosis, study finds</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: theguardian.com</p>



<p>Artificial intelligence is on a par with human experts when it comes to making medical diagnoses based on images, a review has found.</p>



<p>The potential for artificial intelligence in healthcare has caused excitement, with advocates saying it will ease the strain on resources, free up time for doctor-patient interactions and even aid the development of tailored treatment. Last month the government announced £250m of funding for a new NHS artificial intelligence laboratory.</p>



<p>However, experts have warned the latest findings are based on a small number of studies, since the field is littered with poor-quality research.</p>



<p>One burgeoning application is the use of AI in interpreting medical images – a field that relies on deep learning, a sophisticated form of machine learning in which a series of labelled images are fed into algorithms that pick out features within them and learn how to classify similar images. This approach has shown promise in diagnosis of diseases from cancers to eye conditions.</p>



<p>However, questions remain about how such deep learning systems measure up to human skills. Now researchers say they have conducted the first comprehensive review of published studies on the issue, and found humans and machines are on a par.</p>



<p>Prof Alastair Denniston, at the University Hospitals Birmingham NHS foundation trust and a co-author of the study, said the results were encouraging but the study was a reality check for some of the hype about AI.</p>



<p>Dr Xiaoxuan Liu, the lead author of the study and from the same NHS trust, agreed. “There are a lot of headlines about AI outperforming humans, but our message is that it can at best be equivalent,” she said.</p>



<p>Writing in the Lancet Digital Health, Denniston, Liu and colleagues reported how they focused on research papers published since 2012 – a pivotal year for deep learning.</p>



<p>An initial search turned up more than 20,000 relevant studies. However, only 14 studies – all based on human disease – reported good quality data, tested the deep learning system with images from a separate dataset to the one used to train it, and showed the same images to human experts.</p>



<p>The team pooled the most promising results from within each of the 14 studies to reveal that deep learning systems correctly detected a disease state 87% of the time – compared with 86% for healthcare professionals – and correctly gave the all-clear 93% of the time, compared with 91% for human experts.</p>
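<p>The two pooled figures correspond to sensitivity (correctly detecting disease) and specificity (correctly giving the all-clear). A minimal sketch of how these rates are computed from confusion-matrix counts; the counts below are illustrative, chosen only so the rates match the pooled percentages quoted above:</p>

```python
# Sensitivity (true positive rate): fraction of diseased cases detected.
def sensitivity(true_positives, false_negatives):
    return true_positives / (true_positives + false_negatives)

# Specificity (true negative rate): fraction of healthy cases cleared.
def specificity(true_negatives, false_positives):
    return true_negatives / (true_negatives + false_positives)

# Illustrative counts matching the deep learning systems' pooled rates.
print(f"sensitivity: {sensitivity(87, 13):.0%}")  # 87%
print(f"specificity: {specificity(93, 7):.0%}")   # 93%
```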



<p>However, the healthcare professionals in these scenarios were not given the additional patient information they would have in the real world, which could have steered their diagnoses.</p>



<p>Prof David Spiegelhalter, the chair of the Winton centre for risk and evidence communication at the University of Cambridge, said the field was awash with poor research.</p>



<p>“This excellent review demonstrates that the massive hype over AI in medicine obscures the lamentable quality of almost all evaluation studies,” he said. “Deep learning can be a powerful and impressive technique, but clinicians and commissioners should be asking the crucial question: what does it actually add to clinical practice?”</p>



<p>However, Denniston remained optimistic about the potential of AI in healthcare, saying such deep learning systems could act as a diagnostic tool and help tackle the backlog of scans and images. What’s more, said Liu, they could prove useful in places which lack experts to interpret images.</p>



<p>Liu said it would be important to use deep learning systems in clinical trials to assess whether patient outcomes improved compared with current practices.</p>



<p>Dr Raj Jena, an oncologist at Addenbrooke’s hospital in Cambridge who was not involved in the study, said deep learning systems would be important in the future, but stressed they needed robust real-world testing. He also said it was important to understand why such systems sometimes make the wrong assessment.</p>



<p>“If you are a deep learning algorithm, when you fail you can often fail in a very unpredictable and spectacular way,” he said.</p>



<p>The post <a href="https://www.aiuniverse.xyz/ai-equal-with-human-experts-in-medical-diagnosis-study-finds/">AI equal with human experts in medical diagnosis, study finds</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-equal-with-human-experts-in-medical-diagnosis-study-finds/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PHDA, Amazon Partner to Improve Care with Machine Learning</title>
		<link>https://www.aiuniverse.xyz/phda-amazon-partner-to-improve-care-with-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/phda-amazon-partner-to-improve-care-with-machine-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 13 Aug 2019 17:32:28 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Amazon Web Services]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Medical Research]]></category>
		<category><![CDATA[Precision Medicine]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4334</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com The Pittsburgh Health Data Alliance (PHDA) is partnering with Amazon Web Services (AWS) to improve medical imaging, cancer diagnostics, precision medicine, voice-enabled technologies, and other areas of <a class="read-more-link" href="https://www.aiuniverse.xyz/phda-amazon-partner-to-improve-care-with-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/phda-amazon-partner-to-improve-care-with-machine-learning/">PHDA, Amazon Partner to Improve Care with Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>The Pittsburgh Health Data Alliance (PHDA) is partnering with Amazon Web Services (AWS) to improve medical imaging, cancer diagnostics, precision medicine, voice-enabled technologies, and other areas of healthcare with machine learning. </p>



<p>The AWS Machine Learning Research sponsorship will enable PHDA scientists from the University of Pittsburgh and Carnegie Mellon University (CMU) to accelerate research and product commercialization efforts across eight projects.&nbsp;</p>



<p>One project has the potential to create an individualized risk score for every cancer patient, which will help providers better predict a patient’s response to treatment. Other projects will aim to use a patient’s verbal and visual cues to diagnose and treat mental health symptoms, and to reduce medical errors by mining all data in patient medical records.</p>



<p>Researchers from the University of Pittsburgh are using AWS resources to improve diagnosis and treatment of abdominal aortic aneurysms, the 13th leading cause of death in western countries. Currently, clinicians can only use the measurements of an aneurysm’s diameter and growth rate to predict the risk of a rupture.&nbsp;</p>



<p>“With the latest advances in machine learning, we are developing an algorithm that will provide clinicians with an objective, predictive tool to guide surgical interventions before symptoms appear, improving patient outcomes,” said David Vorp, PhD, associate dean for research at Pitt’s Swanson School of Engineering and the John A. Swanson Professor of Bioengineering.</p>



<p>Additionally, a team from CMU will leverage AWS support to develop algorithms and software tools to better understand the origin and evolution of tumor cells. The project will use machine learning to generate insights into how tumors developed, as well as how likely they are to change and grow in the future.</p>



<p>“Data-driven, genomic methods guided by an understanding of cancers as evolutionary systems have relevance to numerous aspects of clinical cancer care,” said Russell Schwartz, PhD, professor of biological sciences and computational biology at CMU.&nbsp;</p>



<p>“These include determining which precancerous lesions are likely to become cancers, which cancers have a good or bad prognosis, and which of those with bad prognoses might respond long-term to specific therapies.”</p>



<p>AWS resources will also support several precision medicine projects. One of these projects will focus on identifying genetic drivers of cancer within individual tumors, while another will aim to create a personalized risk score for breast cancer recurrence. </p>



<p>Formed in 2015, the PHDA brings together the leading health sciences research at the University of Pittsburgh, computer science and machine learning capabilities of CMU, and the clinical care, patient data and commercialization expertise of the University of Pittsburgh Medical Center (UPMC). </p>



<p>The PHDA uses the big data generated in healthcare to transform the way providers treat and prevent diseases, and to engage patients in their own care. With new machine learning tools and advances in computing power, like those offered by Amazon, PHDA will be able to rapidly translate research insights into treatments and services that could significantly improve patient health.</p>



<p>“We believe that machine learning can significantly accelerate the progress of medical research and help translate those advances into treatments and improved experiences for patients,” said Swami Sivasubramanian, vice president of machine learning for AWS.&nbsp;</p>



<p>“We are excited to bring our machine learning services and cloud computing resources to support the high-impact work being done at the PHDA.”</p>



<p>By partnering with AWS, PHDA will continue its efforts to advance healthcare delivery and disease treatment.&nbsp;</p>



<p>“This collaboration with AWS complements the unique strengths of the PHDA&#8217;s founders and will provide unparalleled resources to our researchers,” said Tal Heppenstall, president of UPMC Enterprises, which funds the PHDA and focuses on commercializing its breakthroughs.&nbsp;</p>



<p>“By leveraging AWS machine learning and artificial intelligence services, we can help Pittsburgh become the premier hub of technology innovation in health care, drawing innovators from companies big and small to join us in this critical effort to revolutionize the delivery of health care.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/phda-amazon-partner-to-improve-care-with-machine-learning/">PHDA, Amazon Partner to Improve Care with Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/phda-amazon-partner-to-improve-care-with-machine-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
