<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Imaging Analytics Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/imaging-analytics/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/imaging-analytics/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 29 Sep 2020 07:20:53 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Collaboration Will Offer Data to Train Machine Learning Tools</title>
		<link>https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/</link>
					<comments>https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 29 Sep 2020 07:19:25 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[analytics technologies]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Imaging Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11834</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com Researchers at the University of Iowa (UI) have received a $1 million grant from the National Science Foundation (NSF) to develop a machine learning platform to train algorithms with data from around the world. The phase one grant will enable the UI team to lead a multi-university and industry collaboration and address concerns <a class="read-more-link" href="https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/">Collaboration Will Offer Data to Train Machine Learning Tools</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>Researchers at the University of Iowa (UI) have received a $1 million grant from the National Science Foundation (NSF) to develop a machine learning platform to train algorithms with data from around the world.</p>



<p>The phase one grant will enable the UI team to lead a multi-university and industry collaboration and address concerns around patient privacy and data security in clinical AI development.</p>



<p>The researchers noted that although the use of AI is widespread in healthcare, training effective machine learning algorithms requires thousands of samples annotated by doctors. This can raise privacy and security issues, the team stated.</p>



<p>“Traditional methods of machine learning require a centralized database where patient data can be directly accessed for training a machine learning model,” said Stephen Baek, assistant professor of industrial and systems engineering at UI.</p>



<p>“Such methods are impacted by practical issues such as patient privacy, information security, data ownership, and the burden on hospitals which must create and maintain these centralized databases.”</p>



<p>The team will develop a decentralized, asynchronous solution called ImagiQ, which relies on an ecosystem of machine learning models so that institutions can select models that work best for their populations. Organizations will be able to upload and share the models, not patient data, with each other.</p>



<p>As each institution improves the model using its local patient data sets, models will be uploaded back to a centralized server. This ensemble learning approach will allow the most reliable and efficient models to come to the forefront, resulting in a better AI system for analyzing images such as lung X-rays or CT scans used to detect tumors.</p>
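<p>As a rough sketch of the model-sharing pattern described above — institutions upload trained models, never patient data, and an ensemble retains the best performers — one might write something like the following. All names, numbers, and the toy linear "models" are invented for illustration; this is not the actual ImagiQ code.</p>

```python
# Hypothetical sketch of ensemble model sharing: sites upload models
# (never patient data); low performers are phased out over time.
from dataclasses import dataclass, field

@dataclass
class ModelUpdate:
    institution: str
    weights: list        # stand-in for real model parameters (slope, intercept)
    validation_score: float  # measured on the uploader's local data

@dataclass
class EnsembleServer:
    capacity: int = 3    # keep only the top-N models
    models: list = field(default_factory=list)

    def upload(self, update: ModelUpdate) -> None:
        """Accept a model asynchronously; keep only the best performers."""
        self.models.append(update)
        self.models.sort(key=lambda m: m.validation_score, reverse=True)
        del self.models[self.capacity:]

    def predict(self, x: float) -> float:
        """Average the surviving models' outputs (toy linear models)."""
        outputs = [m.weights[0] * x + m.weights[1] for m in self.models]
        return sum(outputs) / len(outputs)

server = EnsembleServer()
server.upload(ModelUpdate("hospital_a", [2.0, 0.0], 0.91))
server.upload(ModelUpdate("hospital_b", [1.8, 0.2], 0.84))
server.upload(ModelUpdate("hospital_c", [2.1, -0.1], 0.95))
server.upload(ModelUpdate("hospital_d", [0.5, 3.0], 0.40))  # phased out
```

<p>Only model parameters and a validation score cross institutional boundaries here, which is the privacy property the article attributes to the approach.</p>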



<p>The UI-led team includes researchers from Stanford University, the University of Chicago, Harvard University, Yale University, and Seoul National University.</p>



<p>Over the next nine months, the group will aim to develop a prototype of the system as well as participate in the Accelerator’s innovation curriculum to ensure the solution has societal impact. By the end of phase one, the team will participate in a pitch competition and proposal evaluation and if selected will proceed to phase two, with potential funding up to $5 million for 24 months.</p>



<p>“ImagiQ will further federated learning by decentralizing the model updates and eliminating the synchronous update cycle,” said Baek. “We are going to create a whole ecosystem of machine learning models that will evolve and improve over time. High performing models will be selected by many institutions, while others are phased out, producing more reliable and trustworthy outputs.”</p>



<p>The research team is part of the AI-driven data and model sharing track topic under the 2020 cohort NSF Convergence Accelerator program, designed to leverage a convergence approach to transition basic research and discovery into practice. NSF is investing more than $27 million to support the teams in phase one to develop the solution groundwork for AI-Driven Data and Model Sharing.</p>



<p>The Convergence Accelerator’s AI-Driven Innovation via Data and Model Sharing topic involves 18 teams concentrating on solution development. These teams will also address a variety of data- and model-related challenges and data types, including platform development to enable easy and efficient data matching and sharing.</p>



<p>“The quantum technology and AI-driven data and model sharing topics were chosen based on community input and identified federal research and development priorities,” said Douglas Maughan, head of the NSF Convergence Accelerator program. “This is the program&#8217;s second cohort and we are excited for these teams to use convergence research and innovation-centric fundamentals to accelerate solutions that have a positive societal impact.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/">Collaboration Will Offer Data to Train Machine Learning Tools</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning Tools Can Kickstart Cancer Radiation Therapy</title>
		<link>https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 29 Jan 2020 07:59:07 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Imaging Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Medical Research]]></category>
		<category><![CDATA[Preventive Care]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6436</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com January 28, 2020 &#8211; New research from UT Southwestern has shown that deep learning technology could help providers quickly develop optimal treatment plans for cancer patients, decreasing the odds that the disease will spread. Patients usually have to wait several days to a week to begin therapy while doctors manually develop treatment plans, which can be a <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/">Deep Learning Tools Can Kickstart Cancer Radiation Therapy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>January 28, 2020 &#8211; New research from UT Southwestern has shown that deep learning technology could help providers quickly develop optimal treatment plans for cancer patients, decreasing the odds that the disease will spread.</p>



<p>Patients usually have to wait several days to a week to begin therapy while doctors manually develop treatment plans, which can be a tedious, time-consuming process. Providers must carefully review a patient’s imaging data and conduct several phases of feedback within the medical team.</p>



<p>Delaying radiation therapy for even a week can increase the chance of some cancers recurring or spreading by 12 to 14 percent, researchers noted.</p>



<p>“Some of these patients need radiation therapy immediately, but doctors often have to tell them to go home and wait,” said&nbsp;Steve Jiang, PhD, who directs UT&nbsp;Southwestern’s&nbsp;Medical Artificial Intelligence and Automation (MAIA) Lab. “Achieving optimal treatment plans in near real time is important and part of our broader mission to use AI to improve all aspects of cancer care.”</p>



<p>The team explored how AI and deep learning tools could improve multiple aspects of radiation therapy, from initial dosage plans required before the treatment can begin, to the dose recalculations that occur as the plan progresses.</p>



<p>Researchers used data from 70 prostate cancer patients to train four deep learning models. The tools learned to develop 3D renderings of how to best distribute the radiation in each patient.</p>



<p>Each model accurately predicted the treatment plans developed by the medical team, and the technology was able to produce optimal treatment plans within five-hundredths of a second after receiving clinical data for patients.</p>



<p>“Our AI can cut out much of the back and forth that happens between the doctor and the dosage planner,” Jiang said. “This improves the efficiency dramatically.”</p>



<p>Jiang also led a second study that showed how AI can quickly and accurately recalculate dosages before each radiation session, taking into account how a patient’s anatomy may have changed since the last therapy. A traditional, accurate recalculation can require patients to wait up to ten minutes or more, in addition to the time needed to conduct anatomy imaging before each session.</p>



<p>Jiang and his team developed an AI model that combined two conventional models used for dose calculation: a simple, fast model that lacked accuracy, and a complex one that was accurate but required more time.</p>



<p>The newly developed AI technology assessed the differences between the models, and learned how to utilize both speed and accuracy to produce calculations within one second.</p>
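<p>The two-model combination described above can be illustrated with a toy sketch: a fast but biased estimator is corrected toward the slow, accurate model using a learned residual. The functions and numbers below are invented for illustration and are not the study's actual method.</p>

```python
# Toy sketch: combine a fast, coarse dose estimate with a learned
# correction toward the slow, accurate model. Purely illustrative.
fast = lambda x: 1.9 * x             # quick, systematically biased estimate
accurate = lambda x: 2.0 * x + 0.5   # slow "ground truth" model

# "Training": estimate the systematic difference on a few sample points,
# then fit residual = a*x + b from the endpoints.
samples = [1.0, 2.0, 3.0]
residuals = [accurate(x) - fast(x) for x in samples]
a = (residuals[-1] - residuals[0]) / (samples[-1] - samples[0])
b = residuals[0] - a * samples[0]

# At inference time, only the fast model plus a cheap correction runs.
corrected = lambda x: fast(x) + a * x + b
```

<p>In this toy case the residual is exactly linear, so the correction recovers the accurate model at the fast model's cost; in practice the learned correction is only approximate.</p>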



<p>UT Southwestern plans to use these new deep learning and AI capabilities in clinical care after implementing a patient interface. The MAIA Lab is also currently developing deep learning tools for several other purposes, including enhanced medical imaging and image processing, automated medical procedures, and improved disease diagnosis and outcome prediction.</p>



<p>Researchers have taken an interest in using AI to improve radiation therapy for patients. A team from the University of Texas MD Anderson Cancer Center recently developed a machine learning tool that could accurately predict two of the most challenging side effects of radiation therapy for patients with head and neck cancers: significant weight loss or the need for a feeding tube.</p>



<p>The technology could help providers deliver more proactive care for patients with cancer.</p>



<p>“Being able to identify which patients are at greatest risk would allow radiation oncologists to take steps to prevent or mitigate these possible side effects,” said Jay Reddy, MD, PhD, an assistant professor of radiation oncology at The University of Texas MD Anderson Cancer Center and lead author on the study.&nbsp;</p>



<p>“If the patient has an intermediate risk, and they might get through treatment without needing a feeding tube, we could take precautions such as setting them up with a nutritionist and providing them with nutritional supplements. If we know their risk for feeding tube placement is extremely high, we could place it ahead of time so they wouldn’t have to be admitted to the hospital after treatment. We’d know to keep a closer eye on that patient.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/">Deep Learning Tools Can Kickstart Cancer Radiation Therapy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine Learning Tool Accurately Diagnoses Esophageal Cancer</title>
		<link>https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 09 Nov 2019 08:05:21 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Imaging Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Medical Research]]></category>
		<category><![CDATA[Predictive Analytics]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5076</guid>

					<description><![CDATA[<p>Source: dqindia.com November 08, 2019 &#8211; Machine learning methods could accurately identify cancerous esophagus tissue on microscopy images without the time-consuming manual data input that is required for current methods, according to a study published in JAMA Network Open. Researchers at Dartmouth and Dartmouth-Hitchcock Norris Cotton Cancer Center have developed an innovative machine learning approach that automatically learns clinically important <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/">Machine Learning Tool Accurately Diagnoses Esophageal Cancer</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: dqindia.com</p>



<p>November 08, 2019 &#8211; Machine learning methods could accurately identify cancerous esophagus tissue on microscopy images without the time-consuming manual data input that is required for current methods, according to a study published in <em>JAMA Network Open</em>.</p>



<p>Researchers at Dartmouth and Dartmouth-Hitchcock Norris Cotton Cancer Center have developed an innovative machine learning approach that automatically learns clinically important regions on whole-slide images to classify them.</p>



<p>Histopathology image analysis requires a manual annotation process that outlines the regions of interest on a high-resolution whole slide image to train the computer model. Although the method is advanced, the process is still tedious.</p>



<p>“Data annotation is the most time-consuming and laborious bottleneck in developing modern deep learning methods,” said Saeed Hassanpour, PhD, lead author of the study.</p>



<p>“Our study shows that deep learning models for histopathology slides analysis can be trained with labels only at the tissue level, thus removing the need for high-cost data annotation and creating new opportunities for expanding the application of deep learning in digital pathology.”</p>



<p>The team tested their method for identifying cancerous and precancerous esophagus tissue on high-resolution microscopy images without training on region-of-interest annotations. Researchers then applied the network to Barrett esophagus and esophageal adenocarcinoma detection and found that their method achieved better results than the traditional method.</p>



<p>“Our new approach outperformed the current state-of-the-art approach that requires these detailed annotations for its training,” said Hassanpour.</p>



<p>“The result is significant because our method is based solely on tissue-level annotations, unlike existing methods that are based on manually annotated regions.”</p>



<p>Machine learning technology has consistently demonstrated its potential to improve diagnostics and care management. Recently, a team of researchers used machine learning tools to accurately predict patients with cancer who were at high risk of six-month mortality, which could help clinicians engage in timely conversations with their patients.</p>



<p>“Our findings demonstrated that machine learning algorithms can predict a patient’s risk of short-term mortality with good discrimination and PPV. Such a tool could be very useful in aiding clinicians’ risk assessments for patients with cancer as well as serving as a point-of-care prompt to consider discussions about goals and end-of-life preferences,” the researchers stated.</p>



<p>“Machine learning algorithms can be relatively easily retrained to account for emerging cancer survival patterns. As computational capacity and the availability of structured genetic and molecular information increase, we expect that predictive performance will increase and there may be a further impetus to implement similar tools in practice.”</p>



<p>The research team on the esophageal study believes that this new machine learning approach could improve cancer diagnosis and care.</p>



<p>“Our method would facilitate a more extensive range of research on analyzing histopathology images that were previously not possible due to the lack of detailed annotations,” Hassanpour concluded.</p>



<p>“Clinical deployment of such systems could assist pathologists in reading histopathology slides more accurately and efficiently, which is critical for diagnosing cancer, predicting prognosis, and treating cancer patients.”</p>



<p>In future work, the team plans to further validate the model by testing it on data from other institutions and running prospective clinical trials. Additionally, the group will apply the method to histological images of other types of tumors and lesions that have limited training data.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/">Machine Learning Tool Accurately Diagnoses Esophageal Cancer</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Preparing for the Artificial Intelligence Explosion at RSNA 2019</title>
		<link>https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/</link>
					<comments>https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 05 Nov 2019 10:31:51 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Imaging Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Natural language processing]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5007</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com November 04, 2019&#160;&#8211;&#160;What do the following numbers have to do with the annual meeting of the Radiological Society of North America: 2, 12, 32, 271, and 308? They refer to the presence of “artificial intelligence” at the show from 2015 to 2019, in that order. RNSA sees tremendous potential in the application of <a class="read-more-link" href="https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/">Preparing for the Artificial Intelligence Explosion at RSNA 2019</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>November 04, 2019&nbsp;&#8211;&nbsp;What do the following numbers have to do with the annual meeting of the Radiological Society of North America: 2, 12, 32, 271, and 308? They refer to the presence of “artificial intelligence” at the show from 2015 to 2019, in that order.</p>



<p>RSNA sees tremendous potential in the application of AI and its various permutations to the work of radiologists across the continent — a significant shift from the initial belief that AI would make radiologists redundant.</p>



<p>The society has gone so far as to stand up an expanded AI showcase for this year’s show, which takes place December 1 through 6 at McCormick Place in Chicago.</p>



<p>“Many RSNA meeting attendees seek out AI subject matter. Creating an encompassing showcase on artificial intelligence for exhibitors, educators and researchers will create a dynamic environment for our attendees,” said Steve Drew, RSNA Assistant Executive Director of Scientific Assembly, Informatics and Corporate Relations in a July announcement.</p>



<p>“High interest by commercial companies and meeting attendees led to this exciting development,” added John Jaworski, CEM, Director: Meetings and Exhibition Services of RSNA. “We now have more than 100 AI Showcase companies participating—which is up 25 percent over 2018’s final showcase figures—and the AI Theater, Deep Learning Classroom and Hands-on Classroom will provide various educational opportunities on artificial intelligence within the Showcase.”</p>



<p>Given this explosion of AI at RSNA’s annual event, attendees must know the terms that will be thrown around and be able to differentiate between hype and reality. So here’s a primer for you, dear reader.</p>



<p><strong>Getting Conversant in AI</strong></p>



<p>AI is often seen as the silver bullet to healthcare’s many problems. It holds the promise of detecting diseases earlier and with more accuracy, standardizing clinical processes, and eliminating scheduling and paperwork. Ultimately, integrating artificial intelligence into clinical workflows can help ease provider burnout and improve patient outcomes.</p>



<p>Since 2016 alone, the FDA has approved 38 artificial intelligence algorithms for clinical use. Nearly half of these apply to radiology practice, the field most quickly adopting AI. Images and image reads easily lend themselves to interpretation by artificial intelligence.</p>



<p>Radiology is replete with studies demonstrating how algorithms and machine learning models outperform providers in detecting, characterizing, and monitoring disease. Many predict that artificial intelligence will continue to improve, eventually exceeding humans in certain, more complex tasks.</p>



<p>Many radiologists fear that the widespread use of AI will result in machines replacing their jobs. However, artificial intelligence should be a supplement to the traditional workflow of providers, complementing their work rather than eliminating it.</p>



<p>For radiologists to confidently implement artificial intelligence into clinical workflow, they must understand the different types of artificial intelligence and how these methods can be leveraged in radiology practice, dispelling false assumptions and reducing hesitancy toward adoption.&nbsp;</p>



<p><strong>Natural Language Processing</strong></p>



<p>Natural language processing (NLP) is one branch of artificial intelligence that allows computers to understand and interpret language. The technology can comb through reports, interpret spoken language, and generate structured text from free text.</p>



<p>A systematic review of NLP in radiology practice identified dozens of natural language processing methodologies applicable to clinical practice. Results demonstrated how the technology can be used for diagnostic surveillance, quality assessment, clinical support services, and cohort building for epidemiological studies.</p>



<p>All four of these aspects of care will help improve provider efficiency and care quality. Diagnostic surveillance allows the machine to alert providers when items have not been acted on, promoting efficient care and quick referral. The ability of natural language processing to transform free text into structured text can eliminate the administrative burden on providers, automating routine data entry and improving clinical workflow. Building a cohort allows researchers to quickly identify individuals for studies or allow providers to identify high-risk groups sooner.</p>
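<p>The free-text-to-structured-text step mentioned above can be sketched with simple pattern matching over a toy radiology report. The report text and field names below are invented; real clinical NLP systems are far more sophisticated than regular expressions.</p>

```python
# Hypothetical illustration: extract structured fields from a toy
# free-text radiology report using pattern matching.
import re

report = """EXAM: Chest X-ray
FINDINGS: No acute infiltrate. Mild cardiomegaly.
IMPRESSION: No evidence of pneumonia."""

def structure(text):
    """Return a dict of section name -> normalized section text."""
    fields = {}
    for key in ("EXAM", "FINDINGS", "IMPRESSION"):
        # Capture from "KEY:" up to the next section header or end of text.
        m = re.search(rf"^{key}:\s*(.+?)(?=^\w+:|\Z)", text,
                      re.MULTILINE | re.DOTALL)
        if m:
            fields[key.lower()] = " ".join(m.group(1).split())
    return fields

record = structure(report)
```

<p>Once the report is structured, downstream systems can index, query, and audit it automatically — the administrative-burden reduction the article describes.</p>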



<p><strong>Machine Learning</strong></p>



<p>Another branch of artificial intelligence is machine learning. In this model, algorithms learn from a data set how to solve a specific task. Data is fed into the system, the machine learns from it, and it uses that data to predict a desired outcome (e.g., the risk of contracting a disease). Rather than being programmed to produce a specific result from a data set, the machine learns to predict outcomes by using patterns in the data to identify which variables most influence the result.</p>
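<p>That supervised learning loop — fit a model to labeled examples, then predict outcomes for new cases — can be sketched with a deliberately tiny learner. The biomarker values and labels below are invented for illustration, not drawn from any study.</p>

```python
# Toy supervised learning: find the single cut-point on one feature
# that best separates the two classes (a minimal stand-in for training).
def fit_threshold(examples):
    best_t, best_acc = None, -1.0
    for t in sorted(x for x, _ in examples):
        # Accuracy of the rule "predict disease when feature >= t"
        acc = sum((x >= t) == y for x, y in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# (feature value, has_disease) pairs -- e.g. a hypothetical biomarker level
train = [(0.2, False), (0.4, False), (0.6, True), (0.9, True)]
t = fit_threshold(train)
predict = lambda x: x >= t
```

<p>Nothing in the rule was hand-coded: the cut-point came entirely from patterns in the labeled data, which is the distinction the paragraph above draws.</p>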



<p>In radiology practice, machine learning has a wide array of potential applications, as the sheer amount of data radiology generates is ripe for algorithm development. Machine learning processes can learn how to read and interpret a variety of medical images, including PET scans, MRIs, and CT scans. Quicker and more accurate reads of these images can identify disease faster and at an earlier stage.</p>



<p>Some studies indicate that machine learning can help improve overall workflow, communication, and patient safety if image read time is decreased and the quality of the image read is improved. Not only can this give providers more time to spend with patients instead of interpreting results, but it can also improve patient safety as more accurate reads will result in fewer false positive or false negative diagnoses.</p>



<p>Other research demonstrates how machine learning can help identify complex patterns in diagnosis. As a result, this artificial intelligence method can improve radiologists’ ability to make accurate decisions, identifying diseases more precisely and accurately.</p>



<p><strong>Deep Learning/Neural Networks</strong></p>



<p>Deep learning, often referred to as neural networks, is a type of machine learning in which the algorithm is trained using a complex, layered network of patterns similar to the brain’s neural network. The methodology has demonstrated high performance in identifying disease from imaging studies, taking the methods of machine learning one step further. Rather than learning from a set of hand-engineered inputs supplied by the algorithm developer, the algorithm learns features directly from the data. It is a more advanced kind of machine learning that requires large, standardized datasets to train the algorithm, as the machine must learn where to identify irregularities in images.</p>
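<p>The layered structure that gives deep learning its name can be shown with a minimal forward pass: each layer transforms its input and hands the result to the next. The weights below are arbitrary toy numbers; real imaging models are vastly larger (typically convolutional networks).</p>

```python
# Minimal two-layer forward pass: 3 input features -> 2 hidden units -> 1 score.
def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, weights, bias):
    """One fully connected layer: output_j = sum_i v_i * w[i][j] + b[j]."""
    return [sum(vi * w_row[j] for vi, w_row in zip(v, weights)) + bias[j]
            for j in range(len(bias))]

w1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.2]]  # toy layer-1 weights
b1 = [0.0, 0.1]
w2 = [[1.0], [0.5]]                          # toy layer-2 weights
b2 = [-0.2]

x = [1.0, 2.0, 0.5]
hidden = relu(dense(x, w1, b1))
score = dense(hidden, w2, b2)[0]
```

<p>Training consists of adjusting the weight matrices from data rather than setting them by hand, which is why large, standardized datasets are needed.</p>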



<p>With obvious applicability to radiology practice, research demonstrates deep learning models can be particularly useful in screening images or early-stage identification.</p>



<p>Deep learning algorithms, though, are at risk of the ‘black box’ problem if their neural networks are not well understood. ‘Black box’ AI is an algorithm deployed without an understanding of how the machine generated its output. Thus, many providers are uneasy trusting diagnostic and treatment decisions to an algorithm they do not understand.</p>



<p>If deep learning methods are to be more widely and confidently utilized in radiology practice, their interpretability will need to improve, and their methods must be clearly laid out. As with all artificial intelligence methodologies, the higher the quality of the data used to build the algorithm, the more accurate and more trusted the results will be.</p>
<p>The post <a href="https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/">Preparing for the Artificial Intelligence Explosion at RSNA 2019</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Medical Imaging, Machine Learning to Align in 10 Key Areas</title>
		<link>https://www.aiuniverse.xyz/medical-imaging-machine-learning-to-align-in-10-key-areas/</link>
					<comments>https://www.aiuniverse.xyz/medical-imaging-machine-learning-to-align-in-10-key-areas/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 12 Feb 2019 06:43:52 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Healthcare]]></category>
		<category><![CDATA[Imaging Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3327</guid>

					<description><![CDATA[<p>Source- healthitanalytics.com Medical imaging and machine learning are on a collision course that promises significant advancements in diagnostics and precision medicine, according to a new report from Frost &#38; Sullivan. Advances in artificial intelligence and powerful computing technologies to support highly detailed imaging studies are driving opportunities for vendors and providers to capture a segment of a quickly <a class="read-more-link" href="https://www.aiuniverse.xyz/medical-imaging-machine-learning-to-align-in-10-key-areas/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/medical-imaging-machine-learning-to-align-in-10-key-areas/">Medical Imaging, Machine Learning to Align in 10 Key Areas</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source- <a href="https://healthitanalytics.com/news/medical-imaging-machine-learning-to-align-in-10-key-areas" target="_blank" rel="noopener">healthitanalytics.com</a></p>
<p>Medical imaging and machine learning are on a collision course that promises significant advancements in diagnostics and precision medicine, according to a new report from Frost &amp; Sullivan.</p>
<p>Advances in artificial intelligence and powerful computing technologies to support highly detailed imaging studies are driving opportunities for vendors and providers to capture a segment of a quickly growing market.</p>
<p>The firm predicts that the precision medical imaging market, worth $120 million in 2017, will explode into an $8 billion opportunity by 2027.</p>
<p>“Precision medical imaging has tremendous potential to improve all aspects of the care continuum, thus supporting emerging care approaches that are more targeted, predictive, translational, personalized and effective,” said Siddharth Saha, Vice President of Research, Transformational Health.</p>
<p>“AI-enriched imaging equipment will help adapt and personalize imaging protocols and procedures, while precise radiomic and phenomic datasets from the given clinical context will enable deep learning, thereby reinforcing medical imaging&#8217;s contribution to precision medicine. Several firms in the ecosystem are making very valuable contributions to the care pathways, and this pool is set to grow exponentially in the short term.”</p>
<p>The report identifies ten areas in which medical imaging and machine learning will combine to bring greater efficiencies, more precise diagnoses, and innovative treatment options for patients.</p>
<p><strong>EVIDENCE-BASED STUDY ORDERING</strong></p>
<p>Artificial intelligence can analyze massive volumes of clinical and imaging data to uncover patterns in the interplay between testing and long-term outcomes.  By generating evidence for when testing is needed and when it can be avoided, AI tools can not only create guidelines for ordering but can also reduce the billions spent each year on wasteful, low-value testing.</p>
<p><strong>PERSONALIZED IMAGING ACQUISITION PROTOCOLS</strong></p>
<p>Capturing clear and complete images of physical structures can be challenging for certain populations, including children, the obese, and individuals with physical impairments as well as those with anxiety, dementia, or claustrophobia.</p>
<p>Advanced imaging techniques and personalized protocols for imaging acquisition, supported by machine learning, can ensure that providers can reduce patient stress while still capturing the necessary data for diagnostics and care.</p>
<p><strong>ADAPTIVE MACHINE INTELLIGENCE</strong></p>
<p>Embedding intelligence into the imaging scanner itself can help to tailor imaging studies to the needs of the individual patient.  A collaboration between UC San Francisco and GE Healthcare, for example, is working to embed an AI-driven “library” of abnormal scans directly in its imaging machines to identify pneumothorax in trauma patients as quickly as possible.</p>
<p><strong>PRECISION REPORTING AND INTERPRETATION</strong></p>
<p>Correlating imaging studies with the latest research and studies can improve the accuracy of interpretation and connect patients with optimal treatment paths.</p>
<p>Using AI to comb through millions of pages of academic literature to present decision support for providers can ensure informed decision-making.</p>
<p><strong>ADVANCING RADIOGENOMICS</strong></p>
<p>Radiogenomics is a relatively new field that aims to correlate imaging studies with gene expression, particularly for cancer patients. By combining imaging studies with genomic data, providers may be able to identify cancers with much greater accuracy and offer personalized treatment options to patients based on their genetic and clinical data.</p>
<p><strong>SUPPORTING 3D PRINTING OF IMPLANTS AND ANATOMICAL GUIDES</strong></p>
<p>3D printing can now produce highly tailored implants for patients as well as reproduce complex structures in the body so surgeons can examine them in detail or rehearse a delicate surgery.  The more sophisticated the imaging, the more accurate and detailed the 3D reproduction can be.</p>
<p><strong>IMAGE-GUIDED INTERVENTIONS</strong></p>
<p>Interventional oncology, external beam radiotherapy, and focused ultrasound are non-invasive techniques that require accurate imaging tools to provide guidance to clinicians.  AI can support real-time imaging that enables the delivery of these therapies while protecting surrounding healthy structures from exposure.</p>
<p><strong>PRECISE ONCOLOGIC RADIATION THERAPY</strong></p>
<p>Correctly calculating the optimal dose of radiation therapy for cancer patients can be challenging for providers.  Machine learning tools can offer clinical decision support that accurately calculates dosages and plans therapies to ensure patients are receiving the right amount of treatment for their needs.</p>
<p><strong>MOLECULAR IMAGING OF THERANOSTIC RADIOTRACERS</strong></p>
<p>Theranostics is the combination of diagnostic and therapeutic tools into a single agent, while radiotracers are chemicals with radioactive components whose rate of decay can be tracked to monitor chemical reactions.  Radiotracers can be used to observe the metabolism of substances or the behavior of biological processes, giving researchers insight into the behavior of a drug, for example.</p>
<p>Imaging tools that can monitor theranostic radiotracers on a molecular level require AI to analyze the huge volumes of resulting data.  This area of exploration is likely to see significant growth in the next few years.</p>
<p><strong>UNDERSTANDING VALUE, QUALITY, AND OUTCOMES</strong></p>
<p>Precision therapies are intended to produce better outcomes at lower costs, but many organizations are currently struggling to generate business intelligence that would allow them to understand how their actions are affecting their bottom lines.  Artificial intelligence will play a key role in moving basic operational and financial analytics into a new realm of comprehensive insights.</p>
<p>As a result, organizations will be able to invest wisely in new products from vendors seeking to capitalize on the imaging analytics space.</p>
<p>But the ability to successfully combine machine learning with advanced imaging techniques is currently uneven across the market, Saha observed.</p>
<p>“While most major imaging companies are keen to make the most of the opportunities in precision imaging, they are at various levels of adoption. For instance, Siemens Healthineers has fully embraced the precision trend since it offers multi-pronged value through its solutions portfolio,” he said.</p>
<p>“At Philips Healthcare, a few precision hot spots have been forming, notably in image-guided therapies and oncology informatics. GE Healthcare, on the other hand, is looking to combine the precision paradigm with applied intelligence.”</p>
<p>Organizations looking to invest in AI-driven imaging technologies should carefully assess both their internal needs and the proffered capabilities of new products to ensure that they are partnering with a vendor that best suits their goals.</p>
<p>The post <a href="https://www.aiuniverse.xyz/medical-imaging-machine-learning-to-align-in-10-key-areas/">Medical Imaging, Machine Learning to Align in 10 Key Areas</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/medical-imaging-machine-learning-to-align-in-10-key-areas/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
