<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Medical Imaging Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/medical-imaging/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/medical-imaging/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 04 Jul 2024 14:41:43 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>How can generative AI be integrated with other AI models and applications?</title>
		<link>https://www.aiuniverse.xyz/how-can-generative-ai-be-integrated-with-other-ai-models-and-applications/</link>
					<comments>https://www.aiuniverse.xyz/how-can-generative-ai-be-integrated-with-other-ai-models-and-applications/#respond</comments>
		
		<dc:creator><![CDATA[Maruti Kr.]]></dc:creator>
		<pubDate>Thu, 04 Jul 2024 14:41:42 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Anomaly Detection]]></category>
		<category><![CDATA[chatbots]]></category>
		<category><![CDATA[Content Creation]]></category>
		<category><![CDATA[Data Augmentation]]></category>
		<category><![CDATA[Financial Forecasting]]></category>
		<category><![CDATA[Fraud Detection]]></category>
		<category><![CDATA[game development]]></category>
		<category><![CDATA[Human-Robot Interaction]]></category>
		<category><![CDATA[Image generation]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[natural language processing (NLP)]]></category>
		<category><![CDATA[Personalized Learning]]></category>
		<category><![CDATA[Personalized Medicine]]></category>
		<category><![CDATA[Recommendation Systems]]></category>
		<category><![CDATA[virtual assistants]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=18963</guid>

					<description><![CDATA[<p>Integrating generative AI with other AI models and applications can enhance their capabilities and create more comprehensive and effective solutions. Here are several ways this integration can <a class="read-more-link" href="https://www.aiuniverse.xyz/how-can-generative-ai-be-integrated-with-other-ai-models-and-applications/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-can-generative-ai-be-integrated-with-other-ai-models-and-applications/">How can generative AI be integrated with other AI models and applications?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="585" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/07/DALL·E-2024-07-04-20.05.42-An-illustration-showing-the-integration-of-generative-AI-with-various-AI-applications.-The-central-element-is-a-generative-AI-model-represented-as-a--1024x585.webp" alt="" class="wp-image-18964" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/07/DALL·E-2024-07-04-20.05.42-An-illustration-showing-the-integration-of-generative-AI-with-various-AI-applications.-The-central-element-is-a-generative-AI-model-represented-as-a--1024x585.webp 1024w, https://www.aiuniverse.xyz/wp-content/uploads/2024/07/DALL·E-2024-07-04-20.05.42-An-illustration-showing-the-integration-of-generative-AI-with-various-AI-applications.-The-central-element-is-a-generative-AI-model-represented-as-a--300x171.webp 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/07/DALL·E-2024-07-04-20.05.42-An-illustration-showing-the-integration-of-generative-AI-with-various-AI-applications.-The-central-element-is-a-generative-AI-model-represented-as-a--768x439.webp 768w, https://www.aiuniverse.xyz/wp-content/uploads/2024/07/DALL·E-2024-07-04-20.05.42-An-illustration-showing-the-integration-of-generative-AI-with-various-AI-applications.-The-central-element-is-a-generative-AI-model-represented-as-a--1536x878.webp 1536w, https://www.aiuniverse.xyz/wp-content/uploads/2024/07/DALL·E-2024-07-04-20.05.42-An-illustration-showing-the-integration-of-generative-AI-with-various-AI-applications.-The-central-element-is-a-generative-AI-model-represented-as-a-.webp 1792w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Integrating generative AI with other AI models and applications can enhance their capabilities and create more comprehensive and effective solutions. Here are several ways this integration can be achieved:</p>



<ol class="wp-block-list">
<li><strong>Natural Language Processing (NLP):</strong></li>
</ol>



<ul class="wp-block-list">
<li><strong>Chatbots and Virtual Assistants:</strong> Integrating generative AI can produce more human-like and contextually aware responses, improving user interaction and satisfaction.</li>



<li><strong>Text Summarization and Translation:</strong> Combining generative AI with existing NLP models can improve the accuracy and fluency of summaries and translations.</li>
</ul>



<p>2. <strong>Computer Vision:</strong></p>



<ul class="wp-block-list">
<li><strong>Image Generation and Enhancement:</strong> Generative AI can be used for creating high-quality images from text descriptions, improving image resolution, and filling in missing parts of images.</li>



<li><strong>Object Detection and Recognition:</strong> Integrating generative models can help in generating synthetic data to train and enhance object detection models.</li>
</ul>



<p>3. <strong>Healthcare:</strong></p>



<ul class="wp-block-list">
<li><strong>Medical Imaging:</strong> Generative AI can enhance medical images, assist in creating synthetic medical data for training purposes, and improve diagnostics by integrating with existing imaging analysis models.</li>



<li><strong>Personalized Medicine:</strong> By generating patient-specific simulations and treatment plans, generative AI can assist in precision medicine efforts.</li>
</ul>



<p>4. <strong>Finance:</strong></p>



<ul class="wp-block-list">
<li><strong>Fraud Detection:</strong> Generative models can simulate fraudulent transactions to improve the training of detection algorithms.</li>



<li><strong>Financial Forecasting:</strong> Integrating generative AI with predictive models can enhance scenario analysis and risk assessment.</li>
</ul>



<p>5. <strong>Entertainment and Media:</strong></p>



<ul class="wp-block-list">
<li><strong>Content Creation:</strong> Generative AI can assist in creating music, art, and writing, augmenting the creative process and providing new tools for artists.</li>



<li><strong>Game Development:</strong> It can be used to create characters, dialogues, and scenarios, enhancing the gaming experience.</li>
</ul>



<p>6. <strong>Education:</strong></p>



<ul class="wp-block-list">
<li><strong>Tutoring Systems:</strong> Combining generative AI with educational models can create personalized learning experiences, generating tailored content and feedback for students.</li>



<li><strong>Content Generation:</strong> Automating the creation of educational materials, such as quizzes and study guides, based on curriculum data.</li>
</ul>



<p>7. <strong>Robotics:</strong></p>



<ul class="wp-block-list">
<li><strong>Behavior Simulation:</strong> Generative AI can simulate various robotic behaviors in different scenarios, improving the robustness of robotic models.</li>



<li><strong>Human-Robot Interaction:</strong> Enhancing the interaction by generating more natural and context-aware responses from robots.</li>
</ul>



<p>8. <strong>Data Augmentation:</strong></p>



<ul class="wp-block-list">
<li><strong>Training Data Generation:</strong> Generative models can create synthetic data to augment training datasets, improving the performance of machine learning models.</li>



<li><strong>Anomaly Detection:</strong> Generating normal behavior patterns to help identify deviations and anomalies more effectively.</li>
</ul>



<p>9. <strong>Personalization and Recommendation Systems:</strong></p>



<ul class="wp-block-list">
<li><strong>Content Personalization:</strong> Generative AI can create personalized content recommendations based on user preferences and behavior.</li>



<li><strong>Dynamic User Interfaces:</strong> Generating adaptive and personalized user interfaces that change based on user interactions and preferences.</li>
</ul>



<p>Integrating generative AI with other AI models and applications requires careful consideration of data quality, model training, and ethical implications to ensure the effectiveness and reliability of the integrated solutions.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-can-generative-ai-be-integrated-with-other-ai-models-and-applications/">How can generative AI be integrated with other AI models and applications?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-can-generative-ai-be-integrated-with-other-ai-models-and-applications/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Applications of generative AI in various industries like healthcare, entertainment, and design?</title>
		<link>https://www.aiuniverse.xyz/applications-of-generative-ai-in-various-industries-like-healthcare-entertainment-and-design/</link>
					<comments>https://www.aiuniverse.xyz/applications-of-generative-ai-in-various-industries-like-healthcare-entertainment-and-design/#respond</comments>
		
		<dc:creator><![CDATA[Maruti Kr.]]></dc:creator>
		<pubDate>Sat, 15 Jun 2024 08:54:27 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[and design?]]></category>
		<category><![CDATA[Applications of generative AI in various industries like healthcare]]></category>
		<category><![CDATA[Architectural Design]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[Content Creation]]></category>
		<category><![CDATA[Drug Discovery]]></category>
		<category><![CDATA[entertainment]]></category>
		<category><![CDATA[Fashion Design]]></category>
		<category><![CDATA[Healthcare]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Personalized Medicine]]></category>
		<category><![CDATA[Simulation]]></category>
		<category><![CDATA[User Experience Design]]></category>
		<category><![CDATA[virtual reality]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=18906</guid>

					<description><![CDATA[<p>Generative AI, which refers to artificial intelligence systems that can generate new content based on learned patterns and data, has transformative potential across a wide range of <a class="read-more-link" href="https://www.aiuniverse.xyz/applications-of-generative-ai-in-various-industries-like-healthcare-entertainment-and-design/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/applications-of-generative-ai-in-various-industries-like-healthcare-entertainment-and-design/">Applications of generative AI in various industries like healthcare, entertainment, and design?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-full"><img decoding="async" width="880" height="470" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-5.png" alt="" class="wp-image-18907" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-5.png 880w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-5-300x160.png 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-5-768x410.png 768w" sizes="(max-width: 880px) 100vw, 880px" /></figure>



<p>Generative AI, which refers to artificial intelligence systems that can generate new content based on learned patterns and data, has transformative potential across a wide range of industries. Here’s a deeper look into how this technology can be applied in healthcare, entertainment, and design:</p>



<h2 class="wp-block-heading">Healthcare</h2>



<figure class="wp-block-image size-full is-resized"><img decoding="async" width="365" height="250" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-6.png" alt="" class="wp-image-18908" style="width:840px;height:auto" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-6.png 365w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-6-300x205.png 300w" sizes="(max-width: 365px) 100vw, 365px" /></figure>



<ol class="wp-block-list">
<li> <strong>Drug Discovery and Development</strong>:</li>
</ol>



<p>Generative AI can accelerate the drug discovery process by predicting molecular behavior and generating new compounds that might be effective against specific diseases. This reduces the time and cost associated with traditional drug discovery methods.</p>



<p><strong>2. Personalized Medicine</strong>:</p>



<p>AI models can generate personalized treatment plans by analyzing patient data, including genetic information, lifestyle, and previous health records. This can lead to more effective and tailored treatments for individual patients.</p>



<p><strong>3. Medical Imaging</strong>:</p>



<p>AI can enhance image analysis in radiology and pathology. Generative models can improve the clarity of medical images, generate synthetic data for training purposes, and even help in reconstructing missing or corrupted data.</p>



<p><strong>4. Prosthetics and Implants Design</strong>:</p>



<p>AI can assist in designing custom prosthetics and implants by generating models that perfectly fit the unique anatomical structure of patients. This can improve comfort and functionality for the user.</p>



<h2 class="wp-block-heading">Entertainment</h2>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1024" height="377" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-7.png" alt="" class="wp-image-18909" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-7.png 1024w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-7-300x110.png 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-7-768x283.png 768w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<ol class="wp-block-list">
<li><strong>Content Creation</strong>:</li>
</ol>



<p>In film, music, and gaming, generative AI can create new scripts, compose music, or develop new gaming environments and scenarios. This can lead to more innovative and engaging content.</p>



<p><strong>2. Virtual Reality and Augmented Reality</strong>:</p>



<p>AI can generate immersive environments that are indistinguishable from real life, enhancing the user experience in VR and AR applications. This technology can create dynamic scenarios that react to the user&#8217;s actions in real-time.</p>



<p><strong>3.</strong> <strong>Animation and Visual Effects</strong>:</p>



<p>Generative AI can automate part of the animation process, creating realistic and complex animations that would be time-consuming and costly to produce manually. It can also be used to enhance visual effects in movies and video games.</p>



<h2 class="wp-block-heading">Design</h2>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="865" height="435" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-8.png" alt="" class="wp-image-18910" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-8.png 865w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-8-300x151.png 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-8-768x386.png 768w" sizes="auto, (max-width: 865px) 100vw, 865px" /></figure>



<ol class="wp-block-list">
<li><strong>Architectural and Industrial Design</strong>:</li>
</ol>



<p>AI can help designers by generating multiple design alternatives based on specific criteria like space utilization, energy efficiency, or aesthetic preferences. This allows designers to explore more options and optimize designs more efficiently.</p>



<p><strong>2.</strong> <strong>Fashion and Textile Design</strong>:</p>



<p>In fashion, AI can predict trends and generate new designs based on past styles, current trends, and emerging preferences. It can also help in creating custom clothing by generating patterns and designs that fit individual customers.</p>



<p><strong>3. User Experience (UX) and Interface Design</strong>:</p>



<p>AI can generate design elements that are optimized for usability and aesthetic value, helping UX designers create more effective interfaces. It can also simulate user interactions to predict how changes to the design will impact user experience.</p>



<h2 class="wp-block-heading">Cross-Industry Applications</h2>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1024" height="614" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-9.png" alt="" class="wp-image-18911" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-9.png 1024w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-9-300x180.png 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/image-9-768x461.png 768w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<ol class="wp-block-list">
<li><strong>Automation of Creative Processes</strong>:</li>
</ol>



<p>Across industries, generative AI can automate repetitive and time-consuming tasks, allowing humans to focus on more strategic and creative aspects of their work.</p>



<p><strong>2.  Enhanced Decision Making</strong>:</p>



<p>By generating forecasts, scenarios, and models, AI can aid in complex decision-making processes, providing insights that might not be apparent through traditional methods.</p>



<p><strong>3.</strong> <strong>Training and Simulation</strong>:</p>



<p>Generative AI can create realistic scenarios for training purposes across various fields, from piloting aircraft to medical surgery simulations, enhancing the learning experience without the associated risks of real-world training.</p>



<p>Generative AI’s ability to analyze vast amounts of data and generate insightful outputs makes it a powerful tool in these and many other industries, potentially leading to innovations that can transform the way we live and work.</p>
<p>The post <a href="https://www.aiuniverse.xyz/applications-of-generative-ai-in-various-industries-like-healthcare-entertainment-and-design/">Applications of generative AI in various industries like healthcare, entertainment, and design?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/applications-of-generative-ai-in-various-industries-like-healthcare-entertainment-and-design/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AWS Partnership Advances Use of Machine Learning in Clinical Care</title>
		<link>https://www.aiuniverse.xyz/aws-partnership-advances-use-of-machine-learning-in-clinical-care/</link>
					<comments>https://www.aiuniverse.xyz/aws-partnership-advances-use-of-machine-learning-in-clinical-care/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 10 Oct 2020 05:20:47 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Amazon Web Services]]></category>
		<category><![CDATA[Chronic Disease Management]]></category>
		<category><![CDATA[Clinical Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12087</guid>

					<description><![CDATA[<p>Source: hitinfrastructure.com Two projects sponsored by Amazon Web Services (AWS) and the Pittsburgh Health Data Alliance (PHDA) have generated solid use cases for machine learning in clinical <a class="read-more-link" href="https://www.aiuniverse.xyz/aws-partnership-advances-use-of-machine-learning-in-clinical-care/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/aws-partnership-advances-use-of-machine-learning-in-clinical-care/">AWS Partnership Advances Use of Machine Learning in Clinical Care</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: hitinfrastructure.com</p>



<p>Two projects sponsored by Amazon Web Services (AWS) and the Pittsburgh Health Data Alliance (PHDA) have generated solid use cases for machine learning in clinical care.</p>



<p>Amazon Web Services (AWS) and the Pittsburgh Health Data Alliance (PHDA) collaborated in August 2019 to advance innovation in areas including cancer diagnostics, precision medicine, electronic health records, and medical imaging. </p>



<p>Through the collaboration, researchers from the University of Pittsburgh Medical Center (UPMC), the University of Pittsburgh, and Carnegie Mellon University (CMU) received support from Amazon Research Awards, on top of existing support from PHDA, to use machine learning to dive into various projects.</p>



<p>One of those projects examined machine learning techniques to help experts study breast cancer risk and understand what drives tumor growth. </p>



<p>Led by Shandong Wu, an associate professor at the University of Pittsburgh department of radiology, a research team analyzed 452 normal mammograms from 226 patients in order to predict the short-term risk of developing breast cancer.&nbsp;</p>



<p>Wu and his team, who included experts in computer vision, deep learning, bioinformatics, and breast cancer imaging, used two machine learning models and found that both consistently outperformed predictions based on breast density alone.</p>



<p>Specifically, the team’s model demonstrated between 33 percent and 35 percent improvement over the existing models, researchers highlighted.&nbsp;</p>



<p>“This preliminary work demonstrates the feasibility and promise of applying deep-learning methodologies for in-depth interpretation of mammogram images to enhance breast cancer risk assessment,” Wu said in the announcement.&nbsp;</p>



<p>“Identifying additional risk factors for breast cancer, including those that can lead to a more personalized approach to screening, may help patients and providers take more appropriate preventive measures to reduce the likelihood of developing the disease or catching it early on when interventions are most effective.”&nbsp;</p>



<p>Another project, led by Eva Szigethy, a clinical researcher at UPMC, and Louis-Philippe Morency, an associate professor of computer science at CMU, used machine learning to measure changes in an individual’s behavior to diagnose depression.</p>



<p>Their machine learning models are trained on tens of thousands of samples spanning language, acoustic, and visual modalities to identify biomarkers for depression. The biomarkers will be compared to results from traditional clinical assessments to determine the accuracy of the machine learning models in identifying depression.</p>



<p>“New insights to increase the accuracy, efficiency, and adoption of depression screening have the potential to impact millions of patients, their families, and the healthcare system as a whole,” Morency stated.&nbsp;</p>



<p>AWS and PHDA noted that the projects on breast cancer and depression are just the start when it comes to research collaboration to improve patient care.&nbsp;</p>



<p>Teams of researchers, healthcare professionals, and machine learning experts will continue to work to understand the risk of aneurysms, predict how cancer cells progress, and aim to improve the electronic health records system.&nbsp;</p>



<p>“Amazon is excited and encouraged by the progress these researchers are making and how machine learning is central to their work,” said An Luo, senior technical program manager for academic programs at Amazon AI.&nbsp;</p>



<p>“We look forward to continuing to share how this unique collaboration between the PHDA and AWS is enabling new discoveries to help patients on a global scale.”</p>



<p>For example, David Vorp, PhD, associate dean for research at the University of Pittsburgh, and his research team employed AWS cloud resources to improve the diagnosis and treatment of abdominal aortic aneurysms.</p>



<p>And a CMU research team led by Russell Schwartz, PhD, and Jian Ma, PhD, used machine learning to develop algorithms and software tools to better understand cell origin and evolution.&nbsp;</p>



<p>“With the latest advances in machine learning, we are developing an algorithm that will provide clinicians with an objective, predictive tool to guide surgical interventions before symptoms appear, improving patient outcomes,” Vorp said in the August announcement.</p>
<p>The post <a href="https://www.aiuniverse.xyz/aws-partnership-advances-use-of-machine-learning-in-clinical-care/">AWS Partnership Advances Use of Machine Learning in Clinical Care</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/aws-partnership-advances-use-of-machine-learning-in-clinical-care/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Collaboration Will Offer Data to Train Machine Learning Tools</title>
		<link>https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/</link>
					<comments>https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 29 Sep 2020 07:19:25 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[analytics technologies]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Imaging Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11834</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com Researchers at the University of Iowa (UI) have received a $1 million grant from the National Science Foundation (NSF) to develop a machine learning platform <a class="read-more-link" href="https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/">Collaboration Will Offer Data to Train Machine Learning Tools</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>Researchers at the University of Iowa (UI) have received a $1 million grant from the National Science Foundation (NSF) to develop a machine learning platform to train algorithms with data from around the world.</p>



<p>The phase one grant will enable the UI team to lead a multi-university and industry collaboration and address concerns around patient privacy and data security in clinical AI development.</p>



<p>The researchers noted that although the use of AI is widespread in healthcare, training effective machine learning algorithms requires thousands of samples annotated by doctors. This can lead to privacy and security issues, the team stated.</p>



<p>“Traditional methods of machine learning require a centralized database where patient data can be directly accessed for training a machine learning model,” said Stephen Baek, assistant professor of industrial and systems engineering at UI.</p>



<p>“Such methods are impacted by practical issues such as patient privacy, information security, data ownership, and the burden on hospitals which must create and maintain these centralized databases.”</p>



<p>The team will develop a decentralized, asynchronous solution called ImagiQ, which relies on an ecosystem of machine learning models so that institutions can select models that work best for their populations. Organizations will be able to upload and share the models, not patient data, with each other.</p>



<p>As each institution improves the model using its local patient data sets, models will be uploaded back to a centralized server. This ensemble learning approach will allow the most reliable and efficient models to come to the forefront, resulting in a better AI system for analyzing images such as lung X-rays or CT scans to detect tumors.</p>
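


<p>As a rough illustration of this model-sharing idea (illustrative only, not the ImagiQ implementation, and with hypothetical model names and validation scores), a participating site might rank the shared models on its own validation data and ensemble the best of them:</p>



<pre class="wp-block-code"><code>import numpy as np

# Illustrative sketch: institutions exchange trained models, never patient data.
# A site scores each shared model on its own validation set, keeps the top
# performers, and averages their predicted class probabilities for new cases.
def ensemble_predict(models, case, val_scores, top_k=3):
    ranked = sorted(zip(val_scores, models), key=lambda pair: pair[0], reverse=True)
    chosen = [model for _, model in ranked[:top_k]]
    probs = np.stack([model(case) for model in chosen])  # each returns class probabilities
    return probs.mean(axis=0)

# Hypothetical usage: three shared models for a binary tumor/no-tumor task.
shared_models = [
    lambda case: np.array([0.2, 0.8]),
    lambda case: np.array([0.4, 0.6]),
    lambda case: np.array([0.1, 0.9]),
]
print(ensemble_predict(shared_models, case=None, val_scores=[0.91, 0.84, 0.88], top_k=2))</code></pre>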



<p>The UI-led team includes researchers from Stanford University, the University of Chicago, Harvard University, Yale University, and Seoul National University.</p>



<p>Over the next nine months, the group will aim to develop a prototype of the system as well as participate in the Accelerator’s innovation curriculum to ensure the solution has societal impact. By the end of phase one, the team will participate in a pitch competition and proposal evaluation and, if selected, will proceed to phase two, with potential funding of up to $5 million for 24 months.</p>



<p>“ImagiQ will further federated learning by decentralizing the model updates and eliminating the synchronous update cycle,” said Baek. “We are going to create a whole ecosystem of machine learning models that will evolve and improve over time. High performing models will be selected by many institutions, while others are phased out, producing more reliable and trustworthy outputs.”</p>



<p>The research team is part of the AI-driven data and model sharing track topic under the 2020 cohort NSF Convergence Accelerator program, designed to leverage a convergence approach to transition basic research and discovery into practice. NSF is investing more than $27 million to support the teams in phase one to develop the solution groundwork for AI-Driven Data and Model Sharing.</p>



<p>The Convergence Accelerator’s AI-Driven Innovation via Data and Model Sharing topic involves 18 teams concentrating on solution development. These research teams will also address a variety of data- and model-related challenges and data types, including platform development to enable easy and efficient data matching and sharing.</p>



<p>“The quantum technology and AI-driven data and model sharing topics were chosen based on community input and identified federal research and development priorities,” said Douglas Maughan, head of the NSF Convergence Accelerator program. “This is the program&#8217;s second cohort and we are excited for these teams to use convergence research and innovation-centric fundamentals to accelerate solutions that have a positive societal impact.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/">Collaboration Will Offer Data to Train Machine Learning Tools</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/collaboration-will-offer-data-to-train-machine-learning-tools/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning Tool Accurately Selects High-Quality Embryos for IVF</title>
		<link>https://www.aiuniverse.xyz/deep-learning-tool-accurately-selects-high-quality-embryos-for-ivf/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-tool-accurately-selects-high-quality-embryos-for-ivf/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 17 Sep 2020 08:40:29 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Analytics]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Imaging]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11653</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com A deep learning system was able to choose the most high-quality embryos for in-vitro fertilization (IVF) with 90 percent accuracy, according to a study published in eLife. When <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-tool-accurately-selects-high-quality-embryos-for-ivf/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-tool-accurately-selects-high-quality-embryos-for-ivf/">Deep Learning Tool Accurately Selects High-Quality Embryos for IVF</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>A deep learning system was able to select the highest-quality embryos for in-vitro fertilization (IVF) with 90 percent accuracy, according to a study published in <em>eLife</em>.</p>



<p>When compared with trained embryologists, the deep learning model performed with an accuracy of approximately 75 percent, while the embryologists performed with an average accuracy of 67 percent.</p>



<p>The average success rate of IVF is 30 percent, researchers stated. The treatment is also expensive, costing patients over $10,000 for each IVF cycle, with many patients requiring multiple cycles to achieve a successful pregnancy.</p>



<p>While multiple factors determine the success of IVF cycles, the challenge of non-invasively selecting the highest-quality available embryos from a patient remains one of the most important factors in achieving successful IVF outcomes.</p>



<p>Currently, tools available to embryologists are limited and expensive, leaving most embryologists to rely on their observational skills and expertise. Researchers from Brigham and Women’s Hospital and Massachusetts General Hospital (MGH) set out to develop an assistive tool that can&nbsp;<a href="https://healthitanalytics.com/news/medical-imaging-machine-learning-to-align-in-10-key-areas">evaluate images</a>&nbsp;captured using microscopes traditionally available at fertility centers.</p>



<p>“There is so much at stake for our patients with each IVF cycle. Embryologists make dozens of critical decisions that impact the success of a patient cycle. With assistance from our AI system, embryologists will be able to select the embryo that will result in a successful pregnancy better than ever before,”&nbsp;<a href="https://www.eurekalert.org/pub_releases/2020-09/bawh-ais091520.php">said</a>&nbsp;co-lead author Charles Bormann, PhD, MGH IVF Laboratory director.</p>



<p>The team trained the deep learning system using images of embryos captured at 113 hours post-insemination. Among 742 embryos, the AI system was 90 percent accurate in identifying the highest-quality embryos.</p>



<p>The investigators further assessed the system’s ability to distinguish among high-quality embryos with the normal number of human chromosomes and compared the system’s performance to that of trained embryologists.</p>



<p>The results showed that the system was able to differentiate and identify embryos with the highest potential for success significantly better than 15 experienced embryologists from five different fertility centers across the US.</p>



<p>Researchers pointed out that in its current state, the deep learning system is meant to act&nbsp;<a href="https://healthitanalytics.com/news/artificial-intelligence-in-healthcare-augmentation-or-companionship">only as an assistive tool</a>&nbsp;for embryologists to make judgments during embryo selection.</p>



<p>“We believe that these systems will benefit clinical embryologists and patients,” said corresponding author&nbsp;Hadi Shafiee, PhD, of the Division of Engineering in Medicine at the Brigham. “A major challenge in the field is deciding on the embryos that need to be transferred during IVF. Our system has tremendous potential to improve clinical decision making and access to care.”</p>



<p>The team also stated that while the study demonstrates the potential for&nbsp;<a href="https://healthitanalytics.com/features/what-is-deep-learning-and-how-will-it-change-healthcare">deep learning</a>&nbsp;to outperform human clinicians, further research is needed before these tools can be deployed in regular clinical care.</p>



<p>“Advances in artificial intelligence have fostered numerous applications that have the potential to improve standard-of-care in the different fields of medicine. While other groups have also evaluated different use cases for machine learning in assisted reproductive medicine, this approach is novel in how it used a deep learning system trained on a large dataset to make predictions based on static images,” researchers said.</p>



<p>“Although the current retrospective study shows that these systems can perform better than highly-trained embryologists, randomized control trials are required before routine use in clinical practice is adopted.”</p>



<p>The findings offer hope for individuals seeking to undergo IVF, the group concluded.</p>



<p>“Our approach has shown the potential of AI systems to be used in aiding embryologists to select the embryo with the highest implantation potential, especially amongst high-quality embryos,” said Manoj Kumar Kanakasabapathy, one of the co-lead authors.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-tool-accurately-selects-high-quality-embryos-for-ivf/">Deep Learning Tool Accurately Selects High-Quality Embryos for IVF</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-tool-accurately-selects-high-quality-embryos-for-ivf/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning Tools Can Kickstart Cancer Radiation Therapy</title>
		<link>https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 29 Jan 2020 07:59:07 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Imaging Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Medical Research]]></category>
		<category><![CDATA[Preventive Care]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6436</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com January 28, 2020 &#8211; New research from UT Southwestern has shown that deep learning technology could help providers quickly develop optimal treatment plans for cancer patients, decreasing the odds <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/">Deep Learning Tools Can Kickstart Cancer Radiation Therapy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>January 28, 2020 &#8211; New research from UT Southwestern has shown that deep learning technology could help providers quickly develop optimal treatment plans for cancer patients, decreasing the odds that the disease will spread.</p>



<p>Patients usually have to wait several days to a week to begin therapy while doctors manually develop treatment plans, which can be a tedious, time-consuming process. Providers must carefully review a patient’s imaging data and conduct several phases of feedback within the medical team.</p>



<p>Delaying radiation therapy for even a week can increase the chance of some cancers recurring or spreading by 12 to 14 percent, researchers noted.</p>



<p>“Some of these patients need radiation therapy immediately, but doctors often have to tell them to go home and wait,” said&nbsp;Steve Jiang, PhD, who directs UT&nbsp;Southwestern’s&nbsp;Medical Artificial Intelligence and Automation (MAIA) Lab. “Achieving optimal treatment plans in near real time is important and part of our broader mission to use AI to improve all aspects of cancer care.”</p>



<p>The team explored how AI and deep learning tools could improve multiple aspects of radiation therapy, from initial dosage plans required before the treatment can begin, to the dose recalculations that occur as the plan progresses.</p>



<p>Researchers used data from 70 prostate cancer patients to train four deep learning models. The tools learned to develop 3D renderings of how to best distribute the radiation in each patient.</p>



<p>Each model accurately predicted the treatment plans developed by the medical team, and the technology was able to produce optimal treatment plans within five-hundredths of a second after receiving clinical data for patients.</p>



<p>“Our AI can cut out much of the back and forth that happens between the doctor and the dosage planner,” Jiang said. “This improves the efficiency dramatically.”</p>



<p>Jiang also led a second study that showed how AI can quickly and accurately recalculate dosages before each radiation session, taking into account how a patient’s anatomy may have changed since the last therapy. A traditional, accurate recalculation can require patients to wait up to ten minutes or more, in addition to the time needed to conduct anatomy imaging before each session.</p>



<p>Jiang and his team developed an AI model that combined two conventional models used for dose calculation: a simple, fast model that lacked accuracy, and a complex one that was accurate but required more time.</p>



<p>The newly developed AI technology assessed the differences between the models and learned to combine the speed of the first with the accuracy of the second, producing calculations within one second.</p>



<p>UT Southwestern plans to use these new deep learning and AI capabilities in clinical care after implementing a patient interface. The MAIA Lab is also currently developing deep learning tools for several other purposes, including enhanced medical imaging and image processing, automated medical procedures, and improved disease diagnosis and outcome prediction.</p>



<p>Researchers have taken an interest in using AI to improve radiation therapy for patients. A team from the University of Texas MD Anderson Cancer Center recently developed a machine learning tool that could accurately predict two of the most challenging side effects of radiation therapy for patients with head and neck cancers: significant weight loss or the need for a feeding tube.</p>



<p>The technology could help providers deliver more proactive care for patients with cancer.</p>



<p>“Being able to identify which patients are at greatest risk would allow radiation oncologists to take steps to prevent or mitigate these possible side effects,” said Jay Reddy, MD, PhD, an assistant professor of radiation oncology at The University of Texas MD Anderson Cancer Center and lead author on the study.&nbsp;</p>



<p>“If the patient has an intermediate risk, and they might get through treatment without needing a feeding tube, we could take precautions such as setting them up with a nutritionist and providing them with nutritional supplements. If we know their risk for feeding tube placement is extremely high, we could place it ahead of time so they wouldn’t have to be admitted to the hospital after treatment. We’d know to keep a closer eye on that patient.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/">Deep Learning Tools Can Kickstart Cancer Radiation Therapy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-tools-can-kickstart-cancer-radiation-therapy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Importance of Image Resolution in Building Deep Learning Models for Medical Imaging</title>
		<link>https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/</link>
					<comments>https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 24 Jan 2020 08:11:29 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Building]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Image Resolution]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Models]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6356</guid>

					<description><![CDATA[<p>Source: pubs.rsna.org Deep learning with convolutional neural networks (CNNs) has shown tremendous success in classifying images, as we have seen with the ImageNet competition (1), which consists <a class="read-more-link" href="https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/">The Importance of Image Resolution in Building Deep Learning Models for Medical Imaging</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: pubs.rsna.org</p>



<p>Deep learning with convolutional neural networks (CNNs) has shown tremendous success in classifying images, as we have seen with the ImageNet competition (1), which consists of millions of everyday color images, such as animals, vehicles, and natural objects. For example, recent artificial intelligence (AI) systems have achieved a top-five accuracy (correct answer within the top five predictions) of greater than 96% on the ImageNet competition (2). To achieve such performance, computer vision scientists have generally found that deeper networks perform better, and as a result, modern AI architectures frequently have more than 100 layers (2).</p>



<p>Because of the sheer size of such networks, which contain millions of parameters, most AI solutions use significantly downsampled images. For example, the famous AlexNet CNN that won ImageNet in 2012 used an input size of 227 × 227 pixels (1), which is a fraction of the native resolution of images taken by cameras and smartphones (usually greater than 2000 pixels in each dimension). Lower-resolution images are used for a variety of reasons. First, smaller images are easier to distribute across the Web, as ImageNet in itself is approximately 150 GB of data. Second, the task of identifying common objects such as planes or cars can be readily discerned at lower resolutions. Third, downsampled images make it easier and much faster to train deep neural networks. Finally, using lower-resolution images may lead to increased generalizability or less overfitting of deep learning models that focus on important high-level features.</p>



<p>Given the success of deep learning in general image classification, many researchers have applied the same techniques used in the ImageNet competitions to medical imaging (3). With chest radiographs, for example, researchers have downsampled the input images to about 256 pixels in each dimension from original images with more than 2000 pixels in each dimension. Nevertheless, relatively high accuracy has been reported for detection on chest radiographs of some conditions, including tuberculosis, pleural effusion, atelectasis, and pneumonia (4,5).</p>



<p>However, subtle radiologic findings, such as pulmonary nodules, hairline fractures, or small pneumothoraces, are less likely to be visible at lower resolutions. As such, the optimal resolution for detecting such abnormalities using CNNs is an important research question. For example, in the 2017 Radiological Society of North America competition for determining bone age on skeletal radiographs (6), many competitors used an input size of 512 pixels or greater. For the DREAM (Dialogue for Reverse Engineering Assessments and Methods) challenge of classifying screening mammograms, resolutions of up to 1700 × 2100 pixels were used in top solutions (7). Recently, for the Society of Imaging Informatics in Medicine and American College of Radiology Pneumothorax Challenge (8), many top entries used an input size of up to 1024 × 1024 pixels.</p>



<p>In their article, “The Effect of Image Resolution on Deep Learning in Radiography,” Sabottke and Spieler (9) address that important question using the public ChestX-ray14 dataset from the National Institutes of Health, which consists of more than 100,000 chest radiographs stored as 8-bit gray-scale images at a resolution of 1024 × 1024 pixels (10). These radiographs have been labeled with 14 conditions including normal, lung nodule, pneumothorax, emphysema, and cardiomegaly (10). The authors used two popular deep CNNs, ResNet 34 and DenseNet 121, and analyzed their models’ efficacy in classifying radiographs at image resolutions ranging from 32 × 32 pixels to 600 × 600 pixels.</p>
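


<p>For readers who want to see where resolution enters a typical training pipeline, the minimal PyTorch/torchvision sketch below (illustrative only, not the code used in the study) varies the resize step while keeping the classifier itself fixed:</p>



<pre class="wp-block-code"><code>import torch.nn as nn
from torchvision import models, transforms

# Illustrative sketch: input resolution is set by the Resize transform in the
# preprocessing pipeline; the network architecture stays the same across runs.
def build_pipeline(resolution, n_classes=14):
    preprocess = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),   # radiographs are single-channel
        transforms.Resize((resolution, resolution)),   # e.g. 32, 256, 320, 448, 512, 600
        transforms.ToTensor(),
    ])
    model = models.resnet34()                          # one of the CNNs named above
    model.fc = nn.Linear(model.fc.in_features, n_classes)
    return preprocess, model

# Sweep the resolutions discussed in the study, training one model per setting.
for resolution in (32, 256, 320, 448, 512, 600):
    preprocess, model = build_pipeline(resolution)</code></pre>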



<p>The authors found that the performance of most models tended to plateau at resolutions of around 256 × 256 pixels and 320 × 320 pixels. However, classification of emphysema and lung nodules performed better at 512 × 512 pixels and 448 × 448 pixels, respectively, than at lower resolutions. Emphysema findings can be subtle in mild cases, manifested by faint lucencies, which probably explains the need for higher resolution. Similarly, small lung nodules may be “blurred out” and not visible at lower resolution, which can explain the improvement in classification performance at higher resolutions.</p>



<p>The authors’ work is important. As we move further in the application of AI in medical imaging, we should be more cognizant of the potential impact of image resolution on the performance of AI models, whether for segmentation, classification, or another task. Moreover, groups who create public datasets to advance machine learning in medical imaging should consider releasing the images at full or near-full resolution. This would allow researchers to further understand the impact of image resolution and could lead to more robust models that better translate into clinical practice.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/">The Importance of Image Resolution in Building Deep Learning Models for Medical Imaging</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google proposes hybrid approach to AI transfer learning for medical imaging</title>
		<link>https://www.aiuniverse.xyz/google-proposes-hybrid-approach-to-ai-transfer-learning-for-medical-imaging/</link>
					<comments>https://www.aiuniverse.xyz/google-proposes-hybrid-approach-to-ai-transfer-learning-for-medical-imaging/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 11 Dec 2019 11:10:35 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[hybrid]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[transfer learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5577</guid>

					<description><![CDATA[<p>Source: venturebeat.com Medical imaging is among the most popular application of AI and machine learning, and with good reason. Computer vision algorithms are naturally adept at spotting <a class="read-more-link" href="https://www.aiuniverse.xyz/google-proposes-hybrid-approach-to-ai-transfer-learning-for-medical-imaging/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-proposes-hybrid-approach-to-ai-transfer-learning-for-medical-imaging/">Google proposes hybrid approach to AI transfer learning for medical imaging</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Medical imaging is among the most popular applications of AI and machine learning, and with good reason. Computer vision algorithms are naturally adept at spotting anomalies experts sometimes miss, in the process reducing wait times and lightening clinical workloads. Perhaps that’s why, although the percentage of health care organizations that have adopted AI remains relatively low (22%) globally, the majority of practitioners (77%) believe the technology is important to the medical imaging field as a whole.</p>



<p>Unsurprisingly, data scientists have devoted outsize time and attention to developing AI imaging models for use in health care systems, a few of which Google scientists detail in a paper accepted to this week’s NeurIPS conference in Vancouver.  In “Transfusion: Understanding Transfer Learning for Medical Imaging,” coauthors hailing from Google Research (the R&amp;D-focused arm of Google’s business) investigate the role transfer learning plays in developing image classification algorithms.</p>



<p>In transfer learning, a machine learning algorithm is trained in two stages. First, there’s pretraining, where the algorithm is generally trained on a benchmark data set representing a diversity of categories. Next comes fine-tuning, where it is further trained on the specific target task of interest. The pretraining step helps the model to learn general features that can be reused on the target task, boosting its accuracy.</p>
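


<p>As a minimal sketch of that two-stage recipe (illustrative only, not the code from the paper), a pretrained ImageNet backbone can be loaded and then fine-tuned on a medical target task:</p>



<pre class="wp-block-code"><code>import torch.nn as nn
from torchvision import models

# Stage 1 (pretraining) is already done: load ImageNet weights.
# "DEFAULT" works in recent torchvision; older releases use pretrained=True.
model = models.resnet50(weights="DEFAULT")

# Stage 2 (fine-tuning): replace the 1000-class ImageNet head with one sized
# for the target task, e.g. five chest X-ray disease labels, then keep
# training on the medical dataset.
model.fc = nn.Linear(model.fc.in_features, 5)

# Optionally freeze the pretrained backbone and train only the new head first.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")</code></pre>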



<p>According to the team, transfer learning isn’t quite the end-all, be-all of AI training techniques. In a performance evaluation that compared a range of model architectures trained to diagnose diabetic retinopathy and five different diseases from chest x-rays, a portion of which were pretrained on an open source image data set (ImageNet), they report that transfer learning didn’t “significantly” affect performance on medical imaging tasks. Moreover, a family of simple, lightweight models performed at a level comparable to the standard architectures.</p>



<p>In a second test, the team studied the degree to which transfer learning affected the kinds of features and representations learned by the AI models. They analyzed and compared the hidden representations (i.e., representations of data learned in the model’s latent portions) in the different models trained to solve medical imaging tasks, computing similarity scores for some of the representations between models trained from scratch and those pretrained on ImageNet. The team concludes that for large models, representations learned from scratch tended to be much more similar to each other than those learned from transfer learning, while there was greater overlap between representation similarity scores in the case of smaller models.</p>
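


<p>There are several ways to score similarity between hidden representations; linear centered kernel alignment (CKA) is one widely used choice, and the short sketch below illustrates that metric rather than the exact analysis from the paper:</p>



<pre class="wp-block-code"><code>import numpy as np

def linear_cka(X, Y):
    """X, Y: (num_examples, num_features) activations from two models on the same inputs."""
    X = X - X.mean(axis=0)                      # center each feature
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X) ** 2        # squared Frobenius norm of the cross-covariance
    return cross / (np.linalg.norm(X.T @ X) * np.linalg.norm(Y.T @ Y))

# Hypothetical usage: activations of a scratch-trained vs. a pretrained model.
rng = np.random.default_rng(0)
acts_scratch = rng.normal(size=(200, 64))
acts_transfer = rng.normal(size=(200, 64))
print(linear_cka(acts_scratch, acts_transfer))</code></pre>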



<p>To rectify these and other issues, the team proposes a hybrid approach to transfer learning where, instead of reusing the full model architecture, only a portion of it is reused and the rest is redesigned to better suit the target task. They say that it confers most of the benefits of transfer learning while further enabling flexible model design. “Transfer learning is a central technique for many domains,” wrote Google Research scientists Maithra Raghu and Chiyuan Zhang in a blog post. “Many interesting open questions remain, [and we] look forward to tackling these questions in future work.”</p>
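


<p>A rough sketch of that hybrid idea (illustrative only, not the architecture from the paper) keeps the earliest pretrained layers and attaches a smaller, task-specific top:</p>



<pre class="wp-block-code"><code>import torch.nn as nn
from torchvision import models

pretrained = models.resnet50(weights="DEFAULT")   # recent torchvision API

# Reuse only the low-level pretrained filters...
reused_bottom = nn.Sequential(
    pretrained.conv1, pretrained.bn1, pretrained.relu,
    pretrained.maxpool, pretrained.layer1,
)

# ...and redesign the rest as a lightweight head for the medical task.
custom_top = nn.Sequential(
    nn.Conv2d(256, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 5),
)

hybrid = nn.Sequential(reused_bottom, custom_top)</code></pre>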



<p>The work comes shortly after Google detailed an AI capable of classifying chest X-rays with human-level accuracy. In another recent study, teams from the tech giant claimed to have developed a machine learning model that detects 26 skin conditions as accurately as dermatologists and a lung cancer detection AI that outperformed six human radiologists.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-proposes-hybrid-approach-to-ai-transfer-learning-for-medical-imaging/">Google proposes hybrid approach to AI transfer learning for medical imaging</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-proposes-hybrid-approach-to-ai-transfer-learning-for-medical-imaging/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine Learning Tool Accurately Diagnoses Esophageal Cancer</title>
		<link>https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 09 Nov 2019 08:05:21 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Imaging Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Medical Research]]></category>
		<category><![CDATA[Predictive Analytics]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5076</guid>

					<description><![CDATA[<p>Source: dqindia.com November 08, 2019 &#8211; Machine learning methods could accurately identify cancerous esophagus tissue on microscopy images without the time-consuming manual data input that is required for current <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/">Machine Learning Tool Accurately Diagnoses Esophageal Cancer</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: dqindia.com</p>



<p>November 08, 2019 &#8211; Machine learning methods could accurately identify cancerous esophagus tissue on microscopy images without the time-consuming manual data input that is required for current methods, according to a study published in <em>JAMA Network Open</em>.</p>



<p>Researchers at Dartmouth and Dartmouth-Hitchcock Norris Cotton Cancer Center have developed an innovative machine learning approach that automatically learns clinically important regions on whole-slide images to classify them.</p>



<p>Histopathology image analysis requires a manual annotation process that outlines the regions of interest on a high-resolution whole-slide image to train the computer model. Although the method is advanced, the annotation process is still tedious.</p>



<p>“Data annotation is the most time-consuming and laborious bottleneck in developing modern deep learning methods,” said Saeed Hassanpour, PhD, lead author of the study.</p>



<p>“Our study shows that deep learning models for histopathology slides analysis can be trained with labels only at the tissue level, thus removing the need for high-cost data annotation and creating new opportunities for expanding the application of deep learning in digital pathology.”</p>



<p>The team tested their method for identifying cancerous and precancerous esophagus tissue on high-resolution microscopy images without training on region-of-interest annotations. Researchers then applied the network to Barrett esophagus and esophageal adenocarcinoma detection and found that their method achieved better results than the traditional method.</p>
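<p>The published model is more sophisticated than this, but the core idea of training with only tissue-level labels can be sketched with a generic multiple-instance-learning setup: a whole-slide image is broken into tiles, every tile is scored, and the slide-level prediction is the maximum tile score, so no region-of-interest outlines are ever needed. The module sizes and placeholder data below are illustrative assumptions, not the Dartmouth architecture.</p>

<pre><code># Hedged sketch: weakly supervised slide classification using only slide-level labels (generic MIL).
import torch
import torch.nn as nn

class TileScorer(nn.Module):
    """Scores every tile of a slide; the slide score is the max over tiles."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(16, 1)

    def forward(self, tiles):                 # tiles: (num_tiles, 3, H, W) from one slide
        logits = self.classifier(self.encoder(tiles))   # (num_tiles, 1) per-tile logits
        return logits.max()                   # slide-level logit; no ROI annotation required

model = TileScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch: 64 random tiles of 128x128 pixels from one slide labeled "cancerous".
tiles = torch.randn(64, 3, 128, 128)
slide_label = torch.tensor(1.0)
loss = loss_fn(model(tiles), slide_label)
loss.backward()
optimizer.step()
</code></pre>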



<p>“Our new approach outperformed the current state-of-the-art approach that requires these detailed annotations for its training,” said Hassanpour.</p>



<p>“The result is significant because our method is based solely on tissue-level annotations, unlike existing methods that are based on manually annotated regions.”</p>



<p>Machine learning technology has consistently demonstrated its potential to improve diagnostics and care management. Recently, a team of researchers used machine learning tools to accurately predict patients with cancer who were at high risk of six-month mortality, which could help clinicians engage in timely conversations with their patients.</p>



<p>“Our findings demonstrated that machine learning algorithms can predict a patient’s risk of short-term mortality with good discrimination and PPV. Such a tool could be very useful in aiding clinicians’ risk assessments for patients with cancer as well as serving as a point-of-care prompt to consider discussions about goals and end-of-life preferences,” the researchers stated.</p>



<p>“Machine learning algorithms can be relatively easily retrained to account for emerging cancer survival patterns. As computational capacity and the availability of structured genetic and molecular information increase, we expect that predictive performance will increase and there may be a further impetus to implement similar tools in practice.”</p>



<p>The research team on the esophageal study believes that this new machine learning approach could improve cancer diagnosis and care.</p>



<p>“Our method would facilitate a more extensive range of research on analyzing histopathology images that were previously not possible due to the lack of detailed annotations,” Hassanpour concluded.</p>



<p>“Clinical deployment of such systems could assist pathologists in reading histopathology slides more accurately and efficiently, which is a critical task for the cancer diagnosis, predicting prognosis, and treatment of cancer patients.”</p>



<p>In future work, the team plans to further validate the model by testing it on data from other institutions and running prospective clinical trials. Additionally, the group will apply the method to histological images of other types of tumors and lesions that have limited training data.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/">Machine Learning Tool Accurately Diagnoses Esophageal Cancer</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-tool-accurately-diagnoses-esophageal-cancer/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Preparing for the Artificial Intelligence Explosion at RSNA 2019</title>
		<link>https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/</link>
					<comments>https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 05 Nov 2019 10:31:51 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Imaging Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Natural language processing]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5007</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com November 04, 2019&#160;&#8211;&#160;What do the following numbers have to do with the annual meeting of the Radiological Society of North America: 2, 12, 32, 271, <a class="read-more-link" href="https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/">Preparing for the Artificial Intelligence Explosion at RSNA 2019</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>November 04, 2019&nbsp;&#8211;&nbsp;What do the following numbers have to do with the annual meeting of the Radiological Society of North America: 2, 12, 32, 271, and 308? They refer to the presence of “artificial intelligence” at the show from 2015 to 2019, in that order.</p>



<p>RSNA sees tremendous potential in the application of AI and its various permutations to the work of radiologists across the continent — a significant shift from the initial belief that AI would make radiologists redundant.</p>



<p>The society has gone so far as standing up an expanded AI showcase for this year’s show, which takes place December 1 through 6 at the McCormick Place in Chicago.</p>



<p>“Many RSNA meeting attendees seek out AI subject matter. Creating an encompassing showcase on artificial intelligence for exhibitors, educators and researchers will create a dynamic environment for our attendees,” said Steve Drew, RSNA Assistant Executive Director of Scientific Assembly, Informatics and Corporate Relations in a July announcement.</p>



<p>“High interest by commercial companies and meeting attendees led to this exciting development,” added John Jaworski, CEM, Director: Meetings and Exhibition Services of RSNA. “We now have more than 100 AI Showcase companies participating—which is up 25 percent over 2018’s final showcase figures—and the AI Theater, Deep Learning Classroom and Hands-on Classroom will provide various educational opportunities on artificial intelligence within the Showcase.”</p>



<p>Given this explosion of AI at RSNA’s annual event, attendees must know the terms that will be thrown around and be able to differentiate between hype and reality. So here’s a primer for you, dear reader.</p>



<p><strong>Getting Conversant in AI</strong></p>



<p>AI is often seen as the silver bullet for healthcare’s many problems. It holds the promise of detecting diseases earlier and with more accuracy, standardizing clinical processes, and streamlining scheduling and paperwork. Ultimately, integrating artificial intelligence into clinical workflows can help ease provider burnout and improve patient outcomes.</p>



<p>Since 2016 alone, the FDA has approved 38 artificial intelligence algorithms for clinical use. Nearly half of these apply to radiology practice, the field most quickly adopting AI. Images and image reads easily lend themselves to interpretation by artificial intelligence.</p>



<p>Radiology is littered with studies demonstrating how algorithms and machine learning models are outperforming providers in detecting, characterizing, and monitoring disease. Many predict that, in the future, artificial intelligence will continue to improve, exceeding humans at certain, more complex tasks.</p>



<p>Many radiologists are fearful that the widespread use of AI will result in machines replacing their jobs. However, artificial intelligence should be a supplement to providers’ traditional workflow, complementing their work rather than eliminating it.</p>



<p>In order for radiologists to confidently implement artificial intelligence into clinical workflows, they must understand the different types of artificial intelligence and how these methods can be leveraged in radiology practice, dispelling false assumptions and reducing hesitancy toward adoption.</p>



<p><strong>Natural Language Processing</strong></p>



<p>Natural language processing (NLP) is one branch of artificial intelligence that allows computers to understand and interpret language. The technology can comb through reports, interpret spoken language, and generate structured text from free text.</p>



<p>A systematic review of NLP in radiology practice identified dozens of natural language processing methodologies applicable to clinical practice. Results demonstrated how the technology can be used for diagnostic surveillance, quality assessment, clinical support services, and cohort building for epidemiological studies.</p>



<p>All four of these applications can help improve provider efficiency and care quality. Diagnostic surveillance allows the machine to alert providers when findings have not been acted on, promoting efficient care and quick referral. Natural language processing’s ability to transform free text into structured text can ease providers’ administrative burden, automating routine data entry and improving clinical workflow. Cohort building allows researchers to quickly identify individuals for studies and helps providers identify high-risk groups sooner.</p>
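<p>As a toy illustration of turning free text into structured text, the rule-based sketch below pulls the impression section out of a made-up radiology report and flags a follow-up recommendation; the report text and keywords are invented for the example, and production NLP systems use far richer pipelines.</p>

<pre><code># Hedged sketch: rule-based extraction of structured fields from a free-text radiology report.
import re

report = """FINDINGS: 6 mm nodule in the right upper lobe. No pleural effusion.
IMPRESSION: Indeterminate pulmonary nodule. Recommend follow-up CT in 6 months."""

# Grab the impression section (everything after "IMPRESSION:").
impression = re.search(r"IMPRESSION:\s*(.+)", report, re.DOTALL).group(1).strip()

structured = {
    "impression": impression,
    "nodule_mentioned": bool(re.search(r"\bnodule\b", report, re.IGNORECASE)),
    "follow_up_recommended": bool(re.search(r"recommend.*follow-?up", report, re.IGNORECASE)),
}
print(structured)  # structured output could feed diagnostic surveillance or cohort building
</code></pre>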



<p><strong>Machine Learning</strong></p>



<p>Another branch of artificial intelligence is machine learning. In this model, algorithms learn from a data set how to solve a specific task. Data is fed into the system, the machine learns from it, and it then uses those learned patterns to predict a desired outcome (e.g., the risk of contracting a disease). Rather than being programmed to give a specific result from a data set, the machine learns how to predict outcomes by using patterns in the data to identify which variables are most influential to the result.</p>
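<p>A minimal sketch of that workflow, using scikit-learn and made-up tabular features, might look like the following; the features, labels, and disease-risk framing are placeholders for illustration.</p>

<pre><code># Hedged sketch: classic machine learning - fit a model on labeled data, predict an outcome on new data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: 500 patients, 4 numeric features (e.g., age, BMI, lab values), binary disease label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                 # the algorithm learns patterns from the training data
print("held-out accuracy:", model.score(X_test, y_test))
print("predicted risk for one new patient:", model.predict_proba(X_test[:1])[0, 1])
</code></pre>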



<p>In radiology practice, machine learning has a wide array of potential applications, as the sheer volume of data radiology generates is ripe for algorithm development. Machine learning models can learn to read and interpret a variety of medical images, including PET scans, MRIs, and CT scans. Quicker and more accurate reads of these images can identify disease faster and at an earlier stage.</p>



<p>Some studies indicate that machine learning can help improve overall workflow, communication, and patient safety if image read time is decreased and the quality of the image read is improved. Not only can this give providers more time to spend with patients instead of interpreting results, but it can also improve patient safety as more accurate reads will result in fewer false positive or false negative diagnoses.</p>



<p>Other research demonstrates how machine learning can help identify complex patterns in diagnosis. As a result, this artificial intelligence method can improve radiologists’ ability to make accurate decisions, identifying diseases more precisely and accurately.</p>



<p><strong>Deep Learning/Neural Networks</strong></p>



<p>Deep learning, often referred to as neural networks, is a type of machine learning in which the algorithm is built from layered networks of artificial neurons loosely modeled on the brain’s neural networks. The methodology has demonstrated high performance in identifying disease from imaging studies, taking the methods of machine learning one step further: rather than relying on input features hand-selected by the algorithm developer, the network learns relevant features directly from the data. It is a more advanced kind of machine learning that requires large, standardized datasets for training, since the machine has to learn on its own where to find irregularities in images.</p>
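<p>To make the distinction concrete, the sketch below defines a small convolutional neural network of the kind used for imaging studies: raw pixels go in, stacked layers learn their own features, and a probability of disease comes out. The architecture, image size, and task are illustrative assumptions only.</p>

<pre><code># Hedged sketch: a tiny convolutional neural network for a binary imaging task (e.g., normal vs. abnormal).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),  # grayscale image
    tf.keras.layers.MaxPooling2D(),          # early layers learn edges and textures
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),          # deeper layers learn more abstract patterns
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the image is abnormal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
</code></pre>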



<p>With obvious applicability to radiology practice, research demonstrates deep learning models can be particularly useful in screening images or early-stage identification.</p>



<p>Deep learning algorithms, though, are at risk of the ‘black box’ problem if their neural networks are not well understood. ‘Black box’ AI refers to developing an algorithm without an understanding of how the machine generates its output. Thus, many providers are uneasy trusting diagnostic and treatment decisions to an algorithm they do not understand.</p>



<p>If deep learning methods are to be more widely and confidently utilized in radiology practice, their interpretability will need to improve and their methods must be clearly laid out. As with all artificial intelligence methodologies, the higher the quality of the data used to build the algorithm, the more accurate and more trusted the results will be.</p>
<p>The post <a href="https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/">Preparing for the Artificial Intelligence Explosion at RSNA 2019</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/preparing-for-the-artificial-intelligence-explosion-at-rsna-2019/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
