<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>neural networks Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/neural-networks/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/neural-networks/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 29 Jun 2024 13:04:16 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>How do generative models like GANs (Generative Adversarial Networks) work?</title>
		<link>https://www.aiuniverse.xyz/how-do-generative-models-like-gans-generative-adversarial-networks-work/</link>
					<comments>https://www.aiuniverse.xyz/how-do-generative-models-like-gans-generative-adversarial-networks-work/#respond</comments>
		
		<dc:creator><![CDATA[Maruti Kr.]]></dc:creator>
		<pubDate>Sat, 29 Jun 2024 13:04:01 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[AI algorithms]]></category>
		<category><![CDATA[AI Image Generation]]></category>
		<category><![CDATA[AI model training]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Data Synthesis]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[GAN Applications]]></category>
		<category><![CDATA[GAN Technology]]></category>
		<category><![CDATA[GANs]]></category>
		<category><![CDATA[Generative Adversarial Networks]]></category>
		<category><![CDATA[Generator and Discriminator]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Neural Network Training]]></category>
		<category><![CDATA[neural networks]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=18956</guid>

					<description><![CDATA[<p>Generative Adversarial Networks (GANs) are a fascinating class of machine learning models used to generate new data that resembles the training data. They were first introduced by <a class="read-more-link" href="https://www.aiuniverse.xyz/how-do-generative-models-like-gans-generative-adversarial-networks-work/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-do-generative-models-like-gans-generative-adversarial-networks-work/">How do generative models like GANs (Generative Adversarial Networks) work?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="1024" height="1024" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/DALL·E-2024-06-29-18.31.23-A-visual-representation-of-a-Generative-Adversarial-Network-GAN-concept.-The-image-features-two-distinct-sections.-On-the-left-a-futuristic-robotic.webp" alt="" class="wp-image-18957" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/06/DALL·E-2024-06-29-18.31.23-A-visual-representation-of-a-Generative-Adversarial-Network-GAN-concept.-The-image-features-two-distinct-sections.-On-the-left-a-futuristic-robotic.webp 1024w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/DALL·E-2024-06-29-18.31.23-A-visual-representation-of-a-Generative-Adversarial-Network-GAN-concept.-The-image-features-two-distinct-sections.-On-the-left-a-futuristic-robotic-300x300.webp 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/DALL·E-2024-06-29-18.31.23-A-visual-representation-of-a-Generative-Adversarial-Network-GAN-concept.-The-image-features-two-distinct-sections.-On-the-left-a-futuristic-robotic-150x150.webp 150w, https://www.aiuniverse.xyz/wp-content/uploads/2024/06/DALL·E-2024-06-29-18.31.23-A-visual-representation-of-a-Generative-Adversarial-Network-GAN-concept.-The-image-features-two-distinct-sections.-On-the-left-a-futuristic-robotic-768x768.webp 768w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Generative Adversarial Networks (GANs) are a fascinating class of machine learning models used to generate new data that resembles the training data. They were first introduced by Ian Goodfellow and his colleagues in 2014. GANs are particularly popular in the field of image generation but have applications in other areas as well.</p>



<p>Here’s how GANs generally work:</p>



<h3 class="wp-block-heading">1. <strong>Architecture</strong></h3>



<p>A GAN consists of two main parts:</p>



<ul class="wp-block-list">
<li><strong>Generator</strong>: This component generates new data instances.</li>



<li><strong>Discriminator</strong>: This component evaluates them. It tries to distinguish between real data (from the training dataset) and fake data (created by the generator).</li>
</ul>
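


<p>To make the two components above concrete, here is a minimal PyTorch sketch (an illustration only, not the architecture from the original paper; the layer sizes and the use of flattened image vectors are assumptions):</p>



<pre class="wp-block-code"><code>import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 64, 784  # assumed sizes, e.g. flattened 28x28 images

# Generator: maps a random noise vector to a synthetic data instance.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128),
    nn.LeakyReLU(0.2),
    nn.Linear(128, DATA_DIM),
    nn.Tanh(),  # outputs scaled to [-1, 1]
)

# Discriminator: maps a data instance to the probability that it is real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128),
    nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
    nn.Sigmoid(),
)

fake = generator(torch.randn(16, NOISE_DIM))  # a batch of 16 fakes
verdicts = discriminator(fake)                # 16 probabilities of "real"
</code></pre>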



<h3 class="wp-block-heading">2. <strong>Training Process</strong></h3>



<p>The training of a GAN involves the following steps:</p>



<ul class="wp-block-list">
<li>The <strong>generator</strong> takes a random noise vector (random input) and transforms it into a data instance.</li>



<li>The <strong>discriminator</strong> receives either a generated data instance or a real data instance and must determine if it is real or fake.</li>
</ul>



<h3 class="wp-block-heading">3. <strong>Adversarial Relationship</strong></h3>



<p>The core idea behind GANs is based on a game-theoretical scenario where the generator and the discriminator are in a constant battle. The generator aims to produce data that is indistinguishable from genuine data, tricking the discriminator. The discriminator, on the other hand, learns to become better at distinguishing fake data from real data. This adversarial process leads to improvements in both models:</p>



<ul class="wp-block-list">
<li><strong>Generator’s Goal</strong>: Fool the discriminator by generating realistic data.</li>



<li><strong>Discriminator’s Goal</strong>: Accurately distinguish between real and generated data.</li>
</ul>



<h3 class="wp-block-heading">4. <strong>Loss Functions</strong></h3>



<p>Each component has its loss function that needs to be optimized:</p>



<ul class="wp-block-list">
<li><strong>Discriminator Loss</strong>: This aims to correctly classify real data as real and generated data as fake.</li>



<li><strong>Generator Loss</strong>: This encourages the generator to produce data that the discriminator will classify as real.</li>
</ul>
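


<p>In the classic formulation, both of the goals above reduce to binary cross-entropy applied with opposite targets. A hedged sketch, reusing the networks from the earlier snippet:</p>



<pre class="wp-block-code"><code>import torch
import torch.nn as nn

bce = nn.BCELoss()

def discriminator_loss(d_real, d_fake):
    # Push predictions toward 1 on real data and toward 0 on fakes.
    return (bce(d_real, torch.ones_like(d_real)) +
            bce(d_fake, torch.zeros_like(d_fake)))

def generator_loss(d_fake):
    # Reward the generator when the discriminator labels its fakes "real".
    return bce(d_fake, torch.ones_like(d_fake))
</code></pre>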



<h3 class="wp-block-heading">5. <strong>Backpropagation and Optimization</strong></h3>



<p>Both the generator and the discriminator are typically neural networks, and they are trained using backpropagation. They are trained simultaneously with the discriminator adjusting its weights to get better at telling real from fake, and the generator adjusting its weights to generate increasingly realistic data.</p>
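


<p>Continuing the sketches above, one simultaneous training step might look like this (the optimizers and learning rates are illustrative assumptions):</p>



<pre class="wp-block-code"><code>import torch

opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)

def train_step(real_batch):
    # 1) Update the discriminator on real data and freshly generated fakes.
    noise = torch.randn(real_batch.size(0), NOISE_DIM)
    fakes = generator(noise).detach()  # don't backprop into G on this pass
    d_loss = discriminator_loss(discriminator(real_batch),
                                discriminator(fakes))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Update the generator so its samples fool the updated discriminator.
    g_loss = generator_loss(discriminator(generator(noise)))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
</code></pre>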



<h3 class="wp-block-heading">6. <strong>Convergence</strong></h3>



<p>The training process is ideally stopped when the generator produces data that the discriminator judges as real about half the time, meaning the discriminator is essentially guessing, unable to distinguish real from fake effectively.</p>



<h3 class="wp-block-heading">Example Use Cases:</h3>



<ul class="wp-block-list">
<li><strong>Image Generation</strong>: GANs can generate realistic images that look like they could belong to the training set.</li>



<li><strong>Super Resolution</strong>: Enhancing the resolution of images.</li>



<li><strong>Style Transfer</strong>: Applying the style of one image to the content of another.</li>



<li><strong>Data Augmentation</strong>: Creating new training data for machine learning models.</li>
</ul>



<p>GANs have been revolutionary due to their ability to generate high-quality, realistic outputs, making them a powerful tool in the AI toolkit. However, training GANs can be challenging due to issues like mode collapse (where the generator produces a limited diversity of samples) and non-convergence.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-do-generative-models-like-gans-generative-adversarial-networks-work/">How do generative models like GANs (Generative Adversarial Networks) work?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-do-generative-models-like-gans-generative-adversarial-networks-work/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning predicts schizophrenia relapses using smartphone data</title>
		<link>https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 16 Oct 2020 07:02:18 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Behavior]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[neural networks]]></category>
		<category><![CDATA[schizophrenia]]></category>
		<category><![CDATA[smartphone]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12269</guid>

					<description><![CDATA[<p>Source: newatlas.com A pair of newly published studies are demonstrating how passive smartphone data can be used to effectively predict relapse episodes in schizophrenia patients. The research <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/">Machine learning predicts schizophrenia relapses using smartphone data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: newatlas.com</p>



<p>Two newly published studies demonstrate how passive smartphone data can be used to effectively predict relapse episodes in schizophrenia patients. The research used machine learning to analyze behavioral data and predict schizophrenic relapses up to one month before they occurred.</p>



<p>The data used in both new papers was gathered from a cohort of 60 subjects with schizophrenia. Passive smartphone data, such as accelerometer readings and phone-call metadata (call frequency and duration), was captured for the entire cohort. Eighteen of the subjects suffered a schizophrenic relapse during the course of the study.</p>



<p>A type of machine learning, dubbed encoder-decoder neural networks, was then used to analyze the mass of data, looking for anomalous behavioral patterns within 30 days of a major relapse. The results revealed that a 108 percent increase in behavior anomalies could be detected in the month leading up to a relapse, suggesting this kind of system may be useful for detecting and treating patients before a major schizophrenic episode arises.</p>
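


<p>The papers&#8217; exact models and features are not reproduced here, but the underlying idea (train an encoder-decoder network on a patient&#8217;s routine behavior and treat high reconstruction error as an anomaly) can be sketched in a few lines of PyTorch; the feature set and layer sizes below are assumptions:</p>



<pre class="wp-block-code"><code>import torch
import torch.nn as nn

N_FEATURES = 10  # assumed daily features: call counts, movement, etc.

# Encoder-decoder: compress a day's behavior, then reconstruct it.
autoencoder = nn.Sequential(
    nn.Linear(N_FEATURES, 4), nn.ReLU(),  # encoder
    nn.Linear(4, N_FEATURES),             # decoder
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
mse = nn.MSELoss()

def fit(baseline_days, epochs=200):
    # Learn to reconstruct the patient's typical (non-relapse) days.
    for _ in range(epochs):
        loss = mse(autoencoder(baseline_days), baseline_days)
        opt.zero_grad()
        loss.backward()
        opt.step()

def anomaly_score(day):
    # A day that reconstructs poorly deviates from the learned routine.
    with torch.no_grad():
        return mse(autoencoder(day), day).item()
</code></pre>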



<p>“We tried to create an approach where we could tell a clinician: not only is this participant experiencing unusual behavior, these are the specific things that are different in this particular patient,” says Dan Adler, a researcher from Cornell Tech working on the project. “If we can predict when someone’s symptoms are going to change before relapse, we can get them early treatment and possibly prevent an inpatient visit.”</p>



<p>As well as predicting relapses ahead of time, the system could effectively predict patients&#8217; self-assessments of their conditions. And a more granular analysis of the data revealed fine-grained symptom changes could also be predicted.</p>



<p>Different kinds of behavioral patterns, as tracked through passive smartphone data, could be associated with specific symptom characteristics. One of the papers, published in the journal&nbsp;<em>Scientific Reports</em>, strikingly presents a hypothetical scenario whereby the system itself could conceivably intervene in real-time to help guide subjects toward behavioral patterns that prevent a looming relapse.</p>



<p>“For example, if there is an unusual change in the ultradian rhythm of environment noise for a couple of hours, the system can prompt the patient to move to an environment that has a lower and more stable level of ambient noise to prevent the noise from affecting the patients’ cognitive performance,” the researchers write. “If the system notices that the patient’s phone usage in certain periods, for example in evening, has a very different pattern than in other periods (morning and afternoon), the system can intervene to change the patient’s phone usage pattern, delaying the arrival of phone notifications for instance, to avoid an increase in stress.”</p>



<p>Tanzeem Choudhury, from Cornell Tech and co-author on both of the new papers, suggests the system they developed could be adapted for many other mental health conditions. Even major depressive episodes, Choudhury suggests, could be predicted ahead of time by passively tracking extreme behavioral changes.</p>



<p>“By focusing on changes in behavioral routines and misalignment with underlying biological rhythms, we expect our approach to generate clinically actionable insights that generalize across a diverse demographic of users,” says Choudhury.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/">Machine learning predicts schizophrenia relapses using smartphone data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-predicts-schizophrenia-relapses-using-smartphone-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence Tool Diagnoses Alzheimer’s with 95% Accuracy</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 01 Sep 2020 08:22:09 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[data quality]]></category>
		<category><![CDATA[Medical Research]]></category>
		<category><![CDATA[neural networks]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11356</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com A team from Stevens Institute of Technology has developed an artificial intelligence tool that can diagnose Alzheimer’s disease with more than 95 percent accuracy, eliminating the need <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/">Artificial Intelligence Tool Diagnoses Alzheimer’s with 95% Accuracy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>A team from Stevens Institute of Technology has developed an artificial intelligence tool that can diagnose Alzheimer’s disease with more than 95 percent accuracy, eliminating the need for expensive scans or in-person testing.</p>



<p>In addition, the algorithm is also able to explain its conclusions, enabling human experts to check the accuracy of its diagnosis.</p>



<p>Alzheimer’s disease can impact a person’s use of language, the researchers noted. For example, people with Alzheimer’s tend to replace nouns with pronouns, and they can express themselves in a very roundabout, awkward way.</p>



<p>The team designed an explainable AI tool that uses attention mechanisms and a convolutional neural network to accurately identify well-known signs of Alzheimer’s, as well as subtle linguistic patterns that were previously overlooked.</p>



<p>Researchers trained the algorithm using texts composed by both healthy subjects and known Alzheimer’s sufferers describing a drawing of children stealing cookies from a jar. The team converted each individual sentence into a unique numerical sequence, or vector, representing a specific point in a 512-dimensional space.</p>



<p>This kind of approach allows even complex sentences to be assigned a concrete numerical value, making it easier to analyze structural and thematic relationships between sentences.</p>



<p>Using those vectors along with handcrafted features, the AI gradually learned to spot differences between sentences composed by healthy or unhealthy individuals, and was able to determine with significant accuracy how likely any given text was to have been produced by a person with Alzheimer’s.</p>
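


<p>The study&#8217;s own pipeline used attention mechanisms and a convolutional neural network; as a hedged sketch of the general shape (sentence vectors feeding a classifier), the snippet below substitutes a hypothetical <code>embed()</code> function and a simple logistic-regression classifier:</p>



<pre class="wp-block-code"><code>import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(sentence: str) -> np.ndarray:
    """Hypothetical 512-dimensional sentence encoder; a stand-in for
    whichever pretrained embedder the study used."""
    rng = np.random.default_rng(abs(hash(sentence)) % 2**32)
    return rng.normal(size=512)  # placeholder vector for illustration

def text_to_vector(text: str) -> np.ndarray:
    # Average the sentence vectors of one picture-description transcript.
    sentences = [s for s in text.split(".") if s.strip()]
    return np.mean([embed(s) for s in sentences], axis=0)

def train_classifier(texts, labels):
    # labels: 1 = written by an Alzheimer's patient, 0 = healthy control.
    X = np.stack([text_to_vector(t) for t in texts])
    return LogisticRegression(max_iter=1000).fit(X, labels)
</code></pre>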



<p>“This is a real breakthrough,” said the tool’s creator, K.P. Subbalakshmi, founding director of Stevens Institute of Artificial Intelligence and&nbsp;professor of electrical and computer engineering at the Charles V. Schaefer School of Engineering &amp; Science.</p>



<p>“We’re opening an exciting new field of research, and making it far easier to explain to patients why the AI came to the conclusion that it did, while diagnosing patients. This addresses the important question of trustability of AI systems in the medical field.”&nbsp;&nbsp;</p>



<p>The AI system can also incorporate new criteria that may be identified by other research teams in the future, making the algorithm increasingly accurate over time.</p>



<p>“We designed our system to be both modular and transparent,” Subbalakshmi explained. “If other researchers identify new markers of Alzheimer’s, we can simply plug those into our architecture to generate even better results.”</p>



<p>In the future, AI tools may be able to diagnose Alzheimer’s using any text, from emails to social media posts. However, to develop such an algorithm, researchers would need to train it on many different kinds of texts produced by known Alzheimer’s sufferers instead of just picture descriptions.</p>



<p>While this kind of data is not yet available, increasing access to this kind of information could lead to the development of accurate, comprehensive AI tools.</p>



<p>“The algorithm itself is incredibly powerful,” Subbalakshmi said. “We’re only constrained by the data available to us.”</p>



<p>The researchers’ next steps will be gathering new data that will help the algorithm diagnose patients with Alzheimer’s disease based on speech in languages other than English. The team is also uncovering ways in which other neurological conditions, such as aphasia, stroke, traumatic brain injuries, and depression, can impact language use.</p>



<p>“This method is definitely generalizable to other diseases,” said Subbalakshmi. “As we acquire more and better data, we’ll be able to create streamlined, accurate diagnostic tools for many other illnesses too.”&nbsp;</p>



<p>Researchers expect that providers can use this AI tool to more accurately diagnose Alzheimer’s, leading to earlier treatment and reduced healthcare costs.</p>



<p>“This is absolutely state-of-the-art,” said Subbalakshmi. “Our AI software is the most accurate diagnostic tool currently available while also being explainable.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/">Artificial Intelligence Tool Diagnoses Alzheimer’s with 95% Accuracy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Researchers Use Memristors To Create More Energy Efficient Neural Networks</title>
		<link>https://www.aiuniverse.xyz/researchers-use-memristors-to-create-more-energy-efficient-neural-networks/</link>
					<comments>https://www.aiuniverse.xyz/researchers-use-memristors-to-create-more-energy-efficient-neural-networks/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 31 Aug 2020 06:31:32 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Memristors]]></category>
		<category><![CDATA[neural networks]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11315</guid>

					<description><![CDATA[<p>Source: unite.ai One of the less glamorous aspects of artificial intelligence is that it often requires a large amount of processing power and therefore it often has <a class="read-more-link" href="https://www.aiuniverse.xyz/researchers-use-memristors-to-create-more-energy-efficient-neural-networks/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/researchers-use-memristors-to-create-more-energy-efficient-neural-networks/">Researchers Use Memristors To Create More Energy Efficient Neural Networks</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: unite.ai</p>



<p>One of the less glamorous aspects of artificial intelligence is that it often requires a large amount of processing power, and therefore often has a large energy footprint. Recent work by researchers at UCL has produced a method of improving an AI&#8217;s energy efficiency.</p>



<p>Neural networks and machine learning are powerful tools, but the most impressive feats of artificial intelligence usually have a large energy cost associated with them. For example, when OpenAI taught a robotic hand to manipulate a Rubik’s cube, it was estimated that the feat required around 2.8 gigawatt-hours of electricity.</p>



<p>According to TechXplore, researchers at UCL have designed a new method of generating artificial neural networks. The new method uses memristors to build the network, yielding networks around 1,000 times more energy-efficient than those created with traditional approaches. Memristors are devices that can recall the amount of electrical charge that last flowed through them, preserving that memory state after they have been shut off; this means they can remember their state even if a device loses power. Although memristors were first theorized around 50 years ago, it wasn&#8217;t until 2008 that a real memristor was created.</p>



<p>Memristors are occasionally referred to as “neuromorphic” computing devices or “brain-inspired” devices. Memristors are similar to the building blocks the brain uses to process information and create memories. They are highly efficient compared to most modern computer systems. These memristor devices possess aspects of capacitors and resistors, and over the past decade or so they have been manufactured and used in a variety of memory devices. The UCL research teams hope that their research will help these devices be used to create AI systems within a few years.</p>



<p>Despite their energy efficiency, memristor-based networks are traditionally much less accurate than regular neural networks, but the UCL researchers found a way to compensate. They found that when using many memristors, the devices could be split into multiple sub-groups and their calculations averaged together. The averaging helps flaws in the sub-groups cancel each other out, letting the more relevant patterns emerge.</p>
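


<p>A toy numerical illustration of why sub-group averaging helps (the noise model below is an assumption for illustration, not the UCL experiment):</p>



<pre class="wp-block-code"><code>import numpy as np

rng = np.random.default_rng(0)
true_output = 1.0   # the value an ideal, flaw-free network would compute
n_devices = 64      # memristor-based units, each with its own device flaws

# Each device computes the right answer plus an independent random error.
readings = true_output + rng.normal(scale=0.5, size=n_devices)

# Split the devices into sub-groups and average their calculations.
subgroup_means = readings.reshape(8, 8).mean(axis=1)  # 8 groups of 8
committee = subgroup_means.mean()

print(f"one device's error:     {abs(readings[0] - true_output):.3f}")
print(f"averaged-groups error:  {abs(committee - true_output):.3f}")
</code></pre>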



<p>Dr. Adnan Mehonic and Ph.D. student Dovydas Joksas (both UCL Electronic and Electrical Engineering) and their co-authors tested this averaging approach across various memristor types and found that the technique seemed to improve accuracy in all of the different memristors tested, not just one or two of them. The accuracy improvements applied to all the groups that were tested, no matter the type of material the memristor was made out of.</p>



<p>According to Dr. Mehonic, as quoted by TechXplore:</p>



<p>“We hoped that there might be more generic approaches that improve not the device-level, but the system-level behavior, and we believe we found one. Our approach shows that, when it comes to memristors, several heads are better than one. Arranging the neural network into several smaller networks rather than one big network led to greater accuracy overall.”</p>



<p>The research team was excited to have taken a computer science technique and applied it to memristors, using a common error-avoidance technique (averaging calculations) to increase the accuracy of memristive neural networks. Study co-author Professor Tony Kenyon of UCL Electronic &amp; Electrical Engineering believes that memristors could &#8220;take a leading role&#8221; in creating more energy-sustainable edge computing and IoT devices.</p>



<p>Memristors are not only more energy-efficient than traditional neural network models; they can also be easily included in a hand-held mobile device. This is predicted to be of increasing importance in the near future as more data is created and transmitted all the time, even though it is difficult to increase transmission capacity beyond a certain point. Memristors could help enable the transfer of large volumes of data at a fraction of the energy cost.</p>
<p>The post <a href="https://www.aiuniverse.xyz/researchers-use-memristors-to-create-more-energy-efficient-neural-networks/">Researchers Use Memristors To Create More Energy Efficient Neural Networks</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/researchers-use-memristors-to-create-more-energy-efficient-neural-networks/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning Algorithm Could Enhance Genomic Sequencing</title>
		<link>https://www.aiuniverse.xyz/deep-learning-algorithm-could-enhance-genomic-sequencing/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-algorithm-could-enhance-genomic-sequencing/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 07 Aug 2020 06:01:49 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[analytics technologies]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Genomics]]></category>
		<category><![CDATA[neural networks]]></category>
		<category><![CDATA[Personalized Medicine]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10712</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com A deep learning tool could improve genomic sequencing processes, identifying disease-causing mechanisms that might otherwise be missed by traditional screening methods, according to a study published in Nature <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-algorithm-could-enhance-genomic-sequencing/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-algorithm-could-enhance-genomic-sequencing/">Deep Learning Algorithm Could Enhance Genomic Sequencing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>A deep learning tool could improve genomic sequencing processes, identifying disease-causing mechanisms that might otherwise be missed by traditional screening methods, according to a study published in <em>Nature Machine Intelligence</em>.</p>



<p>Researchers from Children’s Hospital of Philadelphia (CHOP) and New Jersey Institute of Technology (NJIT) developed the tool, which can help predict sites of DNA methylation – a process that can change the activity of DNA without changing its overall structure.</p>



<p>DNA methylation is involved in many key cellular processes and is an important component in gene expression. Errors in methylation can be linked to a wide range of human diseases. Genomic sequencing tools can effectively pinpoint polymorphisms that may cause a disease, but these same methods are unable to capture the effects of methylation because the individual genes still look the same.</p>



<p>Researchers have made a considerable effort to study DNA methylation of N<sup>6</sup>-adenine (6mA) in eukaryotic cells, which include human cells. Although there is genomic data available, the role of methylation in these cells remains elusive.</p>



<p>“Previously, methods that had been developed to identify these methylation sites in the genome were very conservative and could only look at certain nucleotide lengths at a given time, so a large number of methylation sites were missed,” said Hakon Hakonarson, PhD, Director of the Center for Applied Genomics (CAG) at CHOP and one of the senior co-authors of the study.</p>



<p>“We needed to develop a better way of identifying and predicting methylation sites with a tool that could identify these motifs throughout the genome that may have a robust functional impact and are potentially disease causing.”</p>



<p>To overcome this issue, the team developed a deep learning algorithm that could predict where these sites of methylation happened, which could then help researchers determine the effect they might have on nearby genes.</p>



<p>The software, called Deep6mA, applies neural networks to study DNA methylation sites in multicellular organisms. This new method holds several advantages, researchers noted: it automates the representation of sequence features at different levels of detail, and it facilitates the integration of a broad spectrum of methylation sequences on nearby genes of interest.</p>
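


<p>Deep6mA&#8217;s exact architecture is not described here, but the general approach of scanning one-hot-encoded DNA with a 1-D convolutional network can be sketched as follows (the window size and layer sizes are assumptions):</p>



<pre class="wp-block-code"><code>import torch
import torch.nn as nn

BASES = "ACGT"
WINDOW = 41  # assumed sequence window centered on the candidate site

def one_hot(seq: str) -> torch.Tensor:
    # Encode a DNA window as a 4 x L matrix of indicator rows.
    idx = torch.tensor([BASES.index(b) for b in seq])
    return nn.functional.one_hot(idx, num_classes=4).T.float()

# Convolutions scan for short sequence motifs; the head scores the site.
model = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=8), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(16, 1), nn.Sigmoid(),  # P(center adenine is 6mA-methylated)
)

window = one_hot("A" * WINDOW).unsqueeze(0)  # a batch holding one example
print(model(window))  # untrained output, shown only to check shapes
</code></pre>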



<p>The innovative process could also lead to model development and prediction in large-scale genomic data.</p>



<p>The researchers applied the algorithm to three representative organisms: <em>A. thaliana</em>, <em>D. melanogaster</em>, and <em>E. coli</em>, the first two being eukaryotic. The deep learning tool was able to identify 6mA methylation sites down to the resolution of a single nucleotide, or basic unit of DNA. Even in this initial confirmation study, researchers were able to visualize regulatory patterns they could not see using traditional methods.</p>



<p>“One limitation is that our proposed prediction is purely based on sequence information,” said Zhi Wei, PhD, a professor of computer science at NJIT and a senior co-author of the study.</p>



<p>“Whether a candidate is a 6mA site or not will also depend on many other factors. Methylation, including 6mA, is a dynamic process, which will change with cellular context. In the future, we would like to take other factors into consideration such as gene expression. We hope to predict 6mA across cellular context by integrating other data.”</p>



<p>Despite this limitation, the researchers believe that their study shows the ability for deep learning to accelerate personalized medicine and enhance clinical care.</p>



<p>“We already know that a number of genes have a disease-causing mechanism brought about by methylation, and while this study was not done in human cells, the eukaryotic cell models were very comparable,” Hakonarson said.</p>



<p>“Genomic scientists looking to translate their findings into clinical applications would find this tool very useful, and the level of precision could eventually lead to the discovery of specific cells or targets that are candidates for therapeutic intervention.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-algorithm-could-enhance-genomic-sequencing/">Deep Learning Algorithm Could Enhance Genomic Sequencing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-algorithm-could-enhance-genomic-sequencing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence Detects Epileptic Seizures in Real Time</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-detects-epileptic-seizures-in-real-time/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-detects-epileptic-seizures-in-real-time/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 01 Jul 2020 07:07:30 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[analytics technologies]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[neural networks]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9906</guid>

					<description><![CDATA[<p>Source: healthitanalytics.com June 30, 2020 &#8211; An artificial intelligence algorithm can analyze electroencephalograph (EEG) electrodes to detect a seizure and accurately pinpoint its location, according to a study published in Scientific Reports. <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-detects-epileptic-seizures-in-real-time/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-detects-epileptic-seizures-in-real-time/">Artificial Intelligence Detects Epileptic Seizures in Real Time</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthitanalytics.com</p>



<p>June 30, 2020 &#8211; An artificial intelligence algorithm can analyze data from electroencephalograph (EEG) electrodes to detect a seizure and accurately pinpoint its location, according to a study published in <em>Scientific Reports</em>.</p>



<p>The researchers stated that epilepsy is one of the most common central nervous system disorders, with nearly four percent of people across different ages diagnosed with epilepsy during their lifetimes.</p>



<p>The current understanding of most seizures is that they occur when normal brain activity is interrupted by a strong, sudden hyper-synchronized firing of a cluster of neurons. During a seizure, if a person is hooked up to an EEG – a device that measures electrical output – the abnormal brain activity is presented as amplified spike-and-wave discharges.</p>



<p>However, when temporal EEG signals are used, it can be difficult to accurately detect a seizure. Researchers developed a network inference technique that would facilitate detection of a seizure and pinpoint its location with improved accuracy.</p>



<p>During an EEG session, a person has electrodes attached to different spots on her head, and each electrode records electrical activity around that spot.</p>



<p>“We treated EEG electrodes as nodes of a network. Using the recordings (time-series data) from each node, we developed a data-driven approach to infer time-varying connections in the network or relationships between nodes,” said Walter Bomela, a postdoctoral fellow in the Preston M. Green Department of Electrical &amp; Systems Engineering at Washington University in St. Louis. “We want to infer how a brain region is interacting with others.”</p>



<p>These relationships form a network. Once researchers had a network, they could measure its parameters holistically. For example, instead of measuring the strength of a single signal, the team could evaluate the overall network for strength.</p>



<p>One parameter, the Fiedler eigenvalue, increases when a seizure occurs. In network theory, the Fiedler eigenvalue is also related to a network’s synchronicity: the larger the value, the more synchronous the network.</p>
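


<p>The paper&#8217;s network-inference procedure is more involved, but the measurement itself is easy to sketch; the snippet below stands in channel correlations for the inferred connections, which is an assumption:</p>



<pre class="wp-block-code"><code>import numpy as np

def fiedler_eigenvalue(eeg: np.ndarray) -> float:
    """eeg: channels x samples array for one time window."""
    # Stand-in network: edge weights from absolute pairwise correlation.
    A = np.abs(np.corrcoef(eeg))
    np.fill_diagonal(A, 0.0)
    # Graph Laplacian L = D - A; its second-smallest eigenvalue is the
    # Fiedler value, which grows as the network becomes more synchronous.
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.eigvalsh(L)[1]  # eigenvalues come back sorted

rng = np.random.default_rng(0)
independent = rng.normal(size=(23, 512))       # 23 desynchronized channels
shared = rng.normal(size=(1, 512))             # one common driving signal
synchronized = shared + 0.1 * rng.normal(size=(23, 512))
print(fiedler_eigenvalue(independent))   # small
print(fiedler_eigenvalue(synchronized))  # much larger
</code></pre>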



<p>“This agrees with the theory that during seizure, the brain activity is synchronized,” Bomela said.</p>



<p>A bias toward synchronization also helps to reduce artifact and background noise, researchers noted. For instance, if a person scratches their arm, the associated brain activity will be captured on some EEG channels or electrodes. However, it won’t be synchronized with seizure activity. This structure inherently eliminates the importance of unrelated signals, so that only brain activities that are in sync will significantly increase the Fiedler eigenvalue.</p>



<p>“Our technique allows us to get raw data, process it and extract a feature that’s more informative for the machine learning model to use,” said Bomela. “The major advantage of our approach is to fuse signals from 23 electrodes to one parameter that can be efficiently processed with fewer computing resources.”</p>



<p>Currently, the system works for an individual patient. The next step is to integrate machine learning to generalize the technique for identifying different types of seizures across patients. Researchers are seeking to take advantage of various parameters characterizing the network and use them as features to train the machine learning algorithm.</p>



<p>“The network is like a face,” said Bomela. “You can extract different parameters from an individual’s network — such as the clustering coefficient or closeness centrality — to help machine learning differentiate between different seizures.”</p>



<p>In network theory, similarities in specific parameters are associated with specific networks. In this case, those networks will correspond to different types of seizures.</p>



<p>The team’s overall aim is to one day design a device for people with epilepsy that is analogous to an insulin pump. As the neurons begin to synchronize, the device will provide medication or electrical interference to stop the seizure. However, in order for this to happen, researchers need a better understanding of the neural network.</p>



<p>“While the ultimate goal is to refine the technique for clinical use, right now we are focused on developing methods to identify seizures as drastic changes in brain activity,” said Jr-Shin Li, professor in the Preston M. Green Department of Electrical &amp; Systems Engineering. “These changes are captured by treating the brain as a network in our current method.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-detects-epileptic-seizures-in-real-time/">Artificial Intelligence Detects Epileptic Seizures in Real Time</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-detects-epileptic-seizures-in-real-time/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI System – Using Neural Networks With Deep Learning – Beats Stock Market in Simulation</title>
		<link>https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/</link>
					<comments>https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 03 Jun 2020 06:57:52 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI systems]]></category>
		<category><![CDATA[Automatica Sinica]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[neural networks]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9236</guid>

					<description><![CDATA[<p>Source: scitechdaily.com Researchers in Italy have melded the emerging science of convolutional neural networks (CNNs) with deep learning — a discipline within artificial intelligence — to achieve <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/">AI System – Using Neural Networks With Deep Learning – Beats Stock Market in Simulation</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: scitechdaily.com</p>



<p>Researchers in Italy have melded the emerging science of convolutional neural networks (CNNs) with deep learning &#8212; a discipline within artificial intelligence &#8212; to achieve a system of market forecasting with the potential for greater gains and fewer losses than previous attempts to use AI methods to manage stock portfolios. The team, led by Prof. Silvio Barra at the University of Cagliari, published their findings in the <em>IEEE/CAA Journal of Automatica Sinica</em>.</p>



<p>The University of Cagliari-based team set out to create an AI-managed “buy and hold” (B&amp;H) strategy — a system for deciding which of three possible actions to take each day: a long action (buying a stock and selling it before the market closes), a short action (selling a stock, then buying it back before the market closes), or a hold (deciding not to invest in a stock that day). At the heart of their proposed system is an automated cycle of analyzing layered images generated from current and past market data. Older B&amp;H systems based their decisions on machine learning, a discipline that leans heavily on predictions based on past performance.</p>



<p>By letting their proposed network analyze current data layered over past data, they are taking market forecasting a step further, allowing for a type of learning that more closely mirrors the intuition of a seasoned investor rather than a robot. Their proposed network can adjust its buy/sell thresholds based on what is happening both in the present moment and the past. Taking into account present-day factors increases the yield over both random guessing and trading algorithms not capable of real-time learning.</p>
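


<p>The paper&#8217;s title refers to time series-to-image encoding; one widely used member of that encoding family is the Gramian Angular Field, sketched below (an illustration of the idea, not necessarily the exact variant the authors used):</p>



<pre class="wp-block-code"><code>import numpy as np

def gasf(series: np.ndarray) -> np.ndarray:
    """Gramian Angular Summation Field: turn a 1-D series into an image."""
    # Rescale to [-1, 1] so each value can be read as the cosine of an angle.
    lo, hi = series.min(), series.max()
    x = (2 * series - hi - lo) / (hi - lo)
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # Pixel (i, j) encodes the angular sum cos(phi_i + phi_j), so every
    # pixel mixes information from two time steps.
    return np.cos(phi[:, None] + phi[None, :])

prices = 100 + np.cumsum(np.random.default_rng(0).normal(size=30))
image = gasf(prices)  # a 30 x 30 "picture" of the 30-day price window;
print(image.shape)    # images from several periods can be stacked as
                      # channels, layering current over past market data
</code></pre>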



<p>To train their CNN for the experiment, the research team used S&amp;P 500 data from 2009 to 2016. The S&amp;P 500 is widely regarded as a litmus test for the health of the overall global market.</p>



<p>At first, their proposed trading system predicted the market with about 50 percent&nbsp;accuracy, roughly enough to break even in a real-world situation. They discovered that short-term outliers, which unexpectedly over- or underperformed, generated a factor they called “randomness.” Realizing this, they added threshold controls, which greatly stabilized their method.</p>



<p>“The mitigation of randomness yields two simple, but significant consequences,” Prof. Barra said. “When we lose, we tend to lose very little, and when we win, we tend to win considerably.”</p>



<p>Further enhancements will be needed, according to Prof. Barra, as other methods of automated trading already in use make markets more and more difficult to predict.</p>



<p>Reference: “Deep Learning and Time Series-to-Image Encoding for Financial Forecasting” by Silvio Barra, Salvatore Mario Carta, Andrea Corriga, Alessandro Sebastian Podda and Diego Reforgiato Recupero, May 2020, IEEE/CAA Journal of Automatica Sinica.</p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/">AI System – Using Neural Networks With Deep Learning – Beats Stock Market in Simulation</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>XMOS and Plumerai partner to accelerate binarised neural networks</title>
		<link>https://www.aiuniverse.xyz/xmos-and-plumerai-partner-to-accelerate-binarised-neural-networks/</link>
					<comments>https://www.aiuniverse.xyz/xmos-and-plumerai-partner-to-accelerate-binarised-neural-networks/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 03 Apr 2020 05:55:10 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[accelerate]]></category>
		<category><![CDATA[neural networks]]></category>
		<category><![CDATA[Plumerai]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[XMOS]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7911</guid>

					<description><![CDATA[<p>Source: newelectronics.co.uk British technology companies XMOS and Plumerai have agreed a new strategic partnership that will support the development of binarised neural network (BNN) capabilities, enabling AI <a class="read-more-link" href="https://www.aiuniverse.xyz/xmos-and-plumerai-partner-to-accelerate-binarised-neural-networks/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/xmos-and-plumerai-partner-to-accelerate-binarised-neural-networks/">XMOS and Plumerai partner to accelerate binarised neural networks</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: newelectronics.co.uk</p>



<p>British technology companies XMOS and Plumerai have agreed a new strategic partnership that will support the development of binarised neural network (BNN) capabilities, enabling AI to be embedded in a wide range of everyday devices efficiently at low power and at low cost.</p>



<p>The partnership combines Plumerai’s Larq software library for training BNNs and the xcore.ai crossover processor from XMOS, which provides native support for inference of BNNs. The combination is intended to deliver a BNN capability that’s 2 to 4x more efficient than existing edge AI solutions.</p>



<p>This solution will enable a new generation of devices, supporting everything from confirming that a shopping package has been delivered to a safe place, to managing traffic flows more efficiently, to remote healthcare applications, to keeping store shelves stocked. While BNNs are an emerging technology, their future potential is said to be enormous.</p>



<p>A typical application uses deep learning models with tens of millions of parameters — and despite the move to 16-bit and 8-bit encoding there is still an insatiable demand to increase the speed and efficiency of deep learning and AI systems.</p>



<p>BNNs are seen as the most efficient form of deep learning, offering to transform the economics and efficiency of edge intelligence by going all the way down to just a single bit. However, there are significant challenges involved in making BNNs commercially viable — for example, they demand specific attention in chip design for efficient inference and new software algorithms for training.</p>
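


<p>The efficiency claim has a concrete mechanical basis: with weights and activations constrained to +1/-1, a dot product collapses to an XNOR followed by a bit count, replacing multipliers entirely. A toy sketch:</p>



<pre class="wp-block-code"><code>def binarize(values):
    # Map real values to bits: 1 stands for +1, 0 stands for -1.
    return [1 if v >= 0 else 0 for v in values]

def bnn_dot(x_bits, w_bits):
    # XNOR counts agreements between the two sign vectors; each agreement
    # contributes +1 and each disagreement -1, so the +/-1 dot product is
    # matches - mismatches = 2 * matches - n. No multiplications needed.
    matches = sum(1 - (xb ^ wb) for xb, wb in zip(x_bits, w_bits))
    return 2 * matches - len(x_bits)

x = binarize([0.3, -1.2, 0.7, -0.1])   # activations -> [+1, -1, +1, -1]
w = binarize([1.0, -0.5, -0.9, 0.2])   # weights     -> [+1, -1, -1, +1]
print(bnn_dot(x, w))  # 0, matching the full-precision sign dot product
</code></pre>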



<p>XMOS and Plumerai have combined their respective expertise in embedded chip design and deep learning algorithms to enable this breakthrough technology and extend the use of AI.</p>



<p>Commenting Mark Lippett, CEO, XMOS said: “BNNs gained prominence in the news recently with Apple’s purchase of Xnor.ai for a reported $200m. It’s little surprise that Apple is exploring AI capabilities at the edge, with advanced machine learning algorithms that can run efficiently in low-power, offline environments.</p>



<p>“Regardless of other moves in the market, our partnership with Plumerai is exciting for AI developers around the world. The combination of Larq and xcore.ai offers the first consolidated path to commercially deploying BNNs, which will be highly disruptive in intelligent embedded systems.”</p>



<p>Roeland Nusselder, CEO, Plumerai added: “We share XMOS&#8217; excitement about the emerging era of intelligent connectivity. Binarized deep learning has tremendous potential for enabling a new generation of energy-efficient, AI-powered applications. Our two companies are perfectly positioned to turn this potential into reality.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/xmos-and-plumerai-partner-to-accelerate-binarised-neural-networks/">XMOS and Plumerai partner to accelerate binarised neural networks</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/xmos-and-plumerai-partner-to-accelerate-binarised-neural-networks/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence: Moving beyond efficiency gains in drug development</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-moving-beyond-efficiency-gains-in-drug-development/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-moving-beyond-efficiency-gains-in-drug-development/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 16 Mar 2020 07:38:33 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[neural networks]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7466</guid>

					<description><![CDATA[<p>Source: cphi.com The utilisation of artificial intelligence (AI) by the pharmaceutical industry is gathering pace rapidly. The technology has evolved beyond simple neural networks and machine learning <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-moving-beyond-efficiency-gains-in-drug-development/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-moving-beyond-efficiency-gains-in-drug-development/">Artificial Intelligence: Moving beyond efficiency gains in drug development</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: cphi.com</p>



<p>The utilisation of artificial intelligence (AI) by the pharmaceutical industry is gathering pace rapidly. The technology has evolved beyond simple neural networks and machine learning towards a deep learning approach geared towards producing insights which should in theory enable better business decision&nbsp;making. AI’s remit is largely to tackle complex problems in ways similar to human logic and reasoning, and it is already being used successfully within the&nbsp;healthcare arena to mine large amounts of patient data, to design treatment plans and to develop drugs.</p>



<p>The potential for transformational impact is huge all along the product lifecycle; AI machine learning algorithms can enhance and speed up manufacturing processes by increasing efficiencies and reducing waste. In the drug discovery process, AI is also being used for drug targeting, biomarker identification and predicting success-rate probabilities. Earlier this year, drug discovery firm Exscientia and Japanese company Sumitomo Dainippon Pharma heralded their drug&nbsp;candidate for obsessive compulsive disorder as the world’s first AI-generated compound to make it into clinical trials.</p>



<p>And in a recent collaboration with the German Research Center for Artificial Intelligence, biopharma services company Sartorius has established the Sartorius AI Lab (SAIL) with the aim of developing machine learning and image and pattern recognition processes for life science applications. For example, researchers are working on new deep learning algorithms and methods for image recognition of cells and organoids, analysis and modelling of biological systems, and for simulation and optimization of biopharmaceutical production processes.</p>



<p>“Our aim is to use better methods of data analysis and the increasing computer capacities to map and simulate the development and production of biopharmaceuticals in computers in the future,” says Oscar-Werner Reif, chief technology officer at Sartorius. “As a result, development times and costs for new therapies will improve dramatically by an accelerated timeline from idea to patient.”</p>



<p><strong>Improving clinical trial outcomes</strong></p>



<p>With regard to the drug development phase, one of the most exciting and more developed areas where AI is already making a big difference is clinical trials; it can be used not only to process and interpret mega-volumes of clinical trial data but also to more successfully identify and recruit appropriate trial candidates and ultimately improve outcomes.</p>



<p>The net effects of these efforts should be faster research, more logical cross-referencing of data and hopefully reduced drug approval timelines, which can result in cost savings for the industry and a wider choice of treatments&nbsp;for patients.</p>



<p>According to Charles Fisher, founder and CEO at Unlearn.AI, the sky is the limit when it comes to the adoption of AI in pharma: “We’ve only just begun to scratch the surface. Part of the reason is the adoption of technology in general within biopharma always tends to lag 10-15 years behind.”</p>



<p>He explains that up till now, most AI solutions have been designed to tackle the challenges that pure tech companies such as Google and Facebook face like image processing and improving the playability of video games.</p>



<p>“We will need to develop new AI solutions to solve pharma’s problems instead of taking things like image processing and trying to shove them on top of problems they weren’t designed to solve,” he says.</p>



<p><strong>Blending pharma and tech expertise</strong></p>



<p>Whereas it would be harsh to describe the pharma-tech marriage as a dichotomy, there is growing consensus that AI start-ups working on drug development solutions need to fully understand the big pharma environment in order to succeed.</p>



<p>Ketan Patel, product director, portfolio, licensing and clinical at Clarivate Analytics argues that there are two broad camps of AI&nbsp;companies within healthcare.</p>



<p>“Firstly, there are those who have typically come out of Silicon Valley and basically want to throw AI at anything that’s potentially interesting &#8211; they have a technology hammer and they want to find a nail,” he says. “And then there is a second set of companies who have both AI and life sciences expertise and they’re invariably founded by someone from a healthcare/ex-pharma background.”</p>



<p>Patel says this second group of firms, which tend to be biotech-like in nature, is more interesting as it is applying AI to developing new drugs, doing lead generation, and designing new molecules much quicker than humans are capable of.</p>



<p>The key is finding the sweet spot of bringing together real pharma and real AI expertise on an equal footing, says Fisher.</p>



<p><strong>Identifying the value-add</strong></p>



<p>As has been previously noted, AI’s ability to effectively mine large volumes of clinical data makes it the ideal tool to make clinical trials faster and more efficient.</p>



<p>However, as Patel argues, the differentiator when leveraging this technology is using it to go beyond merely speeding up trials and focusing on areas where it can provide a real tangible value-add.</p>



<p>“A lot of AI is designed around making trials more efficient; however, to really upend the trial paradigm you want to get away from a patient visiting a doctor at a clinic or investigational site,” he says. “That then opens up the ballgame to a whole new area of clinical trials – siteless or virtual clinical trials.”</p>



<p>He explains that a wealth of data can be gleaned from worn sensors and patients’ electronic medical records; then AI can be used to make sense of which of that data is pertinent to the clinical trial and to include in statistical analysis.</p>



<p>“It’s also about having that kind of adaptive capability, so that instead of a human being phoning up the patient every day, an app on the patient’s mobile phone reminds them to take their medicine if it notices that the medication hasn’t been opened because a sensor on the packet hasn’t been triggered,” he adds.</p>
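<p>As a rough sketch of the adaptive loop Patel describes, the logic can be as simple as comparing the last packet-open event against the dose schedule. The Python below is purely illustrative; the sensor feed, the grace period and every name in it are assumptions, not details from the interview:</p>



<pre class="wp-block-code"><code>from datetime import datetime, timedelta

# How long past the scheduled dose we wait before nudging the patient
DOSE_GRACE_PERIOD = timedelta(hours=2)

def needs_reminder(last_open_event, scheduled_dose_time, now=None):
    """Return True if the packet sensor suggests a missed dose.

    last_open_event -- timestamp of the most recent packet-open event,
                       or None if the sensor has never fired.
    """
    now = now or datetime.utcnow()
    if now - scheduled_dose_time > DOSE_GRACE_PERIOD:
        # Dose window has passed; check that no open event has arrived
        # since the dose was due
        if last_open_event is None or scheduled_dose_time > last_open_event:
            return True
    return False

# Example: dose due at 09:00, packet last opened the previous evening
print(needs_reminder(
    last_open_event=datetime(2020, 1, 27, 21, 30),
    scheduled_dose_time=datetime(2020, 1, 28, 9, 0),
    now=datetime(2020, 1, 28, 12, 0),
))  # True – the app should send a push notification</code></pre>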



<p>Fisher at Unlearn highlights three ways of improving the quality of trials using AI. Firstly, he suggests pharma companies should move away from the silo approach and aggregate databases of historical trials “so in principle you should be making better decisions about whether those drugs are safe and effective.”</p>



<p>Secondly, he says AI can help pharma companies make better use of what they are observing in their clinical trials, where they are usually working with a homogeneous patient population: “you want to know how it translates to the real world&nbsp;when you start actually prescribing the drug to patients who have comorbidities or who maybe don’t have exactly the same condition as what they’re testing the drug in.”</p>



<p>Finally, he says AI can be effective in identifying the right patients for clinical trials – those who are more likely to benefit from the therapy being tested.</p>



<p><strong>Regulatory framework?</strong></p>



<p>As the use of AI in healthcare becomes more widespread, it is inevitable that the clarion call for more stringent regulation will only get louder. Fisher believes that some existing regulatory frameworks, such as that for drug development, could be adapted for new software and AI-based healthcare tools, and that we are likely to see regulatory agencies follow this course over the next few years.</p>



<p>He describes a tricky balancing act: ensuring that these solutions – which can be updated rapidly and frequently – do not escape regulatory scrutiny, while still encouraging developers to make swift improvements for the benefit of patients.</p>



<p>“It’s the ability to make these tools better much more rapidly than you can make a drug better that makes them so attractive, so you need to find some balance between those things,” he says.</p>



<p>Patel says that with health interventions becoming more digital in nature, “the FDA does need to take into account regulation of these kinds of software as a medical device because they are effectively being used in place of a drug.”</p>



<p>He adds that regulation should ensure that the data used to approve devices is very solid and that the collection systems cannot be tampered with: “You have to use good research, clinical practices and signed consent forms.”</p>



<p><strong>Looking to the future</strong></p>



<p>According to Patel, the future for AI in drug development looks extremely exciting, particularly with regard to imaging data. It can be used both to perform better diagnostics, securing the right clinical trial patients in areas such as oncology, and to establish a drug candidate’s effectiveness much more quickly – areas he says are gaining great traction.</p>



<p>Fisher concludes that while the first real inroads of AI use in pharma have been made in marketing and commercialisation, it will not be long before the first R&amp;D-based start-up wave begins to bear fruit: “We’re going to see custom-built tools that were designed to solve problems in pharma R&amp;D and that’s when we’re really going to start to see success.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-moving-beyond-efficiency-gains-in-drug-development/">Artificial Intelligence: Moving beyond efficiency gains in drug development</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-moving-beyond-efficiency-gains-in-drug-development/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How neuro-symbolic AI might finally make machines reason like humans</title>
		<link>https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/</link>
					<comments>https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 28 Jan 2020 09:12:23 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[artificial neural networks]]></category>
		<category><![CDATA[computer science]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[neural networks]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6417</guid>

					<description><![CDATA[<p>Source: zmescience.com If you want a machine to learn to do something intelligent you either have to program it or teach it to learn. For decades, engineers <a class="read-more-link" href="https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/">How neuro-symbolic AI might finally make machines reason like humans</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: zmescience.com</p>



<p>If you want a machine to learn to do something intelligent you either have to program it or teach it to learn.</p>



<p>For decades, engineers have been programming machines to perform all sorts of tasks — from software that runs on your personal computer and smartphone to guidance control for space missions.</p>



<p>But although computers are generally much faster and more precise than the human brain at sequential tasks, such as adding numbers or calculating chess moves, such programs are very limited in their scope. Something as trivial as identifying a bicycle on a crowded pedestrian street, or picking up a hot cup of coffee from a desk and gently moving it to the mouth, can send a computer into convulsions – never mind conceptualizing or abstraction (such as designing a computer itself).</p>



<p>The gist is that humans were never programmed (not like a digital computer, at least) — humans have become intelligent through learning.</p>



<h3 class="wp-block-heading">Intelligent machines</h3>



<p>Do machine learning and deep learning ring a bell? They should. These are not merely buzzwords – they’re techniques that have triggered a renaissance of artificial intelligence, leading to phenomenal advances in self-driving cars, facial recognition and real-time speech translation.</p>



<p>Although AI systems seem to have appeared out of nowhere in the previous decade, the first seeds were laid as early as 1956 by John McCarthy, Claude Shannon, Nathaniel Rochester, and Marvin Minsky at the Dartmouth Conference. Concepts like artificial neural networks, deep learning, and indeed neuro-symbolic AI are not new; scientists have been thinking about how to model computers after the human brain for a very long time. It’s only fairly recently that technology has developed the capability to store huge amounts of data and provide significant processing power, allowing AI systems to finally become practically useful.</p>



<p>But despite impressive advances, deep learning is still very far from replicating human intelligence. Sure, a machine capable of teaching itself to identify skin cancer better than doctors is great, don’t get me wrong, but there are also many flaws and limitations.</p>



<p>One important limitation is that deep learning algorithms and other machine learning neural networks are too narrow.</p>



<p>When you have huge amounts of carefully curated data, you can achieve remarkable things with them, such as superhuman accuracy and speed. AIs have now beaten top human players at a string of landmark games, from chess to Jeopardy! and StarCraft.</p>



<p>However, their utility breaks down once they’re prompted to adapt to a more general task. What’s more, these narrow-focused systems are prone to error: ask a sophisticated modern AI to label unfamiliar objects and it can confidently return interpretations – a “Teddy Bear”, say – that no human would ever offer.</p>



<p>Examples like this illustrate that today’s systems don’t truly understand what they’re looking at. What’s more, artificial neural networks rely on enormous amounts of data for training, which is a huge problem for the industry right now. If computational demand keeps growing at its current rate, there will come a time when even all the energy that hits the planet from the sun won’t be enough to satiate our computing machines. Even so, despite being fed millions of pictures of animals, a machine can still mistake a furry cup for a teddy bear.</p>



<p>Meanwhile, the human brain can recognize and label objects effortlessly and with minimal training – often a single picture is enough. Show a child a picture of an elephant, the very first time they have ever seen one, and that child will instantly recognize a) that it is an animal and b) that it is an elephant, and will identify the animal the next time they come across it, whether in real life or in a picture.</p>



<p>This is why we need a middle ground – a broad AI that can multi-task and cover multiple domains, but which can also read data from a variety of sources (text, video, audio, etc.), whether structured or unstructured. Enter the world of neuro-symbolic AI.</p>



<p>David Cox is the head of the MIT-IBM Watson AI Lab, a collaboration between IBM and MIT that will invest $250 million over ten years to advance fundamental research in artificial intelligence. One important avenue of research is neuro-symbolic AI.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“A&nbsp;neuro-symbolic&nbsp;AI&nbsp;system combines neural networks/deep learning with ideas from&nbsp;symbolic&nbsp;AI.&nbsp;A neural network is a special kind of machine learning algorithm that maps from inputs (like an image of an apple) to outputs (like the label “apple”, in the case of a neural network that recognizes objects).&nbsp;Symbolic&nbsp;AI&nbsp;is different; for instance, it provides a way to express all the knowledge we have about apples: an apple has parts (a stem and a body), it has properties like its color, it has an origin (it comes from an apple tree), and so on,” Cox told ZME Science.</p><p>“Symbolic&nbsp;AI&nbsp;allows you to use logic to reason about entities and their properties and relationships.&nbsp;Neuro-symbolic&nbsp;systems combine these two kinds of&nbsp;AI, using neural networks to bridge from the messiness of the real world to the world of symbols, and the two kinds of&nbsp;AI&nbsp;in many ways complement each other’s strengths and weaknesses.&nbsp;I think that any meaningful step toward general&nbsp;AI&nbsp;will have to include symbols or symbol-like representations,” he added.</p></blockquote>



<p>By combining the two approaches, you end up with a system that has neural pattern recognition allowing it to&nbsp;<em>see</em>, while the symbolic part allows the system to&nbsp;<em>logically reason</em>&nbsp;about symbols, objects, and the relationships between them. Taken together, neuro-symbolic AI goes beyond what current deep learning systems are capable of doing.</p>
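<p>A deliberately toy Python sketch makes this division of labour concrete. Here a stubbed-out classifier stands in for the neural network and a hand-written dictionary stands in for the symbolic knowledge base; none of this reflects the MIT-IBM lab’s actual code:</p>



<pre class="wp-block-code"><code># Toy neuro-symbolic pipeline: a (stubbed) neural recognizer maps the
# messy world to a symbol; a symbolic knowledge base then supports
# reasoning about that symbol's parts, properties and relationships.

KNOWLEDGE_BASE = {
    "apple": {
        "is_a": "fruit",
        "parts": ["stem", "body"],
        "colors": ["red", "green"],
        "comes_from": "apple tree",
    },
}

def neural_recognizer(image):
    """Stand-in for a trained network mapping pixels to a label."""
    return "apple"  # a real system would run a CNN here

def symbolic_lookup(symbol, relation):
    """Answer a query by consulting the stored facts for a symbol."""
    return KNOWLEDGE_BASE.get(symbol, {}).get(relation, "unknown")

image = object()                              # placeholder for pixel data
symbol = neural_recognizer(image)             # messy input to clean symbol
print(symbolic_lookup(symbol, "parts"))       # ['stem', 'body']
print(symbolic_lookup(symbol, "comes_from"))  # apple tree</code></pre>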



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“One of the reasons why humans are able to work with so few examples of a new thing is that we are able to break down an object into its parts and properties and then to reason about them.&nbsp;Many of today’s neural networks try to go straight from inputs (e.g. images of elephants) to outputs (e.g. the label “elephant”), with a black box in between.&nbsp;We think it is important to step through an intermediate stage where we decompose the scene into a structured,&nbsp;symbolic&nbsp;representation of parts, properties, and relationships,” Cox told ZME Science.</p></blockquote>



<p>Consider questions about a simple visual scene, such as “What is the color of the sphere?” These are trivial for a human child to answer but can be highly challenging for AI systems predicated solely on neural networks.</p>



<p>Neural networks are trained to identify the objects in a scene and to interpret the natural language of various questions and answers (e.g. “What is the color of the sphere?”). The symbolic side recognizes concepts such as “objects,” “object attributes,” and “spatial relationships,” and uses this capability to answer questions about novel scenes that the AI has never encountered.</p>



<p>A neuro-symbolic system, therefore, applies logic and language processing to answer the question in a similar way to how a human would reason. An example of such a computer program is the neuro-symbolic concept learner (NS-CL), created at the MIT-IBM lab by a team led by Josh Tenenbaum, a professor at MIT’s Center for Brains, Minds, and Machines.</p>
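<p>In code, the symbolic half of such a pipeline can be surprisingly compact. The sketch below is a hypothetical illustration in the spirit of NS-CL, not its actual implementation; the scene list and the two-step program stand in for the outputs of the neural perception and language-parsing modules:</p>



<pre class="wp-block-code"><code># Pretend output of a neural scene parser: objects with attributes
scene = [
    {"shape": "sphere", "color": "red",  "size": "large"},
    {"shape": "cube",   "color": "blue", "size": "small"},
]

# "What is the color of the sphere?" parsed into a tiny program
program = [("filter", "shape", "sphere"), ("query", "color")]

def execute(program, objects):
    """Run a symbolic program step by step over the scene."""
    for op, *args in program:
        if op == "filter":        # keep only matching objects
            attr, value = args
            objects = [o for o in objects if o[attr] == value]
        elif op == "query":       # read off an attribute
            (attr,) = args
            return [o[attr] for o in objects]
    return objects

print(execute(program, scene))  # ['red']</code></pre>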



<p>You could achieve a similar result to that of a neuro-symbolic system solely using neural networks, but the training data would have to be immense. Moreover, there’s always the risk that outlier cases, for which there is little or no training data, are answered poorly. In contrast, this hybrid approach boasts high data efficiency, in some instances requiring just 1% of the training data other methods need.</p>



<h3 class="wp-block-heading">The next evolution in AI</h3>



<p>Just as deep learning was waiting for data and computing to catch up with its ideas, so has symbolic AI been waiting for neural networks to mature. And now that the two complementary technologies are ready to be synced, the industry could be in for another disruption – and things are moving fast.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“We’ve got over 50 collaborative projects running with MIT, all tackling hard questions at the frontiers of AI.&nbsp;We think that neuro-symbolic AI methods are going to be applicable in many areas, including computer vision, robot control, cybersecurity, and a host of other areas.&nbsp;We have projects in all of these areas, and we’ll be excited to share them as they mature,” Cox said.</p></blockquote>



<p>But not everyone is convinced that this is the fastest road to achieving general artificial intelligence.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“I think that symbolic style reasoning is definitely something that is important for AI to capture. But, many people (myself included) believe that human abilities with symbolic logic emerge as a result&nbsp;of training, and are not convinced that an explicitly hard-wiring in symbolic systems is the right approach. I am more inclined to think that we should try to design artificial neural networks (ANNs) that can learn how to do symbolic processing. The reason is this: it is hard to know what should be represented by a symbol, predicate, etc., and&nbsp;I think we have to be able to learn that, so hard-wiring the system in this way is maybe not a good idea,” Blake Richards, who is an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University, told ZME Science.</p></blockquote>



<p>Irina Rish, an Associate Professor in the Computer Science and Operations Research department at the Université de Montréal (UdeM), agrees that neuro-symbolic AI is worth pursuing but believes that “growing” symbolic reasoning out of neural networks may be more effective in the long run.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“We all agree that deep learning in its current form has many limitations including the need for large datasets. However, this can be either viewed as criticism of deep learning or the plan for future expansion of today’s deep learning towards more capabilities,” Rish said.</p></blockquote>



<p>Rish sees current limitations surrounding ANNs as a ‘to-do’ list rather than a hard ceiling. Their dependence on large datasets for training can be mitigated by meta- and transfer-learning, for instance. What’s more, the researcher argues that many assumptions in the community about how to model human learning are rather flawed, calling for more interdisciplinary research.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“A common argument about “babies learning from a few samples unlike deep networks” is fundamentally flawed since it is unfair to compare an artificial neural network trained from scratch (random initialization, some ad-hoc architectures) with a highly structured, far-from-randomly initialized neural networks in baby’s brains,&nbsp; incorporating prior knowledge about the world, from millions of years of evolution in varying environments. Thus, more and more people in the deep learning community now believe that we must focus more on interdisciplinary research on the intersection of AI and other disciplines that have been studying brain and minds for centuries, including neuroscience, biology, cognitive psychology, philosophy, and related disciplines,” she said.</p></blockquote>



<p>Rish points to exciting recent research that focuses on “developing next-generation network-communication based intelligent machines driven by the evolution of more complex behavior in networks of communicating units.” Rish believes that AI is naturally headed towards further automation of AI development, away from hard-coded models. In the future, AI systems will also be more bio-inspired and feature more dedicated hardware such as neuromorphic and quantum devices.</p>



<p>“The general trend in AI and in computing as a whole,&nbsp;towards further and further automation and replacing hard-coded approaches with automatically learned ones, seems to be the way to go,” she added.</p>



<p>For now, neuro-symbolic AI combines the best of both worlds in innovative ways by enabling systems to have both visual perception and logical reasoning. And, who knows, maybe this avenue of research might one day bring us closer to a form of intelligence that seems more like our own.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/">How neuro-symbolic AI might finally make machines reason like humans</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
