<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>artificial neural networks Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/artificial-neural-networks/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/artificial-neural-networks/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 25 Sep 2020 06:52:58 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Deep learning helps explore the structural and strategic bases of autism?</title>
		<link>https://www.aiuniverse.xyz/deep-learning-helps-explore-the-structural-and-strategic-bases-of-autism/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-helps-explore-the-structural-and-strategic-bases-of-autism/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 25 Sep 2020 06:12:02 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[artificial neural networks]]></category>
		<category><![CDATA[ASD]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[human brain]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11747</guid>

					<description><![CDATA[<p>Source: medicalxpress.com Psychiatrists typically diagnose autism spectrum disorders (ASD) by observing a person&#8217;s behavior and by leaning on the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-helps-explore-the-structural-and-strategic-bases-of-autism/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-helps-explore-the-structural-and-strategic-bases-of-autism/">Deep learning helps explore the structural and strategic bases of autism?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: medicalxpress.com</p>



<p>Psychiatrists typically diagnose autism spectrum disorders (ASD) by observing a person&#8217;s behavior and by leaning on the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), widely considered the &#8216;bible&#8217; of mental health diagnosis.</p>



<p>However, there are substantial differences among individuals on the spectrum, and a great deal remains unknown about the causes of autism, or even what autism is. As a result, accurately diagnosing ASD and predicting a prognosis for patients can be extremely difficult.</p>



<p>But what if&nbsp;artificial intelligence&nbsp;(AI) could help? Deep learning, a type of AI, deploys&nbsp;artificial neural networks&nbsp;based on the&nbsp;human brain&nbsp;to recognize patterns in a way that is akin to, and in some cases can surpass, human ability. The technique, or rather suite of techniques, has enjoyed remarkable success in recent years in fields as diverse as voice recognition, translation, autonomous vehicles, and drug discovery.</p>



<p>A group of researchers from KAIST, in collaboration with the Yonsei University College of Medicine, has applied these deep learning techniques to autism diagnosis. Their findings were published on August 14 in the journal IEEE Access.</p>



<p>Magnetic resonance imaging (MRI) scans of brains of people known to have autism have been used by researchers and clinicians to try to identify structures of the brain they believed were associated with ASD. These researchers have achieved considerable success in identifying abnormal gray and white matter volume and irregularities in cerebral cortex activation and connections as being associated with the condition.</p>



<p>These findings have subsequently been deployed in studies attempting to diagnose patients more consistently than psychiatrists can through observation during counseling sessions. While such studies have reported high levels of diagnostic accuracy, the number of participants has been small, often under 50, and diagnostic performance drops markedly when the methods are applied to larger samples or to datasets that include people from a wide variety of populations and locations.</p>



<p>&#8220;There was something as to what defines autism that human researchers and clinicians must have been overlooking,&#8221; said Keun-Ah Cheon, one of the two corresponding authors and a professor in the Department of Child and Adolescent Psychiatry at Severance Hospital of the Yonsei University College of Medicine.</p>



<p>&#8220;And humans poring over thousands of MRI scans won&#8217;t be able to pick up on what we&#8217;ve been missing,&#8221; she continued. &#8220;But we thought AI might be able to.&#8221;</p>



<p>So the team applied five different categories of deep learning models to an open-source dataset of more than 1,000 MRI scans from the Autism Brain Imaging Data Exchange (ABIDE) initiative, which has collected brain imaging data from laboratories around the world, and to a smaller, but higher-resolution MRI image dataset (84 images) taken from the Child Psychiatric Clinic at Severance Hospital, Yonsei University College of Medicine. In both cases, the researchers used both structural MRIs (examining the anatomy of the brain) and functional MRIs (examining brain activity in different regions).</p>



<p>The models allowed the team to explore the structural bases of ASD brain region by brain region, focusing in particular on many structures below the cerebral cortex, including the basal ganglia, which are involved in motor function (movement) as well as learning and memory.</p>



<p>Crucially, these specific types of deep learning models also offered possible explanations of how the AI had arrived at its findings.</p>



<p>&#8220;Understanding the way that the AI has classified these&nbsp;brain&nbsp;structures and dynamics is extremely important,&#8221; said Sang Wan Lee, the other corresponding author and an associate professor at KAIST. &#8220;It&#8217;s no good if a doctor can tell a patient that the computer says they have autism, but not be able to say why the computer knows that.&#8221;</p>



<p>The deep learning models were also able to describe how much a particular feature contributed to an ASD classification, an analysis that can assist psychiatric physicians in gauging the severity of a patient&#8217;s autism during diagnosis.</p>



<p>&#8220;Doctors should be able to use this to offer a personalized diagnosis for patients, including a prognosis of how the condition could develop,&#8221; Lee said.</p>



<p>&#8220;Artificial intelligence is not going to put psychiatrists out of a job,&#8221; he explained. &#8220;But using AI as a tool should enable doctors to better understand and diagnose complex disorders than they could do on their own.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-helps-explore-the-structural-and-strategic-bases-of-autism/">Deep learning helps explore the structural and strategic bases of autism?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-helps-explore-the-structural-and-strategic-bases-of-autism/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>DECODING THE LINK BETWEEN ARTIFICIAL NEURAL NETWORKS AND DEEP LEARNING ALGORITHMS</title>
		<link>https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/</link>
					<comments>https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 24 Jun 2020 07:50:53 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[artificial neural networks]]></category>
		<category><![CDATA[DECODING]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9750</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net The idea of creating intelligent systems has always fascinated data science professionals. The advent of computers and technology uplifts the notion that an algorithm that can <a class="read-more-link" href="https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/">DECODING THE LINK BETWEEN ARTIFICIAL NEURAL NETWORKS AND DEEP LEARNING ALGORITHMS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>The idea of creating intelligent systems has always fascinated data science professionals. The advent of computers and technology gave rise to the notion of an algorithm that can learn from itself and adapt to changing model inputs. Self-learning algorithms that supply data science with valuable information remain largely uncharted territory, one that AI-powered neural networks are set to explore further, courtesy of the growing interest of professionals and technology experts alike.</p>



<h4 class="wp-block-heading"><strong>Understanding Artificial Neural Networks (ANNs)</strong></h4>



<p>To understand the complexities of Artificial Neural Networks (ANNs), let’s first decode how our brain learns and relearns from different experiences. The human brain is made up of networks of interconnected cells called neurons, which process different pieces of information. Think of a hierarchy pyramid: the brain is composed of different levels, and each level is responsible for decoding and understanding information from the surroundings.</p>



<p>As information passes through the hierarchically arranged layers, each layer of neurons interprets and processes it, gathers insight, and passes the result to the next layer in the hierarchy, so that the information reaching the pinnacle of the pyramid is accurate and free of bias.</p>



<p>Let’s understand the Artificial Neural Network through food!</p>



<p>For example, when you get a whiff of something delicious, say a loaf of banana bread with chocolate chips baking, your brain may process the information as… ‘I smell banana bread and chocolate chips,’ (that’s your data input) … ‘I love banana bread with chocolate chips!’ (thought) … ‘I’ll eat a lot of banana bread with chocolate chips’ (decision making) … ‘Oh, but they add to calories, I promised to go on a diet’ (memory) … ‘But, one slice won’t hurt?’ (reasoning) ‘I will have one slice for sure!’ (final course of action).</p>



<p>Likewise, ANNs seek to simulate information passing through layers of interconnected nodes, letting the network learn and make decisions in a realistically human-like manner. This layered approach to processing information is what ANNs strive to replicate. The human brain is complex and replicating it is a tough task; in its simplest form, however, an ANN comprises three layers of neurons:</p>



<p>1. The input layer (for data input)</p>



<p>2. The hidden layer (information processing layer)</p>



<p>3. The output layer (decision-making step).</p>



<p>A lot happens in the hidden layer, often called the black box of ANN decision making. The black box can stack multiple hidden layers through which information flows from one layer to the next, just like inside the human brain.</p>
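<p>The three-layer structure described above can be sketched in a few lines of code. The following is a minimal illustration, not a trained model: the layer sizes and the randomly initialised weights are arbitrary stand-ins for parameters a real network would learn from data.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary layer sizes, for illustration only.
n_input, n_hidden, n_output = 4, 5, 3

# Random weights stand in for parameters a real network would learn.
W1 = rng.normal(size=(n_input, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_output)); b2 = np.zeros(n_output)

def forward(x):
    """Input layer -> hidden layer -> output layer."""
    hidden = np.tanh(x @ W1 + b1)      # hidden layer: information processing
    scores = hidden @ W2 + b2          # output layer: decision scores
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()             # softmax: one probability per decision

probs = forward(rng.normal(size=n_input))
print(probs.shape)                     # (3,)
print(round(float(probs.sum()), 6))    # 1.0
```

<p>Deep learning libraries automate exactly this pattern, stacking many such layers and learning the weights from examples.</p>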



<h4 class="wp-block-heading"><strong>Comprehending Deep Learning Algorithms</strong></h4>



<p>Deep learning seeks to understand what exactly happens within those hidden layers of the ANN. Representing the very cutting edge of Artificial Intelligence (AI), an algorithm trains itself to process and learn from data that is injected into the model in the input layer.</p>



<p>How is that possible? Thanks to the hidden layers of the ANN, which together form what is also called a ‘deep neural network’ (DNN), or in simple words, deep learning. It is a self-teaching approach that filters information through multiple hidden layers, much as a human mind does. Here are some interesting concepts and viewpoints on deep learning:</p>



<p>• Goodfellow, Bengio and Courville explained that while shallow neural networks can be trained to handle complex problems, deep networks gain accuracy as more layers of neurons are added to the information hierarchy.</p>



<p>•&nbsp;These additional layers can yield maximum accuracy up to the 9<sup>th</sup>&nbsp;or 10<sup>th</sup>&nbsp;layer, after which their predictive power declines.</p>



<p>•&nbsp;At present, most ANN implementations deploy a maximum of 3-10 hidden layers.</p>
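<p>The points above make depth a design parameter. As a rough sketch (with made-up layer sizes, not a recommendation), a fully connected network of any depth can be built and run like this:</p>

```python
import numpy as np

rng = np.random.default_rng(1)

def build_network(sizes):
    """Return (weights, biases) for a fully connected stack of layers."""
    weights = [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(sizes, sizes[1:])]
    biases = [np.zeros(b) for b in sizes[1:]]
    return weights, biases

def forward(x, weights, biases):
    # Each layer re-represents its input before handing it on.
    for W, b in zip(weights, biases):
        x = np.tanh(x @ W + b)
    return x

# A "deep" network with five hidden layers of 16 units each; the depth is a
# free choice within the 3-10 range mentioned above.
sizes = [8] + [16] * 5 + [2]
weights, biases = build_network(sizes)
out = forward(rng.normal(size=8), weights, biases)
print(out.shape)  # (2,)
```

<p>Swapping the `sizes` list changes the depth and width without touching the rest of the code, which is essentially how modern frameworks expose the same choice.</p>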



<h4 class="wp-block-heading"><strong>Bridging the Gap between ANNs and Deep Learning</strong></h4>



<p>To make DNNs “learn” increasingly complex functions for accurate prediction and classification, several mechanisms run behind the black box; adding more layers to the hidden stack is one of them. More layers and more neurons do yield more complex models with greater accuracy, but at the same time, data science experts must weigh the cost and time of model building.</p>



<p>The tech world is looking forward to achieving the right balance of time, cost, and accuracy in building deep neural network models, so that complex classification and prediction tasks can be solved in a jiffy.</p>
<p>The post <a href="https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/">DECODING THE LINK BETWEEN ARTIFICIAL NEURAL NETWORKS AND DEEP LEARNING ALGORITHMS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning Used to Find Disease-Related Genes</title>
		<link>https://www.aiuniverse.xyz/deep-learning-used-to-find-disease-related-genes/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-used-to-find-disease-related-genes/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 19 Feb 2020 06:44:02 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[artificial neural networks]]></category>
		<category><![CDATA[Biology]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6893</guid>

					<description><![CDATA[<p>Source: unite.ai A new study led by researchers at Linköping University demonstrates how an artificial neural network (ANN) can reveal large amounts of gene expression data, and it can <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-used-to-find-disease-related-genes/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-used-to-find-disease-related-genes/">Deep Learning Used to Find Disease-Related Genes</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: unite.ai</p>



<p>A new study led by researchers at Linköping University demonstrates how an artificial neural network (ANN) can uncover structure in large amounts of gene expression data, which can lead to the discovery of groups of disease-related genes. The study was published in Nature Communications, and the scientists want the method to be applied within precision medicine and individualized treatment.</p>



<p>Scientists are currently developing maps of biological networks based on how different proteins or genes interact with each other. The new study uses artificial intelligence (AI) to find out whether such biological networks can be discovered through deep learning. Artificial neural networks, which are trained on experimental data, are able to find patterns within massive amounts of complex data, which is why they are often used in applications such as image recognition. Yet despite this seemingly enormous potential, the method has seen limited use within biological research.</p>



<p>Sanjiv Dwivedi is a postdoc in the Department of Physics, Chemistry and Biology (IFM) at Linköping University.</p>



<p>“We have for the first time used deep learning to find disease-related genes. This is a very powerful method in the analysis of huge amounts of biological information, or ‘big data’,” says Dwivedi.</p>



<p>The scientists relied on a large database with information regarding the expression patterns of 20,000 genes in a large number of people. The artificial neural network was not told which gene expression patterns were from people with diseases, or which ones were from healthy individuals. The AI model was then trained to find patterns of gene expression.</p>
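<p>To illustrate the idea of training on unlabelled expression data, here is a toy sketch, not the study&#8217;s actual model: a linear autoencoder that compresses synthetic &#8220;gene expression&#8221; values through a small hidden layer and learns to reconstruct them. Every size and number here is invented for the example (the real study used around 20,000 genes).</p>

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for an expression matrix: 200 "people" x 50 "genes",
# generated so that genes vary together in a few correlated groups.
n_samples, n_genes, n_hidden = 200, 50, 8
latent = rng.normal(size=(n_samples, n_hidden))
X = latent @ rng.normal(size=(n_hidden, n_genes))

# Linear autoencoder: compress each sample to n_hidden numbers, reconstruct.
# No disease labels are used anywhere -- training is purely unsupervised.
W_enc = rng.normal(scale=0.1, size=(n_genes, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_genes))

initial_loss = np.mean((X @ W_enc @ W_dec - X) ** 2)
lr = 1e-3
for _ in range(500):
    H = X @ W_enc                                   # hidden representation
    err = H @ W_dec - X                             # reconstruction error
    W_dec -= lr * H.T @ err / n_samples             # gradient descent step
    W_enc -= lr * X.T @ (err @ W_dec.T) / n_samples

final_loss = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(final_loss < initial_loss)  # True: the hidden layer captured structure
```

<p>The hidden activations `H` are the kind of learned representation the researchers then inspected for biologically meaningful groupings.</p>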



<p>One of the mysteries surrounding machine learning is that it is currently impossible to see how an artificial neural network gets to its final result. It is only possible to see the information that goes in and the information that is produced, but everything that happens in-between consists of several layers of mathematically processed information. These inner workings of an artificial neural network are not yet able to be deciphered. The scientists wanted to know if there were any similarities between the designs of the neural network and the familiar biological networks.&nbsp;</p>



<p>Mike Gustafsson is a senior lecturer at IFM and leads the study.&nbsp;</p>



<p>“When we analysed our neural network, it turned out that the first hidden layer represented to a large extent interactions between various proteins. Deeper in the model, in contrast, on the third level, we found groups of different cell types. It’s extremely interesting that this type of biologically relevant grouping is automatically produced, given that our network has started from unclassified gene expression data,” says Gustafsson.</p>



<p>The scientists then wanted to know if their model of gene expression was capable of being used to determine which gene expression patterns are associated with disease and which are normal. They were able to confirm that the model can discover relative patterns that agree with biological mechanisms in the body. Another discovery was that the artificial neural network could possibly discover brand new patterns since it was trained with unclassified data. The researchers will now investigate previously unknown patterns and whether they are relevant within biology.&nbsp;</p>



<p>“We believe that the key to progress in the field is to understand the neural network. This can teach us new things about biological contexts, such as diseases in which many factors interact. And we believe that our method gives models that are easier to generalise and that can be used for many different types of biological information,” says Gustafsson.</p>



<p>Through collaborations with medical researchers, Gustafsson hopes to apply the method in precision medicine. This could help determine which specific types of medicine patients should receive.</p>



<p>The study was financially supported by the Swedish Foundation for Strategic Research (SSF) and the Swedish Research Council.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-used-to-find-disease-related-genes/">Deep Learning Used to Find Disease-Related Genes</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-used-to-find-disease-related-genes/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How neuro-symbolic AI might finally make machines reason like humans</title>
		<link>https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/</link>
					<comments>https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 28 Jan 2020 09:12:23 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[artificial neural networks]]></category>
		<category><![CDATA[computer science]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[neural networks]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6417</guid>

					<description><![CDATA[<p>Source: zmescience.com If you want a machine to learn to do something intelligent you either have to program it or teach it to learn. For decades, engineers <a class="read-more-link" href="https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/">How neuro-symbolic AI might finally make machines reason like humans</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: zmescience.com</p>



<p>If you want a machine to learn to do something intelligent you either have to program it or teach it to learn.</p>



<p>For decades, engineers have been programming machines to perform all sorts of tasks — from software that runs on your personal computer and smartphone to guidance control for space missions.</p>



<p>But although computers are generally much faster and more precise than the human brain at sequential tasks, such as adding numbers or calculating chess moves, such programs are very limited in their scope. Something as trivial as identifying a bicycle among a crowded pedestrian street or picking up a hot cup of coffee from a desk and gently moving it to the mouth can send a computer into convulsions, nevermind conceptualizing or abstraction (such as designing a computer itself).</p>






<p>The gist is that humans were never programmed (not like a digital computer, at least) — humans have become intelligent through learning.</p>



<h3 class="wp-block-heading">Intelligent machines</h3>



<p>Do machine learning and deep learning ring a bell? They should. These are not merely buzz words — they’re techniques that have literally triggered a renaissance of artificial intelligence leading to phenomenal advances in self-driving cars, facial recognition, or real-time speech translations.</p>



<p>Although AI systems seem to have appeared out of nowhere in the previous decade, the first seeds were laid as early as 1956 by John McCarthy, Claude Shannon, Nathaniel Rochester, and Marvin Minsky at the Dartmouth Conference. Concepts like artificial neural networks, deep learning, but also neuro-symbolic AI are not new — scientists have been thinking about how to model computers after the human brain for a very long time. It’s only fairly recently that technology has developed the capability to store huge amounts of data and significant processing power, allowing AI systems to finally become practically useful.</p>



<p>But despite impressive advances, deep learning is still very far from replicating human intelligence. Sure, a machine capable of teaching itself to identify skin cancer better than doctors is great, don’t get me wrong, but there are also many flaws and limitations.</p>



<p>One important limitation is that deep learning algorithms and other machine learning neural networks are too narrow.</p>



<p>When you have huge amounts of carefully curated data, you can achieve remarkable things with them, such as superhuman accuracy and speed. Right now, AIs have crushed humans at every single important game, from chess to Jeopardy! and Starcraft.</p>



<p>However, their utility breaks down once they’re prompted to adapt to a more general task. What’s more, these narrowly focused systems are prone to error. For instance, take a look at the following picture of a “Teddy Bear” — or at least, that is how a sophisticated modern AI interprets it.</p>



<p>These are just a couple of examples that illustrate that today’s systems don’t truly understand what they’re looking at. And what’s more, artificial neural networks rely on enormous amounts of data in order to train them, which is a huge problem in the industry right now. At the rate at which computational demand is growing, there will come a time when even all the energy that hits the planet from the sun won’t be enough to satiate our computing machines. Even so, despite being fed millions of pictures of animals, a machine can still mistake a furry cup for a teddy bear.</p>



<p>Meanwhile, the human brain can recognize and label objects effortlessly and with minimal training — basically, we only need one picture. Show a child a picture of an elephant, the very first time they have ever seen one, and that child will instantly recognize a) that it is an animal and b) that it is an elephant the next time they come across one, whether in real life or in a picture.</p>



<p>This is why we need a middle ground — a broad AI that can multi-task and cover multiple domains, but which also can read data from a variety of sources (text, video, audio, etc), whether the data is structured or unstructured. Enter the world of neuro-symbolic AI.</p>



<p>David Cox is the head of the MIT-IBM Watson AI Lab, a collaboration between IBM and MIT that will invest $250 million over ten years to advance fundamental research in artificial intelligence. One important avenue of research is neuro-symbolic AI.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“A&nbsp;neuro-symbolic&nbsp;AI&nbsp;system combines neural networks/deep learning with ideas from&nbsp;symbolic&nbsp;AI.&nbsp;A neural network is a special kind of machine learning algorithm that maps from inputs (like an image of an apple) to outputs (like the label “apple”, in the case of a neural network that recognizes objects).&nbsp;Symbolic&nbsp;AI&nbsp;is different; for instance, it provides a way to express all the knowledge we have about apples: an apple has parts (a stem and a body), it has properties like its color, it has an origin (it comes from an apple tree), and so on,” Cox told ZME Science.</p><p>“Symbolic&nbsp;AI&nbsp;allows you to use logic to reason about entities and their properties and relationships.&nbsp;Neuro-symbolic&nbsp;systems combine these two kinds of&nbsp;AI, using neural networks to bridge from the messiness of the real world to the world of symbols, and the two kinds of&nbsp;AI&nbsp;in many ways complement each other’s strengths and weaknesses.&nbsp;I think that any meaningful step toward general&nbsp;AI&nbsp;will have to include symbols or symbol-like representations,” he added.</p></blockquote>



<p>By combining the two approaches, you end up with a system that has neural pattern recognition allowing it to&nbsp;<em>see</em>, while the symbolic part allows the system to&nbsp;<em>logically reason</em>&nbsp;about symbols, objects, and the relationships between them. Taken together, neuro-symbolic AI goes beyond what current deep learning systems are capable of doing.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“One of the reasons why humans are able to work with so few examples of a new thing is that we are able to break down an object into its parts and properties and then to reason about them.&nbsp;Many of today’s neural networks try to go straight from inputs (e.g. images of elephants) to outputs (e.g. the label “elephant”), with a black box in between.&nbsp;We think it is important to step through an intermediate stage where we decompose the scene into a structured,&nbsp;symbolic&nbsp;representation of parts, properties, and relationships,” Cox told ZME Science.</p></blockquote>



<p>Here are some examples of questions that are trivial for a human child to answer but can be highly challenging for AI systems predicated solely on neural networks.</p>



<p>Neural networks are trained to identify objects in a scene and interpret the natural language of various questions and answers (i.e. “What is the color of the sphere?”). The symbolic side recognizes concepts such as “objects,” “object attributes,” and “spatial relationship,” and uses this capability to answer questions about novel scenes that the AI had never encountered.</p>
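<p>As a rough illustration of that division of labour (a toy sketch, not NS-CL itself), the neural half can be stubbed out as a function that emits symbolic object descriptions, with the question then answered by composing symbolic operations over them:</p>

```python
# A stub "perception" step stands in for a neural network: in a real system
# (e.g. NS-CL) a network would extract these attributes from raw pixels.
def perceive(scene_image):
    # Hypothetical output: one symbolic record per detected object.
    return [
        {"shape": "sphere", "color": "red", "size": "small"},
        {"shape": "cube", "color": "blue", "size": "large"},
    ]

# The symbolic side: small, composable filter/query primitives.
def filter_objects(objects, attr, value):
    return [o for o in objects if o[attr] == value]

def query(objects, attr):
    assert len(objects) == 1, "question assumes a unique referent"
    return objects[0][attr]

# "What is the color of the sphere?" as a two-step symbolic program
# running over the perception module's object-level output.
objects = perceive(scene_image=None)
answer = query(filter_objects(objects, "shape", "sphere"), "color")
print(answer)  # red
```

<p>Because the reasoning operates on explicit symbols, each step of the answer can be inspected, which is exactly the explainability advantage described above.</p>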



<p>A neuro-symbolic system, therefore, applies logic and language processing to answer the question in a similar way to how a human would reason. An example of such a computer program is the neuro-symbolic concept learner (NS-CL), created at the MIT-IBM lab by a team led by Josh Tenenbaum, a professor at MIT’s Center for Brains, Minds, and Machines.</p>



<p>You could achieve a similar result to that of a neuro-symbolic system solely using neural networks, but the training data would have to be immense. Moreover, there’s always the risk that outlier cases, for which there is little or no training data, are answered poorly. In contrast, this hybrid approach boasts high data efficiency, in some instances requiring just 1% of the training data other methods need.</p>



<h3 class="wp-block-heading">The next evolution in AI</h3>



<p>Just as deep learning waited for data and computing power to catch up with its ideas, symbolic AI has been waiting for neural networks to mature. And now that the two complementary technologies are ready to be combined, the industry could be in for another disruption — and things are moving fast.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“We’ve got over 50 collaborative projects running with MIT, all tackling hard questions at the frontiers of AI.&nbsp;We think that neuro-symbolic AI methods are going to be applicable in many areas, including computer vision, robot control, cybersecurity, and a host of other areas.&nbsp;We have projects in all of these areas, and we’ll be excited to share them as they mature,” Cox said.</p></blockquote>



<p>But not everyone is convinced that this is the fastest road to achieving general artificial intelligence.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“I think that symbolic style reasoning is definitely something that is important for AI to capture. But, many people (myself included) believe that human abilities with symbolic logic emerge as a result&nbsp;of training, and are not convinced that an explicitly hard-wiring in symbolic systems is the right approach. I am more inclined to think that we should try to design artificial neural networks (ANNs) that can learn how to do symbolic processing. The reason is this: it is hard to know what should be represented by a symbol, predicate, etc., and&nbsp;I think we have to be able to learn that, so hard-wiring the system in this way is maybe not a good idea,” Blake Richards, who is an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University, told ZME Science.</p></blockquote>



<p>Irina Rish, an Associate Professor in the Computer Science and Operations Research department at the Université de Montréal (UdeM), agrees that neuro-symbolic AI is worth pursuing but believes that “growing” symbolic reasoning out of neural networks may prove more effective in the long run.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“We all agree that deep learning in its current form has many limitations including the need for large datasets. However, this can be either viewed as criticism of deep learning or the plan for future expansion of today’s deep learning towards more capabilities,” Rish said.</p></blockquote>



<p>Rish sees current limitations surrounding ANNs as a ‘to-do’ list rather than a hard ceiling. Their dependence on large datasets for training can be mitigated by meta- and transfer-learning, for instance. What’s more, the researcher argues that many assumptions in the community about how to model human learning are rather flawed, calling for more interdisciplinary research.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>“A common argument about “babies learning from a few samples unlike deep networks” is fundamentally flawed since it is unfair to compare an artificial neural network trained from scratch (random initialization, some ad-hoc architectures) with a highly structured, far-from-randomly initialized neural networks in baby’s brains, incorporating prior knowledge about the world, from millions of years of evolution in varying environments. Thus, more and more people in the deep learning community now believe that we must focus more on interdisciplinary research on the intersection of AI and other disciplines that have been studying brain and minds for centuries, including neuroscience, biology, cognitive psychology, philosophy, and related disciplines,” she said.</p></blockquote>



<p>Rish points to exciting recent research that focuses on “developing next-generation network-communication based intelligent machines driven by the evolution of more complex behavior in networks of communicating units.” Rish believes that AI is naturally headed towards further automation of AI development, away from hard-coded models. In the future, AI systems will also be more bio-inspired and feature more dedicated hardware such as neuromorphic and quantum devices.</p>



<p>“The general trend in AI and in computing as a whole,&nbsp;towards further and further automation and replacing hard-coded approaches with automatically learned ones, seems to be the way to go,” she added.</p>



<p>For now, neuro-symbolic AI combines the best of both worlds in innovative ways by enabling systems to have both visual perception and logical reasoning. And, who knows, maybe this avenue of research might one day bring us closer to a form of intelligence that seems more like our own.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/">How neuro-symbolic AI might finally make machines reason like humans</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-neuro-symbolic-ai-might-finally-make-machines-reason-like-humans/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning tackles quantum error correction</title>
		<link>https://www.aiuniverse.xyz/machine-learning-tackles-quantum-error-correction/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-tackles-quantum-error-correction/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 16 Aug 2017 09:32:34 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[artificial neural networks]]></category>
		<category><![CDATA[error correction]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[quantum computing]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=638</guid>

					<description><![CDATA[<p>Source &#8211; phys.org Physicists have applied the ability of machine learning algorithms to learn from experience to one of the biggest challenges currently facing quantum computing: quantum error <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-tackles-quantum-error-correction/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tackles-quantum-error-correction/">Machine learning tackles quantum error correction</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; phys.org</p>
<p>Physicists have applied the ability of machine learning algorithms to learn from experience to one of the biggest challenges currently facing quantum computing: quantum error correction, which is used to design noise-tolerant quantum computing protocols. In a new study, they have demonstrated that a type of neural network called a Boltzmann machine can be trained to model the errors in a quantum computing protocol and then devise and implement the best method for correcting the errors.</p>
<p>The physicists, Giacomo Torlai and Roger G. Melko at the University of Waterloo and the Perimeter Institute for Theoretical Physics, have published a paper on the new machine learning algorithm in a recent issue of <i>Physical Review Letters</i>.</p>
<p>&#8220;The idea behind neural decoding is to circumvent the process of constructing a decoding algorithm for a specific code realization (given some approximations on the noise), and let a neural network learn how to perform the recovery directly from raw data, obtained by simple measurements on the code,&#8221; Torlai told <i>Phys.org</i>. &#8220;With the recent advances in quantum technologies and a wave of quantum devices becoming available in the near term, neural decoders will be able to accommodate the different architectures, as well as different noise sources.&#8221;</p>
<p>As the researchers explain, a Boltzmann machine is one of the simplest kinds of stochastic artificial neural networks, and it can be used to analyze a wide variety of data. Neural networks typically extract features and patterns from raw data, which in this case is a data set containing the possible errors that can afflict quantum states.</p>
<p>Once the new algorithm, which the physicists call a neural decoder, is trained on this data, it is able to construct an accurate model of the probability distribution of the errors. With this information, the neural decoder can generate the appropriate error chains that can then be used to recover the correct quantum states.</p>
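<p>The core decoding idea can be sketched in a few lines. The caveat: this toy stand-in estimates the conditional error distribution with a simple frequency table on a hypothetical three-qubit repetition code, whereas the actual study uses a Boltzmann machine trained on topological codes — the sketch only illustrates the "learn p(error | syndrome) from raw measurement data, then pick the most likely error chain" workflow described above:</p>

```python
# Toy illustration of the neural-decoder workflow (not Torlai & Melko's
# Boltzmann-machine code): learn the error distribution conditioned on the
# measured syndrome from raw samples, then recover the most likely error.
from collections import Counter, defaultdict
import random

random.seed(0)

def make_sample(p_flip=0.1):
    """Hypothetical 3-bit repetition code: one error pattern + its syndrome."""
    error = tuple(1 if random.random() < p_flip else 0 for _ in range(3))
    # Syndrome = parity checks between neighbouring qubits.
    syndrome = (error[0] ^ error[1], error[1] ^ error[2])
    return syndrome, error

# "Training": estimate p(error | syndrome) from measurement data. A Boltzmann
# machine would model this distribution generatively; a frequency table is
# the simplest possible stand-in.
model = defaultdict(Counter)
for _ in range(20000):
    syndrome, error = make_sample()
    model[syndrome][error] += 1

def decode(syndrome):
    """Return the most probable error chain for the observed syndrome."""
    return model[syndrome].most_common(1)[0][0]

print(decode((1, 0)))  # most likely explanation: a single flip on qubit 0
```

<p>The appeal of learning the distribution directly from data, as the quote above notes, is that nothing in this workflow is tied to one code geometry — only the sampled measurements change.</p>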
<p>The researchers tested the neural decoder on quantum topological codes that are commonly used in quantum computing, and demonstrated that the algorithm is relatively simple to implement. Another advantage of the new algorithm is that it does not depend on the specific geometry, structure, or dimension of the data, which allows it to be generalized to a wide variety of problems.</p>
<p>In the future, the physicists plan to explore different ways to improve the algorithm&#8217;s performance, such as by stacking multiple Boltzmann machines on top of one another to build a network with a deeper structure. The researchers also plan to apply the neural decoder to more complex, realistic codes.</p>
<p>&#8220;So far, neural decoders have been tested on simple codes typically used for benchmarks,&#8221; Torlai said. &#8220;A first direction would be to perform error correction on codes for which an efficient decoder is yet to be found, for instance Low Density Parity Check codes. On the long term I believe neural decoding will play an important role when dealing with larger quantum systems (hundreds of qubits). The ability to compress high-dimensional objects into low-dimensional representations, from which stems the success of machine learning, will allow to faithfully capture the complex distribution relating the errors arising in the system with the measurements outcomes.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-tackles-quantum-error-correction/">Machine learning tackles quantum error correction</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-tackles-quantum-error-correction/feed/</wfw:commentRss>
			<slash:comments>9</slash:comments>
		
		
			</item>
		<item>
		<title>Intel forays into deep-learning arena; launches Movidius Neural Compute Stick</title>
		<link>https://www.aiuniverse.xyz/intel-forays-into-deep-learning-arena-launches-movidius-neural-compute-stick/</link>
					<comments>https://www.aiuniverse.xyz/intel-forays-into-deep-learning-arena-launches-movidius-neural-compute-stick/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 24 Jul 2017 08:08:06 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[artificial neural networks]]></category>
		<category><![CDATA[Compute Stick]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Intel forays]]></category>
		<category><![CDATA[launches]]></category>
		<category><![CDATA[Movidius Neural]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=243</guid>

					<description><![CDATA[<p>Source &#8211; telecom.economictimes.indiatimes.com New Delhi: Intel on Friday launched the Movidius Neural Compute Stick, a USB-based deep learning inference kit and self-contained artificial intelligence (AI) accelerator that delivers dedicated deep <a class="read-more-link" href="https://www.aiuniverse.xyz/intel-forays-into-deep-learning-arena-launches-movidius-neural-compute-stick/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-forays-into-deep-learning-arena-launches-movidius-neural-compute-stick/">Intel forays into deep-learning arena; launches Movidius Neural Compute Stick</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211;<strong> telecom.economictimes.indiatimes.com</strong></p>
<p>New Delhi: Intel on Friday launched the Movidius Neural Compute Stick, a USB-based deep learning inference kit and self-contained artificial intelligence (AI) accelerator that delivers dedicated deep neural network processing capabilities to a wide range of host devices at the edge.</p>
<p>Designed for product developers and researchers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor.</p>
<p>As more developers adopt advanced machine learning approaches to build innovative applications and solutions, Intel is committed to providing the most comprehensive set of development tools and resources to ensure developers are retooling for an AI-centric digital economy.</p>
<p>Whether it is training artificial neural networks on the Intel Nervana cloud, optimising for emerging workloads such as artificial intelligence, virtual and augmented reality, and automated driving with Intel Xeon Scalable processors, or taking AI to the edge with Movidius vision processing unit (VPU) technology, Intel offers a comprehensive AI portfolio of tools, training and deployment options for the next generation of AI-powered products and services.</p>
<p>&#8220;The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance &#8211; more than 100 gigaflops of performance within a 1W power envelope &#8211; to run real-time deep neural networks directly from the device. This enables a wide range of AI applications to be deployed offline,&#8221; said Remi El-Ouazzane, VP and general manager of Movidius, an Intel company.</p>
<p>Machine intelligence development is fundamentally composed of two stages: training an algorithm on large sets of sample data via modern machine learning techniques, and running the algorithm in an end application that needs to interpret real-world data.</p>
<p>This second stage is referred to as &#8220;inference,&#8221; and performing inference at the edge &#8211; or natively inside the device &#8211; brings numerous benefits in terms of latency, power consumption and privacy.</p>
<p>Layer-by-layer performance metrics for both industry-standard and custom-designed neural networks enable effective tuning for optimal real-world performance at ultra-low power. Validation scripts allow developers to compare the accuracy of the optimised model on the device to the original PC-based model.</p>
<p>Unique to Movidius Neural Compute Stick, the device can behave as a discrete neural network accelerator by adding dedicated deep learning inference capabilities to existing computing platforms for improved performance and power efficiency.</p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-forays-into-deep-learning-arena-launches-movidius-neural-compute-stick/">Intel forays into deep-learning arena; launches Movidius Neural Compute Stick</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/intel-forays-into-deep-learning-arena-launches-movidius-neural-compute-stick/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
