<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>neural network Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/neural-network/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/neural-network/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 19 Nov 2020 05:02:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>SYNTHESIZING ROBOTIC AI SPONTANEOUS BEHAVIOR VIA CHAOTIC ITINERANCY</title>
		<link>https://www.aiuniverse.xyz/synthesizing-robotic-ai-spontaneous-behavior-via-chaotic-itinerancy/</link>
					<comments>https://www.aiuniverse.xyz/synthesizing-robotic-ai-spontaneous-behavior-via-chaotic-itinerancy/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 19 Nov 2020 05:02:48 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[robotic]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12383</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net Chaotic itinerancy is a closed-loop pathway through the high-dimensional state space of neural activity, steering the cortex through a sequence of quasi-attractors. Over the years researchers have <a class="read-more-link" href="https://www.aiuniverse.xyz/synthesizing-robotic-ai-spontaneous-behavior-via-chaotic-itinerancy/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/synthesizing-robotic-ai-spontaneous-behavior-via-chaotic-itinerancy/">SYNTHESIZING ROBOTIC AI SPONTANEOUS BEHAVIOR VIA CHAOTIC ITINERANCY</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>Chaotic itinerancy is a closed-loop pathway through the high-dimensional state space of neural activity, steering the cortex through a sequence of quasi-attractors.</p>



<p>Over the years researchers have created robotic models with human-like attributes. Robots that can hear, sense, offer emotional support, blink and resist abuse are being heavily researched and deployed in industry. To push the horizon of neuro-robotics further, researchers at the University of Tokyo have created a model that gives robotic AI spontaneous behavior through chaotic itinerancy, a neural process found in humans and animals. The research paper, titled “Designing Spontaneous Behavioural Switching via Chaotic Itinerancy”, states that chaotic itinerancy is a high-dimensional non-linear dynamical phenomenon that addresses long-standing challenges of cognitive architecture in robotics.</p>



<h4 class="wp-block-heading"><strong>What is Chaotic Itinerancy?</strong></h4>



<p>Chaotic itinerancy is a closed-loop pathway through the high-dimensional state space of neural activity, steering the cortex through a sequence of quasi-attractors. A quasi-attractor is a region of that state space combining convergent flows, which attract and absorb activity, with divergent flows, which repel and disperse it. Together, these flows produce both ordered periodic activity and disordered chaotic activity across regions of the brain.</p>



<p>Furthermore, experts have associated quasi-attractors with perception, memory, thinking, speaking and writing. The researchers note that robotics has long applied a dynamical-systems approach to analyze and control agents during robot training. This approach combines functional hierarchy with elementary motion by expressing the physical constraints of the agent, much as the temporal lobe of the brain develops. Following this approach, chaotic itinerancy (CI) is integrated into the model to generate spontaneous behaviors. The researchers also propose an algorithm that designs the properties of CI in a neuro-robotics context, addressing the challenges of building a cognitive agent in the conventional framing of robotics and artificial intelligence.</p>



<h4 class="wp-block-heading"><strong>Structure of the Model</strong></h4>



<p>The researchers prepared a high-dimensional chaotic model by embedding target quasi-attractors. They used an echo state network, a form of recurrent neural network (RNN) trained with reservoir computing. Its internal parameters were set so that the model generates intrinsically complex trajectories. These trajectories, produced by the initial chaotic system and known as innate trajectories, correspond to the types of discrete input. In parallel, the researchers trained a linear regression model, called the readout, that produces the designated trajectories, known as the output dynamics. These output dynamics result from exploiting the embedded innate trajectories.</p>
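
<p>As a rough illustration of the idea (not the authors' actual model), an echo state network keeps a fixed random recurrent reservoir and trains only a linear readout. A minimal NumPy sketch, with an invented sine-to-cosine task standing in for the paper's trajectories:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # reservoir size

# Fixed random recurrent weights, rescaled to a spectral radius near 1,
# where echo state networks show rich, near-chaotic dynamics.
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(0.0, 0.5, N)  # input weights for a scalar drive

def run_reservoir(inputs):
    x, states = np.zeros(N), []
    for u in inputs:
        x = np.tanh(W @ x + W_in * u)  # reservoir update (never trained)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained (ridge regression), here to map a
# sine drive onto a phase-shifted target trajectory.
t = np.linspace(0.0, 20.0 * np.pi, 500)
states = run_reservoir(np.sin(t))
washout = 50  # discard the initial transient
S, y = states[washout:], np.cos(t)[washout:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
readout = S @ W_out
```

<p>Because the recurrent weights stay fixed, training touches only <code>W_out</code>, which is the property the article credits for the method's low cost.</p>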



<p>The researchers say this process can be applied to other chaotic dynamical systems, not just RNNs in silico, since neither modular nor hierarchical structures are required. The embedding is accomplished by modifying only a few parameters using reservoir computing, an approach that speeds up training by leaving most of the network fixed. The researchers found this scheme more stable and less computationally expensive than conventional backpropagation for training the network parameters. Additionally, they added a feedback classifier to the trained chaotic system so that it autonomously generates specific symbol sequences.</p>



<p>The researchers suggest that two mechanisms are required to successfully design a CI model. First, the differences among trajectories must be sufficiently enlarged through temporal development to realize stochastic symbol transitions. (A stochastic matrix is a square matrix describing the transitions of a Markov chain, a model of a sequence of events in which the probability of each event depends only on the state attained in the previous one.) Second, a spatiotemporal pattern must be created to analyse the computing processes of the model; spatiotemporal patterns collect data across space and time, much as humans solve multi-step problems by analysing the movement of objects in space and time.</p>
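
<p>A toy example makes the stochastic symbol transition concrete; the 3-state matrix below is invented for illustration, not taken from the paper:</p>

```python
import numpy as np

# Hypothetical stochastic matrix: row i holds the probabilities of moving
# from quasi-attractor i to each quasi-attractor on the next step, so every
# row sums to 1.
P = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
])

def simulate(P, steps, start=0, seed=42):
    """Sample a symbol sequence; each step depends only on the current state."""
    rng = np.random.default_rng(seed)
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

path = simulate(P, 2000)
```

<p>The diagonal entries near 1 make the chain linger in each state before hopping, mimicking the "itinerant" dwell-and-switch behavior between quasi-attractors.</p>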



<h4 class="wp-block-heading"><strong>Conclusion</strong></h4>



<p>The researchers say this model will help uncover the underlying mechanisms of the brain&#8217;s information processing. Furthermore, since high-dimensional chaos has a rich expressive capability for designing CI, the model should also aid in understanding how high-dimensional chaos contributes to information processing in animal brains.</p>
<p>The post <a href="https://www.aiuniverse.xyz/synthesizing-robotic-ai-spontaneous-behavior-via-chaotic-itinerancy/">SYNTHESIZING ROBOTIC AI SPONTANEOUS BEHAVIOR VIA CHAOTIC ITINERANCY</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/synthesizing-robotic-ai-spontaneous-behavior-via-chaotic-itinerancy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning is Helping Make Intelligent Vehicle a Reality</title>
		<link>https://www.aiuniverse.xyz/deep-learning-is-helping-make-intelligent-vehicle-a-reality/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-is-helping-make-intelligent-vehicle-a-reality/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 10 Sep 2020 09:41:12 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[traffic applications]]></category>
		<category><![CDATA[vehicle]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11485</guid>

					<description><![CDATA[<p>Source: english.cas.cn With the development of intelligent transportation, an effective vehicle management method is applied to track moving vehicles with the help of monitoring equipment. Video surveillance <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-is-helping-make-intelligent-vehicle-a-reality/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-is-helping-make-intelligent-vehicle-a-reality/">Deep Learning is Helping Make Intelligent Vehicle a Reality</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: english.cas.cn</p>



<p>With the development of intelligent transportation, effective vehicle management methods track moving vehicles with the help of monitoring equipment. Video surveillance is the main way to obtain dynamic vehicle information.</p>



<p>Thus, an accurate and robust vehicle-tracking algorithm is urgently needed when the tracked target suffers from heavy occlusion, illumination change and scale variation.</p>



<p>A research team led by Prof. Dr. QIU Shi from the Xi&#8217;an Institute of Optics and Precision Mechanics (XIOPM) of the Chinese Academy of Sciences (CAS) proposed a modified Gaussian mixture model (GMM) algorithm that reduces the probability of misjudging pixel states and extracts the moving target accurately.</p>
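
<p>A rough sketch of the background-modeling idea, simplified to a single Gaussian per pixel rather than the team's full mixture model, and run on synthetic frames:</p>

```python
import numpy as np

def update_background(mean, var, frame, lr=0.05, k=2.5):
    """Single-Gaussian-per-pixel background model (a simplification of GMM).
    A pixel is foreground when it deviates from its running Gaussian by more
    than k standard deviations; background statistics are updated with an
    exponential running average."""
    d = frame - mean
    foreground = d ** 2 > (k ** 2) * var
    bg = ~foreground  # update only background pixels so moving objects
    mean[bg] += lr * d[bg]  # don't pollute the model
    var[bg] = (1 - lr) * var[bg] + lr * d[bg] ** 2
    return foreground

# Synthetic 8x8 scene: static background plus a bright "vehicle" patch.
rng = np.random.default_rng(0)
mean = np.full((8, 8), 100.0)
var = np.full((8, 8), 25.0)
for _ in range(20):                        # learn the static scene
    update_background(mean, var, 100 + rng.normal(0, 3, (8, 8)))
frame = 100 + rng.normal(0, 3, (8, 8))
frame[2:4, 2:4] = 200.0                    # a moving object enters
mask = update_background(mean, var, frame)
```

<p>The team's mixture model keeps several Gaussians per pixel, which handles multi-modal backgrounds (swaying trees, flickering lights) that this single-Gaussian sketch cannot.</p>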



<p>A novel denoising autoencoder (DAE) neural network also imposes a sparsity constraint on its hidden layer to constrain the vehicle feature model and achieve accurate tracking. The results were published in the Journal of Ambient Intelligence and Humanized Computing.</p>
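
<p>A generic tied-weight denoising autoencoder, trained here on toy data with plain gradient descent, illustrates the principle (this is not the paper's network, and the data and sizes are invented):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 200 samples of 16-dim signals lying on a 3-dim subspace.
data = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 16))

W = rng.normal(0.0, 0.1, (16, 8))   # tied encoder/decoder weights
b_h, b_o = np.zeros(8), np.zeros(16)

def reconstruct(x):
    h = np.tanh(x @ W + b_h)         # hidden code
    return h, h @ W.T + b_o          # decode with the transposed weights

def mse(x):
    return float(np.mean((reconstruct(x)[1] - x) ** 2))

before = mse(data)
lr = 0.01
for _ in range(300):
    noisy = data + rng.normal(0.0, 0.5, data.shape)  # corrupt the input
    h, out = reconstruct(noisy)
    grad_out = (out - data) / len(data)              # target is the *clean* data
    grad_h = (grad_out @ W) * (1.0 - h ** 2)         # backprop through tanh
    W -= lr * (noisy.T @ grad_h + grad_out.T @ h)    # encoder + decoder grads
    b_h -= lr * grad_h.sum(axis=0)
    b_o -= lr * grad_out.sum(axis=0)
after = mse(data)
```

<p>Training to reconstruct the clean signal from corrupted input forces the hidden code to capture stable structure rather than noise, which is why a DAE yields robust appearance features for a tracked vehicle.</p>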



<p>By combining the DAE neural network with GMM moving-target extraction, the tracker becomes more robust to complex scenarios, including heavy occlusion, illumination change and multiple targets.</p>



<p>&#8220;From the perspective of probability theory, our algorithm can reduce the probability of the error judgment of pixel state and extract the moving object better,&#8221; said Prof. QIU.&nbsp;</p>



<p>The results provide a novel approach to dealing with moving targets in traffic applications, a step toward intelligent vehicles that drive themselves in the near future.</p>



<p>&#8220;Our method can be one part of the whole technique which may someday help accomplish the autonomous driving,&#8221; said QIU.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-is-helping-make-intelligent-vehicle-a-reality/">Deep Learning is Helping Make Intelligent Vehicle a Reality</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-is-helping-make-intelligent-vehicle-a-reality/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Tenstorrent Achieves First-Pass Silicon Success For AI Processor SoC Using Synopsys’ Broad DesignWare IP Portfolio</title>
		<link>https://www.aiuniverse.xyz/tenstorrent-achieves-first-pass-silicon-success-for-ai-processor-soc-using-synopsys-broad-designware-ip-portfolio/</link>
					<comments>https://www.aiuniverse.xyz/tenstorrent-achieves-first-pass-silicon-success-for-ai-processor-soc-using-synopsys-broad-designware-ip-portfolio/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 17 Jul 2020 05:30:23 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[IoT]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[MOBILE]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[Tenstorrent]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10237</guid>

					<description><![CDATA[<p>Source: aithority.com Synopsys, Inc. announced that Tenstorrent has achieved first-pass silicon success for its Grayskull AI processor system-on-chip (SoC) using Synopsys’ DesignWare PCI Express (PCIe) 4.0 Controller and <a class="read-more-link" href="https://www.aiuniverse.xyz/tenstorrent-achieves-first-pass-silicon-success-for-ai-processor-soc-using-synopsys-broad-designware-ip-portfolio/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/tenstorrent-achieves-first-pass-silicon-success-for-ai-processor-soc-using-synopsys-broad-designware-ip-portfolio/">Tenstorrent Achieves First-Pass Silicon Success For AI Processor SoC Using Synopsys’ Broad DesignWare IP Portfolio</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: aithority.com</p>



<p>Synopsys, Inc. announced that Tenstorrent has achieved first-pass silicon success for its Grayskull AI processor system-on-chip (SoC) using Synopsys’ DesignWare PCI Express (PCIe) 4.0 Controller and PHY, ARC HS48 Processor, and LPDDR4 Controller IP. The silicon-proven DesignWare IP portfolio enabled Tenstorrent to quickly meet the critical real-time connectivity and specialized processing requirements of their dynamic <a href="https://www.aithority.com//?s=artificial+intelligence">artificial intelligence</a> (AI) processor SoC for high-performance <a href="https://aithority.com//?s=computing">computing</a> applications. Tenstorrent also leveraged Synopsys’ expert technical support team to ease IP integration and significantly accelerate their design schedule.</p>



<p>Grayskull offers differentiated capabilities, including fine-grained conditional computation, an area- and power-efficient matrix compute engine, a custom network-on-chip (NoC), and dynamic data compression. Due to the success of the Grayskull SoC, Tenstorrent intends to engage with Synopsys on their next-generation AI processor SoCs for markets such as data centers, public/private cloud servers, on-premises servers, edge servers, and automotive.</p>



<p>“Tenstorrent’s Grayskull AI processor SoC required a range of high-performance IP that met the aggressive compute demands of training and inferencing models,” said <a href="https://www.linkedin.com/in/drago-ignjatovic-5347948/">Drago Ignjatovic</a>, vice president of engineering at Tenstorrent. “Synopsys’ established track record in the IP industry gave us confidence that we could quickly integrate the DesignWare PCIe 4.0 Controller and PHY, ARC HS48 Processor, and LPDDR4 IP into our AI processor SoC. In addition, Synopsys’ technical support team along with the maturity and quality of the DesignWare IP allowed our designers to focus on their core competencies and quickly achieve first-pass silicon success.”</p>



<p>The PCI Express 4.0 controller and PHY IP provide the required 16GT/s data rate and x16 link width while tolerating more than 36dB of channel loss across process, voltage, and temperature (PVT) variations for high-throughput, low-latency connectivity. A quad-core configuration of the DesignWare ARC HS48 Processor delivers high processing performance within constrained area and power budgets. To achieve power efficiency, Synopsys&#8217; LPDDR4 Controller IP, operating at 4267 Mbps, provides automated low-power state entry and exit. Advanced reliability, availability, and serviceability (RAS) features, including inline error-correcting code (ECC) with address protection, reduce system downtime.</p>



<p>“Innovations in machine learning algorithms and neural network processing for high-performance computing applications are driving new technology requirements for AI SoCs,” said John Koeter, senior vice president of marketing and strategy for IP at Synopsys. “Synopsys provides companies such as Tenstorrent with a comprehensive IP portfolio that addresses the performance, latency, memory and connectivity requirements of AI chips for cloud, IoT, mobile, and automotive designs, while accelerating their development time.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/tenstorrent-achieves-first-pass-silicon-success-for-ai-processor-soc-using-synopsys-broad-designware-ip-portfolio/">Tenstorrent Achieves First-Pass Silicon Success For AI Processor SoC Using Synopsys’ Broad DesignWare IP Portfolio</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/tenstorrent-achieves-first-pass-silicon-success-for-ai-processor-soc-using-synopsys-broad-designware-ip-portfolio/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Team dramatically reduces image analysis times using deep learning, other approaches</title>
		<link>https://www.aiuniverse.xyz/team-dramatically-reduces-image-analysis-times-using-deep-learning-other-approaches/</link>
					<comments>https://www.aiuniverse.xyz/team-dramatically-reduces-image-analysis-times-using-deep-learning-other-approaches/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 30 Jun 2020 08:38:24 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[analyzing]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[image analysis]]></category>
		<category><![CDATA[neural network]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9857</guid>

					<description><![CDATA[<p>Source: eurekalert.org WOODS HOLE, Mass. &#8211; A picture is worth a thousand words -but only when it&#8217;s clear what it depicts. And therein lies the rub in <a class="read-more-link" href="https://www.aiuniverse.xyz/team-dramatically-reduces-image-analysis-times-using-deep-learning-other-approaches/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/team-dramatically-reduces-image-analysis-times-using-deep-learning-other-approaches/">Team dramatically reduces image analysis times using deep learning, other approaches</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: eurekalert.org</p>



<p>WOODS HOLE, Mass. &#8211; A picture is worth a thousand words &#8211; but only when it&#8217;s clear what it depicts. And therein lies the rub in making images or videos of microscopic life. While modern microscopes can generate huge amounts of image data from living tissues or cells within a few seconds, extracting meaningful biological information from that data can take hours or even weeks of laborious analysis.</p>



<p>To loosen this major bottleneck, a team led by MBL Fellow Hari Shroff has devised deep-learning and other computational approaches that dramatically reduce image-analysis time by orders of magnitude &#8212; in some cases, matching the speed of data acquisition itself. They report their results this week in Nature Biotechnology.</p>



<p>&#8220;It&#8217;s like drinking from a firehose without being able to digest what you&#8217;re drinking,&#8221; says Shroff of the common problem of having too much imaging data and not enough post-processing power. The team&#8217;s improvements, which stem from an ongoing collaboration at the Marine Biological Laboratory (MBL), speed up image analysis in three major ways.</p>



<p>First, imaging data off the microscope is typically corrupted by blurring. To lessen the blur, an iterative &#8220;deconvolution&#8221; process is used. The computer goes back and forth between the blurred image and an estimate of the actual object, until it reaches convergence on a best estimate of the real thing.</p>
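
<p>The back-and-forth refinement described above is the classic Richardson-Lucy scheme. A 1-D NumPy sketch of the vanilla algorithm (illustrative only; the paper's contribution is an accelerated variant of this loop):</p>

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=500):
    """Classic iterative deconvolution: repeatedly blur the current estimate,
    compare it with the observation, and multiply in the correction."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Two point sources, blurred by a small point-spread function.
truth = np.zeros(64)
truth[20] = truth[40] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

<p>The multiplicative update keeps the estimate non-negative and gradually re-concentrates the blurred intensity back onto the point sources.</p>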



<p>By tinkering with the classic algorithm for deconvolution, Shroff and co-authors accelerated deconvolution by more than 10-fold. Their improved algorithm is widely applicable &#8220;to almost any fluorescence microscope,&#8221; Shroff says. &#8220;It&#8217;s a strict win, we think. We&#8217;ve released the code and other groups are already using it.&#8221;</p>



<p>Next, they addressed the problem of 3D registration: aligning and fusing multiple images of an object taken from different angles. &#8220;It turns out that it takes much longer to register large datasets, like for light-sheet microscopy, than it does to deconvolve them,&#8221; Shroff says. They found several ways to accelerate 3D registration, including moving it to the computer&#8217;s graphics processing unit (GPU). This gave them a 10- to more than 100-fold improvement in processing speed over using the computer&#8217;s central processing unit (CPU).</p>



<p>&#8220;Our improvements in registration and deconvolution mean that for datasets that fit onto a graphics card, image analysis can in principle keep up with the speed of acquisition,&#8221; Shroff says. &#8220;For bigger datasets, we found a way to efficiently carve them up into chunks, pass each chunk to the GPU, do the registration and deconvolution, and then stitch those pieces back together. That&#8217;s very important if you want to image large pieces of tissue, for example, from a marine animal, or if you are clearing an organ to make it transparent to put on the microscope. Some forms of large microscopy are really enabled and sped up by these two advances.&#8221;</p>
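
<p>The carve-up-and-stitch step can be sketched as a generic halo-overlap pattern; this CPU/NumPy stand-in (names and sizes invented) assumes <code>func</code> has local support, so the discarded overlap absorbs border effects:</p>

```python
import numpy as np

def process_in_chunks(volume, func, chunk=64, overlap=8):
    """Apply `func` (e.g. registration + deconvolution) to overlapping chunks
    along axis 0, then stitch only each chunk's central region back together
    so artifacts near chunk borders are discarded."""
    out = np.empty_like(volume)
    step = chunk - 2 * overlap
    for start in range(0, volume.shape[0], step):
        lo = max(start - overlap, 0)                       # halo below
        hi = min(start + step + overlap, volume.shape[0])  # halo above
        piece = func(volume[lo:hi])
        keep = min(step, volume.shape[0] - start)
        out[start:start + keep] = piece[start - lo:start - lo + keep]
    return out

# Sanity check with a trivially local operation.
volume = np.random.default_rng(1).normal(size=(200, 16))
stitched = process_in_chunks(volume, lambda x: x * 2.0)
```

<p>In the real pipeline each <code>piece</code> would be shipped to the GPU; the stitching logic is unchanged.</p>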



<p>Lastly, the team used deep learning to accelerate &#8220;complex deconvolution&#8221; &#8211; intractable datasets in which the blur varies significantly in different parts of the image. They trained the computer to recognize the relationship between badly blurred data (the input) and a cleaned, deconvolved image (the output). Then they gave it blurred data it hadn&#8217;t seen before. &#8220;It worked really well; the trained neural network could produce deconvolved results really fast,&#8221; Shroff says. &#8220;That&#8217;s where we got thousands-fold improvements in deconvolution speed.&#8221;</p>



<p>While the deep learning algorithms worked surprisingly well, &#8220;it&#8217;s with the caveat that they are brittle,&#8221; Shroff says. &#8220;Meaning, once you&#8217;ve trained the neural network to recognize a type of image, say a cell with mitochondria, it will deconvolve those images very well. But if you give it an image that is a bit different, say the cell&#8217;s plasma membrane, it produces artifacts. It&#8217;s easy to fool the neural network.&#8221; An active area of research is creating neural networks that work in a more generalized way.</p>



<p>&#8220;Deep learning augments what is possible,&#8221; Shroff says. &#8220;It&#8217;s a good tool for analyzing datasets that would be difficult any other way.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/team-dramatically-reduces-image-analysis-times-using-deep-learning-other-approaches/">Team dramatically reduces image analysis times using deep learning, other approaches</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/team-dramatically-reduces-image-analysis-times-using-deep-learning-other-approaches/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep learning has now been used for the first time to study dark matter</title>
		<link>https://www.aiuniverse.xyz/deep-learning-has-now-been-used-for-the-first-time-to-study-to-dark-matter/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-has-now-been-used-for-the-first-time-to-study-to-dark-matter/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 25 Sep 2019 12:28:24 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[dark-matter]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[Research]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4591</guid>

					<description><![CDATA[<p>Source: neowin.net Dark matter and dark energy have been the subject of study for cosmologists and physicists who are striving to understand the world around us in <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-has-now-been-used-for-the-first-time-to-study-to-dark-matter/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-has-now-been-used-for-the-first-time-to-study-to-dark-matter/">Deep learning has now been used for the first time to study dark matter</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: neowin.net</p>



<p>Dark matter and dark energy have been the subject of study for cosmologists and physicists who are striving to understand the world around us in its entirety. The composition of the universe is an age-old inquiry that these researchers have probed. And while we do have estimates of the likely percentages of baryonic matter, dark matter, and dark energy at 5%, 27% and 68%, respectively, researchers have been trying to improve these estimates and optimize the computational expense of the statistical methods employed to analyze cosmological data.</p>



<p>One such paper was released recently by a team of researchers hailing from ETH Zurich. In the paper, titled &#8220;<em>Cosmological constraints with deep learning from KiDS-450 weak lensing maps</em>&#8221;, the team of researchers detailed their method to study dark matter in the cosmos by employing convolutional neural networks (via Nvidia Developer News Center).</p>



<p>The team began by training the convolutional neural network (CNN) on Nvidia P100 GPUs with data from a computer-generated simulation of the universe. With this, the model learned the hidden features and weights needed to improve its accuracy. The trained model was then put to the test on the KiDS-450 tomographic weak lensing dataset, which contains the shapes of approximately 15 million galaxies.</p>



<p>In the results, the researchers found that the deep learning-based model performed better than traditional methods of inference, delivering values 30% more accurate than those obtained with traditional statistical methods. The model was also faster than relying on the Hubble telescope: the team says twice as much time would have been spent just gathering data from the telescope for the experiment.</p>



<p>A Ph.D. student at ETH Zurich and the lead author of the study, Janis Fluri, commented on the team&#8217;s work saying that it was an industry-first and that it allowed the extraction of more information from the data analyzed:</p>



<p>&#8220;This is the first time such machine learning tools have been used in this context. We found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications.”</p>



<p>In the abstract of the paper, the team frames the technique as a promising prospect for future cosmological data analysis: &#8220;We compare this result to the power spectrum analysis on the same maps and likelihood pipeline and find an improvement of about 30% for the CNN. We discuss how our results offer excellent prospects for the use of deep learning in future cosmological data analysis.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-has-now-been-used-for-the-first-time-to-study-to-dark-matter/">Deep learning has now been used for the first time to study dark matter</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-has-now-been-used-for-the-first-time-to-study-to-dark-matter/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
