<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>deep-learning Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/deep-learning-2/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/deep-learning-2/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 25 Mar 2021 06:28:13 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Deep-learning algorithm designs soft robots with sensors</title>
		<link>https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 25 Mar 2021 06:28:10 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[deep-learning]]></category>
		<category><![CDATA[designs]]></category>
		<category><![CDATA[Robots]]></category>
		<category><![CDATA[SENSORS]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13779</guid>

					<description><![CDATA[<p>Source &#8211; https://www.theweek.in/ Soft robots collect more useful information about their surroundings Creating soft robots has been a long-running challenge in robotics. Their rigid counterparts have a <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/">Deep-learning algorithm designs soft robots with sensors</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.theweek.in/</p>



<p>Soft robots collect more useful information about their surroundings</p>



<p>Creating soft robots has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots&#8217; finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning.</p>



<p>A team of MIT researchers developed a deep learning neural network to aid the design of soft-bodied robots.</p>



<p>Soft-bodied robots can interact with people more safely and slip into tight spaces with ease, but they are not so tractable. For robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That&#8217;s a tall task for a soft robot that can deform in a virtually infinite number of ways.</p>



<p>The algorithm developed by the MIT researchers can help engineers design soft robots that collect more useful information about their surroundings. It suggests an optimised placement of sensors within the robot&#8217;s body, allowing the robot to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. &#8220;The system not only learns a given task, but also how to best design the robot to solve that task,&#8221; says Alexander Amini. &#8220;Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.&#8221;</p>



<p>Soft-bodied robots are flexible and pliant—they generally feel more like a bouncy ball than a bowling ball. &#8220;The main problem with soft robots is that they are infinitely dimensional,&#8221; says co-author Andrew Spielberg. &#8220;Any point on a soft-bodied robot can, in theory, deform in any way possible.&#8221; That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot&#8217;s position and feed that information back into the robot&#8217;s control program. But the researchers wanted to create a soft robot untethered from external aid.</p>



<p>&#8220;You can&#8217;t put an infinite number of sensors on the robot itself,&#8221; says Spielberg. &#8220;So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?&#8221; The team turned to deep learning for an answer.</p>



<p>The researchers developed a novel neural network architecture that both optimises sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot&#8217;s body into regions called &#8220;particles.&#8221; Each particle&#8217;s rate of strain was provided as an input to the neural network. Through a process of trial and error, the network &#8220;learns&#8221; the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often and culls the lesser-used particles from the set of inputs for its subsequent trials.</p>
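The culling step described above can be sketched in a few lines. This is an illustrative stand-in, not the researchers' code: the usage measure (mean absolute strain across trials) and all names are assumptions.

```python
import numpy as np

# Illustrative sketch of the culling step: rank candidate sensor
# "particles" by how heavily their strain signal is used across
# trials, and keep only the top fraction. The usage measure (mean
# absolute strain) is an assumption, not the paper's actual method.
def cull_particles(strain_history, keep_fraction=0.5):
    usage = np.abs(strain_history).mean(axis=0)    # per-particle usage
    n_keep = max(1, int(len(usage) * keep_fraction))
    keep = np.argsort(usage)[::-1][:n_keep]        # most-used first
    return np.sort(keep)                           # sorted indices

# toy data: 4 trials of strain readings from 6 candidate particles;
# particles 1, 3 and 5 carry far larger signals than the rest
rng = np.random.default_rng(0)
scales = np.array([0.01, 2.0, 0.01, 1.5, 0.01, 1.0])
history = rng.normal(size=(4, 6)) * scales
kept = cull_particles(history, keep_fraction=0.5)
print(kept)  # indices of the particles to keep as sensor sites
```

In a full design loop, the surviving particles would become the sensor sites for the next round of training.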



<p>Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot&#8217;s movements, &#8220;we also need to think about how we&#8217;re going to sensorize these robots, and how that will interplay with other components of that system,&#8221; he says. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. &#8220;That&#8217;s something where you need a very robust, well-optimized sense of touch,&#8221; says Spielberg. &#8220;So, there&#8217;s potential for immediate impact.&#8221;</p>



<p>&#8220;Automating the design of sensorised soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,&#8221; says coauthor Daniela Rus. &#8220;The sensors are an important aspect of the process, as they enable the soft robot to &#8216;see&#8217; and understand the world and its relationship with the world.&#8221;</p>



<p>The research will be presented during April&#8217;s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters.</p>



<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/">Deep-learning algorithm designs soft robots with sensors</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Explainer: What is deep learning?</title>
		<link>https://www.aiuniverse.xyz/explainer-what-is-deep-learning/</link>
					<comments>https://www.aiuniverse.xyz/explainer-what-is-deep-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 17 Aug 2020 09:33:18 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep-learning]]></category>
		<category><![CDATA[IT-tecnology]]></category>
		<category><![CDATA[Software-Market]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10929</guid>

					<description><![CDATA[<p>Source:-moneycontrol.com Deep learning, a technology based on artificial neural networks, has revolutionized artificial intelligence in the space of a few years. But what exactly is it? Used <a class="read-more-link" href="https://www.aiuniverse.xyz/explainer-what-is-deep-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/explainer-what-is-deep-learning/">Explainer: What is deep learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:-moneycontrol.com</p>



<p>Deep learning, a technology based on artificial neural networks, has revolutionized artificial intelligence in the space of a few years. But what exactly is it?</p>



<p>Used by Siri, Cortana and Google Now to understand speech and recognize faces, deep learning is often confused with the concept of artificial intelligence (AI), so much so that the two terms are thought to be synonymous. However, this isn&#8217;t the case at all. Deep learning is a branch of machine learning, which in turn is a subset of AI. Here&#8217;s how it works.</p>



<p><strong>Take a plunge into the depths of deep learning</strong></p>



<p>AI research, which emerged alongside the development of computers, was quickly characterized by different schools of thought. One of them sought inspiration from the workings of the human brain in an attempt to create artificial neural networks. An initial neural machine was built by two Harvard University researchers as early as 1951, but development in the field only took off in recent decades, driven by major advances in computing performance. These advances also paved the way for deep learning, which relies on neural networks with many hidden layers.</p>



<p>Put simply, deep learning is a technology that teaches a machine to represent the world. It is a training technique that can enable a program to recognize the content of an image or to understand the spoken word. In the past, to accomplish such tasks engineers would explain to machines how to represent images. With deep learning, the machines take on this job themselves.</p>



<p><strong>An extension of supervised learning</strong></p>



<p>To understand how machines are capable of such a feat, you have to start with supervised learning. This is a standard technique in AI, which consists of feeding a machine with large amounts of data. For example, to train a program to recognize automobiles, it is fed tens of thousands of images of automobiles, which are labeled as such. Once this training has been completed &#8212; and it may take several hours or even days &#8212; the program will be able to recognize automobiles even in images it has never seen before.</p>



<p>Deep learning also uses supervised learning, but the internal architecture of the machine is different, because each of the thousands of units making up the neural network performs small, simple calculations.</p>



<p>Yann Ollivier, a researcher at the French National Center for Scientific Research (CNRS), explains this process with an example: &#8220;How does a machine recognize a picture of a cat? The most salient characteristics are the eyes and ears. So how does it recognize a cat&#8217;s ear? It is distinguished by an angle of about 45 degrees. To recognize the presence of a line, a first layer of neurons will identify a difference in the pixels above and below it: this will generate a level one characteristic. The second layer will work on these features and combine them. If there are two lines that meet at 45°, it will start to recognize the triangle of a cat&#8217;s ear. And so on…&#8221; At every stage of this ongoing analysis, the neural network gains a deeper understanding of the content of the image.</p>
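The first-layer behaviour Ollivier describes, a neuron responding to a difference between the pixels above and below a point, can be written as a tiny hand-coded convolution. This is illustrative only; real networks learn such kernels from data rather than having them specified.

```python
import numpy as np

# A first-layer "neuron" as described above: it responds to the
# difference between the pixel below and the pixel above a point,
# i.e. a horizontal edge. Illustrative only; real networks learn
# such kernels during training.
kernel = np.array([[-1.0],
                   [ 1.0]])  # pixel below minus pixel above

def edge_response(image, kernel):
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sliding dot product of the kernel with the image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# toy image: dark top half, bright bottom half
img = np.vstack([np.zeros((2, 4)), np.ones((2, 4))])
resp = edge_response(img, kernel)
print(resp)  # the boundary row lights up; everything else stays 0
```

Deeper layers would then combine such responses, for instance pairing two oriented lines into the ear-like triangle of the example.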
<p>The post <a href="https://www.aiuniverse.xyz/explainer-what-is-deep-learning/">Explainer: What is deep learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/explainer-what-is-deep-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Israeli researchers develop deep-learning method for creating 3D dynamic cell images</title>
		<link>https://www.aiuniverse.xyz/israeli-researchers-develop-deep-learning-method-for-creating-3d-dynamic-cell-images/</link>
					<comments>https://www.aiuniverse.xyz/israeli-researchers-develop-deep-learning-method-for-creating-3d-dynamic-cell-images/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Jun 2020 07:13:06 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[3D image]]></category>
		<category><![CDATA[deep-learning]]></category>
		<category><![CDATA[Develop]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9619</guid>

					<description><![CDATA[<p>Source: xinhuanet.com JERUSALEM, June 17 (Xinhua) &#8212; Israeli researchers have developed an innovative microscopic method for creating 3D dynamic cell images, the northern Israel Institute of Technology <a class="read-more-link" href="https://www.aiuniverse.xyz/israeli-researchers-develop-deep-learning-method-for-creating-3d-dynamic-cell-images/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/israeli-researchers-develop-deep-learning-method-for-creating-3d-dynamic-cell-images/">Israeli researchers develop deep-learning method for creating 3D dynamic cell images</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: xinhuanet.com</p>



<p>JERUSALEM, June 17 (Xinhua) &#8212; Israeli researchers have developed an innovative microscopic method for creating 3D dynamic cell images, the Israel Institute of Technology (Technion), in northern Israel, said on Wednesday.</p>



<p>This deep-learning technology, published in the journal Nature Methods, may lead to the mapping of biological processes in living cells in super-resolution.</p>



<p>The new system significantly shortens 3D image creation time through an artificial neural network and deep learning.</p>



<p>The researchers demonstrated the system&#8217;s efficiency in 3D mapping of mitochondria (the cell&#8217;s energy maker) and tracking of telomeres, DNA sections at the ends of the chromosomes, in living cells.</p>



<p>One of the challenges of biology today is the mapping of dynamic biological processes in living cells at super-resolution, that is, at a resolution some 10 times greater than that of a standard optical microscope.</p>



<p>Standard microscopes produce 2D images with limited resolution; 3D images are currently obtained by scanning different layers of the sample and integrating them computationally into a single 3D image.</p>



<p>Such a process requires long scanning times, during which the object being examined must remain static, and yields low-quality images.</p>



<p>To solve this, the Technion team has developed the artificial neural network, which performs computational tasks at unprecedented speed and performance.</p>



<p>The network first trains on a huge number of virtual samples. It then analyzes the information obtained from microscope images of real samples and produces super-resolution 3D images from it.</p>
<p>The post <a href="https://www.aiuniverse.xyz/israeli-researchers-develop-deep-learning-method-for-creating-3d-dynamic-cell-images/">Israeli researchers develop deep-learning method for creating 3D dynamic cell images</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/israeli-researchers-develop-deep-learning-method-for-creating-3d-dynamic-cell-images/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google Open-Sources Computer Vision Model Big Transfer</title>
		<link>https://www.aiuniverse.xyz/google-open-sources-computer-vision-model-big-transfer/</link>
					<comments>https://www.aiuniverse.xyz/google-open-sources-computer-vision-model-big-transfer/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 10 Jun 2020 07:27:35 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[computer vision]]></category>
		<category><![CDATA[deep-learning]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9422</guid>

					<description><![CDATA[<p>Source: infoq.com Google Brain has released the pre-trained models and fine-tuning code for Big Transfer (BiT), a deep-learning computer vision model. The models are pre-trained on publicly-available generic image datasets <a class="read-more-link" href="https://www.aiuniverse.xyz/google-open-sources-computer-vision-model-big-transfer/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-open-sources-computer-vision-model-big-transfer/">Google Open-Sources Computer Vision Model Big Transfer</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: infoq.com</p>



<p>Google Brain has released the pre-trained models and fine-tuning code for Big Transfer (BiT), a deep-learning computer vision model. The models are pre-trained on publicly-available generic image datasets and can meet or exceed state-of-the-art performance on several vision benchmarks after fine-tuning on just a few samples.</p>



<p>Paper co-authors Lucas Beyer and Alexander Kolesnikov gave an overview of their work in a recent blog post. To help advance the performance of deep-learning vision models, the team investigated large-scale pre-training and the effects of model size, dataset size, training duration, normalization strategy, and hyperparameter choice. As a result of this work, the team developed a &#8220;recipe&#8221; of components and training heuristics that achieves strong performance on a variety of benchmarks, including an &#8220;unprecedented top-5 accuracy of 80.0%&#8221; on the ObjectNet dataset. Beyer and Kolesnikov claim,</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>[Big Transfer] will allow anyone to reach state-of-the-art performance on their task of interest, even with just a handful of labeled images per class.</p></blockquote>



<p>Deep-learning models have made great strides in computer vision, particularly in recognizing objects in images. One key to this success has been the availability of large-scale labelled datasets: collections of images with corresponding text descriptions of the objects they contain. These datasets must be created manually, with human workers applying a label to each of thousands of images: the popular ImageNet dataset, for example, contains over 14 million labeled images spanning 21k different object classes. However, the images are usually generic, showing commonplace objects such as people, pets, or household items. Creating a dataset of similar scale for a specialized task, say for an industrial robot, might be prohibitively expensive or time-consuming.</p>



<p>In this situation, AI engineers often apply transfer learning, a strategy that has become popular with large-scale natural-language processing (NLP) models. A neural network is first <em>pre-trained</em> on a large generic dataset until it achieves a certain level of performance on a test dataset. Then the model is fine-tuned with a smaller task-specific dataset, sometimes with as few as a single example of the task-specific objects. Large NLP models routinely set new state-of-the-art performance levels using transfer learning.</p>
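The pre-train-then-fine-tune idea can be shown in miniature: a frozen "backbone" acts as a fixed feature extractor and only a small task head is trained on a handful of labeled examples. Everything below is a toy stand-in with invented names and shapes; BiT itself fine-tunes the full ResNet rather than just a head.

```python
import numpy as np

# Toy sketch of transfer learning: a frozen "pre-trained backbone"
# provides features, and only a small task head is trained on a
# few-shot dataset. All names, shapes, and data are illustrative.
rng = np.random.default_rng(1)
backbone = rng.normal(size=(8, 4))   # stands in for pre-trained weights

def features(x):
    return np.tanh(x @ backbone)     # frozen forward pass

# few-shot task: 2 classes, 5 labeled examples each
X = np.vstack([rng.normal(-2.0, 1.0, (5, 8)),
               rng.normal(+2.0, 1.0, (5, 8))])
y = np.array([0] * 5 + [1] * 5)

w = np.zeros(4)                      # task head, trained from scratch
F = features(X)
for _ in range(500):                 # plain logistic-regression updates
    p = 1.0 / (1.0 + np.exp(-F @ w))
    w -= 0.5 * F.T @ (p - y) / len(y)

train_acc = ((F @ w > 0).astype(int) == y).mean()
print(train_acc)
```

The same division of labour, generic pre-training followed by cheap task-specific adaptation, is what lets BiT reach strong benchmark results from only a few labeled images per class.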



<p>For BiT, the Google researchers used a ResNet-v2 neural architecture. To investigate the effects of pre-training dataset size, the team replicated their experiments on three groups of models pre-trained with different datasets: BiT-S models pre-trained on 1.28M images from ILSVRC-2012, BiT-M models pre-trained on 14.2M images from ImageNet-21k, and BiT-L models pre-trained on 300M images from JFT-300M. The models were then fine-tuned and evaluated on several common benchmarks: ILSVRC-2012, CIFAR-10/100, Oxford-IIIT Pet, and Oxford Flowers-102.</p>



<p>The team noted several findings from their experiments. First, the benefits from increasing model size diminish on smaller datasets, and there is little benefit in pre-training smaller models on larger datasets. Second, the large models performed better using group normalization compared to batch normalization. Finally, to avoid an expensive hyperparameter search during fine-tuning, the team developed a heuristic called BiT-HyperRule, where all hyperparameters are fixed except &#8220;training schedule length, resolution, and whether to use MixUp regularization.&#8221;</p>



<p>Google has released the best-performing pre-trained models from the BiT-S and BiT-M groups. However, they have not released any of the BiT-L models based on the JFT-300M dataset. Commenters on Hacker News pointed out that no model trained on JFT-300M has ever been released. One commenter pointed to several models released by Facebook which were pre-trained on an even larger dataset. Another said:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>I&#8217;ve wondered if legal/copyright issues block any release: there&#8217;s always someone who tries to argue that a model is a derived work, and nothing in the JFT-300M papers mentions having licenses covering public redistribution.</p></blockquote>



<p>The code for fine-tuning and tutorials for using the released pre-trained models are available on GitHub.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-open-sources-computer-vision-model-big-transfer/">Google Open-Sources Computer Vision Model Big Transfer</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-open-sources-computer-vision-model-big-transfer/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Israeli scientists develop deep-learning method to predict brain&#8217;s age</title>
		<link>https://www.aiuniverse.xyz/israeli-scientists-develop-deep-learning-method-to-predict-brains-age/</link>
					<comments>https://www.aiuniverse.xyz/israeli-scientists-develop-deep-learning-method-to-predict-brains-age/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 04 Jun 2020 07:18:24 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[deep-learning]]></category>
		<category><![CDATA[Develop]]></category>
		<category><![CDATA[framework]]></category>
		<category><![CDATA[Israeli]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9260</guid>

					<description><![CDATA[<p>Source: xinhuanet.com JERUSALEM, June 3 (Xinhua) &#8212; Israeli researchers have developed a deep-learning framework to predict brain&#8217;s age, Ben-Gurion University (BGU) said Wednesday. This method may help <a class="read-more-link" href="https://www.aiuniverse.xyz/israeli-scientists-develop-deep-learning-method-to-predict-brains-age/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/israeli-scientists-develop-deep-learning-method-to-predict-brains-age/">Israeli scientists develop deep-learning method to predict brain&#8217;s age</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: xinhuanet.com</p>



<p>JERUSALEM, June 3 (Xinhua) &#8212; Israeli researchers have developed a deep-learning framework to predict the brain&#8217;s age, Ben-Gurion University (BGU) said Wednesday.</p>



<p>This method may help trace the brain&#8217;s development and provide early warning of diseases, which are essential steps toward developing effective treatments.</p>



<p>The brain&#8217;s age is not necessarily the same as the body&#8217;s chronological age, as past studies have shown brain aging is related to neurodegenerative diseases and mortality.</p>



<p>In a new study published in the journal Human Brain Mapping, BGU researchers found what makes the brain look younger or older.</p>



<p>The new method is based on big data from structural magnetic resonance images (MRIs) for age prediction.</p>



<p>The team trained an ensemble of deep neural networks to predict the brain&#8217;s age from brain imaging of healthy subjects.</p>



<p>Once trained, the network ensemble was able to predict the brain&#8217;s age to within three years, improving on existing methods.</p>
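The ensemble step amounts to averaging the age estimates of independently trained networks. A minimal sketch, with made-up model outputs:

```python
# Minimal sketch of the ensemble step described above: average the
# brain-age estimates of several independently trained networks.
# The per-network outputs below are invented for illustration.
def ensemble_predict(predictions):
    return sum(predictions) / len(predictions)

models_out = [34.0, 36.5, 35.0]    # per-network predicted age, in years
consensus = ensemble_predict(models_out)
print(round(consensus, 2))         # 35.17
```

Averaging smooths out each network's individual errors, which is one reason ensembles tend to predict more accurately than any single member.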



<p>The algorithm was applied to 15 open-source datasets comprising more than 10,000 MRIs of people aged 4 to 94.</p>



<p>The researchers were also able to examine which anatomical brain regions contributed to the high predictive power of the neural network model.</p>



<p>&#8220;With our method, brain age and its divergence from the chronological age might be used as an early brain health biomarker, also providing further insights into what happens when diseases affect the brain,&#8221; the team concluded.</p>
<p>The post <a href="https://www.aiuniverse.xyz/israeli-scientists-develop-deep-learning-method-to-predict-brains-age/">Israeli scientists develop deep-learning method to predict brain&#8217;s age</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/israeli-scientists-develop-deep-learning-method-to-predict-brains-age/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Rosetta Analytics launches RL One Strategy</title>
		<link>https://www.aiuniverse.xyz/rosetta-analytics-launches-rl-one-strategy/</link>
					<comments>https://www.aiuniverse.xyz/rosetta-analytics-launches-rl-one-strategy/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 28 May 2020 08:43:50 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[Analytics]]></category>
		<category><![CDATA[Data Strategy]]></category>
		<category><![CDATA[deep-learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9088</guid>

					<description><![CDATA[<p>Source: hedgeweek.com RL One is a long/short strategy that generates returns through deep reinforcement learning, a category of machine learning that reacts and learns from its environment <a class="read-more-link" href="https://www.aiuniverse.xyz/rosetta-analytics-launches-rl-one-strategy/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/rosetta-analytics-launches-rl-one-strategy/">Rosetta Analytics launches RL One Strategy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: hedgeweek.com</p>



<p>RL One is a long/short strategy that generates returns through deep reinforcement learning, a category of machine learning that reacts and learns from its environment by determining which decision will result in the highest risk/reward trade-off. The reinforcement learning model predicts optimal long or short exposure to the S&amp;P 500 Index on a market-close to market-close basis. This exposure could range from 100 per cent long to 100 per cent short. These predictions are then implemented with unleveraged long or short positions in the S&amp;P 500 Index E-mini futures.</p>
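The exposure-to-return mechanics can be sketched independently of the model itself: a prediction in [-1, +1] (100 per cent short to 100 per cent long) scales each day's close-to-close index return. The exposures and returns below are invented for illustration; they are not Rosetta's signals or results.

```python
# Sketch of how a predicted exposure in [-1, +1] translates into
# close-to-close strategy returns. All numbers are invented for
# illustration; they are not Rosetta's signals or results.
def strategy_returns(exposures, index_returns):
    # each day's exposure is applied to that day's index return
    return [e * r for e, r in zip(exposures, index_returns)]

exposures = [1.0, -0.5, 1.0]       # long 100%, short 50%, long 100%
index_rets = [0.01, 0.02, -0.01]   # daily close-to-close index returns
rets = strategy_returns(exposures, index_rets)
print(rets)  # [0.01, -0.01, -0.01]
```

The reinforcement learning model's job is to choose the exposure sequence; implementation then reduces to holding the corresponding unleveraged long or short E-mini futures position each day.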



<p>As a next-generation quantitative investment manager, Rosetta Analytics uses proprietary advanced artificial intelligence models, such as deep learning and deep reinforcement learning, to create robust and scalable active investment strategies.</p>



<p>Rosetta’s existing deep-learning strategies – DL One and DL Two – were funded by a US institutional investor and have been live since 1 September 2017. The deep-learning model driving DL One and DL Two generates a signal that offers a binary trading decision. DL One implements this signal as either 100 per cent long or short S&amp;P 500 E-Mini futures, and DL Two implements this signal as 100 per cent long S&amp;P 500 E-Mini futures or 100 per cent cash.</p>



<p>RL One takes Rosetta’s predictive capabilities to the next level by determining the optimal allocation of its trading signals, including the size of the trade and the extent to which it should be long or short across multiple asset classes. Rosetta has also successfully tested other multi-asset strategies, including a 22-stock long-only strategy and a US large-cap equities and US bonds long/short strategy.</p>



<p>During the day-to-day management of RL One, representations of S&amp;P 500 Index stock-level returns and financial and macroeconomic data – such as interest rates and spreads, commodity prices and currency pairs – act as inputs into the strategy’s reinforcement learning model. The result is a daily optimal allocation of capital between the S&amp;P 500 and cash.</p>



<p>Leading the Rosetta Analytics investment team are co-founders Julia Bonafede, CFA, and Angelo Calvello, PhD. Bonafede is the former president of Wilshire Consulting, where she led an institutional consulting and OCIO business with more than US$1 trillion in assets under advisement. Calvello has a proven track record, having co-founded Blue Diamond Asset Management AG and Impact Investment Partners AG. Earlier in his career, he also held senior roles at Man Group and State Street Global Advisors.</p>



<p>Julia Bonafede, CFA, co-founder of Rosetta Analytics, says: “We believe investors shouldn’t compromise on earning consistent net-of-fee returns when actively allocating to risky assets. For too long, traditional active managers have consistently failed to provide promised returns to investors. Traditional quantitative models have been using the same quantitative methods to make investment decisions based on academic frameworks developed 50 years ago. It’s time for innovation and disruption. Traditional quantitative methods continue to produce homogeneous and suboptimal performance, whereas our next-generation quantitative methods use powerful self-learning computational algorithms that can identify actionable insights in traditional and nontraditional data that are hidden from conventional investment processes. These insights provide a new and sustainable edge in investment decision-making.”</p>



<p>Angelo Calvello, PhD, co-founder of Rosetta Analytics, says: “We are excited to launch our RL One Strategy with its transformational and market-disrupting reinforcement learning model that reacts and learns from the environment to generate returns. Our approach has no preset notions and is continuously learning and adapting to market conditions. The successful live performance of our deep-learning strategies and the strength of the hypothetical performance of our reinforcement-learning prototype strategies demonstrate that deep learning and reinforcement learning can be used to find new commercially valuable insights undiscoverable by traditional quantitative methods.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/rosetta-analytics-launches-rl-one-strategy/">Rosetta Analytics launches RL One Strategy</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/rosetta-analytics-launches-rl-one-strategy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
