<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Environments Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/environments/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/environments/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Wed, 09 Sep 2020 06:30:11 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Data-Efficient Scalable Reinforcement Learning for Practical Robotic Environments</title>
		<link>https://www.aiuniverse.xyz/data-efficient-scalable-reinforcement-learning-for-practical-robotic-environments/</link>
					<comments>https://www.aiuniverse.xyz/data-efficient-scalable-reinforcement-learning-for-practical-robotic-environments/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 09 Sep 2020 06:29:50 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[Environments]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[robotic]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11451</guid>

					<description><![CDATA[<p>Source: cordis.europa.eu Designing algorithms for more challenging data Machine Learning researchers often have to overcome the ‘sim-to-real’ transfer, where algorithmic feats accomplished in computer simulations can be <a class="read-more-link" href="https://www.aiuniverse.xyz/data-efficient-scalable-reinforcement-learning-for-practical-robotic-environments/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/data-efficient-scalable-reinforcement-learning-for-practical-robotic-environments/">Data-Efficient Scalable Reinforcement Learning for Practical Robotic Environments</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: cordis.europa.eu</p>



<h3 class="wp-block-heading">Designing algorithms for more challenging data</h3>



<p>Machine Learning researchers often have to overcome the ‘sim-to-real’ gap: ensuring that algorithmic feats accomplished in computer simulations can be repeated in real-world tests. DESIRE has produced a data-driven, robust decision-making algorithm to achieve just that.</p>



<p>Advancements in computing, such as the Go-playing program AlphaGo, both rely on and generate large amounts of data. To cater for this volume of data, researchers depend on Machine Learning (ML) algorithms developed from techniques such as Reinforcement Learning (RL), alongside Artificial Intelligence (AI) breakthroughs. However, while these algorithms can be effective within simulations, they often prove disappointing in the real world. Such performance failures matter in high-stakes areas such as robotics where, for reasons of practicality and expense, only a limited number of trials can be undertaken. The EU-supported DESIRE project set out to improve the robustness of the optimisation, learning and control algorithms underlying many innovations striving for autonomous control.</p>



<h4 class="wp-block-heading">Kernel DRO</h4>



<p>One of the key problems in the sim-to-real transfer is an ML phenomenon called ‘distribution shift’. Put simply, this is when a discrepancy appears between the distribution of data in the datasets used for training and those used for testing in the real world. “This is usually because the test datasets prove to be too simplistic in their rendering of real-world conditions,” says research fellow Jia-Jie Zhu who received support from the Marie Skłodowska-Curie Actions programme. “Distribution shift has been one of the major problems plaguing learning and control algorithms and a stumbling block to progress,” adds Zhu, from the Max Planck Institute for Intelligent Systems (the project host). The DESIRE project drew upon so-called kernel-based learning methods to reduce this distribution shift. These are computations which make algorithms more reliable by recognising patterns in data, identifying and then organising relations within the data according to predetermined features such as correlations or classifications. This enabled DESIRE to create an algorithm employing kernel distributionally robust optimisation (Kernel-DRO), in which decisions, such as control commands for robots, were robustly determined.</p>
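<p>To make the kernel idea concrete, here is a minimal, hypothetical sketch, not the project’s Kernel-DRO code, showing how a kernel-based statistic, the maximum mean discrepancy (MMD), can quantify distribution shift between “simulation” training data and shifted “real-world” data:</p>

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two batches of samples
    d = x[:, None, :] - y[None, :, :]
    return np.exp(-gamma * np.sum(d ** 2, axis=-1))

def mmd2(x, y, gamma=1.0):
    # Squared maximum mean discrepancy (biased estimator): a
    # kernel-based measure of how far apart two empirical
    # distributions are in the kernel's feature space.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(200, 2))  # "simulation" data
test = rng.normal(0.5, 1.5, size=(200, 2))   # shifted "real-world" data
same = rng.normal(0.0, 1.0, size=(200, 2))   # fresh draw, same distribution

# The shifted dataset yields a noticeably larger MMD than a
# fresh sample from the training distribution.
print(mmd2(train, same), mmd2(train, test))
```

<p>Kernel-DRO goes further than this comparison: it optimises decisions against the worst-case distribution within a kernel-defined neighbourhood of the training data, but the example above illustrates the underlying mechanism of comparing distributions through a kernel.</p>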



<h4 class="wp-block-heading">Broad applicability</h4>



<p>While DESIRE’s work is theoretical, contributing to the literature on mathematical optimisation, control and ML theory, it also has a range of very practical implications. Indeed, a strength of the team’s Kernel-DRO solution is precisely this broad applicability. “Many of today’s learning tasks suffer from data distribution ambiguity. We believe that industry or business practitioners looking to improve robustness in their machine learning can easily apply our algorithm,” explains Zhu. To take the work further, Zhu is now aiming to create larger-scale learning algorithms which can cater for more random data inputs, suitable for industrial applications. For example, the principle of data robustness is being applied to model predictive control, a highly effective control method useful for safety-critical applications such as flight control, chemical process control and robotics.</p>
<p>The post <a href="https://www.aiuniverse.xyz/data-efficient-scalable-reinforcement-learning-for-practical-robotic-environments/">Data-Efficient Scalable Reinforcement Learning for Practical Robotic Environments</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/data-efficient-scalable-reinforcement-learning-for-practical-robotic-environments/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Facebook’s AI teaches robots to navigate environments using less data</title>
		<link>https://www.aiuniverse.xyz/facebooks-ai-teaches-robots-to-navigate-environments-using-less-data/</link>
					<comments>https://www.aiuniverse.xyz/facebooks-ai-teaches-robots-to-navigate-environments-using-less-data/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 14 Apr 2020 11:03:11 +0000</pubDate>
				<category><![CDATA[Data Robot]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[autonomous machines]]></category>
		<category><![CDATA[Environments]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[Robots]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8163</guid>

					<description><![CDATA[<p>Source: venturebeat.com In a recent paper published on the preprint server Arxiv.org, researchers at Carnegie Mellon, Facebook, and the University of Illinois Urbana-Champaign propose Active Neural Simultaneous Localization and <a class="read-more-link" href="https://www.aiuniverse.xyz/facebooks-ai-teaches-robots-to-navigate-environments-using-less-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/facebooks-ai-teaches-robots-to-navigate-environments-using-less-data/">Facebook’s AI teaches robots to navigate environments using less data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>In a recent paper published on the preprint server Arxiv.org, researchers at Carnegie Mellon, Facebook, and the University of Illinois Urbana-Champaign propose Active Neural Simultaneous Localization and Mapping (Active Neural SLAM), a hierarchical approach for teaching AI agents to explore environments. They say that it leverages the strength of both classical and AI-based path- and goal-planning methods, making it robust against errors and sidestepping the complexities associated with previous approaches.</p>



<p>Techniques like those underpinning Active Neural SLAM could greatly advance the state of the art in robotics. Navigation, which in this context refers not only to coordinate navigation but to pathfinding (i.e., finding paths to objects), is a critical task for autonomous machines. But training those machines to learn about mapping requires a lot of computation.</p>



<p>Active Neural SLAM, then, works with raw sensory inputs such as camera images and exploits regularities in the layouts of environments, enabling it to achieve performance equal to or better than existing methods while requiring a fraction of the training data.</p>



<p>The neural SLAM module within Active Neural SLAM comprises a Mapper and a Pose Estimator. The Mapper is responsible for generating a top-down spatial map of a given environment and predicting obstacles and explored areas, while the Pose Estimator estimates the agent’s pose based on past pose estimates. The spatial map, where each element corresponds to a cell size of 25 square centimeters in the physical world, is ingested along with the agent pose by a global policy to produce long-term goals. A Planner model then takes the goals, the spatial obstacle map, and the agent pose estimates to compute short-term goals, that is, the shortest paths from the current location to the long-term goals. Lastly, a local policy outputs navigational actions using camera data and the short-term goals.</p>
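<p>The hierarchy described above can be sketched as a simplified loop. The toy code below uses illustrative stand-ins only (the function names, grid size, and greedy planner are assumptions for exposition, not the released implementation): a long-term goal is re-sampled periodically by a global policy, a planner derives a short-term goal, and a local policy moves the agent while the map is updated.</p>

```python
import numpy as np

GRID = 24  # toy map; in the paper each cell covers 25 cm^2 of floor space

def mapper(obs, spatial_map):
    """Mark the observed cell as explored (stand-in for the learned Mapper)."""
    spatial_map[obs] = 1.0
    return spatial_map

def global_policy(spatial_map):
    """Pick the first unexplored cell as the long-term goal."""
    frontier = np.argwhere(spatial_map == 0.0)
    return tuple(frontier[0]) if len(frontier) else (0, 0)

def planner(pose, goal):
    """Greedy one-cell step toward the long-term goal (the short-term goal)."""
    return tuple(p + np.sign(g - p) for p, g in zip(pose, goal))

def local_policy(pose, short_term_goal):
    """Emit the next position; a real agent would emit move/turn commands."""
    return short_term_goal

spatial_map = np.zeros((GRID, GRID))
pose = (GRID // 2, GRID // 2)
for step in range(200):
    spatial_map = mapper(pose, spatial_map)
    if step % 25 == 0:  # long-term goal re-sampled periodically
        long_term_goal = global_policy(spatial_map)
    pose = local_policy(pose, planner(pose, long_term_goal))

print(spatial_map.sum())  # number of explored cells
```

<p>The point of the hierarchy is that the global policy reasons over the map at a coarse timescale while the local policy handles fine-grained actuation, which is what lets the real system learn with far less data.</p>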



<p>In experiments, the researchers paired Facebook’s open source Habitat platform, a modular high-level library for training agents across a variety of tasks, environments, and simulators, with data sets (Gibson and Matterport’s MP3D) consisting of 3D reconstructions of real-world environments like office and home interiors. Agents could make one of three moves — forward 25 centimeters, leftward 10 degrees, or rightward 10 degrees — in the environments and were trained in 994 episodes consisting of 1,000 steps or 10 million frames, such that all of Active Neural SLAM’s components — the Mapper, the Pose Estimator, the global policy, and the local policy — were trained simultaneously.</p>



<p>The team reports that Active Neural SLAM managed to almost completely explore small scenes in around 500 steps, versus the baselines’ 85% to 90% exploration of the same scenes in 1,000 steps. The baseline models also tended to become stuck in certain areas, indicating that they weren’t able to “remember” explored areas over time, a problem that Active Neural SLAM didn’t exhibit.</p>



<p>Encouraged by these results, the coauthors deployed the trained Active Neural SLAM policy from simulation to a real-world Locobot robot. After adjusting the camera height and vertical field of view to match those of the Habitat simulator, they say that the robot successfully explored the living area in an apartment.</p>



<p>“In the future, [Active Neural SLAM] can be extended to complex semantic tasks such as semantic goal navigation and embodied question answering by using a semantic Neural SLAM module, which creates a … map capturing semantic properties of the objects in the environment,” wrote the coauthors. “The model can also be combined with prior work on localization to relocalize in a previously created map for efficient navigation in subsequent episodes.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/facebooks-ai-teaches-robots-to-navigate-environments-using-less-data/">Facebook’s AI teaches robots to navigate environments using less data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/facebooks-ai-teaches-robots-to-navigate-environments-using-less-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Seven Key Dimensions to Help You Understand Artificial Intelligence Environments</title>
		<link>https://www.aiuniverse.xyz/seven-key-dimensions-to-help-you-understand-artificial-intelligence-environments/</link>
					<comments>https://www.aiuniverse.xyz/seven-key-dimensions-to-help-you-understand-artificial-intelligence-environments/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 15 Jun 2019 10:19:36 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Environments]]></category>
		<category><![CDATA[Help]]></category>
		<category><![CDATA[Seven Key]]></category>
		<category><![CDATA[Understand]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3868</guid>

					<description><![CDATA[<p>Source: towardsdatascience.com Every artificial intelligence (AI) problem is a new universe of complexities and unique challenges. Very often, the most challenging aspect of solving an AI problem is not <a class="read-more-link" href="https://www.aiuniverse.xyz/seven-key-dimensions-to-help-you-understand-artificial-intelligence-environments/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/seven-key-dimensions-to-help-you-understand-artificial-intelligence-environments/">Seven Key Dimensions to Help You Understand Artificial Intelligence Environments</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source: towardsdatascience.com</p>
<div class="section-inner sectionLayout--insetColumn">
<p id="d807" class="graf graf--p graf-after--figure">Every artificial intelligence (AI) problem is a new universe of complexities and unique challenges. Very often, the most challenging aspect of solving an AI problem is not finding a solution but understanding the problem itself. As paradoxical as that sounds, even the most experienced AI experts have been guilty of rushing into proposing deep learning algorithms and esoteric optimization techniques without fully understanding the problem at hand. When we think about an AI problem, we tend to link our reasoning to two main aspects: datasets and models. However, that reasoning ignores what can be considered the most challenging aspect of an AI problem: the environment.</p>
<p id="63d9" class="graf graf--p graf-after--p">When designing artificial intelligence (AI) solutions, we spend a lot of time focusing on aspects such as the structure of learning algorithms [ex: supervised, unsupervised, semi-supervised], the architecture of a neural network [ex: convolutional, recurrent…] or the characteristics of the data [ex: labeled, unlabeled…]. However, little attention is paid to the nature of the environment in which the AI solution operates. As it turns out, the characteristics of the environment are the number one element that can make or break an AI model.</p>
<p id="1f14" class="graf graf--p graf-after--p">There are several aspects that distinguish AI environments. The shape and frequency of the data, the nature of the problem, and the volume of knowledge available at any given time are some of the elements that differentiate one type of AI environment from another. Diving deep into those characteristics will guide the strategies of AI experts in areas such as algorithm selection, neural network architectures, optimization techniques and many other relevant aspects of the lifecycle of AI applications. Understanding an AI environment is an incredibly complex task, but there are several key dimensions that provide clarity on that reasoning.</p>
<h3 id="dcb6" class="graf graf--h3 graf-after--p">Seven Key Dimensions to Classify an AI Environment</h3>
<p id="57f9" class="graf graf--p graf-after--h3">One of the most effective methodologies for understanding an AI environment is to classify it across a series of well-known dimensions, each of which is typically segmented into only two or three classifications. Among the different characteristics that can be used to classify an AI environment, there are seven key exclusive dynamics that provide a rapid understanding of the challenges and capabilities needed by AI agents.</p>
</div>
<div class="section-inner sectionLayout--outsetColumn">
<h4 id="23dc" class="graf graf--h4 graf-after--figure"><strong class="markup--strong markup--h4-strong">1-Single Agent vs. Multi-Agent</strong></h4>
<p id="9a72" class="graf graf--p graf-after--h4">One of the most obvious dimensions for classifying an AI environment is the number of agents involved. The vast majority of AI models today focus on environments involving a single agent, but there is an increasing expansion into multi-agent settings. The introduction of multiple agents in an AI problem raises challenges, such as collaborative or competitive dynamics, which are not present in single-agent environments.</p>
<h4 id="7fe7" class="graf graf--h4 graf-after--p"><strong class="markup--strong markup--h4-strong">2-Complete vs. Incomplete</strong></h4>
<p id="f2d0" class="graf graf--p graf-after--h4">Complete AI environments are those in which, at any given time, the agents have enough information to complete a branch of the problem. Chess is a classic example of a complete AI environment. Poker, on the other hand, is an incomplete environment, as AI strategies cannot anticipate many moves in advance and, instead, focus on finding a good ‘equilibrium’ at any given time. The famous Nash equilibrium principles are particularly relevant in incomplete AI environments.</p>
<h4 id="0dc8" class="graf graf--h4 graf-after--p"><strong class="markup--strong markup--h4-strong">3-Fully Observable vs. Partially Observable</strong></h4>
<p id="a2fb" class="graf graf--p graf-after--h4">A fully observable AI environment has access to all the information required to complete the target task. Image recognition operates in fully observable domains. Partially observable environments, such as those encountered in self-driving vehicle scenarios, deal with partial information in order to solve AI problems. Partially observable environments often rely on statistical techniques to extrapolate knowledge of the environment.</p>
<h4 id="a609" class="graf graf--h4 graf-after--p"><strong class="markup--strong markup--h4-strong">4-Competitive vs. Collaborative</strong></h4>
<p id="4568" class="graf graf--p graf-after--h4">Competitive AI environments pit AI agents against each other in order to optimize a specific outcome. Games such as Go or chess are examples of competitive AI environments. Collaborative AI environments rely on cooperation between multiple AI agents. Self-driving vehicles cooperating to avoid collisions, or smart home sensors interacting, are examples of collaborative AI environments. Many multi-agent environments, such as video games, include both collaborative and competitive dynamics, which makes them particularly challenging from an AI perspective.</p>
<h4 id="47ec" class="graf graf--h4 graf-after--p"><strong class="markup--strong markup--h4-strong">5-Static vs. Dynamic</strong></h4>
<p id="dc4e" class="graf graf--p graf-after--h4">Static AI environments rely on data-knowledge sources that don’t change frequently over time. Speech analysis is a problem that operates in static AI environments. In contrast, dynamic AI environments, such as the vision AI systems in drones, deal with data sources that change quite frequently. Dynamic AI environments often need to enable faster and more regular training of AI agents.</p>
<h4 id="00e1" class="graf graf--h4 graf-after--p"><strong class="markup--strong markup--h4-strong">6-Discrete vs. Continuous</strong></h4>
<p id="5866" class="graf graf--p graf-after--h4">Discrete AI environments are those in which a finite [although arbitrarily large] set of possibilities can drive the final outcome of the task. Chess is also classified as a discrete AI problem. Continuous AI environments rely on unknown and rapidly changing data sources. Multi-player video games are a classic example of continuous AI environments.</p>
<h4 id="f673" class="graf graf--h4 graf-after--p"><strong class="markup--strong markup--h4-strong">7-Deterministic vs. Stochastic</strong></h4>
<p id="4f48" class="graf graf--p graf-after--h4">Deterministic AI environments are those in which the outcome can be determined based on a specific state. By determinism, we specifically refer to AI environments that ignore uncertainty. Most real-world AI environments are not deterministic; instead, they can be classified as stochastic. Self-driving vehicles are one of the most extreme examples of stochastic AI environments, but simpler settings can be found in simulation environments or even speech analysis models.</p>
<p id="53c2" class="graf graf--p graf-after--p graf--trailing">Understanding an AI environment is one of the most challenging steps in any AI problem. Luckily, the friction points across the seven dimensions explored in this article often yield a robust classification of an AI environment and facilitate the selection of models and architectures. While there have been notable advancements in AI architectures and optimization techniques, the analysis of environments remains a highly subjective aspect of the AI lifecycle.</p>
</div>
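<p>The seven dimensions above can be captured in a small, hypothetical data structure (the class and field names below are illustrative, not a standard API) that classifies an environment and prints its profile:</p>

```python
from dataclasses import dataclass

@dataclass
class AIEnvironment:
    """One boolean flag per dimension; False selects the second option."""
    multi_agent: bool       # single-agent vs. multi-agent
    complete: bool          # complete vs. incomplete
    fully_observable: bool  # fully vs. partially observable
    competitive: bool       # competitive vs. collaborative
    static: bool            # static vs. dynamic
    discrete: bool          # discrete vs. continuous
    deterministic: bool     # deterministic vs. stochastic

    def summary(self):
        dims = [
            "multi-agent" if self.multi_agent else "single-agent",
            "complete" if self.complete else "incomplete",
            "fully observable" if self.fully_observable else "partially observable",
            "competitive" if self.competitive else "collaborative",
            "static" if self.static else "dynamic",
            "discrete" if self.discrete else "continuous",
            "deterministic" if self.deterministic else "stochastic",
        ]
        return ", ".join(dims)

# Chess and poker as classified in the article.
chess = AIEnvironment(True, True, True, True, True, True, True)
poker = AIEnvironment(True, False, False, True, True, True, False)
print(chess.summary())
print(poker.summary())
```

<p>Running such a checklist before picking a model forces the environment analysis the article argues is so often skipped.</p>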
<p>The post <a href="https://www.aiuniverse.xyz/seven-key-dimensions-to-help-you-understand-artificial-intelligence-environments/">Seven Key Dimensions to Help You Understand Artificial Intelligence Environments</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/seven-key-dimensions-to-help-you-understand-artificial-intelligence-environments/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
