<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>PAPERS Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/papers/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/papers/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 18 Mar 2021 06:11:15 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>10 MUST LOOK ARTIFICIAL INTELLIGENCE RESEARCH PAPERS SO FAR</title>
		<link>https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/</link>
					<comments>https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Mar 2021 06:11:13 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[PAPERS]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[smartphones]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13575</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ Artificial intelligence research is increasingly influencing the use of technology From our smartphones to cars and homes, artificial intelligence is increasingly touching our every <a class="read-more-link" href="https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/">10 MUST LOOK ARTIFICIAL INTELLIGENCE RESEARCH PAPERS SO FAR</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading"><strong>Artificial intelligence research is increasingly influencing the use of technology</strong></h2>



<p>From our smartphones to our cars and homes, artificial intelligence touches almost every walk of life. Applications of artificial intelligence have already proved disruptive across diverse industries, including manufacturing, healthcare, and retail. Given this progress, it is fair to say that artificial intelligence has evolved impressively in recent years. Research around the technology has also surged and is shaping the way individuals and businesses interact with AI. Analytics Insight has listed 10 must-look artificial intelligence research papers worth reading now.</p>



<h4 class="wp-block-heading"><strong>Adam: A Method for Stochastic Optimization</strong></h4>



<p>Author(s): Diederik P. Kingma, Jimmy Ba</p>



<p>Adam is an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, computationally efficient, invariant to diagonal rescaling of the gradients, and requires little memory. It is well suited to problems that are large in terms of data and parameters, and it is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. Adam has become the default optimization algorithm for many of the neural networks trained today.</p>
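

<p>As a rough illustration of the idea of adaptive moment estimates, the snippet below sketches a single Adam parameter update in Python; the default values follow the paper, but the code is only an illustrative sketch, not an official implementation.</p>



<pre class="wp-block-code"><code># Minimal sketch of one Adam update step (illustrative, not the paper's official code).
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Apply one Adam update to parameters theta given the gradient grad."""
    m = beta1 * m + (1 - beta1) * grad        # adaptive estimate of the first moment (mean)
    v = beta2 * v + (1 - beta2) * grad ** 2   # adaptive estimate of the second moment
    m_hat = m / (1 - beta1 ** t)              # bias correction for step t (starting at 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
</code></pre>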



<h4 class="wp-block-heading"><strong>Towards a Human-like Open-Domain Chatbot</strong></h4>



<p>Author(s): Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le</p>



<p>This research paper presents Meena, a multi-turn open-domain chatbot that is trained end-to-end on data mined and filtered from public domain social media conversations. This 2.6B parameter neural network is simply trained to minimize the perplexity of the next token. The researchers also propose a new human evaluation metric to capture key elements of a human-like multi-turn conversation, dubbed Sensibleness and Specificity Average (SSA).</p>
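

<p>To make the training objective concrete, the short sketch below computes perplexity from a handful of made-up next-token probabilities; the numbers are purely illustrative and are not outputs from Meena.</p>



<pre class="wp-block-code"><code># Perplexity of a token sequence from next-token probabilities
# (made-up values for illustration only).
import math

token_probs = [0.20, 0.05, 0.40, 0.10]   # p(token_i | previous tokens)
avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_nll)           # lower is better; Meena is trained to minimize this
print(round(perplexity, 2))
</code></pre>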



<h4 class="wp-block-heading"><strong>Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift</strong></h4>



<p>Author(s): Sergey Ioffe, Christian Szegedy</p>



<p>Training Deep Neural Networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. The researchers refer to this phenomenon as “internal covariate shift”, and address the problem by normalizing layer inputs. Batch Normalization allows the researchers to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps and surpasses the original model by a significant margin.</p>
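

<p>The core operation is easy to sketch: normalize each feature over the current mini-batch, then apply a learned scale and shift. The Python snippet below is a minimal illustration that omits the moving averages used at inference time.</p>



<pre class="wp-block-code"><code># Minimal batch normalization for a mini-batch of activations (illustrative sketch).
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """x: array of shape (batch, features); gamma, beta: learned per-feature scale and shift."""
    mu = x.mean(axis=0)                      # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalized inputs
    return gamma * x_hat + beta              # scale and shift restore representational power
</code></pre>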



<h4 class="wp-block-heading"><strong>Large-scale Video Classification with Convolutional Neural Networks</strong></h4>



<p>Author(s): Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei</p>



<p>Convolutional Neural Networks (CNNs) are a powerful class of models for image recognition problems. Encouraged by those results, the researchers provide an extensive empirical evaluation of CNNs on large-scale video classification, using a new dataset of 1 million YouTube videos belonging to 487 classes. Presented at the IEEE Conference on Computer Vision and Pattern Recognition, this research paper has been cited 865 times, with a HIC score of 24 and a CV of 239.</p>



<h4 class="wp-block-heading"><strong>Beyond Accuracy: Behavioral Testing of NLP models with CheckList</strong></h4>



<p>Author(s): Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh</p>



<p>In this research paper, the authors point out the inadequacies of existing approaches to evaluating the performance of NLP models. Inspired by the principles of behavioral testing in software engineering, the researchers introduce CheckList, a task-agnostic methodology for testing NLP models. It involves a matrix of general linguistic capabilities and test types that facilitates comprehensive test ideation, as well as a software tool to produce a large and diverse set of test cases quickly.</p>
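

<p>As a rough, hypothetical illustration of template-based behavioral testing (it does not use the authors' CheckList tool), the snippet below generates a small batch of test sentences and checks a stand-in sentiment model against an expected behavior.</p>



<pre class="wp-block-code"><code># Hypothetical template-based behavioral test in the spirit of CheckList
# (does not use the authors' CheckList tool; the model below is a stand-in).
template = "I {verb} the {thing}."
verbs = ["love", "like", "enjoy"]
things = ["movie", "food", "service"]

def stand_in_sentiment_model(text):
    # Placeholder for a real NLP model under test.
    return "positive"

# Minimum Functionality Test: every sentence built from a positive verb should be positive.
cases = [template.format(verb=v, thing=t) for v in verbs for t in things]
failures = [c for c in cases if stand_in_sentiment_model(c) != "positive"]
print(f"{len(failures)} of {len(cases)} test cases failed")
</code></pre>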



<h4 class="wp-block-heading"><strong>Generative Adversarial Nets</strong></h4>



<p>Author(s): Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio</p>



<p>The authors in this AI research paper propose a new framework for estimating generative models via an adversarial process. They simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake.</p>
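

<p>A minimal sketch of that training procedure is shown below, assuming two small PyTorch modules G and D (with D outputting a probability) and their optimizers already exist; it is an illustration of the adversarial objective, not the paper's original code.</p>



<pre class="wp-block-code"><code># Sketch of one GAN training step (illustrative; assumes G and D are PyTorch modules,
# D outputs a probability, and opt_G / opt_D are their optimizers).
import torch

def gan_step(G, D, real_batch, opt_G, opt_D, noise_dim=100):
    noise = torch.randn(real_batch.size(0), noise_dim)
    fake_batch = G(noise)

    # Train D to tell real samples from G's samples.
    d_loss = -(torch.log(D(real_batch)).mean()
               + torch.log(1 - D(fake_batch.detach())).mean())
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train G to maximize the probability of D making a mistake (non-saturating form).
    g_loss = -torch.log(D(fake_batch)).mean()
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
</code></pre>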



<h4 class="wp-block-heading"><strong>Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks</strong></h4>



<p>Author(s): Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun</p>



<p>Advances like SPPnet and Fast R-CNN have reduced the running time of state-of-the-art detection networks, exposing region proposal computation as a bottleneck. In this context, the authors introduce a Region Proposal Network (RPN), a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals.</p>
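

<p>The snippet below sketches what an RPN-style head looks like in PyTorch: a small convolutional network that, at every spatial position and for a set of anchors, outputs an objectness score and box offsets. The channel sizes and anchor count are placeholder values, not the authors' implementation.</p>



<pre class="wp-block-code"><code># Minimal RPN-style head: per-position objectness scores and box offsets
# (illustrative; channel sizes and anchor count are placeholder values).
import torch
import torch.nn as nn

k = 9                                                  # anchors per spatial position
backbone_channels = 256
rpn_conv = nn.Conv2d(backbone_channels, 256, kernel_size=3, padding=1)
objectness = nn.Conv2d(256, k, kernel_size=1)          # one score per anchor
box_deltas = nn.Conv2d(256, 4 * k, kernel_size=1)      # (dx, dy, dw, dh) per anchor

features = torch.randn(1, backbone_channels, 38, 50)   # shared full-image conv features
x = torch.relu(rpn_conv(features))
print(objectness(x).shape, box_deltas(x).shape)        # scores and bounds at each position
</code></pre>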



<h4 class="wp-block-heading"><strong>A Review on Multi-Label Learning Algorithms</strong></h4>



<p>Author(s): Min-Ling Zhang, Zhi-Hua Zhou</p>



<p>Multi-label learning studies the problem where each example is represented by a single instance while being associated with a set of labels simultaneously. Significant progress has been made on this machine learning paradigm in the past decade, and this paper aims to provide a timely review of the area with an emphasis on state-of-the-art multi-label learning algorithms.</p>
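

<p>In code, this setting is usually represented as a binary label-indicator matrix, where each row is one instance and a row can switch on several labels at once; the tiny example below uses made-up data.</p>



<pre class="wp-block-code"><code># Multi-label data: each instance (row) is associated with a set of labels,
# encoded as a binary indicator matrix (made-up example).
import numpy as np

labels = ["sports", "politics", "technology"]
docs = ["match report", "tech policy hearing", "new phone review"]

Y = np.array([
    [1, 0, 0],   # "match report"        -> {sports}
    [0, 1, 1],   # "tech policy hearing" -> {politics, technology}
    [0, 0, 1],   # "new phone review"    -> {technology}
])
print(Y.sum(axis=1))   # number of labels per instance: [1 2 1]
</code></pre>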



<h4 class="wp-block-heading"><strong>Neural Machine Translation by Jointly Learning to Align and Translate</strong></h4>



<p>Author(s): Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio</p>



<p>Neural machine translation is a recently proposed approach to machine translation. Unlike traditional statistical machine translation, neural machine translation aims to build a single neural network that can be jointly tuned to maximize translation performance. The models recently proposed for neural machine translation often belong to a family of encoder-decoders: an encoder encodes a source sentence into a fixed-length vector from which a decoder generates a translation.</p>
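

<p>The basic encoder-decoder setup that the paper builds on (and then extends with a learned attention over encoder states) can be sketched with two small recurrent networks; the PyTorch snippet below only illustrates the shapes involved and is not the authors' model.</p>



<pre class="wp-block-code"><code># Conceptual sketch of the basic encoder-decoder setup the paper builds on
# (illustrative shapes only; the paper replaces the single fixed-length vector with attention).
import torch
import torch.nn as nn

emb, hid, vocab = 64, 128, 1000
encoder = nn.GRU(emb, hid, batch_first=True)
decoder = nn.GRU(emb, hid, batch_first=True)
out_proj = nn.Linear(hid, vocab)

src = torch.randn(1, 7, emb)            # embedded source sentence (7 tokens)
_, context = encoder(src)               # fixed-length summary vector, shape (1, 1, hid)
tgt = torch.randn(1, 5, emb)            # embedded target prefix
dec_states, _ = decoder(tgt, context)   # decode conditioned on the context vector
logits = out_proj(dec_states)           # next-token scores at each target position
print(logits.shape)                     # torch.Size([1, 5, 1000])
</code></pre>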



<h4 class="wp-block-heading"><strong>Mastering the game of Go with deep neural networks and tree search</strong></h4>



<p>Author(s): David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, and others</p>



<p>The paper introduces a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves in the game of Go. Go has been perceived as the most challenging of classic games for artificial intelligence. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play.</p>
<p>The post <a href="https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/">10 MUST LOOK ARTIFICIAL INTELLIGENCE RESEARCH PAPERS SO FAR</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Which Papers Won At 35th AAAI Conference On Artificial Intelligence?</title>
		<link>https://www.aiuniverse.xyz/which-papers-won-at-35th-aaai-conference-on-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/which-papers-won-at-35th-aaai-conference-on-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 09 Feb 2021 05:31:37 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[35th]]></category>
		<category><![CDATA[AAAI]]></category>
		<category><![CDATA[Conference]]></category>
		<category><![CDATA[PAPERS]]></category>
		<category><![CDATA[Won]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12762</guid>

					<description><![CDATA[<p>Source &#8211; https://analyticsindiamag.com/ The 35th AAAI Conference on Artificial Intelligence (AAAI-21), held virtually this year, saw more than 9,000 paper submissions, of which, only 1,692 research papers made the <a class="read-more-link" href="https://www.aiuniverse.xyz/which-papers-won-at-35th-aaai-conference-on-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/which-papers-won-at-35th-aaai-conference-on-artificial-intelligence/">Which Papers Won At 35th AAAI Conference On Artificial Intelligence?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://analyticsindiamag.com/</p>



<p>The 35th AAAI Conference on Artificial Intelligence (AAAI-21), held virtually this year, saw more than 9,000 paper submissions, of which, only 1,692 research papers made the cut.</p>



<p>The Association for the Advancement of Artificial Intelligence (AAAI) committee has announced the Best Paper and Runners Up awards. Let’s take a look at the papers that won the awards.</p>



<h3 class="wp-block-heading" id="h-best-papers"><strong>Best Papers</strong></h3>



<h4 class="wp-block-heading" id="h-1-informer-beyond-efficient-transformer-for-long-sequence-time-series-forecasting"><strong>1| Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting</strong></h4>



<p><strong>About:</strong> Informer is an efficient transformer-based model for Long Sequence Time-series Forecasting (LSTF). A team of researchers from UC Berkeley introduced this Transformer model to predict long sequences. Informer has three distinctive characteristics:</p>



<ul class="wp-block-list"><li>A ProbSparse Self-attention mechanism, which achieves O(Llog L) in time complexity and memory usage, has comparable performance on sequences’ dependency alignment.</li><li>The self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences.</li><li>The generative style decoder that predicts the long time-series sequences at one forward operation rather than step-by-step, which improves the inference speed of long-sequence predictions.</li></ul>



<h4 class="wp-block-heading" id="h-2-exploration-exploitation-in-multi-agent-learning-catastrophe-theory-meets-game-theory"><strong>2| Exploration-Exploitation in Multi-Agent Learning: Catastrophe Theory Meets Game Theory</strong></h4>



<p><strong>About:</strong> Exploration-exploitation is a powerful tool in multi-agent learning (MAL). A team of researchers from the Singapore University of Technology and Design studied a variant of stateless Q-learning with softmax or Boltzmann exploration, also termed Boltzmann Q-learning or smooth Q-learning (SQL). Boltzmann Q-learning is one of the most fundamental models of exploration-exploitation in MAL.</p>
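

<p>Softmax (Boltzmann) exploration itself is simple to sketch: the agent samples actions with probabilities proportional to exponentiated Q-values, with a temperature controlling how greedy it is. The snippet below is an illustrative sketch and not the paper's code.</p>



<pre class="wp-block-code"><code># Softmax (Boltzmann) exploration over Q-values for one stateless agent (illustrative).
import numpy as np

def boltzmann_action(q_values, temperature=1.0):
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs = prefs - prefs.max()                    # shift for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()    # higher Q-values get higher probability
    return np.random.choice(len(q_values), p=probs)

action = boltzmann_action([1.0, 0.5, 0.2], temperature=0.5)
</code></pre>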



<h4 class="wp-block-heading" id="h-3-mitigating-political-bias-in-language-models-through-reinforced-calibration"><strong>3| Mitigating Political Bias in Language Models through Reinforced Calibration&nbsp;</strong></h4>



<p><strong>About:</strong> Researchers from Dartmouth College, University of Texas and ProtagoLabs described metrics for measuring political bias in GPT-2 generation and proposed a reinforcement learning (RL) framework to reduce political biases in the generated text. Using rewards from word embeddings or a classifier, the RL framework guided the debiased generation without having access to the training data or requiring the model to be retrained. The researchers also proposed two bias metrics (indirect bias and direct bias) to quantify the political bias in language model generation.</p>



<h3 class="wp-block-heading" id="h-runners-up"><strong>Runners Up</strong></h3>



<h4 class="wp-block-heading" id="h-1-learning-from-extreme-bandit-feedback"><strong>1| Learning from eXtreme Bandit Feedback</strong></h4>



<p><strong>About:</strong>&nbsp;Researchers from Amazon and UC Berkeley studied the problem of batch learning from bandit feedback in extremely large action spaces. They introduced a selective importance sampling estimator (sIS) operating in a significantly more favorable bias-variance regime. The sIS estimator is obtained by performing importance sampling on the conditional expectation of the reward concerning a small subset of actions for each instance.</p>



<h4 class="wp-block-heading" id="h-2-self-attention-attribution-interpreting-information-interactions-inside-transformer"><strong>2| Self-Attention Attribution: Interpreting Information Interactions Inside Transformer</strong></h4>



<p><strong>About:</strong> Researchers from Microsoft and Beihang University proposed a self-attention attribution algorithm to interpret the information interactions inside the Transformer. As part of the research, the scientists first extracted the most salient dependencies in each layer to construct an attribution graph, which reveals the hierarchical interactions inside the Transformer. Next, they applied self-attention attribution to identify the important attention heads. Finally, they showed that the attribution results can be used as adversarial patterns to implement non-targeted attacks against BERT.</p>



<h4 class="wp-block-heading" id="h-3-dual-mandate-patrols-multi-armed-bandits-for-green-security"><strong>3| Dual-Mandate Patrols: Multi-Armed Bandits for Green Security</strong></h4>



<p><strong>About:&nbsp;</strong>Researchers from Harvard University and Carnegie Mellon University introduced LIZARD, an algorithm that accounts for decomposability of the reward function, smoothness of the decomposed reward function across features, monotonicity of rewards as patrollers exert more effort, and availability of historical data. According to them, LIZARD leverages both decomposability and Lipschitz continuity simultaneously, bridging the gap between combinatorial and Lipschitz bandits.</p>



<p>The post <a href="https://www.aiuniverse.xyz/which-papers-won-at-35th-aaai-conference-on-artificial-intelligence/">Which Papers Won At 35th AAAI Conference On Artificial Intelligence?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/which-papers-won-at-35th-aaai-conference-on-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A reflection on artificial intelligence singularity</title>
		<link>https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/</link>
					<comments>https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 01 Jul 2020 06:46:38 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[AI research]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[General AI]]></category>
		<category><![CDATA[PAPERS]]></category>
		<category><![CDATA[singularity]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9900</guid>

					<description><![CDATA[<p>Source: bdtechtalks.com Should you feel bad about pulling the plug on a robot or switch off an artificial intelligence algorithm? Not for the moment. But how about <a class="read-more-link" href="https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/">A reflection on artificial intelligence singularity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: bdtechtalks.com</p>



<p>Should you feel bad about pulling the plug on a robot or switching off an artificial intelligence algorithm? Not for the moment. But what about when our computers become as smart as us, or smarter?</p>



<p>Debates about the consequences of artificial general intelligence (AGI) are almost as old as the history of AI itself. Most discussions depict the future of artificial intelligence as either a <em>Terminator</em>-like apocalypse or a <em>Wall-E</em>-like utopia. But what is less discussed is how we will perceive, interact with, and accept artificial intelligence agents when they develop traits of life, intelligence, and consciousness.</p>



<p>In a recently published essay, Borna Jalsenjak, a scientist at the Zagreb School of Economics and Management, discusses super-intelligent AI and analogies between biological and artificial life. Titled “The Artificial Intelligence Singularity: What It Is and What It Is Not,” his work appears in <em>Guide to Deep Learning Basics</em>, a collection of papers and treatises that explore various historic, scientific, and philosophical aspects of artificial intelligence.</p>



<p>Jalsenjak takes us through the philosophical anthropological view of life and how it applies to AI systems that can evolve through their own manipulations. He argues that “thinking machines” will emerge when AI develops its own version of “life,” and leaves us with some food for thought about the more obscure and vague aspects of the future of artificial intelligence.</p>



<h3 class="wp-block-heading">AI singularity</h3>



<p>Singularity is a term that comes up often in discussions about general AI. And as with everything that has to do with AGI, there’s a lot of confusion and disagreement about what the singularity is. But most scientists and philosophers agree that it is a turning point where our AI systems become smarter than we are. Another important aspect of the singularity is time and speed: AI systems will reach a point where they can self-improve in a recurring and accelerating fashion.</p>



<p>“Said in a more succinct way, once there is an AI which is at the level of human beings and that AI can create a slightly more intelligent AI, and then that one can create an even more intelligent AI, and then the next one creates even more intelligent one and it continues like that until there is an AI which is remarkably more advanced than what humans can achieve,” Jalsenjak writes.</p>



<p>To be clear, the artificial intelligence technology we have today, known as narrow AI, is nowhere near achieving such a feat. Jalsenjak describes current AI systems as “domain-specific,” such as “AI which is great at making hamburgers but is not good at anything else.” On the other hand, the kind of algorithm at stake in discussions of the AI singularity is “AI that is not subject-specific, or for the lack of a better word, it is domainless and as such it is capable of acting in any domain,” Jalsenjak writes.</p>



<p>This is not a discussion about how and when we’ll reach AGI. That’s a different topic, and also the focus of much debate, with most scientists believing that human-level artificial intelligence is at least decades away. Jalsenjak instead speculates about how the identity of AI (and humans) will be defined <em>when</em> we actually get there, whether it be tomorrow or in a century.</p>



<h3 class="wp-block-heading">Is artificial intelligence alive?</h3>



<p>There’s a great tendency in the AI community to view machines as humans, especially as they develop capabilities that show signs of intelligence. While that is clearly an overestimation of today’s technology, Jalsenjak also reminds us that artificial general intelligence does not necessarily have to be a replication of the human mind.</p>



<p>“That there is no reason to think that advanced AI will have the same structure as human intelligence if it even ever happens, but since it is in human nature to present states of the world in a way that is closest to us, a certain degree of anthropomorphizing is hard to avoid,” he writes in his essay’s footnote.</p>



<p>One of the greatest differences between humans and current artificial intelligence technology is that while humans are “alive” (and we’ll get to what that means in a moment), AI algorithms are not.</p>



<p>“The state of technology today leaves no doubt that technology is not alive,” Jalsenjak writes, to which he adds, “What we can be curious about is if there ever appears a superintelligence such like it is being predicted in discussions on singularity it might be worthwhile to try and see if we can also consider it to be alive.”</p>



<p>Albeit not organic, such artificial life would have tremendous repercussions on how we perceive AI and act toward it.</p>



<h3 class="wp-block-heading">What would it take for AI to come alive?</h3>



<p>Drawing from concepts of philosophical anthropology, Jalsenjak notes that living beings can act autonomously and take care of themselves and their species, what is known as “immanent activity.”</p>



<p>“Now at least, no matter how advanced machines are, they in that regard always serve in their purpose only as extensions of humans,” Jalsenjak observes.</p>



<p>There are different levels to life, and as the trend shows, AI is slowly making its way toward becoming alive. According to philosophical anthropology, the first signs of life take shape when organisms develop toward a purpose, which is present in today’s goal-oriented AI. The fact that the AI is not “aware” of its goal and mindlessly crunches numbers toward reaching it seems to be irrelevant, Jalsenjak says, because we consider plants and trees as being alive even though they too do not have that sense of awareness.</p>



<p>Another key factor for being considered alive is a being’s ability to repair and improve itself, to the degree that its organism allows. It should also produce and take care of its offspring. This is something we see in trees, insects, birds, mammals, fish, and practically anything we consider alive. The laws of natural selection and evolution have forced every organism to develop mechanisms that allow it to learn and develop skills to adapt to its environment, survive, and ensure the survival of its species.</p>



<p>On child-rearing, Jalsenjak posits that AI reproduction does not necessarily run in parallel to that of other living beings. “Machines do not need offspring to ensure the survival of the species. AI could solve material deterioration problems with merely having enough replacement parts on hand to swap the malfunctioned (dead) parts with the new ones,” he writes. “Live beings reproduce in many ways, so the actual method is not essential.”</p>



<p>When it comes to self-improvement, things get a bit more subtle. Jalsenjak points out that there is already software that is capable of self-modification, even though the degree of self-modification varies between different software.</p>



<p>Today’s machine learning algorithms are, to a degree, capable of adapting their behavior to their environment. They tune their many parameters to data collected from the real world, and as the world changes, they can be retrained on new information. For instance, the coronavirus pandemic disrupted many AI systems that had been trained on our normal behavior. Among them are facial recognition algorithms that can no longer detect faces because people are wearing masks. These algorithms can now retune their parameters by training on images of mask-wearing faces. Clearly, this level of adaptation is very small when compared to the broad capabilities of humans and higher-level animals, but it would be comparable to, say, trees that adapt by growing deeper roots when they can’t find water at the surface of the ground.</p>



<p>An ideal self-improving AI, however, would be one that could create totally new algorithms that would bring fundamental improvements. This is called “recursive self-improvement” and would lead to an endless and accelerating cycle of ever-smarter AI. It could be the digital equivalent of the genetic mutations organisms go through over the span of many, many generations, though the AI would be able to perform it at a much faster pace.</p>



<p>Today, we have some mechanisms such as genetic algorithms and grid search that can improve the non-trainable components of machine learning algorithms (also known as hyperparameters). But the scope of change they can bring is very limited, and they still require a degree of manual work from a human developer. For instance, you can’t expect a recurrent neural network to turn into a Transformer through many mutations.</p>
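

<p>Grid search, for example, simply tries every combination of a few hand-picked hyperparameter values and keeps the best one; the sketch below uses a hypothetical train_and_score function as a stand-in for actually training a model.</p>



<pre class="wp-block-code"><code># Minimal grid search over non-trainable hyperparameters (illustrative;
# train_and_score is a hypothetical stand-in for training a model and
# returning its validation score).
from itertools import product

grid = {"learning_rate": [0.1, 0.01, 0.001], "hidden_units": [32, 64, 128]}

def train_and_score(learning_rate, hidden_units):
    # Placeholder: in practice this would train a model and return a metric.
    return 1.0 / (learning_rate * hidden_units)

best = max(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=lambda params: train_and_score(**params),
)
print(best)
</code></pre>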



<p>Recursive self-improvement, however, will give AI the “possibility to replace the algorithm that is being used altogether,” Jalsenjak notes. “This last point is what is needed for the singularity to occur.”</p>



<p>By analogy, judging by these characteristics, superintelligent AIs can be considered alive, Jalsenjak concludes, invalidating the claim that AI is merely an extension of human beings. “They will have their own goals, and probably their rights as well,” he says. “Humans will, for the first time, share Earth with an entity which is at least as smart as they are and probably a lot smarter.”</p>



<p>Would you still be able to unplug the robot without feeling guilt?</p>



<h3 class="wp-block-heading">Being alive is not enough</h3>



<p>At the end of his essay, Jalsenjak acknowledges that the reflection on artificial life leaves many more questions. “Are characteristics described here regarding live beings enough for something to be considered alive or are they just necessary but not sufficient?” he asks.</p>



<p>Having just read <em>I Am a Strange Loop</em> by philosopher and scientist Douglas Hofstadter, I can definitely say no. Identity, self-awareness, and consciousness are other concepts that discriminate living beings from one another. For instance, is a mindless paperclip-builder robot that is constantly improving its algorithms to turn the entire universe into paperclips alive and deserving of its own rights?</p>



<p>Free will is also an open question. “Humans are co-creators of themselves in a sense that they do not entirely give themselves existence but do make their existence purposeful and do fulfill that purpose,” Jalsenjak writes. “It is not clear will future AIs have the possibility of a free will.”</p>



<p>And finally, there is the problem of the ethics of superintelligent AI. This is a broad topic that includes the kinds of moral principles AI should have, the moral principles humans should have toward AI, and how AIs should view their relations with humans.</p>



<p>The AI community often dismisses such topics, pointing to the clear limits of current deep learning systems and the far-fetched notion of achieving general AI.</p>
<p>The post <a href="https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/">A reflection on artificial intelligence singularity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Using Rotation, Translation, and Cropping to Boost Generalization in Deep Reinforcement Learning Models</title>
		<link>https://www.aiuniverse.xyz/using-rotation-translation-and-cropping-to-boost-generalization-in-deep-reinforcement-learning-models/</link>
					<comments>https://www.aiuniverse.xyz/using-rotation-translation-and-cropping-to-boost-generalization-in-deep-reinforcement-learning-models/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 06 Feb 2020 05:23:37 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[PAPERS]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6567</guid>

					<description><![CDATA[<p>Source: syncedreview.com “Generalization” is an AI buzzword these days for good reason: most scientists would love to see the models they’re training in simulations and video game <a class="read-more-link" href="https://www.aiuniverse.xyz/using-rotation-translation-and-cropping-to-boost-generalization-in-deep-reinforcement-learning-models/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/using-rotation-translation-and-cropping-to-boost-generalization-in-deep-reinforcement-learning-models/">Using Rotation, Translation, and Cropping to Boost Generalization in Deep Reinforcement Learning Models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: syncedreview.com</p>



<p>“Generalization” is an AI buzzword these days for good reason: most scientists would love to see the models they’re training in simulations and video game environments evolve and expand to take on meaningful real-world challenges, for example in safety, conservation, and medicine.</p>



<p>One research area concerned with this is deep reinforcement learning (DRL), which combines deep learning architectures with reinforcement learning algorithms to enable AI agents to learn the best possible actions for attaining their goals in virtual environments. DRL has been widely applied in games and robotics.</p>



<p>Such DRL agents have an impressive track record on StarCraft II and Dota 2. But because they were trained in fixed environments, studies suggest DRL agents can fail to generalize to even slight variations of their training environments.</p>



<p>In a new paper, researchers from New York University and Modl.ai, a company applying machine learning to game development, suggest that simple spatial processing methods such as rotation, translation, and cropping could help increase model generality.</p>



<p>The ability to learn directly from the pixels output by various games was one of the reasons for DRL’s surge in popularity over the last few years. But many researchers have begun to question what the models actually learn from those pixels. One way to investigate what models trained with DRL learn from pixel data is to study their generalization capacity.</p>



<p>Starting from the hypothesis that DRL cannot easily learn generalizable policies on games using a static third-person perspective, the researchers discovered that the lack of generalization is partly due to the input representations. This means that while DRL models for games with static third-person representations do not tend to learn generalizable policies, they have a better chance of doing so if the game is “seen” from a more agent-centric perspective.</p>



<p>Because an agent’s immediate surroundings can greatly affect its ability to learn in DRL scenarios, the team proposed providing agents with a first-person view. They applied three basic image processing techniques — rotating, translating, and cropping — to the observable areas around agents.</p>



<p>Rotation keeps the agents always facing forward, so any action they take always happens from the same perspective. Translation then orients the observations around the agent so it is always at the center of its view. Finally, cropping shrinks observations down to just local information around the agent.</p>
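

<p>The sketch below shows roughly what these three transforms might look like on a simple 2D grid observation; it is an illustrative numpy version, not the researchers' implementation, and the grid, agent position, and facing direction are assumed inputs.</p>



<pre class="wp-block-code"><code># Illustrative agent-centric observation: translate, rotate, then crop a 2D grid
# (not the paper's implementation; agent_pos and facing are assumed inputs).
import numpy as np

def agent_centric_view(grid, agent_pos, facing, crop=5):
    h, w = grid.shape
    # Translation: shift the map so the agent sits at the center.
    shifted = np.roll(grid, (h // 2 - agent_pos[0], w // 2 - agent_pos[1]), axis=(0, 1))
    # Rotation: rotate in 90-degree steps so the agent always faces the same way.
    rotated = np.rot90(shifted, k=facing)
    # Cropping: keep only the local window around the (now centered) agent.
    ch, cw = rotated.shape[0] // 2, rotated.shape[1] // 2
    half = crop // 2
    return rotated[ch - half : ch + half + 1, cw - half : cw + half + 1]
</code></pre>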



<p>In their experiments, the researchers observed that these three simple transformations enable better learning for agents, and the policies that are learned generalize much better to new environments.</p>



<p>The technique has so far only been tested on two game variants: a GVGAI port of the dungeon system in The Legend of Zelda and a simplified version of the game, Simple Zelda. For future work, the researchers intend to continue testing the generalization effects on different games and to improve their understanding of the effects of each transformation.</p>
<p>The post <a href="https://www.aiuniverse.xyz/using-rotation-translation-and-cropping-to-boost-generalization-in-deep-reinforcement-learning-models/">Using Rotation, Translation, and Cropping to Boost Generalization in Deep Reinforcement Learning Models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/using-rotation-translation-and-cropping-to-boost-generalization-in-deep-reinforcement-learning-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
