<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI research Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/ai-research/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/ai-research/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Wed, 14 Oct 2020 05:13:34 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Facebook Open-Sources Machine-Learning Privacy Library Opacus</title>
		<link>https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/</link>
					<comments>https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 14 Oct 2020 05:13:29 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI research]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[machine-learning]]></category>
		<category><![CDATA[PyTorch]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12186</guid>

					<description><![CDATA[<p>Source: infoq.com Facebook AI Research (FAIR) has announced the release of Opacus, a high-speed library for applying differential privacy techniques when training deep-learning models using the PyTorch <a class="read-more-link" href="https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/">Facebook Open-Sources Machine-Learning Privacy Library Opacus</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: infoq.com</p>



<p>Facebook AI Research (FAIR) has announced the release of Opacus, a high-speed library for applying differential privacy techniques when training deep-learning models using the PyTorch framework. Opacus can achieve an order-of-magnitude speedup compared to other privacy libraries.</p>



<p>The library was described on the FAIR blog. Opacus provides an API and implementation of a PrivacyEngine, which attaches directly to the PyTorch optimizer during training. By using hooks in the PyTorch Autograd component, Opacus can efficiently calculate per-sample gradients, a key operation for differential privacy. Training produces a standard PyTorch model which can be deployed without changing existing model-serving code. According to FAIR,</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>[W]e hope to provide an easier path for researchers and engineers to adopt differential privacy in ML, as well as to accelerate DP research in the field.</p></blockquote>



<p>Differential privacy (DP) is a mathematical definition of data privacy. The core concept of DP is to add noise to a query operation on a dataset so that adding or removing a single data element has only a small, bounded effect on the distribution of results from that query. The strength of that bound is quantified by a parameter known as the privacy budget. Each successive query expends part of the total privacy budget of the dataset; once the budget is exhausted, further queries cannot be performed while still guaranteeing privacy.</p>
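<p>As an illustration (not from the article), the classic Laplace mechanism answers a counting query under DP by adding noise scaled to 1/epsilon, the budget spent by the query. The dataset, predicate, and helper function below are hypothetical; this is a minimal pure-Python sketch, not production DP code:</p>

```python
import random

def noisy_count(records, predicate, epsilon, rng):
    """Answer 'how many records match predicate?' with epsilon-DP.

    A counting query changes by at most 1 when one record is added or
    removed, so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Laplace(0, 1/epsilon) noise as the difference of two exponentials.
    noise = rng.expovariate(epsilon) - rng.expovariate(epsilon)
    return true_count + noise

ages = [23, 35, 41, 52, 67, 29, 71]
answer = noisy_count(ages, lambda a: a >= 40, epsilon=0.5,
                     rng=random.Random(42))
```

<p>A smaller epsilon means more noise and stronger privacy; repeating the query spends additional budget each time.</p>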



<p>When this concept is applied to machine learning, it is typically applied during the training step, effectively guaranteeing that the model does not learn &#8220;too much&#8221; about specific input samples. Most deep-learning frameworks train with stochastic gradient descent (SGD), so the privacy-preserving version is called DP-SGD. During the back-propagation step, normal SGD computes a single gradient tensor for an entire input &#8220;minibatch&#8221;, which is then used to update model parameters. DP-SGD, however, requires computing the gradient for each individual sample in the minibatch. The implementation of this step is the key to the speed gains in Opacus.</p>



<p>For computing the individual gradients, Opacus uses an efficient algorithm developed by Ian Goodfellow, inventor of the generative adversarial network (GAN) model. Applying this technique, Opacus computes the gradient for each input sample. Each gradient is clipped to a maximum magnitude, ensuring privacy for outliers in the data. The gradients are aggregated to a single tensor, and noise is added to the result before model parameters are updated. Because each training step constitutes a &#8220;query&#8221; of the input data, and thus an expenditure of privacy budget, Opacus tracks this, providing real-time monitoring and the option to stop training when the budget is expended.</p>
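<p>The clip-sum-noise aggregation described above can be sketched in plain Python. This is an illustrative toy, not Opacus code: gradients are represented as flat lists and the noise model is simplified to a single Gaussian draw per coordinate.</p>

```python
import math
import random

def dp_sgd_aggregate(per_sample_grads, clip_norm, noise_multiplier, rng):
    """DP-SGD gradient aggregation: clip each sample's gradient,
    sum the clipped gradients, add Gaussian noise, then average."""
    n = len(per_sample_grads)
    dim = len(per_sample_grads[0])
    summed = [0.0] * dim
    for grad in per_sample_grads:
        # Clip this sample's gradient to L2 norm <= clip_norm,
        # bounding the influence of any single (outlier) sample.
        norm = math.sqrt(sum(x * x for x in grad))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            summed[i] += grad[i] * scale
    # Noise standard deviation is proportional to the clipping bound.
    sigma = noise_multiplier * clip_norm
    return [(summed[i] + rng.gauss(0.0, sigma)) / n for i in range(dim)]

# Two per-sample gradients; the first is clipped from norm 5 down to 1.
update = dp_sgd_aggregate([[3.0, 4.0], [0.3, 0.4]],
                          clip_norm=1.0, noise_multiplier=1.1,
                          rng=random.Random(0))
```

<p>Each call to this aggregation step corresponds to one &#8220;query&#8221; of the training data and thus one expenditure of privacy budget.</p>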



<p>In developing Opacus, FAIR and the PyTorch team collaborated with OpenMined, an open-source community dedicated to developing privacy techniques for ML and AI. OpenMined had previously contributed to Facebook&#8217;s CrypTen, a framework for ML privacy research, and developed its own projects, including a DP library called PySyft and a federated-learning platform called PyGrid. According to FAIR&#8217;s blog post, Opacus will now become one of the core dependencies of OpenMined&#8217;s libraries. PyTorch&#8217;s major competitor, Google&#8217;s deep-learning framework TensorFlow, released a DP library in early 2019. However, the library is not compatible with the newer 2.x versions of TensorFlow.</p>
<p>The post <a href="https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/">Facebook Open-Sources Machine-Learning Privacy Library Opacus</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Carbonate’s deep learning nodes: Building the future of AI research</title>
		<link>https://www.aiuniverse.xyz/carbonates-deep-learning-nodes-building-the-future-of-ai-research/</link>
					<comments>https://www.aiuniverse.xyz/carbonates-deep-learning-nodes-building-the-future-of-ai-research/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 23 Jul 2020 06:41:12 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI research]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Future]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10406</guid>

					<description><![CDATA[<p>Source: itnews.iu.edu From decoding genomes to analyzing the contents of thousands of images and videos, artificial intelligence (AI) is redefining research. At Indiana University (IU), the Deep <a class="read-more-link" href="https://www.aiuniverse.xyz/carbonates-deep-learning-nodes-building-the-future-of-ai-research/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/carbonates-deep-learning-nodes-building-the-future-of-ai-research/">Carbonate’s deep learning nodes: Building the future of AI research</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: itnews.iu.edu</p>



<p>From decoding genomes to analyzing the contents of thousands of images and videos, artificial intelligence (AI) is redefining research. At Indiana University (IU), the Deep Learning (DL) Resource on Carbonate provides processing power and specialized support for over 100 projects across a wide range of fields that engage the potential of AI.</p>



<p>Starting in June 2019 with a 12-node expansion of the Carbonate supercomputing cluster, the DL resource has delivered 759,688 core hours and 92,783 GPU hours for over 130 projects. “With its uniquely capable V100 GPUs, this resource gives IU’s researchers the ability to get ahead of the curve with their research using AI techniques,” said Scott Michael, manager of Research Applications and Deep Learning. The expansion is part of Research Technologies’ effort to meet new interest in AI. “We wanted to capitalize on that interest and give people who were very keen on putting in NSF or NIH proposals that involved deep learning a platform to conduct that research on,” said Michael.</p>



<p>Deep learning nodes enable researchers to develop “neural networks” or complex ways of processing large quantities of information. Samantha Wood, a researcher in the Informatics Department, is creating a visual-based toolkit for researchers without a computer science background to enable more access to AI and Deep Reinforcement Learning (DRL). “DRL models are so successful because they mimic the biological process of learning,” said Wood in her proposal. “Agents start with untrained ‘brains,’ then develop their own curriculum of learning through their interactions with the environment. The DRL agents are equipped with a full ‘brain’ for processing stimuli, making associations, encoding memories, and performing actions. As a result, these models can also function as biologically-inspired, formal models of cognition,” she said.</p>



<p>Similar to a human brain, AI’s neural networks are adaptable to virtually any set of information. “If you are interested in analyzing stop signs, for example,” said Michael, “then you can set up rules within the neural network that cover all angles of viewing, illumination, and different settings in a much more comprehensive way. So while humans can scan through thousands of pictures and classify them, once you have the trained model, then the prediction is much faster and more efficient.”</p>



<p>This technique also has applications in genomics. “If you were trying to find a very particular mutation in a genome and you knew precisely its location, then it would be less computationally expensive to just scan the genome and detect that particular mutation,” Michael explained. Still, the genome is much more complex than a stop sign, and requires more complex models to identify mutations accurately. Michael says ideally, a copy of a genome could be uploaded to a neural network, and it would provide an accurate reading of mutations. Researchers creating such networks on Carbonate’s DL resource haven’t completed a trained model yet, but it is on the horizon. “The end goal is to provide services that very accurately and rapidly classify data,” he said.</p>
<p>The post <a href="https://www.aiuniverse.xyz/carbonates-deep-learning-nodes-building-the-future-of-ai-research/">Carbonate’s deep learning nodes: Building the future of AI research</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/carbonates-deep-learning-nodes-building-the-future-of-ai-research/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A reflection on artificial intelligence singularity</title>
		<link>https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/</link>
					<comments>https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 01 Jul 2020 06:46:38 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[AI research]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[General AI]]></category>
		<category><![CDATA[PAPERS]]></category>
		<category><![CDATA[singularity]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9900</guid>

					<description><![CDATA[<p>Source: bdtechtalks.com Should you feel bad about pulling the plug on a robot or switch off an artificial intelligence algorithm? Not for the moment. But how about <a class="read-more-link" href="https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/">A reflection on artificial intelligence singularity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: bdtechtalks.com</p>



<p>Should you feel bad about pulling the plug on a robot or switching off an artificial intelligence algorithm? Not for the moment. But how about when our computers become as smart as—or smarter than—us?</p>



<p>Debates about the consequences of artificial general intelligence (AGI) are almost as old as the history of AI itself. Most discussions depict the future of artificial intelligence as either a <em>Terminator</em>-like apocalypse or a <em>Wall-E</em>-like utopia. But what’s less discussed is how we will perceive, interact with, and accept artificial intelligence agents when they develop traits of life, intelligence, and consciousness.</p>



<p>In a recently published essay, Borna Jalsenjak, a scientist at the Zagreb School of Economics and Management, discusses super-intelligent AI and analogies between biological and artificial life. Titled “The Artificial Intelligence Singularity: What It Is and What It Is Not,” his work appears in <em>Guide to Deep Learning Basics</em>, a collection of papers and treatises that explore various historic, scientific, and philosophical aspects of artificial intelligence.</p>



<p>Jalsenjak takes us through the philosophical anthropological view of life and how it applies to AI systems that can evolve through their own manipulations. He argues that “thinking machines” will emerge when AI develops its own version of “life,” and leaves us with some food for thought about the more obscure and vague aspects of the future of artificial intelligence.</p>



<h3 class="wp-block-heading">AI singularity</h3>



<p>Singularity is a term that comes up often in discussions about general AI. And as with everything that has to do with AGI, there’s a lot of confusion and disagreement about what the singularity is. But most scientists and philosophers agree on one key point: it is a turning point where our AI systems become smarter than we are. Another important aspect of the singularity is time and speed: AI systems will reach a point where they can self-improve in a recurring and accelerating fashion.</p>



<p>“Said in a more succinct way, once there is an AI which is at the level of human beings and that AI can create a slightly more intelligent AI, and then that one can create an even more intelligent AI, and then the next one creates even more intelligent one and it continues like that until there is an AI which is remarkably more advanced than what humans can achieve,” Jalsenjak writes.</p>



<p>To be clear, the artificial intelligence technology we have today, known as narrow AI, is nowhere near achieving such a feat. Jalsenjak describes current AI systems as “domain-specific,” such as “AI which is great at making hamburgers but is not good at anything else.” The kind of algorithm at issue in discussions of the AI singularity, on the other hand, is “AI that is not subject-specific, or for the lack of a better word, it is domainless and as such it is capable of acting in any domain,” Jalsenjak writes.</p>



<p>This is not a discussion about how and when we’ll reach AGI. That’s a different topic, and also a focus of much debate, with most scientists believing that human-level artificial intelligence is at least decades away. Jalsenjak instead speculates about how the identity of AI (and humans) will be defined <em>when</em> we actually get there, whether it be tomorrow or in a century.</p>



<h3 class="wp-block-heading">Is artificial intelligence alive?</h3>



<p>There’s a great tendency in the AI community to view machines as humans, especially as they develop capabilities that show signs of intelligence. While that is clearly an overestimation of today’s technology, Jalsenjak also reminds us that artificial general intelligence does not necessarily have to be a replication of the human mind.</p>



<p>“That there is no reason to think that advanced AI will have the same structure as human intelligence if it even ever happens, but since it is in human nature to present states of the world in a way that is closest to us, a certain degree of anthropomorphizing is hard to avoid,” he writes in his essay’s footnote.</p>



<p>One of the greatest differences between humans and current artificial intelligence technology is that while humans are “alive” (and we’ll get to what that means in a moment), AI algorithms are not.</p>



<p>“The state of technology today leaves no doubt that technology is not alive,” Jalsenjak writes, to which he adds, “What we can be curious about is if there ever appears a superintelligence such like it is being predicted in discussions on singularity it might be worthwhile to try and see if we can also consider it to be alive.”</p>



<p>Albeit not organic, such artificial life would have tremendous repercussions on how we perceive AI and act toward it.</p>



<h3 class="wp-block-heading">What would it take for AI to come alive?</h3>



<p>Drawing from concepts of philosophical anthropology, Jalsenjak notes that living beings can act autonomously and take care of themselves and their species, what is known as “immanent activity.”</p>



<p>“Now at least, no matter how advanced machines are, they in that regard always serve in their purpose only as extensions of humans,” Jalsenjak observes.</p>



<p>There are different levels to life, and as the trend shows, AI is slowly making its way toward becoming alive. According to philosophical anthropology, the first signs of life take shape when organisms develop toward a purpose, which is present in today’s goal-oriented AI. The fact that the AI is not “aware” of its goal and mindlessly crunches numbers toward reaching it seems to be irrelevant, Jalsenjak says, because we consider plants and trees to be alive even though they too lack that sense of awareness.</p>



<p>Another key factor for being considered alive is a being’s ability to repair and improve itself, to the degree that its organism allows. It should also produce and take care of its offspring. This is something we see in trees, insects, birds, mammals, fish, and practically anything we consider alive. The laws of natural selection and evolution have forced every organism to develop mechanisms that allow it to learn and develop skills to adapt to its environment, survive, and ensure the survival of its species.</p>



<p>On child-rearing, Jalsenjak posits that AI reproduction does not necessarily run in parallel to that of other living beings. “Machines do not need offspring to ensure the survival of the species. AI could solve material deterioration problems with merely having enough replacement parts on hand to swap the malfunctioned (dead) parts with the new ones,” he writes. “Live beings reproduce in many ways, so the actual method is not essential.”</p>



<p>When it comes to self-improvement, things get a bit more subtle. Jalsenjak points out that there is already software capable of self-modification, even though the degree of self-modification varies between programs.</p>



<p>Today’s machine learning algorithms are, to a degree, capable of adapting their behavior to their environment. They tune their many parameters to data collected from the real world, and as the world changes, they can be retrained on new information. For instance, the coronavirus pandemic disrupted many AI systems that had been trained on our normal behavior. Among them are facial recognition algorithms that can no longer detect faces because people are wearing masks. These algorithms can now retune their parameters by training on images of mask-wearing faces. Clearly, this level of adaptation is very small when compared to the broad capabilities of humans and higher-level animals, but it would be comparable to, say, trees that adapt by growing deeper roots when they can’t find water at the surface of the ground.</p>



<p>An ideal self-improving AI, however, would be one that could create totally new algorithms that would bring fundamental improvements. This is called “recursive self-improvement” and would lead to an endless and accelerating cycle of ever-smarter AI. It could be the digital equivalent of the genetic mutations organisms go through over the span of many, many generations, though the AI would be able to perform it at a much faster pace.</p>



<p>Today, we have some mechanisms such as genetic algorithms and grid search that can improve the non-trainable components of machine learning algorithms (also known as hyperparameters). But the scope of change they can bring is very limited and still requires a degree of manual work from a human developer. For instance, you can’t expect a recurrent neural network to turn into a Transformer through many mutations.</p>
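<p>Grid search, one of the mechanisms mentioned above, is easy to sketch: enumerate every combination of hyperparameter values and keep the best-scoring one. The scoring function below is a hypothetical stand-in for a real validation run:</p>

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Try every hyperparameter combination; return the best-scoring
    one (higher score is better)."""
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[name] for name in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Stand-in validation score that peaks at lr=0.1, batch_size=32.
def toy_score(p):
    return -abs(p["lr"] - 0.1) - abs(p["batch_size"] - 32) / 100

best, _ = grid_search({"lr": [0.01, 0.1, 1.0],
                       "batch_size": [16, 32, 64]}, toy_score)
```

<p>Note the combinatorial cost: the grid grows multiplicatively with each added hyperparameter, which is one reason the scope of such search is so limited.</p>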



<p>Recursive self-improvement, however, will give AI the “possibility to replace the algorithm that is being used altogether,” Jalsenjak notes. “This last point is what is needed for the singularity to occur.”</p>



<p>By analogy, judged by the characteristics described above, superintelligent AIs can be considered alive, Jalsenjak concludes, invalidating the claim that AI is an extension of human beings. “They will have their own goals, and probably their rights as well,” he says. “Humans will, for the first time, share Earth with an entity which is at least as smart as they are and probably a lot smarter.”</p>



<p>Would you still be able to unplug the robot without feeling guilt?</p>



<h3 class="wp-block-heading">Being alive is not enough</h3>



<p>At the end of his essay, Jalsenjak acknowledges that the reflection on artificial life leaves many more questions. “Are characteristics described here regarding live beings enough for something to be considered alive or are they just necessary but not sufficient?” he asks.</p>



<p>Having just read <em>I Am a Strange Loop</em> by philosopher and scientist Douglas Hofstadter, I can definitely say no. Identity, self-awareness, and consciousness are other concepts that discriminate living beings from one another. For instance, is a mindless paperclip-builder robot that is constantly improving its algorithms to turn the entire universe into paperclips alive and deserving of its own rights?</p>



<p>Free will is also an open question. “Humans are co-creators of themselves in a sense that they do not entirely give themselves existence but do make their existence purposeful and do fulfill that purpose,” Jalsenjak writes. “It is not clear will future AIs have the possibility of a free will.”</p>



<p>And finally, there is the problem of the ethics of superintelligent AI. This is a broad topic that includes the kinds of moral principles AI should have, the moral principles humans should have toward AI, and how AIs should view their relations with humans.</p>



<p>The AI community often dismisses such topics, pointing to the clear limits of current deep learning systems and the far-fetched notion of achieving general AI.</p>
<p>The post <a href="https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/">A reflection on artificial intelligence singularity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/a-reflection-on-artificial-intelligence-singularity/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Facebook’s Ethics In AI Research Awards: Who Are The Winners From India?</title>
		<link>https://www.aiuniverse.xyz/facebooks-ethics-in-ai-research-awards-who-are-the-winners-from-india/</link>
					<comments>https://www.aiuniverse.xyz/facebooks-ethics-in-ai-research-awards-who-are-the-winners-from-india/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 24 Sep 2019 12:55:04 +0000</pubDate>
				<category><![CDATA[AI-ONE]]></category>
		<category><![CDATA[AI research]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[India]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4567</guid>

					<description><![CDATA[<p>Source: inc42.com To encourage research on artificial intelligence (AI) ethics, Silicon Valley-headquartered Facebook&#160;said that it has now selected six projects from India that will focus on three <a class="read-more-link" href="https://www.aiuniverse.xyz/facebooks-ethics-in-ai-research-awards-who-are-the-winners-from-india/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/facebooks-ethics-in-ai-research-awards-who-are-the-winners-from-india/">Facebook’s Ethics In AI Research Awards: Who Are The Winners From India?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: inc42.com</p>



<p>To encourage research on artificial intelligence (AI) ethics, Silicon Valley-headquartered Facebook&nbsp;said that it has now selected six projects from India that will focus on three key areas – governance, cultural diversity and operationalising ethics.</p>



<p>The company announced the Ethics in AI Research awards in June with a regional focus on India. It had said that the proposed budget should be between INR 10 Lakh and INR 20 Lakh. The company also said that AI technological developments pose intricate and complex ethical questions that the industry alone cannot answer.</p>



<p>Therefore, Facebook sought research questions in the application of AI from independent academic research institutions along with the companies building and deploying the technology.</p>



<p>The challenge was open to academic institutions, think tanks, and research organisations registered and operational in India. Solutions were sought across three tracks: operationalizing ethics / explainability / fairness; governance; and cultural diversity.</p>



<p>The shortlisted candidates were reviewed by judges such as Professor Sudeshna Sarkar (IIT Kharagpur), Sunil Abraham (CIS), and Bharath Visweswariah (Omidyar Network). Here are the winners, according to reports:</p>



<ol class="wp-block-list"><li><strong>Operationalizing Ethics / Explainability / Fairness</strong></li></ol>



<ul class="wp-block-list"><li>Patient-Centric Frameworks for the Evaluation of AI-Enabled Medical Tests</li></ul>



<p>PI: Amit Sethi, Indian Institute of Technology Bombay (IIT Bombay)</p>



<p>Collaborators: Swapnil Rane and Zakia Khan, Tata Memorial Centre</p>



<ul class="wp-block-list"><li>Targeted Bias in Indian Media Outlets</li></ul>



<p>PI: Animesh Mukherjee, Indian Institute of Technology Kharagpur (IIT Kharagpur)</p>



<p>Collaborators: Pawan Goyal and Souvic Chakraborty, IIT Kharagpur</p>



<ol class="wp-block-list" start="2"><li><strong>Governance</strong></li></ol>



<ul class="wp-block-list"><li>Ethical Implications of Delegating Decision-making Journey to AI Systems</li></ul>



<p>PI: Dr Rahul De’, Indian Institute of Management Bangalore (IIM Bangalore)</p>



<p>Collaborator: Sai Dattathrani, IIM Bangalore</p>



<ul class="wp-block-list"><li>A ‘Public Law of Information’ for India</li></ul>



<p>PI: Sudhir Krishnaswamy, Centre for Law and Policy Research</p>



<ol class="wp-block-list" start="3"><li><strong>Cultural Diversity</strong></li></ol>



<ul class="wp-block-list"><li>Mitigating Bias in Face Recognition for Vast Regional Diversity in India</li></ul>



<p>PI: Richa Singh, Indraprastha Institute of Information Technology-Delhi (IIIT-Delhi)</p>



<p>Collaborator: Mayank Vatsa, IIIT-Delhi</p>



<ul class="wp-block-list"><li>Regulatory Impact Assessment of the National AI Market Place of India</li></ul>



<p>PI: Varadharajan Sridhar, International Institute of Information Technology Bangalore (IIIT Bangalore)</p>



<p>Collaborator: Shrisha Rao, IIIT Bangalore</p>



<p>According to a Gartner survey, AI adoption in organisations has tripled in the past year, and AI is a top priority for CIOs. Gartner also expects AI to be one of the top workloads driving infrastructure decisions through 2023.</p>



<p>Major unicorns in the Indian startup ecosystem have acquired at least one AI company in the last two years, an indicator that AI and machine learning play a key role in sustaining competitive advantage and delivering a superior user experience.</p>
<p>The post <a href="https://www.aiuniverse.xyz/facebooks-ethics-in-ai-research-awards-who-are-the-winners-from-india/">Facebook’s Ethics In AI Research Awards: Who Are The Winners From India?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/facebooks-ethics-in-ai-research-awards-who-are-the-winners-from-india/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Microsoft offers learning Python programming for free</title>
		<link>https://www.aiuniverse.xyz/microsoft-offers-learning-python-programming-for-free/</link>
					<comments>https://www.aiuniverse.xyz/microsoft-offers-learning-python-programming-for-free/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 23 Sep 2019 12:42:41 +0000</pubDate>
				<category><![CDATA[Microsoft Azure Machine Learning]]></category>
		<category><![CDATA[AI research]]></category>
		<category><![CDATA[Azure Machine Learning]]></category>
		<category><![CDATA[Learning]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[Programming]]></category>
		<category><![CDATA[Python]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4557</guid>

					<description><![CDATA[<p>Source: torringtontribune.com A new 44-part video series called&#160;‘Python for Beginners’&#160;is being offered on YouTube by Microsoft. The Python series, consisting of three to four minute lessons. are <a class="read-more-link" href="https://www.aiuniverse.xyz/microsoft-offers-learning-python-programming-for-free/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-offers-learning-python-programming-for-free/">Microsoft offers learning Python programming for free</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: torringtontribune.com</p>



<p>A new 44-part video series called&nbsp;‘Python for Beginners’&nbsp;is being offered on YouTube by Microsoft. The series consists of three-to-four-minute lessons taught by two staff members who love programming and teaching.</p>



<p>But this series of free lessons isn’t really for total beginners. Microsoft assumes that people who sign up have previously done some programming in JavaScript, or may have used&nbsp;‘Scratch’, a visual programming language for kids developed at MIT.</p>



<p>The purpose of this free training is to inspire beginners to build their own machine-learning apps, or even to automate processes on a desktop computer.</p>



<p>To assist students working through&nbsp;‘Python for Beginners’, Microsoft has also provided additional resources, including slides and code samples, on GitHub.</p>



<p>The Microsoft staff members teaching the&nbsp;‘Python for Beginners’&nbsp;series are&nbsp;Christopher Harrison,&nbsp;a senior program manager at Microsoft, and Susan Ibach, a business development manager in Microsoft’s AI Gaming unit.</p>



<p>Microsoft has several reasons for wanting more people to know Python, which is already very popular and easy to learn. Python has many libraries that let app developers interface with machine-learning frameworks such as Google’s TensorFlow and Microsoft’s Cognitive Toolkit (CNTK).</p>



<p>Microsoft has also built better support for Python into its Visual Studio Code (VS Code) editor, so developers can use VS Code on a local PC to edit code stored on remote machines, in containers, or under the Windows Subsystem for Linux (WSL).</p>



<p>In Microsoft’s marketplace for developers, the most popular extension is its own Python extension for VS Code, and VS Code itself has become hugely popular among developers everywhere. VS Code is also available through the popular Anaconda Python distribution, part of the company’s focus on AI.</p>



<p>Microsoft’s main motivation for offering free Python training, though, is to expand the number of Python developers who would use Azure to build AI apps. Azure&nbsp;Machine Learning Studio&nbsp;already has built-in support for Python, and last August Microsoft announced complete Azure Machine Learning support for PyTorch 1.2, a machine-learning framework for Python from Facebook’s AI research team.</p>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-offers-learning-python-programming-for-free/">Microsoft offers learning Python programming for free</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/microsoft-offers-learning-python-programming-for-free/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why India Needs a Strategic Artificial Intelligence Vision</title>
		<link>https://www.aiuniverse.xyz/why-india-needs-a-strategic-artificial-intelligence-vision/</link>
					<comments>https://www.aiuniverse.xyz/why-india-needs-a-strategic-artificial-intelligence-vision/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 21 Jul 2017 08:07:24 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI research]]></category>
		<category><![CDATA[AI technologies]]></category>
		<category><![CDATA[AI Vision]]></category>
		<category><![CDATA[Strategic]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=222</guid>

					<description><![CDATA[<p>Source &#8211; thewire.in China’s attention to artificial intelligence (AI) based technologies and machine learning in the US, according to a recent Reuters report, has the US quite concerned. China has been <a class="read-more-link" href="https://www.aiuniverse.xyz/why-india-needs-a-strategic-artificial-intelligence-vision/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-india-needs-a-strategic-artificial-intelligence-vision/">Why India Needs a Strategic Artificial Intelligence Vision</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>thewire.in</strong></p>
<p>China’s attention to artificial intelligence (AI) based technologies and machine learning in the US, according to a recent Reuters report, has the US quite concerned. China has been heavily investing in American AI start-ups, alarming the US government enough for it to seriously consider strengthening its existing strategic foreign investment regulatory mechanism – the Committee on Foreign Investment in the United States (CFIUS).</p>
<p>The CFIUS was most recently used by the Obama administration to block Chinese acquisitions of American chip manufacturing companies.</p>
<p>China’s focus on AI research and development is a calculated move clearly manifested in the intensity of its domestic investments. Large budgets have been allocated to AI advancements within its borders, prioritising cutting-edge fields such as robotics, swarm technology and machine learning. These technologies have immense potential to revolutionise warfare and change future security environments. China’s investments in its universities, research laboratories and companies include huge budgets and state-of-the-art infrastructure, attracting highly skilled researchers and practitioners from around the world, while developing their own workforce in AI and robotics research and development.</p>
<p>India should immediately take note of these exponential developments in its neighbourhood, as its AI capabilities are far inferior to those of the US and China. The present trajectory of AI advancement indicates that future economies and national security will be defined by it, making it among a handful of technologies that will shape global politics.</p>
<p>At present, the bulk of leading AI research is conducted or financed by a few American or Chinese companies, allowing them, and thereby their respective governments, to control access to the technology. If India were to leverage the potential of AI, it is necessary for the government to focus on three basic steps.</p>
<p><strong>Scope of AI in India </strong></p>
<p>The mapping of India’s existing AI capabilities with a comprehensive survey of every AI focused establishment in the country would be a good place to start. It is important that these assessments record the relatively numerous small start-ups with appropriate knowledge and expertise working in the field. Such detailed mapping would provide accurate estimations of India’s capabilities, especially in comparison to other countries, and strategically optimise its budget in the national and local contexts.</p>
<p>Government support in AI research and development is essential to its advancement, evident in the levels of government engagement in the US and China. The Indian government must provide the necessary policy framework and incentives, including direct funding to select companies, start-ups and research institutions, to ensure targeted capacity development. This becomes especially expedient because India does not have tech giants like Google or Baidu that can provide the investments and resources necessary for developing advanced AI capabilities.</p>
<p>A comprehensive long-term vision of the strategic and military role of AI is the backbone of sustained AI research and development as well as innovation. The vision must cover the various strategic facets of AI, including autonomous weapons and the role of AI in cyber-defence, and formulate distinctive policies for each of them. It is not necessary that these policies be in line with either general international opinion or policy trends in other countries on these issues, as long as they adequately serve national interests. The development of such a comprehensive vision will help the Indian government optimise the allocation of its considerable research capabilities towards the development of specific AI capabilities that would most benefit the country.</p>
<p>The need for India to appreciate and develop the strategic potential of AI cannot be overstated. Delaying the initial push will only widen the technology gap between India and the likes of China. AI will also become central to economic growth, revolutionising everything from manufacturing to innovation and labour market productivity, and potentially doubling the growth rates of the most advanced economies. Given this increasingly pervasive influence, the lack of an indigenous AI capacity will severely compromise India’s future.</p>
<p>Unfortunately, India has traditionally been two steps behind other major powers when it comes to acknowledging the strategic importance of emerging technologies. While the effect of this has been generally limited in the past, such an oversight now will lead to an inexorable gap that could severely affect India’s economic and security capabilities.</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-india-needs-a-strategic-artificial-intelligence-vision/">Why India Needs a Strategic Artificial Intelligence Vision</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-india-needs-a-strategic-artificial-intelligence-vision/feed/</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
			</item>
		<item>
		<title>The future of artificial intelligence: two experts disagree</title>
		<link>https://www.aiuniverse.xyz/the-future-of-artificial-intelligence-two-experts-disagree/</link>
					<comments>https://www.aiuniverse.xyz/the-future-of-artificial-intelligence-two-experts-disagree/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 18 Jul 2017 07:39:54 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI research]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[human civilization]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=151</guid>

					<description><![CDATA[<p>Source &#8211; theconversation.com Artificial intelligence (AI) promises to revolutionise our lives, drive our cars, diagnose our health problems, and lead us into a new future where thinking machines <a class="read-more-link" href="https://www.aiuniverse.xyz/the-future-of-artificial-intelligence-two-experts-disagree/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-future-of-artificial-intelligence-two-experts-disagree/">The future of artificial intelligence: two experts disagree</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>theconversation.com</strong></p>
<p><em>Artificial intelligence (AI) promises to revolutionise our lives, drive our cars, diagnose our health problems, and lead us into a new future where thinking machines do things that we’re yet to imagine.</em></p>
<p><em>Or does it? Not everyone agrees.</em></p>
<p><em>Even billionaire entrepreneur Elon Musk, who admits he has access to some of the most cutting-edge AI, said recently that without some regulation “AI is a fundamental risk to the existence of human civilization”.</em></p>
<p><em>So what is the future of AI? Michael Milford and Peter Stratton are both heavily involved in AI research and they have different views on how it will impact on our lives in the future.</em></p>
<h2>How widespread is artificial intelligence today?</h2>
<p><em>Michael:</em></p>
<p>Answering this question depends on what you consider to be “artificial intelligence”.</p>
<p>Basic machine learning algorithms underpin many technologies that we interact with in our everyday lives &#8211; voice recognition, face recognition &#8211; but are application-specific and can only do one very specific defined task (and not always well).</p>
<p>More capable AI &#8211; what we might consider as being somewhat smart &#8211; is only now becoming widespread in areas such as online retail and marketing, smartphones, assistive car systems and service robots such as robotic vacuum cleaners.</p>
<p><em>Peter:</em></p>
<p>The most obvious and useful examples of current AI are the speech recognition on your phone, and search engines such as Google. There is also IBM’s Watson, which in 2011 beat human champion players at the US TV game show Jeopardy, and is now being trialled in business and healthcare.</p>
<p>Most recently, Google’s DeepMind AI called AlphaGo beat the world champion Go player, surprising a lot of people – especially since Go is an extremely complex game, far surpassing chess.</p>
<figure class="align-center zoomable"><img decoding="async" src="https://cdn.theconversation.com/files/178386/width754/file-20170717-26940-1hiwjjg.jpg" alt="" />
<div class="enlarge_hint"></div><figcaption><span class="caption">Chinese Go player Ke Jie competes against Google’s artificial intelligence program AlphaGo.</span> <span class="attribution"><span class="source">Reuters/Stringer</span></span></figcaption></figure>
<h2>What major advances in AI will we see over the next 10 years?</h2>
<p><em>Peter:</em></p>
<p>Many auto manufacturers and research institutions are competing to create practical driverless cars for general road use. While currently these cars can drive themselves for much of the time, many challenges remain in dealing with bad weather (heavy rain, fog and snow) and random real-world events such as roadworks, accidents and other blockages.</p>
<p>These incidents often require some degree of human judgement, common sense and even calculated risk to successfully navigate through. We are still a long way from fully autonomous vehicles that don’t need a licensed driver ready to take control in an instant.</p>
<p>The same can be said for all the AI that we will see over the coming 10-20 years, such as online virtual personal assistants, accountants, legal and financial advisers, doctors and even physical shop-bots, museum guides, cleaners and security guards.</p>
<p>They will be advanced tools that are very useful in specific situations, but they will never fully replace people because they will have little common sense (probably none, in fact).</p>
<p><em>Michael:</em></p>
<p>We will definitely see a range of steady, incremental improvements in everyday AI. Online product recommendations will get better, your phone or car will understand your voice increasingly well and your vacuum cleaner robot won’t get stuck as often.</p>
<p>It’s likely that we’ll see some major advances beyond today’s technology in some but not all of the following areas: self-driving cars, healthcare, utilities (electricity, water, and so on) management, legal, and service areas such as cleaning robots.</p>
<p>I disagree on self-driving cars &#8211; there’s no real reason why there won’t be fully autonomous controlled ride-sharing fleets in the affluent centres of cities, and this is indeed the strategy of companies such as NuTonomy, working in Singapore and Boston.</p>
<figure class="align-center zoomable"><img decoding="async" src="https://cdn.theconversation.com/files/178387/width754/file-20170717-26940-30z79v.jpg" alt="" />
<div class="enlarge_hint"></div><figcaption><span class="caption">Pedestrians cross the road as a nuTonomy self-driving taxi undergoes its public trial in Singapore.</span> <span class="attribution"><span class="source">Reuters/Edgar Su</span></span></figcaption></figure>
<h2>What approaches will lead to the biggest improvements in AI?</h2>
<p><em>Michael:</em></p>
<p>Major advances will come from two sources.</p>
<p>First, there is a long runway of steady incremental improvements left in many areas of conventional AI &#8211; large, complex neural networks and algorithms. These systems will continue to improve steadily as more training data becomes available and as scientists perfect them.</p>
<p>The second area will likely be biological inspiration. Scientists are only just starting to tap into the knowledge about how brain networks work, and it’s likely they will copy or adapt what we know about animal and human brains to make current deep learning networks far more capable.</p>
<p><em>Peter:</em></p>
<p>Old-fashioned AI, which was based on pure logic and computer programs that tried to get machines to behave intelligently, basically failed to do anything that humans are good at and computers are not (speech and image recognition, playing complex strategic games, for example).</p>
<p>What’s quite clear now is that our best-performing AI is based on how we think the brain works.</p>
<p>But our current brain-based AI (called Deep Artificial Neural Networks) is still light years away from emulating an actual brain. Enhanced AI capabilities in the future will come from developing better theories of how the brain works.</p>
<p>The fundamental science needed to cultivate these theories will probably come from publicly funded research institutions, which will then be spun off into commercial start-up companies, and then quickly acquired by interested large corporations if they look like they might be successful.</p>
<h2>How will artificial intelligence affect society and jobs?</h2>
<p><em>Peter:</em></p>
<p>Most jobs won’t be under threat for a long time, probably several generations. Real people are needed to actually make any significant decisions because AI currently has no common sense.</p>
<p>Instead of replacing jobs, our overall quality of life will go up. For example, right now few people can afford a personal assistant, or a full-time life coach. In the near future, we’ll all have (a virtual) one!</p>
<p>Our virtual doctor will be working for us daily, monitoring our health and making exercise and lifestyle suggestions.</p>
<p>Our houses and workplaces might be cleaner, but we will still need people to clean the spots the robots miss. We’ll also need people to deploy, retrieve and maintain all the robots.</p>
<p>Our goods will be cheaper due to reduced transport costs, but we’ll still need human drivers to cover all the situations the self-drivers can’t.</p>
<p>All this doesn’t even mention the whole new entertainment technologies and industries that will spring up to capture our increased disposable income and to cash in on our improved quality of life.</p>
<p>So yes, jobs will change, but there will still be plenty of them.</p>
<p><em>Michael:</em></p>
<p>It’s likely that a significant fraction of jobs will be under threat over the coming decade. It’s important to note that this won’t necessarily be divided by blue-collar versus white-collar, but rather by which occupations are easily automatable.</p>
<p>It’s unlikely that an effective plumber robot will be built in the near future, but aspects of the so far undisrupted construction industry may change radically.</p>
<p>Some people say machines will never have the emotional capabilities of humans. Whether that is true or not, many jobs will be under threat with even the most rudimentary levels of emotional understanding and interaction.</p>
<p>Don’t think about the complex, nuanced interaction you had with your psychologist; instead think about the one with that disinterested, uncaring part-time hospitality worker. The bar for disruption is not as high as many think.</p>
<p>That leaves the question of what happens then. There are two scenarios &#8211; the first being that, like in the past, new types of jobs are generated by the technological revolution.</p>
<p>The other is that humanity gradually transitions into a Utopian society where scientific, artistic and sporting pursuits are pursued at leisure. The short to medium-term reality is probably somewhere in between.</p>
<h2>Will Skynet/the machines take over and enslave humanity?</h2>
<p><em>Michael:</em></p>
<p>It’s unlikely in the near future but possible. The real danger is the unpredictability. Skynet-like killer cyborgs as featured in the Terminator film series are unlikely because that development cycle takes a while, and we have multiple opportunities to stop development.</p>
<p>But AI could destroy or damage humanity in other unpredictable ways. For example, when big companies like Google DeepMind start entering into healthcare, it’s likely that they will improve patient outcomes through a combination of big data and intelligent systems.</p>
<p>One of the temptations or pressures will be to deploy these extremely complex systems before we completely understand every possible ramification. Imagine the pressure if there is good evidence it will save thousands of lives per year.</p>
<p>As we well know, we have a long history of negative unintended consequences with new technology that we didn’t fully understand.</p>
<p>In a far-fetched but not impossible healthcare scenario, deploying AI may lead to catastrophic outcomes &#8211; a world-wide AI network deciding in ways invisible to us human observers to kill us all off to optimise some misguided performance goal.</p>
<p>The challenge is that with newly developing technologies, there is an illusion of 100% control, which doesn’t really exist.</p>
<p><em>Peter:</em></p>
<p>All our current AI, and any that we can possibly create in the foreseeable future, are just tools – developed for specific jobs and totally useless outside of the exact duties they were designed for. They don’t have thoughts or feelings. These AIs are just as likely to try to take over the world as your Xbox or your toaster.</p>
<p>One day, I believe, we will build machines that rival us in intelligence, and these machines will have their own thoughts and possibly learn in an unconstrained way. This sounds scary. But humans are dangerous for exactly the reasons that the machines won’t be.</p>
<p>Humans evolved in a constant struggle for life and death, which made us innately competitive and potentially treacherous. When we build the machines, we can instead build them with any underlying motivation that we would like.</p>
<p>For example, we could build an intelligent machine whose only desire is to dismantle itself. Or, we could build in a hidden remote-controlled off switch that is completely separate from any of the machine’s own circuits, and an auto-shutdown reflex if the machine somehow ever notices it.</p>
<p>All these safeguards will be trivial to implement. So there is simply no way that we could accidentally build a machine that then tries to wipe out the human race.</p>
<p>Of course, because humans themselves are dangerous, someone could build a machine that doesn’t have these safeguards and use it for nefarious purposes. But we have that same problem now with nuclear weapons.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-future-of-artificial-intelligence-two-experts-disagree/">The future of artificial intelligence: two experts disagree</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-future-of-artificial-intelligence-two-experts-disagree/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
