<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>machines Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/machines/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/machines/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Wed, 14 Jul 2021 06:52:06 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>The Rise Of The Machines; Analogue Meets Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/the-rise-of-the-machines-analogue-meets-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/the-rise-of-the-machines-analogue-meets-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 14 Jul 2021 06:43:15 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Analogue]]></category>
		<category><![CDATA[machines]]></category>
		<category><![CDATA[meets]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14966</guid>

					<description><![CDATA[<p>Source &#8211; https://which-50.com/ When&#160;Southern Cross Austereo (SCA) became an early-stage investor&#160;in Melbourne-based&#160;Sonnant, Which-50.com decided to reach out to the company and ask what SCA was getting for its money. Surprisingly, Sonnant CEO&#160;Tony Simmons&#160;responded by offering a demonstration to explain how it all worked. Sonnant styles itself as a “transformational artificial intelligence (AI) and machine learning <a class="read-more-link" href="https://www.aiuniverse.xyz/the-rise-of-the-machines-analogue-meets-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-rise-of-the-machines-analogue-meets-artificial-intelligence/">The Rise Of The Machines; Analogue Meets Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://which-50.com/</p>



<p>When Southern Cross Austereo (SCA) became an early-stage investor in Melbourne-based Sonnant, Which-50.com decided to reach out to the company and ask what SCA was getting for its money.</p>



<p>Surprisingly, Sonnant CEO&nbsp;Tony Simmons&nbsp;responded by offering a demonstration to explain how it all worked.</p>



<p>Sonnant styles itself as a “transformational artificial intelligence (AI) and machine learning (ML) company that provides content discovery for the spoken word”.</p>



<p>Simmons explained that the key to Sonnant’s success was their initial decision to train the AI to understand the Australian accent in phase one. “There’s often problems with the machine understanding the Aussie accent. Anyone who’s tried to make a booking at a restaurant while in America will know what I mean.”</p>



<p>Australian English is most associated with monophthongs (single vowel sounds): it has approximately 20 distinct sounds, compared with only 16 in American English. Also difficult for the AI are Australian diphthongs, with their distinctive timing between two vowel sounds and their tendency toward a falling second sound.</p>



<p>An accurate transcript of an analogue recording is necessary to map the keywords. Simmons shows me how the platform extracts the keywords for use in various scenarios. Furthermore, users can set parameters or search for terms that the AI hasn’t selected.</p>
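<p>As a rough, hypothetical sketch of the idea only (Sonnant’s actual pipeline is proprietary and far more sophisticated), keyword extraction from a transcript can be as simple as ranking non-stopword terms by frequency:</p>

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on", "that", "with"}

def extract_keywords(transcript, top_n=5):
    """Rank the non-stopword terms in a transcript by raw frequency."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]
```

<p>A user-supplied search term, as described above, then amounts to a lookup over the same token counts.</p>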



<p>Once a user has a summary of the terms, they can apply them to various curation tasks. Let’s say SCA has noticed audiences downloading their morning shows late in the day, possibly listening while commuting home. Those shows need commercials more appropriate to that time of day: drop the breakfast cereal and add spots for Deliveroo.</p>



<p>By the end of the month, Sonnant will offer users the ability to curate their listening by using keywords and subjects.</p>



<p>Originally, Sonnant was used in the education sector, where students and staff have been able to use the program to revise and redistribute lectures. The possibilities for students to curate course notes to reflect their research or to shore up weaknesses in their knowledge are continually developing.</p>



<p>Simmons and I discuss the possibilities; I suggest having at your fingertips the ability to retrieve archival records of Dwight Eisenhower’s wartime comments on the war in Europe and set them against his domestic policy priorities as President in the ’50s.</p>



<h3 class="wp-block-heading">The Future Is Here</h3>



<p>Simmons leans over his desk and tells me he’s calling this ability to extract usable content from analogue recordings ‘archival revival’.</p>



<p>We return to the topic of SCA and how they might apply the platform to their industry. Simmons puts it in terms of “talent and topics”.</p>



<p>Radio audiences will either follow individual personalities or home in on topics. Take sports radio. Audiences will want to curate their listening to suit their interests. An AFL demographic will have different consumer biases than a basketball-listening demographic. There will be crossovers, but for SCA, the ability to tailor every aspect of the listening experience will allow them to extract maximum revenue.</p>



<p>Finally, if you can flexibly curate your audio via AI and machine learning, you probably need fewer expensive human staff in the production studio. Productivity will rise, leaving more income to drop to the bottom line.</p>



<p>As Simmons ends, he tells me that “the technology now allows us to do things with audio that was science fiction a few years ago. There’s plenty more to come.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-rise-of-the-machines-analogue-meets-artificial-intelligence/">The Rise Of The Machines; Analogue Meets Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-rise-of-the-machines-analogue-meets-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machines Can See – Computer Vision and Deep Learning Summit</title>
		<link>https://www.aiuniverse.xyz/machines-can-see-computer-vision-and-deep-learning-summit/</link>
					<comments>https://www.aiuniverse.xyz/machines-can-see-computer-vision-and-deep-learning-summit/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 10 Jun 2021 05:34:25 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Computer]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[machines]]></category>
		<category><![CDATA[Summit]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14155</guid>

					<description><![CDATA[<p>Source &#8211; https://www.biometricupdate.com/ The fifth annual international summit ‘Machines Can See’ will be held on July 8 in Moscow at the Omega Rooftop and is hosted by VisionLabs. This event brings together the world’s leading experts in computer vision and machine learning to discuss technology trends and share experience, connecting international AI communities. Human-centric technologies This <a class="read-more-link" href="https://www.aiuniverse.xyz/machines-can-see-computer-vision-and-deep-learning-summit/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machines-can-see-computer-vision-and-deep-learning-summit/">Machines Can See – Computer Vision and Deep Learning Summit</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.biometricupdate.com/</p>



<p>The fifth annual international summit ‘Machines Can See’ will be held on July 8 in Moscow at the Omega Rooftop and is hosted by VisionLabs.</p>



<p>This event brings together the world’s leading experts in computer vision and machine learning to discuss technology trends and share experience, connecting international AI communities.</p>



<h2 class="wp-block-heading">Human-centric technologies</h2>



<p>This year’s theme is “human-centric technologies” with speakers:</p>



<p>· Dima Damen – Associate Professor in the Department of Computer Science at the University of Bristol</p>



<p>· Dr. Efstratios Gavves – Associate Professor at the University of Amsterdam, Scientific Director of the QUVA Deep Vision Lab, Scientific Director of the POP-AART Lab</p>



<p>· Bernard Ghanem – Associate Professor in the CEMSE division, a theme leader at the Visual Computing Center (VCC), and the Interim Lead of the AI Initiative at KAUST</p>



<p>· Ira Kemelmacher-Shlizerman – Associate Professor of Computer Science at the Allen School, Director of the UW Reality Lab, and an Eng Lead at Google</p>



<p>· Kris M. Kitani – Associate Research Professor and Director of the Computer Vision MS program in the Robotics Institute at Carnegie Mellon University</p>



<p>This event also includes a computer vision competition on gesture recognition that runs until July 5. Winners will be announced at the summit and will share a prize fund of 500,000 rubles.</p>



<p>The post <a href="https://www.aiuniverse.xyz/machines-can-see-computer-vision-and-deep-learning-summit/">Machines Can See – Computer Vision and Deep Learning Summit</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machines-can-see-computer-vision-and-deep-learning-summit/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>UNRAVELLING TRANSFER LEARNING TO MAKE MACHINES MORE ADVANCED</title>
		<link>https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/</link>
					<comments>https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 23 Feb 2021 10:33:06 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[advanced]]></category>
		<category><![CDATA[Learning]]></category>
		<category><![CDATA[machines]]></category>
		<category><![CDATA[transfer]]></category>
		<category><![CDATA[UNRAVELLING]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13028</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ Researchers have embraced transfer learning to address algorithm challenges Advanced machines never fail to leave men in awe. But only researchers who worked behind the machines know how much time, cost and data it took to become a stage stealer. Training an algorithm that employs various features in a machine is quite nerve-wracking. <a class="read-more-link" href="https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/">UNRAVELLING TRANSFER LEARNING TO MAKE MACHINES MORE ADVANCED</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h1 class="wp-block-heading">Researchers have embraced transfer learning to address algorithm challenges</h1>



<p>Advanced machines never fail to leave people in awe. But only the researchers behind them know how much time, money and data it took for a machine to become a stage stealer. Training an algorithm that drives a machine’s various features is quite nerve-wracking. Tech teams, however, have found a solution in transfer learning. Companies are also combining technologies like deep learning neural networks and machine learning to come up with futuristic machines.</p>



<p>We are often surrounded by the myth that number-crunching gets cheaper all the time. According to Moore’s law, the number of components that can be squeezed onto a microchip of a given size doubles roughly every two years, increasing the computational power available at a given cost. This might suggest that the cost of training a machine is falling. But that is not true. Just because data is everywhere and easily available doesn’t mean it is open to use or inexpensive. Even when data is openly accessible, training an algorithm takes far more effort than most other computational processes. Industry analysts anticipate that worldwide spending on artificial intelligence will reach US$100 billion in 2024, double what it is today.</p>



<p>The advantage of machine learning and artificial intelligence algorithms is that they can understand information and act and interact with our environment in a natural, humanlike way. But the performance of these models depends heavily on the computing power allocated and on the quantity and quality of data. A study conducted by Dimensional Research found that around 96% of organizations run into problems with training data quality and quantity, and that most machine learning projects require more than 100,000 data samples to perform effectively. A machine learning system is still programmed with standard ones-and-zeros logic, but it can modify its behavior to meet specialized goals based on patterns it discovers in sample data. Hence, a machine learning algorithm needs to be trained with good data, meaning data optimized for the issue you are dealing with. Fortunately, transfer learning can help: it takes knowledge gained from a pre-trained model built to solve a specific task and applies it to a different but similar problem within the same domain. A mixed array of technologies like deep learning neural networks and machine learning is also making the training process less burdensome.</p>



<h3 class="wp-block-heading"><strong>Transfer learning addresses algorithm challenges</strong></h3>



<p>Transfer learning is a machine learning method in which a model developed for one task is reused as the starting point for a model on a second task. It is a popular approach in deep learning, where pre-trained models serve as the starting point for computer vision and natural language processing tasks, given the vast compute and time resources required to develop neural network models for these problems and the huge jumps in skill they provide on related problems.</p>
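<p>A minimal sketch of that idea, with illustrative names and a simulated backbone rather than any real pre-trained network: freeze the feature-extraction layers and train only a small head on the new task.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: frozen feature-extraction weights.
# In real transfer learning these would come from training on a large
# source dataset; here they are fixed random projections.
W_backbone = rng.normal(size=(2, 16))

def extract_features(x):
    """The 'pre-trained' layers: never updated on the new task."""
    return np.tanh(x @ W_backbone)

# A small labelled dataset for the *new* task.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the new head is trained: logistic regression on frozen features.
feats = extract_features(X)
w_head = np.zeros(16)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head)))   # predicted probabilities
    w_head -= 0.5 * feats.T @ (p - y) / len(y)    # gradient step on the head only

acc = float((((feats @ w_head) > 0) == (y == 1)).mean())
```

<p>Because only the 16 head weights are fitted, far less data and compute are needed than training the whole model from scratch — which is precisely the appeal described above.</p>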



<p>Remarkably, with the help of transfer learning, instead of starting the learning process from scratch, you start from patterns that have been learned when solving a different problem. This way, you leverage previous learning and avoid starting from nothing. Transfer learning is usually expressed through the use of pre-trained models that were trained on a large dataset to solve a problem similar to the one that we want to solve. One of the well-known examples of transfer learning is GPT-3, the largest natural language machine learning model ever built. GPT-3 is a language prediction model where an algorithm structure is designed to take one piece of language and transform it into what it predicts is the most useful following piece of language for the user. Behind the mechanism are machine learning, deep learning and transfer learning technologies that help the model to produce humanlike predictive text.</p>
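<p>That “predict the most useful following piece of language” mechanism can be illustrated with a deliberately tiny stand-in: a bigram model that, for each word, predicts the follower seen most often in its training text. (GPT-3 applies the same next-piece prediction idea with a 175-billion-parameter neural network, not a count table.)</p>

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for every word, which words follow it in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Predict the most frequently observed follower of `word`."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]
```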



<p>Beyond this, big tech companies such as Microsoft, AWS, NVIDIA and IBM offer transfer learning toolkits that remove the burden of building models from scratch, address data quality and quantity challenges, and expedite production machine learning.</p>
<p>The post <a href="https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/">UNRAVELLING TRANSFER LEARNING TO MAKE MACHINES MORE ADVANCED</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/unravelling-transfer-learning-to-make-machines-more-advanced/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why human-like robots elicit uncanny feelings</title>
		<link>https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/</link>
					<comments>https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 17 Sep 2020 07:29:31 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[human]]></category>
		<category><![CDATA[machines]]></category>
		<category><![CDATA[Robots]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11640</guid>

					<description><![CDATA[<p>Source: nanowerk.com (Nanowerk News) Androids, or robots with humanlike features, are often more appealing to people than those that resemble machines — but only up to a certain point. Many people experience an uneasy feeling in response to robots that are nearly lifelike, and yet somehow not quite “right.” The feeling of affinity can plunge <a class="read-more-link" href="https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/">Why human-like robots elicit uncanny feelings</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: nanowerk.com</p>



<p>(Nanowerk News) Androids, or robots with humanlike features, are often more appealing to people than those that resemble machines — but only up to a certain point. Many people experience an uneasy feeling in response to robots that are nearly lifelike, and yet somehow not quite “right.” The feeling of affinity can plunge into one of repulsion as a robot’s human likeness increases, a zone known as “the uncanny valley.”</p>



<p>The journal Perception (&#8220;The Uncanny Valley Phenomenon and the Temporal Dynamics of Face Animacy Perception&#8221;) published new insights by Emory psychologists into the cognitive mechanisms underlying this phenomenon.</p>



<p>Since the uncanny valley was first described, a common hypothesis developed to explain it. Known as the mind-perception theory, it proposes that when people see a robot with human-like features, they automatically add a mind to it. A growing sense that a machine appears to have a mind leads to the creepy feeling, according to this theory.</p>



<p>“We found that the opposite is true,” says Wang Shensheng, first author of the new study, who did the work as a graduate student at Emory and recently received his PhD in psychology. “It’s not the first step of attributing a mind to an android but the next step of ‘dehumanizing’ it by subtracting the idea of it having a mind that leads to the uncanny valley. Instead of just a one-shot process, it’s a dynamic one.”</p>



<p>The findings have implications for both the design of robots and for understanding how we perceive one another as humans.</p>



<p>“Robots are increasingly entering the social domain for everything from education to healthcare,” Wang says. “How we perceive them and relate to them is important both from the standpoint of engineers and psychologists.”</p>



<p>“At the core of this research is the question of what we perceive when we look at a face,” adds Philippe Rochat, Emory professor of psychology and senior author of the study. “It’s probably one of the most important questions in psychology. The ability to perceive the minds of others is the foundation of human relationships.”</p>



<p>The research may help in unraveling the mechanisms involved in mind-blindness — the inability to distinguish between humans and machines — such as in cases of extreme autism or some psychotic disorders, Rochat says.</p>



<p>Co-authors of the study include Yuk Fai Cheong and Daniel Dilks, both associate professors of psychology at Emory.</p>



<p>Anthropomorphizing, or projecting human qualities onto objects, is common. “We often see faces in a cloud for instance,” Wang says. “We also sometimes anthropomorphize machines that we’re trying to understand, like our cars or a computer.”</p>



<p>Naming one’s car or imagining that a cloud is an animated being, however, is not normally associated with an uncanny feeling, Wang notes. That led him to hypothesize that something other than just anthropomorphizing may occur when viewing an android.</p>



<p>To tease apart the potential roles of mind perception and dehumanization in the uncanny valley phenomenon, the researchers conducted experiments focused on the temporal dynamics of the process. Participants were shown three types of images — human faces, mechanical-looking robot faces and android faces that closely resembled humans — and asked to rate each for perceived animacy or “aliveness.” The exposure times of the images were systematically manipulated, within milliseconds, as the participants rated their animacy.</p>

<p>The results showed that perceived animacy decreased significantly as a function of exposure time for android faces, but not for mechanical-looking robot or human faces. For android faces, perceived animacy dropped between 100 and 500 milliseconds of viewing time, consistent with previous research showing that people begin to distinguish between human and artificial faces around 400 milliseconds after stimulus onset.</p>



<p>A second set of experiments manipulated both the exposure time and the amount of detail in the images, ranging from a minimal sketch of the features to a fully blurred image. The results showed that removing details from the images of the android faces decreased the perceived animacy along with the perceived uncanniness.</p>



<p>“The whole process is complicated but it happens within the blink of an eye,” Wang says. “Our results suggest that at first sight we anthropomorphize an android, but within milliseconds we detect deviations and dehumanize it. And that drop in perceived animacy likely contributes to the uncanny feeling.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/">Why human-like robots elicit uncanny feelings</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Is Artificial Intelligence really &#8216;intelligent&#8217;?</title>
		<link>https://www.aiuniverse.xyz/is-artificial-intelligence-really-intelligent/</link>
					<comments>https://www.aiuniverse.xyz/is-artificial-intelligence-really-intelligent/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 20 Jul 2020 07:42:14 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[machines]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10327</guid>

					<description><![CDATA[<p>Source: thearticle.com When Artificial Intelligence was in its infancy it was quite natural to give it a sonorous name. It needed to attract money and talent. It has since become a mainstream subject that seeks to imitate human intelligence. See a recent definition:&#160;“Artificial Intelligence is the theory and development of computer systems able to perform <a class="read-more-link" href="https://www.aiuniverse.xyz/is-artificial-intelligence-really-intelligent/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/is-artificial-intelligence-really-intelligent/">Is Artificial Intelligence really &#8216;intelligent&#8217;?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: thearticle.com</p>



<p>When Artificial Intelligence was in its infancy it was quite natural to give it a sonorous name. It needed to attract money and talent. It has since become a mainstream subject that seeks to imitate human intelligence. See a recent definition:&nbsp;“Artificial Intelligence is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”</p>



<p>Speech recognition: I remember the first steps. It was the late 1950s, and I worked in industry. A colleague of mine, two benches away, had the job of recognising and printing out some limited speech consisting of the numbers from one to ten. He talked to an oscilloscope and watched the appearing waveform, hoping to identify the numbers from the zero crossings, i.e., the points where the waveform changed sign. One day he told me that the problem had been solved: his machine had been able to recognise all those numbers. “May I try it?” I asked. “By all means,” he said. I tried, and counted up to ten. The machine ignored me. Several other people tried and failed too. As it turned out, the machine could only work if addressed in a Polish accent. That was a long time ago. Since then, software has become commercially available that understands not only those born in this country but also Hungarians, who are known to mercilessly massacre the English language.</p>



<p>Machines can of course do a lot more nowadays than understand the spoken word. But are they intelligent? Where should our quest for intelligence take us? Games are good candidates. Let us look at a number of them, starting with a simple one: Noughts and Crosses.</p>



<p>It is a trivial example. There are only nine squares, so the machine can look at all combinations of moves and countermoves; they amount to about 35,000. Draughts is incomparably more complicated: there are too many possible moves, and brute force, i.e., looking at all the possibilities, does not work. So what can be done?</p>



<p>A strategy was envisaged by Arthur Samuel, whose first program goes back to 1959. He introduced a score function, which assessed the chances that any given move would eventually lead to a win; it took into account the number of kings and how close each piece was to becoming a king. Samuel also introduced machine learning: he fed thousands of games into the computer, pinpointing winning strategies. He did his programming on an IBM computer. His machine could beat amateurs but not professionals, yet even this partial success sent IBM stock rising on the birth of a new computer application: games.</p>
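<p>Samuel’s central trick — searching ahead a fixed number of moves and then consulting a heuristic score function instead of playing the game out — can be sketched on a toy subtraction game (take 1 to 3 stones; whoever takes the last stone wins). The game and the score function here are illustrative stand-ins, not Samuel’s draughts code:</p>

```python
def score(pile, maximizing):
    """Heuristic stand-in for Samuel's hand-tuned evaluation.
    In this toy game, a pile that is a multiple of 4 is lost
    for the player to move."""
    mover_wins = pile % 4 != 0
    return 1 if mover_wins == maximizing else -1

def minimax(pile, depth, maximizing):
    """Depth-limited search: explore moves until the horizon, then fall
    back on the score function rather than searching to the end."""
    if pile == 0:                      # previous player took the last stone
        return -1 if maximizing else 1
    if depth == 0:                     # horizon reached: consult the heuristic
        return score(pile, maximizing)
    values = [minimax(pile - take, depth - 1, not maximizing)
              for take in (1, 2, 3) if take <= pile]
    return max(values) if maximizing else min(values)
```

<p>With a good score function, a shallow search backs up accurate values; with a poor one, no amount of search depth fully compensates — which is why Samuel’s evaluation terms (kings, proximity to kinging) mattered so much.</p>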



<p>The game that stands above them all is chess. There is no chance of exhausting all possible moves, so it comes down to Samuel’s methods: a score function and learning from examples. Oddly enough, this way of learning was first practised by a fictional character in Stefan Zweig’s Schachnovelle, published in 1941 (recently mentioned by Raymond Keene in a column in these pages). The main character, an Austrian aristocrat, was imprisoned by the Nazis. While in solitary confinement he managed to get hold of a book containing all the moves of a high-level chess tournament. Not having anything else to read, he played them in his mind again and again. When he was released his mental state was affected, but his play was good enough to beat the world champion. Deep Blue, IBM’s computer trained to play chess, beat Kasparov, the reigning champion in the real world, in 1997 in a six-game match. Deep Blue had a three-way strategy: it played countless games (like the Austrian aristocrat), it had a score function, and it used brute force to evaluate the game six or seven moves ahead.</p>



<p>The machine’s victory is regarded as the greatest triumph of Artificial Intelligence, although it was somewhat marred by Kasparov’s claim that IBM cheated. He said that the machine must have been occasionally overruled by a human player, and this amounted to cheating because he would play differently against a human player than against a machine. The controversy was never resolved. IBM dismantled the machine very soon after the end of the match. Was Deep Blue intelligent? Not really, because it just did what it was programmed for. Its main advantage was speed. The programmers were intelligent (even if they cheated), Deep Blue was not.</p>



<p>So let’s go to Go, regarded in East Asia as the supreme game. DeepMind’s program challenged grandmasters, including the world champion, about three years ago, and won hands down. The main reason was that a lot has happened in AI since Deep Blue. There has been a radical change in programming philosophy: the system started with no knowledge of the game and built up its expertise by studying millions of actual games, training itself for the singular purpose of playing Go. It was a radical departure from previous approaches in that no preliminary information about the nature of the game was fed into the computer. It started from scratch, just like a non-swimmer thrown in at the deep end of a swimming pool.</p>



<p>Games are games. They are excellent demonstrations of how to solve problems where the criteria of success are well defined and the rules are known. But let us widen our scope and look at a much-predicted product of Artificial Intelligence — driverless cars. If perfected, could they be regarded as matching human intelligence? I think the answer is yes. Driverless cars would, no doubt, be a great improvement over human-driven cars, with many advantages. They would never be under the influence of alcohol or drugs, they would never race a fellow driverless car, they would never try to show off to impress a girlfriend, and they would never fall asleep.</p>



<p>Even so, we are still very far from the driverless stage. When will they be ready? In a year or two? In ten years? In thirty years? Next century, perhaps? Part of the reason is technical. How can they be trained? Not like Go. Driverless cars cannot learn by going up and down a street a million times; even a thousand times would not go down well with those living there. And even if everything went well for the first two thousand journeys down a street, something new — the development of a new junction — might invalidate all that training. And that was only one street.</p>



<p>If that wasn’t enough, there is a psychological barrier as well — the fear of accidents. It may very well happen that driverless cars turn out to be safer than those driven by ordinary mortals. They might cause only, say, 900 fatal accidents in a year in contrast to the 1,700 caused by human drivers in the UK. Will we be happy? Unlikely. We accept human errors because we often commit them ourselves. But if there were ever a fatal accident caused by a driverless car we would blame the manufacturers and demand that their product should be banned from the roads.</p>



<p>Much of what passes at the moment for Artificial Intelligence is hype. Many of the functional applications already in existence need no intelligence, relying instead on the assiduous collection of data combined with known techniques of automation. On the whole I would claim that the programmers are intelligent, but the machines are not. In one application, driving cars, machine intelligence might indeed surpass human intelligence, but that application may never come. Machines could of course help humans arrive at decisions, say diagnoses in medicine, but very few patients would be happy if the decisions were made by machines alone.</p>
<p>The post <a href="https://www.aiuniverse.xyz/is-artificial-intelligence-really-intelligent/">Is Artificial Intelligence really &#8216;intelligent&#8217;?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/is-artificial-intelligence-really-intelligent/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>HOW INTELLIGENT IS ARTIFICIAL INTELLIGENCE?</title>
		<link>https://www.aiuniverse.xyz/how-intelligent-is-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/how-intelligent-is-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 13 Jun 2020 08:04:04 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[digital humans]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[machines]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9526</guid>

					<description><![CDATA[<p>Source: analyticsinsight.ne The quest for making machines, to think, and act like humans has evolved from movie-fiction to real-world applications. Yet we are far from replicating the cognitive thinking of humans with accuracy and precision. Although the bots, cobots, robots, humanoids, and digital humans can either outplay or coordinate with us in many ways, unlike <a class="read-more-link" href="https://www.aiuniverse.xyz/how-intelligent-is-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-intelligent-is-artificial-intelligence/">HOW INTELLIGENT IS ARTIFICIAL INTELLIGENCE?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.ne</p>



<p>The quest to make machines think and act like humans has evolved from movie fiction to real-world applications. Yet we are far from replicating human cognition with accuracy and precision. Although bots, cobots, robots, humanoids, and digital humans can outplay us or coordinate with us in many ways, unlike human intelligence they need to be fed with data regularly. While our minds cannot beat machines in terms of computational power and speed of execution, our capacity for complex cognitive skills still makes us superior to the machines. Programmed and trained models fail when it comes to making rational decisions. This is where a lot of work remains to be done, as we need a holistic, human-like approach to real-world situations in the future.</p>



<p>Learning never happens on the spur of the moment. It requires the steady practice of absorbing and processing information, which gradually adds up to experience, and that experience differs among individuals. Elizabeth S. Spelke, a cognitive psychologist at Harvard, uses behavioral methods and laboratory-based tasks to investigate the concepts and reasoning of infants, children, and adults. According to her, while infants are no match for AI in some respects, there are things they can do that are beyond its reach. Despite being terrible at labeling images, hopeless at mining text, and awful at video games, after just a few months they start to understand how the physical world works and grasp the foundations of language, such as grammar. A couple of years later, they can extract knowledge, recognize objects, employ cognitive thinking, extrapolate motion, develop mathematical skills, understand cause and effect in the world around them, and acquire abstract concepts from their surroundings. This is what surprises Spelke and other experts pondering how babies learn. Understanding it could help us design better AI.</p>



<p>François Chollet, a well-known AI engineer and the creator of Keras, says, “What makes human intelligence special is its adaptability, i.e., its power to generalize to never-seen-before situations.” In his November research paper, he advises against measuring machine intelligence solely by its skill at specific tasks. “Humans don’t start with skills; they start with a broad ability to acquire new skills,” he says. “What a strong human chess player is demonstrating isn’t the ability to play chess per se, but the potential to acquire any task of similar difficulty. That’s a very different capability.” He designed a series of puzzles to test AI’s ability to learn in a generalized setting. Each puzzle requires arranging colored squares on a grid based on just a few prior examples. While it is barely a challenge for humans, AI had managed to reach a maximum of only 12 percent accuracy by April.</p>
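<p>To make the format concrete, here is a minimal sketch (our own illustration, not Chollet’s code) of such a few-shot grid puzzle, under the simplifying assumption that the hidden rule is a plain color substitution; the real benchmark tasks demand far more general reasoning:</p>

```python
# A toy ARC-style task: each puzzle gives a few input/output grid pairs,
# and the solver must infer the transformation and apply it to a new grid.
# This sketch assumes the hidden rule is a simple color substitution.

def infer_color_map(examples):
    """Infer a cell-wise color mapping from (input, output) grid pairs."""
    mapping = {}
    for grid_in, grid_out in examples:
        for row_in, row_out in zip(grid_in, grid_out):
            for a, b in zip(row_in, row_out):
                if mapping.setdefault(a, b) != b:
                    raise ValueError("rule is not a plain color substitution")
    return mapping

def apply_color_map(mapping, grid):
    return [[mapping.get(c, c) for c in row] for row in grid]

# Two demonstration pairs: color 1 becomes 2, color 0 stays 0.
examples = [
    ([[0, 1], [1, 0]], [[0, 2], [2, 0]]),
    ([[1, 1], [0, 1]], [[2, 2], [0, 2]]),
]
rule = infer_color_map(examples)
print(apply_color_map(rule, [[1, 0], [0, 1]]))  # [[2, 0], [0, 2]]
```

<p>A human solves this kind of puzzle at a glance from two examples; the point of the benchmark is that systems trained for one specific skill do not.</p>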



<p>Scott Robinson, a SharePoint and business intelligence expert based in Louisville, Kentucky, puts it precisely: “Business processes involve intelligent thought and intelligent behavior. AI is great at replicating intelligent behavior, but intelligent thought is another matter. We don’t fully understand how intelligent human thoughts develop, so we’re not going to build machines that can have them anytime soon.”</p>



<p>While it is established that AI is not a ‘see once and always remember’ learner like humans, a few more flaws differentiate it from us. In a recent paper titled The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence, the author highlights that AI’s neural networks can be easily fooled. These networks may also give faulty interpretations when there are minor tweaks in the input data. These are areas that demand a huge volume of innovation, engineering, and research before we devise a version of AI that comes closer to the human brain or human intelligence. Then, perhaps, we can debate man-versus-machine intelligence.</p>
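<p>The kind of fooling described can be sketched in a few lines. The following toy example (our own, with made-up weights, not drawn from the paper) applies a fast-gradient-sign-style perturbation to the input of a fixed logistic-regression classifier and flips its decision:</p>

```python
import numpy as np

# A small, targeted tweak to the input flips the model's decision:
# a fast-gradient-sign-style perturbation against a fixed
# logistic-regression classifier (weights are invented for illustration).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -3.0, 1.0])   # "trained" weights (assumed)
x = np.array([0.5, 0.1, 0.2])    # input classified as positive
y = 1.0                          # true label

p = sigmoid(w @ x)               # confident positive prediction (~0.71)

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w;
# stepping in its sign direction increases the loss.
grad_x = (p - y) * w
x_adv = x + 0.4 * np.sign(grad_x)

print(sigmoid(w @ x), sigmoid(w @ x_adv))  # prediction flips below 0.5
```

<p>With deep networks the same effect occurs at perturbation sizes far too small for a human to notice, which is what makes the fragility worrying.</p>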
<p>The post <a href="https://www.aiuniverse.xyz/how-intelligent-is-artificial-intelligence/">HOW INTELLIGENT IS ARTIFICIAL INTELLIGENCE?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-intelligent-is-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>India Witnesses 6X Growth in Internet of Things (IoT) Patents over Last 5 Yrs</title>
		<link>https://www.aiuniverse.xyz/india-witnesses-6x-growth-in-internet-of-things-iot-patents-over-last-5-yrs/</link>
					<comments>https://www.aiuniverse.xyz/india-witnesses-6x-growth-in-internet-of-things-iot-patents-over-last-5-yrs/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 06 Jun 2020 07:46:00 +0000</pubDate>
				<category><![CDATA[Internet of things]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[Internet of Things]]></category>
		<category><![CDATA[machines]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9338</guid>

					<description><![CDATA[<p>Source: indianweb2.com According to a recent NASSCOM report, India has witnessed six times growth in Internet of Things (IoT) patents over the last five years and more than 80% of these patents were related to applications related to “Industry 4.0”, which refers to the intelligent networking of machines and processes for industry Nearly 6,000 IoT Patents were filed <a class="read-more-link" href="https://www.aiuniverse.xyz/india-witnesses-6x-growth-in-internet-of-things-iot-patents-over-last-5-yrs/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/india-witnesses-6x-growth-in-internet-of-things-iot-patents-over-last-5-yrs/">India Witnesses 6X Growth in Internet of Things (IoT) Patents over Last 5 Yrs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: indianweb2.com</p>



<p>According to a recent NASSCOM report, India has witnessed six-fold growth in Internet of Things (IoT) patents over the last five years, and more than 80% of these patents related to “Industry 4.0” applications, which refers to the intelligent networking of machines and processes for industry.</p>



<p>Nearly 6,000 IoT patents were filed in India from 2009 to 2019, of which over 5,000 were filed in the last five years. The healthcare and automobile industries lead the patent race, said the report, titled “IoT: Driving the Patent Growth Story in India”.</p>



<p>Over 70% of the total IoT patents came from the R&amp;D centres of MNCs, while start-ups accounted for about 7% of such patents, the report further said.</p>



<p>Nearly 95% of IoT patents related to hardware components, with connectivity networks and sensors the leading sub-technologies.</p>



<p>Manufacturers of electronics and electrical equipment, semiconductor devices, and computer and telecom equipment together accounted for over 60% of the IoT patents filed in India by business entities over 2009-19. The share for IT-ITeS companies stood at 13%.</p>



<p>The NASSCOM report also said that patent filings will increase in the coming years, driven primarily by healthcare, automation, manufacturing and supply chain, 5G, and security systems.</p>
<p>The post <a href="https://www.aiuniverse.xyz/india-witnesses-6x-growth-in-internet-of-things-iot-patents-over-last-5-yrs/">India Witnesses 6X Growth in Internet of Things (IoT) Patents over Last 5 Yrs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/india-witnesses-6x-growth-in-internet-of-things-iot-patents-over-last-5-yrs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Can AI be cutting-edge in the geopolitical scenario?</title>
		<link>https://www.aiuniverse.xyz/can-ai-be-cutting-edge-in-the-geopolitical-scenario/</link>
					<comments>https://www.aiuniverse.xyz/can-ai-be-cutting-edge-in-the-geopolitical-scenario/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 04 Apr 2020 06:52:07 +0000</pubDate>
				<category><![CDATA[AI-ONE]]></category>
		<category><![CDATA[application]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[machines]]></category>
		<category><![CDATA[technological]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7952</guid>

					<description><![CDATA[<p>Source: dqindia.com It all started when Alan Turing asked himself, in 1950, if machines could just think. Isaac Asimov’s novels – creator of the famous three laws of robotics -the myths of ancient Greece and other anecdotes from past centuries show the same question has been around the minds of scientists and the general public <a class="read-more-link" href="https://www.aiuniverse.xyz/can-ai-be-cutting-edge-in-the-geopolitical-scenario/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/can-ai-be-cutting-edge-in-the-geopolitical-scenario/">Can AI be cutting-edge in the geopolitical scenario?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: dqindia.com</p>



<p>It all started when Alan Turing asked himself, in 1950, whether machines could think. Isaac Asimov’s novels – Asimov was the creator of the famous Three Laws of Robotics – the myths of ancient Greece, and other anecdotes from past centuries show that the same question has occupied the minds of scientists and the general public for much longer.</p>



<p>Six years after Turing’s question, John McCarthy, Marvin Minsky and their colleagues used the term “artificial intelligence” (AI) for the first time. Today this technological concept is not alien to any developed country, nor is its application alien to the most advanced companies.</p>



<p>The utopia of having a non-organic intelligent agent that obeys orders has also caught the attention of defense ministries around the world. In 2017, China released its strategy to position itself at the forefront of AI research. A year later, the United States assigned two billion dollars to the advancement of this technology. Countries such as Russia, Japan and the United Kingdom have also made great contributions to this global contest, creating a widespread feeling of a “new arms race” that once again runs through universities, private companies, and governments.</p>



<p>John McCarthy, one of the pioneers of AI, defined it in 1956 as “the science of creating intelligent machines”. Although the definition of intelligence is controversial, early AI scientists proposed language as a way to channel and manifest it. One of them was Turing, also famous for breaking the WWII Enigma cipher, which allowed the Allies to decipher Nazi communications and helped win the war. He devised the famous “Turing Test”, under which a machine would be considered “smart” if it could converse with a human without the human recognizing that the interlocutor was a machine.</p>



<p>We should note that both humans and machines need large amounts of information to understand what is happening around them. Lacking innate meaning, machines represent the outside world from data packets or datasets. The content of these data is vital to the construction of the artificial “mind”, as are the cognitive and moral characteristics of the mathematician or developer who writes the algorithm, since the AI’s behaviour depends on both. In other words, the developer plays the role of father or mother, and the data educates the machine. That presents a problem: a developer’s biases may end up being implemented in the AI, which could take on racist leanings.</p>
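<p>A toy sketch (our own illustration, with invented data) shows how directly the data “educates” the machine: a naive classifier that memorizes the most frequent label per group simply reproduces whatever skew its training set contains:</p>

```python
from collections import Counter

# If the developer's chosen dataset records one group mostly with
# negative outcomes, the learned rule reproduces that skew verbatim.

def train(records):
    """records: list of (group, label) pairs selected by the developer."""
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, Counter())[label] += 1
    # Predict the most frequent label seen for each group.
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

# A skewed dataset: group "B" was mostly recorded with rejections.
data = [("A", "approve")] * 8 + [("A", "reject")] * 2 \
     + [("B", "approve")] * 3 + [("B", "reject")] * 7

model = train(data)
print(model)  # {'A': 'approve', 'B': 'reject'} -- the skew becomes the rule
```

<p>Nothing in the algorithm is malicious; the bias enters entirely through the choice of training data, which is the point the paragraph above makes.</p>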



<h4 class="wp-block-heading"><strong>The future of arms race?</strong></h4>



<p>The possibility of minimizing human risk and maximizing effectiveness in a conflict scenario makes armies the first parties interested in betting on AI. In fact, Russian leader Vladimir Putin has even declared that “<em>whoever leads the race for AI will rule the world</em>.” AI applications in the world of security and defense are ever-growing: they can accelerate the identification of suspects through pattern-finding and image recognition, train military personnel by simulating specific scenarios, reinforce the resilience of computer systems, reduce the number of human soldiers on the battlefield, and expand the precision of military weaponry on a tremendous scale.</p>



<p>Hence, autonomous weapons are one of the most visible faces of this new generation. Defined by the United States Department of Defense as systems that can “select and interact with a target, without the intervention of a human operator”, they are especially useful in reconnaissance or patrol missions abroad. Their ability to get closer to the target also makes them very suitable for dangerous or long-term missions, reducing all the risks derived from the human species’ own needs, such as fatigue, stress, fear and moral dilemmas as well as the risk of losing sensitive information if a person is captured. In addition, AI would learn from the environment and process information about it, increasing the degree of success in the mission.</p>



<p>At the end of Barack Obama’s presidency, in October 2016, the White House released a report outlining the risks and opportunities of AI for the American economy and homeland security. Following Trump’s victory, the White House’s commitment to AI seemed to wane until February 2019, when a series of measures was announced to maintain the United States’ leadership in AI. A few months earlier, the Pentagon had launched the AI Next program, with an investment of close to $2 billion.</p>



<p>Chinese efforts in this area are a natural continuation of the Made in China 2025 plan, a strategy that aims to make China a leading technology country. By number of patents and of most-cited articles, China already surpasses the United States, although it still lacks the researchers needed to keep driving its industry forward.</p>



<h4 class="wp-block-heading"><strong>The way forward</strong></h4>



<p>The development of cutting-edge technology is usually reserved for well-to-do countries, which can afford large investments in R&amp;D and are also the first to benefit economically. This could lead to “data colonialism”, as Israeli author Yuval Noah Harari has called it: “a new and uneven way of interacting between states, in which companies would collect data from countries with less developed privacy laws, to process them in countries where AI is available and apply for the benefits there.”</p>



<p>It is undeniable that AI will penetrate our daily life; in fact, it already has. So it is worth asking what the objective of each piece of research is, weighing the consequences, and ensuring that the procedure follows a logical and ethical line. This is probably one of the points that should concern us most: imperfect algorithms are finding their place in settings as important as armies, law firms, and police stations. Ensuring that the final decision always remains a human responsibility, and not an AI one, requires supervision.</p>



<p>Responsibility in the decision-making process is one of the pending issues to be regulated as AI is developed and deployed.</p>
<p>The post <a href="https://www.aiuniverse.xyz/can-ai-be-cutting-edge-in-the-geopolitical-scenario/">Can AI be cutting-edge in the geopolitical scenario?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/can-ai-be-cutting-edge-in-the-geopolitical-scenario/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Using cues and actions to help people get along with artificial intelligence</title>
		<link>https://www.aiuniverse.xyz/using-cues-and-actions-to-help-people-get-along-with-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/using-cues-and-actions-to-help-people-get-along-with-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 01 Apr 2020 10:00:55 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Human-AI Interaction]]></category>
		<category><![CDATA[machines]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7904</guid>

					<description><![CDATA[<p>Source: techxplore.com Learning how people interact with artificial intelligence-enabled machines—and using that knowledge to improve people&#8217;s trust in AI—may help us live in harmony with the ever-increasing number of robots, chatbots and other smart machines in our midst, according to a Penn State researcher. In a paper published in the current issue of the Journal <a class="read-more-link" href="https://www.aiuniverse.xyz/using-cues-and-actions-to-help-people-get-along-with-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/using-cues-and-actions-to-help-people-get-along-with-artificial-intelligence/">Using cues and actions to help people get along with artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: techxplore.com</p>



<p> Learning how people interact with artificial intelligence-enabled machines—and using that knowledge to improve people&#8217;s trust in AI—may help us live in harmony with the ever-increasing number of robots, chatbots and other smart machines in our midst, according to a Penn State researcher. </p>



<p>In a paper published in the current issue of the Journal of Computer-Mediated Communication, S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory, has proposed a framework for studying AI that may help researchers better investigate how people interact with artificial intelligence, or Human-AI Interaction (HAII).</p>



<p>&#8220;This is an attempt to systematically look at all the ways artificial intelligence could be influencing users psychologically, especially in terms of trust,&#8221; said Sundar, who is also an affiliate of Penn State&#8217;s Institute for Computational and Data Sciences (ICDS). &#8220;Hopefully, the theoretical model advanced in this paper will give researchers a framework, as well as a vocabulary, for studying the social psychological effects of AI.&#8221;</p>



<p>The framework identifies two paths—cues and actions—that AI developers can focus on to gain trust and improve user experience, said Sundar. Cues are signals that can trigger a range of mental and emotional responses from people.</p>



<p>&#8220;The cue route is based on superficial indicators of how the AI looks or what it apparently does,&#8221; he explained.</p>



<p>Sundar added that there are several cues that affect whether users trust AI. The cues can be as obvious as the use of human-like features, such as a human face that some robots have, or a human-like voice that virtual assistants like Siri and Alexa use.</p>



<p>Other cues can be more subtle, such as a statement on the interface explaining how the device works, as in when Netflix explains why it is recommending a certain movie to viewers.</p>



<p>But, each of these cues can trigger distinct mental shortcuts or heuristics, according to Sundar.</p>



<p>&#8220;When an AI is identified to the user as a machine rather than human, as often happens in modern-day chatbots, it triggers the &#8216;machine heuristic&#8217;, a mental shortcut that leads us to automatically apply all the stereotypes we hold about machines,&#8221; said Sundar. &#8220;We might think machines are accurate and precise, but we also might think of computers and machines as cold and unyielding.&#8221; These stereotypes in turn dictate how much we trust the AI system.</p>



<p>Sundar suggested that autopilot systems in airplanes are one example of how over-trust in AI can lead to negative repercussions. Pilots may trust so implicitly in the autopilot system that they relax their guard and are not prepared for sudden changes in the plane&#8217;s performance or malfunctions that would require their intervention. He cites this kind of &#8216;automation bias&#8217; as an indication of our deep trust in machine performance.</p>



<p>On the other hand, AI can also trigger negative biases for some people.</p>



<p>&#8220;The opposite of automation bias would be algorithm aversion,&#8221; said Sundar. &#8220;There are people who just have an aversion because, perhaps in the past, they were burned by an algorithm and now deeply mistrust AI. They were probably fooled by &#8216;deepfakes&#8217; which are fabricated videos created using AI technology, or they got the wrong product recommendation from an e-commerce site, or felt their privacy was invaded by AI snooping into their prior searches and purchases.&#8221;</p>



<p>Sundar advised developers to pay particular attention to the cues they may be offering users.</p>



<p>&#8220;If you provide clear cues on the interface, you can help shape how the users respond, but if you don&#8217;t provide good cues, you will let the user&#8217;s prior experience and folk theories, or naive notions, about algorithms take over,&#8221; Sundar said.</p>



<p>In addition to providing cues, the AI&#8217;s ability to interact with people can also fashion user experience, according to Sundar. He calls this the &#8220;action route.&#8221;</p>



<p>&#8220;The action route is really about collaboration,&#8221; said Sundar. &#8220;AIs should actually engage and work with us. Most of the new AI tools—the smart speakers, robots and chat bots—are highly interactive. In this case, it&#8217;s not just visible cues about how they look and what they say, but about how they interact with you.&#8221;</p>



<p>In both actions and cues, Sundar suggests developers maintain the correct balance. For example, a cue that does not transparently tell the user that AI is at work in the device might trigger negative reactions, but if the cue provides too much detail, people may try to corrupt—or &#8220;game&#8221;—the interaction with the AI. &#8220;Cueing the right amount of transparency on the interface is therefore quite important,&#8221; he said.</p>



<p>&#8220;If your smart speaker asks you too many questions, or interacts with you too much, that could be a problem, too,&#8221; said Sundar. &#8220;People want collaboration. But they also want to minimize their costs. If the AI is constantly asking you questions, then the whole point of AI, namely convenience, is gone.&#8221;</p>



<p>Sundar said he expects the framework of cues and actions to guide researchers as they test these two paths to AI trust. This will generate evidence to inform how developers and designers create AI-powered tools and technology for people.</p>



<p>AI technology is evolving so fast that many critics are pushing to outright ban certain applications. Sundar said that giving researchers the time to thoroughly investigate and understand how humans interact with technology is a necessary step to help society tap the benefits of the devices, while minimizing the possible negative implications.</p>



<p>&#8220;We will make mistakes,&#8221; said Sundar. &#8220;From the printing press to the internet, new media technologies have led to negative consequences, but they have also led to many more benefits. There is no question that certain manifestations of AI will frustrate us, but at some point, we will have to co-exist with AI and bring them into our lives.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/using-cues-and-actions-to-help-people-get-along-with-artificial-intelligence/">Using cues and actions to help people get along with artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/using-cues-and-actions-to-help-people-get-along-with-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence in Defence and Security Sector</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-in-defence-and-security-sector/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-in-defence-and-security-sector/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 14 Jan 2020 06:10:18 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[machines]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6128</guid>

					<description><![CDATA[<p>Source: newdelhitimes.com The term Artificial Intelligence (AI) was coined by John McCarthy in 1956. AI is defined in the Oxford English Dictionary as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” According to techopedia.com, Artificial Intelligence (AI) <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-in-defence-and-security-sector/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-in-defence-and-security-sector/">Artificial Intelligence in Defence and Security Sector</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: newdelhitimes.com</p>



<p>The term Artificial Intelligence (AI) was coined by John McCarthy in 1956. AI is defined in the Oxford English Dictionary as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” According to techopedia.com, Artificial Intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans.</p>



<p>AI is increasingly being used in the defence sector to boost the military capabilities of many nations of the world. In a December 2019 strategic research paper entitled “A Candle in the Dark: US National Security Strategy for Artificial Intelligence”, Stephen Rodriguez and Tate Nurkin shed more light on this aspect. The foreword, written by Ashton B. Carter, former US Secretary of Defense, mentions that the strategy paper articulates the current technological landscape and offers a coherent strategic framework for the United States and its allies to harness AI’s upside potential, while mitigating downside risks and defending against emerging threats.</p>



<p>The paper states that “American AI development will take place within a complex, competitive, and challenging strategic geopolitical and security context that will both shape and be shaped by how the United States, China, and other actors, develop, diffuse, and deploy various AI technologies and the capabilities they enable”.</p>



<p>The paper states that the “four forces are particularly relevant to the intersection between the capacity of state and non-state actors to harness AI, with an impact on US national security”. They are:<br>1) Fractured Frameworks and Enhanced Competition – this includes the US-China geostrategic competition, with technology development and acquisition – especially in AI – a critical part of this expanding competition;<br>2) Conflict between liberalism and authoritarianism;<br>3) The Information and Fourth Industrial Revolutions – AI and 4IR technologies are shaping the future of military capabilities and potentially changing the nature of conflict and warfare altogether. Over time, they could remove important human components from combat and introduce new norms, operational concepts, and domain areas for competition;<br>4) Diffusion of the Power to Disrupt – this diffusion is happening simultaneously through licit and surreptitious means, ranging from mergers and acquisitions and joint ventures to cybertheft, traditional espionage, and the use of non-traditional collectors.</p>



<p>The paper then argues that the strategic context facing defence and security communities is characterised by the “fusion” of four previously mostly separate concepts or conditions: peace and conflict, physical and digital worlds, reality and perception, and defence/security within commercial/consumer priorities. The intersection of these concepts has created a strategic and operational environment conspicuously vulnerable to exploitation by the employment of AI-driven disinformation, distortion, and disruption campaigns.</p>



<p>The Fusion of States of Peace and Conflict has led major global military powers to formulate strategic and operational doctrines around it. The Fusion of the Physical and Digital has resulted in military and security communities experimenting with ways to incorporate novel 4IR technologies and AI applications that link humans and machines to improve decision-making, physical endurance, and performance.</p>



<p>The Fusion of Reality and Perception implies that actors will exploit the degradation of truth to create and intensify divisive polarities, and to offer sufficient justification for the instinct to retrench, to double down on interpretation and perspective in the face of established, but still debated, facts.</p>



<p>The Fusion of Security and Commercial Demand and Interests implies that the intersection of the 4IR and geopolitical competition is merging the technology demands of national security communities with those of the high-tech industry and other commercial entities.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-in-defence-and-security-sector/">Artificial Intelligence in Defence and Security Sector</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-in-defence-and-security-sector/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
