<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI system Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/ai-system/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/ai-system/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 16 Oct 2020 05:59:21 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>MAKING DEEP LEARNING MODEL INTELLIGENT WITH SYNTHETIC NEURONS</title>
		<link>https://www.aiuniverse.xyz/making-deep-learning-model-intelligent-with-synthetic-neurons/</link>
					<comments>https://www.aiuniverse.xyz/making-deep-learning-model-intelligent-with-synthetic-neurons/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 16 Oct 2020 05:59:14 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI system]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12254</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net Researchers unveiled an AI system that&#160;has key advantages over previous deep learning models. Deep learning, a subset of the broad field of AI, refers to <a class="read-more-link" href="https://www.aiuniverse.xyz/making-deep-learning-model-intelligent-with-synthetic-neurons/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/making-deep-learning-model-intelligent-with-synthetic-neurons/">MAKING DEEP LEARNING MODEL INTELLIGENT WITH SYNTHETIC NEURONS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<h4 class="wp-block-heading"><strong>Researchers unveiled an AI system that</strong>&nbsp;<strong>has key advantages over previous deep learning models.</strong></h4>



<p>Deep learning, a subset of the broader field of AI, refers to building intelligent machines that can learn, perform and achieve goals as humans do. Over the last few years, deep learning models have been shown to outperform conventional machine learning techniques in diverse fields. The technology enables computational models with multiple processing layers to learn and represent data at many levels of abstraction, imitating how the human brain senses and understands multimodal information.</p>



<p>A team of researchers from TU Wien (Vienna), IST Austria and MIT (USA) has developed a new artificial intelligence system based on the brains of tiny animals like threadworms. The new AI-powered system is said to be able to control a vehicle with just a few synthetic neurons. According to the researchers, the system has decisive advantages over previous deep learning models: it handles noisy input better, and, owing to its simplicity, its mode of operation can be explained in detail. It need not be regarded as a complex black box; it can be understood by humans, the researchers noted.</p>



<p>According to the report, artificial neural networks (ANNs), like human brains, comprise numerous individual cells. When a cell is active, it sends a signal to other cells. The next cell receives all of these signals and combines them to decide whether it, too, will become active. “For years, we have been investigating what we can learn from nature to improve deep learning,” said Prof. Radu Grosu, Head of the Research Group for Cyber-Physical Systems at TU Wien. “The nematode C. elegans, for example, lives its life with an amazingly small number of neurons, and still shows interesting behavioral patterns. This is due to the efficient and harmonious way the nematode’s nervous system processes information.”</p>
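The activate-or-not decision described above can be sketched as a single artificial neuron: weight each incoming signal, sum them, and compare the total against a threshold. A minimal illustration, not the researchers' actual model (the weights and threshold below are made up):

```python
def neuron_fires(inputs, weights, threshold=1.0):
    """Combine weighted incoming signals and decide whether this cell activates."""
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return total >= threshold

# Two active upstream cells with strong connections push this cell over threshold.
print(neuron_fires([1.0, 1.0, 0.0], [0.6, 0.5, 0.9]))  # True
# A single active input is not enough.
print(neuron_fires([1.0, 0.0, 0.0], [0.6, 0.5, 0.9]))  # False
```

Real networks replace the hard threshold with a smooth activation function so the weights can be learned by gradient descent, but the combine-then-decide step is the same.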



<p>As a test, the researchers chose the task of keeping a self-driving car in its lane. The neural network takes camera images of the road as input and automatically decides whether to steer to the right or left. According to Alexander Amini, a Ph.D. student at MIT CSAIL, the new system has two parts. The camera input is first processed by a convolutional neural network, which extracts structural features from the incoming pixels and decides which parts of the image are interesting and significant. It then passes signals to the crucial part of the network – a “control system” that steers the vehicle.</p>



<p>Testing the new deep learning model in an autonomous vehicle allowed the researchers to examine what the network focuses its attention on while driving. “Our networks focus on very specific parts of the camera picture: the curbside and the horizon. This behavior is highly desirable, and it is unique among artificial intelligence systems,” said Ramin Hasani, Postdoctoral Associate at the Institute of Computer Engineering, TU Wien and MIT CSAIL. Through their study, the researchers found that interpretability and robustness are the two major advantages of the new deep learning model.</p>
<p>The post <a href="https://www.aiuniverse.xyz/making-deep-learning-model-intelligent-with-synthetic-neurons/">MAKING DEEP LEARNING MODEL INTELLIGENT WITH SYNTHETIC NEURONS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/making-deep-learning-model-intelligent-with-synthetic-neurons/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google’s AI detects adversarial attacks against image classifiers</title>
		<link>https://www.aiuniverse.xyz/googles-ai-detects-adversarial-attacks-against-image-classifiers/</link>
					<comments>https://www.aiuniverse.xyz/googles-ai-detects-adversarial-attacks-against-image-classifiers/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 26 Feb 2020 06:14:44 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[AI system]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Google AI]]></category>
		<category><![CDATA[image classifiers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7042</guid>

					<description><![CDATA[<p>Source: venturebeat.com Defenses against adversarial attacks, which in the context of AI refer to techniques that fool models through malicious input, are increasingly being broken by “defense-aware” <a class="read-more-link" href="https://www.aiuniverse.xyz/googles-ai-detects-adversarial-attacks-against-image-classifiers/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-ai-detects-adversarial-attacks-against-image-classifiers/">Google’s AI detects adversarial attacks against image classifiers</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Defenses against adversarial attacks, which in the context of AI refer to techniques that fool models through malicious input, are increasingly being broken by “defense-aware” attacks. In fact, most state-of-the-art methods claiming to detect adversarial attacks have been counteracted shortly after their publication. To break the cycle, researchers at the University of California, San Diego and Google Brain, including Turing Award winner Geoffrey Hinton, recently described in a preprint paper an approach that deflects attacks in the computer vision domain. Their framework either detects attacks accurately or, for undetected attacks, pressures the attackers to produce images that resemble the target class of images.</p>



<p>The proposed architecture comprises (1) a network that classifies various input images from a data set and (2) a network that reconstructs the inputs conditioned on parameters of a predicted capsule. Several years ago, Hinton and several students devised an architecture called CapsNet, a discriminatively trained, multilayer AI system. It and other capsule networks make sense of objects in images by interpreting sets of their parts geometrically. Sets of mathematical functions (capsules) responsible for analyzing various object properties (like position, size, and hue) are tacked onto a type of AI model often used to analyze visuals. Several of the capsules’ predictions are reused to form representations of parts, and since these representations remain intact throughout analyses, capsule systems can leverage them to identify objects even when the positions of parts are swapped or transformed.</p>



<p>Another unique thing about capsule systems? They route with attention. As with all deep neural networks, capsules’ functions are arranged in interconnected layers that transmit “signals” from input data and slowly adjust the synaptic strength — weights — of each connection. (That’s how they extract features and learn to make predictions.) But where capsules are concerned, the weightings are calculated dynamically according to previous-layer functions’ ability to predict the next layer’s outputs.</p>



<p>The capsule network combines three reconstruction-based detection methods to detect standard adversarial attacks. The first — the Global Threshold Detector — exploits the fact that when an input image is adversarially perturbed, the classification may be incorrect but the reconstruction is often blurry. The Local Best Detector identifies “clean” images by their reconstruction error: when the input is clean, the reconstruction error from the winning capsule is smaller than that of the losing capsules. The last technique, the Cycle-Consistency Detector, flags an input as an adversarial example if it is not classified in the same class as the reconstruction of the winning capsule.</p>
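The Cycle-Consistency Detector's logic reduces to a simple check: classify the input, reconstruct it from the winning capsule, classify the reconstruction, and flag a mismatch. A schematic sketch in which `classify` and `reconstruct` are hypothetical stand-ins for the trained networks:

```python
def is_adversarial(image, classify, reconstruct):
    """Cycle-consistency check: flag an input that is not classified
    the same way as its own reconstruction."""
    label = classify(image)
    recon = reconstruct(image, label)  # reconstruct from the winning capsule
    return classify(recon) != label

# Toy stand-ins: "classify" a scalar by sign, "reconstruct" to a class prototype.
classify = lambda x: 1 if x >= 0 else 0
reconstruct = lambda x, label: 1.0 if label == 1 else -1.0
print(is_adversarial(0.5, classify, reconstruct))  # False: the cycle is consistent
```

With real networks, an adversarial input tends to reconstruct toward its (wrong) predicted class, so the reconstruction's classification disagrees with the original label and the input is flagged.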



<p>The team reports that in experiments they were able to detect standard adversarial attacks based on three different distance metrics with a low False Positive Rate on SVHN and CIFAR-10. “A large percentage of the undetected attacks are deflected by our model to resemble the adversarial target class [and] stop being adversarial any more,” they wrote. “These attack images can no longer be called ‘adversarial’ because our network classifies them the same way as humans do.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-ai-detects-adversarial-attacks-against-image-classifiers/">Google’s AI detects adversarial attacks against image classifiers</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/googles-ai-detects-adversarial-attacks-against-image-classifiers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google&#8217;s AI Tool Will No Longer Attach Gender Labels to People in Pictures</title>
		<link>https://www.aiuniverse.xyz/googles-ai-tool-will-no-longer-attach-gender-labels-to-people-in-pictures/</link>
					<comments>https://www.aiuniverse.xyz/googles-ai-tool-will-no-longer-attach-gender-labels-to-people-in-pictures/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 25 Feb 2020 06:53:47 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI system]]></category>
		<category><![CDATA[ARTIFICIAL INTELLIGENCE USES]]></category>
		<category><![CDATA[GENDER LABELS]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[GOOGLE ARTIFICIAL INTELLIGENCE]]></category>
		<category><![CDATA[GOOGLE CLOUD VISION API]]></category>
		<category><![CDATA[GOOGLE IMAGES]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7024</guid>

					<description><![CDATA[<p>Source: news18.com On Thursday, Business Insider reported that Google&#8217;s Cloud Vision API service, an AI-powered tool that developers use to identify components in an image like faces, <a class="read-more-link" href="https://www.aiuniverse.xyz/googles-ai-tool-will-no-longer-attach-gender-labels-to-people-in-pictures/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-ai-tool-will-no-longer-attach-gender-labels-to-people-in-pictures/">Google&#8217;s AI Tool Will No Longer Attach Gender Labels to People in Pictures</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: news18.com</p>



<p>On Thursday, Business Insider reported that Google&#8217;s Cloud Vision API service, an AI-powered tool that developers use to identify components in an image like faces, objects, or landmarks, will no longer attach gender-related labels to pictured people.</p>



<p>Yesterday, Google sent out an email to its Cloud Vision API customers that the tool, which can identify and tag various components in an image like brand logos, faces, and landmarks, will no longer attach gender labels like &#8220;man&#8221; or &#8220;woman&#8221; to people pictured in an image.</p>



<p>According to the email, as reported by Business Insider, Google said the practice was being discontinued because &#8220;you can&#8217;t deduce someone&#8217;s gender by their appearance alone&#8221; and doing so would be an unethical use of AI. Instead, an individual will simply be tagged as a &#8220;person&#8221;.</p>



<p>Speaking with Business Insider, AI bias expert Frederike Kaltheuner described the change as &#8220;very positive,&#8221; stating that &#8220;Classifying people as male or female assumes that gender is binary. Anyone who doesn&#8217;t fit it will automatically be misclassified and misgendered. So this is about more than just bias &#8212; a person&#8217;s gender cannot be inferred by appearance. Any AI system that tried to do that will inevitably misgender people.&#8221;</p>



<p>Google noted in the email that it intends to continue evolving its AI to ensure that people are not discriminated against based on gender, nor on factors like race, ethnicity, income, or religious belief.</p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-ai-tool-will-no-longer-attach-gender-labels-to-people-in-pictures/">Google&#8217;s AI Tool Will No Longer Attach Gender Labels to People in Pictures</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/googles-ai-tool-will-no-longer-attach-gender-labels-to-people-in-pictures/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A roadmap to using Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/a-roadmap-to-using-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/a-roadmap-to-using-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 12 Sep 2019 12:34:55 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI project]]></category>
		<category><![CDATA[AI system]]></category>
		<category><![CDATA[roadmap]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4461</guid>

					<description><![CDATA[<p>Source: thehindubusinessline.com The first question you need to consider is: what are the specific business drivers for your AI project? In a broader context, the likelihood is <a class="read-more-link" href="https://www.aiuniverse.xyz/a-roadmap-to-using-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/a-roadmap-to-using-artificial-intelligence/">A roadmap to using Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: thehindubusinessline.com</p>



<p>The first question you need to consider is: what are the specific business drivers for your AI project?</p>



<p>In a broader context, the likelihood is that you are driven by a desire to gain or sustain a competitive advantage or to enter into new areas of business, stave off competitors, etc. Those are important drivers, but if everyone is getting into AI then they could arguably be table stakes. AI is a long-term investment and you need to think beyond the short term. Assume that your competitors and future competitors in adjacent markets are also at least considering the use of AI to keep parity. Reducing costs is a key driver – but just as important and just as overlooked is the fact that in the not too distant future both your suppliers and your customers will be expecting you to leverage AI. You need to think holistically about the short and long-term drivers, both for the internal and external factors that affect your business…</p>



<h4 class="wp-block-heading">Unexpected Impact</h4>



<p>The impact of AI on business processes gives us some cause for concern&#8230; Many seem to think this impact will come further down the line. In reality, even a small change in a process can have a big impact on a worker’s or department’s daily life. It depends on how you define a big impact. AI can have an immediate impact on the day-to-day work of individuals, but it may take time for the results of all those small changes to have a big impact on the business as a whole. You need to plan for an immediate impact on your processes, as well as for the greater impact AI will have in the mid and long term. Your processes will change, and hiccups and pushback not identified or mitigated at the start could derail your project.</p>



<p>Finally, let’s consider your existing business operations; for example, your sales, HR, procurement, accounts, and marketing processes. To you these may seem straightforward and discrete activities. An AI system may not view them as so clear cut; it may see more optimal ways of working, and it may not respect the cultural silos that exist within your organization. Outside of very narrow, niche projects, the use of AI will likely have a knock-on impact, including potentially unexpected impacts, across your organization. Remember when we said you need to think holistically when planning the use of AI? Consider the chain effect of automating decision-making in one aspect of your business and how that may affect other areas.</p>



<p>You must ask three essential questions:</p>



<p>1. What is the project? What value will it bring? What impact will it have?</p>



<p>2. What will the project involve in terms of changes to existing systems, processes and the organization?</p>



<p>3. What financial benefits will the project bring?</p>



<h4 class="wp-block-heading">Costs to consider</h4>



<p>As in other IT projects, staff costs are a significant chunk of the overall costs. Because of a talent shortage in certain areas, salaries for AI specialists can be higher than for other IT roles. Seemingly basic tasks, like labeling training data, can also be a significant cost.</p>



<p>In AI projects, you will also have to budget for data and computing environment costs. If your teams need external consulting support for opportunity identification, roadmap development, or working alongside the team, that cost also has to be considered. In contrast to the general-purpose central processing units (CPUs) in our computers and laptops, graphics processing units (GPUs) are special-purpose hardware chips needed to accelerate many machine learning applications. Another special-purpose chip is the tensor processing unit (TPU), which is optimized for certain machine learning workloads.</p>



<p>GPUs/TPUs can make your machine learning applications run faster, but keep in mind that they can be more expensive than normal CPU-based environments…</p>



<p>The likelihood is your AI project is going to be processing a lot of data, fast. You may decide to use optimized on-premises servers or cloud services such as Amazon, Microsoft or Google. Either way, you need to budget carefully. We suggest a five-year cost analysis, as low monthly fees can stack up against the high initial cost of on-premises hardware.</p>
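That five-year comparison is simple arithmetic; a sketch in which every figure is purely illustrative, not a real price quote:

```python
YEARS = 5
cloud_monthly_fee = 4_000        # illustrative cloud bill per month
on_prem_hardware = 150_000       # illustrative one-off server purchase
on_prem_monthly_upkeep = 1_000   # illustrative power, hosting, maintenance

cloud_total = cloud_monthly_fee * 12 * YEARS
on_prem_total = on_prem_hardware + on_prem_monthly_upkeep * 12 * YEARS

# The "low" monthly fee stacks up past the big upfront cost over five years.
print(cloud_total, on_prem_total)  # 240000 210000
```

Swap in your own quotes and upkeep estimates; the crossover point shifts with utilization, hardware refresh cycles, and cloud discounts.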



<p>Finally, most AI projects will use some kind of external consulting and this can also be expensive. However, there can be great value in engaging experts that have AI project experience, saving you from reinventing the wheel.</p>



<p>This is the budget in the initial phases when you are about to begin your AI journey. After that, you will need to factor in solution deployment, solution management and solution retraining costs.</p>



<p>AI projects may require extra governance and specialist oversight teams. In fact, AI projects will be staffed differently from traditional IT projects, and the vendor partnerships you form will have deeper and different dimensions.</p>



<p>And, of course, AI projects will require a significant amount of change versus the status quo. Change may well be profound and any change, no matter how small, needs to be managed.</p>
<p>The post <a href="https://www.aiuniverse.xyz/a-roadmap-to-using-artificial-intelligence/">A roadmap to using Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/a-roadmap-to-using-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial intelligence: Future perfect, future tense</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-future-perfect-future-tense/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-future-perfect-future-tense/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 28 Aug 2017 09:26:11 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI system]]></category>
		<category><![CDATA[chatbot]]></category>
		<category><![CDATA[Communication technology]]></category>
		<category><![CDATA[Future perfect]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=802</guid>

					<description><![CDATA[<p>Source &#8211; deccanchronicle.com Artificial intelligence (AI) is developing quickly. Apple’s intelligent personal assistant, Siri, can listen to your voice and find the nearest restaurant; self-driving cars have become <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-future-perfect-future-tense/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-future-perfect-future-tense/">Artificial intelligence: Future perfect, future tense</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>deccanchronicle.com</strong></p>
<p>Artificial intelligence (AI) is developing quickly. Apple’s intelligent personal assistant, Siri, can listen to your voice and find the nearest restaurant; self-driving cars have become a reality, and IBM’s quiz contest-winning AI model ‘Watson’ is now being deployed to improve cancer treatment. While researchers and experts continue to exploit and harness AI’s “revolutionary” potential, the celebration could be premature. Microsoft chatbot on Twitter transformed into a Hitler-loving, incest-promoting robot in 2016; Wikipedia edit bots have repeatedly engaged in feuds over editing pages; and two chatbots on popular messaging application QQ in China were taken offline after they went off-script last week. Recently, Facebook also had to shut down one of its AI systems after the chatbots allegedly developed their own language. However, the social media giant clarified that its AI system had not gone rogue and the programme was closed as it could not have brought any benefit to the company.</p>
<p>As instances of AI machines going awry grow, experts and researchers in the field of artificial intelligence have cautioned that the technology can behave unpredictably. “Artificial intelligence is, of course, going to be unpredictable. Any really complicated controller can behave in unexpected ways. We’ll always have to be careful about what aspects of our lives we put into the “hands” of artificial intelligence. We’d want to vet these things really well before handing life-or-death tasks over to them — like driving, to give just one topical example,” said Michael Graziano, a neuroscientist and author of the book Consciousness And The Social Brain.</p>
<p>The two iconic entrepreneurs, Facebook CEO Mark Zuckerberg and inventor Elon Musk, are locked in a bitter tussle over the use of artificial intelligence. In 2014, addressing students at MIT, Musk likened AI to “summoning the demons”. “AI is a fundamental existential risk for human civilisation, and I don’t think people fully appreciate that,” he had said. Calling for oversight in 2017, Musk stated, “We need to be proactive about regulation instead of reactive. Governments couldn’t afford to wait until a whole bunch of bad things happen.” Responding to Musk’s remarks, Zuckerberg, on July 23, called his comment “irresponsible”. “I think people who are naysayers and try to drum up these doomsday scenarios — I just don’t understand it. It’s really negative and in some way, I actually think it is pretty irresponsible,” said Zuckerberg.</p>
<p>Two days later the war of words got ugly, with Musk tweeting, “I’ve talked to Mark (Zuckerberg) about this. His understanding of the subject is limited.” However, Musk isn’t the only one who takes a grim view of AI. Aaron M. Bornstein, a Princeton neuroscientist, believes that AI may worsen inequality and oppression. “More likely, and it is already happening — the ways humans use machine learning, it will worsen existing inequality and oppression by making it seem objective, and harder to overcome,” said Bornstein.</p>
<p>If AI machines are eventually tipped to take over important aspects of human life, can experts instil values and human-like motivation in them? Michael Graziano, who researches engineering consciousness in AI, believes that AI can be made conscious. “The mind is something migratable to artificial devices. The technology is moving in that direction rapidly. A really convincing version, like Data, the android from Star Trek, might be beyond our lifetime, but that sort of thing and more will inevitably come,” said Graziano.</p>
<p>Oxford philosopher Nick Bostrom holds a diametrically opposite view. He argued, “We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans: scientific curiosity, benevolent concern for others…” Facebook’s suicide prevention AI system had failed to prevent people from taking their lives in India. Two cases of live-streaming of suicide were reported in India after Facebook deployed AI in January to avert cases of suicide.  “Using AI to identify people who are thinking about suicide, and then reaching out to them, may be very helpful. But even if it helps to some degree, for some people, it obviously won’t solve the whole problem, so you’ll always be able to point to some spectacular tragedies. Communication technology seems to enable certain kinds of behaviours. I don’t think giving emotions to AI would make any obvious difference to that effort, at least not right now. Human beings are good at emotions, and yet not very good at suicide prevention,” said Graziano.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-future-perfect-future-tense/">Artificial intelligence: Future perfect, future tense</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-future-perfect-future-tense/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>New AI system can decode your brain signals</title>
		<link>https://www.aiuniverse.xyz/new-ai-system-can-decode-your-brain-signals/</link>
					<comments>https://www.aiuniverse.xyz/new-ai-system-can-decode-your-brain-signals/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 22 Aug 2017 16:12:36 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI system]]></category>
		<category><![CDATA[brain signals]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=696</guid>

					<description><![CDATA[<p>Source &#8211; economictimes.indiatimes.com BERLIN: Scientists have developed a new artificial intelligence system that can decode brain signals, an advance that may help severely paralysed patients communicate with <a class="read-more-link" href="https://www.aiuniverse.xyz/new-ai-system-can-decode-your-brain-signals/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/new-ai-system-can-decode-your-brain-signals/">New AI system can decode your brain signals</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; economictimes.indiatimes.com</p>
<p>BERLIN: Scientists have developed a new artificial intelligence system that can decode brain signals, an advance that may help severely paralysed patients communicate with their thoughts.</p>
<p>Artificial intelligence has far outpaced human intelligence in certain tasks.</p>
<p>Researchers from University Hospital Freiburg in Germany, led by neuroscientist Tonio Ball, showed how a self-learning algorithm decodes human brain signals measured by an electroencephalogram (EEG). The decoded signals included performed movements, but also hand and foot movements that were merely imagined, and the imaginary rotation of objects.</p>
<p>The system could be used for the early detection of epileptic seizures, for communicating with severely paralysed patients, or for making automatic neurological diagnoses.</p>
<p>&#8220;Our software is based on brain-inspired models that have proven to be most helpful to decode various natural signals such as phonetic sounds,&#8221; said Robin Tibor Schirrmeister, University Hospital Freiburg.</p>
<p>&#8220;The great thing about the program is we needn&#8217;t predetermine any characteristics. The information is processed layer by layer, that is, in multiple steps with the help of a non-linear function,&#8221; said Schirrmeister.</p>
<p>&#8220;The system learns to recognise and differentiate between certain behavioural patterns from various movements as it goes along,&#8221; he said.</p>
<p>The model is based on the connections between nerve cells in the human body, in which electric signals from synapses are directed from cellular protuberances to the cell&#8217;s core and back again.</p>
<p>&#8220;Theories have been in circulation for decades, but it wasn&#8217;t until the emergence of today&#8217;s computer processing power that the model has become feasible,&#8221; said Schirrmeister.</p>
<p>Up until now, it had been problematic to interpret the network&#8217;s circuitry after the learning process had been completed: all algorithmic processes take place in the background and are invisible.</p>
<p>That is why the researchers developed the software to create maps from which they could understand the decoding decisions. The researchers can insert new datasets into the system at any time.</p>
<p>&#8220;Our vision for the future includes self-learning algorithms that can reliably and quickly recognise the user&#8217;s various intentions based on their brain signals. In addition, such algorithms could assist neurological diagnoses,&#8221; said Ball, head investigator of the study published in the journal Human Brain Mapping.</p>
<p>The post <a href="https://www.aiuniverse.xyz/new-ai-system-can-decode-your-brain-signals/">New AI system can decode your brain signals</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/new-ai-system-can-decode-your-brain-signals/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Microsoft Research takes inspiration from nature for its latest AI-powered initiative</title>
		<link>https://www.aiuniverse.xyz/microsoft-research-takes-inspiration-from-nature-for-its-latest-ai-powered-initiative/</link>
					<comments>https://www.aiuniverse.xyz/microsoft-research-takes-inspiration-from-nature-for-its-latest-ai-powered-initiative/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 18 Aug 2017 12:01:07 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI system]]></category>
		<category><![CDATA[AI-powered]]></category>
		<category><![CDATA[Microsoft Research]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=669</guid>

					<description><![CDATA[<p>Source &#8211; neowin.net You may remember that back in February, the folks at Microsoft Research shared a series of tools meant to help create safer and more aware Unmanned <a class="read-more-link" href="https://www.aiuniverse.xyz/microsoft-research-takes-inspiration-from-nature-for-its-latest-ai-powered-initiative/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-research-takes-inspiration-from-nature-for-its-latest-ai-powered-initiative/">Microsoft Research takes inspiration from nature for its latest AI-powered initiative</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>neowin.net</strong></p>
<p>You may remember that back in February, the folks at Microsoft Research shared a series of tools meant to help create safer and more aware Unmanned Aerial Vehicles (UAVs). However, the search for improvement never stops, and the Research division has now shared something else they’re working on, inspired by the way nature operates.</p>
<p>Taking cues from birds and the way they stay aloft, the team has created an AI system which can keep a sailplane in the air “without using a motor, by autonomously finding and catching rides on naturally occurring thermals”.</p>
<p>In contrast to a bird, which does this naturally, for an AI to keep a vehicle in the air autonomously, it needs not only to gather a multitude of data points, such as wind speed and temperature, but also to make predictions about how those data points will affect its trajectory. Moreover, the system needs to be sophisticated enough to act in accordance with these predictions.</p>
<p>Echoing Facebook’s approach, Ashish Kapoor, a principal researcher at Microsoft, says that one day this autonomous sailplane could replace cellular towers, eliminating the need for any ground infrastructure. Not only that, but in conjunction with solar panels, the vehicle could stay aloft indefinitely.</p>
<p>The sailplane itself is still in testing, so it does include a battery and a motor, the latter of which is there in case a ground operator needs to take manual control. That said, once the vehicle is in the air, it’s designed to adjust itself without the need for an operator or a motor.</p>
<p>To achieve this high level of self-sufficiency on the AI’s part, the team combined a few frameworks of thinking, chief among them the “partially observable Markov decision process” (POMDP). This is used in cases where decisions must be planned “in an environment in which you can’t know everything”. Building on this, so that the AI can absorb and learn from environmental cues as quickly as possible, the Markov framework was combined with Bayesian reinforcement learning. Finally, so that the AI can select the most promising course of action, the team combined the previous two approaches with Monte Carlo tree search.</p>
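<p>A bare-bones Monte Carlo tree search, the last of the three ingredients, can be sketched on a toy altitude-seeking problem (the state, the two actions, and the reward below are invented stand-ins for illustration, not Microsoft’s actual model):</p>

```python
import math
import random

random.seed(0)
ACTIONS = [+1, -1]   # toy stand-ins for "circle in lift" vs "glide on"
HORIZON = 5

def step(state, action):
    # Toy dynamics: altitude changes by the action; reward is the change.
    return state + action, float(action)

class Node:
    def __init__(self):
        self.children = {}   # action -> child Node
        self.visits = 0
        self.total = 0.0     # sum of returns propagated through this node

def ucb(parent, child):
    # Upper confidence bound: balance average return against exploration.
    if child.visits == 0:
        return float("inf")
    return child.total / child.visits + math.sqrt(
        2.0 * math.log(parent.visits) / child.visits)

def search(root_state, iterations=300):
    root = Node()
    root.visits = 1
    for _ in range(iterations):
        node, state, path, ret, depth = root, root_state, [root], 0.0, 0
        # Selection/expansion: walk down the tree, creating children lazily.
        while depth < HORIZON:
            for a in ACTIONS:
                node.children.setdefault(a, Node())
            a = max(ACTIONS, key=lambda act: ucb(node, node.children[act]))
            state, r = step(state, a)
            node = node.children[a]
            ret += r
            path.append(node)
            depth += 1
            if node.visits == 0:      # stop at the first unvisited node
                break
        # Simulation: random playout for the remaining horizon.
        for _ in range(HORIZON - depth):
            state, r = step(state, random.choice(ACTIONS))
            ret += r
        # Backpropagation: credit the return to every node on the path.
        for n in path:
            n.visits += 1
            n.total += ret
    # Commit to the root action with the best average return.
    return max(ACTIONS, key=lambda act: root.children[act].total
               / root.children[act].visits)

print(search(0))
```

<p>In the real system, the POMDP belief state and Bayesian reinforcement learning would stand in for these toy dynamics and random rollouts; the tree search is the piece that picks the action.</p>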
<figure class="image"><img decoding="async" src="https://s3.amazonaws.com/neowin/news/images/uploaded/2017/08/1502966058_4q4a4353-1024x683.jpg" alt="" /><figcaption>From left to right: Debadeepta Dey, Andrey Kolobov, Rick Rogahn, Ashish Kapoor and Jim Piavis, gearing up for the launch of the sailplane</figcaption></figure>
<p>So that the AI system can keep improving, it’s split into two parts: a high-level planner and a low-level planner. The high-level planner takes in all sensory data from the environment and creates policies for the trajectories the vehicle should take in order to go and look for thermals. In essence, the more times it does this, the better it gets at making these predictions. In the words of researcher Andrey Kolobov, “The system will perform better on Friday than on Thursday because it incorporates information based on past flights.” The low-level planner, however, uses the Bayesian reinforcement learning approach to detect thermals in real time, or what’s thought of as “learning by doing”.</p>
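<p>The low-level planner’s “learning by doing” can be illustrated with the simplest possible Bayesian update: a Beta-Bernoulli belief about whether the current patch of air contains lift, revised after each observed climb or sink. The numbers are illustrative, not Microsoft’s model:</p>

```python
# Beta-Bernoulli belief about whether the current patch of air has lift:
# alpha counts observed climbs, beta counts observed sinks.
def update(belief, observed_lift):
    a, b = belief
    return (a + 1.0, b) if observed_lift else (a, b + 1.0)

def prob_lift(belief):
    a, b = belief
    return a / (a + b)   # posterior mean probability of lift

belief = (1.0, 1.0)                      # uninformative prior
for obs in [True, True, False, True]:    # e.g. variometer readings
    belief = update(belief, obs)

print(round(prob_lift(belief), 2))       # 0.67 after three climbs, one sink
```

<p>Each flight sharpens the belief, which is the sense in which the system flies better on Friday than on Thursday.</p>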
<p>After months of work on Microsoft’s Redmond campus, the testing was finally conducted in Hawthorne, Nevada for a number of reasons. First, the open area allowed for higher flexibility, and the fact that the flights took place in the real world, as opposed to a simulation, revealed problems that need to be fixed to improve the system’s operation, such as knowing when to avoid obstacles like mountains, restricted airspace, a lake, or “scores of munitions that the U.S. Army stores in the area near the test flight site”.</p>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-research-takes-inspiration-from-nature-for-its-latest-ai-powered-initiative/">Microsoft Research takes inspiration from nature for its latest AI-powered initiative</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/microsoft-research-takes-inspiration-from-nature-for-its-latest-ai-powered-initiative/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
	</channel>
</rss>
