<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>computers Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/computers/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/computers/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 14 Aug 2020 06:21:37 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Deepfakes: The Dark Origins of Fake Videos and Their Potential to Wreak Havoc Online</title>
		<link>https://www.aiuniverse.xyz/deepfakes-the-dark-origins-of-fake-videos-and-their-potential-to-wreak-havoc-online/</link>
					<comments>https://www.aiuniverse.xyz/deepfakes-the-dark-origins-of-fake-videos-and-their-potential-to-wreak-havoc-online/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 14 Aug 2020 06:21:10 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Internet]]></category>
		<category><![CDATA[weapons & security]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10884</guid>

					<description><![CDATA[<p>Source: discovermagazine.com Encountering altered videos and photoshopped images is almost a rite of passage on the internet. It’s rare these days that you’d visit social media and <a class="read-more-link" href="https://www.aiuniverse.xyz/deepfakes-the-dark-origins-of-fake-videos-and-their-potential-to-wreak-havoc-online/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deepfakes-the-dark-origins-of-fake-videos-and-their-potential-to-wreak-havoc-online/">Deepfakes: The Dark Origins of Fake Videos and Their Potential to Wreak Havoc Online</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: discovermagazine.com</p>



<p>Encountering altered videos and photoshopped images is almost a rite of passage on the internet. It’s rare these days that you’d visit social media and not come across some form of edited content — whether that be a simple selfie with a filter, a highly embellished meme or a video edited to add a soundtrack or enhance certain elements.</p>



<p>But while some forms of media are obviously edited, other alterations may be harder to spot. You may have heard the term “deepfake” in recent years — it first came about in 2017 to describe videos and images that use deep learning algorithms to&nbsp;<em>look&nbsp;</em>real.</p>



<p>For example, take the moon disaster speech given by former president Richard Nixon when the Apollo 11 team crashed into the lunar surface. Just kidding — that never happened. But a hyper-realistic deepfake of Nixon paying tribute to a fallen Buzz Aldrin and Neil Armstrong appeared in a 2019 film, In Event of Moon Disaster, which showcased the convincing alteration of the president’s original speech.</p>



<p>Other current and former world leaders, such as John F. Kennedy, Barack Obama and Vladimir Putin have been the subjects of deepfake videos, too, in which they appear to say and do things that they never actually said or did. Though the rise of deepfakes in recent years has been discussed in popular media, the pool of academic literature on the topic remains relatively sparse.</p>



<p>But researchers have expressed concern that these doctored images and videos could present a growing security risk in the coming years. A report last week in Crime Science predicts that, out of a host of AI-powered technologies, deepfakes will pose the most serious security threat over the next 15 years.</p>



<p>“Humans have a strong tendency to believe their own eyes and ears,” the researchers wrote in their conclusion. So when the media we consume looks too good to be fake, it’s easy to fall victim to trickery. And the number of deepfakes online continues to grow, though not always in the places you might expect.</p>



<h3 class="wp-block-heading">What Makes a Deepfake?</h3>



<p>The term deepfake doesn’t refer to just any convincing edited video or image — more specifically, the term is a conglomeration of “deep learning” and “fake.” This specific type of media relies on neural networks to alter audio and video.</p>



<p>The technology to create deepfakes has gotten easier to access over the years, with a handful of programs and websites cropping up that allow users to make their own, sometimes at a hefty price. Still, many of the deepfakes that populate various corners of the internet aren’t that convincing, says Giorgio Patrini. He’s the CEO and founder of Sensity, a company in Amsterdam that has been researching the spread of deepfakes since 2018. Patrini says most of the deepfakes he’s come across are made with the same few open-source tools. “The reason is they are very easy to use and they are very well-maintained and known by the communities,” he adds. And most media they find “in the wild,” as Patrini puts it, use the same few methods to alter digital footage.</p>



<p>Recently, Facebook announced the results of a competition where experts built new algorithms to detect deepfakes — the winner was able to detect 82 percent of the AI-altered media they were exposed to. Some deepfakes can be created using methods that are still hard for current detection algorithms to spot, but Patrini says deepfake creators in the wild tend to use cheaper, simpler methods when making videos. The detection software we have now is actually pretty successful at sorting through the large swaths of media found online, he adds.</p>



<p>“I would say maybe 99 percent, or even more, of the deepfake videos that we find are … based on face swapping,” he says. “There are other ways to create fake videos, even changing the speech and lip movement, [or] changing the body movement.” But so far, those are not the most popular methods among deepfake connoisseurs, says Patrini, so current algorithms can still weed out much of the AI-altered content.</p>



<p>And though face-swapping technology can be applied to literally any photo or video with a human face in it, deepfake creators seem to have an affinity for one type of media in particular: pornography. An overwhelming amount of AI-altered videos are created to place one subject&#8217;s face onto the body of a porn star — a phenomenon that disproportionately targets women and hearkens back to the dark origins of deepfakes themselves.</p>



<h3 class="wp-block-heading">The Porn Problem</h3>



<p>In 2019, when Sensity released a report on the state of deepfakes under the name Deeptrace, they detected 14,678 total AI-altered videos online. Of those, 96 percent were used in pornographic content.</p>



<p>And the first deepfake videos, in fact, were made for the same reason. In 2017, users on Reddit started to post doctored videos of female celebrities whose faces were non-consensually swapped onto the bodies of porn stars. Reddit banned users from posting these explicit deepfakes in 2018, but reports show that other ethically problematic sites and apps still popped up in its place.</p>



<p>&#8220;We haven&#8217;t gone very far from it,&#8221; Patrini says. Despite widespread media coverage of political deepfakes, pornographic edits have been the reigning form of AI-altered content to spread across the web. And so far, women are pretty much always the targets — Sensity&#8217;s 2019 report found that 100 percent of detected pornographic deepfakes featured female subjects.</p>



<p>Just two months ago, Sensity identified 49,081 total deepfake videos online — a trend showing that the numbers are doubling nearly every six months. Lately, Patrini says, they’ve observed an increase in videos targeting people who are popular internet personalities, or influencers, on YouTube, Instagram and Twitch. &#8220;Maybe a year ago we saw that most of the content was featuring known celebrities that could be … from the entertainment industry,” he says. But deepfake creators are also targeting individuals, often women, who lead active lives online.</p>



<h3 class="wp-block-heading">Can We Stop the Spread?</h3>



<p>While AI-altered media might seem all bad, the technology itself isn’t inherently damaging. “For many people, deepfakes already have an intrinsically negative connotation,” Patrini says. But the technology behind them can be used for a host of creative projects — such as translation services or visual tricks in movies and TV shows.</p>



<p>Take the Nixon deepfake, for example. The directors didn&#8217;t present their creation to mislead viewers or make them think the history books got the Apollo 11 mission wrong. Rather, the film used an experimental new technology to showcase what an alternate historical timeline might have looked like, while educating viewers on how convincing deepfakes and video editing can be.</p>



<p>But that&#8217;s not to say deepfakes can&#8217;t mislead, nor that they aren&#8217;t already being used to carry out nefarious deeds. Besides the widespread use of non-consensual, doctored porn, Patrini says he’s also seen a rise in cases where deepfakes are used to impersonate someone trying to open a bank account or Bitcoin wallet. Video verification can be required for these processes, and it&#8217;s possible for a deepfake to trick the cameras.</p>



<p>&#8220;With some sophistication, people can actually fake an ID and also fake how they appear on the video,&#8221; Patrini says. Sometimes that can mean opening accounts under a stranger&#8217;s name, or under a fake name, creating a persona that does not exist. For now, Patrini says, this kind of trickery does not appear to be widespread — but it does represent a more sinister application for deepfakes.</p>



<p>And with the technology getting easier to access, it’s likely that the spread of deepfakes will continue. We can only hope people will choose to use them for good.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deepfakes-the-dark-origins-of-fake-videos-and-their-potential-to-wreak-havoc-online/">Deepfakes: The Dark Origins of Fake Videos and Their Potential to Wreak Havoc Online</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deepfakes-the-dark-origins-of-fake-videos-and-their-potential-to-wreak-havoc-online/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Design by computers: How AI is changing the graphic design industry</title>
		<link>https://www.aiuniverse.xyz/design-by-computers-how-ai-is-changing-the-graphic-design-industry/</link>
					<comments>https://www.aiuniverse.xyz/design-by-computers-how-ai-is-changing-the-graphic-design-industry/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 10 Jul 2020 09:21:06 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[graphics design]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10115</guid>

					<description><![CDATA[<p>Source: clickz.com There is no denying that artificial intelligence (AI) is one of the biggest technologies of the current generation. It holds tremendous potential in domains like <a class="read-more-link" href="https://www.aiuniverse.xyz/design-by-computers-how-ai-is-changing-the-graphic-design-industry/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/design-by-computers-how-ai-is-changing-the-graphic-design-industry/">Design by computers: How AI is changing the graphic design industry</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: clickz.com</p>



<p>There is no denying that artificial intelligence (AI) is one of the biggest technologies of the current generation. It holds tremendous potential in domains like healthcare, education, manufacturing, etc.</p>



<p>However, to everyone’s surprise, AI has also found an application in the creative arena. For instance, mobile app developers are using AI to design a better mobile app user experience.</p>



<p>There is also a wide range of graphic design software that leverages AI to create complex designs. So, are we on the verge of an AI revolution in the design space?</p>



<h3 class="wp-block-heading"><strong>The rise of AI in graphic design</strong></h3>



<p>AI has matured a lot today. The technology’s biggest achievement can be seen in logo design. After all, the two have been a perfect match from the beginning.</p>



<p>Take Tailor Brands, for example. It’s a highly advanced AI-based logo designer that can produce attractive and unique logos for entrepreneurs. Sure, it can’t match the work of human designers, but it’s fast, affordable, and offers tons of customizable features.</p>



<p>Most importantly, it can mimic a human designer by understanding your design requirements.</p>



<p>A few years ago, a software application that could process human requirements for a graphic design, such as a logo, was simply unheard of. That kind of understanding was usually reserved for humans’ emotional intelligence.</p>



<p>There are many examples of tech giants also using artificial intelligence. For instance, Adobe’s new AI tool Sensei uses machine learning to make it easier for you to create the perfect customer experiences through visual assets.</p>



<p>It can work as your assistant when you do creative work and help you achieve photorealistic effects, find the right content with an intuitive search, and more.</p>



<p>These examples show that AI has not just forayed into the digital design space but has become an indispensable component, giving a new direction to the industry.</p>



<p>This brings us to another important question:</p>



<h3 class="wp-block-heading"><strong>Can AI replace designers?</strong></h3>



<p>AI tools are all the rage today. However, the good news is that graphic designers needn’t fear them, because, at least at this stage, AI only serves to make graphic design easier.</p>



<p>AI tools can limit the legwork for graphic designers and perform repetitive tasks for them so that they can focus on the bigger picture. In other words, AI is not going to replace designers but merely work as their assistant. At least that’s what we can surmise for now.</p>



<p>This is because there are some major limitations of AI today:</p>



<h3 class="wp-block-heading"><strong>1) Understanding nuances that come naturally to humans</strong></h3>



<p>AI has come a long way today, but it’s still far from comparable to human intelligence. This is because we humans have emotional intelligence, which AI doesn’t.</p>



<p>We are capable of understanding body language, the subtle changes in voice and tone, and the messages we get when we read between the lines. This understanding of common nuances is absent in AI.</p>



<p>So, it can be difficult to make AI software understand what we really want it to do when there are subtleties in the design.</p>



<p>Occasionally, you may lay down the requirements for a simple website interface or app design that carries a certain connotation, only for the AI program you are using to interpret it differently.</p>



<h3 class="wp-block-heading"><strong>2) Originality</strong></h3>



<p>What makes us humans special is our ability to imagine. So many geniuses who walked the face of the Earth created music, paintings, and poems that are simply out of this world and can’t be replicated. AI doesn’t have that kind of capacity: it can’t imagine.</p>



<h3 class="wp-block-heading"><strong>3) Human touch</strong></h3>



<p>We know that ecommerce has exploded today. However, many people still prefer shopping from local stores.</p>



<p>This is because they get a personalized experience by shopping offline: the friendly store owner can understand their requirements and give recommendations in a way that can’t be matched by an online service.</p>



<p>The same principle can be seen in graphic design. There are many entrepreneurs who want a human touch: a human being who can listen to their problems and create designs that aptly meet their needs.</p>



<h3 class="wp-block-heading"><strong>Bottom line of AI and graphic design</strong></h3>



<p>Artificial intelligence is a powerful technology, and there is no dearth of merits to it. It has disrupted many industries, and we can expect more achievements in the time to come as the technology comes closer to human intelligence.</p>



<p>That said, AI is still pretty much dependent on us and requires input from graphic designers for most tasks. So, for now, AI has simplified graphic design to a great extent, at least for people who don’t have a design background.</p>



<p>However, to tap into its full potential, we need to wait a little longer.</p>



<p>Carl Dean is a freelance content writer that specializes in content topics that touch on tech and AI. Carl happily identifies as a geek – it’s a badge of honor for him. When not writing, Carl can be found attached to his Xbox, his favorite game at the moment is Doom Eternal.</p>
<p>The post <a href="https://www.aiuniverse.xyz/design-by-computers-how-ai-is-changing-the-graphic-design-industry/">Design by computers: How AI is changing the graphic design industry</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/design-by-computers-how-ai-is-changing-the-graphic-design-industry/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>IASST develops an artificial intelligence-based computer diagnosis framework for rapid and accurate diagnosis of oral cancers</title>
		<link>https://www.aiuniverse.xyz/iasst-develops-an-artificial-intelligence-based-computer-diagnosis-framework-for-rapid-and-accurate-diagnosis-of-oral-cancers/</link>
					<comments>https://www.aiuniverse.xyz/iasst-develops-an-artificial-intelligence-based-computer-diagnosis-framework-for-rapid-and-accurate-diagnosis-of-oral-cancers/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 08 Jun 2020 09:36:31 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[develops]]></category>
		<category><![CDATA[framework]]></category>
		<category><![CDATA[IASST]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9374</guid>

					<description><![CDATA[<p>Source: london-post.co.uk New Delhi: Scientists at the Institute of Advanced Study in Science and Technology (IASST), Guwahati, an autonomous institute of the Department of Science &#38; Technology, <a class="read-more-link" href="https://www.aiuniverse.xyz/iasst-develops-an-artificial-intelligence-based-computer-diagnosis-framework-for-rapid-and-accurate-diagnosis-of-oral-cancers/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/iasst-develops-an-artificial-intelligence-based-computer-diagnosis-framework-for-rapid-and-accurate-diagnosis-of-oral-cancers/">IASST develops an artificial intelligence-based computer diagnosis framework for rapid and accurate diagnosis of oral cancers</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: london-post.co.uk</p>



<p>New Delhi: Scientists at the Institute of Advanced Study in Science and Technology (IASST), Guwahati, an autonomous institute of the Department of Science &amp; Technology, Govt of India, have developed artificial intelligence (AI)-based algorithms as an aid to the rapid diagnosis and prediction of oral squamous cell carcinoma.</p>



<p>The framework, developed by the research group at the Central Computational and Numerical Sciences Division, IASST, led by Dr. Lipi B Mahanta, will also help in grading oral squamous cell carcinoma.</p>



<p>An indigenous dataset was developed by the scientists through collaborations to make up for the unavailability of any benchmark oral cancer dataset for the study. Exploring different state-of-the-art AI techniques and refining their proposed method, the scientists achieved unprecedented accuracy in oral cancer grading. The study was conducted using two approaches: transfer learning with pre-trained deep convolutional neural networks (CNNs), and a CNN model proposed by the group.</p>



<p>Four candidate pre-trained models, namely Alexnet, VGG-16, VGG-19, and Resnet-50, were evaluated to find the most suitable model for the classification problem, and a CNN model was proposed to fit the problem. Although the highest classification accuracy among the transfer learning approaches, 92.15%, was achieved by the Resnet-50 model, the experimental findings highlight that the proposed CNN model outperformed them with an accuracy of 97.5%. The work has been published in the journal Neural Networks.</p>



<p>As of now, the group is set to convert the algorithm into proper software and move on to field trials. This is the next challenge the group is prepared to meet, considering the ever-present gap between the health and IT sectors. Dr. Mahanta hopes for advanced infrastructural support to meet these challenges and feels that the software needs to be actively tested in hospitals to make it truly robust, more accurate, and ready for real-time use.</p>



<p>Oral cancer accounts for around 16.1% of all cancers amongst men and 10.4% amongst women, and the picture is all the more alarming in Northeast India. Oral cavity cancers are also known to have a high recurrence rate compared to other cancers due to the high consumption of betel nut and tobacco.</p>



<p>This cancer group is characterized by epithelial squamous tissue differentiation and aggressive tumour growth that disrupts the basement membrane of the inner cheek region. It can thus be graded by Broder’s histopathological system as well-differentiated SCC (WDSCC), moderately differentiated SCC (MDSCC) and poorly differentiated SCC (PDSCC). The cellular morphometry highlighting the tumour growth shows only minute histological differences separating the three classes, which are very hard to capture with the human eye. Grading has remained elusive because of these highly similar histological features, which even pathologists find difficult to classify.</p>



<p>The advent of deep learning in AI holds extraordinary prospects for digital image analysis as a computational aid in the diagnosis of cancer. It can support timely and effective prognosis and multi-modal treatment protocols for cancer patients, and reduce the operational workload of pathologists while enhancing management of the disease.</p>
<p>The post <a href="https://www.aiuniverse.xyz/iasst-develops-an-artificial-intelligence-based-computer-diagnosis-framework-for-rapid-and-accurate-diagnosis-of-oral-cancers/">IASST develops an artificial intelligence-based computer diagnosis framework for rapid and accurate diagnosis of oral cancers</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/iasst-develops-an-artificial-intelligence-based-computer-diagnosis-framework-for-rapid-and-accurate-diagnosis-of-oral-cancers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Computer Graphics Technology Adapted for Soft Robotics</title>
		<link>https://www.aiuniverse.xyz/computer-graphics-technology-adapted-for-soft-robotics/</link>
					<comments>https://www.aiuniverse.xyz/computer-graphics-technology-adapted-for-soft-robotics/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 20 May 2020 05:58:44 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Graphics]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8887</guid>

					<description><![CDATA[<p>Source: unite.ai Scientists from the University of California, Los Angeles (UCLA) and Carnegie Mellon University have adapted sophisticated computer graphics technology for soft robotics. They used the <a class="read-more-link" href="https://www.aiuniverse.xyz/computer-graphics-technology-adapted-for-soft-robotics/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/computer-graphics-technology-adapted-for-soft-robotics/">Computer Graphics Technology Adapted for Soft Robotics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: unite.ai</p>



<p>Scientists from the University of California, Los Angeles (UCLA) and Carnegie Mellon University have adapted sophisticated computer graphics technology for soft robotics. They used the same technology that motion-picture animators and video game developers rely on to create very detailed images, such as hair and fabric in animated films. It is now being used by the scientists to simulate soft, limbed robots and their movements.&nbsp;</p>



<p>The work was published in Nature Communications on May 6. The paper is titled “Dynamic Simulation of Articulated Soft Robots.”</p>



<p>Khalid Jawed is the study author and an assistant professor of mechanical and aerospace engineering at UCLA Samueli School of Engineering.&nbsp;</p>



<p>“We have achieved faster than real-time simulation of soft robots, and this is a major step toward such robots that are autonomous and can plan out their actions on their own,” said Jawed.&nbsp; “Soft robots are made of flexible material which makes them intrinsically resilient against damage and potentially much safer in interaction with humans. Prior to this study, predicting the motion of these robots has been challenging because they change shape during operation.”</p>



<h3 class="wp-block-heading"><strong>DER and FEM Technologies</strong></h3>



<p>An algorithm called discrete elastic rods (DER) is often used in movie-making in order to animate free-flowing objects. In just a fraction of a second, DER is capable of predicting hundreds of movements.&nbsp;</p>



<p>The researchers set out to use DER to develop a physics engine capable of simulating the movements of bio-inspired robots. They also wanted to use it for robots that exist in difficult environments, like those developed for Mars or underwater.&nbsp;</p>



<p>The finite element method (FEM) is also an algorithm-based technology, and it is able to simulate the movements of solid, rigid robots. However, FEM is not ideal for soft, natural movements at the required level of detail. Besides that, FEM demands a lot of computational power and long computation times.</p>



<p>In order to develop and simulate soft robots, roboticists have relied on trial-and-error methods.&nbsp;</p>



<p>Carmel Majidi is an associate professor of mechanical engineering in Carnegie Mellon’s College of Engineering.&nbsp;</p>



<p>“Robots made out of hard and inflexible materials are relatively easy to model using existing computer simulation tools,” said Majidi.&nbsp; “Until now, there haven’t been good software tools to simulate robots that are soft and squishy. Our work is one of the first to demonstrate how soft robots can be successfully simulated using the same computer graphics software that has been used to model hair and fabrics in blockbuster films and animated movies.”</p>



<p>The researchers began to collaborate in Majidi’s Soft Machines Lab over three years ago. Their most recent project involved Jawed running simulations in his research lab at UCLA and Majidi performing physical experiments to confirm the simulation results.&nbsp;</p>



<p>The simulation tool drastically reduces the time it takes to get a soft robot to the point of application.&nbsp;</p>



<h3 class="wp-block-heading"><strong>Support from the Army Research Office</strong></h3>



<p>The research was partly funded by the Army Research Office, which is a part of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory.&nbsp;</p>



<p>Dr. Samuel Stanton is a program manager with the Army Research Office.&nbsp;</p>



<p>“Experimental advances in soft-robotics have been outpacing theory for several years,” said Stanton. “This effort is a significant step in our ability to predict and design for dynamics and control in highly deformable robots operating in confined spaces with complex contacts and constantly changing environments.”</p>



<p>The technology is now being explored and tried on other kinds of soft robots. One of those areas is robots that are based on the movements of bacteria and starfish, which could be utilized in oceanography tasks such as monitoring seawater conditions or inspecting marine life.</p>
<p>The post <a href="https://www.aiuniverse.xyz/computer-graphics-technology-adapted-for-soft-robotics/">Computer Graphics Technology Adapted for Soft Robotics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/computer-graphics-technology-adapted-for-soft-robotics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Reinforcement Learning: The Algorithms Changing How Computers Make Decisions</title>
		<link>https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/</link>
					<comments>https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 23 Mar 2020 06:42:35 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7636</guid>

					<description><![CDATA[<p>Source: inc42.com The last decade of tech was to a large part defined by the advent of Deep Supervised Learning (DL). The availability of cheap data at scale, computational <a class="read-more-link" href="https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/">Reinforcement Learning: The Algorithms Changing How Computers Make Decisions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: inc42.com</p>



<p>The last decade of tech was in large part defined by the advent of Deep Supervised Learning (DL). The availability of cheap data at scale, computational power, and researcher interest have made it the de facto school of algorithms for most pattern recognition problems. Face recognition on social media, product recommendations on sites, and voice assistants like Google Assistant, Alexa, and Siri are some examples largely powered by DL.</p>



<p>The issue with deep learning is that the resources that led to its rise are also giving rise to inequities. Today, it is tough for startups to beat ‘big tech’ like Apple, Google, Amazon, and Microsoft in deep learning through better research capabilities or better data.</p>



<p>My prediction is that in the 2020s, we shall see this inequity broken down. This will be due to the rise of Deep Reinforcement Learning (RL) as a prominent algorithm for such problems.</p>



<p>RL, in essence, is mimicking what humans do. Let’s take the example of a kid learning to ride a bike. The kid has no understanding of what steps to take. But it tries to ride the bike for longer without falling down and learns in the process. You can’t explain how you ride a bike, just that you can ride it. RL works in a similar way. Given an environment, it learns to optimise for a goal through multiple trials and errors.</p>



<p>“… I believe that in some sense reinforcement learning is the future of AI … an intelligent system must be able to learn on its own, without constant supervision …” – Richard Sutton, Founding Father of Reinforcement Learning.</p>



<p>To go a bit deeper into the tech in a watered-down way, RL has three components – the state, the policy, and the action. The state is a description of what the environment is like right now. The policy evaluates the state and finds an optimal path to the goal set for the algorithm.</p>



<p>The action is the step suggested by the policy and taken by the algorithm to reach the goal. RL algorithms iteratively run through states, use their policy to generate an action, run the action, and given the environment’s feedback – called reward – optimise the policy to give more goal-oriented actions.</p>
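<p>As an illustration, the state/policy/action/reward loop just described can be sketched as a tiny tabular Q-learning agent. Everything below (the one-dimensional toy environment, the reward scheme, the hyperparameters) is invented for demonstration and is not from the article.</p>

```python
import random

# Toy environment: states 0..4 on a line; reaching state 4 yields reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left or step right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == GOAL
    return nxt, (1.0 if done else 0.0), done

# The "policy" here is derived from a table of learned action values.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for episode in range(500):              # multiple trials...
    s = 0
    for _ in range(50):
        # The policy evaluates the state and suggests an action
        # (epsilon-greedy; ties between equal values broken at random).
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: (Q[(s, act)], random.random()))
        s2, reward, done = step(s, a)
        # ...and the environment's feedback (the reward) improves the policy.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2
        if done:
            break

# The learned greedy policy walks right, toward the goal, from every state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

<p>Each pass through the inner loop follows exactly the cycle described: observe the state, let the policy pick an action, act, then fold the reward back into the policy.</p>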



<p>In this manner, RL allows us to solve many problems without needing as much supervised/labelled data as a traditional DL model does, since it keeps generating its own data. Of course, there’s the caveat that RL doesn’t solve the same set of problems as DL, but there is a strong intersection. RL can thus level the playing field, as data may not necessarily be the moat it once was.</p>



<p>The biggest application of RL that we’ve seen until now has been in games: AlphaGo Zero, DeepMind’s expert-level AI for the board game Go; AlphaStar, DeepMind’s effort to master a multi-agent game like StarCraft; and OpenAI’s research showing multiple agents playing hide and seek. These all leverage RL.</p>



<p>In the future, I see RL changing how control systems are built for complex machines. Machines will leverage RL for three-dimensional path and motion planning. RL will improve systems with conversational interfaces, leveraging each conversation to improve the policy. RL could potentially be used for most decision-making processes in extremely complex environments with little precedent data. This will be the decade of RL.</p>
<p>The post <a href="https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/">Reinforcement Learning: The Algorithms Changing How Computers Make Decisions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>DEEP LEARNING TECHNOLOGIES IMPACTING COMPUTER VISION ADVANCES</title>
		<link>https://www.aiuniverse.xyz/deep-learning-technologies-impacting-computer-vision-advances/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-technologies-impacting-computer-vision-advances/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 07 Mar 2020 06:59:28 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7308</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net The promise of deep learning in the field of computer vision is better performance by models that may require more data but less digital signal processing expertise <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-technologies-impacting-computer-vision-advances/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-technologies-impacting-computer-vision-advances/">DEEP LEARNING TECHNOLOGIES IMPACTING COMPUTER VISION ADVANCES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>The promise of deep learning in the field of computer vision is better performance by models that may require more data but less digital signal processing expertise to train and operate. There is a lot of hype and there are big claims around deep learning methods; beyond the hype, however, deep learning techniques are achieving state-of-the-art results on challenging problems, notably on computer vision tasks such as image classification, object recognition, and face detection. Deep learning methods are popular principally because they are delivering on their promise.</p>



<p>This isn’t to imply that there is no hype around the technology, but rather that the hype is grounded in genuine results being demonstrated across a suite of challenging artificial intelligence problems, from computer vision to natural language processing.</p>



<p>Some of the first large demonstrations of the power of deep learning were in computer vision, specifically image recognition, and more recently in object detection and face recognition.</p>



<p>Among the most notable factors behind the enormous boost in deep learning is the availability of large, high-quality, publicly available labelled datasets, along with the advent of parallel GPU computing, which enabled the transition from CPU-based to GPU-based training and thereby allowed a huge speed-up in training deep models.</p>



<p>Additional factors may have played a lesser role as well, such as the alleviation of the vanishing gradient problem through the move away from saturating activation functions (such as the hyperbolic tangent and the logistic function), the proposal of new regularization techniques (e.g., dropout, batch normalization, and data augmentation), and the appearance of powerful frameworks like TensorFlow, Theano, and MXNet, which allow for faster prototyping.</p>



<p>Before getting too excited about progress in computer vision, it’s important to understand the limitations of current AI technologies. While the improvements are significant, we are still a long way from computer vision algorithms that can understand photographs and videos the way people do.</p>



<p>For now, deep neural networks, the foundation of computer vision systems, are very good at matching patterns at the pixel level. They’re especially effective at classifying images and localizing objects in images. Yet when it comes to understanding the context of visual data and describing the relationships between different objects, they fail badly.</p>



<p>Recent work in the field shows the limitations of computer vision algorithms and the need for new evaluation techniques. Still, the present uses of computer vision show how much can be accomplished with pattern matching alone.</p>



<p>A major focus of study in the field of computer vision is on techniques to detect and extract features from digital images. Extracted features provide context for inference about an image, and often the richer the features, the better the inference.</p>



<p>Sophisticated hand-designed features, such as the scale-invariant feature transform (SIFT), Gabor filters, and the histogram of oriented gradients (HOG), were the focus of computer vision feature extraction for some time, and saw good success.</p>



<p>The promise of deep learning is that complex and useful features can be learned automatically and directly from large image datasets. More specifically, a deep hierarchy of rich features can be learned and automatically extracted from images by the many layers of deep neural network models.</p>



<p>Deep neural network models are delivering on this promise, most strikingly demonstrated by the shift away from sophisticated hand-crafted feature detection methods such as SIFT toward deep convolutional neural networks on standard computer vision benchmark datasets and competitions, such as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).</p>



<p>Until not long ago, facial recognition was a clumsy and expensive technology confined to police research labs. Recently, however, thanks to advances in computer vision algorithms, facial recognition has found its way into a variety of computing devices.</p>



<p>The iPhone X introduced FaceID, an authentication system that uses an on-device neural network to unlock the phone when it sees its owner’s face. During setup, FaceID trains its AI model on the owner’s face and works well under various lighting conditions, facial hair, hairstyles, hats, and glasses. In China, many stores now use facial recognition technology to give customers a smoother payment experience (at the cost of their privacy, however). Instead of using credit cards or mobile payment apps, customers simply need to show their face to a computer-vision-equipped camera.</p>



<p>Perhaps the most significant promise of deep learning is that the top-performing models are all built from the same basic components. The most notable results have come from one kind of network, called the convolutional neural network, composed of convolutional and pooling layers. It was designed specifically for image data and can be trained on pixel data directly (with some minor scaling).</p>



<p>This differs from the broader field, which might have required specialized feature detection methods developed separately for handwriting recognition, character recognition, face recognition, object detection, and so on. Instead, a single general class of model can be designed and used across every computer vision task directly. This is the promise of machine learning in general; it is remarkable that such a flexible method has been found and proven for computer vision.</p>
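<p>The convolutional and pooling layers mentioned above can be illustrated in a few lines of plain Python. This is a hand-rolled sketch of the two operations only (real networks stack many such layers and learn the filter weights from data); the tiny image and edge-detecting kernel are invented for demonstration.</p>

```python
# The two building blocks of a convolutional network, sketched by hand.

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most DL libraries)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the strongest response per patch."""
    return [[max(feature_map[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, len(feature_map[0]) - size + 1, size)]
            for i in range(0, len(feature_map) - size + 1, size)]

# A 5x5 "image" with a vertical edge, and a vertical-edge filter.
image = [[0, 0, 1, 1, 1]] * 5
kernel = [[-1, 1],
          [-1, 1]]

features = conv2d(image, kernel)   # strong response where the edge is
pooled = max_pool(features)
print(pooled)  # [[2, 0], [2, 0]]
```

<p>The convolution responds strongly only where the pixel pattern matches the filter, and pooling then summarizes each neighbourhood, which is exactly the pixel-level pattern matching described above.</p>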
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-technologies-impacting-computer-vision-advances/">DEEP LEARNING TECHNOLOGIES IMPACTING COMPUTER VISION ADVANCES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-technologies-impacting-computer-vision-advances/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Opinion &#124; Machine learning reveals computers as bad students</title>
		<link>https://www.aiuniverse.xyz/opinion-machine-learning-reveals-computers-as-bad-students/</link>
					<comments>https://www.aiuniverse.xyz/opinion-machine-learning-reveals-computers-as-bad-students/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 21 Jan 2020 09:13:27 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[students]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6271</guid>

					<description><![CDATA[<p>Source: livemint.com Artificial intelligence (AI) is deeply linked with machine learning (ML). In fact, almost all of AI today is simply ML—in other words, an attempt to <a class="read-more-link" href="https://www.aiuniverse.xyz/opinion-machine-learning-reveals-computers-as-bad-students/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/opinion-machine-learning-reveals-computers-as-bad-students/">Opinion | Machine learning reveals computers as bad students</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:  livemint.com</p>



<p>Artificial intelligence (AI) is deeply linked with machine learning (ML). In fact, almost all of AI today is simply ML—in other words, an attempt to get a computer to make itself more efficient at its task without the need for human intervention. As an investor in deep-tech and science companies, I have had the occasion to see several startups that claim to use AI/ML.</p>



<p>Neither AI nor ML is “deep-tech&#8221;. The applicability of ML is limited, at least today, primarily to the field of data science, where one is actually only trying to ask simple questions of a data set.</p>



<p>Most of these questions revolve around whether there is a pattern to the data that is present in the data set, and seek to answer fairly simple questions, such as, “Is this customer likely to buy product X if they have already bought product Y?&#8221; or “Does this medical scan contain evidence of cancer?&#8221;</p>



<p>ML tries to filter out the “noise&#8221; from a data set and arrive at a “signal&#8221;. This is the realm of “data science&#8221;. Data science draws on inductive reasoning—as opposed to the deductive reasoning of arithmetic and algebra. While the conclusion of a deductive process is certain, the truth of the end of an inductive reasoning process is only probable. Statistical modelling allows an ML program to systematically quantify and reason about the inherent uncertainties of inductive reasoning.</p>



<p>Every data set thrown at an ML model contains confusion, especially when the data in it is at a large scale.</p>



<p>The confusion in these large data sets will mean that there are four possible outcomes while looking for a “signal&#8221;: a) that the actual data point represents a true positive (as in yes, this scan shows cancer); b) that the data point represents a true negative (as in there is no cancer); c) a false positive (as in, yes, this scan indicates cancer, when in fact it doesn’t); and d) a false negative (as in, no, this scan doesn’t indicate cancer, when in fact cancer is present).</p>



<p>One very soon begins to see that the test data used to “teach&#8221; a machine to “learn&#8221; on its own becomes crucial. This is why many startups promise to generate new data sets that can later be used to train an ML model. This “data exhaust&#8221; is presumed to be useful simply because it produces voluminous new data about a subject.</p>



<p>Not so fast, I tell these startups. Just because one can use ML, it doesn’t necessarily follow that the ML model is useful. Neither does it follow that a particular ML model is more effective than a different ML model.</p>



<p>The good news is that there are plenty of ways to gauge the effectiveness of an ML model, and they can be brought down to the four types of predictions described above (true positives, true negatives, false positives and false negatives).</p>



<p>The first of these is the prevalence of positives in the data set being used to train the model, and the accuracy of the model in picking those positives.</p>



<p>Let us say only 10% of 100,000 medical scans that the ML model is being fed to learn from actually indicate the presence of cancer. This number is important, since it gives us a base measure of what the ML model should be able to achieve on its own, after it has worked its way through the vast maze of positives, negatives, false positives and false negatives.</p>



<p>In a random pick from this data set, the probability that the pick is positive is 10% and 90% that it is negative. The startup’s ML model should be much more accurate than a random pick. However, the issue with this strict statistical measure of “accuracy&#8221; is that it includes both positives and negatives (the ML model should be accurate at predicting both).</p>



<p>This can present a problem, since the model could simply call everything negative (negatives constitute 90% of the data set in this instance) and still be 90% accurate. It would nevertheless be useless, since this “accurate&#8221; model hasn’t been able to pick any of the cancer-positive scans.</p>



<p>The second is the ML model’s precision. Precision is the share of the cases the model flags as positive that are actually true positives. The startup’s ML model precision would have to be significantly greater than the prevalence of true positives (10%) in the example. Otherwise the model is only as good as a random choice at predicting an outcome.</p>



<p>Now, let’s assume the model’s precision is 100%. The next measure will be its ability to collect true positives from the data set. So, in a data set with 100,000 medical scans, with 10,000 instances of cancer (10%), the efficacy of the model’s collection depends on how many instances of cancer it detects.</p>



<p>Although its precision is now 100%, if it only detects 5,000 out of the 10,000 true instances, its collection rate (what statisticians call recall) means it has missed the other 5,000. These two measures are trade-offs between one another. Decreasing the model’s precision can increase its collection, but now in addition to more than 5,000 true cases, it will also collect noise: negatives, false negatives and false positives.</p>
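<p>For concreteness, these metrics can be computed directly from the four outcome counts. The sketch below uses the article’s running example (100,000 scans, 10,000 cancer-positive, a model that flags 5,000 scans with perfect precision); the formulas are the standard textbook definitions, not anything specific to the article.</p>

```python
# Confusion-matrix metrics for the running example: 100,000 scans,
# 10,000 of them cancer-positive. Suppose the model flags 5,000 scans,
# all of them correctly (perfect precision, but half the cases missed).
tp = 5_000           # true positives: flagged, and really cancer
fp = 0               # false positives: flagged, but healthy
fn = 10_000 - tp     # false negatives: cancer the model missed
tn = 90_000 - fp     # true negatives: healthy, and not flagged

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)          # the article's "collection" rate

print(accuracy, precision, recall)   # 0.95 1.0 0.5

# The degenerate model that flags nothing shows the accuracy trap:
# it scores 90% accuracy while detecting zero cancers.
print(90_000 / 100_000)              # 0.9
```
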



<p>So, what kind of ML model does a startup create with trade-offs between accuracy, collection capability and precision? That depends on the outcome that the model is trying to predict.</p>



<p>There are various other complexities when models deal with sensitive data. Predicting a repeat buy of a pair of jeans is very different from detecting cancer. Despite large “data exhausts&#8221;, sane professionals who understand the field need to come in and help train the model. We will need expert humans for a while yet.</p>
<p>The post <a href="https://www.aiuniverse.xyz/opinion-machine-learning-reveals-computers-as-bad-students/">Opinion | Machine learning reveals computers as bad students</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/opinion-machine-learning-reveals-computers-as-bad-students/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Indian Ocean Dipole can be better predicted thru machine learning, say researchers</title>
		<link>https://www.aiuniverse.xyz/indian-ocean-dipole-can-be-better-predicted-thru-machine-learning-say-researchers/</link>
					<comments>https://www.aiuniverse.xyz/indian-ocean-dipole-can-be-better-predicted-thru-machine-learning-say-researchers/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 20 Jan 2020 11:57:53 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Indian]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Ocean Dipole]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6259</guid>

					<description><![CDATA[<p>Source: thehindubusinessline.com Researchers in Japan and The Netherlands have, for the first time, used machine learning techniques, in particular artificial neural networks (ANNs), to predict the Indian <a class="read-more-link" href="https://www.aiuniverse.xyz/indian-ocean-dipole-can-be-better-predicted-thru-machine-learning-say-researchers/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/indian-ocean-dipole-can-be-better-predicted-thru-machine-learning-say-researchers/">Indian Ocean Dipole can be better predicted thru machine learning, say researchers</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: thehindubusinessline.com</p>



<p>Researchers in Japan and The Netherlands have, for the first time, used machine learning techniques, in particular artificial neural networks (ANNs), to predict the Indian Ocean Dipole (IOD), a positive phase of which has affected weather and climate in India and Australia in a spectacular fashion so far in 2019-20.</p>



<h2 class="wp-block-heading">Positive, negative phases</h2>



<p>The IOD has both positive and negative phases, and it has large socio-economic impacts on many countries; hence, predicting the IOD well in advance will benefit the affected societies, note authors JV Ratnam and Swadhin K Behera (Application Laboratory, Japan Agency for Marine-Earth Science and Technology, Yokohama) and HA Dijkstra (Institute for Marine and Atmospheric Research Utrecht, Utrecht University in The Netherlands) in a paper published by&nbsp;<em>Nature</em>.</p>



<h2 class="wp-block-heading">Ocean temperatures</h2>



<p>The IOD is a mode of climate variability observed in the Indian Ocean sea surface temperature anomalies with one pole in Sumatra (Indonesia) and the other near East Africa. Therefore, the IOD is represented by an index derived from the gradient between the western equatorial Indian Ocean and the south-eastern equatorial Indian Ocean. It starts sometime in May-June, peaks in September-October and ends in November (2019&#8217;s rather strong positive phase of the IOD lasted into early January of 2020).</p>
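<p>As a sketch of how such a gradient index is computed: the Dipole Mode Index is conventionally the difference between area-averaged sea surface temperature anomalies in a western box (roughly 50E-70E, 10S-10N) and a south-eastern box (roughly 90E-110E, 10S-0). The anomaly values below are invented for illustration, not observations.</p>

```python
def area_mean(anomalies):
    """Average SST anomaly (deg C) over the grid points in one box."""
    return sum(anomalies) / len(anomalies)

def dipole_mode_index(west_anomalies, southeast_anomalies):
    """Positive IOD: western basin warmer than the south-eastern basin."""
    return area_mean(west_anomalies) - area_mean(southeast_anomalies)

# A positive-IOD-like situation: warm near East Africa, cool off Sumatra.
west = [0.6, 0.8, 0.7, 0.9]           # warm anomalies, western pole
southeast = [-0.4, -0.2, -0.3, -0.3]  # cool anomalies, near Sumatra

iod = dipole_mode_index(west, southeast)
print(round(iod, 2))  # 1.05
```
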



<p>In a positive IOD phase, the western part of the Indian Ocean (closer to East Africa where the monsoon winds turn as south-westerly winds towards India) warms up relative to the eastern basin, beefing up the incoming monsoon flows. These conditions are more or less reversed during a negative IOD phase.</p>



<h2 class="wp-block-heading">Atmospheric teleconnections</h2>



<p>The IOD is also known to affect the climates of other parts of the world, including Sri Lanka, the Maritime Continent (Indonesia, et al), Japan, East Africa and Europe through atmospheric teleconnections. The climate of Australia and the Maritime Continent also are affected by the cool (warm) SST anomalies over the South-East Indian Ocean region during the positive (negative) phase of the IOD.</p>



<p>The anomalously cool (warm) waters around Australia and the Maritime Continent during the positive (negative) phase of IOD reduce (enhance) rainfall over those countries. The IOD also has a remote effect on the climate of Japan through modification of the Pacific-Japan teleconnection and it is also known to affect the summers of Europe due to the atmospheric teleconnections as a response to the IOD.</p>



<h2 class="wp-block-heading">Wetter India, dry and hot Australia</h2>



<p>In recent years, it has been found that the spatial distribution of the summer (monsoon) rainfall over India is affected by IOD during its various phases. During the positive IOD phase, India experiences anomalously high rainfall along the latitude belts covering Central India and during the negative phase of the IOD, the rainfall is anomalously high along the longitudinal belt with the western part of the country receiving high rainfall.</p>



<p>The extended South-West monsoon (June-September) of 2019 in India had a lag effect on the Australian monsoon (delayed to this day), which is thought to have aided and abetted the devastating bush/forest fire in the island-continent. Owing to its large impacts, previous studies have addressed the predictability of the IOD using modern coupled climate models. Various forecasting centres try to predict IOD using the coupled climate models at seasonal time scales. Such dynamical models are promising but are dependent on large computational as well as human resources.</p>



<h2 class="wp-block-heading">Machine learning to the fore</h2>



<p>In this instance, however, researchers in Japan and The Netherlands tried to complement those efforts with a simpler model based on the machine learning technique of ANNs. ANNs are machine learning tools that mimic the functioning of neurons in the human brain. Like the human brain, an ANN learns from past data and makes decisions about the future.</p>



<p>An ANN consists of input, output and hidden layers. ANNs have been used in many fields for classification and regression studies to model processes. The correlation analysis of the IOD index indicated that a single ANN model is not suitable for forecasting the IOD index for all the months from May to November. So, researchers developed ANN models for forecasting the IOD index for every month from May to November.</p>
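<p>The input/hidden/output structure described here can be sketched in plain Python. This is a toy illustration only: the network below learns y = x^2 by gradient descent, whereas the paper’s ANN models take February-April climate predictors as inputs and output the IOD index for a target month.</p>

```python
import math, random

# A minimal feed-forward net: one input, a hidden tanh layer, one linear
# output, trained by stochastic gradient descent on a toy regression task.
random.seed(1)
H = 8                                            # hidden-layer width
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

xs = [i / 10 for i in range(-10, 11)]
ys = [x * x for x in xs]                         # target function y = x^2

def forward(x):
    hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    out = sum(w2[j] * hidden[j] for j in range(H)) + b2
    return hidden, out

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

mse_before = mse()
lr = 0.05
for _ in range(5000):                # learn from past data, pass after pass
    for x, y in zip(xs, ys):
        hidden, out = forward(x)
        err = out - y                # gradient of squared error w.r.t. out
        for j in range(H):           # backpropagate one SGD step
            dh = err * w2[j] * (1 - hidden[j] ** 2)
            w2[j] -= lr * err * hidden[j]
            b1[j] -= lr * dh
            w1[j] -= lr * dh * x
        b2 -= lr * err

print(mse_before > mse())  # True: training reduces the error on the toy task
```
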



<p>The results were compared with persistence forecasts and also with IOD index forecasts derived from the ensemble mean sea surface temperature anomalies of seven models within the North American Multi-Model Ensemble (NMME), an experimental multi-model seasonal forecasting system consisting of coupled models from the US and Canada. The ANN and NMME results were compared with persistence forecasts to check whether the models have skill beyond simply persisting the February-April IOD index through May-November.</p>



<h2 class="wp-block-heading">Superior forecast skills</h2>



<p>The IOD forecasts were generated for May to November from February-April conditions. The attributes for the ANNs were derived from sea surface temperature and conditions in the upper levels of the atmosphere using a correlation analysis for the period 1949–2018.</p>



<p>An ensemble of ANN forecasts indicates that the machine learning-based ANN models are capable of forecasting the IOD index well in advance with excellent skill, much superior to that of the persistence forecasts one would derive from the observed data. The ANN models also performed far better than the models of the NMME, with higher correlation coefficients and lower root mean square errors for all the target months of May-November.</p>
<p>The post <a href="https://www.aiuniverse.xyz/indian-ocean-dipole-can-be-better-predicted-thru-machine-learning-say-researchers/">Indian Ocean Dipole can be better predicted thru machine learning, say researchers</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/indian-ocean-dipole-can-be-better-predicted-thru-machine-learning-say-researchers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Meet YuMi: A Robot Nurse Built to Make the Rounds</title>
		<link>https://www.aiuniverse.xyz/meet-yumi-a-robot-nurse-built-to-make-the-rounds/</link>
					<comments>https://www.aiuniverse.xyz/meet-yumi-a-robot-nurse-built-to-make-the-rounds/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 21 Dec 2019 06:37:58 +0000</pubDate>
				<category><![CDATA[Data Robot]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[Gadgets]]></category>
		<category><![CDATA[medical technology]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5738</guid>

					<description><![CDATA[<p>Source: discovermagazine.com ABB’s robotic lab technician, YuMi, and Nurse Ratched have more in common than might appear at first blush. They’re both cold; they’re both heartless; and <a class="read-more-link" href="https://www.aiuniverse.xyz/meet-yumi-a-robot-nurse-built-to-make-the-rounds/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/meet-yumi-a-robot-nurse-built-to-make-the-rounds/">Meet YuMi: A Robot Nurse Built to Make the Rounds</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: discovermagazine.com</p>



<p>ABB’s robotic lab technician, YuMi, and Nurse Ratched have more in common than might appear at first blush. They’re both cold; they’re both heartless; and they both really want to help you take your meds.</p>



<p>But while Nurse Ratched notoriously represents the corrupting power of institutionalized bureaucracy, this robot, named YuMi, just wants to help hospitals and research labs run a little smoother.</p>



<p>The Swiss robotics and automation company showcased the roving lab tech earlier this fall at its new healthcare research hub, which is a collaboration with Texas Medical Research Innovation Institute in Houston. The hybrid lab combines a staff of 20 with an array of robotic assistants to test new ways that humans and machines can collaborate at the heart of medicine.</p>



<p>And there’s some urgency to their work. Baby Boomers are aging, and an unprecedented number of Americans are poised to enter the healthcare system over the next 10 years. Simultaneously, the industry is facing a deep shortage of nurses, doctors and other medical staff — particularly in home healthcare. There’s hope that robotics, artificial intelligence and automation will help leaders navigate these seismic demographic shifts and deliver care to more people and potentially with fewer resources.</p>



<p>Making the Rounds <br>
In contrast to gargantuan robotic arms locked in cages along automobile assembly lines, YuMi is designed to work closely with humans as a gentler, collaborative sidekick. YuMi’s precise touch and range of motion make it adaptable to a wide range of tasks, from basics like sorting and unboxing to more elaborate tasks like folding paper airplanes, playing pool or directing symphonies.</p>



<p>For one of their medical bot prototypes, ABB engineers simply mounted YuMi atop a moving platform. YuMi uses its machine vision to avoid staffers and other obstacles, and can be programmed to do any number of rote, time-consuming tasks. YuMi could pick up patient tests and transport them to the lab for processing. Delivering food and linens is no problem. YuMi can even deliver morning and evening medications right to the door.</p>



<p>ABB also fitted a lab with other YuMi concepts that sort pills, prepare and unpackage medicines, load and unload centrifuges, and carry out pipetting lab work. The robots are best suited for the repetitive, high-volume tasks that consume a big part of staff time. ABB engineers say robots can perform these tasks 50 percent faster, and can do them 24 hours a day. Ultimately, this gives staff more time to focus on higher-level work.</p>



<p>“The health care sector is undergoing significant transformation as the diagnosis and treatment of disease advances, while coping with an aging population, increasing costs and a growing worldwide shortage of medical staff,” Sami Atiya, president of ABB’s robotics and discrete automation business, said in a press release.</p>



<p>Feeling the Crunch <br>
A recent report from the U.S. Department of Veterans Affairs Office of the Inspector General found that 96 percent of VA facilities reported at least one “severe” occupational shortage as of December 2018. Thirty-nine percent reported 20 or more shortages. Mercer, a healthcare consultancy, estimates the United States will need to hire 2.3 million healthcare workers by 2025 to address the labor gap.</p>



<p>Robots could be key to helping drive down the costs of care and help medical workers do more with smaller teams. ABB estimates there will be some 60,000 medical robots on the job within five years or so. Robots, along with telemedicine, data mining, advances in genetics and so much more, are radically redefining what it means to visit the doctor.</p>
<p>The post <a href="https://www.aiuniverse.xyz/meet-yumi-a-robot-nurse-built-to-make-the-rounds/">Meet YuMi: A Robot Nurse Built to Make the Rounds</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/meet-yumi-a-robot-nurse-built-to-make-the-rounds/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>BlizzCon 2019 attendees tried to beat Google’s DeepMind A.I. in StarCraft II</title>
		<link>https://www.aiuniverse.xyz/blizzcon-2019-attendees-tried-to-beat-googles-deepmind-a-i-in-starcraft-ii/</link>
					<comments>https://www.aiuniverse.xyz/blizzcon-2019-attendees-tried-to-beat-googles-deepmind-a-i-in-starcraft-ii/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 04 Nov 2019 07:21:05 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[DeepMind]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[StarCraft]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4981</guid>

					<description><![CDATA[<p>Source: digitaltrends.com AlphaStar, an artificial intelligence program powered by Google’s DeepMind, was present at BlizzCon 2019, with the goal of beating any human that tried to go <a class="read-more-link" href="https://www.aiuniverse.xyz/blizzcon-2019-attendees-tried-to-beat-googles-deepmind-a-i-in-starcraft-ii/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/blizzcon-2019-attendees-tried-to-beat-googles-deepmind-a-i-in-starcraft-ii/">BlizzCon 2019 attendees tried to beat Google’s DeepMind A.I. in StarCraft II</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:  digitaltrends.com</p>



<p>AlphaStar, an artificial intelligence program powered by Google’s DeepMind, was present at BlizzCon 2019, with the goal of beating any human who tried to go up against it in <em>StarCraft II</em>.</p>



<p>Blizzard set up computers at the Blizzard Arcade section of BlizzCon 2019, which ran from November 1 to November 2 at the Anaheim Convention Center, for attendees to try to beat AlphaStar. The catch, however, was that the A.I. program is nearly impossible to beat at the real-time strategy game.</p>



<p>AlphaStar has achieved grandmaster status in StarCraft II playing all three races — Terran, Protoss, and Zerg — which means it is capable of beating 99.8% of all ranked human players. Making the feat even more impressive, the A.I. program was limited to viewing only the portion of the map that a human would see, and its actions were capped at 22 non-duplicated actions every five seconds to mimic what a human can do.</p>



<p>In a blog post, the AlphaStar team said its goal is to understand the potential and limitations of open-ended learning, which trains learning-based agents to solve increasingly complex tasks.</p>



<p>“Games like <em>StarCraft</em> are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales,” wrote the AlphaStar team in the blog post.</p>



<p>The research does not end with dominating humans in games such as StarCraft II. While AlphaStar has more than proven it is capable of doing so, the larger point is that A.I. may be trained to do specific things better than most humans in various real-world applications. In fact, earlier this year, DeepMind and Waymo, a fellow unit of Google parent Alphabet, teamed up to train self-driving cars using the same method created to teach A.I. bots how to play StarCraft II.</p>



<p>Meanwhile, as humans tried to beat AlphaStar at StarCraft II, BlizzCon 2019 was filled with several major announcements, including Overwatch 2, Diablo IV, World of Warcraft: Shadowlands, and an apology for the recent controversy involving the suspension of competitive Hearthstone player Blitzchung for expressing his support for protesters in Hong Kong.</p>
<p>The post <a href="https://www.aiuniverse.xyz/blizzcon-2019-attendees-tried-to-beat-googles-deepmind-a-i-in-starcraft-ii/">BlizzCon 2019 attendees tried to beat Google’s DeepMind A.I. in StarCraft II</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/blizzcon-2019-attendees-tried-to-beat-googles-deepmind-a-i-in-starcraft-ii/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
