<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>developed Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/developed/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/developed/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 09 Jul 2021 07:28:32 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>HOW HAS AUTOMATED PREDICTIVE ANALYSIS DEVELOPED OVER THE YEARS</title>
		<link>https://www.aiuniverse.xyz/how-has-automated-predictive-analysis-developed-over-the-years/</link>
					<comments>https://www.aiuniverse.xyz/how-has-automated-predictive-analysis-developed-over-the-years/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 09 Jul 2021 07:28:31 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[analysis]]></category>
		<category><![CDATA[Automated]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[Predictive]]></category>
		<category><![CDATA[YEARS]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14831</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ Automated predictive analysis&#160;is making way for the greatest transformations in the industries! Automated predictive analysis, or predictive analytics, uses historical data to predict future <a class="read-more-link" href="https://www.aiuniverse.xyz/how-has-automated-predictive-analysis-developed-over-the-years/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-has-automated-predictive-analysis-developed-over-the-years/">HOW HAS AUTOMATED PREDICTIVE ANALYSIS DEVELOPED OVER THE YEARS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading"><strong>Automated predictive analysis</strong>&nbsp;is making way for the greatest transformations in the industries!</h2>



<p>Automated predictive analysis, or predictive analytics, uses historical data to predict future events. Throughout history, humans have been obsessed with predicting the future. The fear of the unknown has led scientific researchers and professors to develop technologies that can forecast the future so that the necessary steps can be taken to avoid drastic losses.</p>



<p>Predictive analytics has received a lot of attention in recent years due to advances in the technologies that support it, particularly big data and artificial intelligence.</p>



<p>With increased competition, businesses seek an edge over their competitors in bringing products and services to crowded markets. Data-driven predictive models can offer companies solutions to long-standing problems in their business operations. This technology provides a trove of information from which analytics tools and applications draw insights to predict upcoming opportunities, suitable investments, and dangers in the market.</p>



<p>Businesses use tools like Hadoop and Spark to extract information from big data. These data sources might include transactional databases, equipment log files, images, videos, audio, sensor readings, and other types of data.</p>



<p>With all this data, tools are necessary to extract insights and trends. Predictive analytics finds patterns in data to build models that predict future outcomes. A variety of machine learning techniques are available for this, including linear and nonlinear regression, neural networks, support vector machines, decision trees, and other algorithms.</p>
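


<p>As a minimal sketch of that idea, a regression model can be fit to historical observations and then queried for a future value. The example below is illustrative only (synthetic numbers, scikit-learn assumed as the library), not drawn from any specific system described here:</p>



<pre class="wp-block-code"><code># Minimal predictive-modelling sketch: learn a trend from historical
# data, then forecast the next period. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)            # months 1..12 (features)
demand = np.array([100, 104, 110, 113, 120, 124,
                   131, 135, 142, 147, 150, 158])   # hypothetical demand history

model = LinearRegression().fit(months, demand)      # learn the historical trend
forecast = model.predict(np.array([[13]]))          # predict month 13
print(f"forecast for month 13: {forecast[0]:.1f}")
</code></pre>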



<p>For decades, meteorologists have used automated predictive analysis to produce weather and climate forecasts. With time, this concept has also been applied to studying consumer behavior, forecasting supply and demand in economics, and related purposes.</p>



<ul class="wp-block-list"><li>How Can Artificial Intelligence Drive Predictive Analytics To New Heights</li><li>How Predictive Analytics Will Impact Human Resources</li><li>What Is Predictive Analytics And Can It Help You Achieve Business Objectives</li></ul>



<h4 class="wp-block-heading"><strong>Automated predictive technology: Today and beyond&nbsp;</strong></h4>



<p>Data is the core of predictive analytics. Earlier, before computers, businesses used other creative ways to understand what customers wanted and to predict market conditions. These methods did not involve technological tools or applications.</p>



<p>Currently, one of the most vital industrial applications of predictive models is energy load forecasting, used to predict future energy demand. Energy producers, grid operators, and traders need accurate predictions of energy load to make decisions about managing electric grids. Grid operators use this data to draw actionable insights.</p>



<p>Artificial intelligence and predictive technology have revolutionized the way advertisers and marketers work. Targeted advertising uses data like previously purchased products, location, and age to serve the target audience. Today, consumer profiles are much more advanced, and enterprises can gather information from various sources.</p>



<p>Predictive analytics&nbsp;is also used to measure vehicle and pedestrian traffic to coordinate traffic lights, public transportation, and even pedestrian crosswalks, facilitating convenience and efficiency in community design. It also boosts public safety and allocates emergency services more efficiently by predicting the number of officers needed for a task and reassigning posts accordingly.</p>



<p>Automated predictive technology has also played a crucial role in making better use of medical resources and helps improve patients’ health outcomes. Rather than relying entirely on a patient’s medical history, predictive systems can draw on data from a broad spectrum of symptoms, data from other patients, and the treatments used to cure the disease.</p>



<p>AI and machine learning have provided us with various ways through which we can predict the future. With the growing technological evolution in automation and data analysis, our lives will be changed forever and for the better.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-has-automated-predictive-analysis-developed-over-the-years/">HOW HAS AUTOMATED PREDICTIVE ANALYSIS DEVELOPED OVER THE YEARS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-has-automated-predictive-analysis-developed-over-the-years/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>IBM Developed an AI System That Engages in Debates with Humans and Convinces Some</title>
		<link>https://www.aiuniverse.xyz/ibm-developed-an-ai-system-that-engages-in-debates-with-humans-and-convinces-some/</link>
					<comments>https://www.aiuniverse.xyz/ibm-developed-an-ai-system-that-engages-in-debates-with-humans-and-convinces-some/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 22 Mar 2021 06:33:25 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Debates]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[Engages]]></category>
		<category><![CDATA[humans]]></category>
		<category><![CDATA[IBM]]></category>
		<category><![CDATA[System]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13692</guid>

					<description><![CDATA[<p>Source &#8211; https://interestingengineering.com/ Artificial intelligence (AI) has been making great strides in recent years sometimes even coming close to being human-like. Now, in a new paper published in Nature magazine, <a class="read-more-link" href="https://www.aiuniverse.xyz/ibm-developed-an-ai-system-that-engages-in-debates-with-humans-and-convinces-some/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ibm-developed-an-ai-system-that-engages-in-debates-with-humans-and-convinces-some/">IBM Developed an AI System That Engages in Debates with Humans and Convinces Some</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://interestingengineering.com/</p>



<p id="p-0">Artificial intelligence (AI) has been making great strides in recent years sometimes even coming close to being human-like. Now, in a new paper published in <em>Nature magazine</em>, IBM describes a system that can debate with humans and even sometimes win.</p>



<p id="p-1">&#8220;Here we present Project Debater, an autonomous debating system that can engage in a competitive debate with humans,&#8221; write the authors. And the system is nothing short of extraordinary.</p>



<p id="p-2">In tests of Project Debater, the AI was given only 15 minutes to research topics and prepare for debates. Each time, it proceeded to form an opening statement and even layer counterarguments. </p>



<p id="p-3">For the most part, the humans won the debate but in one instance it was able to change the stance of nine people. Not bad!</p>



<p id="p-4">&#8220;Project Debater is a crucial step in the development of argument technology and in working with arguments as local phenomena. Its successes offer a tantalizing glimpse of how an AI system could work with the web of arguments that humans interpret with such apparent ease,&#8221; Chris Reed writes in a critique of the new project published in <em>Nature magazine</em>.</p>



<p id="p-5">&#8220;Given the wildfires of fake news, the polarization of public opinion and the ubiquity of lazy reasoning, that ease belies an urgent need for humans to be supported in creating, processing, navigating and sharing complex arguments — support that AI might be able to supply.&#8221;</p>



<p id="p-6">In other words, this new AI is not here to replace humans but rather to support them in building better arguments and reasoning with more nuance. If this subject interests you, <em>Scientific American</em> has done a great podcast episode with the research&#8217;s lead Noam Slonim which tackles amongst other things whether the AI actually understands the arguments it presents and what that means for the future of debating.</p>



<p>The post <a href="https://www.aiuniverse.xyz/ibm-developed-an-ai-system-that-engages-in-debates-with-humans-and-convinces-some/">IBM Developed an AI System That Engages in Debates with Humans and Convinces Some</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ibm-developed-an-ai-system-that-engages-in-debates-with-humans-and-convinces-some/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>WHY IS PYTHON STILL A HUGE HIT AMONG DATA SCIENTISTS?</title>
		<link>https://www.aiuniverse.xyz/why-is-python-still-a-huge-hit-among-data-scientists/</link>
					<comments>https://www.aiuniverse.xyz/why-is-python-still-a-huge-hit-among-data-scientists/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 17 Nov 2020 05:05:19 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[data scientists]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[Programming Language]]></category>
		<category><![CDATA[Python]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12347</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net What makes Python a top choice in the Data Science community? Python has become the most used programming language for data science practices. Developed by <a class="read-more-link" href="https://www.aiuniverse.xyz/why-is-python-still-a-huge-hit-among-data-scientists/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-is-python-still-a-huge-hit-among-data-scientists/">WHY IS PYTHON STILL A HUGE HIT AMONG DATA SCIENTISTS?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<h4 class="wp-block-heading">What makes Python a top choice in the Data Science community?</h4>



<p>Python has become the most used programming language for data science. Developed by Guido van Rossum and&nbsp;launched&nbsp;in 1991, it is an&nbsp;interactive, object-oriented programming&nbsp;language similar to&nbsp;Perl or Ruby. Its readability, simplicity, clean visual layout, few syntactic exceptions, strong string manipulation, suitability for scripting and rapid application development, and fit for many platforms make it popular among data scientists. The language also has a plethora of libraries (e.g., TensorFlow, SciPy, and NumPy), which make many additional tasks easier to perform in Python.</p>



<p>Python is an object-oriented, open-source, flexible, and easy-to-learn programming language.&nbsp;According to a 2013 survey by O’Reilly, 40% of data scientist respondents reported using Python in their daily work. They join the many other programmers in all fields who have made Python one of the world’s top ten most popular programming languages ever since 2003. In fact, many surveys show it as the number one preferred language.</p>



<h4 class="wp-block-heading"><strong>Why is it so Popular?</strong></h4>



<p>One of the main reasons why Python is widely used in the scientific and research communities is its ease of use and simple syntax, which make it easy to adopt for people without much programming or engineering background. It is also suitable for quick prototyping. Further, it allows developers to run code almost anywhere: Windows, macOS, UNIX, and Linux. And since it is a flexible programming language, it lets data scientists solve a given problem or carry out projects involving machine learning models, web services, data mining, classification, and so on in less time than most programming languages. Libraries such as Scrapy and BeautifulSoup help extract data from the internet, whereas Seaborn and Matplotlib help with data visualization and graphical representation. In data analytics, Python helps draw better insights, understand patterns, and correlate data across big datasets. Libraries like TensorFlow, Keras, and Theano allow data scientists to develop deep learning models, while scikit-learn helps implement machine learning algorithms. It can also be leveraged in non-technical fields like business and advertising.</p>
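


<p>To make the quick-prototyping point concrete, the sketch below trains and evaluates a classifier in a handful of lines using scikit-learn. The bundled dataset and the model choice are illustrative assumptions, not details from the article:</p>



<pre class="wp-block-code"><code># Quick-prototyping sketch with scikit-learn: load data, train a
# model, and evaluate it. Dataset and hyperparameters are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)          # example bundled dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)                         # train the classifier

preds = model.predict(X_test)                       # predict held-out labels
print(f"accuracy: {accuracy_score(y_test, preds):.3f}")
</code></pre>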



<p>Besides, Python has a huge community of engineers and data scientists, with sites like Python.org, Fullstackpython.com, realpython.com, etc., where Python developers can share their issues and ideas with the community at no cost. Also, Python has great compatibility&nbsp;with Hadoop, a renowned open-source big data platform.</p>



<h4 class="wp-block-heading"><strong>Microsoft’s New Update</strong></h4>



<p>Microsoft has been a constant advocate of Python. It supports open-source Python in its developer tools, including the Visual Studio integrated development environment (IDE), hosts it in Azure Notebooks, and uses it to build end-user experiences like the Azure command-line interface (CLI). Recently, Microsoft released a new update of its Visual Studio Code (VS Code) editor for Windows, Windows on Arm, macOS, and Linux. With it, Microsoft launched a new version of the Python language extension for VS Code that breaks out the Jupyter Notebooks functionality into a distinct VS Code extension. Jupyter is a free, open-source, interactive web tool that researchers use to combine software code, computational output, explanatory text, and multimedia resources in a single document. It draws its name from the programming languages Julia (Ju), Python (Py), and R. This means the extension supports not only Python but also other popular data science languages like Julia and R.</p>



<p>Although&nbsp;Microsoft’s Python extension for VS Code&nbsp;has supported Jupyter Notebooks for a year now, the tech giant decided to break the Jupyter Notebooks functionality out in order to improve support for other data science languages.</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-is-python-still-a-huge-hit-among-data-scientists/">WHY IS PYTHON STILL A HUGE HIT AMONG DATA SCIENTISTS?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-is-python-still-a-huge-hit-among-data-scientists/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why human-like robots elicit uncanny feelings</title>
		<link>https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/</link>
					<comments>https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 17 Sep 2020 07:29:31 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[human]]></category>
		<category><![CDATA[machines]]></category>
		<category><![CDATA[Robots]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11640</guid>

					<description><![CDATA[<p>Source: nanowerk.com (Nanowerk News) Androids, or robots with humanlike features, are often more appealing to people than those that resemble machines — but only up to a <a class="read-more-link" href="https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/">Why human-like robots elicit uncanny feelings</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: nanowerk.com</p>



<p>(Nanowerk News) Androids, or robots with humanlike features, are often more appealing to people than those that resemble machines — but only up to a certain point. Many people experience an uneasy feeling in response to robots that are nearly lifelike, and yet somehow not quite “right.” The feeling of affinity can plunge into one of repulsion as a robot’s human likeness increases, a zone known as “the uncanny valley.”</p>



<p>The journal Perception (&#8220;The Uncanny Valley Phenomenon and the Temporal Dynamics of Face Animacy Perception&#8221;) published new insights by Emory psychologists into the cognitive mechanisms underlying this phenomenon.</p>



<p>Since the uncanny valley was first described, a common hypothesis developed to explain it. Known as the mind-perception theory, it proposes that when people see a robot with human-like features, they automatically add a mind to it. A growing sense that a machine appears to have a mind leads to the creepy feeling, according to this theory.</p>



<p>“We found that the opposite is true,” says Wang Shensheng, first author of the new study, who did the work as a graduate student at Emory and recently received his PhD in psychology. “It’s not the first step of attributing a mind to an android but the next step of ‘dehumanizing’ it by subtracting the idea of it having a mind that leads to the uncanny valley. Instead of just a one-shot process, it’s a dynamic one.”</p>



<p>The findings have implications for both the design of robots and for understanding how we perceive one another as humans.</p>



<p>“Robots are increasingly entering the social domain for everything from education to healthcare,” Wang says. “How we perceive them and relate to them is important both from the standpoint of engineers and psychologists.”</p>



<p>“At the core of this research is the question of what we perceive when we look at a face,” adds Philippe Rochat, Emory professor of psychology and senior author of the study. “It’s probably one of the most important questions in psychology. The ability to perceive the minds of others is the foundation of human relationships.”</p>



<p>The research may help in unraveling the mechanisms involved in mind-blindness — the inability to distinguish between humans and machines — such as in cases of extreme autism or some psychotic disorders, Rochat says.</p>



<p>Co-authors of the study include Yuk Fai Cheong and Daniel Dilks, both associate professors of psychology at Emory.</p>



<p>Anthropomorphizing, or projecting human qualities onto objects, is common. “We often see faces in a cloud for instance,” Wang says. “We also sometimes anthropomorphize machines that we’re trying to understand, like our cars or a computer.”</p>



<p>Naming one’s car or imagining that a cloud is an animated being, however, is not normally associated with an uncanny feeling, Wang notes. That led him to hypothesize that something other than just anthropomorphizing may occur when viewing an android.</p>



<p>To tease apart the potential roles of mind-perception and dehumanization in the uncanny valley phenomenon, the researchers conducted experiments focused on the temporal dynamics of the process. Participants were shown three types of images — human faces, mechanical-looking robot faces and android faces that closely resembled humans — and asked to rate each for perceived animacy or “aliveness.” The exposure times of the images were systematically manipulated, within milliseconds, as the participants rated their animacy.</p>



<p>The results showed that perceived animacy decreased significantly as a function of exposure time for android faces, but not for mechanical-looking robot or human faces. In android faces, perceived animacy dropped between 100 and 500 milliseconds of viewing time. That timing is consistent with previous research showing that people begin to distinguish between human and artificial faces around 400 milliseconds after stimulus onset.</p>



<p>A second set of experiments manipulated both the exposure time and the amount of detail in the images, ranging from a minimal sketch of the features to a fully blurred image. The results showed that removing details from the images of the android faces decreased the perceived animacy along with the perceived uncanniness.</p>



<p>“The whole process is complicated but it happens within the blink of an eye,” Wang says. “Our results suggest that at first sight we anthropomorphize an android, but within milliseconds we detect deviations and dehumanize it. And that drop in perceived animacy likely contributes to the uncanny feeling.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/">Why human-like robots elicit uncanny feelings</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-human-like-robots-elicit-uncanny-feelings/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Smart 3D Universal Inspection System Uses Deep Learning</title>
		<link>https://www.aiuniverse.xyz/smart-3d-universal-inspection-system-uses-deep-learning/</link>
					<comments>https://www.aiuniverse.xyz/smart-3d-universal-inspection-system-uses-deep-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 08 Sep 2020 09:13:31 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[3D]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[System]]></category>
		<category><![CDATA[Universal]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11432</guid>

					<description><![CDATA[<p>Source: metrology.news As Industry 4.0 takes hold, industrial automation and robotics are replacing many manual tasks in manufacturing. However, when it comes to visual quality inspection, most <a class="read-more-link" href="https://www.aiuniverse.xyz/smart-3d-universal-inspection-system-uses-deep-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/smart-3d-universal-inspection-system-uses-deep-learning/">Smart 3D Universal Inspection System Uses Deep Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: metrology.news</p>



<p>As Industry 4.0 takes hold, industrial automation and robotics are replacing many manual tasks in manufacturing. However, when it comes to visual quality inspection, most production lines still employ human workers in the tedious task of examining products and judging defects.</p>



<p>The biggest drawback of manual visual inspection is that humans make mistakes. Tired workers often miss defects that ‘escape’ the quality screens on the production floor and leak into finished-goods packages or into integrated systems. When these defects are discovered at a later stage, often by end customers, users, or consumers, it is too late and very costly to fix them. The Cost of Poor Quality (CoPQ) in these cases is significant. It includes – among other elements – the costs of returned or rejected goods (RMA), scrap, and rework, and in many cases the negative impact on brand reputation and end-customer satisfaction.</p>



<p>Israel-based Kitov is paving the way towards smart manufacturing by developing technology that enables smart, computer-driven visual inspection and supports manufacturers along their digital transformation path.</p>



<p>KITOV ONE is a Smart 3D, Universal System that can effectively inspect virtually any product. Leveraging advanced 3D computer vision and deep-learning algorithms, KITOV ONE achieves unprecedented levels of detection, eliminating the tedious work and inconsistent results associated with manual inspection. KITOV supports complex 3D structures, numerous materials, and complete inspection specifications.</p>



<p>By imitating human learning processes, KITOV ONE features&nbsp;an intuitive method to teach the system how to optimally inspect almost any product.&nbsp; Setting up the system does not require programming skills or knowledge of robotics or optics. KITOV ONE software computes and controls the processes of image acquisition and image processing by using pre-set algorithms called detectors. Artificial intelligence capabilities are used to find and classify defects.</p>



<p>“We have developed artificial intelligence (AI) systems for advanced manufacturing that can be intuitively trained within a few hours by a non-expert to automatically plan and perform sophisticated visual inspection tasks on complex 3D products at the highest performance levels.” states&nbsp;Dr. Yossi Rubner, CTO and Founder of Kitov.ai.</p>



<p>By using dashboards and big data analytics, Kitov helps manufacturers identify trends and proactively attend to quality issues early on; by providing powerful insights into the manufacturing process and product design, it can support root-cause analysis and the elimination of defects.</p>
<p>The post <a href="https://www.aiuniverse.xyz/smart-3d-universal-inspection-system-uses-deep-learning/">Smart 3D Universal Inspection System Uses Deep Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/smart-3d-universal-inspection-system-uses-deep-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Mitra Robot maker startup decries govt apathy to US shipment stuck at Bengaluru airport</title>
		<link>https://www.aiuniverse.xyz/mitra-robot-maker-startup-decries-govt-apathy-to-us-shipment-stuck-at-bengaluru-airport/</link>
					<comments>https://www.aiuniverse.xyz/mitra-robot-maker-startup-decries-govt-apathy-to-us-shipment-stuck-at-bengaluru-airport/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 10 Aug 2020 05:59:41 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[airport]]></category>
		<category><![CDATA[bengaluru]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[Narendra Modi]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10769</guid>

					<description><![CDATA[<p>Source: businesstoday.in Balaji Mitra, the CEO of Mitra Robot, a humanoid robot designed and developed by his startup venture Invento Robotics has complained to Commerce and Industry <a class="read-more-link" href="https://www.aiuniverse.xyz/mitra-robot-maker-startup-decries-govt-apathy-to-us-shipment-stuck-at-bengaluru-airport/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/mitra-robot-maker-startup-decries-govt-apathy-to-us-shipment-stuck-at-bengaluru-airport/">Mitra Robot maker startup decries govt apathy to US shipment stuck at Bengaluru airport</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: businesstoday.in</p>



<p>Balaji Viswanathan, the CEO of Invento Robotics, the startup that designed and developed the humanoid robot Mitra, has complained to Commerce and Industry Minister&nbsp;<mark>Piyush Goyal</mark>&nbsp;that his company&#8217;s robots bound for US customers have been held in Bengaluru International Airport&#8217;s customs for four weeks.</p>



<p>In a tweet on August 8, he said, &#8220;Dear @PiyushGoyal Ji our robots bound for USA customers have been held in BLR customs for 4 weeks. How can we become a major exporter with such red tape? How do you expect Indian companies to be taken seriously by global customers?&#8221;</p>



<p>He also blamed the United Parcel Service (UPS), a US-based package delivery company, for being &#8220;equally irresponsible in handling this.&#8221;</p>



<p>Founded in 2016, Invento Robotics is a robotics company headquartered in Bengaluru. It came into the limelight in late 2017 when a five-foot-tall bot named Mitra, developed by Invento Robotics, greeted Ivanka Trump at the Global Entrepreneurship Summit (GES) in Hyderabad.</p>



<p>The humanoid was programmed with facial and speech recognition technologies to greet dignitaries, including Prime Minister&nbsp;<mark>Narendra Modi</mark>, at the event.</p>



<p>This was a turning point for the company, which gained visibility and also got its bank loan approved. Soon after, it started receiving corporate orders from all over the world. Its promise of &#8220;made-in-India&#8221; robots intrigued Chief Technology Officers (CTOs) of companies across the world, who invited Invento to give demonstrations of its products in their respective countries.</p>
<p>The post <a href="https://www.aiuniverse.xyz/mitra-robot-maker-startup-decries-govt-apathy-to-us-shipment-stuck-at-bengaluru-airport/">Mitra Robot maker startup decries govt apathy to US shipment stuck at Bengaluru airport</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/mitra-robot-maker-startup-decries-govt-apathy-to-us-shipment-stuck-at-bengaluru-airport/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI and machine learning facilitate pioneering research on Parkinson&#8217;s</title>
		<link>https://www.aiuniverse.xyz/ai-and-machine-learning-facilitate-pioneering-research-on-parkinsons/</link>
					<comments>https://www.aiuniverse.xyz/ai-and-machine-learning-facilitate-pioneering-research-on-parkinsons/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 07 Aug 2020 06:37:54 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Research]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10723</guid>

					<description><![CDATA[<p>Source: techrepublic.com/ A long-sought understanding of Parkinson&#8217;s Disease (PD) will be revealed at Friday&#8217;s 2020 Machine Learning for Healthcare Conference. In early 2019, IBM Research and The Michael <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-and-machine-learning-facilitate-pioneering-research-on-parkinsons/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-and-machine-learning-facilitate-pioneering-research-on-parkinsons/">AI and machine learning facilitate pioneering research on Parkinson&#8217;s</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: techrepublic.com/</p>



<p>A long-sought understanding of Parkinson&#8217;s Disease (PD) will be revealed at Friday&#8217;s 2020 Machine Learning for Healthcare Conference. In early 2019, IBM Research and The Michael J. Fox Foundation (MJFF) announced plans to collaborate and use artificial intelligence (AI) and machine learning (ML) to decode the elusive and complex mysteries surrounding PD symptoms and progression. </p>



<p>IBM and the MJFF have built an innovative disease progression model that helps clinicians more accurately pinpoint the exact status of a PD patient&#8217;s progression. Although PD was first identified more than two centuries ago, in 1817, understanding how it affects patients over the course of the disease has previously evaded both doctors and researchers.</p>



<p>Many questions about the chronic disease remain unanswered, but a better understanding through clinical trials can improve patient-care management and enable more efficient development of mitigating drugs.</p>



<p>Machine learning has aided attempts to grasp the complexities surrounding PD. The team designed innovative algorithms that account for factors that can mask the outward appearance of someone&#8217;s PD, including medications that palliate symptoms such as tremors, improve motor control, and modify other common symptoms.&nbsp;</p>



<p>PD is a neurological disorder that affects a person&#8217;s movements and often includes tremors; dopamine levels drop because of nerve-cell damage in the brain. It usually starts with tremors in one hand, but other symptoms of the potentially lifelong disease&#8211;which remains incurable&#8211;include loss of balance, stiffness, and slow movement.&nbsp;</p>



<p>Since PD&#8217;s underlying biology is still unknown, it has been onerous for doctors to determine how advanced the disease is by just judging a patient&#8217;s outward appearance. It&#8217;s difficult to detect the connection from disease states to biological mechanisms. If a patient is on medication (as is often the case), the physician is further challenged, as medications can mask some symptoms.&nbsp;</p>



<p>PD patients do not react to medications or develop symptoms and related issues in exactly the same way, which makes progression difficult to define and its stages very difficult to understand and classify. The collaborative study takes into consideration the effects of different medications, which may manifest differently in each individual at different stages&#8211;something that had not been explored previously.</p>



<p>IBM will further use a vast amount of PD patient data, aggregated by the MJFF, in the hope of discovering results that can accurately define each stage of PD as it develops; if this is achieved, clinicians will be better able to design accurate and customized treatment plans. Achieving the goal will also give drug developers more accurate staging criteria when recruiting for clinical trials of new treatments and potential cures.&nbsp;</p>



<p>Further, the team hopes that the research might inspire or inform research into other chronic conditions, such as diabetes, Alzheimer&#8217;s disease, and ALS. The next stage for IBM Research and the MJFF will be to focus on the recent discoveries from the application of the new models, combined with the extensive data the MJFF has provided.&nbsp;</p>



<p>PD is one of the top 10 causes of death in those 65 and older, and it&#8217;s estimated that six million people worldwide, and one million in the US, have PD&#8211;figures expected to double by 2040, making research and deeper understanding critical and urgent.&nbsp;</p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-and-machine-learning-facilitate-pioneering-research-on-parkinsons/">AI and machine learning facilitate pioneering research on Parkinson&#8217;s</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-and-machine-learning-facilitate-pioneering-research-on-parkinsons/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Facebook’s New Poker-Playing AI ReBel Performs Better Than Humans</title>
		<link>https://www.aiuniverse.xyz/facebooks-new-poker-playing-ai-rebel-performs-better-than-humans/</link>
					<comments>https://www.aiuniverse.xyz/facebooks-new-poker-playing-ai-rebel-performs-better-than-humans/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 31 Jul 2020 05:21:27 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[ReBel]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10607</guid>

					<description><![CDATA[<p>Source: top10pokersites.net A team of researchers from Facebook have recently developed a poker-playing AI that is capable of beating human players in heads-up, no-limit Texas hold’em poker. Called&#160;Recursive Belief-based <a class="read-more-link" href="https://www.aiuniverse.xyz/facebooks-new-poker-playing-ai-rebel-performs-better-than-humans/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/facebooks-new-poker-playing-ai-rebel-performs-better-than-humans/">Facebook’s New Poker-Playing AI ReBel Performs Better Than Humans</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: top10pokersites.net</p>



<p>A team of researchers from Facebook has recently developed a poker-playing AI that is capable of beating human players in heads-up, no-limit Texas hold’em poker.</p>



<p>Called&nbsp;<strong>Recursive Belief-based Learning</strong>&nbsp;(ReBel), the general AI framework learns poker faster than any previous poker-specific AI while using less domain knowledge, a claim the researchers back with a supporting experiment.</p>



<p>The AI was pitted against Dong Kim, considered one of the best heads-up players in the world, alongside three other top human players as part of a series of trials, and the outcomes are impressive!</p>



<p>Not only did ReBel play at a faster pace than its human opponents (under two seconds per hand, and never more than five seconds per decision across 7,500 hands), it achieved an aggregate score of 165 thousandths of a big blind per game, defeating Kim with a standard deviation of 69. ReBel performed better than Facebook’s previous poker AI, Libratus, which recorded an aggregate score of 147.</p>



<h3 class="wp-block-heading">ReBel’s Development &amp; Applications</h3>



<p>ReBel fixes common problems encountered in previous AIs by operating two AI models, representing value and policy. In contrast to past AIs such as&nbsp;<strong>DeepMind’s AlphaZero</strong>, which combined reinforcement learning and search to train models for board games like shogi, Go, and chess, ReBel is built mainly on the concept of game states.</p>



<p>This method results in the creation of a public belief state, which enables the AI to come up with probabilities according to the sequence of actions and game states. During the decision-making process, all relevant aspects are considered, including the overall pot and chips, as well as the possible result of a given hand. Based on that information, ReBel creates a “subgame” and then applies reinforcement learning until it reaches the designated accuracy level.</p>
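


<p>As a rough intuition for what a public belief state is (a deliberately simplified sketch; ReBel itself trains value and policy networks over such states, and Facebook has not released its poker code), one can maintain a probability distribution over an opponent’s hidden hand and update it by Bayes’ rule after each public action. The hand buckets and action likelihoods below are made-up numbers:</p>



<pre class="wp-block-code"><code># Toy public-belief-state sketch: a distribution over hidden hand
# strengths, updated by Bayes' rule after a public action.
# All buckets and probabilities are illustrative assumptions.

belief = {"weak": 0.5, "medium": 0.3, "strong": 0.2}   # prior over hands

# Assumed chance of observing a raise, given each hand strength.
p_raise_given_hand = {"weak": 0.1, "medium": 0.4, "strong": 0.8}

def update_belief(belief, likelihood):
    """Bayes update of the belief state after observing a public action."""
    posterior = {hand: belief[hand] * likelihood[hand] for hand in belief}
    total = sum(posterior.values())
    return {hand: p / total for hand, p in posterior.items()}

belief = update_belief(belief, p_raise_given_hand)  # opponent raised
print(belief)  # probability mass shifts toward "strong"
</code></pre>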



<p>Because ReBel does not rely heavily on specific domain knowledge, its application is more general and universal, especially in settings that involve uncertainty and information that is not always available, such as the game of poker.</p>



<p>The researchers believe the ReBel framework can be applied in developing techniques that involve interactions between multiple agents, such as self-driving cars, negotiations, auctions, and cybersecurity – areas that are usually associated with imperfect-information multi-agent interactions.</p>



<p>To prevent possible cheating in real-life high-stakes games, Facebook has opted not to release the ReBel codebase for poker.</p>
<p>The post <a href="https://www.aiuniverse.xyz/facebooks-new-poker-playing-ai-rebel-performs-better-than-humans/">Facebook’s New Poker-Playing AI ReBel Performs Better Than Humans</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/facebooks-new-poker-playing-ai-rebel-performs-better-than-humans/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Advantages and challenges Jio will face in its global 5G aspirations</title>
		<link>https://www.aiuniverse.xyz/advantages-and-challenges-jio-will-face-in-its-global-5g-aspirations/</link>
					<comments>https://www.aiuniverse.xyz/advantages-and-challenges-jio-will-face-in-its-global-5g-aspirations/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 24 Jul 2020 08:51:48 +0000</pubDate>
				<category><![CDATA[Internet of things]]></category>
		<category><![CDATA[5G Technology]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[Internet of Things]]></category>
		<category><![CDATA[Reliance Jio]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10450</guid>

					<description><![CDATA[<p>Source: moneycontrol.com The fifth generation of wireless connectivity (5G) promises a 50-fold improvement in data speed over what a large majority of us use today. The future <a class="read-more-link" href="https://www.aiuniverse.xyz/advantages-and-challenges-jio-will-face-in-its-global-5g-aspirations/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/advantages-and-challenges-jio-will-face-in-its-global-5g-aspirations/">Advantages and challenges Jio will face in its global 5G aspirations</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: moneycontrol.com</p>



<p>The fifth generation of wireless connectivity (5G) promises a 50-fold improvement in data speed over what a large majority of us use today. The future is ‘networked’. Long-anticipated opportunities from the Internet such as immersive gaming, augmented-reality product experience, virtual shopping aisles, adoption of self-driving cars and remote robotic precision surgeries will become a reality. The common thread binding this disparate list is their dependency on 5G’s promise of low latency Internet and network of connected devices that will make the ‘Internet of Things’ (IoT) mainstream.</p>



<p>Telstra in Australia, NTT and Rakuten in Japan, and several other telcos were cumulatively spending upwards of $150 billion a year on upgrading their networks towards 5G. Reliance Industries Limited’s Jio has joined that club, with a significant stack of offerings planned to ride on it. The intensity of play and participation in undertaking the shift differs — from a largely passive wait-and-watch approach among the European Union’s 100-plus operators to huge investments by the troika of Verizon, AT&amp;T and T-Mobile in the United States.</p>



<p>These attempts attain significance in light of Jio’s announcement of having developed an ‘end-to-end 5G technology’ which can be ‘exported’ to players across the world. Over the years, almost all telcos have built the networks (2G/3G/4G) over which the ‘Internet flows’, but customer willingness to pay has dropped dramatically due to data’s commoditised nature. Even so, the rapid acceleration of Internet penetration in India has opened the eyes of countries such as Brazil, where data continues to be quite expensive. In this scenario, telcos across the world are seeking a cost-efficient way to offer 5G services to their customers.</p>



<p>Recent research by Praxis Global Alliance on the 5G readiness of telcos globally highlights that Jio is uniquely placed to drive and enable this shift. The opportunity also marks a rare coming together of company strategy with strong geopolitical and macroeconomic tailwinds that Jio stands to benefit from. The key factors which make Jio poised to win in the new landscape are:</p>



<ul class="wp-block-list"><li>India is a fertile testing ground to check Jio’s 5G solution as the second-largest base of Internet users with 600 million-plus users. Jio is well placed to test in pockets, validate at scale, and refine its solution before a global launch. The auctions to award spectrum for the provision of 5G are due in 2021 in India, and Jio is already reportedly asking the government to release some rationed 5G spectrum for testing.</li><li>Jio was launched in 2016. The company does not have legacy infrastructure. The absence of archaic hardware, full 4G base (that is much easier to upgrade to 5G) and a ‘software-centric’ approach places the company at a vantage point.</li><li>Most telcos are battling the question of economics from 5G deployment and the timing of it such that the 3G/4G investments yield the intended ROI. Jio’s strategic partnership with Qualcomm could enable it to transfer a cost-competitive ‘5G stack’ comprising of licenses, code, technical blueprints, and production know-how to any telco while giving the telco enough independence to build a virtualised network by picking components across a range of vendors.</li><li>In December 2018, Jio joined the ORAN Alliance — a worldwide, operator-led effort that wants network infrastructure manufacturers of 5G such as Samsung, Ericsson, and ZTE to agree on common standards to make joint operations easier. What started as a technology forum has gained increased prominence since the use of Chinese telecom network equipment is seen with concern, both in Europe and the US. Jio is well placed as a member to cater to the global demand.</li><li>Finally, as a player which has successfully transformed from a telco to one that is establishing a digital ecosystem spanning telecom, ecommerce and retail in India, Jio is well-placed to help operators abroad in their journey of becoming a ‘digital telco’. Jio platforms i.e. ‘the digital ecosystem play’ has received recent validation of multiple global investors and is valued at $ 58 billion currently.</li></ul>



<p>However, there is much to be done in 5G, whether by Jio or other players. Beyond technological novelties, the business case for 5G continues to evolve. We do not have a ‘fully-virtualised end-to-end cloud-native network’ to date, in spite of the 20-plus 5G networks already running globally. In this scenario, there are three strategic challenges in Jio’s attempt to disrupt the 5G network infrastructure space:</p>



<ol class="wp-block-list"><li>Developing a successful vertically-integrated 5G ecosystem is hard work. We have seen this from the previous failed attempts of multiple network infrastructure and telco players to forward/backward integrate. ‘Compilers’ and ‘libraries’ (tools which are used to develop software for telecom networking gear) take years to develop, and much rides on Jio’s ability to leverage its strategic partnership-cum-investments from Qualcomm and Intel.</li><li>As of March, the five biggest 5G patent holders are Huawei, Samsung, ZTE, LG, and Nokia. Qualcomm comes in at seventh and Intel at the eight position. While the volume of patents does not always contribute to the value created, this is often a strong indicator of who is leading the technology race. The 10-year lock in period with Samsung is set to expire but that does not obviate the momentous innovation challenges in areas such as network security.</li><li>Network infrastructure players such as Huawei and Ericsson have traditionally been cost-competitive because of their ability to spread R&amp;D costs over many operators. As a new player in the infrastructure game, the company has not clearly outlined the exact nature of revenue and cost apportions between Jio and partners such as Qualcomm.</li></ol>



<p>The opportunity is unprecedented and the window to race ahead is narrow. Specific bright spots include the development of a low-cost 5G smartphone to accelerate the migration to 5G globally and the proposition of a ‘clean’ and ‘converged architecture’ commercially-feasible network in view of rising concerns of data security.</p>



<p>5G innovations such as ‘network slicing’ promise to create access for digital communities, content, and commerce to flourish as easily in the hinterlands of India as in the downtowns of Europe. Eleven years after the first-ever launch of 4G, in the Nordic cities of Stockholm and Oslo by TeliaSonera, we are gearing up for disruptions across the global communications value chain by an Indian entity, as 5G appears set to become the central nervous system of the global economy.</p>
<p>The post <a href="https://www.aiuniverse.xyz/advantages-and-challenges-jio-will-face-in-its-global-5g-aspirations/">Advantages and challenges Jio will face in its global 5G aspirations</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/advantages-and-challenges-jio-will-face-in-its-global-5g-aspirations/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google&#8217;s deep learning outperforms general pathologists in prostate biopsies</title>
		<link>https://www.aiuniverse.xyz/googles-deep-learning-outperforms-general-pathologists-in-prostate-biopsies/</link>
					<comments>https://www.aiuniverse.xyz/googles-deep-learning-outperforms-general-pathologists-in-prostate-biopsies/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 24 Jul 2020 06:14:03 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[Automated]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10431</guid>

					<description><![CDATA[<p>Source: labpulse.com A deep-learning system developed at Google outperformed general pathologists for Gleason grading of prostate cancer biopsies, company researchers reported in JAMA Oncology online July 23. The research <a class="read-more-link" href="https://www.aiuniverse.xyz/googles-deep-learning-outperforms-general-pathologists-in-prostate-biopsies/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-deep-learning-outperforms-general-pathologists-in-prostate-biopsies/">Google&#8217;s deep learning outperforms general pathologists in prostate biopsies</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: labpulse.com</p>



<p>A deep-learning system developed at Google outperformed general pathologists for Gleason grading of prostate cancer biopsies, company researchers reported in JAMA Oncology online July 23.</p>



<p>The research comes from a deep-learning/artificial intelligence (AI) research team at Google Health in Palo Alto, CA. For the study, the performance of the deep-learning system and of general pathologists was compared with assessments by a panel of urologic subspecialist pathologists in the evaluation of prostate cancer needle core biopsy formalin-fixed paraffin-embedded (FFPE) specimens.</p>



<p>In a validation set of 498 specimens that were positive for cancer, the deep-learning system came up with the same result as the panel of subspecialists in 71.7% of cases, versus 58% for general pathologists, a statistically significant difference.</p>



<p>&#8220;The [deep-learning system] warrants evaluation as an assistive tool for improving prostate cancer diagnosis and treatment decisions, especially where subspecialist expertise is unavailable,&#8221; wrote Dr.&nbsp;Craig Mermel, PhD, product lead for pathology at Google Brain, and colleagues.</p>



<p><strong>Gleason grading an &#8216;imperfect diagnostic tool&#8217;</strong></p>



<p>Deep learning has potential to play a role in an area of pathologic evaluation of prostate biopsies that needs improvement &#8212; Gleason grading, which is an imperfect diagnostic tool, the authors wrote. Grading of prostate biopsy specimens to indicate clinical risk is based on subjective assessment of various patterns, such as pattern 5, which represents poorly differentiated cells, they noted.</p>



<p>&#8220;Consequently, it is common for different pathologists to assign a different [Gleason grade group] to the same biopsy (30%- 50% discordances),&#8221; Mermel et al wrote. &#8220;In general, pathologists with urologic subspeciality training show higher rates of interobserver agreement than general pathologists, and reviews by experts lead to more accurate risk stratification than reviews by less experienced pathologists.&#8221;</p>



<p>In recent years, Google has been actively developing AI algorithms for a variety of healthcare applications, such as detecting breast cancer on mammograms, assessing cancer risk on computed tomography (CT) lung cancer screening exams, finding diabetic retinopathy on retinal scans, and predicting medical events through analysis of electronic medical record software.</p>



<p>Company researchers reported results for their deep-learning system in analyzing pathology images and predicting survival of patients with 10 cancer types in PLOS One in June.</p>



<p><strong>False positives trade-off</strong></p>



<p>In the latest study, subspecialists made their assessments using three histological sections plus an immunohistochemical stained section for each specimen. A majority opinion was determined. Performance was compared to the deep-learning system and evaluations from a panel of board-certified general pathologists. General pathologists and the algorithm worked without immunohistochemical stained sections, simulating routine workflow.</p>



<p>In addition to Gleason grading, the study evaluated performance of subspecialists, general pathologists, and the deep-learning system for differentiating specimens with and without cancer. A total of 752 specimens were assessed for this part of the study. And in this scenario, the performance of the general pathologists and the deep-learning system were on par; that is, in agreement with the subspecialist findings in 94.3% and 94.7% of cases, respectively. Compared with general pathologists, the deep-learning system caught more cancers, but also flagged more false positives.</p>



<p>&#8220;This trade-off suggests that the [deep-learning system] could help alert pathologists to tumors that may otherwise be missed, while relying on pathologist judgment to overrule false-positive categorizations on small tissue regions,&#8221; the authors advised.</p>



<p>Overall, the results suggest that an automated system could bring performance closer to the level of experts and boost the value of prostate biopsies, in the authors&#8217; view.</p>



<p>&#8220;Future research is necessary to evaluate the potential utility of using the [deep-learning system] as a decision support tool in clinical workflows and to improve the quality of prostate cancer grading for therapy decisions,&#8221; Mermel et al concluded.</p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-deep-learning-outperforms-general-pathologists-in-prostate-biopsies/">Google&#8217;s deep learning outperforms general pathologists in prostate biopsies</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/googles-deep-learning-outperforms-general-pathologists-in-prostate-biopsies/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
