<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>scientists Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/scientists/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/scientists/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 02 Mar 2021 11:18:05 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>FICO Xpress Insight Empowers 8+ Million Python Developers to Foster Collaboration Between Data Scientists and Business Users, Drastically Accelerating Project Deployment</title>
		<link>https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/</link>
					<comments>https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 02 Mar 2021 11:18:04 +0000</pubDate>
				<category><![CDATA[Python]]></category>
		<category><![CDATA[8+ Million]]></category>
		<category><![CDATA[Collaboration]]></category>
		<category><![CDATA[Developers]]></category>
		<category><![CDATA[Empowers]]></category>
		<category><![CDATA[FICO Xpress]]></category>
		<category><![CDATA[scientists]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13172</guid>

					<description><![CDATA[<p>Source &#8211; https://www.prnewswire.com/ Using FICO Xpress Insight, Python Developers Can Help Business Leaders Make More Informed, Data-Driven Decisions Highlights: The addition of native Python support to FICO® Xpress <a class="read-more-link" href="https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/">FICO Xpress Insight Empowers 8+ Million Python Developers to Foster Collaboration Between Data Scientists and Business Users, Drastically Accelerating Project Deployment</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.prnewswire.com/</p>



<p>Using FICO Xpress Insight, Python Developers Can Help Business Leaders Make More Informed, Data-Driven Decisions</p>



<p><strong>Highlights:</strong></p>



<ul class="wp-block-list"><li>The addition of native Python support to FICO® Xpress Insight enables Python&#8217;s 8.2 million users to empower business professionals with easy-to-use applications that can execute sophisticated analytic models.</li><li>With FICO® Xpress Insight business users and analysts can work with any advanced analytic model in business terms to perform simulations, compare scenarios and visualize outcomes to make better informed decisions.</li><li>Python developers can now build and operationalize Python based models all within a single framework, reducing time to deployment by orders of magnitude.</li></ul>



<p>FICO, a global analytics leader, today announced it has added native Python support to FICO® Xpress Insight. Xpress Insight enables data scientists to quickly build and deploy any advanced analytic or optimization model as a powerful business application. </p>



<p>Across industries, data scientists create powerful models to solve complex business problems. Yet, according to most industry analysts, more than half of data science projects are never fully deployed. FICO Xpress Insight helps translate between the data scientist and the line-of-business user by taking highly complex analytic or optimization models and turning them into simple point-and-click applications that help business users make real business decisions.</p>



<p>&#8220;It&#8217;s a huge waste of time and resources when the models don&#8217;t reach the intended business users,&#8221; said&nbsp;<strong>Bill Waid, vice president and general manager of FICO® Decision Management Suite</strong>. &#8220;The real value of highly complex analytic models comes when they&#8217;re in the hands of the business users and become an integral part of their decision making process.&#8221;</p>



<p>The newest update to Xpress Insight enables the 8.2 million developers that use the popular coding language Python to rapidly build business applications that can execute sophisticated analytic models. Getting analytic applications into the hands of business users allows business leaders to make more informed decisions, perform simulations, compare scenarios and visualize outcomes.</p>



<p>&#8220;Python is one of the most productive tools for our digital and data science initiatives. Having Python supported in Xpress Insight helps us smoothly build end-to-end applications from data preparation, to modeling, to visualization,&#8221; said&nbsp;<strong>Sitao Zhang, data scientist, Supply Chain Digital and Data Science, Johnson and Johnson</strong>. &#8220;This new feature has not only made our applications more coherent during programming and development, but it also helps us deliver a more user-friendly experience to the business professionals using the applications to drive results.&#8221;</p>



<p>FICO® Xpress Insight is part of the FICO® Platform, a decisioning foundation critical for enterprises&#8217; digital transformation. The platform is designed to eliminate data siloes and enable interoperability between enterprise applications. By connecting data-derived insights from disparate business units, enterprises can respond quickly to customers&#8217; immediate needs and anticipate their future demands, resulting in deeper, more engaging customer experiences.</p>
<p>The post <a href="https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/">FICO Xpress Insight Empowers 8+ Million Python Developers to Foster Collaboration Between Data Scientists and Business Users, Drastically Accelerating Project Deployment</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/fico-xpress-insight-empowers-8-million-python-developers-to-foster-collaboration-between-data-scientists-and-business-users-drastically-accelerating-project-deployment/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>MACHINE LEARNING UNRAVELS A NEW JOURNEY FOR DATA SCIENTISTS</title>
		<link>https://www.aiuniverse.xyz/machine-learning-unravels-a-new-journey-for-data-scientists/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-unravels-a-new-journey-for-data-scientists/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 19 Feb 2021 05:27:58 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[Journey]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[scientists]]></category>
		<category><![CDATA[UNRAVELS]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12919</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ As the need for machine learning grows, data scientists are racing to become experts in the technology. The concept of machine learning and the <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-unravels-a-new-journey-for-data-scientists/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-unravels-a-new-journey-for-data-scientists/">MACHINE LEARNING UNRAVELS A NEW JOURNEY FOR DATA SCIENTISTS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">As the need for machine learning increases, data scientists are on the spree to become experts in the technology</h2>



<p>The concept of machine learning and the growing demand for data scientists have been around for a while, but the ability to apply machine learning algorithms and mathematical techniques to big data has gathered momentum only recently. As the need for machine learning grows, data scientists are racing to become experts in the technology, and machine learning is expected to have a great deal to offer them in the future.</p>



<p>Before assessing the importance of machine learning for data scientists, let us look at the role of data scientists and the benefits of machine learning. The introduction of smartphones and digitization has turned daily life into a continuous exercise in data gathering. Knowingly or not, people click on thousands of things on their devices every day, creating quintillions of bytes of data. Meanwhile, the continuation of Moore’s Law – the idea that computing would dramatically increase in power and decrease in relative cost over time – has made cheap computing power widely available. Data scientists emerged from the intersection of these two developments.</p>



<p>The role of the data scientist has become more pivotal in recent years. Even traditional organizations that didn’t previously invest much of their budgets in technology positions are recruiting skilled data scientists to improve decision making and analytic processes. Data scientists help companies interpret and manage data and solve complex problems using expertise in a variety of data niches. They generally have a foundation in computer science, modelling, statistics, analytics, and math, coupled with a strong business sense. In particular, companies that recruit data scientists can improve marketing by using customer behaviour to deliver personalized campaigns and advertisements, enhance innovation through a deeper understanding of customer needs, and enrich consumers’ lives by assisting them in their day-to-day activities.</p>



<p>Machine learning, on the other hand, enables computers to learn on their own without explicit programming. Machine learning algorithms are responsible for the vast majority of the artificial intelligence advancements and applications we hear about today. Typically, machine learning algorithms use statistics to find patterns in massive amounts of data. That data encompasses many things: numbers, words, images, clicks and so on. These enormous volumes of data are stored digitally and fed into machine learning algorithms. Simply put, machine learning is everywhere, from the recommendation systems on Netflix, YouTube and Spotify to the social media feeds on Facebook and Twitter. Businesses use machine learning to cut costs and build profitable solutions to many problems. Machine learning is only one of the many tools in the data scientist’s belt, but it is becoming more and more essential as new techniques emerge from it.</p>
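<p>As a minimal illustration of that idea (an editorial example, not part of the original article), the short Python sketch below fits a model to synthetic data so that the pattern is learned from the data itself rather than hand-coded; the dataset and the hidden rule are made up purely for demonstration.</p>



<pre class="wp-block-code"><code># Minimal sketch: the model "learns" a pattern from data instead of being
# explicitly programmed with the rule. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "click" data: two made-up features per user session.
X = rng.normal(size=(1000, 2))

# Hidden rule the model has to discover: sessions with a large combined
# score tend to convert.
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print("learned coefficients:", model.coef_)            # approximates the hidden rule
print("conversion probabilities:", model.predict_proba(X[:5])[:, 1])
</code></pre>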



<p>Data science and machine learning are already closely interlinked. While data science focuses on big data tasks such as data preparation, cleansing, and analysis, machine learning is about machines using sets of algorithms that are trained on that data. To perform better, therefore, data scientists should embrace machine learning.</p>



<h4 class="wp-block-heading"><strong>The advantages of machine learning for data scientists</strong></h4>



<p>In a fast-evolving world, innovations emerge every day, and many of them are taking over routine, labour-intensive jobs from humans. Machine learning sits at the core of the technologies that take on human capabilities and intelligence, and its growing use across many industries will act as a catalyst that makes data science even more relevant. Since the data scientist’s role is to make people’s work easier through data analysis and insights, data scientists should understand machine learning well enough to produce high-quality predictions and estimates. This, in turn, helps machines take the right decisions and smarter actions in real time with little or no human intervention.</p>



<p>Machine learning is also helping data scientists in smaller ways by transforming how data mining and interpretation work, replacing traditional statistical techniques with more accurate, automated generic methods. Going forward, a basic level of machine learning will become a standard requirement for data scientists. Here are four skills that every data scientist should be aware of.</p>



<p><strong>•&nbsp;</strong>Data scientists should have knowledge and expertise in computer fundamentals such as computer organization, system architecture and layers, and application software.</p>



<p><strong>•&nbsp;</strong>Since data scientists’ work involves a lot of estimation, knowledge of probability is very important for them. They should also focus on statistical analysis to perform better.</p>



<p><strong>•&nbsp;</strong>Data scientists should be well versed in data modelling, which is used to describe how various data objects relate to and interact with each other.</p>



<p><strong>•&nbsp;</strong>One of the essential talents of a data scientist is programming skill and sound knowledge of programming languages such as Python and R.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-unravels-a-new-journey-for-data-scientists/">MACHINE LEARNING UNRAVELS A NEW JOURNEY FOR DATA SCIENTISTS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-unravels-a-new-journey-for-data-scientists/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Liverpool scientists deploy Artificial Intelligence to develop model that predicts the next pandemic</title>
		<link>https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/</link>
					<comments>https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Feb 2021 04:42:28 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Develop]]></category>
		<category><![CDATA[Liverpool]]></category>
		<category><![CDATA[Pandemic]]></category>
		<category><![CDATA[Predicts]]></category>
		<category><![CDATA[scientists]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12888</guid>

					<description><![CDATA[<p>Source &#8211; https://www.timesnownews.com/ A team of scientists at the UK&#8217;s Liverpool University has used artificial intelligence (AI) to work out where the next novel coronavirus could emerge. <a class="read-more-link" href="https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/">Liverpool scientists deploy Artificial Intelligence to develop model that predicts the next pandemic</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.timesnownews.com/</p>



<p>A team of scientists at the UK&#8217;s Liverpool University has used artificial intelligence (AI) to work out where the next novel coronavirus could emerge.</p>



<h2 class="wp-block-heading">KEY HIGHLIGHTS</h2>



<ul class="wp-block-list"><li>The COVID-19 pandemic was the first such massive and natural calamity to strike mankind in almost a century.</li></ul>



<ul class="wp-block-list"><li>Mankind had just not provided for such an eventuality and was caught off guard on almost all counts of preparedness.</li></ul>



<ul class="wp-block-list"><li>With climate change being real and threat of pandemics looming large, it would certainly help to know if a disease is going to acquire pandemic proportions.</li></ul>



<p>Rapidly advancing globalisation has turned the entire Earth into one huge village, and the same speedy connectivity and communication also ensured the rapid advance of the COVID-19 pandemic, which began with a strain of the novel coronavirus that first emerged in Wuhan, China in late 2019. Now, as per a science paper published in Nature Communications, &#8220;The spread of influenza can be modelled and forecast using a machine-learning-based analysis of anonymized mobile phone data. The mobility map, presented in Nature Communications this week, is shown to accurately forecast the spread of influenza in New York City and Australia.&#8221;</p>



<p>The year 2020 dawned with the world bracing to handle a possible crisis, and by the end of the year global deaths had reached nearly 2 million.</p>



<p>To cut a long story short, mankind has now been through so much mental agony, pain, loss, death, long-lasting illness and economic decline on account of this pandemic &#8211; despite rapid advances in science &#8211; that it has begun to dread the prediction by environmentalists and scientists that we have just entered a pandemic era and that more such pandemics are likely to come.<br><br><strong>Predicting the onset of a pandemic:</strong><br>According to a report in the&nbsp;<em>BBC</em>, a team of scientists has used artificial intelligence (AI) to work out where the next novel coronavirus could emerge.</p>



<p>The researchers are reportedly putting to use a combination of learnings from fundamental biology and tools pertaining to machine learning.</p>



<p>This is not mere conjecture; the scientists are building on what they have gained from similar experiments in the past. Their computer algorithm predicted many more potential hosts of new virus strains than had previously been detected.&nbsp;The findings have been published in the journal&nbsp;<em>Nature Communications.&nbsp;</em></p>



<p>According to this report in&nbsp;<em>Nature Communications</em>, the spread of viral diseases through a population depends on interactions between infected and uninfected people. Models that predict how diseases will spread across a city or country currently make use of data that are sparse and imprecise, such as commuter surveys or internet search data.</p>



<p>Dr Marcus Blagrove, a virologist from the University of Liverpool, UK, who was involved in the study, emphasises the need to know where the next coronavirus might come from.</p>



<p>&#8220;One way they&#8217;re generated is through recombination between two existing coronaviruses &#8211; so two viruses infect the same cell and they recombine into a &#8216;daughter&#8217; virus that would be an entirely new strain.&#8221;</p>



<p>Scientists say that to get the prediction algorithm right, the first step was to look for species that were able to harbour several viruses at once. Lead researcher Dr Maya Wardeh, who is also from the University of Liverpool, successfully deployed existing biological knowledge to teach the algorithm to search for patterns that made this more likely to happen.</p>
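<p>The study&#8217;s actual model is not described in the article; purely as a rough illustration of this first step, the Python sketch below counts how many known coronaviruses each species harbours in a small, entirely fictional host&#8211;virus association table and flags species that could host two viruses at once.</p>



<pre class="wp-block-code"><code># Rough illustration only: rank species by how many known coronaviruses
# they are recorded as hosting, since co-infection of one host by two
# viruses is a prerequisite for recombination. The association table
# below is entirely fictional.
from collections import defaultdict

associations = [            # (host species, virus) pairs, made up
    ("horseshoe bat", "CoV-A"), ("horseshoe bat", "CoV-B"),
    ("pangolin", "CoV-B"), ("pangolin", "CoV-C"),
    ("domestic pig", "CoV-C"),
    ("palm civet", "CoV-A"), ("palm civet", "CoV-C"), ("palm civet", "CoV-D"),
]

viruses_per_host = defaultdict(set)
for host, virus in associations:
    viruses_per_host[host].add(virus)

# Hosts carrying two or more viruses could, in principle, allow recombination.
candidates = sorted(viruses_per_host.items(),
                    key=lambda item: len(item[1]), reverse=True)
for host, viruses in candidates:
    if len(viruses) >= 2:
        print(host, "harbours", len(viruses), "viruses:", sorted(viruses))
</code></pre>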



<p>This step concluded that many more mammals were potential hosts for new coronaviruses than previous surveillance work &#8211; screening animals for viruses &#8211; had shown.</p>
<p>The post <a href="https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/">Liverpool scientists deploy Artificial Intelligence to develop model that predicts the next pandemic</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/liverpool-scientists-deploy-artificial-intelligence-to-develop-model-that-predicts-the-next-pandemic/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine-learning method can crunch data to find new uses for existing drugs</title>
		<link>https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 05 Jan 2021 05:32:55 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[machine-learning]]></category>
		<category><![CDATA[scientists]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12501</guid>

					<description><![CDATA[<p>Source: news-medical.net Scientists have developed a machine-learning method that crunches massive amounts of data to help determine which existing medications could improve outcomes in diseases for which <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/">Machine-learning method can crunch data to find new uses for existing drugs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: news-medical.net</p>



<p>Scientists have developed a machine-learning method that crunches massive amounts of data to help determine which existing medications could improve outcomes in diseases for which they are not prescribed.</p>



<p>The intent of this work is to speed up drug repurposing, which is not a new concept &#8211; think Botox injections, first approved to treat crossed eyes and now a migraine treatment and top cosmetic strategy to reduce the appearance of wrinkles.</p>



<p>But getting to those new uses typically involves a mix of serendipity and time-consuming and expensive randomized clinical trials to ensure that a drug deemed effective for one disorder will be useful as a treatment for something else.</p>



<p>The Ohio State University researchers created a framework that combines enormous patient care-related datasets with high-powered computation to arrive at repurposed drug candidates and the estimated effects of those existing medications on a defined set of outcomes.</p>



<p>Though this study focused on proposed repurposing of drugs to prevent heart failure and stroke in patients with coronary artery disease, the framework is flexible &#8211; and could be applied to most diseases.</p>



<p>Drug repurposing is an attractive pursuit because it could lower the risk associated with safety testing of new medications and dramatically reduce the time it takes to get a drug into the marketplace for clinical use.</p>



<p>Randomized clinical trials are the gold standard for determining a drug&#8217;s effectiveness against a disease, but Zhang noted that machine learning can account for hundreds &#8211; or thousands &#8211; of human differences within a large population that could influence how medicine works in the body. These factors, or confounders, ranging from age, sex and race to disease severity and the presence of other illnesses, function as parameters in the deep learning computer algorithm on which the framework is based.</p>



<p>That information comes from &#8220;real-world evidence,&#8221; which is longitudinal observational data about millions of patients captured by electronic medical records or insurance claims and prescription data.</p>



<p>&#8220;Real-world data has so many confounders. This is the reason we have to introduce the deep learning algorithm, which can handle multiple parameters,&#8221; said Zhang, who leads the Artificial Intelligence in Medicine Lab and is a core faculty member in the Translational Data Analytics Institute at Ohio State. &#8220;If we have hundreds or thousands of confounders, no human being can work with that. So we have to use artificial intelligence to solve the problem.</p>



<p>&#8220;We are the first team to introduce use of the deep learning algorithm to handle the real-world data, control for multiple confounders, and emulate clinical trials,&#8221; Zhang said.</p>



<p>The research team used insurance claims data on nearly 1.2 million heart-disease patients, which provided information on their assigned treatment, disease outcomes and various values for potential confounders. The deep learning algorithm also has the power to take into account the passage of time in each patient&#8217;s experience &#8211; for every visit, prescription and diagnostic test. The model input for drugs is based on their active ingredients.</p>



<p>Applying what is called causal inference theory, the researchers categorized, for the purposes of this analysis, the active drug and placebo patient groups that would be found in a clinical trial. The model tracked patients for two years &#8211; and compared their disease status at that end point to whether or not they took medications, which drugs they took and when they started the regimen.</p>
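<p>The paper&#8217;s full deep learning pipeline is not reproduced here; as a loose sketch of the emulated-trial idea under much simpler assumptions &#8211; a single binary treatment, a binary two-year outcome, and a handful of confounders, all with made-up column names &#8211; one standard way to adjust an observational comparison is inverse-probability weighting:</p>



<pre class="wp-block-code"><code># Loose sketch of comparing outcomes between a treated and an untreated
# group while adjusting for confounders via inverse-probability weighting.
# Column names and the CSV file are hypothetical; the paper's actual
# framework uses a deep learning model over longitudinal claims data.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("claims_cohort.csv")     # hypothetical extract, one row per patient
confounders = ["age", "sex", "diabetes", "prior_mi"]   # made-up confounder columns

# Step 1: model the probability of receiving the candidate drug
# given the confounders (the propensity score).
ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df["treated"])
propensity = ps_model.predict_proba(df[confounders])[:, 1]

# Step 2: weight each patient by the inverse probability of the treatment
# they actually received, then compare weighted two-year outcome rates.
weights = df["treated"] / propensity + (1 - df["treated"]) / (1 - propensity)
treated_rate = (weights * df["treated"] * df["outcome_2yr"]).sum() / (weights * df["treated"]).sum()
control_rate = (weights * (1 - df["treated"]) * df["outcome_2yr"]).sum() / (weights * (1 - df["treated"])).sum()
print("adjusted risk difference:", treated_rate - control_rate)
</code></pre>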



<p>&#8220;With causal inference, we can address the problem of having multiple treatments. We don&#8217;t answer whether drug A or drug B works for this disease or not, but figure out which treatment will have the better performance,&#8221; Zhang said.</p>



<p>Their hypothesis: that the model would identify drugs that could lower the risk for heart failure and stroke in coronary artery disease patients.</p>



<p>The model yielded nine drugs considered likely to provide those therapeutic benefits, three of which are currently in use &#8211; meaning the analysis identified six candidates for drug repurposing. Among other findings, the analysis suggested that a diabetes medication, metformin, and escitalopram, used to treat depression and anxiety, could lower risk for heart failure and stroke in the model patient population. As it turns out, both of those drugs are currently being tested for their effectiveness against heart disease.</p>



<p>Zhang stressed that what the team found in this case study is less important than how they got there.</p>



<p>&#8220;My motivation is applying this, along with other experts, to find drugs for diseases without any current treatment. This is very flexible, and we can adjust case-by-case,&#8221; he said. &#8220;The general model could be applied to any disease if you can define the disease outcome.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/">Machine-learning method can crunch data to find new uses for existing drugs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Q&#038;A: Physical scientists turn to deep learning to improve Earth systems modeling</title>
		<link>https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/</link>
					<comments>https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 05 Sep 2020 07:18:59 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[DAS]]></category>
		<category><![CDATA[data & analytics]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[NERSC]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[scientists]]></category>
		<category><![CDATA[systems]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11386</guid>

					<description><![CDATA[<p>Source: phys.org The role of deep learning in science is at a turning point, with weather, climate, and Earth systems modeling emerging as an exciting application area <a class="read-more-link" href="https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/">Q&#038;A: Physical scientists turn to deep learning to improve Earth systems modeling</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: phys.org</p>



<p>The role of deep learning in science is at a turning point, with weather, climate, and Earth systems modeling emerging as an exciting application area for physics-informed deep learning that can more effectively identify nonlinear relationships in large datasets, extract patterns, emulate complex physical processes, and build predictive models.</p>



<p>&#8220;Deep learning has had unprecedented success in some very challenging problems, but scientists want to understand exactly how these models work and why they do the things they do,&#8221; said Karthik Kashinath, a computer scientist and engineer in the Data &amp; Analytics Services Group (DAS) at the National Energy Research Scientific Computing Center (NERSC) who has been deeply involved in NERSC&#8217;s research and education efforts in this area. &#8220;A key goal of deep learning for science is how do you design and train a neural network so that it can capture accurately the complexity of the processes it seeks to model, emulate, or predict, and we&#8217;re developing ways to infuse physics and domain knowledge into these neural networks so that they obey the laws of nature and their results are explainable, robust, and trustworthy.&#8221;</p>



<p>We caught up with Kashinath following the Artificial Intelligence for Earth System Science (AI4ESS) Summer School, a week-long virtual event hosted in June by the National Center for Atmospheric Research (NCAR) and the University Corporation for Atmospheric Research (UCAR) that was attended by more than 2,400 researchers from around the world. Kashinath was involved in organizing and presenting at the event, along with David John Gagne and Rich Loft of NCAR. Much of Kashinath&#8217;s current research focuses on the application of deep learning methods to climate and Earth systems modeling.</p>



<p><strong>How are deep learning methodologies being adopted in weather, climate, and Earth systems research?</strong></p>



<p>In recent years we&#8217;ve seen a significant rise in the use of deep learning in science, not just in augmenting, enhancing or replacing existing methods, but also for discovering new science in physics, chemistry, biology, medicine, and more – discoveries that were nearly impossible with traditional statistical methods. We are now starting to see the same in the Earth sciences, with the number of publications in journals like <em>Geophysical Research Letters</em> and Nature Geoscience rising and scientific conferences now featuring entire tracks involving machine and deep learning.</p>



<p><strong>What does deep learning bring to the table?</strong></p>



<p>It is extremely powerful in pattern recognition and discovering very complex nonlinear relationships that exist in large datasets, both of which are critical for developing models of Earth science systems. The key goal of a weather or climate modeler is to understand the ways in which processes in nature operate and to model them in an effective manner so we can predict the future of climate change and extreme weather events. Deep learning offers new methods for using existing data to understand how these processes operate and to develop models for them that are not only accurate and effective but also computationally much faster than traditional methods. Traditionally, climate and weather models solve large systems of coupled nonlinear partial differential equations, which is extremely computationally intensive. Deep learning is starting to augment, enhance, or even replace parts of these models with very efficient and fast physical process emulators. And that&#8217;s a significant step forward.</p>
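<p>To make the emulator idea concrete (this example is not from the interview), the sketch below trains a small neural network to mimic a toy stand-in for an expensive physical function; a real parameterization emulator would instead be trained on output from a high-fidelity physics model.</p>



<pre class="wp-block-code"><code># Toy emulator: train a small neural network to mimic an "expensive"
# physical function so it can be called cheaply inside a larger model.
# The target function here is a stand-in; a real emulator would be fit
# to output from a high-fidelity physics code.
import torch
import torch.nn as nn

def expensive_process(x):
    # Placeholder for a costly physical computation.
    return torch.sin(3 * x) * torch.exp(-x ** 2)

x_train = torch.linspace(-2.0, 2.0, 1024).unsqueeze(1)
y_train = expensive_process(x_train)

emulator = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, 1))
optimizer = torch.optim.Adam(emulator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(emulator(x_train), y_train)
    loss.backward()
    optimizer.step()

# Once trained, the emulator replaces the expensive call at inference time.
print("final training loss:", loss.item())
</code></pre>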



<p>Pattern recognition is another area where deep learning is influencing Earth systems research. The DAS group at NERSC has been pushing hard on pattern recognition for detecting and tracking weather and climate patterns in large datasets. The 2018 Gordon Bell prize for exascale climate analytics using deep learning testifies to our contributions in that area. Given that we already have petabytes of climate data and that it is increasing at a crazy rate, it is physically impossible to sift through and recognize the key features and patterns using traditional statistical approaches. Deep learning offers very fast ways to mine that data and extract useful information such as extreme weather patterns.</p>



<p>A third area is downscaling; that is, given a low-resolution dataset, how do you produce very high-resolution data that is necessary for things like planning, especially on regional and local scales? Part of the grand challenge of climate science is how to build very high-resolution models that are accurate and produce data that we can reliably work with. One way to attack the problem is to say okay, we know these models are extremely expensive, and in the foreseeable future – even with computing getting faster and better – we&#8217;re really not going to be able to build reliable global climate models at a spatial resolution of 1 km or finer. So if we can create a deep learning model that takes low-resolution climate data and produces high-resolution data that is physically meaningful, reliable, and accurate – that is a game changer.</p>
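<p>Again as an illustration only (the layer sizes, the upsampling factor and the input shapes below are invented, not taken from any published model), a minimal deep learning downscaler can be written as a small convolutional network that maps a coarse grid to a finer one:</p>



<pre class="wp-block-code"><code># Minimal downscaling sketch: a small convolutional network maps a
# coarse-resolution field to a 4x finer grid. Layer sizes and the
# upsampling factor are illustrative only.
import torch
import torch.nn as nn

class Downscaler(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, coarse_field):
        return self.net(coarse_field)

# A batch of coarse 16x16 fields becomes 64x64 after downscaling.
coarse = torch.randn(8, 1, 16, 16)
fine = Downscaler()(coarse)
print(fine.shape)   # torch.Size([8, 1, 64, 64])
</code></pre>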



<p><strong>What is a grand challenge for deep learning applied to Earth system science?</strong></p>



<p>I come from a background in fluid dynamics, where modeling turbulence is a long-standing grand challenge. A similar challenge in the atmospheric sciences is modeling clouds. All climate models have parameterizations &#8211; components in the climate model that describe how various physical processes behave and interact with each other. In the atmosphere that includes how clouds form, how radiation works, when and where precipitation happens, etc. Cloud modeling is also known to be the largest source of uncertainty in climate model projections, and for decades one of the big challenges has been how to reduce the uncertainty. Models have become much more complex and capture many more physical phenomena, but they still have large uncertainties in their predictions. So one area where deep learning could have a significant impact is to help us build better emulators of atmospheric processes like clouds, with the goal of reducing the uncertainties in predictions. That is a very concrete scientific goal.</p>



<p><strong>As you look ahead, what are you most excited about in terms of the impact of deep learning on climate and Earth systems research?</strong></p>



<p>The major pushback we&#8217;ve had from the scientific community is that neural networks are black boxes that are hard to understand and interpret, and scientists obviously would like to understand exactly how these neural networks work and why they do the things they do. So one thing I&#8217;m really excited about is developing better ways to interpret and understand these networks and incorporate the knowledge that we have about the physics of the Earth system into these models so they are more robust, reliable, trustworthy, interpretable, explainable, and transparent. The goal is to convince ourselves that these models are behaving in ways that respect the physics of nature, are effectively using the domain knowledge that we have, and are making predictions that we can trust. I was invited to submit a paper to Proceedings of the Royal Society on exactly this topic, &#8220;Physics-informed Deep Learning for Weather and Climate Modeling,&#8221; which is now under review.</p>



<p>I&#8217;m also excited about proving, in operation, that these deep learning models provide the computational speedup we claim they will provide when we embed them into a large climate or weather model. For example, the European Weather Forecasting Center has started to replace some parts of its weather forecasting model with machine and deep learning models, and they are already starting to see benefits. In the U.S., NCAR and the National Oceanic and Atmospheric Administration are also starting to replace parts of their climate and weather models with machine learning and deep learning models, and a number of academic and industry-based research groups are working on related projects. Chris Bretherton, one of the world&#8217;s leading climate scientists, heads a group at the University of Washington that is working to replace some of the complicated cloud processes in these large climate models with deep learning methods. So I&#8217;m looking forward to seeing their results in a year or two on speedup and performance.</p>



<p><strong>What was the focus of the AI4ESS event, and why was it so well-attended?</strong></p>



<p>The Artificial Intelligence for Earth System Science (AI4ESS) Summer School focused on how attendees can strengthen their background in statistics and machine learning, learn the fundamentals of deep learning and neural networks, and learn how to use these for challenging problems in the Earth system sciences. We had an overwhelming response to the school – it was supposed to be an in-person event in Boulder, Colo., with a capacity of 80 students. But once it went virtual, we had 2,400 attendees from 40 countries across the globe. It was live-streamed through UCAR and they tracked the daily log-ins.</p>



<p>There was great participation throughout the week. We had invited speakers every day – three lectures a day, so 15 lectures over the week – with experts from machine learning, deep learning, and the Earth sciences. Each day there was also a panel discussion for 30 minutes over lunch, and for me, these were super exciting because all of these experts were discussing and debating about the challenges and opportunities of using machine learning and deep learning for Earth system science. The school also held a week-long hackathon, where teams of six each chose a project from six different problems to work on for the week. About 500 people participated in the hackathon, with a lot of collaboration and interaction, including individual Slack channels for each of the hackathon teams. There were also Slack channels for the entire week of the summer school on various things: lecture-related Q&amp;As, hackathon challenge problems, technical tips and tricks in machine learning and deep learning, etc. So there was a lot of Slack activity going on, with people exchanging ideas, sharing results, and so forth.</p>



<p><strong>Why is everyone so keen on learning this stuff?</strong></p>



<p>I think the community, especially the younger scientists, see that deep learning can be a game changer in science and they don&#8217;t want to be left behind. They believe that it is going to be mainstream soon and that it is going to be essential for doing science. That&#8217;s the main motivator. So AI4ESS focused on teaching the fundamentals and laying the groundwork for them to begin applying machine and deep learning successfully to their research.</p>
<p>The post <a href="https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/">Q&#038;A: Physical scientists turn to deep learning to improve Earth systems modeling</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/qa-physical-scientists-turn-to-deep-learning-to-improve-earth-systems-modeling/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Big Data Analytics Enables Scientists to Model COVID-19 Spread</title>
		<link>https://www.aiuniverse.xyz/big-data-analytics-enables-scientists-to-model-covid-19-spread/</link>
					<comments>https://www.aiuniverse.xyz/big-data-analytics-enables-scientists-to-model-covid-19-spread/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 15 Jul 2020 05:56:09 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[COVID-19]]></category>
		<category><![CDATA[data analytics]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[scientists]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10187</guid>

					<description><![CDATA[<p>Source: newswise.com Newswise — Public health efforts depend heavily on predicting how diseases like COVID-19 spread across the globe. Researchers from Florida Atlantic University’s College of Engineering and Computer <a class="read-more-link" href="https://www.aiuniverse.xyz/big-data-analytics-enables-scientists-to-model-covid-19-spread/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/big-data-analytics-enables-scientists-to-model-covid-19-spread/">Big Data Analytics Enables Scientists to Model COVID-19 Spread</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: newswise.com</p>



<p>Newswise — Public health efforts depend heavily on predicting how diseases like COVID-19 spread across the globe. Researchers from Florida Atlantic University’s College of Engineering and Computer Science in collaboration with LexisNexis Risk<sup>(R)</sup> Solutions, a global data technology and advanced analytics leader, have received a rapid research (RAPID) grant from the National Science Foundation (NSF) to develop a model of COVID-19 spread using innovative big data analytics techniques and tools. The project leverages prior experience in modeling Ebola spread to successfully model the spread of COVID-19.</p>



<p>Researchers will use big data analytics techniques to develop computational models to predict the spread of the disease utilizing forward simulation from a given patient and the propagation of the infection into the community; and backward simulation tracing a number of verified infections to a possible patient “zero.” Users of the models and algorithms developed by FAU and LexisNexis Risk Solutions will conform to all applicable requirements of HIPAA and other privacy regulations.</p>



<p>The project also will provide quick and automatic contact tracing and is expected to help reduce the number of patients infected with COVID-19 and virus-related deaths. This new methodology, which includes coalition-building efforts, will also support solutions for a wide range of other public health issues.</p>



<p>“This National Science Foundation grant will enable our researchers to advance knowledge within the field of big data analytics as well as across different fields including medical, health care, and public applications,” said Stella Batalama, Ph.D., dean of FAU’s College of Engineering and Computer Science. “Through our collaboration with LexisNexis Risk Solutions, we will jointly address public health concerns of national and global significance using cutting-edge computer science, big data analytics, data visualization techniques, and decision support systems.”</p>



<p>The era of “big data” is quickly changing how models are used to understand the dynamics of disease propagation. The FAU project, led by Borko Furht, Ph.D., principal investigator, a professor in the Department of Computer and Electrical Engineering and Computer Science, and director of the NSF Industry/University Cooperative Research Center for Advanced Knowledge Enablement (CAKE), FAU’s College of Engineering and Computer Science, will use an innovative risk score approach in modeling and predicting COVID-19 spread.</p>



<p>“The HPCC Systems team at LexisNexis Risk Solutions has an outstanding relationship with Dr. Furht and FAU,” said Flavio Villanustre, vice president, Technology and CISO, LexisNexis Risk Solutions. “FAU and LexisNexis Risk Solutions have been collaborating on several projects over the last five years. Our most recent work involved the NSF grant for modeling Ebola using the HPCC Systems platform and big data analytics. We are grateful to the NSF, FAU and Dr. Furht for their continued investment in research that helps the community.”&nbsp;</p>



<p>For the project, COVID-19 spread patterns will be fed into a decision support system (DSS), which also contains information about social groups or individual people. Social groups could include nurses and doctors who had contact with a patient infected with COVID-19, passengers who travelled on the same plane with an individual diagnosed with COVID-19, or family members living with someone who contracted COVID-19, among others. Based on spread patterns, the DSS would then calculate probabilities for a social group or a given person to become infected with COVID-19. Data will be provided as reports to appropriate state and government agencies so that they can immediately contact and test people who have a high score related to the person who is infected with COVID-19.</p>
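<p>The article does not spell out how these probabilities are computed; purely as an editorial illustration of the kind of scoring a decision support system might perform, the Python sketch below ranks a handful of hypothetical contacts using invented weights that are not taken from the FAU/LexisNexis project.</p>



<pre class="wp-block-code"><code># Purely illustrative contact scoring: combine a baseline transmission
# probability with made-up multipliers for contact type and exposure time.
# None of these numbers come from the FAU/LexisNexis project.

BASE_PROBABILITY = 0.05           # hypothetical per-contact baseline
CONTACT_MULTIPLIER = {            # invented weights by social group
    "household member": 6.0,
    "healthcare worker": 4.0,
    "airline passenger": 2.0,
    "coworker": 1.5,
}

def infection_score(contact_type, hours_exposed):
    """Return a crude probability that a contact becomes infected."""
    raw = BASE_PROBABILITY * CONTACT_MULTIPLIER.get(contact_type, 1.0) * (1 + 0.1 * hours_exposed)
    return min(raw, 1.0)

contacts = [("household member", 24), ("airline passenger", 5), ("coworker", 8)]
ranked = sorted(contacts, key=lambda c: infection_score(*c), reverse=True)
for contact_type, hours in ranked:
    print(f"{contact_type}: score {infection_score(contact_type, hours):.2f}")
</code></pre>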



<p>“The data analytics expertise we will receive from LexisNexis Risk Solutions will enable us to develop a model that will automatically and quickly identify every contact of an infected person,” said Furht, who received an NSF RAPID grant for modeling Ebola spread using big data analytics. “Our approach will be much faster and more efficient than methods that are done manually and we expect it to significantly reduce infection rates and the number of deaths in the United States and around the world.”&nbsp;</p>



<p>Members of the FAU team for “Modeling Coronavirus Spread Using Big Data Analytics,” include Taghi Khoshgoftaar, Ph.D., Motorola Professor; Waseem Asghar, Ph.D., an associate professor; Ankur Agarwal, Ph.D., a professor; Behnaz Ghoraani, Ph.D., an associate professor and a fellow of FAU’s Institute for Sensing and Embedded Network Systems Engineering (I-SENSE); and Mirjana Pavlovic, Ph.D., an instructor, all within FAU’s Department of Computer and Electrical Engineering and Computer Science.</p>



<p>The LexisNexis Risk Solutions team includes Villanustre; Arjuna Chala, senior director, Operations; Roger Dev, senior architect; and Jesse Shaw, principal statistical modeler.</p>



<p>“Because of a lack of actual social network data, mathematical compartmental modeling has been restricted to hypothetical populations. However, emerging LexisNexis Risk Solutions technologies could accelerate the accumulation of knowledge around disease propagation in the United States,” said Furht. “For our research, we plan to calculate various scores related to COVID-19 spread including population density rank, household mortality risk, street level mortality risk, and county mortality risk.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/big-data-analytics-enables-scientists-to-model-covid-19-spread/">Big Data Analytics Enables Scientists to Model COVID-19 Spread</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/big-data-analytics-enables-scientists-to-model-covid-19-spread/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Scientists developed the fastest soft robot</title>
		<link>https://www.aiuniverse.xyz/scientists-developed-the-fastest-soft-robot/</link>
					<comments>https://www.aiuniverse.xyz/scientists-developed-the-fastest-soft-robot/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 11 May 2020 08:25:37 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[robot]]></category>
		<category><![CDATA[scientists]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8693</guid>

					<description><![CDATA[<p>Source: Inspired by the biomechanics of cheetahs, scientists at North Carolina State University have developed the fastest soft robot yet, one that can move quickly on concrete surfaces <a class="read-more-link" href="https://www.aiuniverse.xyz/scientists-developed-the-fastest-soft-robot/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/scientists-developed-the-fastest-soft-robot/">Scientists developed the fastest soft robot</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:</p>



<p>Inspired by the biomechanics of cheetahs, scientists at North Carolina State University have developed the fastest soft robot yet, one that can move more quickly on concrete surfaces or in the water than previous generations of soft robots.</p>



<p>What’s more, it can grab objects delicately or with sufficient strength to lift heavy objects.</p>



<p>The robot is 7 centimeters long and weighs about 45 grams. It has a spring-powered, ‘bistable’ spine. This means it has two stable states.</p>



<p>Jie Yin, an assistant professor of mechanical and aerospace engineering at North Carolina State University, said, “We can switch between these stable states rapidly by pumping air into channels that line the soft, silicone robot. Switching between the two states releases a significant amount of energy, allowing the robot to exert force against the ground quickly. This enables the robot to gallop across the surface, meaning that its feet leave the ground.”</p>



<p>“Previous soft robots were crawlers, remaining in contact with the ground at all times. This limits their speed.”</p>



<p>The new class of soft robots, called “Leveraging Elastic instabilities for Amplified Performance” (LEAP) robots, can reach speeds of up to 2.7 body lengths per second—more than three times faster than previous soft robots—at a low actuation frequency of about 3 Hz. These new robots are also capable of running up steep inclines, which can be challenging or impossible for soft robots that exert less force against the ground.</p>



<p>Yin said, “We also demonstrated the use of several soft robots working together, like pincers, to grab objects. By tuning the force exerted by the robots, we were able to lift objects as delicate as an egg, as well as objects weighing 10 kilograms or more.”</p>



<p>The scientists note that this work serves as a proof of concept and that they are optimistic they can modify the design to make LEAP robots that are even faster and more capable.</p>



<p>Yin said, “Potential applications include search and rescue technologies, where speed is essential, and industrial manufacturing robotics. For example, imagine production line robotics that is faster, but still capable of handling fragile objects.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/scientists-developed-the-fastest-soft-robot/">Scientists developed the fastest soft robot</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/scientists-developed-the-fastest-soft-robot/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google Scientists Develop Software That Could Enable AI To Evolve With No Human Input</title>
		<link>https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/</link>
					<comments>https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 17 Apr 2020 06:44:55 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Google Developers]]></category>
		<category><![CDATA[machine learning (ML)]]></category>
		<category><![CDATA[scientists]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8218</guid>

					<description><![CDATA[<p>Source: iflscience.com Machine learning (ML) is a method by which algorithms adapt their activity using inputted data, rather than being programmed to do so. But building and <a class="read-more-link" href="https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/">Google Scientists Develop Software That Could Enable AI To Evolve With No Human Input</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: iflscience.com</p>



<p>Machine learning (ML) is a method by which algorithms adapt their activity using inputted data, rather than being programmed to do so. But building and “training” these algorithms takes time, and can often ingrain human biases.</p>



<p>To overcome these limitations, and enable further innovation in machine learning, researchers have explored the field of AutoML, whereby the machine learning process can be progressively automated, relying on machine compute time, rather than human research time.</p>



<p>So far, although some steps have been automated, the benchmark of virtually zero human input has yet to be attained. However, a team of scientists from Google have seen some “preliminary success” in discovering machine learning algorithms from scratch, indicating a “promising new direction for the field.”</p>



<p>In a paper published on the preprint server arXiv, Quoc Le, a computer scientist at Google, and his colleagues employed concepts from Darwinian evolution, such as natural selection, to enable ML algorithms to improve generation upon generation. Combining basic mathematical operations, their program, called AutoML-Zero, generated 100 unique algorithms that they then tested on simple tasks, such as image recognition.</p>



<p>After their performance was compared to that of hand-designed algorithms, the best were kept and small random “mutations” were introduced into their code, whilst the weaker candidates were removed. As the cycle continued, a high-performing set of algorithms was found, some of them comparable to classic machine learning techniques – such as neural networks (a kind of computer program that loosely mimics how our brain cells work together to make decisions).</p>
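<p>The article only sketches this evolutionary loop; a toy version of that evaluate-keep-mutate cycle might look like the Python snippet below. Here each candidate “algorithm” is reduced to a short weight vector scored on a stand-in task, which is far simpler than the program code AutoML-Zero actually evolves.</p>



<pre class="wp-block-code"><code># Toy evolutionary search in the spirit described above: evaluate a
# population of candidates, keep the best, apply small random mutations,
# and repeat. Real AutoML-Zero evolves sequences of program instructions;
# here each "candidate" is just a weight vector scored on a fake task.
import random

def fitness(candidate):
    # Stand-in evaluation: how well the candidate matches a hidden target.
    target = [0.5, -1.0, 2.0]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate):
    # Small random "mutation" of one element.
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.1)
    return child

population = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(100)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]                      # keep the best candidates
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]    # refill with mutated copies

print("best candidate:", max(population, key=fitness))
</code></pre>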



<p>This proves the team’s concept, Le told Science Magazine, but he is hopeful that the processes can be scaled up to eventually create much more complex AIs, which human researchers could never find.</p>



<p>“Our goal is to show that AutoML can go further: it is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks,” the team wrote in the paper, which is awaiting peer-review.</p>



<p>“Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent, multiplicative interactions, weight averaging, normalized gradients, etc.” the authors continued. “These results are promising, but there is still much work to be done.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/">Google Scientists Develop Software That Could Enable AI To Evolve With No Human Input</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google AI Scientists –“Developing Algorithms that Mirror Darwinian Evolution”</title>
		<link>https://www.aiuniverse.xyz/google-ai-scientists-developing-algorithms-that-mirror-darwinian-evolution/</link>
					<comments>https://www.aiuniverse.xyz/google-ai-scientists-developing-algorithms-that-mirror-darwinian-evolution/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 16 Apr 2020 06:43:09 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AutoML]]></category>
		<category><![CDATA[Developing]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[scientists]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8204</guid>

					<description><![CDATA[<p>Source: dailygalaxy.com Science-fiction author Vernor Vinge once said that mankind’s last great invention will be the first self-replicating machine. Now, AI scientists working in Google Brain division <a class="read-more-link" href="https://www.aiuniverse.xyz/google-ai-scientists-developing-algorithms-that-mirror-darwinian-evolution/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-ai-scientists-developing-algorithms-that-mirror-darwinian-evolution/">Google AI Scientists –“Developing Algorithms that Mirror Darwinian Evolution”</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: dailygalaxy.com</p>



<p>Science-fiction author Vernor Vinge once said that mankind’s last great invention will be the first self-replicating machine. Now, AI scientists in Google’s Brain division are testing how machine learning algorithms can be created from scratch using simple math and then evolve naturally. According to Google’s AutoML team, the software could eventually be able to “automatically discover” completely unknown algorithms while also reducing the human bias introduced during data input. The software, known as AutoML-Zero, resembles the process of evolution, with code improving each generation with little human interaction.</p>



<p>Machine learning tools are “trained” to find patterns in vast amounts of data, automating such processes and being refined continually based on past experience.</p>



<p>But there’s a drawback: “Human-designed components bias the search results in favor of human-designed algorithms, possibly reducing the innovation potential of AutoML,” according to the team’s paper. “Innovation is also limited by having fewer options: you cannot discover what you cannot search for.” The analysis, published last month on arXiv, is titled “Evolving Machine Learning Algorithms From Scratch”.</p>



<p>In an interview with Newsweek, Haran Jackson, chief technology officer (CTO) at Techspert, who holds a PhD in Computing from the University of Cambridge, said that AutoML tools are typically used to “identify and extract” the most useful features from datasets, and described the new approach as a welcome development.</p>



<p>“There is a sense among many members of the community,” he added, “that the most impressive feats of artificial intelligence will only be achieved with the invention of new algorithms that are fundamentally different to those that we as a species have so far devised. This is what makes the aforementioned paper so interesting. It presents a method by which we can automatically construct and test completely novel machine learning algorithms.”</p>



<p>Jackson concluded that the approach taken was similar to the theory of evolution proposed by Charles Darwin, noting how the Google team was able to induce “mutations” into the set of algorithms. “The mutated algorithms that did a better job of solving real-world problems were kept alive, with the poorly-performing ones being discarded.”</p>
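
<p>To make the mutation step concrete, the toy sketch below represents a candidate “algorithm” as a short list of basic mathematical operations and randomly rewrites one instruction. The instruction set, operands and mutation rule are invented for illustration and are far simpler than the search space described in the paper.</p>

<pre class="wp-block-code"><code>import random

# Toy representation of a candidate algorithm as a list of basic math operations
# applied to a running value, plus a mutation that edits one instruction at random.
# Purely illustrative; the real search space in the paper is much richer.
OPS = {
    "add": lambda value, operand: value + operand,
    "mul": lambda value, operand: value * operand,
    "sub": lambda value, operand: value - operand,
}

def run(program, x):
    # Execute the program on input x, starting from the input itself.
    value = x
    for op_name, operand in program:
        value = OPS[op_name](value, operand)
    return value

def mutate(program):
    # Replace one randomly chosen instruction with a new random one.
    mutant = list(program)
    i = random.randrange(len(mutant))
    mutant[i] = (random.choice(list(OPS)), random.uniform(-2, 2))
    return mutant

# Usage: a three-instruction program and one of its mutants.
program = [("mul", 2.0), ("add", 1.0), ("sub", 0.5)]
print(run(program, 3.0))          # 3*2 + 1 - 0.5 = 6.5
print(run(mutate(program), 3.0))  # same program with one instruction changed
</code></pre>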
<p>The post <a href="https://www.aiuniverse.xyz/google-ai-scientists-developing-algorithms-that-mirror-darwinian-evolution/">Google AI Scientists –“Developing Algorithms that Mirror Darwinian Evolution”</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-ai-scientists-developing-algorithms-that-mirror-darwinian-evolution/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Scientists&#8217; next AI agenda: Making machines learn &#8216;common sense&#8217; and &#8216;teach&#8217; themselves</title>
		<link>https://www.aiuniverse.xyz/scientists-next-ai-agenda-making-machines-learn-common-sense-and-teach-themselves/</link>
					<comments>https://www.aiuniverse.xyz/scientists-next-ai-agenda-making-machines-learn-common-sense-and-teach-themselves/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 11 Apr 2020 10:58:42 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[agenda]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[machines learning]]></category>
		<category><![CDATA[scientists]]></category>
		<category><![CDATA[teach]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8125</guid>

					<description><![CDATA[<p>Source: ibtimes.sg Artificial intelligence (AI) seems to be taking over the world and is even helping us combat the ongoing coronavirus pandemic, but so far it has been a <a class="read-more-link" href="https://www.aiuniverse.xyz/scientists-next-ai-agenda-making-machines-learn-common-sense-and-teach-themselves/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/scientists-next-ai-agenda-making-machines-learn-common-sense-and-teach-themselves/">Scientists&#8217; next AI agenda: Making machines learn &#8216;common sense&#8217; and &#8216;teach&#8217; themselves</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: ibtimes.sg</p>



<p>Artificial intelligence (AI) seems to be taking over the world and is even helping us combat the ongoing coronavirus pandemic, but so far it has been a product of human supervision – we teach computers to see patterns, just like we teach children to read. However, researchers believe the future of AI depends on systems that are capable of learning on their own, without any supervision.</p>



<p><strong>What is supervised learning?</strong></p>



<p>When a parent points at a dog and tells the baby, &#8220;Look at the doggie,&#8221; the child learns what to call the furry four-legged friend. This is an example of supervised learning, as The New York Times has pointed out. However, when the baby stands and stumbles, over and over again, before she learns how to walk, that is a different kind of learning altogether.</p>



<p>Computers and humans are quite similar when it comes to learning. But while we learn mostly through observation or trial and error, computers still have to pass through a stage of supervised learning before they can approach human-level intelligence.</p>



<p>Even if a supervised learning system read all the books in the world, it would still not achieve human-level intelligence, because a large chunk of our knowledge and expertise is never written down.</p>



<p><strong>Limitations of human supervision</strong></p>



<p>Supervised learning involves feeding labeled data, such as images, audio, or text, into computer algorithms, which teaches machines what to do. However, this learning method has its limitations.</p>
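
<p>As a concrete illustration of that pipeline, the short sketch below trains a classifier on a small labeled dataset using scikit-learn. The library, dataset and model are illustrative choices, not something the researchers quoted here prescribe.</p>

<pre class="wp-block-code"><code># Minimal supervised-learning sketch: labeled examples in, fitted model out.
# Assumes scikit-learn is installed; the dataset and model are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)          # images (features) and their labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)    # the algorithm being "taught"
model.fit(X_train, y_train)                  # learning from human-labeled data
print("held-out accuracy:", model.score(X_test, y_test))
</code></pre>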



<p>&#8220;There is a limit to what you can apply supervised learning to today due to the fact that you need a lot of labeled data,&#8221; said Yann LeCun, an expert in machine learning and artificial intelligence and a recipient of the 2018 Turing Award, the equivalent of a Nobel Prize in computer science. He is also vice president and chief A.I. scientist at Facebook.</p>



<p>Learning methods that do not depend on such human intervention have been explored far less, overshadowed by the success of supervised learning and its many practical real-world applications, from self-driving cars to smart speakers. But supervised learning still can&#8217;t handle many tasks that are simple even for a toddler.</p>



<p><strong>Artificial intelligence that learns on its own</strong></p>



<p>Therefore, scientists leading the charge in artificial intelligence research have shifted their focus to less-supervised learning methods, in which the artificial intelligence develops a common sense of sorts and carries out tasks by learning on its own.</p>



<p>&#8220;There&#8217;s self-supervised and other related ideas, like reconstructing the input after forcing the model to a compact representation, predicting the future of a video or masking part of the input and trying to reconstruct it,&#8221; said Samy Bengio, a research scientist at Google.</p>
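
<p>Below is a minimal sketch of the &#8220;mask part of the input and try to reconstruct it&#8221; idea Bengio describes: random entries of each unlabeled example are hidden, and a single linear layer is trained to fill them back in. The architecture, masking rate and training loop are illustrative assumptions, far smaller than anything used in practice.</p>

<pre class="wp-block-code"><code>import numpy as np

# Toy "mask and reconstruct" objective: hide random entries of each input
# and train a single linear layer to fill them back in. Purely illustrative;
# real self-supervised models use far larger networks and datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))               # unlabeled data: no human labels needed

W = np.zeros((16, 16))                       # linear "encoder-decoder" weights
lr, mask_rate = 0.01, 0.25

for _ in range(200):
    mask = rng.random(X.shape) > mask_rate   # True where the input is kept
    corrupted = X * mask                     # masked-out entries become zero
    recon = corrupted @ W                    # try to reconstruct the full input
    grad = corrupted.T @ (recon - X) / len(X)
    W -= lr * grad                           # gradient step on reconstruction error

print("reconstruction error:", float(np.mean((X @ W - X) ** 2)))
</code></pre>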



<p>Scientists are also exploring reinforcement learning, which requires very limited supervision and does not rely on labeled data. This learning method, pioneered by the University of Alberta&#8217;s Richard Sutton, follows a reward-driven approach, essentially like a dog performing a trick to earn a treat. The strategy teaches computer systems to learn new actions on their own.</p>



<p>All they need to do is set a goal, and a reinforcement learning system will try to achieve that goal through trial and error until it consistently earns the reward. A more appropriate term for this future AI is &#8220;predictive learning,&#8221; meaning that systems not only recognize patterns but also predict outcomes and choose a course of action autonomously.</p>
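
<p>Below is a minimal sketch of that reward-driven loop, using tabular Q-learning on a tiny corridor world: the agent earns a reward only when it reaches the goal cell and learns by trial and error which action to take in each state. The environment, reward scheme and hyperparameters are invented for illustration; they are not taken from the article or from Sutton&#8217;s work specifically.</p>

<pre class="wp-block-code"><code>import random

# Toy reward-driven loop: an agent on a 5-cell corridor learns, by trial and
# error, to walk right and reach the goal cell that pays a reward.
N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)     # actions: move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0     # the "treat" for reaching the goal
    return nxt, reward

for _ in range(500):                         # episodes of trial and error
    state = 0
    while state != GOAL:
        if random.random() > epsilon:        # mostly exploit what has been learned
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        else:                                # sometimes explore at random
            action = random.choice(ACTIONS)
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# The learned greedy action in each state should be +1 (walk right toward the goal).
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
</code></pre>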



<p>For instance, if a self-supervised computer system &#8220;watches&#8221; millions of videos on YouTube, it will build up a representation of the world from the clips, and when the machine is asked to perform a particular task, it can act based on what it has learned from those videos – in other words, teach itself.</p>
<p>The post <a href="https://www.aiuniverse.xyz/scientists-next-ai-agenda-making-machines-learn-common-sense-and-teach-themselves/">Scientists&#8217; next AI agenda: Making machines learn &#8216;common sense&#8217; and &#8216;teach&#8217; themselves</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/scientists-next-ai-agenda-making-machines-learn-common-sense-and-teach-themselves/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
