<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>algorithm Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/algorithm/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/algorithm/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 03 Jul 2021 10:10:34 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Machine Learning Algorithm Brings Predictive Analytics to Cell Study</title>
		<link>https://www.aiuniverse.xyz/machine-learning-algorithm-brings-predictive-analytics-to-cell-study/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-algorithm-brings-predictive-analytics-to-cell-study/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 03 Jul 2021 10:10:32 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[Analytics]]></category>
		<category><![CDATA[Brings]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Predictive]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14746</guid>

					<description><![CDATA[<p>Source &#8211; https://healthitanalytics.com/ A new machine learning algorithm system uses predictive analytics to determine which transcription factors are active in individual cells. Scientists at the University of <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-algorithm-brings-predictive-analytics-to-cell-study/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-algorithm-brings-predictive-analytics-to-cell-study/">Machine Learning Algorithm Brings Predictive Analytics to Cell Study</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://healthitanalytics.com/</p>



<p>A new machine learning algorithm system uses predictive analytics to determine which transcription factors are active in individual cells.</p>



<p>Scientists at the University of Illinois Chicago have introduced a new system that uses a machine learning algorithm and predictive analytics to find what transcription factors are most likely to be active in individual cells. The system was created to provide researchers with a more efficient method of identifying the regulators of genes.</p>



<p>Transcription factors are proteins that bind to DNA and control which genes are active inside a cell. Understanding and manipulating these signals in a cell is crucial to the biomedical field. Additionally, manipulating these signals within a cell has proven to be an effective way to discover new treatments for illnesses.</p>



<p>However, there are hundreds of transcription factors inside a human cell. It could take years of research, and lots of trial and error, to determine the most active factor.</p>



<h4 class="wp-block-heading">Dig Deeper</h4>



<ul class="wp-block-list"><li>Machine Learning Predicts Dialysis, Death in COVID-19 Patients</li><li>Machine Learning Gauges Unconsciousness Under Anesthesia</li><li>AI, Predictive Analytics Pave Way for Premature Baby Care</li></ul>



<p>&#8220;One of the challenges in the field is that the same genes may be turned ‘on’ in one group of cells but turned ‘off’ in a different group of cells within the same organ,&#8221; Jalees Rehman, UIC professor in the department of medicine and the department of pharmacology and regenerative medicine at the College of Medicine, said in a press release.</p>



<p>&#8220;Being able to understand the activity of transcription factors in individual cells would allow researchers to study activity profiles in all the major cell types of major organs such as the heart, brain or lungs,&#8221; Rehman continued.</p>



<p>The system developed by the University of Illinois Chicago is named BITFAM, standing for Bayesian Inference Transcription Factor Activity Model. The machine learning algorithm system operates by “combining new gene expression profile data gathered from single cell RNA sequencing with existing biological data on transcription factor target genes,” UIC stated in a press release.</p>



<p>With all the information, the system will run multiple computer-based simulations to find the best fit and predict the activity for every transcription factor in the cell.</p>



<p>The system was tested on cells from tissue in the lung, heart, and brain by Rehman and fellow researcher Yang Dai, UIC associate professor in the department of bioengineering at the College of Medicine and the College of Engineering.</p>



<p>&#8220;Our approach not only identifies meaningful transcription factor activities but also provides valuable insights into underlying transcription factor regulatory mechanisms,&#8221; Shang Gao, first author of the study and a doctoral student in the department of bioengineering, said in a press release.</p>



<p>&#8220;For example, if 80% of a specific transcription factor&#8217;s targets are turned on inside the cell, that tells us that its activity is high. By providing data like this for every transcription factor in the cell, the model can give researchers a good idea of which ones to look at first when exploring new drug targets to work on that type of cell,&#8221; Gao continued.</p>
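

<p>To make the intuition in Gao&#8217;s example concrete, the sketch below scores each transcription factor by the fraction of its known target genes detected above a threshold in a single cell. This is a simplified illustration of the idea, not the published BITFAM model, which uses Bayesian inference; the gene names, target lists, and expression values are invented for the example.</p>



<pre class="wp-block-code"><code>import numpy as np

# Hypothetical single-cell expression profile mapping gene names to expression levels.
expression = {"GATA1": 2.3, "KLF1": 0.0, "TAL1": 1.1, "SPI1": 0.0, "CEBPA": 0.9}

# Hypothetical mapping from transcription factors to their known target genes.
tf_targets = {
    "TF_A": ["GATA1", "TAL1", "KLF1", "CEBPA"],
    "TF_B": ["SPI1", "KLF1"],
}

def naive_tf_activity(targets, profile, threshold=0.5):
    """Fraction of a TF's target genes expressed above a detection threshold."""
    hits = [profile.get(gene, 0.0) &gt; threshold for gene in targets]
    return float(np.mean(hits)) if hits else 0.0

for tf, targets in tf_targets.items():
    print(tf, naive_tf_activity(targets, expression))
# TF_A has 3 of 4 targets 'on' (0.75), suggesting high activity in this toy profile.</code></pre>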



<p>According to the researchers, the machine learning algorithm system is available to the public and could be applied widely. Users can combine the system with additional analysis methods that may be better suited for their own studies. This could include finding new drug targets.</p>



<p>&#8220;This new approach could be used to develop key biological hypotheses regarding the regulatory transcription factors in cells related to a broad range of scientific hypotheses and topics. It will allow us to derive insights into the biological functions of cells from many tissues,&#8221; Dai said.</p>



<p>Rehman explained that the application relevant to his lab is to use the new machine learning algorithm system to focus on the factors that drive disease in certain cells.</p>



<p>“For example, we would like to understand if there is transcription factor activity that distinguished a healthy immune cell response from an unhealthy one, as in the case of conditions such as COVID-19, heart disease or Alzheimer&#8217;s disease where there is often an imbalance between healthy and unhealthy immune responses,&#8221; Rehman said.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-algorithm-brings-predictive-analytics-to-cell-study/">Machine Learning Algorithm Brings Predictive Analytics to Cell Study</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-algorithm-brings-predictive-analytics-to-cell-study/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>This New Algorithm can Explain Artificial Intelligence (XAI)</title>
		<link>https://www.aiuniverse.xyz/this-new-algorithm-can-explain-artificial-intelligence-xai/</link>
					<comments>https://www.aiuniverse.xyz/this-new-algorithm-can-explain-artificial-intelligence-xai/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 03 Apr 2021 06:45:41 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[Explain]]></category>
		<category><![CDATA[explainable]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13917</guid>

					<description><![CDATA[<p>Source &#8211; https://www.eletimes.com/ Researchers from the University of Toronto and LG AI Research have developed an “explainable” artificial intelligence (XAI) algorithm that can help identify and eliminate <a class="read-more-link" href="https://www.aiuniverse.xyz/this-new-algorithm-can-explain-artificial-intelligence-xai/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/this-new-algorithm-can-explain-artificial-intelligence-xai/">This New Algorithm can Explain Artificial Intelligence (XAI)</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.eletimes.com/</p>



<p>Researchers from the University of Toronto and LG AI Research have developed an “explainable” artificial intelligence (XAI) algorithm that can help identify and eliminate defects in display screens.</p>



<p>The&nbsp;new algorithm, which outperformed comparable approaches on industry benchmarks, was developed through an ongoing AI research collaboration between LG and U of T that was expanded in 2019 with a focus on AI applications for businesses.</p>



<p>Researchers say the XAI algorithm could potentially be applied in other fields that require a window into how&nbsp;machine learning&nbsp;makes its decisions, including the interpretation of data from medical scans.</p>



<p>XAI is an emerging field that addresses issues with the ‘black box’ approach of machine learning strategies.</p>



<p>In a black-box model, a computer might be given a set of training data in the form of millions of labeled images. By analyzing the data, the algorithm learns to associate certain features of the input (images) with certain outputs (labels). Eventually, it can correctly attach labels to images it has never seen before.</p>



<p>The machine decides for itself which aspects of the image to pay attention to and which to ignore, meaning its designers will never know exactly how it arrives at a result.</p>



<p>But such a “black box” model presents challenges when it’s applied to areas such as health care, law, and insurance.</p>



<p>For example, a [machine learning] model might determine a patient has a 90 percent chance of having a tumor. The consequences of acting on inaccurate or biased information are literally life or death. To fully understand and interpret the model’s prediction, the doctor needs to know how the algorithm arrived at it.</p>



<p>In contrast to traditional machine learning, XAI is designed to be a “glass box” approach that makes decision-making transparent. XAI algorithms are run simultaneously with traditional algorithms to audit the validity and the level of their learning performance. The approach also provides opportunities to carry out debugging and find training efficiencies.</p>



<p>Existing XAI methods generally fall into two categories. The first, known as backpropagation, relies on the underlying AI architecture to quickly calculate how the network’s prediction corresponds to its input. The second, known as perturbation, sacrifices some speed for accuracy and involves changing data inputs and tracking the corresponding outputs to determine the necessary compensation.</p>
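

<p>As a rough sketch of the perturbation idea (not the algorithm the U of T and LG team actually built), one can occlude patches of an input image and record how much the model&#8217;s score drops; the resulting grid serves as a crude explanation map. The model below is an assumed stand-in function, not a real trained network.</p>



<pre class="wp-block-code"><code>import numpy as np

def model_score(image):
    """Stand-in for a trained classifier's confidence for one class (assumed)."""
    # Pretend the model keys on the bright region in the upper-left corner.
    return float(image[:8, :8].mean())

def occlusion_map(image, patch=8):
    """Perturbation-style explanation: how much the score drops per blanked-out patch."""
    base = model_score(image)
    height, width = image.shape
    heat = np.zeros((height // patch, width // patch))
    for i in range(0, height, patch):
        for j in range(0, width, patch):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch] = 0.0   # occlude one patch
            heat[i // patch, j // patch] = base - model_score(perturbed)
    return heat   # large values mark regions the prediction depends on

image = np.zeros((32, 32))
image[:8, :8] = 1.0            # synthetic 'important' region
print(occlusion_map(image))</code></pre>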



<p>There is a lot of potential for widespread application of SISE, the team’s new algorithm. The problem and intent of the particular scenario will always require adjustments to the algorithm, but these heat maps or ‘explanation maps’ could be more easily interpreted by, for example, a medical professional.</p>



<p>LG’s goal in partnering with the University of Toronto is to become a world leader in AI innovation. This first achievement in XAI speaks to our company’s ongoing efforts to make contributions in multiple areas, such as the functionality of LG products, innovation of manufacturing, management of supply chain, the efficiency of material discovery, and others, using AI to enhance customer satisfaction.</p>



<p>When both sets of researchers come to the table with their respective points of view, it can often accelerate problem-solving. It is invaluable for graduate students to be exposed to this process.</p>



<p>While it was a challenge for the team to meet the aggressive accuracy and run-time targets within the year-long project—all while juggling Toronto/Seoul time zones and working under COVID-19 constraints—Sudhakar says the opportunity to generate a practical solution for a world-renowned manufacturer was well worth the effort.</p>
<p>The post <a href="https://www.aiuniverse.xyz/this-new-algorithm-can-explain-artificial-intelligence-xai/">This New Algorithm can Explain Artificial Intelligence (XAI)</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/this-new-algorithm-can-explain-artificial-intelligence-xai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep-learning algorithm designs soft robots with sensors</title>
		<link>https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 25 Mar 2021 06:28:10 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[deep-learning]]></category>
		<category><![CDATA[designs]]></category>
		<category><![CDATA[Robots]]></category>
		<category><![CDATA[SENSORS]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13779</guid>

					<description><![CDATA[<p>Source &#8211; https://www.theweek.in/ Soft robots collect more useful information about their surroundings Creating soft robots has been a long-running challenge in robotics. Their rigid counterparts have a <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/">Deep-learning algorithm designs soft robots with sensors</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.theweek.in/</p>



<p>Soft robots collect more useful information about their surroundings</p>



<p>Creating soft robots has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots&#8217; finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning.</p>



<p>A team of MIT researchers developed a deep learning neural network to aid the design of soft-bodied robots.</p>



<p>Soft-bodied robots are able to interact with people more safely or slip into tight spaces with ease, but soft robots are not so tractable. For robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That&#8217;s a tall task for a soft robot that can deform in a virtually infinite number of ways.</p>



<p>The algorithm developed by the MIT researchers can help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimised placement of sensors within the robot&#8217;s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. &#8220;The system not only learns a given task, but also how to best design the robot to solve that task,&#8221; says Alexander Amini. &#8220;Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.&#8221;</p>



<p>Soft-bodied robots are flexible and pliant—they generally feel more like a bouncy ball than a bowling ball. &#8220;The main problem with soft robots is that they are infinitely dimensional,&#8221; says co-author Andrew Spielberg. &#8220;Any point on a soft-bodied robot can, in theory, deform in any way possible.&#8221; That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot&#8217;s position and feed that information back into the robot&#8217;s control program. But the researchers wanted to create a soft robot untethered from external aid.</p>



<p>&#8220;You can&#8217;t put an infinite number of sensors on the robot itself,&#8221; says Spielberg. &#8220;So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?&#8221; The team turned to deep learning for an answer.</p>



<p>The researchers developed a novel neural network architecture that both optimises sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot&#8217;s body into regions called &#8220;particles.&#8221; Each particle&#8217;s rate of strain was provided as an input to the neural network. Through a process of trial and error, the network &#8220;learns&#8221; the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the network&#8217;s subsequent trials.</p>
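

<p>The full MIT system jointly learns the task and the sensor layout, but the culling step described above can be sketched in isolation: score candidate sensor sites (&#8220;particles&#8221;) by how heavily a learned readout relies on them, then keep only the top few. The strain data, task signal, and linear readout below are assumptions made for illustration.</p>



<pre class="wp-block-code"><code>import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, keep = 50, 500, 4

# Simulated per-particle strain readings (assumed data, one column per particle).
strains = rng.normal(size=(n_steps, n_particles))

# Toy task signal that truly depends on only a few particles (indices chosen arbitrarily).
informative = [3, 17, 28, 41]
target = strains[:, informative].sum(axis=1) + 0.1 * rng.normal(size=n_steps)

# A simple least-squares readout stands in for the neural network in this sketch.
weights, *_ = np.linalg.lstsq(strains, target, rcond=None)

# Rank candidate sensor sites by how heavily the readout relies on them, then cull
# all but the top few, a crude analogue of dropping the lesser-used particles.
ranked = np.argsort(-np.abs(weights))
print("chosen sensor sites:", sorted(ranked[:keep].tolist()))</code></pre>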



<p>Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot&#8217;s movements, &#8220;we also need to think about how we&#8217;re going to sensorize these robots, and how that will interplay with other components of that system,&#8221; he says. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. &#8220;That&#8217;s something where you need a very robust, well-optimized sense of touch,&#8221; says Spielberg. &#8220;So, there&#8217;s potential for immediate impact.&#8221;</p>



<p>&#8220;Automating the design of sensorised soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,&#8221; says coauthor Daniela Rus. &#8220;The sensors are an important aspect of the process, as they enable the soft robot to &#8216;see&#8217; and understand the world and its relationship with the world.&#8221;</p>



<p>The research will be presented during April&#8217;s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters.</p>



<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/">Deep-learning algorithm designs soft robots with sensors</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-algorithm-designs-soft-robots-with-sensors/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>UNRAVELLING A NEW ALGORITHM CAPABLE OF REDUCING THE COMPLEXITY OF DATA</title>
		<link>https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/</link>
					<comments>https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Mar 2021 06:23:03 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[CAPABLE]]></category>
		<category><![CDATA[complexity]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[REDUCING]]></category>
		<category><![CDATA[UNRAVELLING]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13588</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ The new algorithm is an effective machine learning tool that is capable of extracting the desired information Big data, evidently, is too large to <a class="read-more-link" href="https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/">UNRAVELLING A NEW ALGORITHM CAPABLE OF REDUCING THE COMPLEXITY OF DATA</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">The new algorithm is an effective machine learning tool that is capable of extracting the desired information</h2>



<p>Big data is, by definition, too large to be processed using conventional data processing tools and techniques. Most information systems produce data in quantities so huge that they are difficult to measure. The complex big data that organizations have to deal with is characterized by huge volume, high value, high variability, high velocity, wide variety, and low veracity.</p>



<p>Scientific experiments are another major source of huge amounts of data. Over time, researchers have developed highly efficient ways to plan, conduct, and assess research, relying on a combination of computational, algorithmic, statistical, and mathematical techniques. Because the results of experiments are usually recorded as numbers, the outcome is often an enormous dataset. Such big data is not easy to handle, and extracting meaningful insights from it is trickier still. This is why every available method for reducing the size of the data is being employed and tested. Today, a variety of algorithms are used to shrink datasets and extract their principal features, shedding light on the most critical aspect of the data: its statistical properties. The downside is that certain algorithms cannot be applied directly to such large volumes of data.</p>



<p>Among the many researchers and programmers working on ways to handle this enormous volume of data is Reza Oftadeh, a doctoral student in the Department of Computer Science and Engineering at Texas A&amp;M University. Oftadeh developed an algorithm that, according to him, is an effective machine learning tool because it can extract the desired information directly. Oftadeh and his team, which includes a couple of other doctoral students and assistant professors, published their work in the proceedings of the 2020 International Conference on Machine Learning. The research was funded by the National Science Foundation and a U.S. Army Research Office Young Investigator Award.</p>



<p>The dataset under consideration often has high dimensionality, meaning it has a large number of features, which makes it harder for a model to generalize. That is why so much effort goes into reducing the dimensionality of the data. Once the data has been reduced to its most relevant features and annotated, further analysis becomes easier, and tasks such as classification, visualization, and modelling proceed more smoothly.</p>



<p>This is not the first time such algorithms and methodologies have been put in place; they have been around for quite some time. But with big data growing exponentially, analysing it has become both time consuming and complicated, which motivated the use of artificial neural networks (ANNs). ANNs are models built from many interconnected artificial neurons arranged in a well-defined architecture, designed to simulate how the human brain analyses and processes data, and their task is to extract meaningful information from the dataset they are given. ANNs have found numerous applications, and one that sets them apart is their ability to classify big data into different categories based on its features.</p>



<p>Asked for his views, Oftadeh began by noting how much we rely on ANNs in day-to-day life, citing Alexa, Siri, and Google Translate as examples of systems trained to understand what a person is saying. However, he also pointed out that not all of a dataset’s features are equally significant. He illustrated this with a specific type of ANN called an “autoencoder”, which cannot tell where the features are located or which features are more critical than the rest, he added, and running the model repeatedly to find out is itself time consuming.</p>
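

<p>For readers unfamiliar with the term, an autoencoder is a network trained to compress its input into a small latent code and then reconstruct it. The minimal sketch below is a generic example, not the team&#8217;s algorithm, with arbitrary layer sizes and synthetic data; note that it reconstructs its input but reports nothing about which input features mattered or where they are located, which is exactly the limitation Oftadeh describes.</p>



<pre class="wp-block-code"><code>import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Minimal autoencoder: compress 64 input features to 8 latent ones and back."""

    def __init__(self, n_features=64, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_latent), nn.ReLU())
        self.decoder = nn.Linear(n_latent, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(256, 64)                  # synthetic data standing in for real samples
model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):                   # train the network to reconstruct its input
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    optimizer.step()

# The trained model reconstructs the data, but says nothing about which of the
# 64 input features were important, the limitation discussed above.
print("reconstruction error:", loss.item())</code></pre>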



<p>Oftadeh and his team aim to take their algorithm to the next level by adding a new cost function to the network, a feature that makes it possible to pinpoint the exact location of the most important features. To demonstrate this, they ran an optical character recognition (OCR) experiment, training their machine learning model to convert images of both typed and handwritten text into machine-encoded text, using digitized physical documents for the experiment. Trained for OCR, the model can indicate which features are most important and should be prioritized. The researchers say their tool would handle bigger datasets as well, resulting in improved data analysis.</p>



<p>As of now, the researchers’ algorithm can handle only one-dimensional data samples, but the team intends to extend its capabilities to even more complex, unstructured data. They are prepared to face whatever challenges arise and to push the algorithm as far as it can go. They are also working on generalizing their method, so as to provide a unified framework from which other machine learning methods can be produced. The ultimate objective remains extracting features while dealing with a smaller set of specifications.</p>
<p>The post <a href="https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/">UNRAVELLING A NEW ALGORITHM CAPABLE OF REDUCING THE COMPLEXITY OF DATA</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/unravelling-a-new-algorithm-capable-of-reducing-the-complexity-of-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Algorithm helps artificial intelligence systems dodge &#8216;adversarial&#8217; inputs</title>
		<link>https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/</link>
					<comments>https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 09 Mar 2021 11:54:03 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[adversarial]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[dodge]]></category>
		<category><![CDATA[helps]]></category>
		<category><![CDATA[systems]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13345</guid>

					<description><![CDATA[<p>Source &#8211; https://techxplore.com/ In a perfect world, what you see is what you get. If this were the case, the job of artificial intelligence systems would be <a class="read-more-link" href="https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/">Algorithm helps artificial intelligence systems dodge &#8216;adversarial&#8217; inputs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://techxplore.com/</p>



<p>In a perfect world, what you see is what you get. If this were the case, the job of artificial intelligence systems would be refreshingly straightforward.</p>



<p>Take collision avoidance systems in self-driving cars. If visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action—steer right, steer left, or continue straight—to avoid hitting a pedestrian that its cameras see in the road.</p>



<p>But what if there&#8217;s a glitch in the cameras that slightly shifts an image by a few pixels? If the car blindly trusted so-called &#8216;adversarial inputs,&#8217; it might take unnecessary and potentially dangerous action.</p>



<p>A new deep-learning algorithm developed by MIT researchers is designed to help machines navigate in the real, imperfect world, by building a healthy &#8216;skepticism&#8217; of the measurements and inputs they receive.</p>



<p>The team combined a reinforcement-learning algorithm with a deep neural network, both used separately to train computers in playing games like Go and chess, to build an approach they call CARRL, for Certified Adversarial Robustness for Deep Reinforcement Learning.</p>



<p>The researchers tested the approach in several scenarios, including a simulated collision-avoidance test and the video game Pong, and found that CARRL performed better—avoiding collisions and winning more Pong games—over standard machine-learning techniques, even in the face of uncertain, adversarial inputs.</p>



<p>&#8220;You often think of an adversary being someone who&#8217;s hacking your computer, but it could also just be that your sensors are not great, or your measurements aren&#8217;t perfect, which is often the case,&#8221; says Michael Everett, a postdoc in MIT&#8217;s Department of Aeronautics and Astronautics (AeroAstro). &#8220;Our approach helps to account for that imperfection and make a safe decision. In any safety-critical domain, this is an important approach to be thinking about.&#8221;</p>



<p>Everett is the lead author of a study outlining the new approach, which appears in IEEE&#8217;s <em>Transactions on Neural Networks and Learning Systems</em>. The study originated from MIT Ph.D. student Björn Lütjens&#8217; master&#8217;s thesis and was advised by MIT AeroAstro Professor Jonathan How.</p>



<p><strong>Possible realities</strong></p>



<p>To make AI systems robust against adversarial inputs, researchers have tried implementing defenses for supervised learning. Traditionally, a neural network is trained to associate specific labels or actions with given inputs. For instance, a neural network that is fed thousands of images labeled as cats, along with images labeled as houses and hot dogs, should correctly label a new image as a cat.</p>



<p>In robust AI systems, the same supervised-learning techniques could be tested with many slightly altered versions of the image. If the network lands on the same label—cat—for every image, there&#8217;s a good chance that, altered or not, the image is indeed of a cat, and the network is robust to any adversarial influence.</p>



<p>But running through every possible image alteration is computationally exhaustive and difficult to apply successfully to time-sensitive tasks such as collision avoidance. Furthermore, existing methods also don&#8217;t identify what label to use, or what action to take, if the network is less robust and labels some altered cat images as a house or a hotdog.</p>



<p>&#8220;In order to use neural networks in safety-critical scenarios, we had to find out how to take real-time decisions based on worst-case assumptions on these possible realities,&#8221; Lütjens says.</p>



<p><strong>The best reward</strong></p>



<p>The team instead looked to build on reinforcement learning, another form of machine learning that does not require associating labeled inputs with outputs, but rather aims to reinforce certain actions in response to certain inputs, based on a resulting reward. This approach is typically used to train computers to play and win games such as chess and Go.</p>



<p>Reinforcement learning has mostly been applied to situations where inputs are assumed to be true. Everett and his colleagues say they are the first to bring &#8220;certifiable robustness&#8221; to uncertain, adversarial inputs in reinforcement learning.</p>



<p>Their approach, CARRL, uses an existing deep-reinforcement-learning algorithm to train a deep Q-network, or DQN—a neural network with multiple layers that ultimately associates an input with a Q value, or level of reward.</p>



<p>The approach takes an input, such as an image with a single dot, and considers an adversarial influence, or a region around the dot where it actually might be instead. Every possible position of the dot within this region is fed through a DQN to find an associated action that would result in the most optimal worst-case reward, based on a technique developed by recent MIT graduate student Tsui-Wei &#8220;Lily&#8221; Weng Ph.D. &#8217;20.</p>
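

<p>The decision rule can be sketched simply, though CARRL itself uses certified bounds on the network&#8217;s outputs rather than the brute-force enumeration shown here: for each action, evaluate the Q-network over observations the adversary could have produced within the uncertainty region, and choose the action whose worst-case value is highest. The Q-function below is a toy stand-in, not a trained DQN.</p>



<pre class="wp-block-code"><code>import numpy as np

def q_values(observation):
    """Toy stand-in for a trained deep Q-network: one value per candidate action."""
    x = observation[0]                                   # e.g., observed ball position
    return np.array([-(x - a) ** 2 for a in (-1.0, 0.0, 1.0)])

def robust_action(observation, epsilon=0.3, samples=50):
    """Choose the action whose worst-case Q-value over the uncertainty region is best."""
    offsets = np.linspace(-epsilon, epsilon, samples)
    candidates = np.array([q_values(observation + np.array([d])) for d in offsets])
    worst_case = candidates.min(axis=0)                  # worst outcome for each action
    return int(worst_case.argmax())

# With a noisy observation of 0.2, pick the action that is safest under perturbation.
print(robust_action(np.array([0.2])))</code></pre>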



<p><strong>An adversarial world</strong></p>



<p>In tests with the video game Pong, in which two players operate paddles on either side of a screen to pass a ball back and forth, the researchers introduced an &#8220;adversary&#8221; that pulled the ball slightly further down than it actually was. They found that CARRL won more games than standard techniques, as the adversary&#8217;s influence grew.</p>



<p>&#8220;If we know that a measurement shouldn&#8217;t be trusted exactly, and the ball could be anywhere within a certain region, then our approach tells the computer that it should put the paddle in the middle of that region, to make sure we hit the ball even in the worst-case deviation,&#8221; Everett says.</p>



<p>The method was similarly robust in tests of collision avoidance, where the team simulated a blue and an orange agent attempting to switch positions without colliding. As the team perturbed the orange agent&#8217;s observation of the blue agent&#8217;s position, CARRL steered the orange agent around the other agent, taking a wider berth as the adversary grew stronger, and the blue agent&#8217;s position became more uncertain.</p>



<p>There did come a point when CARRL became too conservative, causing the orange agent to assume the other agent could be anywhere in its vicinity, and in response completely avoid its destination. This extreme conservatism is useful, Everett says, because researchers can then use it as a limit to tune the algorithm&#8217;s robustness. For instance, the algorithm might consider a smaller deviation, or region of uncertainty, that would still allow an agent to achieve a high reward and reach its destination.</p>



<p>In addition to overcoming imperfect sensors, Everett says CARRL may be a start to helping robots safely handle unpredictable interactions in the real world.</p>



<p>&#8220;People can be adversarial, like getting in front of a robot to block its sensors, or interacting with them, not necessarily with the best intentions,&#8221; Everett says. &#8220;How can a robot think of all the things people might try to do, and try to avoid them? What sort of adversarial models do we want to defend against? That&#8217;s something we&#8217;re thinking about how to do.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/">Algorithm helps artificial intelligence systems dodge &#8216;adversarial&#8217; inputs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/algorithm-helps-artificial-intelligence-systems-dodge-adversarial-inputs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning helps grow artificial organs</title>
		<link>https://www.aiuniverse.xyz/machine-learning-helps-grow-artificial-organs/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-helps-grow-artificial-organs/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 18 Sep 2020 06:48:54 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[artificial organs]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11661</guid>

					<description><![CDATA[<p>Source: myvetcandy.com Researchers from the Moscow Institute of Physics and Technology, Ivannikov Institute for System Programming, and the Harvard Medical School-affiliated Schepens Eye Research Institute have developed <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-helps-grow-artificial-organs/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-helps-grow-artificial-organs/">Machine learning helps grow artificial organs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: myvetcandy.com</p>



<p>Researchers from the Moscow Institute of Physics and Technology, Ivannikov Institute for System Programming, and the Harvard Medical School-affiliated Schepens Eye Research Institute have developed a neural network capable of recognizing retinal tissues during the process of their differentiation in a dish. Unlike humans, the algorithm achieves this without the need to modify cells, making the method suitable for growing retinal tissue for developing cell replacement therapies to treat blindness and conducting research into new drugs. The study was published in Frontiers in Cellular Neuroscience.</p>



<p>This would allow the applications of the technology to be expanded to multiple fields, including drug discovery and the development of cell replacement therapies to treat blindness.</p>



<p>In multicellular organisms, the cells making up different organs and tissues are not the same. They have distinct functions and properties, acquired in the course of development. They start out the same, as so-called stem cells, which have the potential to become any kind of cell the mature organism incorporates. They then undergo differentiation by producing proteins specific to certain tissues and organs.</p>



<p>The most advanced technique for replicating tissue differentiation in vitro relies on 3D cell aggregates called organoids. The method has already proved effective for studying the development of the retina, the brain, the inner ear, the intestine, the pancreas, and many other tissue types. Since organoid-based differentiation closely mimics natural processes, the resulting tissue is very similar to the one in an actual biological organ.</p>



<p>Some of the stages in cell differentiation toward retina have a stochastic (random) nature, leading to considerable variations in the number of cells with a particular function even between artificial organs in the same batch. The discrepancy is even greater when different cell lines are involved. As a result, it is necessary to have a means of determining which cells have already differentiated at a given point in time. Otherwise, experiments will not be truly replicable, making clinical applications less reliable, too.</p>



<p>To spot differentiated cells, tissue engineers use fluorescent proteins. By inserting the gene responsible for the production of such a protein into the DNA of cells, researchers ensure that it is synthesized and produces a signal once a certain stage in cell development has been reached. While this technique is highly sensitive, specific, and convenient for quantitative assessments, it is not suitable for cells intended for transplantation or hereditary disease modeling.</p>



<p>To address that pitfall, the authors of the recent study in&nbsp;<em>Frontiers in Cellular Neuroscience</em> have proposed an alternative approach based on tissue structure. No reliable and objective criteria for predicting the quality of differentiated cells have been formulated so far. The researchers proposed that the best retinal tissues &#8212; those most suitable for transplantation, drug screening, or disease modeling &#8212; should be selected using neural networks and artificial intelligence.</p>



<p>&#8220;One of the main focuses of our lab is applying the methods of bioinformatics, machine learning, and AI to practical tasks in genetics and molecular biology. And this solution, too, is at the interface between sciences. In it, neural networks, which are among the things MIPT traditionally excels at, address a problem important for biomedicine: predicting stem cell differentiation into retina,&#8221; said study co-author Pavel Volchkov, who heads the Genome Engineering Lab at MIPT.</p>



<p>&#8220;The human retina has a very limited capacity for regeneration,&#8221; the geneticist went on. &#8220;This means that any progressive loss of neurons &#8212; for example, in glaucoma &#8212; inevitably leads to complete loss of vision. And there is nothing a physician can recommend, short of getting a head start on learning Braille. Our research takes biomedicine a step closer to creating a cellular therapy for retinal diseases that would not only halt the progression but reverse vision loss.&#8221;</p>



<p>The team trained a neural network &#8212; that is, a computer algorithm that mimics the way neurons work in the human brain &#8212; to identify the tissues in a developing retina based on photographs made by a conventional light microscope. The researchers first had a number of experts identify the differentiated cells in 1,200 images via an accurate technique that involves the use of a fluorescent reporter. The neural network was trained on 750 images, with another 150 used for validation and 250 for testing predictions. At this last stage, the machine was able to spot differentiated cells with an 84% accuracy, compared with 67% achieved by humans.</p>
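

<p>The evaluation protocol is straightforward to sketch: split the labelled images into training, validation, and test sets in roughly the study&#8217;s proportions and measure accuracy on the held-out images. The sketch below uses synthetic feature vectors and a linear classifier purely as placeholders; the actual study used microscope photographs and a deep neural network.</p>



<pre class="wp-block-code"><code>import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Synthetic stand-ins for 1,200 labelled images, flattened to feature vectors;
# label 1 marks tissue judged differentiated by the fluorescent-reporter ground truth.
n_images, n_features = 1200, 256
images = rng.normal(size=(n_images, n_features))
labels = (images[:, :10].sum(axis=1) &gt; 0).astype(int)

# Roughly the study's protocol: 750 images for training, 150 for validation, rest for test.
shuffled = rng.permutation(n_images)
train, val, test = np.split(shuffled, [750, 900])

clf = LogisticRegression(max_iter=1000).fit(images[train], labels[train])
print("validation accuracy:", accuracy_score(labels[val], clf.predict(images[val])))
print("test accuracy:", accuracy_score(labels[test], clf.predict(images[test])))</code></pre>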



<p>&#8220;Our findings indicate that the current criteria used for early-stage retinal tissue selection may be subjective. They depend on the expert making the decision. However, we hypothesized that the tissue morphology, its structure, contains clues that enable predicting retinal differentiation, even at very early stages. And unlike a human, the computer program can extract that information!&#8221; commented Evgenii Kegeles of the MIPT Laboratory for Orphan Disease Therapy and Schepens Eye Research Institute, U.S.</p>



<p>&#8220;This approach does not require images of a very high quality, fluorescent reporters, or dyes, making it relatively easy to implement,&#8221; the scientist added. &#8220;It takes us one step closer to developing cellular therapies for the retinal diseases such as glaucoma and macular degeneration, which today invariably lead to blindness. Besides that, the approach can be transferred not just to other cell lines, but also to other human artificial organs.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-helps-grow-artificial-organs/">Machine learning helps grow artificial organs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-helps-grow-artificial-organs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>New algorithm can identify misogyny on Twitter</title>
		<link>https://www.aiuniverse.xyz/new-algorithm-can-identify-misogyny-on-twitter/</link>
					<comments>https://www.aiuniverse.xyz/new-algorithm-can-identify-misogyny-on-twitter/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 29 Aug 2020 06:04:59 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[identify]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Twitter]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11300</guid>

					<description><![CDATA[<p>Source: thenextweb.com Researchers from the&#160;Queensland University of Technology (QUT) in Australia have developed an algorithm that detects misogynistic content on Twitter. The&#160;team&#160;developed the system by first mining&#160;1 <a class="read-more-link" href="https://www.aiuniverse.xyz/new-algorithm-can-identify-misogyny-on-twitter/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/new-algorithm-can-identify-misogyny-on-twitter/">New algorithm can identify misogyny on Twitter</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: thenextweb.com</p>



<p>Researchers from the&nbsp;Queensland University of Technology (QUT) in Australia have developed an algorithm that detects misogynistic content on Twitter.</p>



<p>The&nbsp;team&nbsp;developed the system by first mining&nbsp;1 million tweets. They then refined the dataset by searching the posts for three abusive keywords: whore, slut, and rape.</p>



<p>Next, they categorized the remaining 5,000 tweets as either misogynistic or not, based on their context and intent. These labeled tweets were then fed to a machine learning classifier, which used the samples to create its own classification model.</p>
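

<p>The labelling-and-training step of such a pipeline can be sketched with off-the-shelf tools, as below. The QUT system itself is a deep learning model that tracks context and intent; the bag-of-words classifier here is only a stand-in to make the workflow concrete, and the example texts and labels are invented placeholders.</p>



<pre class="wp-block-code"><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labelled examples stand in for the 5,000 tweets annotated in the study;
# the texts and labels below are invented placeholders, not real data.
texts = [
    "get back to the kitchen",               # abusive given the surrounding context
    "I love cooking in my kitchen",          # benign use of similar wording
    "great talk today, you were brilliant",
    "nobody asked for your opinion, [abusive term]",
]
labels = [1, 0, 0, 1]                        # 1 = misogynistic, 0 = not

# A bag-of-words classifier as a stand-in for the study's deep learning model.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["get back to the kitchen where you belong"]))</code></pre>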



<p>The system uses a deep learning algorithm to adjust its knowledge of terminology&nbsp;as language evolves. While the AI built up its vocabulary, the researchers monitored the context and intent of the language, to help the algorithm differentiate between abuse, sarcasm, and “friendly use of aggressive terminology.”</p>



<p>“Take the phrase ‘get back to the kitchen’ as an example — devoid of context of structural inequality, a machine’s literal interpretation could miss the misogynistic meaning,” said Professor Richi Nayak, a co-author of the study.</p>



<p>“But seen with the understanding of what constitutes abusive or misogynistic language, it can be identified as a misogynistic tweet.”</p>



<p>Nayak said this enabled the system to understand different contexts just by analyzing text, and without the help of tone.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>We were very happy when our algorithm identified ‘go back to the kitchen’ as misogynistic — it demonstrated that the context learning works.</p></blockquote>



<p>The researchers say the model identifies misogynistic tweets with 75% accuracy. It could also be adjusted to spot racism, homophobia, or abuse of disabled people.</p>



<p>The team now wants social media platforms to develop their research into an abuse detection tool.</p>



<p>“At the moment, the onus is on the user to report abuse they receive,”&nbsp;said&nbsp;Nayak. “We hope our machine-learning solution can be adopted by social media platforms to automatically identify and report this content to protect women and other user groups online.”</p>



<p>You can read the research paper on the Springer database of academic journals.</p>
<p>The post <a href="https://www.aiuniverse.xyz/new-algorithm-can-identify-misogyny-on-twitter/">New algorithm can identify misogyny on Twitter</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/new-algorithm-can-identify-misogyny-on-twitter/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Fifty new planets discovered through machine-learning algorithm</title>
		<link>https://www.aiuniverse.xyz/fifty-new-planets-discovered-through-machine-learning-algorithm/</link>
					<comments>https://www.aiuniverse.xyz/fifty-new-planets-discovered-through-machine-learning-algorithm/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 27 Aug 2020 05:51:57 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[planets]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11248</guid>

					<description><![CDATA[<p>Source: malaysiasun.com Fifty potential planets have had their existence confirmed by a new machine learning algorithm developed by the University of Warwick scientists. For the first time, <a class="read-more-link" href="https://www.aiuniverse.xyz/fifty-new-planets-discovered-through-machine-learning-algorithm/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/fifty-new-planets-discovered-through-machine-learning-algorithm/">Fifty new planets discovered through machine-learning algorithm</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: malaysiasun.com</p>



<p>Fifty potential planets have had their existence confirmed by a new machine learning algorithm developed by the University of Warwick scientists.</p>



<p>For the first time, astronomers have used a process based on machine learning, a form of artificial intelligence, to analyse a sample of potential planets and determine which ones are real and which are &#8216;fakes,&#8217; or false positives, calculating the probability of each candidate to be a true planet.</p>



<p>Their results are reported in a new study published in the Monthly Notices of the Royal Astronomical Society, where they also perform the first large scale comparison of such planet validation techniques. Their conclusions make the case for using multiple validation techniques, including their machine learning algorithm, when statistically confirming future exoplanet discoveries.</p>



<p>Many exoplanet surveys search through huge amounts of data from telescopes for the signs of planets passing between the telescope and their star, known as transiting. This results in a telltale dip in light from the star that the telescope detects, but it could also be caused by a binary star system, interference from an object in the background, or even slight errors in the camera. These false positives can be sifted out in a planetary validation process.</p>



<p>Researchers from Warwick&#8217;s Departments of Physics and Computer Science, as well as The Alan Turing Institute, built a machine learning-based algorithm that can separate out real planets from fake ones in the large samples of thousands of candidates found by telescope missions such as NASA&#8217;s Kepler and TESS.</p>



<p>It was trained to recognise real planets using two large samples of confirmed planets and false positives from the now retired Kepler mission. The researchers then used the algorithm on a dataset of still unconfirmed planetary candidates from Kepler, resulting in fifty new confirmed planets, the first to be validated by machine learning.</p>



<p>Previous machine learning techniques have ranked candidates, but never determined the probability that a candidate was a true planet by themselves, a required step for planet validation.</p>



<p>Those fifty planets range from worlds as large as Neptune to smaller than the Earth, with orbits as long as 200 days to as little as a single day. By confirming that these fifty planets are real, astronomers can now prioritise these for further observations with dedicated telescopes.</p>



<p>&#8220;The algorithm we have developed lets us take fifty candidates across the threshold for planet validation, upgrading them to real planets. We hope to apply this technique to large samples of candidates from current and future missions like TESS and PLATO,&#8221; Dr David Armstrong, from the University of Warwick Department of Physics, said.</p>



<p>&#8220;In terms of planet validation, no-one has used a machine learning technique before. Machine learning has been used for ranking planetary candidates but never in a probabilistic framework, which is what you need to truly validate a planet. Rather than saying which candidates are more likely to be planets, we can now say what the precise statistical likelihood is. Where there is less than a 1% chance of a candidate being a false positive, it is considered a validated planet,&#8221; added Armstrong.</p>
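

<p>That validation criterion can be made concrete with a short sketch: train a probabilistic classifier on confirmed planets and known false positives, then count a candidate as validated only when its false-positive probability falls below 1 percent. The features and model below are placeholders, not the published framework.</p>



<pre class="wp-block-code"><code>import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Placeholder features (e.g., transit depth, duration, signal shape) for a training
# set of known false positives (label 0) and confirmed planets (label 1).
X_train = rng.normal(size=(1000, 3))
y_train = rng.integers(0, 2, size=1000)
X_train[y_train == 1] += 1.5                 # make the two classes loosely separable

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score new candidates: a candidate is validated only if the probability of it
# being a false positive is below 1 percent.
candidates = rng.normal(size=(5, 3)) + 1.5
p_false_positive = model.predict_proba(candidates)[:, 0]
validated = p_false_positive &lt; 0.01
print(list(zip(np.round(p_false_positive, 3), validated)))</code></pre>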



<p>&#8220;Probabilistic approaches to statistical machine learning are especially suited for an exciting problem like this in astrophysics that requires incorporation of prior knowledge &#8212; from experts like Dr Armstrong &#8212; and quantification of uncertainty in predictions. A prime example when the additional computational complexity of probabilistic methods pays off significantly,&#8221; Dr Theo Damoulas from the University of Warwick Department of Computer Science, and Deputy Director, Data Centric Engineering and Turing Fellow at The Alan Turing Institute, said.</p>



<p>Once built and trained, the algorithm is faster than existing techniques and can be completely automated, making it ideal for analysing the potentially thousands of planetary candidates observed in current surveys like TESS. The researchers argue that it should be one of the tools used collectively to validate planets in future.</p>



<p>&#8220;Almost 30 per cent of the known planets to date have been validated using just one method, and that&#8217;s not ideal. Developing new methods for validation is desirable for that reason alone. But machine learning also lets us do it very quickly and prioritise candidates much faster. We still have to spend time training the algorithm, but once that is done it becomes much easier to apply it to future candidates,&#8221; Dr Armstrong said.</p>



<p>&#8220;You can also incorporate new discoveries to progressively improve it. A survey like TESS is predicted to have tens of thousands of planetary candidates and it is ideal to be able to analyse them all consistently. Fast, automated systems like this that can take us all the way to validated planets in fewer steps let us do that efficiently,&#8221; added Dr Armstrong. (ANI)</p>
<p>The post <a href="https://www.aiuniverse.xyz/fifty-new-planets-discovered-through-machine-learning-algorithm/">Fifty new planets discovered through machine-learning algorithm</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/fifty-new-planets-discovered-through-machine-learning-algorithm/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google’s AutoML Zero lets the machines create algorithms to avoid human bias</title>
		<link>https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/</link>
					<comments>https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 16 Apr 2020 07:18:14 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AutoML]]></category>
		<category><![CDATA[bias]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Tech]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8210</guid>

					<description><![CDATA[<p>Source: thenextweb.com It looks like Google‘s working on some major upgrades to its autonomous machine learning development language ‘AutoML.’ According to a pre-print research paper authored by <a class="read-more-link" href="https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/">Google’s AutoML Zero lets the machines create algorithms to avoid human bias</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: thenextweb.com</p>



<p>It looks like Google’s working on some major upgrades to AutoML, its automated machine-learning development tool. According to a pre-print research paper authored by several of the big G’s AI researchers, ‘AutoML Zero’ is coming, and it’s bringing evolutionary algorithms with it.</p>



<p>AutoML is a tool from Google that automates the process of developing machine learning algorithms for various tasks. It’s user-friendly and completely open-source, and best of all, Google’s always updating it.</p>



<p>In its current iteration, AutoML has a few drawbacks. You still have to manually create and tune several algorithms to act as building blocks for the machine to get started. This allows it to take your work and experiment with new parameters in an effort to optimize what you’ve done. Novices can get around this problem by using pre-made algorithm packages, but Google’s working to automate this part too.</p>



<p>Per the Google team’s pre-print paper:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>It is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks. We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.</p><p>Despite the vastness of this space, evolutionary search can still discover two-layer neural networks trained by backpropagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g. CIFAR-10 variants, where modern techniques emerge in the top algorithms, such as bilinear interactions, normalized gradients, and weight averaging.</p><p>Moreover, evolution adapts algorithms to different task types: e.g., dropout-like techniques appear when little data is available.</p></blockquote>



<p>In other words: Google’s figured out how to tap evolutionary algorithms for AutoML using nothing but basic math concepts. The developers created a learning paradigm in which the machine will spit out 100 randomly generated algorithms and then work to see which ones perform the best.</p>



<p>After several generations, the algorithms improve until the machine finds ones that perform well enough to be kept and evolved further. To generate novel algorithms that can solve new problems, the ones that survive the evolutionary process are then tested against various standard AI problems, such as computer vision.</p>
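<p>To make that loop concrete, here is a deliberately tiny, hypothetical sketch of evolutionary search (it is not Google’s AutoML Zero code): 100 randomly generated ‘programs’ built from basic math operations are scored on a toy task, the fittest survive, and mutated copies of the survivors fill the next generation.</p>



<pre class="wp-block-code"><code># A hypothetical, minimal evolutionary-search sketch (not Google's
# AutoML Zero): each "program" is one basic math operation plus a
# constant, scored on a toy regression task, with the fittest
# survivors mutated to form the next generation.
import random
import numpy as np

OPS = [np.add, np.subtract, np.multiply]         # basic building blocks

def random_program():
    return (random.choice(OPS), random.uniform(-2.0, 2.0))

def run(program, x):
    op, c = program
    return op(x, c)

def fitness(program, x, y):
    return -np.mean((run(program, x) - y) ** 2)  # higher is better

def mutate(program):
    op, c = program
    if random.random() > 0.5:
        return (random.choice(OPS), c)           # swap the operation
    return (op, c + random.gauss(0.0, 0.1))      # nudge the constant

x = np.linspace(-1.0, 1.0, 50)
y = x * 3.0                                      # toy task to rediscover

population = [random_program() for _ in range(100)]
for generation in range(200):
    population.sort(key=lambda p: fitness(p, x, y), reverse=True)
    survivors = population[:20]                  # keep the fittest fifth
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]

best = max(population, key=lambda p: fitness(p, x, y))
print(best)  # typically ends up close to (np.multiply, 3.0)</code></pre>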



<p>Perhaps the most interesting byproduct of Google’s quest to completely automate the act of generating algorithms and neural networks is the reduction of human bias in our AI systems. Without us there to determine what the best starting point for development is, the machines are free to find things we’d never think of.</p>



<p>According to the researchers, AutoML Zero already outperforms its predecessor and similar state-of-the-art tools for automatically generating machine learning algorithms. Future research will involve setting a narrower scope for the AI and seeing how well it performs in more specific situations, using a hybrid approach that creates algorithms by combining ‘Zero’s’ self-discovery techniques with human-curated starter libraries.</p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/">Google’s AutoML Zero lets the machines create algorithms to avoid human bias</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/googles-automl-zero-lets-the-machines-create-algorithms-to-avoid-human-bias/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Facebook AI Researchers Are Relying on Maths for Automatic Translations of Words</title>
		<link>https://www.aiuniverse.xyz/facebook-ai-researchers-are-relying-on-maths-for-automatic-translations-of-words/</link>
					<comments>https://www.aiuniverse.xyz/facebook-ai-researchers-are-relying-on-maths-for-automatic-translations-of-words/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 15 Oct 2019 09:37:43 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[Automatic]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4644</guid>

					<description><![CDATA[<p>Source: news18.com Designers of machine translation tools still mostly rely on dictionaries to make a foreign language understandable. But now there is a new way: numbers. Facebook <a class="read-more-link" href="https://www.aiuniverse.xyz/facebook-ai-researchers-are-relying-on-maths-for-automatic-translations-of-words/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/facebook-ai-researchers-are-relying-on-maths-for-automatic-translations-of-words/">Facebook AI Researchers Are Relying on Maths for Automatic Translations of Words</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: news18.com</p>



<p>Designers of machine translation tools still mostly rely on dictionaries to make a foreign language understandable. But now there is a new way: numbers. Facebook researchers say rendering words into figures and exploiting mathematical similarities between languages is a promising avenue, even if a universal communicator a la Star Trek remains a distant dream. Powerful automatic translation is a big priority for internet giants. Allowing as many people as possible worldwide to communicate is not just an altruistic goal, but also good business. Facebook, Google and Microsoft as well as Russia&#8217;s Yandex, China&#8217;s Baidu and others are constantly seeking to improve their translation tools.</p>



<p>Facebook has artificial intelligence experts on the job at one of its research labs in Paris. Up to 200 languages are currently used on Facebook, said Antoine Bordes, European co-director of fundamental AI research for the social network. Automatic translation is currently based on having large databases of identical texts in both languages to work from. But for many language pairs there just aren&#8217;t enough such parallel texts. That&#8217;s why researchers have been looking for another method, like the system developed by Facebook, which creates a mathematical representation for words. Each word becomes a &#8220;vector&#8221; in a space of several hundred dimensions. Words that have close associations in the spoken language also find themselves close to each other in this vector space.</p>



<p><strong>From Basque to Amazonian?</strong></p>



<p>&#8220;For example, if you take the words &#8216;cat&#8217; and &#8216;dog&#8217;, semantically, they are words that describe a similar thing, so they will be extremely close together physically&#8221; in the vector space, said Guillaume Lample, one of the system&#8217;s designers. &#8220;If you take words like Madrid, London, Paris, which are European capital cities, it&#8217;s the same idea.&#8221; These language maps can then be linked to one another using algorithms, at first roughly, but eventually becoming more refined, until entire phrases can be matched without too many errors.</p>
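<p>A rough sketch of that idea in code (a hypothetical illustration, not Facebook&#8217;s actual system) is shown below: words are stored as vectors, nearness is measured with cosine similarity, and a linear map fitted on a small seed dictionary lines one language&#8217;s vector space up with another&#8217;s.</p>



<pre class="wp-block-code"><code># Hypothetical sketch of the general approach, not Facebook's system:
# align one language's word vectors with another's using an orthogonal
# map fitted on a seed dictionary, then compare words by cosine similarity.
import numpy as np

def fit_mapping(src, tgt):
    """Orthogonal W minimising the distance between src @ W and tgt."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy 300-dimensional embeddings standing in for a seed dictionary;
# a real system would use hundreds of thousands of word pairs
rng = np.random.default_rng(0)
english = rng.normal(size=(5, 300))   # e.g. cat, dog, Madrid, London, Paris
french  = rng.normal(size=(5, 300))   # chat, chien, Madrid, Londres, Paris

W = fit_mapping(english, french)
print(cosine(english[0] @ W, french[0]))  # mapped 'cat' vs. 'chat'</code></pre>



<p>With realistic embeddings trained on large bodies of text, the mapped vectors of translation pairs land close together, which is what allows words to be matched without parallel texts.</p>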



<p>Lample said results are already promising. For the language pair of English-Romanian, Facebook&#8217;s current machine translation system is &#8220;equal or maybe a bit worse&#8221; than the word vector system, said Lample. But for the rarer language pair of English-Urdu, where Facebook&#8217;s traditional system doesn&#8217;t have many bilingual texts to reference, the word vector system is already superior, he said.</p>



<p>But could the method allow translation from, say, Basque into the language of an Amazonian tribe? In theory, yes, said Lample, but in practice, a large body of written texts is needed to map the language, something lacking in Amazonian tribal languages. &#8220;If you have just tens of thousands of phrases, it won&#8217;t work. You need several hundreds of thousands,&#8221; he said.</p>



<p><strong>Holy Grail</strong></p>



<p>Experts at France&#8217;s CNRS national scientific centre said the approach Lample has taken for Facebook could produce useful results, even if it doesn&#8217;t result in perfect translations. Thierry Poibeau of CNRS&#8217;s Lattice laboratory, which also does research into machine translation, called the word vector approach &#8220;a conceptual revolution&#8221;. He said &#8220;translating without parallel data&#8221;, meaning without dictionaries or versions of the same documents in both languages, is something of the &#8220;Holy Grail&#8221; of machine translation.</p>



<p>&#8220;But the question is what level of performance can be expected&#8221; from the word vector method, said Poibeau. The method &#8220;can give an idea of the original text&#8221; but the capability for a good translation every time remains unproven. Francois Yvon, a researcher at CNRS&#8217;s Computer Science Laboratory for Mechanics and Engineering Sciences, said &#8220;the linking of languages is much more difficult&#8221; when they are far removed from one another. &#8220;The manner of denoting concepts in Chinese is completely different from French,&#8221; he added. However even imperfect translations can be useful, said Yvon, and could prove sufficient to track hate speech, a major priority for Facebook.</p>
<p>The post <a href="https://www.aiuniverse.xyz/facebook-ai-researchers-are-relying-on-maths-for-automatic-translations-of-words/">Facebook AI Researchers Are Relying on Maths for Automatic Translations of Words</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/facebook-ai-researchers-are-relying-on-maths-for-automatic-translations-of-words/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
