<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>algorithms Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/algorithms/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/algorithms/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 16 Jul 2021 07:01:28 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>EHR Data Boosts Machine Learning Algorithms for Chronic Disease</title>
		<link>https://www.aiuniverse.xyz/ehr-data-boosts-machine-learning-algorithms-for-chronic-disease/</link>
					<comments>https://www.aiuniverse.xyz/ehr-data-boosts-machine-learning-algorithms-for-chronic-disease/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 16 Jul 2021 07:01:27 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[Boosts]]></category>
		<category><![CDATA[Chronic Disease]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=15055</guid>

					<description><![CDATA[<p>Source &#8211; https://healthitanalytics.com/ A study reveals that the use of machine learning algorithms leveraging EHR data could assist in a patient’s lung cancer prognosis. By using machine learning <a class="read-more-link" href="https://www.aiuniverse.xyz/ehr-data-boosts-machine-learning-algorithms-for-chronic-disease/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ehr-data-boosts-machine-learning-algorithms-for-chronic-disease/">EHR Data Boosts Machine Learning Algorithms for Chronic Disease</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://healthitanalytics.com/</p>



<p>A study reveals that the use of machine learning algorithms leveraging EHR data could assist in a patient’s lung cancer prognosis.</p>



<p>By using machine learning algorithms, researchers examined whether creating a large-scale electronic health record (EHR) data-based lung cancer cohort could be effective in studying a patient’s prognosis and estimating survival. The cohort study was recently published in&nbsp;<em>JAMA.</em></p>



<p>Across the world, lung cancer is among the most commonly diagnosed cancers, second only to skin cancer, and is the leading cause of cancer-related deaths. In the United States, the current five-year survival rate is around 20.6 percent. However, patients with lung cancer will have different outcomes based on a variety of clinical factors.</p>



<p>“A large cohort with adequate clinical information is necessary to identify stable and reliable prognostic variables and the factors associated with improved survival outcomes,” the authors wrote in the study.</p>



<h4 class="wp-block-heading">Dig Deeper</h4>



<ul class="wp-block-list"><li>Machine Learning Algorithm Brings Predictive Analytics to Cell Study</li><li>Machine Learning Model Helps Predict Clinical Lab Test Results</li><li>Deep Learning Aids Prediction of Lung Cancer Immunotherapy Response</li></ul>



<p>As the accessibility of EHR data continues to grow, researchers are given a timely and low-cost alternative to the traditional cohort study. With EHR data being coded in various ways, implementing machine learning algorithms was an important step for researchers to compare information accurately.</p>



<p>“Our primary goal was to build a large and reliable lung cancer EHR cohort that could be used for studying lung cancer progression with a set of generalizable approaches. To this end, we combined structured data and unstructured data to identify patients with lung cancer and extract clinical variables. We evaluated the completeness and accuracy of the extracted data,” the authors wrote.</p>



<p>“To further illustrate the application of EHR cohort data, we developed and validated a prognostic model to predict 1-year to 5-year overall survival (OS) among individuals with non–small cell lung cancer (NSCLC),” the study authors continued.</p>



<p>In the cohort study, patients with lung cancer were identified from 76,643 individuals with at least one lung cancer diagnostic code deposited in an EHR in the Mass General Brigham health care system from July 1988 to October 2018.</p>



<p>A machine learning algorithm identified patients and extracted clinical information from structured and unstructured data by using natural language processing tools. Researchers then examined the data’s completeness and accuracy by comparing it against the Boston Lung Cancer Study and standard EHR review results.</p>



<p>Additionally, a prognostic model for non-small cell lung cancer (NSCLC) overall survival was created for clinical application.</p>



<p>Of the 76,643 patients with at least one lung cancer diagnostic code, 42,069 were identified as having lung cancer. The AI tool produced a positive predictive value of 94.4 percent. The study cohort was made up of 35,375 patients after removing those with a history of lung cancer and those with less than 14 days of follow-up after the initial diagnosis.</p>
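


<p>As a rough illustration of that metric (not the study’s code, and with hypothetical counts chosen only to reproduce the reported figure), positive predictive value is the share of algorithm-flagged patients who truly have the disease:</p>



<pre class="wp-block-code"><code># Positive predictive value (PPV): of all patients the algorithm flagged
# as having lung cancer, the fraction who truly do.
def ppv(true_positives, false_positives):
    return true_positives / (true_positives + false_positives)

# Hypothetical counts chosen to reproduce the reported 94.4 percent.
print(round(ppv(39713, 2356) * 100, 1))  # 94.4</code></pre>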



<p>“We assembled a large lung cancer cohort from EHRs using a phenotyping algorithm and extraction strategies combining structured and unstructured data. Our findings suggest that a prognostic model based on EHR cohort may be used conveniently to facilitate prediction of NSCLC survival,” the authors concluded.</p>
<p>The post <a href="https://www.aiuniverse.xyz/ehr-data-boosts-machine-learning-algorithms-for-chronic-disease/">EHR Data Boosts Machine Learning Algorithms for Chronic Disease</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ehr-data-boosts-machine-learning-algorithms-for-chronic-disease/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine Learning Algorithms Are the Design Tools of the Information Age</title>
		<link>https://www.aiuniverse.xyz/machine-learning-algorithms-are-the-design-tools-of-the-information-age/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-algorithms-are-the-design-tools-of-the-information-age/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 05 Mar 2021 07:07:28 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Age]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[information]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13254</guid>

					<description><![CDATA[<p>Source &#8211; https://www.metropolismag.com/ The coleader of computational design at SmithGroup explains how machine learning tools can refine data into information, helping designers work smarter. Technology is changing <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-algorithms-are-the-design-tools-of-the-information-age/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-algorithms-are-the-design-tools-of-the-information-age/">Machine Learning Algorithms Are the Design Tools of the Information Age</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.metropolismag.com/</p>



<p>The coleader of computational design at SmithGroup explains how machine learning tools can refine data into information, helping designers work smarter.</p>



<p><em>Technology is changing the world as we know—and design—it. But have architects and designers unlocked the full potential of cutting-edge digital tools? In this series of comments, practitioners with a visionary approach examine some of the most influential and disruptive tech today—like blockchain technology, VR/AR/MR, spatial computing, machine learning, and cloud computing—and envisage their impact on the practice of architecture and interior design tomorrow. The changes they describe, while forecasts, will likely come to fruition, driving the way we plan, work, and create. Consider this a glimpse of the not-so-distant future.</em></p>



<p>Machine learning will enable a more integrated, informed design process by disrupting how and when architects engage with data. If we begin by viewing machine learning as a collection of algorithmic tools that refine data into information—just as a saw helps to shape wood into furniture—the opportunities these tools present become more focused. I imagine machine learning tools will help design professionals understand the impact of decisions as they are made—not days or weeks later.</p>



<p>Methods such as surrogate modeling, which uses regressor algorithms to replace slow calculation engines with an instantaneous predictive “surrogate,” will support real-time, data-rich design interfaces that allow teams to react at the speed of a designer’s curiosity. I expect future engineers will operate like data analysts. They will spend most of their time modeling, analyzing, and explaining data rather than manually operating analysis software. For example, once a design challenge is parametrically modeled and translated into structured data—an approach we call design space exploration—simple algorithms like multiple linear regression can measure which parameters have the greatest impact on performance.</p>
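


<p>As a hedged illustration of this idea (not SmithGroup’s tooling; the parameter names and data below are hypothetical), a multiple linear regression surrogate can be fitted to a handful of design parameters, with each parameter’s influence read off the coefficients:</p>



<pre class="wp-block-code"><code># A minimal surrogate-modeling sketch: fit a fast regressor on a few
# runs of a slow calculation engine, then use it as an instantaneous
# predictive "surrogate". Parameter names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Columns: glazing ratio, orientation, floor depth (hypothetical parameters).
X = rng.uniform(0, 1, size=(200, 3))
# Stand-in for a slow energy/daylight simulation engine.
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, 200)

surrogate = LinearRegression().fit(X, y)
# Coefficient magnitudes indicate which parameter drives performance most.
print(dict(zip(["glazing", "orientation", "floor_depth"], surrogate.coef_)))</code></pre>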



<p>Clustering algorithms, classifiers, and dimensionality reduction techniques can then be used to tease out obscure relationships that can provide actionable direction to teams. These algorithms represent a fraction of the machine learning tools that design professionals can and should learn to use. But machine learning algorithms are not magic. They are tools of the information age that we can leverage to better inform the design process moving forward.</p>
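


<p>Continuing the hypothetical example above, a short sketch of how dimensionality reduction and clustering might surface families of similar design options:</p>



<pre class="wp-block-code"><code># A sketch of the follow-on analysis: reduce the (hypothetical) design
# space with PCA, then cluster design options into families.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))        # hypothetical design parameters

X2 = PCA(n_components=2).fit_transform(X)   # dimensionality reduction
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X2)   # clustering
print(labels[:10])                          # cluster id per design option</code></pre>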



<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-algorithms-are-the-design-tools-of-the-information-age/">Machine Learning Algorithms Are the Design Tools of the Information Age</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-algorithms-are-the-design-tools-of-the-information-age/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning, meet human emotions: How to help a computer monitor your mental state</title>
		<link>https://www.aiuniverse.xyz/machine-learning-meet-human-emotions-how-to-help-a-computer-monitor-your-mental-state/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-meet-human-emotions-how-to-help-a-computer-monitor-your-mental-state/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 20 Aug 2020 06:35:15 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[Cybernetics Magazine]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[meet human emotions]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[Science and Engineering]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11064</guid>

					<description><![CDATA[<p>Source: EurekAlert Researchers from Skoltech, INRIA and the RIKEN Advanced Intelligence Project have considered several state-of-the-art machine learning algorithms for the challenging tasks of determining the mental workload <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-meet-human-emotions-how-to-help-a-computer-monitor-your-mental-state/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-meet-human-emotions-how-to-help-a-computer-monitor-your-mental-state/">Machine learning, meet human emotions: How to help a computer monitor your mental state</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: EurekAlert</p>



<p>Researchers from Skoltech, INRIA and the RIKEN Advanced Intelligence Project have considered several state-of-the-art machine learning algorithms for the challenging tasks of determining the mental workload and affective states of a human brain. Their software can help design smarter brain-computer interfaces for applications in medicine and beyond. The paper was published in the IEEE Systems, Man, and Cybernetics Magazine.</p>



<p>A brain-computer interface, or BCI, is a link between a human brain and a machine that can allow users to control various devices, such as robot arms or a wheelchair, by brain activity only (these are called active BCIs) or can monitor the mental state or emotions of a user and categorize them (these are passive BCIs). Brain signals in a BCI are usually measured by electroencephalography, a typically noninvasive method of recording electrical activity of the brain.</p>



<p>But there is quite a long way from raw continuous EEG signals to digitally processed signals or patterns that would have the ability to correctly identify a user&#8217;s mental workload or affective states, something that passive BCIs need to be functional. Existing experiments have shown that the accuracy of these measurements, even for simple tasks of, say, discriminating low from high workload, is insufficient for reliable practical applications.</p>



<p>&#8220;The low accuracy is due to extremely high complexity of a human brain. The brain is like a huge orchestra with thousands of musical instruments from which we wish to extract specific sounds of each individual instrument using a limited number of microphones or other sensors,&#8221; Andrzej Cichocki, professor at the Skoltech Center for Computational and Data-Intensive Science and Engineering (CDISE) and a coauthor of the paper, notes.</p>



<p>Thus, more robust and accurate algorithms for EEG classification and recognition of various brain patterns are badly needed. Cichocki and his colleagues looked at two groups of machine learning algorithms, Riemannian geometry based classifiers (RGC) and convolutional neural networks (CNN), which have been doing quite well on the active side of BCIs. The researchers wondered whether these algorithms can work not just for so-called motor imagery tasks, where a subject imagines movements of limbs without any real movement, but also for workload and affective state estimation.</p>
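


<p>As a simplified stand-in for the Riemannian approach (this is not the paper’s implementation), each EEG trial can be represented by the matrix logarithm of its channel covariance, a log-Euclidean approximation of the Riemannian geometry, and the flattened features classified; the data below is synthetic:</p>



<pre class="wp-block-code"><code># Simplified stand-in for Riemannian EEG classification: represent each
# trial by the matrix log of its channel covariance and classify the
# flattened features. Synthetic data; not the paper's implementation.
import numpy as np
from scipy.linalg import logm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 60, 8, 250
X = rng.normal(size=(n_trials, n_channels, n_samples))  # fake EEG epochs
y = rng.integers(0, 2, n_trials)                        # low/high workload

def features(epoch):
    cov = np.cov(epoch) + 1e-6 * np.eye(n_channels)  # regularized covariance
    return logm(cov).real.ravel()                    # log-Euclidean mapping

F = np.array([features(ep) for ep in X])
clf = LogisticRegression(max_iter=1000).fit(F, y)
print(clf.score(F, y))  # training accuracy on the synthetic data</code></pre>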



<p>They ran a competition of sorts among seven algorithms, two of which the scientists designed themselves by further improving well-performing Riemannian methods. The algorithms were tested in two studies: one with a typical arrangement for BCIs, where algorithms were trained on data from a specific subject and later tested on that same subject, and one that was subject-independent &#8212; a much more challenging setup, since brain signals can differ considerably from person to person. Real EEG data was taken from earlier experiments done by Fabien Lotte, a coauthor of the paper, and his colleagues, as well as from DEAP, an existing database for emotion analysis.</p>



<p>The scientists found, for instance, that an artificial deep neural network outperformed all its competitors quite significantly in the workload estimation task but did poorly in emotion classification. And the two modified Riemannian algorithms did quite well in both tasks. Overall, as the paper concludes, using passive BCIs for affective state classification is much harder than for workload estimation, and subject-independent calibration leads, at least for now, to much lower accuracies.</p>



<p>&#8220;In the next steps, we plan to use more sophisticated artificial intelligence (AI) methods, especially deep learning, which allow us to detect very tiny changes in brain signals or brain patterns. Deep neural networks can be trained on the basis of a large set of data for many subjects in different scenarios and under different conditions. AI is a real revolution and is also potentially useful for BCI and recognition of human emotions,&#8221; Cichocki said.</p>



<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-meet-human-emotions-how-to-help-a-computer-monitor-your-mental-state/">Machine learning, meet human emotions: How to help a computer monitor your mental state</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-meet-human-emotions-how-to-help-a-computer-monitor-your-mental-state/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>DECODING THE LINK BETWEEN ARTIFICIAL NEURAL NETWORKS AND DEEP LEARNING ALGORITHMS</title>
		<link>https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/</link>
					<comments>https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 24 Jun 2020 07:50:53 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[artificial neural networks]]></category>
		<category><![CDATA[DECODING]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9750</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net The idea of creating intelligent systems has always fascinated data science professionals. The advent of computers and technology uplifts the notion that an algorithm that can <a class="read-more-link" href="https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/">DECODING THE LINK BETWEEN ARTIFICIAL NEURAL NETWORKS AND DEEP LEARNING ALGORITHMS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>The idea of creating intelligent systems has always fascinated data science professionals. The advent of computers and technology reinforced the notion of an algorithm that can learn from itself and adapt to changing model inputs. The art of self-learning algorithms supplying data science with valuable information is an uncharted territory that AI-powered neural networks aim to explore further, thanks to the growing interest of professionals and technology experts alike.</p>



<h4 class="wp-block-heading"><strong>Understanding Artificial Neural Networks (ANNs)</strong></h4>



<p>To understand the complexities of Artificial Neural Networks (ANNs), let’s first decode how our brain learns and relearns from different experiences. The human brain is made up of interconnected networks of cells called neurons, which are responsible for processing different pieces of information. Let’s understand this through the concept of a hierarchical pyramid: our brain is composed of different levels, and each level is responsible for decoding and understanding information from the surroundings.</p>



<p>As information passes through the hierarchically arranged levels, each layer of neurons understands and processes it, gathers knowledgeable insight, and passes the information to the next layer in the hierarchy, ensuring that the information which reaches the pinnacle of the pyramid is accurate and free of bias.</p>



<p>Let’s understand the Artificial Neural Network through food!</p>



<p>For example, when you get a whiff of something delicious cooking, for instance a loaf of banana bread with chocolate chips baking, your brain may process the information as… ‘I smell banana bread and chocolate chips,’ (that’s your data input) … ‘I love banana bread with chocolate chips!’ (thought) … ‘I’ll eat a lot of banana bread with chocolate chips’ (decision making) … ‘Oh, but they add to calories, I promised to go on a diet’ (memory) … ‘But, one slice won’t hurt?’ (reasoning) ‘I will have one slice for sure!’ (final course of action).</p>



<p>Likewise, ANNs seek to simulate information passing through layers of interconnected brain cells, letting them learn and make decisions in a realistically human-like manner. This is the layered approach to processing information that ANNs strive to simulate. The human brain is complex and replicating it is a tough task; however, in its simplest form, an ANN can comprise three layers of neurons:</p>



<p>1. The input layer (for data input)</p>



<p>2. The hidden layer (information processing layer)</p>



<p>3. The output layer (decision-making step).</p>



<p>A lot happens in the hidden layer, often called the black box of ANN decision making. The black box can contain multiple hidden layers through which information flows from one layer to another, just like what happens inside the human brain.</p>
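


<p>A minimal sketch of such a three-layer network, written as a plain NumPy forward pass (random weights, no training; purely illustrative):</p>



<pre class="wp-block-code"><code># A minimal three-layer ANN in the sense described above: input, one
# hidden layer, output. Weights are random; this is a sketch, not a
# training loop.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # input layer: 4 features

W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)   # input -> hidden
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)   # hidden -> output

hidden = np.tanh(W1 @ x + b1)                   # hidden layer: processing
logits = W2 @ hidden + b2                       # output layer: decision scores
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over decisions
print(probs)</code></pre>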



<h4 class="wp-block-heading"><strong>Comprehending Deep Learning Algorithms</strong></h4>



<p>Deep learning seeks to understand what exactly happens within those hidden layers of the ANN. Representing the very cutting edge of Artificial Intelligence (AI), a deep learning algorithm trains itself to process and learn from the data injected into the model at the input layer.</p>



<p>How is that possible? Thanks to the hidden layers of ANNs, which together form what is called a ‘deep neural network’ (DNN), or in simple words, deep learning. It is a self-teaching algorithm that filters information through multiple hidden layers, much as a human mind does. Here are some interesting concepts and viewpoints on deep learning:</p>



<p>• Goodfellow, Bengio and Courville explained that while shallow neural networks can be trained to handle complex problems, deep learning networks gain accuracy as more neuron layers are added to the information hierarchy.</p>



<p>•&nbsp;These additional layers can yield gains in accuracy up to the 9<sup>th</sup>&nbsp;or 10<sup>th</sup>&nbsp;layer, after which a decline is observed in their predictive power.</p>



<p>•&nbsp;At present, most ANN implementations deploy a maximum of 3-10 deep network neuron layers.</p>



<h4 class="wp-block-heading"><strong>Bridging the Gap between ANNs and Deep Learning</strong></h4>



<p>To make DNNs “learn” increasingly complex algorithms for accurate prediction and classification, several features run behind the black box; adding more layers to the hidden stack is one of them. More layers and more neurons do produce complex models with greater accuracy, but at the same time, data science experts must weigh the cost and time of model building, as the sketch below illustrates.</p>
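


<p>As a hedged sketch of this trade-off (placeholder data, with scikit-learn’s MLPClassifier standing in for a production model), compare one hidden layer against a deeper stack:</p>



<pre class="wp-block-code"><code># Depth/accuracy/cost trade-off: the same model family with one hidden
# layer vs. three. Dataset and layer sizes are arbitrary placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for layers in [(32,), (32, 32, 32)]:     # shallow vs. deeper hidden stack
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=500,
                        random_state=0).fit(Xtr, ytr)
    print(layers, clf.score(Xte, yte))   # more layers: more cost, maybe more accuracy</code></pre>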



<p>The tech world is looking forward to achieving the perfect balance of time, cost, model building and accuracy in predictions with deep neural networks, to solve complex classification and prediction tasks in a jiffy.</p>
<p>The post <a href="https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/">DECODING THE LINK BETWEEN ARTIFICIAL NEURAL NETWORKS AND DEEP LEARNING ALGORITHMS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/decoding-the-link-between-artificial-neural-networks-and-deep-learning-algorithms/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AD CLICK-THROUGH-RATE (CTR) PREDICTION USING REINFORCEMENT LEARNING</title>
		<link>https://www.aiuniverse.xyz/ad-click-through-rate-ctr-prediction-using-reinforcement-learning/</link>
					<comments>https://www.aiuniverse.xyz/ad-click-through-rate-ctr-prediction-using-reinforcement-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 12 May 2020 11:10:12 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[CTR]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8726</guid>

					<description><![CDATA[<p>Source: analyticsindiamag.com Almost all websites on the internet display ads, and companies that wish to advertise their products choose these web spaces as a <a class="read-more-link" href="https://www.aiuniverse.xyz/ad-click-through-rate-ctr-prediction-using-reinforcement-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ad-click-through-rate-ctr-prediction-using-reinforcement-learning/">AD CLICK-THROUGH-RATE (CTR) PREDICTION USING REINFORCEMENT LEARNING</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsindiamag.com</p>



<p>Almost all websites on the internet display ads, and companies that wish to advertise their products choose these web spaces as a medium of advertisement. The challenge is that if a company has a range of advertisement versions, which of these versions will get the highest conversion rate, i.e. the maximum number of clicks on the ad?</p>



<p>In this article, we will discuss reinforcement learning in Python for Click-Through-Rate (CTR) prediction of web advertisements. We will see a practical implementation of the Upper Confidence Bound (UCB), a reinforcement learning method applied to this task. Using this implementation, one can find the version of an advertisement, from a set of available versions, that gets the maximum number of clicks from visitors to the website.</p>



<h4 class="wp-block-heading">The Upper Confidence Bound (UCB) Method</h4>



<p>The Upper Confidence Bound (UCB) algorithm belongs to the family of reinforcement learning algorithms. It is applied in action selection, where it uses uncertainty in the action-value estimates to balance exploration and exploitation. This method is popularly used to solve the Multi-Armed Bandit Problem. For more details on the UCB algorithm, please read the article “Reinforcement Learning: The Concept Behind UCB Explained With Code”.</p>



<h4 class="wp-block-heading">The Dataset</h4>



<p>In this experiment, we have used the Ads CTR Optimization dataset that is publicly available on Kaggle. This dataset comprises the responses of 10,000 visitors to 10 advertisements displayed on a web platform. These 10 advertisements are actually 10 ad versions of the same product. The responses are represented in terms of rewards given to those 10 ads by visitors: if the visitor clicked on an ad, the reward is 1, and if the visitor ignored the ad, the reward is 0. Now, based on these rewards, the task is to identify which among the 10 ads has the highest CTR so that the ad with the highest conversion rate can be placed on the web platform.</p>



<h4 class="wp-block-heading">Implementation of Upper Confidence Bound (UCB)</h4>



<p>In this reinforcement learning in Python implementation, we will compare two approaches, random selection of ads and selection using the UCB method, so that we can evaluate the effectiveness of the UCB method. First, we need to import the required libraries and then the dataset that we have downloaded from Kaggle.</p>
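


<p>A minimal sketch of this step; the file name below follows the Kaggle dataset and should be adjusted to wherever you downloaded it:</p>



<pre class="wp-block-code"><code># Import the libraries and load the dataset. The file name follows the
# Kaggle "Ads CTR Optimisation" dataset; adjust the path to your download.
import pandas as pd

dataset = pd.read_csv('Ads_CTR_Optimisation.csv')
print(dataset.shape)   # (10000, 10): one row per visitor, one column per ad
print(dataset.head())</code></pre>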



<p>As the preview above shows, the dataset comprises values of 1 or 0 for the 10 ads displayed on the website. Each row of the dataset represents a visitor to the website: if the visitor clicked on an ad, the value for that ad is 1, and if the visitor ignored it, the value is 0. There are 10,000 such records in the dataset.</p>



<h4 class="wp-block-heading">Using Random Selection Method</h4>



<p>To see the difference, we first need to see how random selection of ad versions works: from the set of 10 ad versions, one ad is selected at random and displayed to the visitor, as in the sketch below.</p>
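


<p>A minimal sketch of the random-selection baseline (illustrative; it assumes the same dataset file as above):</p>



<pre class="wp-block-code"><code># Random-selection baseline: show one of the 10 ads at random to each of
# the 10,000 visitors and tally the rewards.
import random
import pandas as pd

dataset = pd.read_csv('Ads_CTR_Optimisation.csv')  # hypothetical path, see above
N, d = 10000, 10
total_reward = 0
for n in range(N):
    ad = random.randrange(d)                     # pick one ad uniformly at random
    total_reward += int(dataset.values[n, ad])   # 1 if this visitor clicked it
print(total_reward)  # roughly 1200 on this dataset; varies at each run</code></pre>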



<p>Once the above code snippet is executed, the algorithm selects one ad at random and displays it to the visitor. If the visitor clicks on the ad, a reward of 1 is added, and if the visitor ignores it, a reward of 0 is added.</p>



<p>For example, for the first user, the second ad might be displayed (keep in mind Python’s zero-based indices) and a reward of 0 added. For the second user, the 5th ad might be selected and a reward of 1 added. In the same way, this random selection and addition of rewards continues until the last, 10,000th visitor. In this way, the total reward value is calculated iteratively.</p>



<p>So, as we can see, the total reward using the random selection method is 1196. Now, we can visualize with a histogram the number of times each ad was clicked.</p>



<p>Please note that this entire process of random ad selection, including the histogram and total reward, will vary at each run of the program.</p>



<h4 class="wp-block-heading">Using the UCB Method</h4>



<p>In the above section, we saw the random selection of ads and the rewards received. Now we will see the implementation of the Upper Confidence Bound (UCB) method on the same task. Please refer to the formulae in the article referred to above for a better understanding.</p>
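


<p>Below is a minimal sketch of such an implementation, following the standard UCB1 selection rule (average reward plus an exploration term); the variable names are our own:</p>



<pre class="wp-block-code"><code># Upper Confidence Bound sketch: pick the ad with the highest average
# reward plus an exploration bonus, playing each ad once before the rule
# kicks in (UCB1-style; a sketch, not the article's original snippet).
import math
import pandas as pd

dataset = pd.read_csv('Ads_CTR_Optimisation.csv')  # hypothetical path, see above
N, d = 10000, 10
numbers_of_selections = [0] * d
sums_of_rewards = [0] * d
total_reward = 0
for n in range(N):
    bounds = []
    for i in range(d):
        if numbers_of_selections[i] == 0:
            bounds.append(float('inf'))   # play every ad once before comparing
        else:
            avg = sums_of_rewards[i] / numbers_of_selections[i]
            delta = math.sqrt(1.5 * math.log(n + 1) / numbers_of_selections[i])
            bounds.append(avg + delta)    # average reward + exploration term
    ad = bounds.index(max(bounds))
    reward = int(dataset.values[n, ad])
    numbers_of_selections[ad] += 1
    sums_of_rewards[ad] += reward
    total_reward += reward
print(total_reward)  # typically near double the random baseline on this dataset</code></pre>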



<p>When the above code snippet is executed, we will first see the selection of ads displayed to each visitor.</p>



<p>For the first visitor, the first ad is displayed and a reward of 1 is added. This process continues as we saw in the above section. Toward the last visitors, you can see that the 4th ad is displayed most of the time. This is because the 4th ad is mostly rewarded positively by the visitors, so the algorithm learnt that trend and displays the 4th ad most of the time.</p>



<p>The same trend can be seen in a histogram of the selections, where the 4th ad has the highest number of clicks.</p>



<p>Finally, we will see the total reward when using the UCB algorithm.</p>



<p>The total reward received when using the UCB algorithm is nearly double the total reward received with random selection. Finally, we can conclude that the Upper Confidence Bound (UCB) algorithm helps in finding the best ad, from a set of ad versions, to be displayed to visitors so that the maximum number of clicks and the highest conversion rate can be obtained. Using the number of clicks on each of the ads and the number of impressions, one can easily find the Click-Through Rate (CTR) of these ads. The CTR can be obtained as (Total No. of Clicks / Total Impressions) x 100.</p>
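


<p>A short, self-contained illustration of that formula with hypothetical totals:</p>



<pre class="wp-block-code"><code># Click-Through Rate: (total clicks / total impressions) x 100.
clicks, impressions = 2178, 10000   # hypothetical totals for one ad
ctr = clicks / impressions * 100
print(f"CTR: {ctr:.2f}%")           # CTR: 21.78%</code></pre>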



<p>We hope this implementation of reinforcement learning in Python helps you learn how it can be applied to predictive analytics.</p>
<p>The post <a href="https://www.aiuniverse.xyz/ad-click-through-rate-ctr-prediction-using-reinforcement-learning/">AD CLICK-THROUGH-RATE (CTR) PREDICTION USING REINFORCEMENT LEARNING</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ad-click-through-rate-ctr-prediction-using-reinforcement-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>When Artificial Intelligence Meets Cryptography: a Pop Culture Trope</title>
		<link>https://www.aiuniverse.xyz/when-artificial-intelligence-meets-cryptography-a-pop-culture-trope/</link>
					<comments>https://www.aiuniverse.xyz/when-artificial-intelligence-meets-cryptography-a-pop-culture-trope/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 25 Mar 2020 07:50:03 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[crypto]]></category>
		<category><![CDATA[Cryptography]]></category>
		<category><![CDATA[Culture]]></category>
		<category><![CDATA[Decentralization]]></category>
		<category><![CDATA[Pop Culture]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7707</guid>

					<description><![CDATA[<p>Source: hackernoon.com When envisioning pop culture, depending on what generation you might be born into, the perceptions range from HAL 9000’s glowing Red Eye from 2001 to <a class="read-more-link" href="https://www.aiuniverse.xyz/when-artificial-intelligence-meets-cryptography-a-pop-culture-trope/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/when-artificial-intelligence-meets-cryptography-a-pop-culture-trope/">When Artificial Intelligence Meets Cryptography: a Pop Culture Trope</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: hackernoon.com</p>



<p>When envisioning pop culture, depending on which generation you were born into, perceptions range from HAL 9000’s glowing red eye in 2001 to Ava from Ex Machina. However, the most modern conception of pop culture comes hand in hand with artificial intelligence. To make the picture whole, we look at artificial intelligence, or more specifically machine learning, and cryptography as sister fields, to see the implications of these phenomena in modern-day culture as well as in the times to come.</p>



<p>As products of a capitalist society, millennials are enamored with the concepts of privacy, security and personal space. As such, one can understand that the hype for artificial intelligence is essentially due to its capability of creating machines that can mimic or surpass human proficiency. These machines can self-optimize in various scenarios to keep up to date with each new phenomenon being introduced; thus, a closed-loop feedback system is induced. Furthermore, in the digital world the need for security is ever increasing, which is where cryptography comes in.</p>



<p>Assiduous individuals work behind the screens to introduce cryptographic algorithms which provide blankets of security and are astonishingly adept in many logic-based situations. Blockchain systems, when efficiently designed, also incentivize users to contribute to the network while benefiting from it, a property that is essential at a time when AIs have become game masters in their own right. This can be further understood through an example.</p>



<p>If developers build an AI solution on a centralized platform, they need to ensure an authentic interface for a reliable representation of the AI’s output. They also need to ensure that the integrity and security of the data are maintained. In a situation like this, if a blockchain system is used, its transparency and accessibility pump up the security required by the developer of the AI algorithm.</p>



<p>However, one must understand that artificial intelligence and its off-shoots are not a future trend; they are already here and they are influencing our day-to-day activities. Perceptions will always vary, but one can hold artificial intelligence analogous to a chocolate truffle surprise: no matter how many times you open it and see the decadent goodness, you can never be sure what flavour you will find inside. In short, for each new flavour, you will develop a different kind of fondness.</p>
<p>The post <a href="https://www.aiuniverse.xyz/when-artificial-intelligence-meets-cryptography-a-pop-culture-trope/">When Artificial Intelligence Meets Cryptography: a Pop Culture Trope</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/when-artificial-intelligence-meets-cryptography-a-pop-culture-trope/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Reinforcement Learning: The Algorithms Changing How Computers Make Decisions</title>
		<link>https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/</link>
					<comments>https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 23 Mar 2020 06:42:35 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[computers]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7636</guid>

					<description><![CDATA[<p>Source: inc42.com The last decade of tech was in large part defined by the advent of Deep Supervised Learning (DL). The availability of cheap data at scale, computational <a class="read-more-link" href="https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/">Reinforcement Learning: The Algorithms Changing How Computers Make Decisions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: inc42.com</p>



<p>The last decade of tech was in large part defined by the advent of Deep Supervised Learning (DL). The availability of cheap data at scale, computational power, and researcher interest have made it the de-facto school of algorithms used for most pattern recognition problems. Face recognition on social media, product recommendations on sites, and voice assistants like Google Assistant, Alexa, and Siri are some examples largely powered by DL.</p>



<p>The issue with deep learning is that the resources that led to its rise are also giving rise to inequities. Today, it is tough for startups to beat ‘big tech’ like Apple, Google, Amazon, and Microsoft in deep learning through better research capabilities or better data.</p>



<p>My prediction is that in the 2020s we shall see this inequity broken down, due to the rise of Deep Reinforcement Learning (RL) as a prominent algorithm for such problems.</p>



<p>RL, in essence, mimics what humans do. Let’s take the example of a kid learning to ride a bike. The kid has no understanding of what steps to take, but it tries to ride the bike for longer without falling down and learns in the process. You can’t explain how you ride a bike, just that you can ride it. RL works in a similar way: given an environment, it learns to optimise for a goal through multiple trials and errors.</p>



<p> “…  I believe that in some sense reinforcement learning is the future of AI … an intelligent system must be able to learn on its own, without constant supervision …” – Richard Sutton, Founding Father of Reinforcement Learning.</p>



<p>To go a bit deeper into the tech in a watered-down way, RL has three components – the state, the policy, and the action. The state is a description of what the environment is like right now. The policy evaluates the state and finds an optimal path to the goal set for the algorithm.</p>



<p>The action is the step suggested by the policy and taken by the algorithm to reach the goal. RL algorithms iteratively run through states, use their policy to generate an action, run the action, and given the environment’s feedback – called reward – optimise the policy to give more goal-oriented actions.</p>
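


<p>As a toy illustration of this loop (not any production system), the sketch below runs tabular Q-learning in a five-cell corridor where the only reward sits at the rightmost cell:</p>



<pre class="wp-block-code"><code># Toy Q-learning sketch of the state/policy/action/reward loop: a 1-D
# corridor where the agent earns a reward of 1 for reaching the rightmost
# cell. Everything here is illustrative.
import random

n_states, n_actions = 5, 2            # states 0..4; actions: 0=left, 1=right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Policy: epsilon-greedy over the current action-value estimates.
        if epsilon > random.random():
            action = random.randrange(n_actions)
        else:
            action = Q[state].index(max(Q[state]))
        # Environment step: move left/right; reward 1 only at the goal.
        next_state = max(0, min(n_states - 1, state + (1 if action else -1)))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Feedback optimises the policy toward more goal-oriented actions.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print([q.index(max(q)) for q in Q])  # learned action per state (mostly 1 = right)</code></pre>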



<p>In this manner, RL allows us to solve many problems without actually needing as much supervised/labelled data as a traditional DL model does – since it keeps generating its own data. Of course, there’s the caveat that RL doesn’t solve the same set of problems as DL – but there is a strong intersection. In this manner, RL can level the playing fields as Data may not necessarily be the moat it earlier was.</p>



<p>The biggest application of RL that we’ve seen until now has been in games: AlphaGo Zero, DeepMind’s expert-level AI for the board game Go; AlphaStar, DeepMind’s effort to master a multi-agent game like StarCraft; and OpenAI’s research showing multiple agents playing Hide and Seek. These all leverage RL.</p>



<p>In the future, I see RL changing how control systems are built for complex machines. Machines will leverage RL for 3-dimensional path and motion planning. RL will improve systems that tend to have conversational interfaces, leveraging each conversation to improve the policy. RL could potentially be used for most decision-making processes in extremely complex environments with little precedent data. This will be the decade of RL.</p>
<p>The post <a href="https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/">Reinforcement Learning: The Algorithms Changing How Computers Make Decisions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/reinforcement-learning-the-algorithms-changing-how-computers-make-decisions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Meet the engineer teaching robots how to get a grip</title>
		<link>https://www.aiuniverse.xyz/meet-the-engineer-teaching-robots-how-to-get-a-grip/</link>
					<comments>https://www.aiuniverse.xyz/meet-the-engineer-teaching-robots-how-to-get-a-grip/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 07 Feb 2020 06:49:53 +0000</pubDate>
				<category><![CDATA[Data Robot]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[Australian research]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[COLLABORATIVE ROBOTS]]></category>
		<category><![CDATA[HUMAN-ROBOT INTERACTIONS]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6616</guid>

					<description><![CDATA[<p>Source: createdigital.org.au Australian research into ‘active perception’ could change this – and eventually see high-tech assistants installed in our homes. The key lies in teaching robots to <a class="read-more-link" href="https://www.aiuniverse.xyz/meet-the-engineer-teaching-robots-how-to-get-a-grip/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/meet-the-engineer-teaching-robots-how-to-get-a-grip/">Meet the engineer teaching robots how to get a grip</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: createdigital.org.au</p>



<p>Australian research into ‘active perception’ could change this – and eventually see high-tech assistants installed in our homes.</p>



<p>The key lies in teaching robots to behave more like humans, said Doug Morrison, PhD researcher at the Australian Centre for Robotic Vision (ACRV).</p>



<p>“You can have a robot doing the same thing over and over very quickly, but as soon as anything changes, you need to reprogram it,” he told create.</p>



<p>“What we want are robots that can work in unstructured environments like your home, workplace, a hospital – you name it.”</p>



<p>Morrison studied electrical engineering and worked in mining and automation before landing at the ACRV, which is headquartered at the Queensland University of Technology in Brisbane. The centre also spans labs at the University of Adelaide, Monash University and the Australian National University.</p>



<p>He has been creating algorithms to help robots respond to their surroundings for the past three years, focusing on teaching them to grasp objects in real time. The ultimate goal is creating robots with the ability to move and think at the same time.</p>



<p>“My research is looking at ways to let the robot be an active participant in its world,” Morrison said.</p>



<p>“We want robots to be able to pick up objects that they’ve never seen before in environments they’ve never seen before.”</p>



<h3 class="wp-block-heading">Engineering active perception</h3>



<p>A key pillar of this is the Generative Grasping Convolutional Neural Network (GG-CNN) Morrison created in 2018, which lets robots more accurately and quickly grasp moving objects in cluttered spaces.</p>



<p>Before this, a robot would look at an object, take up to a minute to think, and then attempt to pick it up. Morrison’s approach speeds up the process.</p>



<p>He has since built on this by adding another layer that allows a robot to look around, which is not a skill robots have really possessed before.</p>



<p>Morrison’s main innovation is the development of a multi-view picking controller, which selects informative viewpoints for an eye-in-hand camera while reaching to grasp. This reveals high-quality grasps that would be hidden in a static view.</p>



<p>According to the ACRV, this active perception approach is the first in the world to focus on real-time grasping by stepping away from a static camera position or fixed data collecting routines.</p>



<p>It is also unique in the way it builds a ‘map’ of grasps in a pile of objects, which updates as the robot moves. This real-time mapping predicts the quality and pose of grasps at every pixel in a depth image.</p>
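


<p>The GG-CNN itself is not reproduced here, but as an illustrative sketch only, a small fully convolutional network mapping a depth image to per-pixel grasp-quality and angle maps might look like the following (PyTorch, arbitrary layer sizes):</p>



<pre class="wp-block-code"><code># Illustrative (not the actual GG-CNN) fully convolutional sketch: map a
# single-channel depth image to per-pixel grasp quality and angle channels.
import torch
import torch.nn as nn

class TinyGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
        )
        self.quality = nn.Conv2d(8, 1, 1)   # grasp quality per pixel
        self.angle = nn.Conv2d(8, 2, 1)     # sin/cos of grasp angle per pixel

    def forward(self, depth):
        h = self.features(depth)
        return torch.sigmoid(self.quality(h)), self.angle(h)

depth = torch.randn(1, 1, 64, 64)            # fake depth image
quality_map, angle_map = TinyGraspNet()(depth)
print(quality_map.shape)                     # per-pixel map: (1, 1, 64, 64)</code></pre>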



<p>“The beauty of our active perception approach is that it’s smarter and quicker than static, single viewpoint grasp detection methods thanks to our GG-CNN, which is 10 times faster than other systems,” Morrison said.</p>



<p>“We strip out lost time by making the act of reaching towards an object a meaningful part of the grasping pipeline rather than just a mechanical necessity.</p>



<p>“Like humans, this allows the robot to change its mind on-the-go in order to select the best object to grasp and remove from a messy pile of others.”</p>



<p>Morrison validated his approach by having a robotic arm remove 20 objects, one at a time, from a pile. He achieved an 80 per cent success rate, which is about 12 per cent more than traditional single viewpoint grasp detection methods.</p>



<p>“[The approach] allows the robot to act very fluidly and gain a better understanding of its world,” he said.</p>



<p>“It makes them much more useful and practical. It means they’re better at grasping in unstructured environments such as a cluttered home or warehouse, where a robot needs to walk around an object, understand what’s there and be able to compute the best way to pick it up.”</p>



<h3 class="wp-block-heading">Grasping with intent</h3>



<p>Currently, one of the main limitations of robotic learning is how data-hungry the algorithms are. To teach robots to grasp, researchers rely on either huge masses of precompiled data sets of objects or on an extensive amount of trial and error.</p>



<p>“Neither of these are practical if you want a robot that can learn to adapt to its environment very quickly,” Morrison said.</p>



<p>“We’re taking a step back and looking at automatically developing data sets that allow robots to efficiently learn to grasp effectively any object they encounter, even if they’ve never seen it – or anything like it – before.”</p>



<p>The next step is to teach robots to “grasp with intent”.</p>



<p>“It’s all well and good to be able to pick things up, but there are a lot of tasks where we need to pick up objects in a specific way,” Morrison said.</p>



<p>“We’re looking at everything from household tasks to stacking shelves and doing warehouse tasks like packing things into a box.”</p>



<p>He is also exploring how to use evolutionary algorithms to create new, diverse shapes that can be tested in simulations and also 3D printed for robots to practice on.</p>



<p>“We’re looking at coming up with ways of automatically developing shapes that fill gaps in a robot’s knowledge,” Morrison said.</p>



<p>“If you’ve learnt to grasp one object with a handle, seeing more objects with a handle might not help you very much … Seeing something completely different will probably fill a gap in your knowledge and accelerate your training a lot faster than seeing 10,000 more handles.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/meet-the-engineer-teaching-robots-how-to-get-a-grip/">Meet the engineer teaching robots how to get a grip</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/meet-the-engineer-teaching-robots-how-to-get-a-grip/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Concentric Applies Deep Learning Algorithms to Data Security</title>
		<link>https://www.aiuniverse.xyz/concentric-applies-deep-learning-algorithms-to-data-security/</link>
					<comments>https://www.aiuniverse.xyz/concentric-applies-deep-learning-algorithms-to-data-security/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 04 Feb 2020 04:54:36 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[CEO Karthik Krishnan]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[data security]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[intelligence platform]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6503</guid>

					<description><![CDATA[<p>Source: securityboulevard.com Fresh off raising an additional $7 million in funding, Concentric has launched a tool that employs deep learning algorithms to enable cybersecurity teams to identify <a class="read-more-link" href="https://www.aiuniverse.xyz/concentric-applies-deep-learning-algorithms-to-data-security/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/concentric-applies-deep-learning-algorithms-to-data-security/">Concentric Applies Deep Learning Algorithms to Data Security</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: securityboulevard.com</p>



<p>Fresh off raising an additional $7 million in funding, Concentric has launched a tool that employs deep learning algorithms to enable cybersecurity teams to identify documents and repositories where sensitive data has been stored.</p>



<p>Company CEO Karthik Krishnan said the Semantic Intelligence platform replaces reliance on users to remember where documents containing personally identifiable information (PII) might be stored, for example. The technology takes less than 30 minutes to install and about a week to learn the overall environment. Once the deep learning algorithms are trained to identify sensitive data, cybersecurity teams can focus their efforts on securing the documents and repositories where that data is stored, he said.</p>



<p>Krishnan said Concentric’s deep learning algorithms, also known as neural networks, have already been employed by beta customers to find millions of unprotected or inappropriately shared documents accessible by thousands of employees.</p>



<p>A business on average has nearly 10 million documents, he said, with 1.2 million documents deemed business-critical. Of those business-critical documents, more than 15% are at risk because of improper sharing with users and groups or inadequate/incorrect data classification. In addition to the inherent cybersecurity risks those documents represent, Krishnan noted organizations could be fined millions of dollars for breaching any number of compliance mandates.</p>



<p>Cybersecurity and compliance teams have wrestled with data classification issues for decades. Most processes are deeply flawed because they rely on end users to determine how sensitive a document might be. Over time, the number of documents that are misclassified even though they may include, for example, Social Security numbers starts to multiply. Rather than rely on end users, Concentric is making the case for employing artificial intelligence (AI) in the form of deep learning algorithms to determine the appropriate level of classification for any document and identify which policies should be applied, said Krishnan.</p>



<p>The existence of an AI tool to classify data should not absolve end users of the responsibility to classify data. However, given that people make mistakes or simply forget, an AI tool will enable cybersecurity and compliance teams to enforce policies at a more granular level without having to disrupt the business. That capability is becoming a fundamental requirement as regulations such as the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) raise the stakes involving almost any type of data breach or even accidental sharing of sensitive data.</p>



<p>While there’s obviously a lot of hype surrounding AI these days, it’s arguably in the realm of rote tasks where algorithms can prove most effective. The more narrowly focused the task, the more accurate algorithms tend to become. Of course, the more the humans employed to train algorithms know about a process, the faster the AI system tends to become effective. The challenge now is cutting through all the hype to identify practical use cases for AI. Arguably, data classification, along with other data security and data management functions, makes an ideal area of initial focus.</p>
<p>The post <a href="https://www.aiuniverse.xyz/concentric-applies-deep-learning-algorithms-to-data-security/">Concentric Applies Deep Learning Algorithms to Data Security</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/concentric-applies-deep-learning-algorithms-to-data-security/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How deep learning can reduce bias in advertising</title>
		<link>https://www.aiuniverse.xyz/how-deep-learning-can-reduce-bias-in-advertising/</link>
					<comments>https://www.aiuniverse.xyz/how-deep-learning-can-reduce-bias-in-advertising/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 05 Dec 2019 07:59:31 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[ADVERTISING]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Oracle]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5484</guid>

					<description><![CDATA[<p>Source: searchengineland.com Algorithms, especially those that utilize deep learning in some manner, are notorious for being opaque. To be clear, this means that when you ask a <a class="read-more-link" href="https://www.aiuniverse.xyz/how-deep-learning-can-reduce-bias-in-advertising/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-deep-learning-can-reduce-bias-in-advertising/">How deep learning can reduce bias in advertising</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: searchengineland.com</p>



<p>Algorithms, especially those that utilize deep learning in some manner, are notorious for being opaque. To be clear, this means that when you ask a deep learning algorithm to answer a question, the algorithm gives you an answer without any explanation of how it came to that conclusion. It does not show its work; you simply ask it a question and it generates an answer, like a mysterious oracle.&nbsp;</p>



<p>As author Scott Fulton III points out, “We’ve created systems that draw mostly, though never entirely, correct inferences from ordinary data, by way of logic that is by no means obvious.” Should these systems be trained on faulty or incomplete data, they have the capacity to further entrench existing biases and perpetuate discriminatory behavior.</p>



<h2 class="wp-block-heading">Bias isn’t inherent</h2>



<p>I believe that there are ways that we can use deep learning to help eliminate these inequalities, but it requires organizations to interrogate their existing practices and data more deeply. We are beginning to see the biased and hurtful results of this in the advertising industry, as deep learning is increasingly used to decide what ads you see. Everyone needs to exhibit greater awareness about how their ads are perceived, as well as who is viewing them.</p>



<p>The problem that many marketers currently face is that they rely on third-party platforms to determine who their ad is shown to. For example, while advertising platforms like Facebook allow marketers to roughly sketch out their target audience, it is ultimately up to the algorithm itself to identify the exact users who will see the ad. To put it another way, a company might put out an ad that is not targeted to a specific age group, ethnicity, or gender, and still find that certain groups are more likely to see their ads than others.</p>



<h2 class="wp-block-heading">How algorithms can perpetuate bias</h2>



<p>Earlier this year, a team from Northeastern University decided to carry out a series of experiments designed to identify the extent to which Facebook’s ad delivery is skewed along demographic lines. While Facebook’s algorithm does allow advertisers to target certain demographics, age groups, and genders with precision, the researchers wanted to see whether giving the algorithm neutral targeting parameters for a series of ads would also result in a similar skew, despite not targeting any single specific group; in other words, whether people with particular demographic characteristics were more likely to see certain ads than others.</p>



<p>To test this hypothesis, the researchers ran a series of ads that were targeted to the exact same audience and had the same budget, but used different images, headlines, and copy. They found that ads with creative stereotypically associated with a specific group (i.e., bodybuilding for male audiences or cosmetics for female audiences) would overwhelmingly perform better amongst those groups, despite not being set up to target those audiences specifically.</p>



<p>Researchers also discovered that, of all the creative elements, the image was by far the most important in determining the ad’s audience, noting that “an ad whose headline and text would stereotypically be of the most interest to men with the image that would stereotypically be of the most interest to women delivers primarily to women at the same rate as when all three ad creative components are stereotypically of the most interest to women.”&nbsp;</p>



<p>These stereotypes become much more harmful when advertising housing or job openings, to name a few sensitive areas. As the researchers from Northeastern discovered, job postings for secretaries and pre-school teachers were more likely to be shown to women, whereas listings for taxi drivers and janitors were shown to a higher percentage of minorities.</p>



<p>Why does this happen? Because Facebook’s algorithm optimizes ads based on a market objective (maximizing engagement, generating sales, garnering more views, etc.), and does not pay as much attention to minimizing bias. As a result, Karen Hao notes in the MIT Technology Review, “if the algorithm discovered that it could earn more engagement by showing more white users homes for purchase, it would end up discriminating against black users.”</p>



<h2 class="wp-block-heading">There are other approaches</h2>



<p>However, the algorithm does this because it has been taught to approach the issue from a purely economic perspective. If, on the other hand, it had been trained from the start to be aware of potential discrimination, and to guard against it, the algorithm could end up being much less biased than if marketers were left to their own devices. The Brookings Institution suggests that developers of algorithms create what they term a “bias impact statement” beforehand, which is defined as “a template of questions that can be flexibly applied to guide them through the design, implementation, and monitoring phases,” and whose purpose is to “help probe and avert any potential biases that are baked into or are resultant from the algorithmic decision.”</p>



<p>Take, for instance, mortgage lending. Research has shown that minorities, especially African Americans and Latinos, are more likely to be denied mortgages even when taking into account income, the size of the loan, and other factors. An algorithm that relies solely on data from previous loans to determine who to give a mortgage to would only perpetuate those biases; however, one that is designed specifically to take those factors into account could end up creating a much fairer and more equitable system.</p>



<p>While there is clearly more work to be done to make sure that algorithms do not perpetuate existing biases (or create new ones), there is ample research out there to suggest a way forward — namely, by making marketers and developers aware of the prejudices inherent in the industry and having them take steps to mitigate those preferences throughout the design, data cleansing, and implementation process. A good algorithm is like wine; as it ages, it takes on nuance and depth, two qualities that the marketing industry sorely needs.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-deep-learning-can-reduce-bias-in-advertising/">How deep learning can reduce bias in advertising</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-deep-learning-can-reduce-bias-in-advertising/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
