<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI systems Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/ai-systems/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/ai-systems/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 11 Dec 2020 04:56:18 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>15 common data science techniques to know and use</title>
		<link>https://www.aiuniverse.xyz/15-common-data-science-techniques-to-know-and-use/</link>
					<comments>https://www.aiuniverse.xyz/15-common-data-science-techniques-to-know-and-use/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 11 Dec 2020 04:56:16 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[AI systems]]></category>
		<category><![CDATA[analysis]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12405</guid>

					<description><![CDATA[<p>Source: searchbusinessanalytics.techtarget.com Data science has taken hold at many enterprises, and data scientist is quickly becoming one of the most sought-after roles for data-centric organizations. Data science <a class="read-more-link" href="https://www.aiuniverse.xyz/15-common-data-science-techniques-to-know-and-use/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/15-common-data-science-techniques-to-know-and-use/">15 common data science techniques to know and use</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: searchbusinessanalytics.techtarget.com</p>



<p>Data science has taken hold at many enterprises, and data scientist is quickly becoming one of the most sought-after roles for data-centric organizations. Data science applications utilize technologies such as machine learning and the power of big data to develop deep insights and new capabilities, from predictive analytics to image and object recognition, conversational AI systems and beyond.</p>



<p>Indeed, organizations that aren&#8217;t adequately investing in data science likely will soon be left in the dust by competitors that are gaining significant competitive advantages by doing so.</p>



<p>What exactly are data scientists doing that provides such transformative business benefits? The field of data science is a collection of a few key components: statistical and mathematical approaches for accurately extracting quantifiable data; technical and algorithmic approaches that facilitate working with large data sets, using advanced analytics techniques and methodologies that tackle data analysis from a scientific perspective; and engineering tools and methods that can help wrangle large amounts of data into the formats needed to derive high-quality insights.</p>



<p>In this article, we&#8217;ll dive deeper into common statistical and analytical techniques that data scientists use. Some of these data science techniques are rooted in centuries of mathematics and statistics work, while others are relatively new ones that take advantage of the latest research in machine learning, deep learning and other forms of advanced analytics.</p>



<h3 class="wp-block-heading">How data science finds relationships between data</h3>



<p>When trying to identify information needles in data haystacks, data scientists first need to discern how different data elements correlate with or relate to each other. For example, if you have a bunch of data points plotted on a graph, how do you know if there&#8217;s any meaning in them?</p>



<p>Perhaps the data represents a relationship between two or more variables and the job is to plot some sort of line or multidimensional plane that best describes the relationship. Or perhaps it represents clustered groups that have some affinity. Other data could represent different categories. By finding these relationships, we give meaning to otherwise random data, which can then be analyzed and visualized to provide information that organizations can use to make decisions or plan strategies.</p>
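<p>As a minimal illustration of detecting such a relationship (plain Python, with made-up data), the strength of a linear link between two variables can be quantified with the Pearson correlation coefficient:</p>

```python
import math

def pearson(xs, ys):
    # Covariance of the two variables, normalized by their spreads:
    # +1 is a perfect upward line, -1 a perfect downward line, 0 no linear link.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Points that lie exactly on y = 2x correlate perfectly.
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # close to 1.0
```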



<p>Now, let&#8217;s look closer at the various data science techniques and methods that are available to perform the analysis.</p>



<h3 class="wp-block-heading">Classification techniques</h3>



<p>The primary question data scientists are looking to answer in classification problems is, &#8220;What category does this data belong to?&#8221; There are many reasons for classifying data into categories. Perhaps the data is an image of handwriting and you want to know what letter or number the image represents. Or perhaps the data represents loan applications and you want to know if it should be in the &#8220;approved&#8221; or &#8220;declined&#8221; category. Other classifications could be focused on determining patient treatments or whether an email message is spam.</p>



<p>The algorithms and methods that data scientists use to filter data into categories include the following, among others:</p>



<ul class="wp-block-list"><li><strong>Decision trees.</strong> These branching logic structures use machine-generated trees of parameters and values to classify data into defined categories.</li><li><strong>Naïve Bayes classifiers.</strong> Using the power of probability, Bayes classifiers can help put data into simple categories.</li><li><strong>Support vector machines.</strong> SVMs aim to draw a line or plane with a wide margin to separate data into different categories.</li><li><strong>K-nearest neighbor.</strong> This technique uses a simple &#8220;lazy decision&#8221; method to identify what category a data point should belong to based on the categories of its nearest neighbors in a data set.</li><li><strong>Logistic regression.</strong> A classification technique despite its name, it uses the idea of fitting data to a line to distinguish between different categories on each side. The line is shaped such that data is shifted to one category or another rather than allowing more fluid correlations.</li><li><strong>Neural networks.</strong> This approach uses trained artificial neural networks, especially deep learning ones with multiple hidden layers. Neural nets have shown profound capabilities for classification with extremely large sets of training data.</li></ul>
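<p>To make one of these concrete, here is a minimal sketch of k-nearest neighbor classification in plain Python. The toy data and function names are illustrative, not taken from any particular library:</p>

```python
from collections import Counter

def knn_classify(point, labeled_points, k=3):
    # Rank labeled points by squared distance to the query point,
    # then take the majority label among the k closest.
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    nearest = sorted(labeled_points, key=lambda lp: dist(lp[0], point))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy data: points near the origin are labeled "a", points near (5, 5) are "b".
data = [((0, 0), "a"), ((1, 0), "a"), ((0, 1), "a"),
        ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_classify((0.5, 0.5), data))  # a
print(knn_classify((5.5, 5.5), data))  # b
```

<p>The &#8220;lazy&#8221; part of the technique is visible here: there is no training step at all, only a lookup at prediction time.</p>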



<h3 class="wp-block-heading">Regression techniques</h3>



<p>What if instead of trying to find out which category the data falls into, you&#8217;d like to know the relationship between different data points? The main idea of regression is to answer the question, &#8220;What is the predicted value for this data?&#8221; A simple concept that comes from the statistical idea of &#8220;regression to the mean,&#8221; it can either be a straightforward regression between one independent and one dependent variable or a multidimensional one that tries to find the relationship between multiple variables.</p>



<p>Some classification techniques, such as decision trees, SVMs and neural networks, can also be used to do regressions. In addition, the regression techniques available to data scientists include the following:</p>



<ul class="wp-block-list"><li><strong>Linear regression.</strong>&nbsp;One of the most widely used data science methods, this approach tries to find the line that best fits the data being analyzed based on the correlation between two variables.</li><li><strong>Lasso regression.</strong>&nbsp;Lasso, short for &#8220;least absolute shrinkage and selection operator,&#8221; is a technique that improves upon the prediction accuracy of linear regression models by using a subset of data in a final model.</li><li><strong>Multivariate regression.</strong>&nbsp;This involves different ways to find lines or planes that fit multiple dimensions of data potentially containing many variables.</li></ul>
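<p>The first of these has a well-known closed-form least-squares solution; a minimal sketch in plain Python, on toy data (lasso and multivariate regression require more machinery than shown here):</p>

```python
def fit_line(xs, ys):
    # Ordinary least squares for one predictor: minimize the sum of
    # squared vertical distances between the data points and the line.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Data generated from y = 2x + 1 should recover slope 2 and intercept 1.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0
```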



<h3 class="wp-block-heading">Clustering and association analysis techniques</h3>



<p>Another set of data science techniques focuses on answering the question, &#8220;How does this data form into groups, and which groups do different data points belong to?&#8221; Data scientists can discover clusters of related data points that share various characteristics in common, which can yield useful information in analytics applications.</p>



<p>The methods available for clustering include the following:</p>



<ul class="wp-block-list"><li><strong>K-means clustering.</strong> A k-means algorithm determines a certain number of clusters in a data set and finds the &#8220;centroids&#8221; that identify where different clusters are located, with data points assigned to the closest one.</li><li><strong>Mean-shift clustering.</strong> Another centroid-based clustering technique, it can be used separately or to improve on k-means clustering by shifting the designated centroids.</li><li><strong>DBSCAN.</strong> Short for &#8220;Density-Based Spatial Clustering of Applications with Noise,&#8221; DBSCAN is another technique for discovering clusters that uses a more advanced method of identifying cluster densities.</li><li><strong>Gaussian mixture models.</strong> GMMs help find clusters by using a Gaussian distribution to group data together rather than treating the data as singular points.</li><li><strong>Hierarchical clustering.</strong> Similar to a decision tree, this technique uses a hierarchical, branching approach to find clusters.</li></ul>
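<p>A minimal sketch of the first technique, k-means (Lloyd&#8217;s algorithm), in plain Python. For simplicity this assumes the initial centroids are supplied rather than chosen randomly, as a library implementation would do:</p>

```python
def kmeans(points, centroids, iterations=10):
    # Alternate between assigning each point to its nearest centroid
    # and moving each centroid to the mean of its assigned points.
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids

# Two obvious groups: one near the origin, one near (9, 9).
pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers = kmeans(pts, centroids=[(0, 0), (10, 10)])
print(centers)  # roughly [(0.33, 0.33), (9.33, 9.33)]
```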



<p>Association analysis is a related, but separate, technique. The main idea behind it is to find association rules that describe the commonality between different data points. Similar to clustering, we&#8217;re looking to find groups that data belongs to. However, in this case, we&#8217;re trying to determine when data points will occur together, rather than just identify clusters of them. In clustering, the goal is to segregate a large data set into identifiable groups, whereas with association analysis, we&#8217;re measuring the degree of association between data points.</p>
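<p>A toy sketch of association analysis over market baskets, computing support and confidence for item pairs. The basket data is invented for illustration, and a full apriori-style implementation would extend this to itemsets larger than pairs:</p>

```python
from itertools import combinations
from collections import Counter

def pair_rules(transactions, min_support=0.5):
    # support: fraction of transactions containing both items;
    # confidence: how often the second item appears given the first.
    n = len(transactions)
    item_counts = Counter(i for t in transactions for i in set(t))
    pair_counts = Counter(frozenset(p)
                          for t in transactions
                          for p in combinations(sorted(set(t)), 2))
    rules = {}
    for pair, count in pair_counts.items():
        if count / n >= min_support:
            a, b = sorted(pair)
            rules[(a, b)] = {"support": count / n,
                             "confidence": count / item_counts[a]}
    return rules

baskets = [["bread", "butter"], ["bread", "butter", "milk"],
           ["bread", "milk"], ["butter"]]
print(pair_rules(baskets))
```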



<h3 class="wp-block-heading">Data science application examples</h3>



<p>The above methods and techniques in the data science tool belt need to be applied appropriately to specific analytics problems or questions and the data that&#8217;s available to address them. Good data scientists must be able to understand the nature of the problem at hand &#8212; is it clustering, classification or regression? &#8212; and the best algorithmic approach that can yield the desired answers given the characteristics of the data. This is why data science is, in fact, a scientific process, rather than one that has hard and fast rules and allows you to just program your way to a solution.</p>



<p>Using these techniques, data scientists can tackle a wide range of applications, many of which are commonly seen across different types of industries and organizations. Here are a few examples.</p>



<p><strong>Anomaly detection.</strong> If you can find the pattern for expected or &#8220;normal&#8221; data, then you can also find those data points that don&#8217;t fit the pattern. Companies in industries as diverse as financial services, healthcare, retail and manufacturing regularly employ a variety of data science methods to identify anomalies in their data for uses such as fraud detection, customer analytics, cybersecurity and IT systems monitoring. Anomaly detection can also be used to eliminate outlier values from data sets for better analytics accuracy.</p>
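<p>One of the simplest anomaly detection techniques is a z-score test: flag any value that sits unusually many standard deviations from the mean. A minimal sketch with made-up sensor readings:</p>

```python
import math

def flag_outliers(values, z_cutoff=3.0):
    # Flag values whose distance from the mean, measured in standard
    # deviations (the z-score), exceeds the cutoff.
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [v for v in values if abs(v - mean) / std > z_cutoff]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 55.0]  # one obviously bad reading
print(flag_outliers(readings, z_cutoff=2.0))  # [55.0]
```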



<p><strong>Binary and multiclass classification.</strong> One primary application of classification techniques is to determine if something is or is not in a particular category. This is known as binary classification, because we could ask something like, &#8220;Is there a cat in the picture, or not?&#8221; A practical business application is to identify contracts or invoices among piles of documents using image recognition. In multiclass classification, we have many different categories in a data set and we&#8217;re trying to find the best fit for data points. For example, the U.S. Bureau of Labor Statistics does automated classification of workplace injuries.</p>



<p><strong>Personalization.</strong> Organizations looking to personalize interactions with people or recommend products and services to customers first need to group them into data buckets with shared characteristics. Effective data science work enables websites, marketing offers and more to be tailored to the specific needs and preferences of individuals, using technologies such as recommendation engines and hyper-personalization systems that are driven by matching the data in detailed profiles of people.</p>



<p>That&#8217;s just a sample of useful data science applications. By understanding the various techniques, methods, tools and analytical approaches, data scientists can help the organizations that employ them achieve the strategic and competitive benefits that many business rivals are already enjoying.</p>
<p>The post <a href="https://www.aiuniverse.xyz/15-common-data-science-techniques-to-know-and-use/">15 common data science techniques to know and use</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/15-common-data-science-techniques-to-know-and-use/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence Must Be More Responsible Than Humans</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-must-be-more-responsible-than-humans/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-must-be-more-responsible-than-humans/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 05 Oct 2020 06:03:28 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[AI systems]]></category>
		<category><![CDATA[Amazon]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[humans]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11894</guid>

					<description><![CDATA[<p>Source: businessworld.in Since the dawn of Bronze age civilizations more than 5000 years ago, humans have been creating norms of societal governance. The process continues with many <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-must-be-more-responsible-than-humans/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-must-be-more-responsible-than-humans/">Artificial Intelligence Must Be More Responsible Than Humans</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: businessworld.in</p>



<p>Since the dawn of Bronze Age civilizations more than 5,000 years ago, humans have been creating norms of societal governance, a process that continues with many imperfections. Of late, Artificial Intelligence (AI) has been gaining influence over decision-making processes in the lives of humans, and the expectation is that AI will follow similar or better norms. Principles that govern the behaviour of responsible AI systems are being established.</p>



<p><strong>Principles</strong></p>



<p><strong>Fair</strong></p>



<p>All AI systems should be fair in dealing with people and be inclusive in coverage. In particular, they should not show any bias in their workings. Historically, humans have used at least two major criteria for unfair treatment: gender and caste/race/ethnicity.</p>



<p>Amazon tried to develop an algorithm for recruitment, but it showed a lower tendency to select female candidates. Even after gender-specific indicators were removed, female candidates were still discriminated against, and the project had to be abandoned.</p>



<p>Compas, a risk-assessment tool developed by a privately held company and used by the Wisconsin Department of Corrections, predicted that people of colour had a higher tendency to reoffend than they actually do. California has decided not to use face recognition technology for law enforcement. A 2020 study by Stanford researchers found that the voice recognition software of Amazon, Apple, Google, IBM and Microsoft has higher error rates when working on the voices of Black people.</p>



<p><strong>Transparent and Accountable</strong></p>



<p>Unlike traditional software, it is hard to predict the outcome of AI algorithms because they change dynamically with training. This makes them less transparent, and the &#8220;black box&#8221; nature of AI makes it very difficult to find the source of error in case of a wrong prediction. It also makes it difficult to pinpoint accountability. Neural networks are the underlying technology for many face, voice and character recognition systems. Unfortunately, it is harder to trace problems in neural networks, especially deep ones (with many layers), than in other AI algorithms such as decision trees. And new variants of neural networks, e.g. GANs (Generative Adversarial Networks) and spiking neural networks, continue to gain popularity.</p>



<p><strong>Reliable and safe</strong></p>



<p>Security and reliability of AI systems have certain peculiar dimensions, such as unpredictability. Facebook, in collaboration with the Georgia Institute of Technology, created bots that could negotiate, but the bots also learnt how to lie, which was never intended in their programming. Another issue is the slow rise of Artificial General Intelligence (AGI), also called Broad AI or Strong AI, which aims to create systems that genuinely simulate human reasoning and generalize across a broad range of circumstances. These algorithms will be able to do transfer learning, so an algorithm that learns to play chess will also be able to learn how to play Go. This will vastly increase the contexts in which a machine can operate, and this cannot be predicted in advance.</p>



<p>Unpredictability reduces reliability and safety of the systems.</p>



<p><strong>Problem sources</strong></p>



<p><strong>Models and features</strong></p>



<p>The power of AI algorithms rests on the models, the features and the feature weightages used while creating the models. The AI currently in use is also Narrow AI, which will not work if the context changes. For example, a system designed to scrutinize applications for medical insurance policies may discriminate against people with diseases if used to vet applications for car insurance, since the features and their weightages are not appropriate for the latter case. Hence models or features framed without fairness in mind can induce biases.</p>



<p><strong>Data</strong></p>



<p>The biggest source of biases in AI systems is data, as biases may be inherent in the data, either explicitly or subconsciously. This can happen if the data is not uniformly sampled or carries implicit historical or societal biases. In credit risk, data from customers who defaulted less because they were supported by tax benefits will give incorrect results when applied to scenarios where those benefits do not exist. MIT researchers found that facial analysis technologies had higher error rates for minorities, and particularly minority women, potentially due to unrepresentative training data. The Amazon recruitment software failed because it was trained on 10 years of data in which resumes from male candidates outnumbered those from females. It also favoured words, e.g. &#8220;executed&#8221; and &#8220;captured&#8221;, that are more commonly used by males.</p>



<p><strong>Other issues</strong></p>



<p>The rise of AI poses additional challenges not found in traditional systems.</p>



<p><strong>Driverless Vehicles</strong></p>



<p>Driverless vehicles will start plying the roads in a decade or so, and any accident will raise questions of civil and criminal liability. In 2018 a pedestrian died when she was hit by an Uber test car despite a human driver sitting inside. A vehicle may be programmed to save either the passengers or the pedestrians, and the potentially liable parties could include the vehicle manufacturer, the vehicle operator or even the government. This will also change underwriting models. Liability issues will also arise as companies allow operational decisions to become more data-driven, since programmers may then appear to be the sole accused.</p>



<p><strong>Weapons</strong></p>



<p>Countries such as the US, Russia and Korea plan to use AI in weapons such as drones and robots. Machines currently do not have emotions, which raises the concern of what happens if an autonomous machine goes on a killing spree. In 2018, Google had to stop its engagement with the US government over the Maven military program due to public outcry.</p>



<p><strong>Safeguards</strong></p>



<p><strong>Guidelines</strong></p>



<p>The concerns over ethics in AI have led many organizations to formulate guidelines governing the use of AI, e.g. the European Commission&#8217;s &#8220;Ethics Guidelines for Trustworthy Artificial Intelligence&#8221;, the US government&#8217;s &#8220;Roadmap for AI Policy&#8221; and IEEE&#8217;s P7000 standards projects. These contain the general principles of ethics and responsibility that AI systems should follow.</p>



<p><strong>Software</strong></p>



<p>Many companies, e.g. IBM, Google, Microsoft, PwC, Amazon, Pega, Arthur and H2O, have created frameworks, software and guidelines that can help to create Responsible AI. Their software helps explain a model&#8217;s &#8220;black box&#8221; behaviour and hence brings transparency, assesses the fairness of systems, mitigates bias against identity-based groups and keeps data secure through constant monitoring.</p>



<p><strong>Companies</strong></p>



<p>Within companies, Responsible AI can be facilitated by imposing standards through oversight groups, creating diversity in teams and cascading the message to individuals. There should be conscious efforts to reduce biases in data.</p>



<p><strong>Future</strong></p>



<p>In the next two decades, machines will become more autonomous in decision-making processes, and humans will slowly cede control of their own lives. Establishing Responsible AI will reduce biases and increase the acceptance of AI, helping to create a fairer and more equitable society. Unchecked growth of AI will make humans less tolerant not only of AI but also of each other.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-must-be-more-responsible-than-humans/">Artificial Intelligence Must Be More Responsible Than Humans</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-must-be-more-responsible-than-humans/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>WHY BIAS IN ARTIFICIAL INTELLIGENCE IS BAD NEWS FOR SOCIETY</title>
		<link>https://www.aiuniverse.xyz/why-bias-in-artificial-intelligence-is-bad-news-for-society/</link>
					<comments>https://www.aiuniverse.xyz/why-bias-in-artificial-intelligence-is-bad-news-for-society/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 10 Jul 2020 09:50:38 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI systems]]></category>
		<category><![CDATA[application]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10121</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net The practice to include Artificial Intelligence in industry application is skyrocketing for a decade now. It is evident since, AI and its constituent applications Machine Learning, computer <a class="read-more-link" href="https://www.aiuniverse.xyz/why-bias-in-artificial-intelligence-is-bad-news-for-society/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-bias-in-artificial-intelligence-is-bad-news-for-society/">WHY BIAS IN ARTIFICIAL INTELLIGENCE IS BAD NEWS FOR SOCIETY</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>The practice of including Artificial Intelligence in industry applications has been skyrocketing for a decade now. This is evident since AI and its constituent applications, such as machine learning, computer vision, facial analysis, autonomous vehicles and deep learning, form the pillars of modern digital empowerment. The ability to learn from the data it is trained on and to make decisions derived from its insights makes AI unlike earlier technologies. Leaders believe that possessing AI-based technologies equates to future industry success. From healthcare, research, finance and logistics to the military and law enforcement, AI holds the key to a massive competitive edge, with monetary benefits too. This is where AI emerges as a double-edged sword. Under the authority and accessibility of malicious entities, AI can have negative implications for humans. Not only that, AI is also set to bring a paradigm shift of power in favor of those who possess it or those who escape its bias.</p>



<h4 class="wp-block-heading"><strong>Why is it bad?</strong></h4>



<p>The recent global outrage over George Floyd&#8217;s death highlighted the bias that may exist in today&#8217;s technologies, especially since AI has a history of racial and ethnic bias. While an incident like Microsoft&#8217;s row over mislabeling a famous singer was a random and unfortunate mistake, there is evidence that the data fed to <em>AI</em> systems is already biased. Such data contains implicit racial, gender or ideological biases, resulting in discrimination when it finds its way into the AI systems that many, from governments to businesses, use to make decisions. IBM predicts that the number of biased AI systems and algorithms will grow within the next five years. This is alarming, as AI is explicitly used in sensitive sectors such as healthcare, criminal justice and customer services. A biased AI system may end up denying loans or running faulty surveillance in neighborhood streets, causing trust in these systems to corrode over time and threatening the socio-economic and political balance.</p>



<h4 class="wp-block-heading"><strong>How does bias occur in AI?</strong></h4>



<p>Though bias has been identified in facial recognition systems, hiring programs and the algorithms behind web searches, the question remains how biases enter these systems. They emerge during any of three processes: building a model, collecting data, or preparing a dataset governed by certain attributes. Of these, the most common source is bias during data collection. This can happen in two ways: either the data gathered is unrepresentative of reality, or it reflects existing prejudices. The first case might occur, for example, if a deep-learning algorithm is fed more photos of light-skinned faces than dark-skinned faces. This can &#8216;train&#8217; the system to fail at detecting dark-skinned faces. For example, Joy Buolamwini at MIT, working with Timnit Gebru, found that facial analysis technologies had higher error rates for minorities, and particularly minority women, potentially due to unrepresentative training data.</p>



<p>The latter scenario can arise when, for instance, an internal recruiting tool filters out female candidates during the hiring process. This is exactly what happened with Amazon, whose AI recruiting tool dismissed female candidates because it was trained on historical hiring decisions that favored men over women. Furthermore, it is essential to note that developers rarely introduce such bias deliberately. However, a lack of diverse social representation can compound the frequency with which skewed systems are designed.</p>



<h4 class="wp-block-heading"><strong>Possible Solutions</strong></h4>



<p>To mitigate this situation, we must realize that the sensitivity of the issue depends on how we define bias. Simply removing bias from the dataset is never the whole solution, because a system whose training data has been debiased can still behave unfairly at test time, when the solution is deployed. So the best way to resolve bias in AI is to cross-check the algorithm for patterns of unintended bias and retrain the system. There are also discussions on a global scale about augmenting AI with social intelligence to eradicate biases. Besides, it is a symbiotic tradeoff: AI can help us by revealing ways in which we are partial, parochial and cognitively biased, leading us to adopt more impartial or egalitarian views. In the process of recognizing our own bias and teaching machines about our shared values, we too can improve AI.</p>



<p>Another method to counter bias in AI is to understand how machine learning and deep learning algorithms arrived at a specific decision or observation. Through explainability techniques, we can learn whether an outcome was based on a predefined bias. On the data side, researchers have made progress on text classification tasks by adding more data points to improve performance for protected groups. Innovative training techniques, such as using transfer learning or decoupled classifiers for different groups, have proven useful for reducing discrepancies in facial analysis technologies. Another solution is encouraging ethics education within companies and organizations. By educating employees on cultural and lifestyle differences, one can create awareness of groups within society that might otherwise be overlooked or not even considered.</p>
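<p>One simple cross-check of the kind described above is to compare a model&#8217;s error rate across demographic groups; a large gap signals unintended bias. A minimal sketch over a hypothetical audit log (the group names and numbers are invented for illustration):</p>

```python
def error_rate_gap(records):
    # records: (group, predicted, actual) triples from a model audit.
    tallies = {}
    for group, predicted, actual in records:
        correct, total = tallies.get(group, (0, 0))
        tallies[group] = (correct + (predicted == actual), total + 1)
    by_group = {g: 1 - correct / total for g, (correct, total) in tallies.items()}
    return by_group, max(by_group.values()) - min(by_group.values())

# Hypothetical audit: group A is classified correctly more often than group B.
log = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
       ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0)]
rates, gap = error_rate_gap(log)
print(rates, gap)  # {'A': 0.25, 'B': 0.5} 0.25
```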
<p>The post <a href="https://www.aiuniverse.xyz/why-bias-in-artificial-intelligence-is-bad-news-for-society/">WHY BIAS IN ARTIFICIAL INTELLIGENCE IS BAD NEWS FOR SOCIETY</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-bias-in-artificial-intelligence-is-bad-news-for-society/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI’s Carbon Footprint Problem</title>
		<link>https://www.aiuniverse.xyz/ais-carbon-footprint-problem-2/</link>
					<comments>https://www.aiuniverse.xyz/ais-carbon-footprint-problem-2/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 06 Jul 2020 06:53:53 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI systems]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[researchers]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10015</guid>

					<description><![CDATA[<p>Source: scienceblog.com For all the advances enabled by artificial intelligence, from speech recognition to self-driving cars, AI systems consume a lot of power and can generate high <a class="read-more-link" href="https://www.aiuniverse.xyz/ais-carbon-footprint-problem-2/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ais-carbon-footprint-problem-2/">AI’s Carbon Footprint Problem</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: scienceblog.com</p>



<p>For all the advances enabled by artificial intelligence, from speech recognition to self-driving cars, AI systems consume a lot of power and can generate high volumes of climate-changing carbon emissions.</p>



<p>A study last year found that training an off-the-shelf AI language-processing system produced 1,400 pounds of emissions – about the amount produced by flying one person roundtrip between New York and San Francisco. The full suite of experiments needed to build and train that AI language system from scratch can generate even more: up to 78,000 pounds, depending on the source of power. That’s twice as much as the average American exhales over an entire lifetime.</p>



<p>But there are ways to make machine learning cleaner and greener, a movement that has been called “Green AI.” Some algorithms are less power-hungry than others, for example, and many training sessions can be moved to remote locations that get most of their power from renewable sources.</p>



<p>The key, however, is for AI developers and companies to know how much carbon their machine learning experiments are spewing and how much those volumes could be reduced.</p>



<p>Now, a team of researchers from Stanford, Facebook AI Research, and McGill University has come up with an easy-to-use tool that quickly measures both how much electricity a machine learning project will use and how much that means in carbon emissions.</p>



<p>“As machine learning systems become more ubiquitous and more resource intensive, they have the potential to significantly contribute to carbon emissions,” says Peter Henderson, a PhD student at Stanford in computer science and the lead author. “But you can’t solve a problem if you can’t measure it. Our system can help researchers and industry engineers understand how carbon-efficient their work is, and perhaps prompt ideas about how to reduce their carbon footprint.”</p>



<h4 class="wp-block-heading">Tracking Emissions</h4>



<p>Henderson teamed up on the “experiment impact tracker” with Dan Jurafsky, chair of linguistics and professor of computer science at Stanford; Emma Brunskill, an assistant professor of computer science at Stanford; Jieru Hu, a software engineer at Facebook AI Research; Joelle Pineau, a professor of computer science at McGill and co-managing director of Facebook AI Research; and Joshua Romoff, a PhD candidate at McGill.</p>



<p>“There’s a big push to scale up machine learning to solve bigger and bigger problems, using more compute power and more data,” says Jurafsky. “As that happens, we have to be mindful of whether the benefits of these heavy-compute models are worth the cost of the impact on the environment.”</p>



<p>Machine learning systems build their skills by running millions of statistical experiments around the clock, steadily refining their models to carry out tasks. Those training sessions, which can last weeks or even months, are increasingly power-hungry. And because the costs have plunged for both computing power and massive datasets, machine learning is increasingly pervasive in business, government, academia, and personal life.</p>



<p>To get an accurate measure of what that means for carbon emissions, the researchers began by measuring the power consumption of a particular AI model. That’s more complicated than it sounds, because a single machine often trains several models at the same time, so each training session has to be untangled from the others. Each training session also draws power for shared overhead functions, such as data storage and cooling, which need to be properly allocated.</p>
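<p>That allocation step can be sketched as follows. This is illustrative logic only, not the actual experiment-impact-tracker internals, and the function and job names are invented: shared overhead is split across concurrent jobs in proportion to each job's measured direct draw.</p>

```python
# Sketch: attribute a machine's shared overhead (cooling, storage) to the
# training jobs running on it, proportionally to each job's direct power.

def allocate_power(direct_watts, overhead_watts):
    """direct_watts: job name -> measured direct draw in watts.
    Returns job name -> direct draw plus its share of the overhead."""
    total = sum(direct_watts.values())
    return {job: w + overhead_watts * (w / total)
            for job, w in direct_watts.items()}

# Two jobs sharing one machine that also draws 80 W of overhead:
shares = allocate_power({"job_a": 300.0, "job_b": 100.0}, overhead_watts=80.0)
```

<p>The totals still add up to the machine's full draw, so nothing is double-counted or dropped when several models train at once.</p>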



<p>The next step is to translate power consumption into carbon emissions, which depend on the mix of renewable and fossil fuels that produced the electricity. That mix varies widely by location as well as by time of day. In areas with a lot of solar power, for example, the carbon intensity of electricity goes down as the sun climbs higher in the sky.</p>



<p>To get that information, the researchers scoured public sources of data about the energy mix in different regions of the United States and the world. In California, the experiment-tracker plugs into real-time data from California ISO, which manages the flow of electricity over most of the state’s grids. At 12:45 p.m. on a day in late May, for example, renewables were supplying 47% of the state’s power.</p>



<p>The location of an AI training session can make a big difference in its carbon emissions. The researchers estimated that running a session in Estonia, which relies overwhelmingly on shale oil, will produce 30 times the volume of carbon as the same session would in Quebec, which relies primarily on hydroelectricity.</p>
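<p>The arithmetic behind that comparison is simple, and can be sketched with made-up carbon-intensity figures (the real tracker plugs into live regional data; the values below are placeholders, not real grid numbers): emissions scale linearly with the local grid's carbon intensity.</p>

```python
# Placeholder carbon intensities in kg CO2 per kWh (illustrative only).
CARBON_INTENSITY = {
    "hydro_heavy_grid": 0.03,
    "fossil_heavy_grid": 0.90,
}

def emissions_kg(energy_kwh, region):
    """Energy consumed (kWh) times regional carbon intensity (kg CO2/kWh)."""
    return energy_kwh * CARBON_INTENSITY[region]

# The same 1,000 kWh training run, placed on two different grids:
low = emissions_kg(1000.0, "hydro_heavy_grid")
high = emissions_kg(1000.0, "fossil_heavy_grid")
```

<p>With these illustrative intensities the identical job emits thirty times more carbon on the fossil-heavy grid, which is the kind of gap the researchers report between regions.</p>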



<h4 class="wp-block-heading">Greener AI</h4>



<p>Indeed, the researchers’ first recommendation for reducing the carbon footprint is to move training sessions to a location supplied mainly by renewable sources. That can be easy, because datasets can be stored on a cloud server and accessed from almost anywhere.</p>



<p>In addition, however, the researchers found that some machine learning algorithms are bigger energy hogs than others. At Stanford, for example, more than 200 students in a class on reinforcement learning were asked to implement common algorithms for a homework assignment. Though two of the algorithms performed equally well, one used far more power. If all the students had used the more efficient algorithm, the researchers estimated they would have reduced their collective power consumption by 880 kilowatt-hours – about what a typical American household uses in a month.</p>



<p>The result highlights the opportunities for reducing carbon emissions even when it’s not practical to move work to a carbon-friendly location. That is often the case when machine learning systems are providing services in real time, such as car navigation, because long distances cause communication lags or “latency.”</p>



<p>Indeed, the researchers have incorporated an easy-to-use tool into the tracker that generates a website for comparing the energy efficiency of different models. One simple way to conserve energy, they say, would be to establish the most efficient program as the default setting when choosing which one to use.</p>



<p>“Over time,” says Henderson, “it’s likely that machine learning systems will consume even more energy in production than they do during training. The better that we understand our options, the more we can limit potential impacts to the environment.”</p>



<p>The experiment impact tracker is available online for researchers. It is already being used at the SustaiNLP workshop at this year’s Conference on Empirical Methods in Natural Language Processing, where researchers are encouraged to build and publish energy-efficient NLP algorithms. The research, which has not been peer-reviewed, was published on the preprint site arXiv.org.</p>
<p>The post <a href="https://www.aiuniverse.xyz/ais-carbon-footprint-problem-2/">AI’s Carbon Footprint Problem</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ais-carbon-footprint-problem-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI System – Using Neural Networks With Deep Learning – Beats Stock Market in Simulation</title>
		<link>https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/</link>
					<comments>https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 03 Jun 2020 06:57:52 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI systems]]></category>
		<category><![CDATA[Automatica Sinica]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[neural networks]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9236</guid>

					<description><![CDATA[<p>Source: scitechdaily.com Researchers in Italy have melded the emerging science of convolutional neural networks (CNNs) with deep learning — a discipline within artificial intelligence — to achieve <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/">AI System – Using Neural Networks With Deep Learning – Beats Stock Market in Simulation</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: scitechdaily.com</p>



<p>Researchers in Italy have melded the emerging science of convolutional neural networks (CNNs) with deep learning — a discipline within artificial intelligence — to achieve a system of market forecasting with the potential for greater gains and fewer losses than previous attempts to use AI methods to manage stock portfolios. The team, led by Prof. Silvio Barra at the University of Cagliari, published their findings in the IEEE/CAA Journal of Automatica Sinica.</p>



<p>The University of Cagliari-based team set out to create an AI-managed “buy and hold” (B&amp;H) strategy — a system that decides which of three possible actions to take each day: a long action (buying a stock and selling it before the market closes), a short action (selling a stock, then buying it back before the market closes), or a hold (deciding not to invest in a stock that day). At the heart of their proposed system is an automated cycle of analyzing layered images generated from current and past market data. Older B&amp;H systems based their decisions on machine learning, a discipline that leans heavily on predictions based on past performance.</p>
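<p>One common way to turn a market time series into an image — plausibly in the spirit of the paper's “Time Series-to-Image Encoding,” though the authors' exact pipeline may differ — is a Gramian Angular Summation Field: rescale the series to [-1, 1], map each value to an angle, and fill an n×n image with cosines of pairwise angle sums. A minimal sketch:</p>

```python
import math

def gasf(series):
    """Gramian Angular Summation Field: a 1-D series -> an n x n image."""
    lo, hi = min(series), max(series)
    scaled = [2.0 * (x - lo) / (hi - lo) - 1.0 for x in series]  # -> [-1, 1]
    phi = [math.acos(x) for x in scaled]                         # angles
    n = len(series)
    return [[math.cos(phi[i] + phi[j]) for j in range(n)] for i in range(n)]

# A toy "price" series becomes a 5x5 image a CNN could consume:
img = gasf([1.0, 2.0, 3.0, 2.0, 1.0])
```

<p>Encodings like this preserve temporal structure in the image's rows and columns, which is what lets a convolutional network treat forecasting as an image-analysis problem.</p>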



<p>By letting their proposed network analyze current data layered over past data, they are taking market forecasting a step further, allowing for a type of learning that more closely mirrors the intuition of a seasoned investor rather than a robot. Their proposed network can adjust its buy/sell thresholds based on what is happening both in the present moment and the past. Taking into account present-day factors increases the yield over both random guessing and trading algorithms not capable of real-time learning.</p>



<p>To train their CNN for the experiment, the research team used S&amp;P 500 data from 2009 to 2016. The S&amp;P 500 is widely regarded as a litmus test for the health of the overall global market.</p>



<p>At first, their proposed trading system predicted the market with about 50 percent accuracy — just accurate enough to break even in a real-world situation. They discovered that short-term outliers, which unexpectedly over- or underperformed, generated a factor they called “randomness.” Realizing this, they added threshold controls, which greatly stabilized their method.</p>
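<p>The threshold idea can be sketched as follows — a guess at the general mechanism rather than the paper's actual rule or parameter values: only act when the model's confidence is far enough from a coin flip, and hold otherwise.</p>

```python
def decide(p_up, threshold=0.2):
    """p_up: predicted probability the market rises today.
    Trade only when the prediction is clearly away from 50/50."""
    if p_up >= 0.5 + threshold:
        return "long"    # buy, then sell before the close
    if p_up <= 0.5 - threshold:
        return "short"   # sell, then buy back before the close
    return "hold"        # near-random predictions are skipped

actions = [decide(p) for p in (0.85, 0.55, 0.20)]
```

<p>Skipping the near-coin-flip days is what produces the asymmetry Prof. Barra describes: the confident trades that remain tend to win big, while the abstentions cap the losses.</p>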



<p>“The mitigation of randomness yields two simple, but significant consequences,” Prof. Barra said. “When we lose, we tend to lose very little, and when we win, we tend to win considerably.”</p>



<p>Further enhancements will be needed, according to Prof. Barra, as other methods of automated trading already in use make markets more and more difficult to predict.</p>



<p>Reference: “Deep Learning and Time Series-to-Image Encoding for Financial Forecasting” by Silvio Barra, Salvatore Mario Carta, Andrea Corriga, Alessandro Sebastian Podda and Diego Reforgiato Recupero, May 2020, IEEE/CAA Journal of Automatica Sinica.</p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/">AI System – Using Neural Networks With Deep Learning – Beats Stock Market in Simulation</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-system-using-neural-networks-with-deep-learning-beats-stock-market-in-simulation/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence can&#8217;t technically invent things, says patent office</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-cant-technically-invent-things-says-patent-office/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-cant-technically-invent-things-says-patent-office/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 01 May 2020 09:49:27 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI systems]]></category>
		<category><![CDATA[technically]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8503</guid>

					<description><![CDATA[<p>Source: edition.cnn.com Artificial intelligence is the future. If &#8220;Westworld&#8221; or &#8220;Black Mirror&#8221; are to be believed, there will soon come a day when the computers rule us <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-cant-technically-invent-things-says-patent-office/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-cant-technically-invent-things-says-patent-office/">Artificial Intelligence can&#8217;t technically invent things, says patent office</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: edition.cnn.com</p>



<p>Artificial intelligence is the future. If &#8220;Westworld&#8221; or &#8220;Black Mirror&#8221; are to be believed, there will soon come a day when the computers rule us all. But for now, an AI&#8217;s power ends at the US Patent Office. The USPTO has denied a pair of patents filed on behalf of DABUS, an artificial intelligence system, and published a ruling that says US patents can only be granted to &#8220;natural persons.&#8221;</p>

<p>The two patents were for a food container and a flashlight, and were filed by Stephen Thaler, an AI researcher and DABUS&#8217; creator. According to the filing from the USPTO, Thaler calls DABUS a &#8220;creativity machine&#8221; and wanted the AI to get full credit for the inventions. The filing says Thaler argued that &#8220;allowing a machine to be listed as an inventor would incentivize innovation using AI systems.&#8221; CNN has reached out to Thaler for comment.</p>



<p>However, according to the USPTO&#8217;s ruling, inventions can only be submitted (and, depending on how philosophical you want to get, conceived) by a &#8220;natural person,&#8221; as reflected in the language of patent law and in previous federal court rulings. Speaking of philosophy, the ruling quotes a Federal Circuit court decision from 1994 that expounds on the nature of invention in a way that&#8217;s certain to send your brain down the maze of reflexive self-awareness: &#8220;Conception is the touchstone of inventorship, the completion of the mental part of invention. It is the formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention &#8230; [Conception] is a mental act &#8230;&#8221;</p>

<p>Patents that list DABUS as the inventor have also been denied in Europe and the UK for similar reasons related to personhood. The European Patent Office also raised the issue of who, exactly, would enforce the rights granted to an inventor under such a circumstance. Thaler, the mind behind DABUS, is a physicist and founder of Imagination Engines, a company that researches and develops artificial neural networks. DABUS is one such network. Imagination Engines describes DABUS as a &#8220;true artificial inventor&#8221; that is programmed to mimic the neural patterns of human thought that lead to the mysterious, primal spark of invention.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-cant-technically-invent-things-says-patent-office/">Artificial Intelligence can&#8217;t technically invent things, says patent office</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-cant-technically-invent-things-says-patent-office/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI and Machine Learning Symposium: Artificial Intelligence and Machine Learning in Armed Conflict</title>
		<link>https://www.aiuniverse.xyz/ai-and-machine-learning-symposium-artificial-intelligence-and-machine-learning-in-armed-conflict/</link>
					<comments>https://www.aiuniverse.xyz/ai-and-machine-learning-symposium-artificial-intelligence-and-machine-learning-in-armed-conflict/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 29 Apr 2020 09:05:59 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI systems]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8418</guid>

					<description><![CDATA[<p>Source: opiniojuris.org Artificial intelligence (AI) systems are computer programs that carry out tasks – often associated with human intelligence – that require cognition, planning, reasoning or learning. <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-and-machine-learning-symposium-artificial-intelligence-and-machine-learning-in-armed-conflict/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-and-machine-learning-symposium-artificial-intelligence-and-machine-learning-in-armed-conflict/">AI and Machine Learning Symposium: Artificial Intelligence and Machine Learning in Armed Conflict</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: opiniojuris.org</p>



<p>Artificial intelligence (AI) systems are computer programs that carry out tasks – often associated with human intelligence – that require cognition, planning, reasoning or learning. Machine learning systems are AI systems that are “trained” on and “learn” from data, which ultimately define the way they function. Both are complex software tools, or algorithms, that can be applied to many different tasks. However, AI and machine learning systems are distinct from the “simple” algorithms used for tasks that do not require these capacities. The potential implications for armed conflict – and for the International Committee of the Red Cross’ (ICRC) humanitarian work – are broad. There are at least three overlapping areas that are relevant from a humanitarian perspective.</p>



<p>Three conflict-specific implications of AI and machine learning</p>



<p>The first area is the use of AI and machine learning tools to control military hardware, in particular the growing diversity of unmanned robotic systems – in the air, on land, and at sea. AI may enable greater autonomy in robotic platforms, whether armed or unarmed. For the ICRC, autonomous weapon systems are the immediate concern (see above). AI and machine learning software – particularly for “automatic target recognition” – could become a basis for future autonomous weapon systems, amplifying core concerns about loss of human control and unpredictability. However, not all autonomous weapons incorporate AI.</p>



<p>The second area is the application of AI and machine learning to cyber warfare: AI-enabled cyber capabilities could automatically search for vulnerabilities to exploit, or simultaneously defend against cyber attacks while launching counter-attacks, and could therefore increase the speed, number and types of attacks and their consequences. These developments will be relevant to discussions about the potential human cost of cyber warfare. AI and machine learning are also relevant to information operations, in particular the creation and spread of false information (whether intended to deceive or not). AI-enabled systems can generate “fake” information – whether text, audio, photos or video – that is increasingly difficult to distinguish from “real” information and might be used by parties to a conflict to manipulate opinion and influence decisions. These digital risks can pose real dangers for civilians.</p>



<p>The third area, and the one with perhaps the most far-reaching implications, is the use of AI and machine learning systems for decision-making. AI may enable widespread collection and analysis of multiple data sources to identify people or objects, assess “patterns of life” or behaviour, make recommendations for courses of action, or make predictions about future actions or situations. The possible uses of these “decision-support” or “automated decision-making” systems are extremely broad: they range from decisions about whom – or what – to attack and when, and whom to detain and for how long, to decisions about overall military strategy – even on use of nuclear weapons – as well as specific operations, including attempts to predict, or pre-empt, adversaries.</p>



<p>AI and machine learning-based systems can facilitate faster and broader collection and analysis of available information. This may enable better decisions by humans in conducting military operations in compliance with IHL and minimizing risks for civilians. However, the same algorithmically-generated analyses, or predictions, might also facilitate wrong decisions, violations of IHL and exacerbated risks for civilians. The challenge consists in using all the capacities of AI to improve respect for IHL in situations of armed conflict, while at the same time remaining aware of the significant limitations of the technology, particularly with respect to unpredictability, lack of transparency, and bias. The use of AI in weapon systems must be approached with great caution.</p>



<p>AI and machine learning systems could have profound implications for the role of humans in armed conflict. The ICRC is convinced of the necessity of taking a human-centred, and humanity-centred, approach to the use of these technologies in armed conflict.</p>



<p>It will be essential to preserve human control and judgement in using AI and machine learning for tasks, and in decisions, that may have serious consequences for people’s lives, and in circumstances where the tasks – or decisions – are governed by specific IHL rules. AI and machine learning systems remain tools that must be used to serve human actors, and augment and improve human decision-making, not to replace them.</p>



<p>Ensuring human control and judgement in AI-enabled tasks and decisions that present risks to human life, liberty, and dignity will be needed for compliance with IHL and to preserve a measure of humanity in armed conflict. In order for humans to meaningfully play their role, these systems may need to be designed and used to inform decision-making at “human speed” rather than accelerate decisions to “machine speed”.</p>



<p>The nature of human-AI interaction required will likely depend on the specific application, the associated consequences, and the particular IHL rules and other pertinent law that apply in the circumstances – as well as on ethical considerations.</p>



<p>However, ensuring human control and judgement in the use of AI systems will not be sufficient in itself. In order to build trust in the functioning of a given AI system, it will be important to ensure, including through weapon reviews: predictability and reliability – or safety – in the operation of the system and the consequences of its use; transparency – or explainability – in how the system functions and why it reaches its output; and lack of bias in the design and use of the system.</p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-and-machine-learning-symposium-artificial-intelligence-and-machine-learning-in-armed-conflict/">AI and Machine Learning Symposium: Artificial Intelligence and Machine Learning in Armed Conflict</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-and-machine-learning-symposium-artificial-intelligence-and-machine-learning-in-armed-conflict/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The EU White Paper on Artificial Intelligence: the five requirements</title>
		<link>https://www.aiuniverse.xyz/the-eu-white-paper-on-artificial-intelligence-the-five-requirements/</link>
					<comments>https://www.aiuniverse.xyz/the-eu-white-paper-on-artificial-intelligence-the-five-requirements/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 17 Apr 2020 10:44:16 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[AI systems]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[COVID 19]]></category>
		<category><![CDATA[data analysis]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8243</guid>

					<description><![CDATA[<p>Source: jdsupra.com Artificial intelligence (AI) remains one of the main features of most European countries’ strategies, even during these times of the COVID-19 emergency. AI can in <a class="read-more-link" href="https://www.aiuniverse.xyz/the-eu-white-paper-on-artificial-intelligence-the-five-requirements/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-eu-white-paper-on-artificial-intelligence-the-five-requirements/">The EU White Paper on Artificial Intelligence: the five requirements</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: jdsupra.com</p>



<p>Artificial intelligence (AI) remains one of the main features of most European countries’ strategies, even during these times of the COVID-19 emergency. AI can in fact not only improve health care systems but also be a fundamental tool to analyze data to fight and prevent pandemics.</p>



<p>While there is little doubt about the benefits that can be drawn, there are also increasing concerns about how to effectively address the risks associated with the use of AI systems. Such concerns include, among others, data privacy risks – AI may easily be used to de-anonymize individuals’ data (see this previous bite on this point) – and potential breaches of other fundamental rights, including freedom of expression, non-discrimination and human dignity.</p>



<p>There has been a demand for a common approach to address such concerns, in order to give citizens and corporations enough trust in using (and investing in) AI systems, while also avoiding the market fragmentation that would limit the scale of development throughout Europe.</p>



<p>With this in mind, the European Commission recently published its White Paper on Artificial Intelligence, which is aligned with the key principles set out in the Guidelines on Trustworthy AI published by the EU High-Level Expert Group, namely human agency and oversight, technical robustness and safety, privacy and data governance, transparency and accountability, diversity, non-discrimination and fairness, societal and environmental wellbeing.</p>



<p>In addition to some improvements to the liability regime (such improvements are separately addressed in our TMT Bites), the EU Commission proposes a risk-based approach, with regulatory intervention proportionate to the risk and aimed mainly at “high-risk” AI applications. Such high risk is identified where both the relevant sector (e.g. health care)&nbsp;<strong>and</strong>&nbsp;the intended use involve significant risks.</p>



<p>According to the EU Commission, AI regulations should be based on the following main requirements:</p>



<ol class="wp-block-list">
<li><strong>Training data</strong>&nbsp;– Datasets and their usage should meet the standards set out in the applicable EU safety rules, in addition to the existing provisions of the GDPR and the Law Enforcement Directive. There should also be ad hoc provisions on AI training data. For instance, AI systems should operate on data sets broad enough to cover all the scenarios needed to avoid dangerous situations, thus avoiding unnecessary risks. This also includes taking reasonable measures to avoid discrimination, e.g. where applicable, adequate gender and ethnic coverage.</li>
<li><strong>Record-keeping</strong>&nbsp;– Adequate measures should be taken to avoid the so-called “black box effect”. Accordingly, records should be kept of the data sets used to train and test the AI systems, as well as of their main characteristics. There should also be clear documentation of the programming, training and processes used to build and validate the AI systems. In certain cases, the data themselves should also be kept, although this may entail additional storage costs.</li>
<li><strong>Information</strong>&nbsp;– AI systems should be transparent. Information on the use of AI systems should be provided, including on their capabilities, limitations and expected level of accuracy. This also implies a proactive approach, e.g. informing individuals when they are interacting with AI systems, while keeping all information concise and understandable.</li>
<li><strong>Robustness</strong>&nbsp;– Many AI technologies are unpredictable and difficult to control, even ex post. There should be an ex-ante assessment of risks, as well as checks that AI systems operate accurately throughout their life cycle, with reproducible outcomes. AI systems should also deal adequately with errors, with processes in place to handle and correct them. Additional regulations should also be drawn up to ensure resilience against attacks and attempts to manipulate the data or the algorithms.</li>
<li><strong>Human oversight</strong>&nbsp;– There should be adequate involvement of human beings, beyond what is already established by the GDPR for automated decision-making. Depending on the circumstances, human oversight should intervene before the output is produced, afterwards, and/or throughout the whole learning and output process. This will depend on the type of system and its usage: for instance, an automated driverless car should have a safety button or similar device allowing a human to take control under certain circumstances; it should also interrupt operations when certain sensors are not operating reliably.</li></ol>



<p>Other requirements may be set for other specific systems, including remote biometric identification, which allows identification at a distance and in a public space of individuals through a set of biometric identifiers (e.g. fingerprints, facial image, etc.) which are compared to other data stored in database(s). Additional requirements may be set, whatever sector is involved, in order to ensure that any such processing is justified, proportionate and subject to adequate safeguards.</p>



<p>The Commission further highlighted that, in order to make future regulations effective, there should be a level playing field, and accordingly any such requirement should be applied to all those that provide AI products or services in the EU, thus including non-EU companies.</p>



<p>The detailed implementation of the above requirements is yet to be determined, including the frameworks for testing and certification.</p>



<p>Do you agree with the above requirements? We would be interested in hearing your views.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-eu-white-paper-on-artificial-intelligence-the-five-requirements/">The EU White Paper on Artificial Intelligence: the five requirements</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-eu-white-paper-on-artificial-intelligence-the-five-requirements/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Things You Should Know About Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/things-you-should-know-about-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/things-you-should-know-about-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 13 Mar 2020 09:46:06 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[AI systems]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Digital assistance]]></category>
		<category><![CDATA[Future]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7413</guid>

					<description><![CDATA[<p>Source: theusbport.com In today’s world that we live in, it seems as if every industry is using artificial intelligence in one way or another and raving about <a class="read-more-link" href="https://www.aiuniverse.xyz/things-you-should-know-about-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/things-you-should-know-about-artificial-intelligence/">Things You Should Know About Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: theusbport.com</p>



<p>In today’s world, it seems as if every industry is using artificial intelligence in one way or another and raving about its benefits. Artificial intelligence has made it possible for machines to receive information, process it using the record of past patterns in their database, and perform tasks that could previously only be performed by humans.</p>



<p>From automated systems to self-driving cars and smart applications, there are many examples of artificial intelligence that we come across every day. However, the concept still appears to be unclear to most people. Many people do not know what AI is, how it is used in different industries, or the fantastic benefits it has to offer. Here, you get a closer look at everything you need to know about artificial intelligence.</p>



<h3 class="wp-block-heading">WHAT IS ARTIFICIAL INTELLIGENCE?</h3>



<p>In simple words, artificial intelligence involves developing systems that can perform tasks that normally require human intelligence. AI systems mimic human intelligence by performing functions that require skills such as speech recognition, visual perception, planning, learning, problem-solving, and decision-making. It is artificial intelligence that has made it possible for virtual assistants like Apple’s Siri and Amazon’s Alexa to understand your commands, and for Facebook to identify you in any photo posted by a friend.</p>



<h3 class="wp-block-heading">WHAT ARE THE DIFFERENT TYPES OF AI?</h3>



<p>There are two main categories of AI:</p>



<p>1. Weak artificial intelligence: These AI systems are designed and trained to perform only a single, narrow task. The main benefit offered by these systems is the automation of tasks.</p>



<p>2. Strong artificial intelligence: These AI systems have ‘human intelligence’ capabilities. When given a new task, they can interpret the data and find a solution on their own. The main benefits of such systems are their problem-solving and decision-making abilities.</p>



<h3 class="wp-block-heading">HOW IS AI BEING USED ACROSS DIFFERENT INDUSTRIES?</h3>



<p>As businesses realize the benefits of artificial intelligence, they are turning to AI services to help them develop systems that drive business growth and give them a competitive edge. Here is how AI is used across different industries:</p>



<p>1. Health Care: AI applications can read and interpret medical reports. Virtual healthcare assistants remind patients to take their medicines, eat healthier, or get their exercise done.</p>



<p>2. Retail: AI systems provide customized shopping recommendations to each customer based on their interests and help with the purchasing decision, increasing the chances of a sale.</p>



<p>3. Manufacturing: Efficient robots can work faster and more accurately than humans. They can also work nonstop, unlike humans, who need breaks. AI systems can also perform predictive analysis and forecast demand, helping decide production quantity.</p>



<p>4. Banking: AI systems have helped improve the precision, effectiveness, and speed of banking transactions. They can quickly identify potentially fraudulent transactions, verify documents, and perform quick, accurate credit scoring.</p>



<p>5. Transportation: AI has allowed the industry to come up with self-driving cars that can sense their environment and reach their destination without any human intervention. It could completely transform the transport system, as the vehicle can analyze traffic and decide the best route to take by itself, significantly reducing travel time.</p>



<h3 class="wp-block-heading">WHAT ARE SOME OF THE BENEFITS OF AI?</h3>



<p>Artificial intelligence has numerous benefits to offer, and this is the reason why it has transformed various industries. Some of these include:</p>



<p>1. Reduces the chances of human error: Humans make mistakes, but a correctly programmed computer makes far fewer. AI systems are highly accurate at processing vast amounts of data and analyzing past patterns, which reduces processing errors and improves the quality of output. For instance, AI has made weather forecasting increasingly accurate by reducing the scope for human error.</p>



<p>2. Automation of repetitive jobs: Whether it is sending payment reminder emails, verifying documents, or performing quality checks, there are plenty of repetitive tasks that each of us has to perform as part of our jobs. AI allows us to automate such tasks, saving human time for more ‘creative’ work.</p>



<p>3. Digital assistance: A lot of companies now provide digital support to their customers, which reduces the need to hire staff for these roles. Many websites now have chatbots that answer customer queries and help them find what they need. Often, these chatbots are so effective that interacting with them feels much the same as talking to an actual human.</p>



<p>4. Improved decision-making: When humans make decisions, it is done by taking many factors into account and is also influenced by the person’s own emotions and judgments. With a smart AI system, the decision is based solely on the programming and information available, resulting in more accurate and quicker decisions.</p>



<h3 class="wp-block-heading">CAN AI BE A CAUSE OF UNEMPLOYMENT IN THE FUTURE?</h3>



<p>The possibility of AI systems replacing the need to hire humans for specific jobs is present. While AI systems can’t replace humans altogether, they have certainly brought significant changes across various industries and are very likely to continue to do so.</p>



<p>There are a lot of jobs that involve routine, repetitive tasks, and AI systems are ideal for automating them. Many jobs involve entering and processing data, verifying documents, sending reminders, or performing quality checks, and these are the jobs most likely to be affected by the introduction of AI systems. However, as is the case with any technological change, new roles will also be created to help implement, manage, and support the functioning of these new AI systems.</p>



<h3 class="wp-block-heading">THE BOTTOM LINE</h3>



<p>The world of AI has come a long way and seems to be continually evolving with time. The question is: will it ever be able to fully match the capabilities of the human brain? It is unlikely, as the human brain is so much more powerful that we still have a long way to go in understanding it fully, let alone developing something that can mimic it effectively. However, AI systems are already in place everywhere you look, and no one can deny the numerous benefits they offer. It is about time we embrace this technology and use it to our advantage wherever possible.</p>
<p>The post <a href="https://www.aiuniverse.xyz/things-you-should-know-about-artificial-intelligence/">Things You Should Know About Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/things-you-should-know-about-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Tackle bias to build trustworthy artificial intelligence systems</title>
		<link>https://www.aiuniverse.xyz/tackle-bias-to-build-trustworthy-artificial-intelligence-systems/</link>
					<comments>https://www.aiuniverse.xyz/tackle-bias-to-build-trustworthy-artificial-intelligence-systems/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 03 Mar 2020 07:30:01 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI systems]]></category>
		<category><![CDATA[systems]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7194</guid>

					<description><![CDATA[<p>Source: dqindia.com Businesses have embraced artificial intelligence with open arms. Ecommerce platforms are leveraging the technology to personalize customer experience and improve customer relations. Healthcare is using <a class="read-more-link" href="https://www.aiuniverse.xyz/tackle-bias-to-build-trustworthy-artificial-intelligence-systems/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/tackle-bias-to-build-trustworthy-artificial-intelligence-systems/">Tackle bias to build trustworthy artificial intelligence systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: dqindia.com</p>



<p>Businesses have embraced artificial intelligence with open arms. Ecommerce platforms are leveraging the technology to personalize the customer experience and improve customer relations. Healthcare is using it for improved medical diagnosis, law enforcement agencies are using it to fight crime, organizations are using it to screen potential candidates, and many other sectors are adopting it as well. However, the other side of the coin is not as inspiring: artificial intelligence also has its share of risks.</p>



<h4 class="wp-block-heading"><strong>Skewed data propagates bias</strong></h4>



<p>The biggest risk that artificial intelligence poses today is that of bias. This bias creeps in when the data used to train the mathematical models is skewed. Since the technology is completely data-driven, the biases in the data reflect in the output.</p>



<p>There are many instances where automated systems have produced sexist and racist outputs. This can be especially scary if government bodies or law enforcement agencies work with skewed data. Taking cognizance of the serious implications biased data can have on policing, the European Commission has suggested training AI systems on unbiased data.</p>



<h4 class="wp-block-heading"><strong>Use smaller data sets to train artificial intelligence systems</strong></h4>



<p>However, it is difficult to ensure that data is 100% bias-free. Moreover, AI systems are often built before the data is cleaned. Therefore, as a measure to facilitate training AI systems on unbiased data, organizations can consider using smaller sets of training data, which can help significantly reduce bias.</p>
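<p>One way the "smaller, balanced data sets" idea can work in practice is to downsample each demographic group to the size of the smallest one, trading data volume for balance. Below is a minimal, hypothetical Python sketch; the <code>balance_by_group</code> helper and the toy rows are inventions for this example, not a method prescribed by any guideline:</p>

```python
# Hypothetical sketch: shrink a skewed training set into a
# smaller but group-balanced one by downsampling each group
# to the size of the smallest group.

from collections import Counter
import random

def balance_by_group(rows, group_key, seed=0):
    """Return a smaller training set where every group has
    as many rows as the smallest group in the input."""
    groups = {}
    for row in rows:
        groups.setdefault(row[group_key], []).append(row)
    n = min(len(members) for members in groups.values())
    rng = random.Random(seed)
    balanced = []
    for members in groups.values():
        balanced.extend(rng.sample(members, n))
    return balanced

# Skewed toy data: group "a" is heavily over-represented.
rows = [{"group": "a", "label": 1}] * 4 + [{"group": "b", "label": 0}]
balanced = balance_by_group(rows, "group")
print(Counter(r["group"] for r in balanced))  # each group equally represented
```

<p>The resulting set is smaller than the original, which is exactly the trade-off the paragraph above describes: fewer rows overall, but no group dominating the training signal.</p>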



<h4 class="wp-block-heading"><strong>Towards a more trustworthy artificial intelligence</strong></h4>



<p>Currently, there is a lot of debate around tackling bias in artificial intelligence. The EU’s Ethics Guidelines for Trustworthy AI state that trustworthy AI should be lawful, ethical, and robust. The guidelines prescribe seven key requirements that AI systems should meet: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability.</p>



<p>Apart from the EU’s guidelines, the Organization for Economic Cooperation and Development (OECD) has also released its Principles on Artificial Intelligence, which have been adopted by 42 countries. These principles will form the basis of practical guidelines for implementation.</p>
<p>The post <a href="https://www.aiuniverse.xyz/tackle-bias-to-build-trustworthy-artificial-intelligence-systems/">Tackle bias to build trustworthy artificial intelligence systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/tackle-bias-to-build-trustworthy-artificial-intelligence-systems/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
