<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Artifical intelligence Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/artifical-intelligence/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/artifical-intelligence/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 19 May 2020 06:51:40 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Peeking Inside the Black Box: Techniques for Making AI Models More Easily Interpretable</title>
		<link>https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/</link>
					<comments>https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 19 May 2020 06:51:38 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Explainability]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Artifical intelligence]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8866</guid>

					<description><![CDATA[<p>Source: rtinsights.com When training a machine learning or AI model, typically the main goal is to make the most accurate prediction possible. Data scientists and machine learning <a class="read-more-link" href="https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/">Peeking Inside the Black Box: Techniques for Making AI Models More Easily Interpretable</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: rtinsights.com</p>



<p>When training a machine learning or AI model, the main goal is typically to make the most accurate prediction possible. Data scientists and machine learning engineers transform their data in myriad ways and tweak algorithms to push that accuracy score as close to 100 percent as possible, which can unintentionally produce a model that is difficult to interpret or that creates ethical quandaries.</p>



<p>Considering the increasing awareness and consequences of faulty AI, explainable AI is going to be “one of the seminal issues that’s going to be facing data science over the next ten years,” Josh Poduska, Chief Data Scientist at Domino Data Lab noted during his talk at the recent virtual Open Data Science Conference (ODSC) East.</p>



<h3 class="wp-block-heading"><strong>What Is Explainable AI?</strong></h3>



<p>Explainable AI, or xAI, is the concept of understanding what is happening “under the hood” of AI models and not just taking the most accurate model and blindly trusting its results.</p>



<p>It is important because machine learning models, and in particular neural networks, have a reputation for being “black boxes,” where we do not really know how the algorithm came up with its prediction. All we know is how well it performed.</p>



<p>Models that are not easily explainable or interpretable can lead to some of the following problems:</p>



<ul class="wp-block-list"><li>Models that are not understood by the end user could be used inappropriately or could, in fact, be wrong altogether.</li><li>Ethical issues can arise when a model is biased toward or against certain groups of people.</li><li>Customers may require interpretable models and, without them, may not use the models at all.</li></ul>



<p>Furthermore, recent regulations, and potentially future ones, may require models to be explainable, at least in certain contexts. As Poduska explains, GDPR gives customers the right to understand why a model gave a certain outcome. For example, if a banking customer’s loan application was rejected, that customer has a right to know what contributed to this model result.</p>



<p>So, how do we address these issues and create AI models that are more easily interpretable? The first step is to understand how one wants to apply the model. Poduska explains that there is a balance between “global” and “local” explainability.</p>



<p>Global interpretability refers to understanding, in general, how a model arrives at its predictions across the many examples it is fed. For instance, if an online store is trying to predict who will buy a certain item, a model may find that people within a certain age range who have bought a similar item in the past will purchase that item.</p>



<p>In the case of local interpretability, one is trying to understand how the model came up with its result for one particular input example. In other words, how much does age versus purchase history affect the prediction of one person’s future buying habits?</p>



<h3 class="wp-block-heading"><strong>Techniques for Understanding AI Reasoning</strong></h3>



<p>One standard option that has been around for a while is feature importance, which is often examined when training decision tree models such as a random forest. However, this method has known shortcomings; impurity-based importance, for example, is biased toward features with many distinct values.</p>
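<p>As a concrete illustration, one common variant of this idea is permutation importance: shuffle a single feature’s column and measure how much the model’s accuracy drops. The sketch below is illustrative only; the toy model, data, and function names are invented for the example, not taken from the talk:</p>

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Estimate feature importance by shuffling one column at a time
    and measuring how much the model's accuracy drops."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature/target relationship
            shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - accuracy(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "model" that only looks at feature 0, so feature 1 should score ~0.
model = lambda row: int(row[0] > 0.5)
X = [[random.Random(i).random(), random.Random(i + 99).random()] for i in range(200)]
y = [int(row[0] > 0.5) for row in X]
imp = permutation_importance(model, X, y)
```

<p>Shuffling the feature the model relies on causes a large accuracy drop, while shuffling an ignored feature causes none, which is exactly the signal an importance score should capture.</p>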



<p>A more sophisticated option is called SHAP (SHapley Additive exPlanations). The basic idea is to estimate each feature’s contribution to a prediction by comparing the model’s output with the feature included against its output with the feature replaced by background values, averaged over many combinations (coalitions) of the other features. The downside is that this method can be very computationally expensive, especially for models with a large number of input features.</p>
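<p>The idea of averaging a feature’s marginal contribution over many random combinations can be sketched in a few lines. This is a toy Monte Carlo estimator, not the optimized SHAP library, and the model and values are invented for illustration; for a linear model the exact Shapley value is known, which provides a sanity check:</p>

```python
import random

def shapley_value(f, x, background, feature, n_samples=2000, seed=0):
    """Monte Carlo estimate of one feature's Shapley value: average the
    feature's marginal contribution over random orderings, with 'absent'
    features replaced by a background (reference) value."""
    rng = random.Random(seed)
    n = len(x)
    total = 0.0
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        pos = order.index(feature)
        present = set(order[:pos])  # features "already revealed"
        z_without = [x[i] if i in present else background[i] for i in range(n)]
        z_with = list(z_without)
        z_with[feature] = x[feature]
        total += f(z_with) - f(z_without)
    return total / n_samples

# For a linear model the exact Shapley value of feature i is
# w[i] * (x[i] - background[i]), which lets us check the estimate.
w = [2.0, -1.0, 0.5]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))
x = [1.0, 3.0, -2.0]
bg = [0.0, 0.0, 0.0]
est = shapley_value(f, x, bg, feature=0)  # exact value here: 2.0
```

<p>The cost is visible in the structure: every sample requires two model evaluations, and a full explanation repeats this for every feature, which is why the real SHAP library relies on model-specific shortcuts such as TreeSHAP.</p>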



<p>For understanding a model on a local level, LIME (Local Interpretable Model-agnostic Explanations) builds a simpler, linear model around each prediction of the original model in order to explain that individual prediction. This method is much faster computationally than SHAP but, as the name implies, addresses only local interpretability.</p>
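<p>The core of LIME, sampling points near one input, weighting them by proximity, and fitting a weighted linear surrogate, fits in a short sketch. This is a simplified illustration with an invented black-box function and parameters (the real library adds interpretable feature representations and regularization); near x = [2, 1] the surrogate’s slopes should approximate the true local gradient [4, 3]:</p>

```python
import numpy as np

def lime_explain(f, x, n_samples=5000, width=0.5, seed=0):
    """LIME-style local explanation sketch: sample points near x, weight
    them by proximity, and fit a weighted linear surrogate whose
    coefficients approximate the black-box model's local behavior."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=width, size=(n_samples, len(x)))
    y = np.array([f(z) for z in Z])
    # Proximity kernel: nearby samples count more in the fit.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / width**2)
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local slopes

# Black box: nonlinear in x0, linear in x1. Near x = [2, 1] the local
# gradient is [2 * x0, 3] = [4, 3].
f = lambda z: z[0] ** 2 + 3 * z[1]
slopes = lime_explain(f, np.array([2.0, 1.0]))
```

<p>Because only a few thousand model evaluations are needed per explanation, this is far cheaper than the coalition averaging that SHAP requires.</p>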



<p>Going even further than the above solutions, some designers of machine learning algorithms are starting to reconstruct the underlying mathematics of these algorithms in order to give better interpretability and high accuracy simultaneously. One such algorithm is AddTree.</p>



<p>When training an AddTree model, one of the hyperparameters of the model is how interpretable the model should be. Depending on how this hyperparameter is set, the AddTree algorithm will train a decision tree model that is either weighted toward better explainability or toward higher accuracy.</p>



<p>For deep neural networks, two options are TCAV and Interpretable CNNs. TCAV (Testing with Concept Activation Vectors) is focused on global interpretability, in particular showing how important different everyday concepts are for making different predictions. For example, how important is color in predicting whether an image is a cat or not?</p>



<p>The Interpretable CNN is a modification of the convolutional neural network in which training forces each filter in a high-level layer to represent a distinct part of an object in an image. For example, when trained on images of cats, a standard CNN may have filters that respond to a mix of different cat parts, whereas an Interpretable CNN has filters that each identify a single part, such as the cat’s head.</p>



<p>If your goal is to be able to better understand and explain an existing model, techniques like SHAP and LIME are good options. However, as the demands for more explainable AI continue to increase, even more models will be built in the coming years that have interpretability baked into the algorithm itself, Poduska predicts.</p>



<p>Poduska previews some of these techniques in his talk. These new algorithms will make it easier for all machine learning practitioners to produce explainable models, which will hopefully make businesses, customers, and governments more comfortable with the ever-increasing reach of AI.</p>
<p>The post <a href="https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/">Peeking Inside the Black Box: Techniques for Making AI Models More Easily Interpretable</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/peeking-inside-the-black-box-techniques-for-making-ai-models-more-easily-interpretable/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine Learning vs. Deep Learning. Which Does Your Business Need?</title>
		<link>https://www.aiuniverse.xyz/machine-learning-vs-deep-learning-which-does-your-business-need/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-vs-deep-learning-which-does-your-business-need/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 15 Feb 2020 06:04:34 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artifical intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6786</guid>

					<description><![CDATA[<p>Source: rtinsights.com In recent years, artificial intelligence research and applications have accelerated at a rapid speed. Simply saying your organization will incorporate AI isn’t as specific as <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-vs-deep-learning-which-does-your-business-need/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-vs-deep-learning-which-does-your-business-need/">Machine Learning vs. Deep Learning. Which Does Your Business Need?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: rtinsights.com</p>



<p>In recent years, artificial intelligence research and applications have accelerated rapidly. Simply saying your organization will incorporate AI isn’t as specific as it once was. There are diverse implementation options for AI, machine learning, and deep learning, and within each of them a series of different algorithms you can leverage to improve operations and establish a competitive edge.</p>



<p>Algorithms are used across almost every industry: they power the recommendation engines on media platforms, the chatbots that support customer service efforts at scale, and the self-driving vehicles being tested by the world’s largest automotive and technology companies. Because AI has become so diverse and works with data in so many ways, companies must carefully evaluate what will work best for them.</p>



<p><strong>Defining AI and What It Means for Your Company</strong></p>



<p>AI is an umbrella term referring to any technology that can evaluate and make decisions based on large volumes of data input. This takes several forms, which can make it difficult for companies to start the implementation process.</p>



<p>While 83% of businesses claim AI is a strategic priority, only 23% say they have successfully incorporated it into processes and product/service offerings. This is likely to change soon. The AI market is expected to surpass $190 billion by 2025, and by next year, spending will reach $57.6 billion.</p>



<p>Why does this matter? Consider the sweeping benefits of AI. PwC illustrates what many technologists have long described as mass automation of the US workforce. As many as 38% of US jobs could be partially or fully automated by the early 2030s, boosting overall labor productivity by 40%. Companies that do not invest will be at a significant disadvantage against those that do. Most businesses know this, with 84% saying AI will help them gain a competitive advantage.</p>



<p>We’re past the point of value recognition. Now is the time for companies to invest, but where, and how? Many executives are concerned that their managers don’t understand AI, and 93% of automation specialists don’t feel prepared to use smarter technologies. A big part of closing that gap is understanding what is available and how to match it to your business case.</p>



<p><strong>Machine Learning vs. Deep Learning</strong></p>



<p>Much as AI refers to several forms of technology (including machine learning and deep learning), deep learning is itself a subset of machine learning. The main difference between the two is the type of data fed to the system.</p>



<p>Machine learning typically uses structured data, where each field holds a single, direct value. Think of an Excel spreadsheet with predetermined values selected for each entry. The data is clean, it’s easy to work with, and there are no nuances to it. This, of course, limits what an algorithm can do with that data.</p>



<p>Deep learning, on the other hand, works with unstructured data, for which there are no set, recognizable answers: the message and text fields on your forms, the transcripts of chat conversations on your website, and the wealth of emails, conversations, and other “messy” data captured every day from billions of users.</p>



<p>So, which is best for your application? It truly does depend on what you are attempting to do. Deep learning certainly sounds more robust, but remember that it works with a messier data set, and for some applications, clarity is key.</p>



<p>Machine learning is best when you have massive volumes of structured data that would take years for a human operator to process. It can be immensely efficient at classifying information, predicting outcomes based on previous behavior and performance, and organizing information together based on key variables.</p>



<p>Deep learning is better for volumes of data that a human mind cannot even fathom. Think of the healthcare industry, for example, where unstructured data in the form of medical notes, exam results, and patient feedback is massive in scope. Or transaction and conversation data for a major bank or retailer – the volume alone makes deep learning a valuable resource. The added depth even more so.</p>



<p><strong>Choosing the Right Algorithm for Your Organization</strong></p>



<p>At this stage, you are still early in the process, determining which general methodologies will work best. To select the algorithm best able to support your efforts, however, you must also consider the following:</p>



<ul class="wp-block-list"><li>Know the data you are working with: Is your data structured or unstructured? Has it been visualized to identify outliers and show the spread of the data? Have you evaluated correlations to find the strongest relationships?</li><li>Clean your data so you can work with it: Most models are affected to some degree by missing data and outliers, and some significantly more so.</li><li>Augment your data: Raw data is rarely ready for modeling, and several steps are needed to make it easier to work with.</li><li>Determine your problem and how you want to fix it: First, map out the input you have, the data that will be fed to the machine. Then map out what you’d like to get back. Are you trying to retrieve a number? A class? A group of inputs? This is the issue you’re trying to solve, and it will determine which algorithm makes the most sense.</li></ul>
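<p>The cleaning and augmenting steps above can be sketched for a single numeric column: fill missing values with the median, clip outliers to a percentile range, and scale to [0, 1]. This is a minimal illustration with invented data, not a production pipeline:</p>

```python
def prepare_column(values):
    """Sketch of the cleaning steps for one numeric column:
    fill missing values with the median, clip outliers to the
    5th-95th percentile range, then scale to [0, 1]."""
    present = sorted(v for v in values if v is not None)
    median = present[len(present) // 2]
    filled = [median if v is None else v for v in values]

    lo = present[int(0.05 * (len(present) - 1))]  # 5th percentile
    hi = present[int(0.95 * (len(present) - 1))]  # 95th percentile
    clipped = [min(max(v, lo), hi) for v in filled]

    span = (hi - lo) or 1.0  # guard against a constant column
    return [(v - lo) / span for v in clipped]

raw = [3.0, None, 5.0, 4.0, 100.0, 2.0, None, 4.5]
clean = prepare_column(raw)  # no missing entries, all values in [0, 1]
```

<p>Even this tiny example shows why the checklist matters: the outlier (100.0) and the missing entries would distort most models if fed in raw.</p>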



<p>There are several tools already available to leverage for your efforts based on the results of the four steps above. This is where you will evaluate the type of model you should use, whether it meets your business goals, and how accurate and actionable it is in context.</p>



<p><strong>Deciding How Best to Utilize AI for Your Company</strong></p>



<p>Once you know what’s needed, how do you implement it? Whom do you hire? What kind of training do existing staff need? Who will spearhead the initiative?</p>



<p>For larger organizations, the first step is to establish a data science team, led by a CIO or CISO with more than just a passing knowledge of AI applications. For smaller companies, a data scientist who can lead the initiative may be enough. These individuals will be responsible for evaluating your core needs and determining which combination of algorithms and support systems will help create value for your organization.</p>



<p>Most importantly, you don’t want to over-invest. Deep learning is capable of incredible things, but if you are working with mostly structured data for straightforward purposes, machine learning can be a much more viable and affordable option, especially for a smaller organization with limited resources.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-vs-deep-learning-which-does-your-business-need/">Machine Learning vs. Deep Learning. Which Does Your Business Need?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-vs-deep-learning-which-does-your-business-need/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
