<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Hyperparameters Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/hyperparameters/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/hyperparameters/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 25 Aug 2020 07:22:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>How Important Is The Role Of Human Competency In Deep Learning Success</title>
		<link>https://www.aiuniverse.xyz/how-important-is-the-role-of-human-competency-in-deep-learning-success/</link>
					<comments>https://www.aiuniverse.xyz/how-important-is-the-role-of-human-competency-in-deep-learning-success/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 25 Aug 2020 07:21:49 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AutoML]]></category>
		<category><![CDATA[dataset]]></category>
		<category><![CDATA[Hyperparameters]]></category>
		<category><![CDATA[ImageNet]]></category>
		<category><![CDATA[reproducibility]]></category>
		<category><![CDATA[Squeezenet]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11171</guid>

					<description><![CDATA[<p>Source:-analyticsindiamag Hyperparameters are usually tuned by a human operator such as an ML engineer. This is still a standard practice despite the great success of AutoML platforms. <a class="read-more-link" href="https://www.aiuniverse.xyz/how-important-is-the-role-of-human-competency-in-deep-learning-success/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-important-is-the-role-of-human-competency-in-deep-learning-success/">How Important Is The Role Of Human Competency In Deep Learning Success</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source:-analyticsindiamag</p>



<p>Hyperparameters are usually tuned by a human operator, such as an ML engineer. This is still standard practice despite the great success of AutoML platforms. Though businesses are undoubtedly embracing AutoML tools more readily, the role of the human operator cannot be disregarded. So the question is: does the performance of machine learning models depend on the competence of the human operator? The answer is, of course, a plain yes. But that alone doesn't suffice: organisations invest heavily in picking the right candidate, so it is crucial to understand this aspect in more detail.</p>



<p>To find out, researchers from Delft University of Technology in the Netherlands surveyed a group of ML engineers of varying expertise. The results of this survey were published recently in a paper titled &#8216;Black Magic in Deep Learning: How Human Skill Impacts Network Training&#8217;.</p>



<p>The extraordinary skill of a human expert at tuning hyperparameters is what the researchers informally refer to as the &#8220;black magic&#8221; of deep learning.</p>



<p>For the experiment, the researchers selected the Squeezenet model, as they found it efficient to train while achieving reasonable accuracy compared to more complex networks. To prevent participants from exploiting model-specific knowledge, the network design was not shared with them.</p>



<p>Participants were given access to 15 common hyperparameters. Four were mandatory: number of epochs, batch size, loss function, and optimiser. The other 11 optional hyperparameters were set to their default values.</p>



<p>Taking size and difficulty into account, the participants were given an image classification task on a subset of ImageNet. The dataset's name was kept under wraps; only the image classification task was revealed to them, along with the dataset statistics: 10 classes, 13,000 training images, 500 validation images, and 500 test images.</p>



<p>The whole experimental procedure can be summarised as follows:</p>



<ol>
<li>The participant enters their information.</li>
<li>The participant submits hyperparameter values and evaluates intermediate training results.</li>
<li>Once training is finished, the participant can either submit a new hyperparameter configuration or end the experiment.</li>
<li>This is repeated until the clock ticks 120 minutes.</li>
</ol>
<p>The researchers segregated the participants by their months of deep learning experience. They collected a total of 463 different hyperparameter combinations from 31 participants: the Novice group contained 8 participants with no experience in deep learning, 12 participants had less than nine months of experience, and the rest had more than nine months of experience.</p>



<p>Whenever a participant submitted their final choice of hyperparameters, the experiment ended, and the optimal hyperparameter configuration was then trained 10 times. “Each of the 10 repeats has a different random seed, while the seeds are the same for each participant,” stated the researchers.</p>



<p>The results showed that human skill does impact accuracy. A few other key findings from this survey are:</p>



<ul>
<li>Even people with similar levels of experience tuned the model to different performance levels.</li>
<li>Even among experts, there can be an accuracy difference of 5%.</li>
<li>More experience correlates with optimisation skill: the trend shows a strong positive correlation between experience and the final performance of the model.</li>
<li>Inexperienced participants usually followed a random search strategy, often starting by tuning optional hyperparameters that may be best left at their defaults initially.</li>
</ul>
<p>On a concluding note, the team behind this work shared a couple of insightful recommendations. The authors underlined the importance of reproducibility and urged researchers to share their final hyperparameter settings. And since it is difficult to tell whether a purported superior performance is simply due to a massive supercomputer, they advise reviewers to pay more attention to reproducibility and baseline comparisons, and to put less emphasis on superior performance.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-important-is-the-role-of-human-competency-in-deep-learning-success/">How Important Is The Role Of Human Competency In Deep Learning Success</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-important-is-the-role-of-human-competency-in-deep-learning-success/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning algorithms explained</title>
		<link>https://www.aiuniverse.xyz/machine-learning-algorithms-explained/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-algorithms-explained/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 10 May 2019 07:13:54 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[Data encoding]]></category>
		<category><![CDATA[Hyperparameters]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Programming]]></category>
		<category><![CDATA[SGD]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3481</guid>

					<description><![CDATA[<p>Source:- infoworld.com Machine learning uses algorithms to turn a data set into a predictive model. Which algorithm works best depends on the problem Machine learning and deep learning <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-algorithms-explained/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-algorithms-explained/">Machine learning algorithms explained</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- infoworld.com</p>
<h3>Machine learning uses algorithms to turn a data set into a predictive model. Which algorithm works best depends on the problem</h3>
<p>Machine learning and deep learning have been widely embraced, and even more widely misunderstood. In this article, I’d like to step back and explain both machine learning and deep learning in basic terms, discuss some of the most common machine learning algorithms, and explain how those algorithms relate to the other pieces of the puzzle of creating predictive models from historical data.</p>
<h2>What are machine learning algorithms?</h2>
<p>Recall that machine learning is a class of methods for automatically creating predictive models from data. Machine learning algorithms are the engines of machine learning, meaning it is the algorithms that turn a data set into a model. Which kind of algorithm works best (supervised, unsupervised, classification, regression, etc.) depends on the kind of problem you’re solving, the computing resources available, and the nature of the data.</p>
<h2>How machine learning works</h2>
<p>Ordinary programming algorithms tell the computer what to do in a straightforward way. For example, sorting algorithms turn unordered data into data ordered by some criteria, often the numeric or alphabetical order of one or more fields in the data.</p>
<p>Linear regression algorithms fit a straight line to numeric data, typically by performing matrix inversions to minimize the squared error between the line and the data. Squared error is used as the metric because you don’t care whether the regression line is above or below the data points; you only care about the distance between the line and the points.</p>
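<p>As an illustrative sketch (with made-up numbers), the closed-form least-squares solution for a single feature minimizes the squared error directly:</p>

```python
# Least-squares fit of a line y = a*x + b via the closed-form solution
# that minimizes the squared error (toy data chosen for illustration).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope = covariance(x, y) / variance(x)
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(round(a, 2), round(b, 2))  # slope near 2, small intercept
```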
<p>Nonlinear regression algorithms, which fit curves (such as polynomials and exponentials) to data, are a little more complicated, because, unlike linear regression problems, they can’t be solved with a deterministic method. Instead, the nonlinear regression algorithms implement some kind of iterative minimization process, often some variation on the method of steepest descent.</p>
<p>Steepest descent basically computes the squared error and its gradient at the current parameter values, picks a step size (aka learning rate), follows the direction of the gradient “down the hill,” and then recomputes the squared error and its gradient at the new parameter values. Eventually, with luck, the process converges. The variants on steepest descent try to improve the convergence properties.</p>
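<p>The loop described above can be sketched in a few lines; the data, learning rate, and iteration count here are toy values chosen for illustration:</p>

```python
# Steepest descent on the squared error for fitting y = a*x (toy sketch).
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # the true slope is 2

a = 0.0    # initial parameter value
lr = 0.02  # step size, aka learning rate
for _ in range(500):
    # gradient of sum((a*x - y)^2) with respect to a
    grad = sum(2 * (a * x - y) * x for x, y in zip(xs, ys))
    a -= lr * grad  # follow the gradient "down the hill"
print(round(a, 4))  # converges toward the true slope, 2
```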
<p>Machine learning algorithms are even less straightforward than nonlinear regression, partly because machine learning dispenses with the constraint of fitting to a specific mathematical function, such as a polynomial. There are two major categories of problems that are often solved by machine learning: regression and classification. Regression is for numeric data (e.g. What is the likely income for someone with a given address and profession?) and classification is for non-numeric data (e.g. Will the applicant default on this loan?).</p>
<p>Prediction problems (e.g. What will the opening price be for Microsoft shares tomorrow?) are a subset of regression problems for time series data. Classification problems are sometimes divided into binary (yes or no) and multi-category problems (animal, vegetable, or mineral).</p>
<h2>Supervised learning vs. unsupervised learning</h2>
<p>Independent of these divisions, there are another two kinds of machine learning algorithms: supervised and unsupervised. In <em>supervised learning</em>, you provide a training data set with answers, such as a set of pictures of animals along with the names of the animals. The goal of that training would be a model that could correctly identify a picture (of a kind of animal that was included in the training set) that it had not previously seen.</p>
<p>In <em>unsupervised learning</em>, the algorithm goes through the data itself and tries to come up with meaningful results. The result might be, for example, a set of clusters of data points that could be related within each cluster. That works better when the clusters don’t overlap.</p>
<p>Training and evaluation turn supervised learning algorithms into models by optimizing their parameters to find the set of values that best matches the ground truth of your data. The algorithms often rely on variants of steepest descent for their optimizers, for example stochastic gradient descent (SGD), which is essentially steepest descent performed multiple times from randomized starting points. Common refinements on SGD add factors that correct the direction of the gradient based on momentum or adjust the learning rate based on progress from one pass through the data (called an epoch) to the next.</p>
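<p>A minimal sketch of SGD on the same kind of toy problem, updating on one example at a time and shuffling the example order each epoch (all values here are illustrative):</p>

```python
import random

# Stochastic gradient descent: per-example updates, reshuffled each epoch.
random.seed(0)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]  # the true slope is 3

a, lr = 0.0, 0.05
for epoch in range(200):
    order = list(range(len(xs)))
    random.shuffle(order)  # new random order each pass through the data
    for i in order:
        grad = 2 * (a * xs[i] - ys[i]) * xs[i]  # gradient for one example
        a -= lr * grad
print(round(a, 3))  # converges toward the true slope, 3
```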
<h2>Data cleaning for machine learning</h2>
<p>There is no such thing as clean data in the wild. To be useful for machine learning, data must be aggressively filtered. For example, you’ll want to:</p>
<ol>
<li>Look at the data and exclude any columns that have a lot of missing data.</li>
<li>Look at the data again and pick the columns you want to use for your prediction. (This is something you may want to vary when you iterate.)</li>
<li>Exclude any rows that still have missing data in the remaining columns.</li>
<li>Correct obvious typos and merge equivalent answers. For example, U.S., US, USA, and America should be merged into a single category.</li>
<li>Exclude rows that have data that is out of range. For example, if you’re analyzing taxi trips within New York City, you’ll want to filter out rows with pick-up or drop-off latitudes and longitudes that are outside the bounding box of the metropolitan area.</li>
</ol>
<p>There is a lot more you can do, but it will depend on the data collected. This can be tedious, but if you set up a data-cleaning step in your machine learning pipeline you can modify and repeat it at will.</p>
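<p>A toy cleaning pass over hypothetical records might look like the following; the field names, merge table, and bounding-box values are all invented for illustration:</p>

```python
# Toy cleaning pass: drop rows with missing fields, merge equivalent
# labels, and filter rows that are out of range (hypothetical records).
rows = [
    {"country": "U.S.", "fare": 12.5, "lat": 40.7},
    {"country": "USA", "fare": None, "lat": 40.8},     # missing fare
    {"country": "America", "fare": 9.0, "lat": 55.0},  # latitude out of range
    {"country": "usa", "fare": 7.0, "lat": 40.6},
]

MERGE = {"u.s.": "US", "us": "US", "usa": "US", "america": "US"}
LAT_RANGE = (40.4, 41.0)  # rough bounding box (illustrative values)

clean = []
for r in rows:
    if r["fare"] is None:  # exclude rows with missing data
        continue
    if not LAT_RANGE[0] <= r["lat"] <= LAT_RANGE[1]:  # exclude out-of-range rows
        continue
    r["country"] = MERGE.get(r["country"].lower(), r["country"])  # merge labels
    clean.append(r)
print(len(clean), [r["country"] for r in clean])
```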
<h2>Data encoding and normalization for machine learning</h2>
<p>To use categorical data for machine classification, you need to encode the text labels into another form. There are two common encodings.</p>
<p>One is <em>label encoding</em>, which means that each text label value is replaced with a number. The other is <em>one-hot encoding</em>, which means that each text label value is turned into a column with a binary value (1 or 0). Most machine learning frameworks have functions that do the conversion for you. In general, one-hot encoding is preferred, as label encoding can sometimes confuse the machine learning algorithm into thinking that the encoded column is ordered.</p>
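<p>Both encodings can be sketched in plain Python (the labels here are made up):</p>

```python
# Label encoding vs. one-hot encoding of a categorical column (sketch).
labels = ["cat", "dog", "cat", "bird"]
vocab = sorted(set(labels))  # ['bird', 'cat', 'dog']

# Label encoding: each text value becomes an integer index.
label_enc = [vocab.index(v) for v in labels]

# One-hot encoding: each text value becomes a 0/1 indicator vector.
one_hot = [[1 if v == c else 0 for c in vocab] for v in labels]

print(label_enc)   # [1, 2, 1, 0]
print(one_hot[0])  # [0, 1, 0]
```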
<p>To use numeric data for machine regression, you usually need to normalize the data. Otherwise, the numbers with larger ranges may dominate the Euclidean distance between <em>feature vectors</em>, their effects can be magnified at the expense of the other fields, and the steepest descent optimization may have difficulty converging. There are a number of ways to normalize and standardize data for ML, including min-max normalization, mean normalization, standardization, and scaling to unit length. This process is often called <em>feature scaling</em>.</p>
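<p>Two of these scalings, sketched on a toy column of numbers:</p>

```python
# Min-max normalization and standardization of one numeric feature (sketch).
values = [10.0, 20.0, 30.0, 40.0]

lo, hi = min(values), max(values)
min_max = [(v - lo) / (hi - lo) for v in values]  # rescaled to [0, 1]

mean = sum(values) / len(values)
var = sum((v - mean) ** 2 for v in values) / len(values)
std = var ** 0.5
standardized = [(v - mean) / std for v in values]  # zero mean, unit variance

print([round(v, 3) for v in min_max])
print([round(z, 3) for z in standardized])
```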
<h2>What are machine learning features?</h2>
<p>Since I mentioned feature vectors in the previous section, I should explain what they are. First of all, a <em>feature</em> is an individual measurable property or characteristic of a phenomenon being observed. The concept of a “feature” is related to that of an explanatory variable, which is used in statistical techniques such as linear regression. Feature vectors combine all of the features for a single row into a numerical vector.</p>
<p>Part of the art of choosing features is to pick a minimum set of <em>independent</em> variables that explain the problem. If two variables are highly correlated, either they need to be combined into a single feature, or one should be dropped. Sometimes people perform principal component analysis to convert correlated variables into a set of linearly uncorrelated variables.</p>
<p>Some of the transformations that people use to construct new features or reduce the dimensionality of feature vectors are simple. For example, subtract <code>Year of Birth</code> from <code>Year of Death</code> and you construct <code>Age at Death</code>, which is a prime independent variable for lifetime and mortality analysis. In other cases, <em>feature construction</em> may not be so obvious.</p>
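<p>A toy sketch of that simple construction (the rows are illustrative):</p>

```python
# Constructing a derived feature: Age at Death = Year of Death - Year of Birth.
people = [("Ada", 1815, 1852), ("Gauss", 1777, 1855)]  # (name, born, died)
ages = [(name, died - born) for name, born, died in people]
print(ages)  # [('Ada', 37), ('Gauss', 78)]
```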
<h2>Common machine learning algorithms</h2>
<p>There are dozens of machine learning algorithms, ranging in complexity from linear regression and logistic regression to deep neural networks and ensembles (combinations of other models). However, some of the most common algorithms include:</p>
<ul>
<li>Linear regression, aka least squares regression (for numeric data)</li>
<li>Logistic regression (for binary classification)</li>
<li>Linear discriminant analysis (for multi-category classification)</li>
<li>Decision trees (for both classification and regression)</li>
<li>Naïve Bayes (for classification)</li>
<li>K-Nearest Neighbors, aka KNN (for both classification and regression)</li>
<li>Learning Vector Quantization, aka LVQ (for classification)</li>
<li>Support Vector Machines, aka SVM (for binary classification)</li>
<li>Random Forests, a type of “bagging” ensemble algorithm (for both classification and regression)</li>
<li>Boosting methods, including AdaBoost and XGBoost, are ensemble algorithms that create a series of models where each new model tries to correct errors from the previous model (for both classification and regression)</li>
</ul>
<p>Where are the neural networks and deep neural networks that we hear so much about? They tend to be compute-intensive to the point of needing GPUs or other specialized hardware, so you should use them only for specialized problems, such as image classification and speech recognition, that aren’t well-suited to simpler algorithms. Note that “deep” means that there are many hidden layers in the neural network.</p>
<p>For more on neural networks and deep learning, see “What deep learning really means.”</p>
<h2>Hyperparameters for machine learning algorithms</h2>
<p>Machine learning algorithms train on data to find the best set of weights for each independent variable that affects the predicted value or class. The algorithms themselves have variables, called hyperparameters. They’re called hyperparameters, as opposed to parameters, because they control the operation of the algorithm rather than the weights being determined.</p>
<p>The most important hyperparameter is often the learning rate, which determines the step size used when finding the next set of weights to try when optimizing. If the learning rate is too high, gradient descent may overshoot the minimum and oscillate or diverge. If the learning rate is too low, progress is so slow that training may stall and never completely converge.</p>
<p>Many other common hyperparameters depend on the algorithms used. Most algorithms have stopping parameters, such as the maximum number of epochs, or the maximum time to run, or the minimum improvement from epoch to epoch. Specific algorithms have hyperparameters that control the shape of their search. For example, a Random Forest Classifier has hyperparameters for minimum samples per leaf, max depth, minimum samples at a split, minimum weight fraction for a leaf, and about 8 more.</p>
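<p>A stopping hyperparameter can be sketched with a toy loss curve; <code>max_epochs</code> and <code>min_delta</code> are invented names for "maximum number of epochs" and "minimum improvement from epoch to epoch":</p>

```python
# Sketch of stopping hyperparameters: halt when the per-epoch improvement
# drops below min_delta, or when max_epochs is reached (toy loss curve).
max_epochs, min_delta = 50, 1e-3
loss, prev = 1.0, float("inf")
epochs_run = 0
for epoch in range(max_epochs):
    loss *= 0.5  # stand-in for one epoch of training halving the loss
    epochs_run += 1
    if prev - loss < min_delta:  # improvement too small: stop early
        break
    prev = loss
print(epochs_run, loss)  # stops well before max_epochs
```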
<h2>Hyperparameter tuning</h2>
<p>Several production machine-learning platforms now offer automatic hyperparameter tuning. Essentially, you tell the system what hyperparameters you want to vary, and possibly what metric you want to optimize, and the system sweeps those hyperparameters across as many runs as you allow. (Google Cloud hyperparameter tuning extracts the appropriate metric from the TensorFlow model, so you don’t have to specify it.)</p>
<p>There are three search algorithms commonly used for sweeping hyperparameters: Bayesian optimization, grid search, and random search. Bayesian optimization tends to be the most efficient.</p>
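<p>Grid search and random search can be sketched in plain Python; the <code>score</code> function here is a stand-in for training a model and returning its validation metric, and all values are illustrative:</p>

```python
import itertools
import random

# Grid search vs. random search over two hyperparameters (sketch).
def score(lr, depth):
    # Stand-in for "train and validate a model"; peaks at lr=0.1, depth=5.
    return -((lr - 0.1) ** 2) - 0.01 * ((depth - 5) ** 2)

lrs = [0.001, 0.01, 0.1, 1.0]
depths = [2, 5, 8]

# Grid search: evaluate every combination in the grid.
best_grid = max(itertools.product(lrs, depths), key=lambda p: score(*p))

# Random search: evaluate a fixed budget of random combinations.
random.seed(0)
trials = [(random.choice(lrs), random.choice(depths)) for _ in range(6)]
best_rand = max(trials, key=lambda p: score(*p))

print(best_grid)  # the grid always finds the best in-grid combination
```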
<p>You would think that tuning as many hyperparameters as possible would give you the best answer. However, unless you are running on your own personal hardware, that could be very expensive. There are diminishing returns, in any case. With experience, you’ll discover which hyperparameters matter the most for your data and choice of algorithms.</p>
<h2>Automated machine learning</h2>
<p>Speaking of choosing algorithms, there is only one way to know which algorithm or ensemble of algorithms will give you the best model for your data, and that’s to try them all. If you also try all the possible normalizations and choices of features, you’re facing a combinatorial explosion.</p>
<p>Trying everything is impractical to do manually, so of course machine learning tool providers have put a lot of effort into releasing AutoML systems. The best ones combine feature engineering with sweeps over algorithms and normalizations. Hyperparameter tuning of the best model or models is often left for later. Feature engineering is a hard problem to automate, however, and not all AutoML systems handle it.</p>
<p>In summary, machine learning algorithms are just one piece of the machine learning puzzle. In addition to algorithm selection (manual or automatic), you’ll need to deal with optimizers, data cleaning, feature selection, feature normalization, and (optionally) hyperparameter tuning.</p>
<p>When you’ve handled all of that and built a model that works for your data, it will be time to deploy the model, and then update it as conditions change. Managing machine learning models in production is, however, a whole other can of worms.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-algorithms-explained/">Machine learning algorithms explained</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-algorithms-explained/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
	</channel>
</rss>
