<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>machine learning techniques Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/machine-learning-techniques/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/machine-learning-techniques/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 21 Aug 2020 07:14:01 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Machine learning techniques for detecting electrode misplacement and interchanges when recording ECGs: A systematic review and meta-analysis</title>
		<link>https://www.aiuniverse.xyz/machine-learning-techniques-for-detecting-electrode-misplacement-and-interchanges-when-recording-ecgs-a-systematic-review-and-meta-analysis/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-techniques-for-detecting-electrode-misplacement-and-interchanges-when-recording-ecgs-a-systematic-review-and-meta-analysis/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 21 Aug 2020 07:13:47 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Chest leads]]></category>
		<category><![CDATA[electrode misplacement]]></category>
		<category><![CDATA[lead misplacement]]></category>
		<category><![CDATA[Limb leads]]></category>
		<category><![CDATA[machine learning techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11108</guid>

					<description><![CDATA[<p>Source: ScienceDirect. Highlights: ECGs with electrode misplacement can simulate abnormalities such as ectopic rhythm, chamber enlargement or myocardial infarction, which can lead to significant diagnostic errors such as false <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-techniques-for-detecting-electrode-misplacement-and-interchanges-when-recording-ecgs-a-systematic-review-and-meta-analysis/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-techniques-for-detecting-electrode-misplacement-and-interchanges-when-recording-ecgs-a-systematic-review-and-meta-analysis/">Machine learning techniques for detecting electrode misplacement and interchanges when recording ECGs: A systematic review and meta-analysis</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: ScienceDirect</p>



<p><strong>Highlights</strong></p>
<ul>
<li>ECGs with electrode misplacement can simulate abnormalities such as ectopic rhythm, chamber enlargement or myocardial infarction, which can lead to significant diagnostic errors such as false positive diagnoses of anterior infarction, ventricular hypertrophy, ischemia, or Brugada syndrome.</li>
<li>V2 is the most sensitive misplaced electrode with regard to the change in the signal, followed by V3, V4 and V1, while V1 and V2 are the most frequently misplaced electrodes (>50%).</li>
<li>Vertical misplacement of V1 and V2 can show a spurious rSr´ pattern.</li>
<li>LA-LL is the most challenging electrode misplacement/interchange scenario for ML to solve.</li>
</ul>



<p><strong>Abstract</strong><br><strong>Introduction</strong><br>Electrode misplacement and interchange errors are known problems when recording the 12‑lead electrocardiogram (ECG). Automatic detection of these errors could play an important role in improving clinical decision making and outcomes in cardiac care. The objectives of this systematic review and meta-analysis are to 1) study the impact of electrode misplacement on ECG signals and ECG interpretation, 2) determine the most challenging electrode misplacements to detect using machine learning (ML), 3) analyse the performance of ML algorithms that detect electrode misplacement or interchange in terms of sensitivity and specificity, and 4) identify the most commonly used ML technique for detecting electrode misplacement/interchange. This review analysed the current literature on electrode misplacement/interchange recognition accuracy using machine learning techniques.</p>



<p><strong>Method</strong><br>A search of three online databases (IEEE, PubMed and ScienceDirect) identified 228 articles, and 3 further articles were included from additional sources suggested by co-authors. According to the eligibility criteria, 14 articles were selected and considered for qualitative analysis and meta-analysis.</p>



<p><strong>Results</strong><br>The articles showed the effect of lead interchange on ECG morphology and, as a consequence, on patient diagnoses. Statistical analysis of the included articles found that machine learning performance is high in detecting electrode misplacement/interchange, except for left arm/left leg (LA-LL) interchange.</p>



<p><strong>Conclusion</strong><br>This review emphasises the importance of detecting electrode misplacement in ECG diagnosis and its effects on decision making. Machine learning shows promise in detecting lead misplacement/interchange and highlights an opportunity to develop and operationalise deep learning algorithms, such as convolutional neural networks (CNNs), to detect electrode misplacement/interchange.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-techniques-for-detecting-electrode-misplacement-and-interchanges-when-recording-ecgs-a-systematic-review-and-meta-analysis/">Machine learning techniques for detecting electrode misplacement and interchanges when recording ECGs: A systematic review and meta-analysis</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-techniques-for-detecting-electrode-misplacement-and-interchanges-when-recording-ecgs-a-systematic-review-and-meta-analysis/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Amazon Web Services boosts machine learning to treat depression</title>
		<link>https://www.aiuniverse.xyz/amazon-web-services-boosts-machine-learning-to-treat-depression/</link>
					<comments>https://www.aiuniverse.xyz/amazon-web-services-boosts-machine-learning-to-treat-depression/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 31 Aug 2018 05:08:19 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Amazon Web Services]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[data scientists]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[depression]]></category>
		<category><![CDATA[machine learning techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2799</guid>

					<description><![CDATA[<p>Source &#8211; healthcareitnews.com Pharmaceutical company Takeda and research and development data science institute ConvergeHEALTH by Deloitte have partnered to study patient datasets to better understand the etiology, <a class="read-more-link" href="https://www.aiuniverse.xyz/amazon-web-services-boosts-machine-learning-to-treat-depression/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/amazon-web-services-boosts-machine-learning-to-treat-depression/">Amazon Web Services boosts machine learning to treat depression</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; healthcareitnews.com</p>
<p>Pharmaceutical company Takeda and research and development data science institute ConvergeHEALTH by Deloitte have partnered to study patient datasets to better understand the etiology, progression and most effective therapies for difficult diseases.</p>
<p>Using insurance claims information including diagnoses, medical procedures and prescriptions, they ran linear and non-linear models on disease datasets like treatment-resistant depression. The goal was to identify data factors with the highest impact on predicting patient outcomes.</p>
<p>By combining the right data and the right questions, the organizations improved the predictability of deep learning models, allowing for the analysis of wider and more complex data sets and a better understanding of patient trajectories.</p>
<p>They also identified potential for these machine-learning techniques for use on other difficult to diagnose diseases, to determine what patients are more prone to these illnesses and the best courses for personalized treatment.</p>
<p>&#8220;In severe depression, patients often go through multiple medications before finding one that works,&#8221; said Dan Housman, chief technology officer at ConvergeHEALTH by Deloitte. &#8220;This testing process can be challenging for patients and their psychiatrists.&#8221;</p>
<p>The approach is &#8220;prescribed after other medications did not work in what is deemed a treatment resistant patient,&#8221; he added. &#8220;We&#8217;re interested in looking at depression patients and their journey between treatments to better understand which patients may fall into the treatment resistant category and when a certain switch will be sustained without further switches.&#8221;</p>
<p>The organizations are using claims data sets with machine learning to build predictive models to determine the patients who may be resistant and the medications or classes of depression medications for patients to switch between.</p>
<p>With effective predictive models, they can work to adjust guidelines or provide digital diagnostic tools that look at patient histories to identify who would likely benefit from switching to a product earlier or potentially using it as a first-line treatment.</p>
<p>&#8220;The benefit to the patient is a shorter journey to a drug that will keep them well and less time struggling with their depression,&#8221; Housman explained. &#8220;The benefit to Takeda is to be able to build tools both with guidelines or decision support systems to help physicians find the patients who can benefit from our products.”</p>
<p>“Predicting who will likely fail or succeed with a drug is a very challenging problem to determine given the many nuances in medical records,&#8221; he added.</p>
<p>Patient histories include related temporal events, comorbidities, diagnostic pathways and procedures. So the organizations worked through data science and machine learning, ultimately testing deep learning methods to determine the predictability of medication switches and determine if they could isolate patterns useful for practicing medicine.</p>
<p>&#8220;We’re generally looking to use AI, machine learning and deep learning to demonstrate that we can predict a future event in a data set with good accuracy, while also looking to understand the factors or patterns in the data that are important for driving that prediction,&#8221; Housman said.</p>
<p>&#8220;We used traditional machine learning models that are able to identify among the thousands of potential features in a patient record both which ones are most predictive and given the ensemble of many features what the prediction is of an event happening,&#8221; he added.</p>
<p>The data scientists do this by turning the data they have available into training and test data sets. The training data allows them to hone models. The test data allows them to see how those models perform against data where they already know the results but don&#8217;t provide them to the algorithm. This allows them to measure how accurate the models are.</p>
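<p>The train/test discipline described above can be sketched in a few lines. The snippet below is a toy illustration only (a made-up one-feature dataset and a simple threshold "model", not the organizations' actual pipeline): the model is fit on the training split alone, then scored on held-out records it never saw.</p>

```python
import random

random.seed(0)
# Hypothetical records: one risk score per patient and a 0/1 outcome.
# The outcome is a simple function of the score so there is a pattern to learn.
data = [(x, int(x > 5.0)) for x in (random.uniform(0, 10) for _ in range(200))]

random.shuffle(data)
train, test = data[:150], data[150:]          # hold out 25% for evaluation

# "Train": pick the decision threshold that best separates outcomes,
# using the training split only.
best_t = max(range(101), key=lambda t: sum((x > t / 10) == bool(y) for x, y in train))
threshold = best_t / 10

# "Test": score the frozen model on records it has never seen.
accuracy = sum((x > threshold) == bool(y) for x, y in test) / len(test)
print(threshold, accuracy)
```

<p>Because the test labels are withheld during fitting, the test accuracy estimates how the model would behave on genuinely new patients, which is exactly the measurement the article describes.</p>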
<p>&#8220;These prediction systems such as random forest are already very powerful tools but they fall short in certain key areas,&#8221; Housman said. &#8220;One of those areas is in looking at the timeline of a patient&#8217;s record.”</p>
<p>&#8220;Recurrent neural network deep learning algorithms had demonstrated great utility in helping to recognize patterns in natural language because they can learn not just from the words in the sentence, but also the relative order of one word relative to the others,&#8221; he explained. &#8220;So we used deep learning through recurrent neural networks to obtain better scores on our tests, presumably by being able to factor the order of patient events.&#8221;</p>
<p>Historically it&#8217;s been difficult, expensive and time-consuming to run machine learning experiments on large data sets because low-cost, high-performance computing was not available on demand. To execute these capabilities, the organizations needed to solve a number of scaling problems with computing and also needed tools for executing the analyses.</p>
<p>&#8220;We leveraged the Amazon Web Services computing systems including GPU on demand servers in order to build and train the models,&#8221; Housman said. &#8220;To manage the creation of pipelines and execution of machine learning models we used Deloitte&#8217;s Deep Miner tools and Amazon&#8217;s underlying SageMaker tools for managing execution of the machine learning jobs.”</p>
<p>&#8220;The analytical tools, data availability, and scalable computational infrastructure has brought the cost of doing data science experiments like these within reach for many projects that previously would have been too expensive to consider,&#8221; he added.</p>
<p>Housman said the results of the application of the various artificial intelligence methods were promising.</p>
<p>&#8220;AUC, area under curve, scores that manage the matrix of true positives, true negatives, false positives, and false negatives for predictive power are what we use to determine the effectiveness of our models,&#8221; he explained. &#8220;An AUC score of 50 percent would mean our model was close to random at getting a prediction right, which is not a good model. A score of 100 percent would mean the model was perfect.&#8221;</p>
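<p>The AUC Housman quotes has a convenient rank interpretation: it is the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch with made-up scores (not the study's data):</p>

```python
def auc(scores, labels):
    """Probability that a random positive outranks a random negative
    (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores; 1 = the patient later switched treatment.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   1,   0,   0]
print(auc(scores, labels))   # 0.8125: better than random (0.5), short of perfect (1.0)
```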
<p>The researchers said they were encouraged that the models using different techniques demonstrated increasing predictive power. In treatment-resistant depression they found that AUC rose from a low of 55.1 percent with traditional linear models to 90.2 percent using RNN deep learning models.</p>
<p>&#8220;We were able to look at the key features among hundreds of thousands of features and could see that most of the features related to known etiology of the disease but also some unknown correlations that we can investigate further,&#8221; Housman said.</p>
<p>&#8220;The encouraging factor is that we know that the deep learning algorithms can use temporal patterns to predict treatment switches,&#8221; he added, &#8220;But we don&#8217;t know what those patterns are yet because the deep learning models are opaque.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/amazon-web-services-boosts-machine-learning-to-treat-depression/">Amazon Web Services boosts machine learning to treat depression</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/amazon-web-services-boosts-machine-learning-to-treat-depression/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>What Is Deep Learning and How Does it Relate to AI?</title>
		<link>https://www.aiuniverse.xyz/what-is-deep-learning-and-how-does-it-relate-to-ai/</link>
					<comments>https://www.aiuniverse.xyz/what-is-deep-learning-and-how-does-it-relate-to-ai/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 20 Apr 2018 06:16:01 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[machine learning techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2256</guid>

					<description><![CDATA[<p>Source &#8211; cmswire.com Google’s AlphaGo made history in May 2017 when it defeated Ke Jie, the world’s reigning champion of the ancient Chinese game Go. It was the first computer <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-deep-learning-and-how-does-it-relate-to-ai/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-deep-learning-and-how-does-it-relate-to-ai/">What Is Deep Learning and How Does it Relate to AI?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; cmswire.com</p>
<p>Google’s AlphaGo made history in May 2017 when it defeated Ke Jie, the world’s reigning champion of the ancient Chinese game Go. It was the first computer program to defeat a professional human Go player, much less a world champion. Later that year, Google introduced AlphaGo Zero, an even more powerful iteration of AlphaGo.</p>
<p>Anyone wanting to understand the difference between artificial intelligence and deep learning can start by understanding the difference between AlphaGo and AlphaGo Zero. With AlphaGo, Google trained the original AlphaGo to play by teaching it to look at data from the top players, said Avi Reichental, CEO of XponentialWorks. Within a short period of time it was able to beat almost all standing champions hands down, he said. But with AlphaGo Zero, instead of having an algorithm look at lots of data from other players, Google taught the system the rules of the game and let the algorithm learn how to improve on its own, Reichental said. The end result, he said, is a computational power unparalleled in speed and intelligence.</p>
<p>Without a doubt artificial intelligence is becoming more common in our daily and business lives. It is making appearances in voice assistants and chatbots, as well as in complex business applications. As it does, it is important to learn to distinguish among the different types of AI, such as deep learning.</p>
<h2>Defining AI and Its Many Iterations</h2>
<p>Starting with the basics, AI is a concept of getting a computer or machine or robot to do what previously only humans could do, said Mark Stadtmueller, VP of Product Strategy at Lucd. Machine learning is a type of AI where algorithms are used to analyze data, he continued. “Machine learning analysis involves looking for patterns within the data and creating and refining a model/equation that best approximates the data pattern. With this model/equation, predictions can be made on new data that follows that data pattern.”</p>
<p>Neural networks are a type of machine learning in which brain neuron behavior is approximated to model many input values to determine or predict an outcome, Stadtmueller said. When many layers of neurons are used, it is called a deep neural network. “Deep neural networks have been very successful in improving the accuracy of speech recognition, computer vision, natural language processing and other predictive capabilities,” he said. When using deep neural networks, people refer to it as deep learning, Stadtmueller said. “So deep learning is the act of using a deep neural network to perform machine learning, which is a type of AI.”</p>
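<p>Stadtmueller's "many layers" can be made concrete in a few lines. The sketch below is an illustrative stub (assuming NumPy, with random untrained weights): it stacks linear-plus-nonlinearity layers, which is all a deep neural network's forward pass is.</p>

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)   # the nonlinearity between layers

def forward(x, layers):
    """Run an input through a stack of (weights, bias) layers."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)     # hidden layer: linear map, then nonlinearity
    W, b = layers[-1]
    return W @ x + b            # final layer left linear: the prediction

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 1]            # 4 inputs -> two hidden layers of 8 -> 1 output
layers = [(0.5 * rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(sizes, sizes[1:])]

y = forward(rng.standard_normal(4), layers)
print(y.shape)                  # one prediction for the input
```

<p>Training then consists of adjusting every W and b so the output approximates the data pattern, which is the "creating and refining a model/equation" step Stadtmueller describes.</p>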
<h2>Deep Learning’s Use of Very Raw Inputs</h2>
<p>One way deep learning differentiates itself from other forms of AI is its use of very raw inputs, such as sound or images, rather than inputs that have already had substantial processing, according to Ted Dunning, chief application architect at MapR. “Earlier approaches to machine learning systems typically required substantial domain knowledge to pre-process raw data into so-called features that were then used by the learning system,” he explained. “Deep learning aims to learn these features, hopefully avoiding most or all of the substantial efforts required for feature engineering.” For instance, he said, with image recognition, a non-deep learning approach might have edge, blob and line detectors while a deep learning approach would work directly with pixels and would learn about edges and other features.</p>
<h2>When Does Deep Learning Work Best?</h2>
<p>Since deep learning is one of the most advanced forms of AI, then it follows that it is better to use it compared with less advanced forms of machine learning, correct? Not so fast, said Akash Ganapathi, co-founder and CEO of Trill A.I. “The advantage of deep learning over other types of AI is that you can achieve greater accuracy than standard machine learning techniques with less bias,” he said. “However, this is strongly dependent on the amount of data available and the nature of the problem. Every problem is unique, and if deep learning techniques are applied naively, it may result in worse performance.”</p>
<p>Oftentimes in commercial or business settings, simple questions are best answered through standard statistical or machine learning techniques, Ganapathi said, not deep learning. This is usually because the amount of data required to apply deep learning techniques is not readily available. That said, when the data is carefully stored and maintained, deep learning can provide extraordinary commercial value, according to Ganapathi. The key to success, he said, is having a clean, large data set and a business question that is worth answering.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-deep-learning-and-how-does-it-relate-to-ai/">What Is Deep Learning and How Does it Relate to AI?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-deep-learning-and-how-does-it-relate-to-ai/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial intelligence just made guessing your password a whole lot easier</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-just-made-guessing-your-password-a-whole-lot-easier/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-just-made-guessing-your-password-a-whole-lot-easier/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 16 Sep 2017 06:56:49 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[computer science]]></category>
		<category><![CDATA[computer scientist]]></category>
		<category><![CDATA[machine learning techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1158</guid>

					<description><![CDATA[<p>Source &#8211; sciencemag.org Last week, the credit reporting agency Equifax announced that malicious hackers had leaked the personal information of 143 million people in their system. That’s reason for concern, <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-just-made-guessing-your-password-a-whole-lot-easier/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-just-made-guessing-your-password-a-whole-lot-easier/">Artificial intelligence just made guessing your password a whole lot easier</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>sciencemag.org</strong></p>
<p>Last week, the credit reporting agency Equifax announced that malicious hackers had leaked the personal information of 143 million people in their system. That’s reason for concern, of course, but if a hacker wants to access your online data by simply guessing your password, you’re probably toast in less than an hour. Now, there’s more bad news: Scientists have harnessed the power of artificial intelligence (AI) to create a program that, combined with existing tools, figured out more than a quarter of the passwords from a set of more than 43 million LinkedIn profiles. Yet the researchers say the technology may also be used to beat baddies at their own game.</p>
<p>The work could help average users and companies measure the strength of passwords, says Thomas Ristenpart, a computer scientist who studies computer security at Cornell Tech in New York City but was not involved with the study. “The new technique could also potentially be used to generate decoy passwords to help detect breaches.”</p>
<p>The strongest password guessing programs, John the Ripper and hashCat, use several techniques. One is simple brute force, in which they randomly try lots of combinations of characters until they get the right one. But other approaches involve extrapolating from previously leaked passwords and probability methods to guess each character in a password based on what came before. On some sites, these programs have guessed more than 90% of passwords. But they’ve required many years of manual coding to build up their plans of attack.</p>
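<p>The brute-force half of that description is easy to illustrate. The toy sketch below (a hypothetical lowercase "password", nothing like the real cracking tools named above) enumerates candidates in order and reports how many guesses the target took; the count grows exponentially with length, which is why real tools also lean on statistics from leaked passwords.</p>

```python
from itertools import product
import string

def brute_force(target, alphabet=string.ascii_lowercase, max_len=3):
    """Try every candidate up to max_len; return the guess count on a hit."""
    tried = 0
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            tried += 1
            if "".join(combo) == target:
                return tried
    return None   # not found within max_len

print(brute_force("cab"))   # 26 + 676 + 1354 = 2056 guesses
```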
<p>The new study aimed to speed this up by applying deep learning, a brain-inspired approach at the cutting edge of AI. Researchers at Stevens Institute of Technology in Hoboken, New Jersey, started with a so-called generative adversarial network, or GAN, which comprises two artificial neural networks. A “generator” attempts to produce artificial outputs (like images) that resemble real examples (actual photos), while a “discriminator” tries to detect real from fake. They help refine each other until the generator becomes a skilled counterfeiter.</p>
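<p>The generator/discriminator tug-of-war can be sketched without any deep learning library. The toy below (a one-parameter "generator" matching a 1-D Gaussian, emphatically not PassGAN) alternates the two gradient steps: the discriminator learns to tell real samples from fakes, and the generator shifts its output to fool it.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

theta = 0.0           # generator parameter: fakes are theta + unit noise
w, b = 1.0, 0.0       # discriminator: logistic regression D(x) = sigmoid(w*x + b)
lr = 0.05

for _ in range(2000):
    real = rng.normal(4.0, 1.0, 32)           # "real data": mean 4
    fake = theta + rng.normal(0.0, 1.0, 32)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * np.mean((1.0 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1.0 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. make fakes look real.
    fake = theta + rng.normal(0.0, 1.0, 32)
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1.0 - d_fake) * w)

print(round(theta, 2))   # drifts toward the real mean as the generator improves
```

<p>When the fakes become indistinguishable, the discriminator's best response approaches 0.5 everywhere and the generator's gradient vanishes; that equilibrium is the "skilled counterfeiter" the analogy describes.</p>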
<p>Giuseppe Ateniese, a computer scientist at Stevens and paper co-author, compares the generator and discriminator to a police sketch artist and eye witness, respectively; the sketch artist is trying to produce something that can pass as an accurate portrait of the criminal. GANs have been used to make realistic images, but have not been applied much to text.</p>
<p>The Stevens team created a GAN it called PassGAN and compared it with two versions of hashCat and one version of John the Ripper. The scientists fed each tool tens of millions of leaked passwords from a gaming site called RockYou, and asked them to generate hundreds of millions of new passwords on their own. Then they counted how many of these new passwords matched a set of leaked passwords from LinkedIn, as a measure of how successful they’d be at cracking them.</p>
<p>On its own, PassGAN generated 12% of the passwords in the LinkedIn set, whereas its three competitors generated between 6% and 23%. But the best performance came from combining PassGAN and hashCat. Together, they were able to crack 27% of passwords in the LinkedIn set, the researchers reported this month in a draft paper posted on arXiv. Even failed passwords from PassGAN seemed pretty realistic: saddracula, santazone, coolarse18.</p>
<p>Using GANs to help guess passwords is “novel,” says Martin Arjovsky, a computer scientist who studies the technology at New York University in New York City. The paper “confirms that there are clear, important problems where applying simple machine learning solutions can bring a crucial advantage,” he says.</p>
<p>Still, Ristenpart says &#8220;It&#8217;s unclear to me if one needs the heavy machinery of GANs to achieve such gains.&#8221; Perhaps even simpler machine learning techniques could have assisted hashCat just as much, he says. (Arjovsky concurs.) Indeed, an efficient neural net produced by Carnegie Mellon University in Pittsburgh, Pennsylvania, recently showed promise, and Ateniese plans to compare it directly with PassGAN before submitting his paper for peer review.</p>
<p>Ateniese says that though in this pilot demonstration PassGAN gave hashCat an assist, he&#8217;s &#8220;certain&#8221; that future iterations could surpass hashCat. That&#8217;s in part because hashCat uses fixed rules and was unable to produce more than 650 million passwords on its own. PassGAN, which invents its own rules, can create passwords indefinitely. &#8220;It&#8217;s generating millions of passwords as we speak,&#8221; he says. Ateniese also says PassGAN will improve with more layers in the neural networks and training on many more leaked passwords.</p>
<p>He compares PassGAN to AlphaGo, the Google DeepMind program that recently beat a human champion at the board game Go using deep learning algorithms. “AlphaGo was devising new strategies that experts had never seen before,” Ateniese says. “So I personally believe that if you give enough data to PassGAN, it will be able to come up with rules that humans cannot think about.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-just-made-guessing-your-password-a-whole-lot-easier/">Artificial intelligence just made guessing your password a whole lot easier</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-just-made-guessing-your-password-a-whole-lot-easier/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Why the future of machine learning will be crunching words</title>
		<link>https://www.aiuniverse.xyz/why-the-future-of-machine-learning-will-be-crunching-words/</link>
					<comments>https://www.aiuniverse.xyz/why-the-future-of-machine-learning-will-be-crunching-words/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 13 Sep 2017 06:31:57 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[digital assets]]></category>
		<category><![CDATA[IT]]></category>
		<category><![CDATA[IT professionals]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[machine learning techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1090</guid>

					<description><![CDATA[<p>Source &#8211; cio.com In recent years, enterprise machine learning has revolved around crunching numbers: analyzing datasets or tracking customer behavior. But what organizations will soon realize is that <a class="read-more-link" href="https://www.aiuniverse.xyz/why-the-future-of-machine-learning-will-be-crunching-words/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-the-future-of-machine-learning-will-be-crunching-words/">Why the future of machine learning will be crunching words</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>cio.com</strong></p>
<p dir="ltr">In recent years, enterprise machine learning has revolved around crunching numbers: analyzing datasets or tracking customer behavior. But what organizations will soon realize is that applying machine learning to content—physical documents, images, presentations and even conversational UIs—removes the cap on who machine learning impacts, and how far its value extends across the enterprise.</p>
<p dir="ltr">Tracking down lost documents and images, or drafting abstracts and case studies only to realize they’ve already been written are just a few of the daily frustrations that we typically consider unavoidable. But as it turns out, it’s these issues specifically—content discovery, tagging and classification—where machine learning is in a strategic position to make a substantial impact.</p>
<p dir="ltr">Numbers-driven algorithms have informed strategy for years now, but applying machine learning to content will likely have a similar, if not greater, impact on the enterprise. As companies generate written, verbal and visual content that’s measured and impactful, they’ll start to realize that numbers aren’t the only insights driving business forward.</p>
<h2 dir="ltr">Solving the problems you never knew existed</h2>
<p dir="ltr">As words and phrases start to fuel machine learning algorithms, each and every member of an organization can reap the benefits. More often than not, these benefits will manifest in ways we never thought possible.</p>
<p dir="ltr">The inability to access, manipulate and leverage the right content at the right time is the hidden speed bump in your daily workflow. Too often, marketers, content creators, IT professionals or project managers are inundated with digital assets—blogs, websites, Google Docs, etc.—each serving separate business goals with different audiences and categories. Hours are wasted writing, editing, organizing and analyzing content that already exists, sparking frustrations and decreasing new output.</p>
<p dir="ltr">However, machine learning techniques can now classify content by category, provide answers to questions while you type, or offer suggestions that add depth to written material. Capabilities like these offer solutions to problems you never thought could be fixed. By building on progress that’s already been made, companies can finally combine creativity with productivity to make future content even more compelling.</p>
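<p dir="ltr">As a minimal sketch of the classification idea above (the categories, keyword lists and sample text are all invented for illustration — a real system would learn these associations from labeled examples rather than hand-written lists):</p>

```python
# Illustrative sketch: routing content items to categories by scoring
# them against per-category keyword sets. Categories and keywords are
# invented for this example; a production classifier would be trained
# on the organization's own labeled content.
from collections import Counter

CATEGORY_KEYWORDS = {
    "finance":   {"revenue", "invoice", "ledger", "margins", "quarterly"},
    "marketing": {"campaign", "audience", "brand", "launch", "ad"},
    "support":   {"password", "account", "reset", "connect", "help"},
}

def classify(text: str) -> str:
    """Return the category whose keyword set best matches the text."""
    words = Counter(text.lower().split())  # missing words count as 0
    scores = {
        cat: sum(words[w] for w in kws)
        for cat, kws in CATEGORY_KEYWORDS.items()
    }
    return max(scores, key=scores.get)

print(classify("Reset my account password"))  # → support
```

<p dir="ltr">The routing logic — score each category, pick the best match — is the same whether the weights are hand-written, as here, or learned from data.</p>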
<p dir="ltr">Internal content is only half of it. Inbound materials like customer service inquiries, emails and requests may seem manageable at first, but can pile up over time. Questions arise around how to utilize this content effectively, and ultimately improve business communication as a whole. Applying machine learning in this context welcomes tools like FAQ generators that quickly consolidate inquiries, identify common questions, and generate FAQ documents accordingly. Simplifiers like these put time back into your schedule, and bring momentum to the enterprise from the inside out.</p>
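<p dir="ltr">The FAQ-generator idea can be sketched in a few lines: group near-duplicate inquiries and surface the most frequent ones as FAQ candidates. Everything below — the crude word-overlap similarity measure and the sample inbox — is an invented, minimal illustration, not any particular product’s method:</p>

```python
# Hedged sketch of an "FAQ generator": cluster near-duplicate inbound
# questions by word overlap (Jaccard similarity), then rank clusters by
# size. The inquiries are invented for illustration.
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two strings, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def faq_candidates(inquiries, threshold=0.5):
    """Greedily cluster similar inquiries; return [representative, count]
    pairs sorted by frequency (most common first)."""
    clusters = []  # each entry: [representative question, count]
    for q in inquiries:
        for cluster in clusters:
            if jaccard(q, cluster[0]) >= threshold:
                cluster[1] += 1
                break
        else:
            clusters.append([q, 1])
    return sorted(clusters, key=lambda c: -c[1])

inbox = [
    "how do i reset my password",
    "how do i reset my password please",
    "where is my invoice",
    "how can i reset my password",
    "where is my invoice for march",
]
for question, count in faq_candidates(inbox):
    print(f"{count}x {question}")
# → 3x how do i reset my password
# → 2x where is my invoice
```

<p dir="ltr">A production system would use stronger text similarity (embeddings rather than word overlap), but the consolidate-then-rank shape is the same.</p>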
<h2 dir="ltr">Discovering machine learning’s value</h2>
<p dir="ltr">As organizations generate better (and smarter) content, the entire company will move down the path of least resistance. With content becoming more intelligent, materials are rolled out more quickly, inbound requests are analyzed and made actionable automatically, and IT isn’t summoned to help with unnecessary tasks.</p>
<p dir="ltr">Even better, with content turning conversational, the potential for machine learning gets even stronger. Right now, virtual assistants like Alexa or Google Assistant lack context, essentially making the term “conversational UI” a misnomer. They are far more instructional than they are conversational, but applying content-driven machine learning to these systems will transform them into discovery mechanisms for the enterprise. Pretty soon, you’ll be able to ask Alexa what image you used in a blog post in June 2015, with the correct image appearing on your screen within seconds. As we inch closer to truly conversational content, we’ll reach a level of efficiency unlike anything that’s come before.</p>
<p dir="ltr">There’s no telling what type of content will flow through the enterprise next. But as content of all shapes and sizes becomes the playground for machine learning, productivity hacks that crunch words, rather than numbers, will start to prove their value.</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-the-future-of-machine-learning-will-be-crunching-words/">Why the future of machine learning will be crunching words</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-the-future-of-machine-learning-will-be-crunching-words/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>How artificial intelligence can help the hunt for new materials</title>
		<link>https://www.aiuniverse.xyz/how-artificial-intelligence-can-help-the-hunt-for-new-materials/</link>
					<comments>https://www.aiuniverse.xyz/how-artificial-intelligence-can-help-the-hunt-for-new-materials/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 12 Aug 2017 05:47:05 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[machine learning techniques]]></category>
		<category><![CDATA[medical devices]]></category>
		<category><![CDATA[new materials]]></category>
		<category><![CDATA[supercomputer]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=601</guid>

					<description><![CDATA[<p>Source &#8211; imeche.org The smart materials of the future are likely to be discovered not in the lab, but on a supercomputer. Materials science has exploded in recent <a class="read-more-link" href="https://www.aiuniverse.xyz/how-artificial-intelligence-can-help-the-hunt-for-new-materials/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-can-help-the-hunt-for-new-materials/">How artificial intelligence can help the hunt for new materials</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>imeche.org</strong></p>
<div class="long-form">
<p class="page-intro">The smart materials of the future are likely to be discovered not in the lab, but on a supercomputer.</p>
</div>
<div class="long-form">
<p>Materials science has exploded in recent years – offering tantalising potential solutions to engineering challenges ranging from higher-capacity batteries to safer medical devices. There are materials that work at the molecular level to fight bacteria, and others that can change shape when introduced to an electric current. Some transform in the heat or cold, while others are soft and spongy when poked, but form a rigid barrier when hit at speed.</p>
<p>Throughout history, the hunt for new substances has been conducted by tinkerers and scientists in labs, or pioneering craftsmen in workshops. Most were stumbled across by luck and then tested to see if they would be useful. Graphene, for instance, was discovered by two researchers at Manchester University who were speculatively playing around with Scotch tape and graphite on a Friday afternoon.</p>
<p>But now, new materials are more likely to be discovered by a supercomputer. Researchers in Europe and the United States are using computer modelling, artificial intelligence and machine learning techniques to predict new materials from ones that are known to exist.</p>
<p>Some are purely hypothetical, but others are being synthesised and tested for potentially useful properties such as magnetism, conductivity or the amount of external force they can undergo without breaking.</p>
<p>Researchers at Basel University, for example, were recently able to predict 90 different forms of a crystal called elpasolite, which could be used as a semiconductor or insulator, or emit light when exposed to radiation.</p>
<h2>Global effort</h2>
<p>There are a number of large projects around the world, including Materials Cloud in Lausanne, and the Center for Material Genomics at Duke University in North Carolina. But the first was the Materials Genome Project at MIT, which was founded by Gerbrand Ceder in 2006.</p>
<p>He took inspiration from the Human Genome Project, an ambitious attempt to create a map of our DNA. “By itself, the human genome was not a recipe for new treatments,” he told <em>Nature</em> last year, “but it gave medicine amazing amounts of basic, quantitative information to start from.”</p>
<p>Now, the same thing is happening with new materials. By creating databases of the properties of various compounds, researchers can speed up the search for potentially useful combinations.</p>
<p>It’s catching on, with a host of start-ups launching in the space including Nutonian, QuesTek Innovations, and Alphastar. In 2011, the US government launched the Materials Genome Initiative, a $500m investment in the field. That helped create a publicly available database of all the new and predicted materials. According to a five-year progress report, the database now includes “more than 66,000 crystalline compounds, 500,000 nano-porous materials, 70,000 electrochemical phase diagrams, 43,000 electronic band structures, and 2,900 full elastic tensors (important for understanding mechanical behaviour)”.</p>
<p>Artificial intelligence isn’t just about increasing the speed of progress. With machine learning, scientists can identify things that would never be spotted in the normal course of research. “Machine learning does not depend on equations that are based on the laws of physics to find patterns and model the data,” explained Dayton Horvath, a research associate at Lux Research and lead author of the report <em>Materials and Informatics: The Next Research Revolution?</em> in an email to <em>Professional Engineering.</em></p>
<p>“Any data type, even if there is no fundamental physical equation that can describe the data (such as color, or chemical resistance), can be used to help discover new materials, and predict the properties of existing and new materials.”</p>
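<p>A toy illustration of the point Horvath makes: a property can be predicted purely by pattern-matching against known examples, with no physics-based equation involved. The compositions and “hardness” values below are invented for the sketch, and the method shown (k-nearest neighbours) is just one simple stand-in for the machine learning techniques the article describes:</p>

```python
# Hedged sketch: predict a material property from composition features
# by averaging over the most similar known materials (k-nearest
# neighbours). All data here is invented for illustration.
import math

# (fraction of element A, fraction of element B) -> measured property
known = [
    ((0.9, 0.1), 5.2),
    ((0.7, 0.3), 6.8),
    ((0.5, 0.5), 8.1),
    ((0.3, 0.7), 7.4),
]

def predict(composition, k=2):
    """Average the property over the k nearest known compositions."""
    by_distance = sorted(known, key=lambda kv: math.dist(composition, kv[0]))
    nearest = by_distance[:k]
    return sum(value for _, value in nearest) / k

print(round(predict((0.6, 0.4)), 2))  # → 7.45
```

<p>No term in this code encodes a law of physics — the prediction comes entirely from similarity to measured data, which is why such methods can handle properties like colour or chemical resistance that lack a governing equation.</p>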
<h2>Accelerating innovation</h2>
<p>Horvath’s report argues that artificial intelligence will accelerate the pace of innovation, with a knock-on effect on every industry that uses materials. It’s an opportunity for engineering companies, but it all relies on good data.</p>
<p>“[They need to] make institutional data accessible so that machine learning algorithms can properly leverage what is arguably an R&amp;D organization’s most valuable asset: decades of amassed data,” Horvath told <em>PE</em>.</p>
<p>There are also publicly available data sets that engineers can use to search for potential materials – either existing ones or predicted ones – that could meet the needs of a particular project they might be working on. “Engineers should be aware of the publicly available materials property, composition, and structure datasets that provide a good starting point for building initial training data sets and qualifying off-the-shelf machine learning algorithms for specific applications,” advises Horvath. He recommends Citrine Informatics, a start-up that provides tutorials on materials informatics, and access to their public database.</p>
<p>Companies are also getting involved in the application of machine learning to materials science. IBM are working with an unnamed company to develop an algorithm that can scan hundreds of thousands of scientific papers and patents for potentially useful discoveries – more than anyone would ever be able to read.</p>
<p>That’s been used to create a database of about 250,000 molecules that can be searched using artificial intelligence to identify ones that might be of interest to that particular researcher’s project. “You may say, ‘I want materials that are soluble,’ or ‘I want materials that can be exposed to light,’” explained Dario Gil, vice president of science and solutions at IBM Research at the EmTech Digital conference in San Francisco in March.</p>
<p>The scientists still have some input in training the algorithm and setting the parameters of the kind of molecule or material characteristics that they’re looking for. Artificial intelligence isn’t replacing them, but it is speeding up the search for new materials, and could help smooth the way for all manner of engineering advances. “What we’re doing is greatly accelerating the rate of progress and the productivity of the scientists,” says Gil.</p>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-can-help-the-hunt-for-new-materials/">How artificial intelligence can help the hunt for new materials</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-artificial-intelligence-can-help-the-hunt-for-new-materials/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
			</item>
		<item>
		<title>Here’s your complete guide to machine learning and AI</title>
		<link>https://www.aiuniverse.xyz/heres-your-complete-guide-to-machine-learning-and-ai/</link>
					<comments>https://www.aiuniverse.xyz/heres-your-complete-guide-to-machine-learning-and-ai/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 21 Jul 2017 07:46:34 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[computer science]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[machine learning techniques]]></category>
		<category><![CDATA[Snapchat]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=207</guid>

					<description><![CDATA[<p>Source &#8211; engadget.com Although the field of machine learning has only recently reached mainstream notoriety, machines have been taught to learn and make predictions from data for decades. <a class="read-more-link" href="https://www.aiuniverse.xyz/heres-your-complete-guide-to-machine-learning-and-ai/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/heres-your-complete-guide-to-machine-learning-and-ai/">Here’s your complete guide to machine learning and AI</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>engadget.com</strong></p>
<div class="o-article_block pb-15 pb-5@m- o-subtle_divider">
<div class="grid@tl+">
<div class="grid@tl+__cell col-8-of-12@tl+">
<div class="article-text c-gray-1">
<p>Although the field of machine learning has only recently reached mainstream notoriety, machines have been taught to learn and make predictions from data for decades. You probably already make use of some of its more established applications like email spam filtering, optical character recognition of scanned documents, and animated dog ears in Snapchat.</p>
</div>
</div>
</div>
</div>
<div class="js-notMobileReferredByFbTw">
<div class="o-article_block pb-15 pb-5@m- mt-n35 mt-n25@m mt-n15@s">
<div class="grid@tl+">
<div class="full-width@tp- grid@tl+__cell col-8-of-12@tl+">
<div class="article-text c-gray-1 no-review">
<p>But the true potential of machine learning has yet to be reached. If you want to make sense of this exciting area of computer science, and help build the future of AI-powered devices and services, the Complete Machine Learning Bundle is an essential overview of its commonly-used technologies and programming techniques. These courses typically cost hundreds of dollars when purchased separately, but you can get the full bundle for just $39 from GDGT Deals.</p>
<p>This collection includes the following courses:</p>
<ul>
<li><strong>Quant Trading Using Machine Learning:</strong> Machine learning isn&#8217;t just used in the tech industry. Finance professionals are using machine learning to build stronger financial models, and better inform investment decisions. This crash course in quantitative trading will help you apply machine learning techniques to sophisticated financial concepts.</li>
<li><strong>Learn By Example: Statistics and Data Science in R</strong>: Master one of the most popular programming languages used in data science and statistical computing. In just 9 hours, you&#8217;ll understand the ins and outs of R, including how to apply it to the world of data.</li>
<li><strong>Learn By Example: Hadoop &amp; MapReduce for Big Data Problems:</strong> In order to be a professional machine learning expert, you need to have professional working knowledge around big data. This course will teach you about Hadoop and MapReduce, two essential frameworks, with 14 hours of interactive learning.</li>
<li><strong>Byte Size Chunks: Java Object-Oriented Programming &amp; Design: </strong>Java isn&#8217;t just one of the oldest programming languages; it remains one of the most essential. Master this object-oriented language with seven hours of video instruction.</li>
<li><strong>An Introduction to Machine Learning &amp; NLP in Python: </strong>Now that you&#8217;re familiar with some must-know machine learning foundational concepts, you&#8217;ll dive into the meat of the topic. You&#8217;ll study natural language processing and the basics of how machine learning technologies like speech recognition are built. Then, you&#8217;ll dive in and build them yourself.</li>
<li><strong>Byte-Sized-Chunks: Twitter Sentiment Analysis (in Python): </strong>Why is sentiment analysis important? You&#8217;re about to find out, with 4 hours of instruction on this essential practice that helps machines solve problems more effectively.</li>
<li><strong>Byte-Sized-Chunks: Decision Trees and Random Forests: </strong>As they say: practice makes perfect—and this course will put your knowledge to the test. Learn to implement two essential learning techniques as you explore solving an age-old machine learning problem.</li>
<li><strong>An Introduction To Deep Learning &amp; Computer Vision: </strong>How do you build an artificial neural network from scratch? This course will train you in deep learning, and put your skills to work.</li>
<li><strong>Byte-Sized-Chunks: Recommendation Systems: </strong>Conquer the challenge of building online recommendation tools—a skill that could very well be put to the test working for an ecommerce company.</li>
<li><strong>From 0 to 1: Learn Python Programming: </strong>Maybe you&#8217;ll start here or maybe you&#8217;ll end here—either way, Python is an essential skill for any tech professional to know. Dive in with 10.5 hours of instruction.</li>
</ul>
<p>The combined value of all course content is over $700, but you can get it today for just $39. If you&#8217;re at all interested in getting hands-on experience in this exciting field of technology, get the Complete Machine Learning Bundle today.</p>
</div>
</div>
</div>
</div>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/heres-your-complete-guide-to-machine-learning-and-ai/">Here’s your complete guide to machine learning and AI</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/heres-your-complete-guide-to-machine-learning-and-ai/feed/</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
			</item>
	</channel>
</rss>
