<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ML techniques Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/ml-techniques/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/ml-techniques/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 20 Aug 2018 06:24:48 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Machine Learning Is Chasing Out DDoS, The Newest Evil In Cyber Security</title>
		<link>https://www.aiuniverse.xyz/machine-learning-is-chasing-out-ddos-the-newest-evil-in-cyber-security/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-is-chasing-out-ddos-the-newest-evil-in-cyber-security/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 20 Aug 2018 06:24:48 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[cyber security]]></category>
		<category><![CDATA[DDoS]]></category>
		<category><![CDATA[DDoS attack]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2765</guid>

					<description><![CDATA[<p>Source &#8211; analyticsindiamag.com One of the most dangerous aspects looming the computer world is security threats. It is estimated that around three trillion dollars are lost in cyber crimes every <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-is-chasing-out-ddos-the-newest-evil-in-cyber-security/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-is-chasing-out-ddos-the-newest-evil-in-cyber-security/">Machine Learning Is Chasing Out DDoS, The Newest Evil In Cyber Security</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; analyticsindiamag.com</p>
<p>One of the most dangerous aspects looming over the computing world is security threats. It is estimated that around three trillion dollars are lost to cybercrime every year, a figure expected to double by 2021. With all of these threats lurking, it is difficult to track and eliminate every one, especially as the number of users rises exponentially.</p>
<p>The most prevalent of the existing cyber threats is the distributed denial of service (DDoS) attack. A DDoS attack is a malicious attempt to disrupt the normal traffic of a targeted server, service or network by overwhelming the target or its surrounding infrastructure with a flood of internet traffic. DDoS attacks have adversely affected businesses on a large scale.</p>
<p>Now, with machine learning prevailing in the tech ecosystem, countering DDoS attacks has found a new approach. In this article, we look at a research paper that uses ML techniques to subdue DDoS attacks.</p>
<h3>Session Initiation Protocol (SIP) And Voice Over Internet Protocol (VoIP)</h3>
<p>Z Tsiatsikas and a team from the University of the Aegean, Greece, have published a new research study in countering DDoS in SIP-based VoIP systems through ML. The reason for choosing VoIP systems is its popularity and spread in the hardware ecosystem. With the growing number of digital devices and the abundant availability of the internet, VoIP is the preferred method for voice and multimedia communications.</p>
<p>The Session Initiation Protocol (SIP) is the popular means of initiating and managing VoIP sessions. A simplified version of the SIP/VoIP architecture is given below:</p>
<ul>
<li><b>User Agent (UA):</b> The active entities in the session, which represent the SIP endpoints. In the context of voice communications, for example, these are the caller and the receiver.</li>
<li><b>SIP Proxy Server:</b> An intermediate entity which acts as both a client and a server during the session. Its role is to send and receive requests and to relay information to and from the users.</li>
<li><b>Registrar:</b> This component handles authentication and registration requests for the UAs.</li>
</ul>
<p>All SIP communication is logged by the VoIP provider. This matters because the logs provide billing and accounting information based on users&#8217; activity. Interestingly, they can also reveal intrusion or suspicious activity in the network, and if neglected they can be a breeding ground for DDoS attacks.</p>
<h3>Aggregating ML Techniques In VoIP</h3>
<p>The researchers consider the same SIP VoIP architecture and use five standard ML classifier algorithms in their experiments, which are as follows:</p>
<ol>
<li>Sequential minimal optimisation</li>
<li>Naive Bayes</li>
<li>Neural networks</li>
<li>Decision trees</li>
<li>Random Forest</li>
</ol>
<p>In the experiment, these algorithms operate directly on the SIP communications. Classification features are generated after the VoIP traffic is anonymised using a keyed-hash message authentication code (HMAC). The algorithms are tested under 15 DDoS attack scenarios, for which the researchers designed a &#8216;test bed&#8217; of DDoS simulations, shown below:</p>
<figure id="attachment_27426" class="wp-caption aligncenter"><img class="wp-image-27426 size-full" src="https://i2.wp.com/www.analyticsindiamag.com/wp-content/uploads/2018/08/ddos1.jpg?resize=835%2C591&amp;ssl=1" alt="DDoS simulation test-bed" width="672" height="476" /><figcaption class="wp-caption-text"><em>DDoS simulation test-bed (Image courtesy: Z Tsiatsikas and researchers)</em></figcaption></figure>
<p><i>“Three or four different Virtual Machines (VMs) have been used for the SIP proxy, the legitimate users, and the generation of the attack traffic depending on the scenario. All VMs run on an i7 processor 2.2 GHz machine having 6GB of RAM. For the SIP proxy, we employed the widely known VoIP server </i><i>Kamailio</i><i> (kam, 2014). We simulated distinct patterns for both legitimate and DoS attack traffic using sipp v.3.21 and sipsak2 tools respectively. Furthermore, for the simulation of DDoS attack, the SIPp-DD tool has been used. The well-known Weka tool has been employed for ML analysis.”</i></p>
<p>Both training and testing use a mix of normal traffic and attack traffic. To simulate attack traffic, the researchers use a range of random high call rates to mimic real attack conditions, whereas the normal traffic uses typical, observed call rates.</p>
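<p>As a rough illustration of this setup (not the paper&#8217;s actual pipeline), the sketch below trains two of the five classifiers on synthetic call-rate features standing in for the hashed SIP message features of the study:</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic traffic: normal sessions use low, steady call rates, while
# attack traffic uses random high call rates, as in the experiment.
normal = rng.normal(loc=10, scale=2, size=(500, 1))    # calls/sec
attack = rng.uniform(low=50, high=500, size=(500, 1))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)                    # 0 = normal, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two of the five classifiers used in the paper.
for clf in (RandomForestClassifier(random_state=0), GaussianNB()):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, clf.score(X_te, y_te))
```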
<p>The training scenario in the experiment is denoted SN1, and the testing scenarios are denoted SN1.1, SN1.2, SN1.3 and so on. A detailed description is given in the paper.</p>
<h3>Performance</h3>
<p>The algorithms fare well compared with non-ML detection. Among them, Random Forest and decision trees come out on top when measured from an intrusion-detection viewpoint; the other three fare below them. In addition, the detection rate drops as the attack traffic rises, meaning heavier DDoS attacks are harder to spot. Ultimately, the ML techniques outclass conventional attack-detection methods.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-is-chasing-out-ddos-the-newest-evil-in-cyber-security/">Machine Learning Is Chasing Out DDoS, The Newest Evil In Cyber Security</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-is-chasing-out-ddos-the-newest-evil-in-cyber-security/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning may be a game-changer for climate prediction</title>
		<link>https://www.aiuniverse.xyz/machine-learning-may-be-a-game-changer-for-climate-prediction/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-may-be-a-game-changer-for-climate-prediction/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 20 Jun 2018 07:29:14 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[game changer]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2510</guid>

					<description><![CDATA[<p>Source &#8211; eurekalert.org A major challenge in current climate prediction models is how to accurately represent clouds and their atmospheric heating and moistening. This challenge is behind the <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-may-be-a-game-changer-for-climate-prediction/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-may-be-a-game-changer-for-climate-prediction/">Machine learning may be a game-changer for climate prediction</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; eurekalert.org</p>
<p>A major challenge in current climate prediction models is how to accurately represent clouds and their atmospheric heating and moistening. This challenge is behind the wide spread in climate model predictions. Yet accurate predictions of global warming in response to increased greenhouse gas concentrations are essential for policy-makers (e.g. the Paris climate agreement).</p>
<p>In a paper recently published online in <em>Geophysical Research Letters</em> (May 23), researchers led by Pierre Gentine, associate professor of earth and environmental engineering at Columbia Engineering, demonstrate that machine learning techniques can be used to tackle this issue and better represent clouds in coarse resolution (~100km) climate models, with the potential to narrow the range of prediction.</p>
<p>&#8220;This could be a real game-changer for climate prediction,&#8221; says Gentine, lead author of the paper, and a member of the Earth Institute and the Data Science Institute. &#8220;We have large uncertainties in our prediction of the response of the Earth&#8217;s climate to rising greenhouse gas concentrations. The primary reason is the representation of clouds and how they respond to a change in those gases. Our study shows that machine-learning techniques help us better represent clouds and thus better predict global and regional climate&#8217;s response to rising greenhouse gas concentrations.&#8221;</p>
<p>The researchers used an idealized setup (an aquaplanet, or a planet without continents) as a proof of concept for their novel approach to convective parameterization based on machine learning. They trained a deep neural network to learn from a simulation that explicitly represents clouds. The machine-learning representation of clouds, which they named the Cloud Brain (CBRAIN), could skillfully predict many of the cloud heating, moistening, and radiative features that are essential to climate simulation.</p>
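<p>The idea behind such an emulator can be sketched in a few lines (the data and target function here are made up; CBRAIN itself is a much larger network trained on cloud-resolving simulation output):</p>

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Stand-in "coarse state" (e.g. temperature and humidity columns) and a
# stand-in subgrid "cloud heating" response a fine-scale model would produce.
X = rng.uniform(-1, 1, size=(2000, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]       # hypothetical subgrid response

# Train a small neural network to emulate the expensive calculation.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X, y)
print("emulator R^2:", net.score(X, y))  # close to 1 means good emulation
```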
<p>Gentine notes, &#8220;Our approach may open up a new possibility for a future of model representation in climate models, which are data driven and are built &#8216;top-down,&#8217; that is, by learning the salient features of the processes we are trying to represent.&#8221;</p>
<p>The researchers also note that, because global temperature sensitivity to CO2 is strongly linked to cloud representation, CBRAIN may also improve estimates of future temperature. They have tested this in fully coupled climate models and have demonstrated very promising results, showing that this could be used to predict greenhouse gas response.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-may-be-a-game-changer-for-climate-prediction/">Machine learning may be a game-changer for climate prediction</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-may-be-a-game-changer-for-climate-prediction/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Why Artificial Intelligence is Important for Cyber-Security</title>
		<link>https://www.aiuniverse.xyz/why-artificial-intelligence-is-important-for-cyber-security/</link>
					<comments>https://www.aiuniverse.xyz/why-artificial-intelligence-is-important-for-cyber-security/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 14 Mar 2018 05:37:33 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[cyber security]]></category>
		<category><![CDATA[ML techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2106</guid>

					<description><![CDATA[<p>Source &#8211; eweek.com There is a lot of hype around artificial Intelligence, and while the technology can be useful, it does have limitations, according to RSA CTO <a class="read-more-link" href="https://www.aiuniverse.xyz/why-artificial-intelligence-is-important-for-cyber-security/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-artificial-intelligence-is-important-for-cyber-security/">Why Artificial Intelligence is Important for Cyber-Security</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; eweek.com</p>
<p>There is a lot of hype around artificial intelligence, and while the technology can be useful, it does have limitations, according to RSA CTO Zulfikar Ramzan.</p>
<p>Speaking at the Dell Technologies Experience at the South by Southwest (SXSW) event in Austin, Texas, on March 12, Ramzan detailed his views on AI in a session titled &#8220;AI: Boon or a Boondoggle?&#8221;</p>
<p>&#8220;There is a tendency to think of AI as this all-encompassing panacea that can solve any problem,&#8221; he said.</p>
<p>Ramzan explained that AI can be somewhat of an abstract concept. What it basically means is that computers can be trained to perform certain kinds of tasks intelligently. Within AI there is the subfield of machine learning, a term he said is often used interchangeably with AI. Machine learning was first defined in 1959 by computer scientist Arthur Samuel as the &#8220;field of study that gives computers the ability to learn without being explicitly programmed,&#8221; Ramzan said.</p>
<p>Machine learning enables computers to learn from data. As such, Ramzan said that if an organization has an interesting data set, it can use a machine learning algorithm to analyze the data and make inferences about the data set to gain meaningful insights that can aid different decision-making processes.</p>
<p><strong>Cyber-Security</strong></p>
<p>Machine learning has a very strong use case inside of cyber-security, according to Ramzan.</p>
<p>&#8220;Cyber-security is about making intelligent decisions based on what is good and what is bad, based on the data that you have in front of you,&#8221; he said. &#8220;That&#8217;s a problem that is suited to machine learning techniques.&#8221;</p>
<p>For example, if an individual gets an email, it is possible to determine whether it is spam using machine learning techniques, he said. Ramzan explained that spam-filtering technologies look for things such as word patterns, where an email was sent from and other reputation characteristics. In addition, machine learning techniques can be used to examine historical email data to help derive the rules needed to identify spam.</p>
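<p>The word-pattern approach he describes can be sketched with a Naive Bayes text classifier; the tiny corpus below is invented for illustration:</p>

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented historical emails with spam/legitimate labels.
emails = [
    "win a free prize now", "cheap meds free offer",
    "claim your free money now", "lunch meeting moved to noon",
    "quarterly report attached", "notes from today's standup",
]
labels = [1, 1, 1, 0, 0, 0]    # 1 = spam, 0 = legitimate

# Learn word patterns from the labelled history, then score new messages.
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(emails, labels)
print(model.predict(["free prize offer now"]))    # [1] -> spam
print(model.predict(["meeting notes attached"]))  # [0] -> legitimate
```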
<p>Machine learning is also playing a role in online fraud detection. Ramzan said machine learning techniques can be used to look at buying patterns and transaction data to understand what a typical transaction is for a given user, which can aid in spotting fraud.</p>
<p>Malware detection is another area where machine learning techniques can be helpful. Ramzan said malware tends to exhibit certain behaviors that are different from legitimate software. He noted that RSA was able to use machine learning to determine that one of its government customers was being attacked by malware from another nation-state.</p>
<p>&#8220;You can actually identify things that would be otherwise unknown,&#8221; Ramzan said. &#8220;There are some great applications of AI and machine learning in the area of cyber-security.&#8221;</p>
<p><strong>Pitfalls and Challenges</strong></p>
<p>AI and machine learning technologies still tend to require some level of human input. Ramzan said human experts in a given domain of analysis are still needed to help configure a machine learning algorithm to have the right classifications and feature identifiers to analyze data.</p>
<p>Beyond some level of human intervention, the most critical part of machine learning in Ramzan&#8217;s view is the data.</p>
<p>&#8220;People get so caught in the cool math, but they forget if you don&#8217;t have good data to begin with, nothing else matters—it&#8217;s just garbage in, garbage out,&#8221; he said.</p>
<p>Data has to be representative of what will actually be encountered in real life. Ultimately, people have to ask the right questions of the right data; otherwise, they won&#8217;t get the correct answers, Ramzan said.</p>
<p>&#8220;You can&#8217;t make good wine from bad grapes,&#8221; he said.</p>
<p>Another challenge identified by Ramzan is class imbalance in data sets. That is, most things in data sets are not bad. For example, the majority of credit card transactions are not fraudulent and most files on a computer are legitimate. With the high volume of legitimate items, Ramzan said, there is a risk of machine learning producing false positives, which needs to be managed.</p>
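<p>The class-imbalance point can be made concrete with a small sketch: on data that is 99 percent legitimate, a classifier that labels everything legitimate scores 99 percent accuracy while catching no fraud at all (the data here is synthetic):</p>

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
y = np.array([0] * 990 + [1] * 10)   # 1% fraudulent, 99% legitimate
X = rng.normal(size=(1000, 3))       # features are irrelevant to the point

# A "classifier" that always predicts the majority class.
dummy = DummyClassifier(strategy="most_frequent").fit(X, y)
pred = dummy.predict(X)
print("accuracy:", accuracy_score(y, pred))      # 0.99, yet useless:
print("fraud recall:", recall_score(y, pred))    # 0.0 -- catches nothing
```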
<p><strong>Adversaries Adapt</strong></p>
<p>There are also few fixed rules when it comes to dealing with agile cyber-security adversaries, in Ramzan&#8217;s view.</p>
<p>&#8220;We&#8217;re dealing with sentient adversaries, people that will adapt, figure out what&#8217;s going on and make changes,&#8221; he said.</p>
<p>Ramzan noted that in his experience, machine learning algorithms typically don&#8217;t assume adversarial scenarios where threats are actively trying to sabotage the algorithm. He added that dealing with active threat adversaries that are highly agile is still an area that machine learning technologies are struggling with.</p>
<p>&#8220;Marketing people won&#8217;t tell you this, but the reality is machine learning algorithms weren&#8217;t designed to deal with bad people. They were designed to deal with legitimate data sets they can learn from,&#8221; Ramzan said.</p>
<p>In his view, AI and machine learning techniques are good at understanding what the norm is, but they are not always as good at figuring out things that are completely beyond an individual&#8217;s comprehension.</p>
<p>&#8220;These techniques [AI and machine learning], while powerful and useful, are not a panacea and they are not going to catch every kind of threat out there,&#8221; he said.</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-artificial-intelligence-is-important-for-cyber-security/">Why Artificial Intelligence is Important for Cyber-Security</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-artificial-intelligence-is-important-for-cyber-security/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>How machine learning can predict and prevent disruptions in reactors</title>
		<link>https://www.aiuniverse.xyz/how-machine-learning-can-predict-and-prevent-disruptions-in-reactors/</link>
					<comments>https://www.aiuniverse.xyz/how-machine-learning-can-predict-and-prevent-disruptions-in-reactors/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 12 Oct 2017 06:21:45 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[electromagnetic radiation]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1456</guid>

					<description><![CDATA[<p>Source &#8211; phys.org Robert Granetz has been a research scientist in MIT&#8217;s Plasma Science and Fusion Center for more than 40 years. He recently gave a talk hosted <a class="read-more-link" href="https://www.aiuniverse.xyz/how-machine-learning-can-predict-and-prevent-disruptions-in-reactors/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-machine-learning-can-predict-and-prevent-disruptions-in-reactors/">How machine learning can predict and prevent disruptions in reactors</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>phys.org</strong></p>
<p>Robert Granetz has been a research scientist in MIT&#8217;s Plasma Science and Fusion Center for more than 40 years. He recently gave a talk hosted by the MIT Energy Initiative (MITEI) on using machine learning to develop a real-time warning system for impending disruptions in fusion reactors. A specialist in magnetohydrodynamic instabilities and disruptions, Granetz discussed how research in this area is bringing us one step closer to creating a stable, net-energy-producing fusion device.</p>
<p><b>Q: What makes plasma different from other states of matter? What are the challenges of working with plasma as an energy source?</b></p>
<p>A: In a gas at normal temperatures, the negatively-charged electrons and positively-charged nuclei are tightly bound into atoms or molecules, which are electrically neutral. Therefore, there are no forces exerted between particles unless they happen to actually collide. (The gravitational force acts between all masses, but gravity is much too weak to be relevant.)</p>
<p>When gas particles do collide, the collisions only involve a pair of particles at a time, and the kinematics of the collision are very simple, just like billiard ball collisions. So we can easily calculate the behaviors of gases. However, at the high temperatures that we need for fusion, the thermal energy of each atom or molecule is much, much greater than the binding energy that holds the electrons and nuclei together, so the neutral particles break up into their constituents, i.e. electrons and nuclei, which we call the &#8220;plasma state.&#8221;</p>
<p>Therefore, in a plasma, all the particles are charged, and there are long-range electric and magnetic forces acting between the particles. A single electron or ion influences the motion of about a billion other electrons and ions simultaneously, and all of those billion other particles are simultaneously influencing every other individual particle. In addition, the electrons and nuclei have extremely different masses, so their velocities are very different. Also, since all the particles are charged, they can interact strongly with electromagnetic radiation. All of these complicating properties mean that in practice, we can&#8217;t accurately calculate the detailed behavior of plasmas from the basic equations of physics.</p>
<p><b>Q: In the context of fusion reactors, what&#8217;s a disruption?</b></p>
<p>A: To date, the tokamak concept for a steady-state fusion reactor outperforms all other concepts in terms of energy confinement. The tokamak relies on driving a large current—of the order of millions of amperes—through the plasma to produce the magnetic field structure required to obtain good energy confinement. However, this large plasma current is somewhat unstable, and is subject to sudden termination, usually with very little warning. When a disruption occurs, the considerable thermal and magnetic energy contained within the plasma is suddenly released very quickly, which can lead to damaging thermal and electromagnetic loads on the reactor structure.</p>
<p>The whole goal of fusion energy is to develop large power plants to generate electrical power on the grid, and replace today&#8217;s fossil-fueled utility power plants, and even replace fission nuclear power plants. But if a fusion power plant is subject to disruptions, its electricity output would suddenly turn off. Even if the most damaging consequences can be avoided, it could be hours or days before the plant can recover and get back online, only to be subject to another disruption at some later time. No utility would want to use fusion energy if that were the case. If we&#8217;re going to rely on the tokamak concept for fusion reactors, we need to avoid or mitigate disruptions.</p>
<p><b>Q: How can machine learning address this problem?</b></p>
<p>A: The signs that a disruption is imminent are often quite subtle. Fusion researchers continuously measure a number of characteristic plasma parameters during a plasma discharge, and we have reason to believe, both from empirical experimental evidence and from theoretical understanding, that some of these measured plasma parameters may provide indications that a disruption is about to occur. But this information is not straightforward to interpret, not just with respect to the occurrence of an impending disruption, but also with regard to the timing of an impending disruption.</p>
<p>In an attempt to solve this problem, my team—which consists of myself, postdoc Cristina Rea, graduate students Kevin Montes and Alex Tinguely, and a dozen scientists at other U.S. and international labs—has built up large databases of measured parameters which we believe are relevant to disruptions, from several years&#8217; worth of experiments on several different tokamaks around the world. We are now applying machine learning techniques to these data to see if we can discern any patterns that would accurately predict whether or not a disruption will be occurring at a specific time in the near future. When dealing with large, complicated datasets, machine learning may be a powerful way of finding subtle patterns in the data that elude human efforts.</p>
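<p>The database-driven approach the team describes can be sketched as follows; the &#8220;plasma parameters&#8221; and the disruption rule below are invented stand-ins, not the team&#8217;s actual features:</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Invented "plasma parameter" snapshots; disruptions follow a made-up rule.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 1.0).astype(int)   # 1 = disruption imminent

# Cross-validate a classifier over the labelled snapshot database.
clf = RandomForestClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```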
<p>The post <a href="https://www.aiuniverse.xyz/how-machine-learning-can-predict-and-prevent-disruptions-in-reactors/">How machine learning can predict and prevent disruptions in reactors</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-machine-learning-can-predict-and-prevent-disruptions-in-reactors/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Protect and serve: fraud fighting finds a partner in machine learning</title>
		<link>https://www.aiuniverse.xyz/protect-and-serve-fraud-fighting-finds-a-partner-in-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/protect-and-serve-fraud-fighting-finds-a-partner-in-machine-learning/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 07 Oct 2017 07:44:57 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[Information Security]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML techniques]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1398</guid>

					<description><![CDATA[<p>Source &#8211; csoonline.com October is one of my favorite months of the year in Oregon, where I live. I call it “pumpkin patch season” and the colors are <a class="read-more-link" href="https://www.aiuniverse.xyz/protect-and-serve-fraud-fighting-finds-a-partner-in-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/protect-and-serve-fraud-fighting-finds-a-partner-in-machine-learning/">Protect and serve: fraud fighting finds a partner in machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>csoonline.com</strong></p>
<p>October is one of my favorite months of the year in Oregon, where I live. I call it “pumpkin patch season” and the colors are magnificent, as the season changes. For many of my friends, October weekends are spent squarely rooted in front of the TV watching football. And in case it isn’t penciled in your calendar – it’s also National Cyber Security Awareness Month (NCSAM).</p>
<h2>Celebrate cybersecurity?</h2>
<p>Developed by the National Cyber Security Alliance and the U.S. Department of Homeland Security, NCSAM seeks to unite industry and government organizations in the goal of providing Americans a safer and more secure online experience. Now in its 14<sup>th</sup> year, the month feels more relevant than ever, given the almost-daily breaches reported in the news.</p>
<p>You may ask, “How does one celebrate NCSAM?” Good question – there are no costumes or candy involved, so it isn’t like Halloween, but it does have great possibilities as a community event – especially if you’re in the financial services community.</p>
<h2>Easy and secure — can I really have both?</h2>
<p>While most consumer-facing businesses understand the responsibility of protecting their customers, they face a significant hurdle in trying to provide an experience that is both safe and friendly. One industry in particular finds this conundrum confounding: financial services. As financial institutions (FIs) such as banks, credit unions and insurance companies work to protect their customers&#8217; sensitive information, they are searching for solutions that won&#8217;t compromise customer service quality &#8211; essentially trying to strike a balance between low-friction experiences for users and proper risk mitigation for the business.</p>
<p>Machine learning is a viable option for these organizations. Because machine learning solutions can learn from previous observations and make inferences about future behavior, their value in fighting fraud is clear.</p>
<h2>Machine learning is being funded by FIs</h2>
<p>According to a recent report from iovation and Aite Group, more FIs than ever are looking to machine learning to improve fraud mitigation and customer experience. Sixty-eight percent of North American FIs cite machine learning analytics as a high-priority investment over the next few years. Today&#8217;s heightened threat level is likely the new normal for organizations and consumers, so early adopters of machine-learning solutions will not only reduce fraud but also gain a major advantage over their competitors.</p>
<p>A good example of the struggle FIs face is the omni-channel approach to customer engagement, which brings both opportunity and risk. While it is great that consumers now have multiple points of interaction with their accounts (ATMs, call centers, email), here is the rub: each channel also opens up more attack vectors for hackers to exploit.</p>
<h3 class="body">Broaden your perspective to really leverage machine learning</h3>
<p>While the adoption of machine learning analytics is a step in the right direction for FIs, embracing a broader fraud fighting solution, if possible, is their best bet. By taking advantage of customer data and applying advanced machine learning techniques, FIs can create insights that not only prevent fraud, but provide more convenient and applicable authentication methods for customers.</p>
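<p>One common way to turn customer data into such insights is unsupervised anomaly detection: learn what a user&#8217;s typical transactions look like, then flag outliers. The sketch below is purely illustrative, not any vendor&#8217;s product:</p>

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# A customer's usual purchases: modest amounts at familiar hours.
usual = np.column_stack([rng.normal(40, 10, 500),   # amount ($)
                         rng.normal(14, 2, 500)])   # hour of day
model = IsolationForest(random_state=0).fit(usual)

print(model.predict([[45, 13]]))    # [1]  -> typical transaction
print(model.predict([[4000, 3]]))   # [-1] -> anomalous, worth reviewing
```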
<p>As much as I support (and even celebrate) NCSAM, it’s my hope that it won’t be needed much longer. Instead, I envision a world in which organizations and individuals will learn to incorporate cyber-awareness into their daily lives, and hopefully, stay one step ahead of hackers and fraudsters. Until then, I’m eager to see where machine learning takes us. Data is our new currency, so let’s keep it out of hackers’ pockets.</p>
<p>The post <a href="https://www.aiuniverse.xyz/protect-and-serve-fraud-fighting-finds-a-partner-in-machine-learning/">Protect and serve: fraud fighting finds a partner in machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/protect-and-serve-fraud-fighting-finds-a-partner-in-machine-learning/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
