<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ML algorithms Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/ml-algorithms/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/ml-algorithms/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Wed, 14 Oct 2020 04:59:36 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>DeepTarget Uses ML Algorithms to Improve Customer Experience</title>
		<link>https://www.aiuniverse.xyz/deeptarget-uses-ml-algorithms-to-improve-customer-experience/</link>
					<comments>https://www.aiuniverse.xyz/deeptarget-uses-ml-algorithms-to-improve-customer-experience/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 14 Oct 2020 04:59:32 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[DeepTarget]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12183</guid>

					<description><![CDATA[<p>Source: martechcube.com DeepTarget Inc., a solution provider that utilizes data mining and machine learning to deliver targeted communications across digital channels for banks and credit unions, announced <a class="read-more-link" href="https://www.aiuniverse.xyz/deeptarget-uses-ml-algorithms-to-improve-customer-experience/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deeptarget-uses-ml-algorithms-to-improve-customer-experience/">DeepTarget Uses ML Algorithms to Improve Customer Experience</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: martechcube.com</p>



<p>DeepTarget Inc., a solution provider that utilizes data mining and machine learning to deliver targeted communications across digital channels for banks and credit unions, announced its use of leading-edge machine learning techniques paired with historical, proprietary data to help community financial institutions complete more transactions, open more accounts and enhance the quality of the customer and member experience.</p>



<p>Machine learning, a subset of artificial intelligence (AI) involving computer algorithms that improve automatically through experience, has been widely adopted in the financial services industry and has proven successful in helping financial institutions better tailor products and services to consumers. In fact, FinTech News recently highlighted that machine learning offers the financial services industry “exceptional benefits like more efficient processes, better financial analysis, and customer engagement.”</p>



<p>With its rich Digital Experience Platform (DXP) and innovative 3D StoryTeller, DeepTarget is unique in the industry in helping financial institutions design and execute intelligent cross-channel marketing campaigns that leverage the latest machine learning technology. By utilizing a predictive model that targets specific audiences with the highest propensity to purchase a particular product, financial institutions can calculate the likelihood that each user will open a specific account. Combined with the patent-pending 3D StoryTeller capability, financial institutions of all sizes can now drive customer engagement by delivering unique, captivating AI-powered personalized financial stories to individual account holders. This latest innovation, inspired by social media, is powered by the DeepTarget “brain” – an advanced Digital Experience Platform developed over several years and already integrated with multiple digital banking systems.</p>
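<p>The propensity-scoring idea described above can be sketched in a few lines. This is an illustrative toy, not DeepTarget's system: the feature names, synthetic data, and label rule are all invented for the example.</p>

```python
# Toy propensity model: estimate each account holder's likelihood of
# opening a specific product, then target the highest-scoring decile.
# All features and data here are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic history: [tenure_years, monthly_balance, product_count]
X = rng.normal(loc=[5, 3000, 2], scale=[2, 1000, 1], size=(500, 3))
# Invented label rule: 1 if the customer opened the target account
y = (0.3 * X[:, 0] + 0.001 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(0, 1, 500) > 5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Propensity score per user; campaign targets the top decile.
scores = model.predict_proba(X)[:, 1]
top_decile = np.argsort(scores)[-len(scores) // 10:]
```

<p>In practice the scores would feed a campaign rule ("contact everyone above 0.7") rather than be consumed directly.</p>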



<p>“Delivering rich content and relevant offers is critical to customer success,” said Jill Homan, President of DeepTarget. “Machine learning lets FIs further automate the process, lessening the burden on marketing staff. Utilizing the machine learning model provides complete flexibility for the financial institution to mix and match targeting methods per campaign – either AI, rules, or list-based targeting for truly relevant and human-like engagements. Offering this technology is critical in enabling financial institutions of all sizes to use techniques and insights previously reserved for only the largest institutions.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/deeptarget-uses-ml-algorithms-to-improve-customer-experience/">DeepTarget Uses ML Algorithms to Improve Customer Experience</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deeptarget-uses-ml-algorithms-to-improve-customer-experience/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>OverOps Brings Machine Learning to DevOps</title>
		<link>https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/</link>
					<comments>https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 17 Aug 2018 05:58:04 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[application development]]></category>
		<category><![CDATA[application programming]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[IT]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<category><![CDATA[OverOps]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2746</guid>

					<description><![CDATA[<p>Source &#8211; devops.com OverOps has launched a namesake platform employing machine learning algorithms to capture data from an IT environment that identify potential issues before a DevOps team <a class="read-more-link" href="https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/">OverOps Brings Machine Learning to DevOps</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; devops.com</p>
<p>OverOps has launched a namesake platform employing machine learning algorithms to capture data from an IT environment and identify potential issues before a DevOps team decides to promote an application into production.</p>
<p>Company CTO Tal Weiss said the OverOps Platform is unique in that, rather than relying on log data, it combines static and dynamic analysis of code as it executes to detect issues. That data can then be accessed either via dashboards or shared with other tools via an open application programming interface (API). The dashboards included with the OverOps Platform are based on the open source Grafana project.</p>
<p>That approach makes it possible to advance usage of artificial intelligence (AI) within IT operations without necessarily requiring that every tool in a DevOps pipeline be upgraded to include support for machine learning algorithms, Weiss said.</p>
<p>The platform also includes access to an AWS Lambda-based framework, or a separate on-premises serverless computing framework, that enables DevOps teams to create their own custom functions and workflows.</p>
<p>Weiss said OverOps is designed to capture machine data about every error and exception at the moment it occurs, including details such as the value of all variables across the execution stack, the frequency and failure rate of each error, the classification of new and reintroduced errors and the associated release numbers for each event. Log data is, by comparison, relatively shallow, making precise root cause analysis challenging, he said, noting the OverOps Platform offers visibility into the uncaught and swallowed exceptions that would otherwise be unavailable in log files.</p>
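<p>The kind of state capture described here, recording the value of every variable in every frame at the moment an exception occurs, can be approximated with Python's standard traceback machinery. This is an illustrative sketch, not the OverOps implementation; the function and variable names are invented.</p>

```python
# Sketch: snapshot the variables in every stack frame of an active
# exception, in the spirit of what the article describes.
import sys

def capture_exception_state():
    """Return the error plus the locals of each frame on its stack."""
    _, exc, tb = sys.exc_info()
    frames = []
    while tb is not None:
        frame = tb.tb_frame
        frames.append({
            "function": frame.f_code.co_name,
            "line": tb.tb_lineno,
            "locals": dict(frame.f_locals),  # value of each local variable
        })
        tb = tb.tb_next
    return {"error": repr(exc), "stack": frames}

def risky(divisor):
    numerator = 42
    return numerator / divisor

try:
    risky(0)
except ZeroDivisionError:
    snapshot = capture_exception_state()
```

<p>A production agent would also attach release numbers and deduplicate recurring errors to track their frequency and failure rate, as the article notes.</p>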
<p>DevOps teams spend an inordinate amount of time analyzing log files in the hopes of discovering an anomaly. But as IT environments continue to scale out, analyzing millions, possibly even billions, of log files becomes impractical. OverOps is making the case for employing machine learning algorithms to analyze events before the log file is even created, which eliminates the need to find some way to store log files before they can be analyzed.</p>
<p>There’s naturally a lot of trepidation about using machine learning algorithms and other forms of AI to manage IT. But as the complexity of IT environments continues to increase, it’s clear DevOps teams will need to rely more on AI to manage IT at levels of scale that were once considered unimaginable. For example, while microservices based on containers may accelerate the rate at which applications can be developed and updated, they also can introduce a phenomenal amount of operational complexity. Most DevOps professionals would rather automate as much of the manual labor associated with operations as possible, especially if that leads to more certainty about the quality of the software being promoted into a production environment.</p>
<p>Of course, while making use of machine learning algorithms to analyze code represents a step forward in terms of automation, it’s still a very long way from eliminating the need for DevOps teams altogether.</p>
<p>The post <a href="https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/">OverOps Brings Machine Learning to DevOps</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Advancing security and ensuring privacy with machine learning</title>
		<link>https://www.aiuniverse.xyz/advancing-security-and-ensuring-privacy-with-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/advancing-security-and-ensuring-privacy-with-machine-learning/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 26 Jul 2018 05:46:08 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Advancing security]]></category>
		<category><![CDATA[AI security]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2654</guid>

					<description><![CDATA[<p>Source &#8211; helpnetsecurity.com The Internet has many issues: lack of encryption and its governance, questionable marketing techniques, a misinformed average user. These issues are as old as the <a class="read-more-link" href="https://www.aiuniverse.xyz/advancing-security-and-ensuring-privacy-with-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/advancing-security-and-ensuring-privacy-with-machine-learning/">Advancing security and ensuring privacy with machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; helpnetsecurity.com</p>
<p>The Internet has many issues: lack of encryption and its governance, questionable marketing techniques, a misinformed average user. These issues are as old as the Internet itself. And machine learning algorithms can become the right tool to solve them.</p>
<p><strong>1999 to 2018: Better connectivity, legacy technology</strong></p>
<p>Back in 1999, only 4 percent of the world’s population was online. Today the number has reached 49 percent, and it just keeps growing. The migration from the offline world was followed by a rapid development of numerous online services and the advancement of connected devices.</p>
<p>The growth was so fast that most of the new industries didn’t implement adequate processes to ensure privacy and security.</p>
<p>This is not surprising knowing that getting hacked is rather common. Infections can be caught by merely opening the wrong email. Sharing information with a seemingly reputable company online could result in sensitive data leaks. Any device could become a zombie in a botnet without showing obvious signs of it.</p>
<p>Numerous data breaches threaten identities and bank accounts. Personal data is sold on the deep web for anything from $50 for health records to $1,000 for bank account information. 91 percent of Americans agree that people have lost control over the collection and usage of personal information.</p>
<p>Technology has evolved and malicious actors are catching up swiftly, but security is lagging behind.</p>
<p><strong>Artificial intelligence as a tool to ensure privacy and security</strong></p>
<p>Artificial Intelligence (AI) creates a lot of heated discussion today. It is seen as a marketing tool, an alternative term for statistical analysis, or an overhyped magical cure for all tech problems. Yet the key message is that, after years of development, machine learning algorithms are finally delivering valuable results.</p>
<p>Machine learning is used in search engines, image classification, voice recognition, and many other areas. Its usage is growing in medicine, communications, transport, and gaming. Even in its early stages, AI is delivering results that would be hard for human analysts to match without help from machines.</p>
<p>The discussion is shifting: the internet is moving away from using AI as a buzzword towards embracing it as a tool.</p>
<h3>How can AI algorithms advance user privacy and security?</h3>
<p>The key issue that cybersecurity companies face today is striking a balance between security and privacy. DNS blacklisting is the most popular legacy method of ensuring security: it involves keeping a database of malicious websites and blocking those websites on user computers.</p>
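<p>The legacy blacklisting approach described above amounts to a lookup against a database of known-bad domains. A minimal sketch, with hypothetical domain names, looks like this:</p>

```python
# Sketch of legacy DNS blacklisting: allow a hostname only if neither it
# nor any parent domain appears in a known-malicious database.
# Domain names here are hypothetical examples.
BLACKLIST = {"malware.example", "phish.example"}

def should_block(hostname: str) -> bool:
    """Block if the domain or any parent domain is blacklisted."""
    parts = hostname.lower().split(".")
    # Check every suffix: "a.b.c" -> "a.b.c", "b.c", "c"
    return any(".".join(parts[i:]) in BLACKLIST for i in range(len(parts)))
```

<p>Note the limitation the article implies: the database only knows sites that have already been reported, which is why behavioral, metadata-driven approaches are proposed as a complement.</p>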
<p>To achieve that, the legacy security companies have to read the communication between the user computer and the web server where the website is stored. The process is called Deep Packet Inspection (DPI). During it, user privacy might not be preserved: it’s the equivalent of your postman reading your letters.</p>
<p>The metadata, which is unencrypted, contains enough information to provide the necessary protection to the end user. Security can be provided without DPI by analyzing only the metadata: the small portion of data found in the header (the “label”) of each data packet. Security companies can use machine learning, create behavioral profiles, and use only the metadata of the packets that travel from the user to the website servers.</p>
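<p>Metadata-only inspection can be illustrated by parsing just the IPv4 header and deliberately never reading the payload. This is a minimal sketch of the idea, not any vendor's actual pipeline; the example addresses are from the documentation ranges.</p>

```python
# Sketch of metadata-only packet inspection: read the 20-byte IPv4
# header ("the label") and never touch the payload that follows it.
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Extract routing metadata from the first 20 bytes of an IPv4 packet."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": version_ihl >> 4,
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                     # 6 = TCP, 17 = UDP
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
        # packet[20:] (the payload) is deliberately never read.
    }

# Hand-built header: a TCP packet from 192.0.2.1 to 198.51.100.7
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([192, 0, 2, 1]), bytes([198, 51, 100, 7]))
meta = parse_ipv4_header(header + b"payload-we-never-inspect")
```

<p>Fields like these, source, destination, protocol, and packet sizes over time, are the raw material for the behavioral profiles the article mentions, without the postman ever opening the letter.</p>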
<h3>Why artificial intelligence?</h3>
<p>Security algorithms powered by AI are beneficial because of three reasons:</p>
<ul>
<li><strong>Speed.</strong> Training such algorithms is much faster than collecting rules that define malicious actors.</li>
<li><strong>Precision.</strong> Copious amounts of data can be analyzed easily; therefore more precise results are delivered. It also enables personalization.</li>
<li><strong>Privacy.</strong> You don’t need to use large amounts of data, or intrusive methods such as DPI, to provide the results.</li>
</ul>
<p>The key takeaway here is that Artificial Intelligence is a tool. It has its limitations and even at the current speed of advancement, AI is not going to replace highly skilled human workers anytime soon.</p>
<p>That said, artificial intelligence can help businesses to provide better service for their users while keeping their privacy intact. It can help create that balance that legacy strategies are missing.</p>
<h3>About CUJO AI</h3>
<p>CUJO AI is the leading artificial intelligence company providing network operators with AI-driven solutions, including AI security, advanced device identification, advanced parental controls, and network analytics. The CUJO AI Platform creates intuitive end-user facing applications for LAN and wireless (mobile and public wifi). Each solution can be implemented as a white-label offering. CUJO AI was recently listed as a “Vendor to Watch” and a “Cool Vendor in IoT security” by research company Gartner. In May 2018, the company closed a strategic Series B round, led by Charter Communications, valuing the company in excess of $100M. CUJO AI was selected as one of the World Economic Forum’s Technology Pioneers 2018.</p>
<p>The post <a href="https://www.aiuniverse.xyz/advancing-security-and-ensuring-privacy-with-machine-learning/">Advancing security and ensuring privacy with machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/advancing-security-and-ensuring-privacy-with-machine-learning/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Writing the future of machine learning and invention</title>
		<link>https://www.aiuniverse.xyz/writing-the-future-of-machine-learning-and-invention/</link>
					<comments>https://www.aiuniverse.xyz/writing-the-future-of-machine-learning-and-invention/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 12 Jun 2018 05:51:55 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Future]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[machine learning specialists]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2483</guid>

					<description><![CDATA[<p>Source &#8211; itproportal.com The research paper, entitled “Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features”, will be presented by Matteo Pagliardini. Pagliardini is a senior machine learning <a class="read-more-link" href="https://www.aiuniverse.xyz/writing-the-future-of-machine-learning-and-invention/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/writing-the-future-of-machine-learning-and-invention/">Writing the future of machine learning and invention</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; itproportal.com</p>
<p>The research paper, entitled “<em>Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features</em>”, will be presented by Matteo Pagliardini. Pagliardini is a senior machine learning engineer at Iprova and one of the three scientists that authored the research paper and developed the new model for unsupervised training, Sent2Vec. The other authors are Prakhar Gupta and Professor Martin Jaggi of École polytechnique fédérale de Lausanne (EPFL).</p>
<p>While there have been several successes in deep learning in recent years, the paper notes that these have almost exclusively relied on supervised training. Pagliardini cites a specific research paper by Mikolov et al (2013) as being particularly worthy of note for the success of semantic word embeddings — representations of words with similar meanings — trained unsupervised. The new paper presents a way of finding similar success for longer sequences of text rather than individual words.</p>
<p>“There are very useful semantic representations available for words but producing and learning semantic embeddings for longer text has always proven difficult”, explained Pagliardini. &#8220;It was especially challenging to see whether such general-purpose representations could be obtained using unsupervised learning.&#8221;</p>
<p>“By taking inspiration from the existing C-BOW model of the Word2Vec algorithm, we were able to develop a computationally efficient method to train sentence embeddings. Our evaluations found that our method achieves a better performance on average than most other models, with a particular proficiency in evaluating sentence similarity. At NAACL HLT, we will explore our research further and explain where future work may take our Sent2Vec model.”</p>
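<p>The compositional idea behind models like Sent2Vec, building a sentence vector from the vectors of its words (and, in the full model, learned n-grams), can be shown with a toy example. The word vectors below are invented for readability, not trained, and this is not the authors' implementation.</p>

```python
# Toy illustration of compositional sentence embeddings: average the
# word vectors in a sentence, then compare sentences by cosine
# similarity. Vectors are hand-made, 3-dimensional, and hypothetical.
import numpy as np

word_vectors = {
    "cats":   np.array([0.9, 0.1, 0.0]),
    "dogs":   np.array([0.8, 0.2, 0.0]),
    "sleep":  np.array([0.1, 0.9, 0.1]),
    "stocks": np.array([0.0, 0.1, 0.9]),
    "rise":   np.array([0.1, 0.0, 0.8]),
}

def sentence_embedding(sentence: str) -> np.ndarray:
    """Average the vectors of the known words in the sentence."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

pets = cosine(sentence_embedding("cats sleep"), sentence_embedding("dogs sleep"))
mixed = cosine(sentence_embedding("cats sleep"), sentence_embedding("stocks rise"))
```

<p>Semantically related sentences end up close in the vector space while unrelated ones do not; the contribution of the paper is learning vectors (including n-gram vectors) so that this simple composition works well, unsupervised.</p>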
<p>The paper was accepted for the NAACL HLT conference after an extensive review process from leading figures in the computational research community. The Sent2Vec model outlined in the paper is open source and available for use.</p>
<h3 id="sent2vec-in-practice">Sent2Vec in practice</h3>
<p>Sent2Vec forms part of Iprova’s pioneering technology that provides a data-driven approach for the creation of commercially relevant inventions. Hundreds of patents have been filed based on Iprova’s inventions by some of the world’s most respected technology companies. The specialised algorithm allows the right invention to be created at the right time in a way that has never before been possible – and over 20 of the world’s best-known businesses have already benefited.</p>
<p>The technology brings together topics from seemingly distant areas, for example, inventively connecting an advance in geographic mapping to elevator scheduling, or an advance in autonomous vehicle control systems to personal healthcare. Other examples include connecting a specific drug delivery technique to a high value oil exploration problem, and the introduction of LED backlit displays to gesture recognition.</p>
<p>The company has kept a discreet profile since its inception, allowing global brands including Philips, Panasonic, and Deutsche Telekom to file hundreds of new patents based on its inventions. These inventions may provide the foundation for new products and services across a wide range of industries and sectors.</p>
<p>Iprova’s inventions are driven by advances outside of the areas where its customers are active, and are complementary to those created in their R&amp;D labs.</p>
<p>Iprova’s approach creates inventions which have an improved chance of being disruptive due to their diversity and timing. During a recent project with Philips focused on nutrition, Iprova contributed to inventions driven by diverse advances in areas including healthcare, video processing, materials, genetics and predictive learning.</p>
<p>&#8220;Iprova complements Philips&#8217; own research activities with its out-of-the-box inventions”, says Maaike van Velzen, Head of IP Portfolio Management at Philips. &#8220;I am very impressed with Iprova’s technical expertise and advanced thinking.&#8221;</p>
<h3 id="data-driven-growth">Data-driven growth</h3>
<p>Due to the success of the company’s technology, Iprova has grown significantly since its formation in 2010. Now, the company has three offices: one in Lausanne, Switzerland; one in Cambridge, UK; and a newly opened one in London. As a result, Iprova is now looking to grow its team of invention developers to match both the expanding capabilities of its AI system and the growing market demand for intelligent invention.</p>
<p>Jasper Van den Berg, an invention developer working at Iprova’s head office in Lausanne, asserts that Iprova’s invention developer role redefines inventors for the digital age. “Traditional inventors were scientists or engineers with a deep understanding of a specific technical field. This only gave the inventor access to a limited amount of research insight,” explains Van den Berg.</p>
<p>“Even collaborative inventing through teamwork only provides insight into a handful of additional fields, since it’s just a team of specialists. With such approaches to invention, researchers can only dig deeper into specific areas rather than offering genuine innovation by taking the field in a different direction.</p>
<p>“Iprova does this on a massive scale – in real-time – by using data from across the spectrum of human knowledge to make connections between ideas from different fields of study.”</p>
<p>Iprova’s invention developer role provides a unique perspective on this. The job involves scientists and engineers working in a role made possible thanks to AI, with invention developers using data presented by Iprova’s intelligent algorithms to create inventions that define the products and services of tomorrow.</p>
<p>“The invention developer is a job that goes hand in hand with technological advancement,” explains Julian Nolan, founder of Iprova. “Iprova is transforming invention by making use of data, algorithms and machines to streamline the research process and create inventions much faster and with greater diversity than would otherwise be possible.</p>
<p>“Our technology and invention developers have been successful in creating landmark inventions for some of the world’s best-known companies in the US and Asia, as well as in much of Europe.  It allows us to operate in industries as diverse as healthcare, autonomous vehicles, finance and energy, which is only possible thanks to the data processing capabilities of our data-driven approach to invention. Our system has delivered jobs for the local economy and value to businesses and markets worldwide.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/writing-the-future-of-machine-learning-and-invention/">Writing the future of machine learning and invention</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/writing-the-future-of-machine-learning-and-invention/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Machine Learning, Big Data and the Future of Higher Ed</title>
		<link>https://www.aiuniverse.xyz/machine-learning-big-data-and-the-future-of-higher-ed/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-big-data-and-the-future-of-higher-ed/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 22 Mar 2018 05:23:21 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[Big Data Analytics]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2128</guid>

					<description><![CDATA[<p>Source &#8211; insidehighered.com If you ask, many people will say we are in a new era of higher education, one where machine learning and big data analytics are <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-big-data-and-the-future-of-higher-ed/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-big-data-and-the-future-of-higher-ed/">Machine Learning, Big Data and the Future of Higher Ed</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211;  insidehighered.com</p>
<p>If you ask, many people will say we are in a new era of higher education, one where machine learning and big data analytics are driving rapid change. From the influx of adaptive learning technologies to automated student support services and predictive analytics models driving new interventions, there are few spaces of college and university life that are not being touched by these technological innovations.</p>
<p>These technological opportunities could offer a lot to higher education. Indeed, if we ignore the opportunities that machine learning and big data analytics might provide to complement our human capacities, we will do a disservice to those we claim to serve &#8212; our students.</p>
<p>But if we treat them as an opportunity to downsize the work force or largely replace human social interactions with automated ones, we are going to lose a lot more than we gain. Herein lies the dilemma. What is the balance?</p>
<p>Given this time of turbulent change, I want to offer some reflection on the future of higher education in the age of machine learning and big data analytics. I do so in the context of what these technological opportunities may provide and how higher education has to manage its relationships to those technologies across three key areas: teaching and learning, predictive analytics, and student support.</p>
<p>In the age of machine learning and big data analytics, higher education is being offered the opportunity to personalize its education through adaptive learning technologies. As the machines get smarter, so too do the adaptive learning algorithms that can respond to students. This is not, of course, a wholesale swap-out of faculty-led teaching for machine-driven learning. It is way too early for that world.</p>
<p>But the adaptive learning movement &#8212; driven by a growing for-profit educational technologies industry &#8212; should give higher education a moment of pause. The promise underneath these technologies is the notion of a hyperindividualism whereby each person can collect skills and credentials with less and less social interaction. In fact, adaptive learning technologies, while not being built right now to take the place of everyday teaching and learning, can feed into a prevailing mind-set that continues to call into question the value of higher education, the role of the university and college, and the teaching and learning that take place there.</p>
<p>If we have learned anything from the slew of reports that suggest that fewer and fewer adults see the “real value” of higher education, the ability to bypass the critical thinking work that goes on in the day-to-day world of the university becomes appealing to those who are ideologically opposed to a broad liberal understanding of higher education. Instead of adaptive learning playing the role of supplemental tool &#8212; and a very good one in many cases &#8212; these technologies could be sold off as a cheaper, faster and less contentious road toward a credential that will support advancement in the future work force.</p>
<p>What is lost by this move is the interstitial work that happens in universities that continue to bundle their education into programs that focus student attention on the interconnections, debates and opportunities provided by thinking across different topics, concepts and ideas.</p>
<p>Technological advancements are also shaping student success and the big data futures promised by predictive analytics. One of the core promises of machine learning is the potential to analyze large sets of data and focus more intently on the individual learner. No longer will institutions be beholden to crude models that rely on population-level metrics for cohorts of students. Instead, variables can collide in a big data machine and focus attention on the individual student in a much more granular way.</p>
<p>There remains a lot of uncertainty in the big data futures that many companies and universities are promising. Even after the machine learning algorithms provide results, the question of why a certain finding appears can often remain a mystery; it still takes good old-fashioned qualitative work with faculty, staff and students to really understand the answer the machines might be giving us. Machine learning and big data analytics could also push universities to provide solutions to students that might not be in their best interest. Those solutions, if built on a deficit model instead of a growth mind-set approach, might track students into support services or academic enrichment programs that create a sense of isolation from instead of a sense of belonging to the wider campus community.</p>
<p>The caution here is that the machine learning algorithms can only do so much. The “noise” found in what the system is “not telling us” remains just as important as the predictions the algorithms might generate. Practitioners have to interpret the machine learning-based results. If not, universities are going to be driving their strategies toward machine learning-derived outcomes that might not impact student success.</p>
<p>Put another way, while big data analytics provide a real opportunity to move toward more predictive models and interventions, if universities divest from (or never invest in) the talent to interpret those results and act upon them, they will likely find little long-term value in the proposition.</p>
<p>Higher education is seeing opportunity produced by machine learning and big data analytics in the space of directed student support. New technologies are emerging that allow students to map their progress toward a degree, figure out their next academic step, connect to employers and even build skills that enhance their education along the way. There is real promise in this future, particularly if these technologies can help institutions scale support in ways that are not tenable in an environment where they have more students and those students are taking advantage of online and hybrid education to diversify their learning experience. Students are simply not in the same place at the same time anymore, and technological interventions can help recreate some of that social loss if managed effectively.</p>
<p>Perhaps most importantly, machine learning and big data analytics provide higher education the opportunity to unfetter learning from the rote system of knowledge acquisition and take advantage of learning analytics to more deliberately engage students. The challenge of any such system, of course, is the question of what you are building connections to. Is it to the system or to each other? If it is to each other, what are the goals of making those connections, and to whom? But a responsive system that can easily direct students to resources can go a long way in helping institutions manage the much more complex future of student learning, where students are not arriving in first-time, full-time cohorts to complete a degree 15 units at a time.</p>
<p>I believe there remains a lot of potential in a future of machine learning and big data analytics. But, for it to be realized, higher education may have to either commit to a certain level of privacy invasion &#8212; students will have to volunteer more and more data to refine the models &#8212; or sacrifice certain analytic power to provide students the relative privacy they want to maintain. How universities create systems to both protect privacy and support students will be a challenge as the privacy debate heats up both locally and globally.</p>
<p>Given this and the many other issues outlined above, I am neither an optimist nor a pessimist when it comes to the future of higher education in the age of machine learning and big data analytics. I am, however, trying to be realistic and responsive to the potential futures that new education technologies present. In that response, I hope that universities pause and reflect on the equally important human capital and social interactions that remain essential to higher education and leverage real opportunities, such as those afforded by open educational resources or collaborative, student-centered virtual learning tools based in the principles of universal design, to create a better learning future.</p>
<p>What this all suggests is that higher education should have a serious and ongoing conversation about how we place machine learning and big data analytics into our institutions and what infrastructures these new technologies demand so that they are responsive to us and not the other way around.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-big-data-and-the-future-of-higher-ed/">Machine Learning, Big Data and the Future of Higher Ed</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-big-data-and-the-future-of-higher-ed/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning capabilities aid healthcare cybersecurity</title>
		<link>https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 28 Dec 2017 05:31:27 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[cyberattack]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<category><![CDATA[security tools]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1926</guid>

					<description><![CDATA[<p>Source &#8211; techtarget.com Why is now the time for healthcare organizations to consider applying machine learning capabilities to cybersecurity? Matt Mellen: Healthcare has seen more than its fair share <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/">Machine learning capabilities aid healthcare cybersecurity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>techtarget.com</strong></p>
<p><b>Why is now the time for healthcare organizations to consider applying machine learning capabilities to cybersecurity?</b></p>
<p>Matt Mellen: Healthcare has seen more than its fair share of cyberattacks for a variety of reasons, and it urgently needed a game-changing security technology to prevent them. I think machine learning is that game changer, and it&#8217;s going to have a pretty significant impact on [the ability of healthcare organizations] to protect themselves from cyberattacks and breaches, while at the same time improving healthcare practitioners&#8217; ability to provide highly accurate diagnoses. The key to making machine learning algorithms work properly is having a lot of data to feed into the algorithm. The more data, the better; the more data, the more accurate the machine learning algorithm&#8217;s results.</p>
<p>In healthcare, I know that hospital networks are building massive data lakes to store all their health information, with the intent of having it evaluated by machine learning algorithms in the hope of providing better diagnoses. But in cybersecurity the winners are going to be &#8212; and by winners, I mean the security tools that will be the most effective &#8212; those that have a significant amount of threat data to feed into their machine learning algorithms.</p>
<p>Machine learning is clearly going to have a growing impact on the effectiveness of cyberattack prevention, and beyond medical diagnoses it will reach other areas of the field, like predictive analytics, which predicts outcomes before they happen, and using natural language processing to extract meaning out of images, which is a real challenge in healthcare because, for example, radiology images are not easily searched or digested by software.</p>
<p>My recommendation is for CISOs of healthcare organizations to start planning to adopt machine learning capabilities in their cybersecurity programs, and to specifically look for security products that have machine learning based on large data sets and ensure that they have consistent cyberattack coverage across the end points, the network and in the cloud.</p>
<p><b>What kind of investment will healthcare organizations have to make to apply machine learning capabilities to their cybersecurity programs?</b></p>
<p>Mellen: It really depends on the size of the organization. &#8230; But what I typically recommend is focusing on a phased approach to most problems. A lot of healthcare organizations first focus on the edge, protect the edge of their network, figure out the ingress and egress points to their network and protect those first. And you can do that with a next-generation firewall. &#8230; It does not require a significant amount of change to the environment.</p>
<p><b>Do you see cyberattacks continuing to be a threat in 2018?</b></p>
<p>Mellen: Ransomware is definitely going to continue given that it is the most effective and quickest way for attackers to monetize their efforts and not get caught. If you end up exfiltrating or stealing protected health information out of healthcare organizations, you have to figure out how to sell it. And when you do that and you go into the dark web to sell it, there&#8217;s a higher risk of getting caught by the authorities. Hence, most attackers continue to just widely use ransomware to make money and not get caught.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/">Machine learning capabilities aid healthcare cybersecurity</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-capabilities-aid-healthcare-cybersecurity/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>5 top machine learning use cases for security</title>
		<link>https://www.aiuniverse.xyz/5-top-machine-learning-use-cases-for-security/</link>
					<comments>https://www.aiuniverse.xyz/5-top-machine-learning-use-cases-for-security/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 13 Dec 2017 06:03:54 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1879</guid>

					<description><![CDATA[<p>Source &#8211; csoonline.com At its simplest level, machine learning is defined as “the ability (for computers) to learn without being explicitly programmed.” Using mathematical techniques across huge datasets, <a class="read-more-link" href="https://www.aiuniverse.xyz/5-top-machine-learning-use-cases-for-security/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/5-top-machine-learning-use-cases-for-security/">5 top machine learning use cases for security</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211;<strong> csoonline.com</strong></p>
<p>At its simplest level, machine learning is defined as “the ability (for computers) to learn without being explicitly programmed.” Using mathematical techniques across huge datasets, machine learning algorithms essentially build models of behaviors and use those models as a basis for making future predictions based on newly input data. It is Netflix offering up new TV series based on your previous viewing history, and the self-driving car learning about road conditions from a near-miss with a pedestrian.</p>
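<p>Stripped to its essentials, the learn-then-predict loop that definition describes can be sketched in a few lines of Python. The nearest-centroid classifier and the "viewing history" numbers below are invented for illustration; they are not from any product named in this article:</p>

```python
# Toy "learning without being explicitly programmed": fit per-class
# centroids from labeled examples, then predict labels for new inputs.

def fit(examples):
    """examples: list of (features, label) pairs -> per-label centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Return the label whose centroid is closest to the new input."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical viewing history as (hours of drama, hours of sci-fi):
history = [((5.0, 0.5), "drama"), ((4.0, 1.0), "drama"),
           ((0.5, 6.0), "sci-fi"), ((1.0, 5.0), "sci-fi")]
model = fit(history)
print(predict(model, (4.5, 0.2)))  # a drama-heavy viewer -> "drama"
print(predict(model, (0.2, 7.0)))  # a sci-fi-heavy viewer -> "sci-fi"
```

<p>The "model" here is just two averaged points, but the shape is the same as in production systems: behavior is summarized from past data, and new data is scored against that summary.</p>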
<p>So, what are the machine learning applications in information security?</p>
<p>In principle, machine learning can help businesses better analyze threats and respond to attacks and security incidents. It could also help to automate more menial tasks previously carried out by stretched and sometimes under-skilled security teams.</p>
<p>Subsequently, machine learning in security is a fast-growing trend. Analysts at ABI Research estimate that machine learning in cyber security will boost spending in big data, artificial intelligence (AI) and analytics to $96 billion by 2021, while some of the world’s technology giants are already taking a stand to better protect their own customers.</p>
<p>Google is using machine learning to analyze threats against mobile endpoints running on Android, as well as to identify and remove malware from infected handsets, while cloud infrastructure giant Amazon has acquired start-up harvest.AI and launched Macie, a service that uses machine learning to uncover, sort and classify data stored on the S3 cloud storage service.</p>
<p>Simultaneously, enterprise security vendors have been working towards incorporating machine learning into new and old products, largely in a bid to improve malware detection. “Most of the major companies in security have moved from a purely “signature-based” system of a few years ago used to detect malware, to a machine learning system that tries to interpret actions and events and learns from a variety of sources what is safe and what is not,” says Jack Gold, president and principal analyst at J. Gold Associates. “It’s still a nascent field, but it is clearly the way to go in the future. Artificial intelligence and machine learning will dramatically change how security is done.”</p>
<p>Though this transformation won’t happen overnight, machine learning is already emerging in certain areas. “AI &#8212; as a wider definition which includes machine learning and deep learning &#8212; is in its early phase of empowering cyber defense where we mostly see the obvious use cases of identifying patterns of malicious activities whether on the endpoint, network, fraud or at the SIEM,” says Dudu Mimran, CTO of Deutsche Telekom Innovation Laboratories (and also of the Cyber Security Research Center at Israel’s Ben-Gurion University). “I believe we will see more and more use cases, in the areas of defense against service disruptions, attribution and user behavior modification.”</p>
<p>Here, we break down the top use cases of machine learning in security.</p>
<h2>1. Using machine learning to detect malicious activity and stop attacks</h2>
<p>Machine learning algorithms will help businesses detect malicious activity faster and stop attacks before they get started. David Palmer should know. As director of technology at UK-based start-up Darktrace, a firm that has seen a lot of success with its machine learning-based Enterprise Immune System since its foundation in 2013, he has seen the impact of such technologies.</p>
<p>Palmer says that Darktrace recently helped one casino in North America when its algorithms detected a data exfiltration attack that used a “connected fish tank as the entryway into the network.” The firm also claims to have prevented a similar attack during the WannaCry ransomware crisis last summer.</p>
<p>“Our algorithms spotted the attack within seconds in one NHS agency’s network, and the threat was mitigated without causing any damage to that organization,” he said of the ransomware, which infected more than 200,000 victims across 150 countries. “In fact, none of our customers were harmed by the WannaCry attack, including those that hadn’t patched against it.”</p>
<h2>2. Using machine learning to analyze mobile endpoints</h2>
<p>Machine learning is already going mainstream on mobile devices, but thus far most of this activity has been for driving improved voice-based experiences on the likes of Google Now, Apple’s Siri, and Amazon’s Alexa. Yet there is an application for security too. As mentioned above, Google is using machine learning to analyze threats against mobile endpoints, while enterprises see an opportunity to protect the growing number of bring-your-own and choose-your-own mobile devices.</p>
<p>In October, MobileIron and Zimperium announced a collaboration to help enterprises adopt mobile anti-malware solutions incorporating machine learning. MobileIron said it would integrate Zimperium’s machine learning-based threat detection with MobileIron’s security and compliance engine and sell the combined solution, which would address challenges like detecting device, network and application threats and immediately taking automated actions to protect company data.</p>
<p>Other vendors are looking to bolster their mobile solutions, too. Along with Zimperium, Lookout, Skycure (which has been acquired by Symantec) and Wandera are seen as the leaders in the mobile threat detection and defense market. Each uses its own machine learning algorithms to detect potential threats. Wandera, for example, recently publicly released its threat detection engine MI:RIAM, which reportedly detected more than 400 strains of repackaged SLocker ransomware targeting businesses&#8217; mobile fleets.</p>
<h2>3. Using machine learning to enhance human analysis</h2>
<p>At the heart of machine learning in security, there is the belief that it helps human analysts with all aspects of the job, including detecting malicious attacks, analyzing the network, endpoint protection and vulnerability assessment. There’s arguably most excitement though around threat intelligence.</p>
<p>For example, in 2016, MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) developed a system called AI<sup>2</sup>, an adaptive machine learning security platform that helped analysts find those ‘needles in the haystack’. Reviewing millions of logins each day, the system was able to filter data and pass it on to the human analyst, reducing alerts to around 100 per day. The experiment, carried out by CSAIL and start-up PatternEx, showed that the attack detection rate rose to 85 percent, with a five-fold decrease in false positives.</p>
<h2>4. Using machine learning to automate repetitive security tasks</h2>
<p>The real benefit of machine learning is that it could automate repetitive tasks, enabling staff to focus on more important work. Palmer says that machine learning ultimately should aim to “remove the need for humans to do repetitive, low-value decision-making activity, like triaging threat intelligence.” “Let the machines handle the repetitive work and the tactical firefighting, like interrupting ransomware, so that the humans can free up time to deal with strategic issues — like modernizing off Windows XP — instead.”</p>
<p>Booz Allen Hamilton has gone down this route, reportedly using AI tools to more efficiently allocate human security resources, triaging threats so workers could focus on the most critical attacks.</p>
<h2>5. Using machine learning to close zero-day vulnerabilities</h2>
<p>Some believe that machine learning could help close vulnerabilities, particularly zero-day threats and others that target largely unsecured IoT devices. There has been proactive work in this area: A team at Arizona State University used machine learning to monitor traffic on the dark web and identify data relating to zero-day exploits, according to Forbes. Armed with this type of insight, organizations could potentially close vulnerabilities and patch exploits before they result in a data breach.</p>
<h2>Hype and misunderstanding muddies the landscape</h2>
<p>However, machine learning is no silver bullet, not least for an industry still experimenting with these technologies in proof of concepts. There are numerous pitfalls. Machine learning systems sometimes report false positives (from unsupervised learning systems where the algorithms infer categories based on data), while some analysts have spoken candidly about how machine learning in security can represent a “black box” solution, where CISOs aren’t totally sure what’s “under the hood.” They are thus forced to place their trust and responsibility on the shoulders of the vendor – and the machines.</p>
<p>This idea of trust isn’t ideal in a world where some security solutions may not even be doing machine learning, after all. “Most of the machine learning inventions that have been touted aren’t really doing any learning ‘on the job’ within the customer’s environment,” said Palmer. “Instead, they have models trained on malware samples in a vendor’s cloud and are downloaded to customer businesses like antivirus signatures. This isn’t particularly progressive in terms of customer security and remains fundamentally backward looking.”</p>
<p>Furthermore, when it comes to the training data samples required for the algorithms to learn their models before being put to use in the ‘real’ world, poor data and implementation will produce even poorer results. “Machine learning is only as good as the input information you provide it (garbage in, garbage out),” says Gold. “So, if your machine learning algorithms are not well designed, the results won’t be very useful. Having algorithms that work on training data sets in the lab is one thing, but one of the biggest challenges around machine learning cyber defense is getting it working at scale in live, complex networks.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/5-top-machine-learning-use-cases-for-security/">5 top machine learning use cases for security</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/5-top-machine-learning-use-cases-for-security/feed/</wfw:commentRss>
			<slash:comments>84</slash:comments>
		
		
			</item>
		<item>
		<title>Scientists use artificial intelligence to eavesdrop on dolphins</title>
		<link>https://www.aiuniverse.xyz/scientists-use-artificial-intelligence-to-eavesdrop-on-dolphins/</link>
					<comments>https://www.aiuniverse.xyz/scientists-use-artificial-intelligence-to-eavesdrop-on-dolphins/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 09 Dec 2017 07:49:15 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<category><![CDATA[scientists]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1855</guid>

					<description><![CDATA[<p>Source &#8211; independent.co.uk Scientists have developed an algorithm to monitor the underwater chatter of dolphins with the help of machine learning. Using autonomous underwater sensors, researchers working in <a class="read-more-link" href="https://www.aiuniverse.xyz/scientists-use-artificial-intelligence-to-eavesdrop-on-dolphins/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/scientists-use-artificial-intelligence-to-eavesdrop-on-dolphins/">Scientists use artificial intelligence to eavesdrop on dolphins</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; independent.co.uk</p>
<p>Scientists have developed an algorithm to monitor the underwater chatter of dolphins with the help of machine learning.</p>
<p>Using autonomous underwater sensors, researchers working in the Gulf of Mexico spent two years making recordings of dolphin echolocation clicks.</p>
<p>The result was a data set of 52 million click noises.</p>
<p>To sort through this vast amount of information, the scientists employed an “unsupervised” algorithm that automatically classified the noises into categories.</p>
<p>Without being “taught” to recognise patterns that were already known, the algorithm was able to seek original patterns in the data and identify types of click.</p>
<p>This enabled the scientists to determine specific patterns of clicks among the millions of clicks being recorded, and could help them to identify dolphin species in the wild.</p>
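<p>The unsupervised step described above, grouping clicks into categories with no pre-labeled examples, can be illustrated with a bare-bones k-means clustering sketch. The feature values below are invented for illustration and are not the study's actual data or method:</p>

```python
# Bare-bones k-means: group click feature values into k categories
# without any labels, by alternating assignment and centroid update.

def kmeans(points, k, iters=20):
    centroids = points[:k]                      # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                        # assign each point to
            j = min(range(k), key=lambda c: (p - centroids[c]) ** 2)
            clusters[j].append(p)               # its nearest centroid
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Invented 1-D click features (say, peak frequency in kHz): clicks from
# two unknown sources around ~25 kHz and ~40 kHz, with no labels given.
clicks = [24.0, 25.5, 26.0, 24.5, 39.0, 41.0, 40.5, 39.5]
centroids, clusters = kmeans(clicks, k=2)
print(sorted(round(c, 1) for c in centroids))  # -> [25.0, 40.0]
```

<p>Two click "types" emerge purely from the structure of the data, which is the sense in which the algorithm finds patterns without being taught what to look for; matching a discovered type to a species like Risso's dolphin still requires field work.</p>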
<p>“It’s fun to think about how the machine learning algorithms used to suggest music or social media friends to people could be re-interpreted to help with ecological research challenges,” said Dr Kaitlin Frasier of Scripps Institution of Oceanography, the lead author of the study published in the journal <em>PLOS Computational Biology</em>.</p>
<p>“Innovations in sensor technologies have opened the floodgates in terms of data about the natural world, and there is a lot of room for creativity right now in ecological data analysis,” she said.</p>
<p>Monitoring dolphin populations at sea is challenging.</p>
<p>Dr Frasier and her colleagues think their techniques could be employed to sift through large quantities of data and keep track of dolphin populations in a non-disruptive way.</p>
<p>Dolphins are an incredibly diverse family of mammals, and different species use different types of click to echolocate.</p>
<p>This research team’s work so far was able to identify one click type associated with a particular dolphin species – Risso’s dolphin – and they intend to conduct field work that will link other click types with other known species.</p>
<p>They also hope their research will allow them to monitor the impact of oil spills and climate change on the dolphin populations of the Gulf of Mexico.</p>
<p>The post <a href="https://www.aiuniverse.xyz/scientists-use-artificial-intelligence-to-eavesdrop-on-dolphins/">Scientists use artificial intelligence to eavesdrop on dolphins</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/scientists-use-artificial-intelligence-to-eavesdrop-on-dolphins/feed/</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence Is Here And It Wants To Revolutionize Psychiatry</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-is-here-and-it-wants-to-revolutionize-psychiatry/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-is-here-and-it-wants-to-revolutionize-psychiatry/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 31 Oct 2017 06:07:10 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Data scientist]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<category><![CDATA[Revolutionize Psychiatry]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1596</guid>

					<description><![CDATA[<p>Source &#8211; forbes.com The rapture of the machines. The subject of fiery debates and endless banter. Everyone is talking about how robots are going to get all our <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-is-here-and-it-wants-to-revolutionize-psychiatry/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-is-here-and-it-wants-to-revolutionize-psychiatry/">Artificial Intelligence Is Here And It Wants To Revolutionize Psychiatry</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>forbes.com</strong></p>
<p>The rapture of the machines. The subject of fiery debates and endless banter. Everyone is talking about how robots are going to take all our jobs and take over the entire world as soon as they get brains of their own. But that’s not how the real world is supposed to work. In the real world, real people must make conscious decisions as to how and when robots will be deployed in particular industries in ways that better the lives of humankind. But in order to do that, we must all ask ourselves the question: what are robots good for?</p>
<p>A robot isn’t functional in the ways that a natural animal is. It can’t climb a flight of stairs, nor can it make intelligent deductions with regard to subjective matters as well as a human can. Don’t get me wrong, we do have robots that can walk around on two feet or scour the internet for fake news, but they are not nearly as effective as real humans in their position are expected to be. The amount of resources and hard work that goes into making a robot walk like a real human being is simply wasteful, and we all know that artificial intelligence that helps fight fake news is not nearly as good as advertised. The reason behind this is simple: machines have a specific skillset, and they work best when they are implemented based on that skillset. Artificial intelligence, as it stands today, is not good at handling a lot of different tasks at once. It can, however, be exceptionally good at performing a singular task, like playing chess or recognizing objects within images, with greater accuracy than even humans can offer.</p>
<p>There’s also one more task that artificial intelligence seems to be particularly well equipped to handle: psychiatry. For several decades, psychology has been considered the one subject that is neither purely scientific nor purely humanitarian, but standing at the junction of the two. It is rather telling that machines, products of years of scientific and technological advancement, should be so good at understanding the details of the human mind, much of which remains a largely humanitarian subject, outside the bounds of mainstream science.</p>
<blockquote><p>“What is really interesting is the way that apps will be able to prompt behaviour and therefore change physiology, emotion and thought. The combination of homeostasis and entropy means that human behaviour sinks toward ease. Apps can nudge us long before problems evolve and even coach us toward excellence. When we are distressed, they can recognise this and help us out. Apple Watch nudges about 15 million people every day &#8211; calories, movement, standing, sleeping and breathing. Put this all together and we already have a massive pressure toward better health &#8211; physical, mental and emotional.”- Dr. Sven Hansen, Founder of The Resilience Institute</p></blockquote>
<p>In a paper published in April, Colin Walsh, a data scientist at the Vanderbilt University Medical Centre, detailed the early stages of his work on a new artificially intelligent algorithm. Using a stream of data that is publicly available via hospital records and local registers, his algorithm can predict, with up to 90% accuracy, the likelihood of someone making an attempt on their own life within the next few months. His research, while still in its infancy, means a great deal to doctors and psychiatry professionals dealing with patients with suicidal intent, as it could help them understand and reach a patient before that patient does anything serious. Walsh is hardly alone in his efforts. Facebook, the multibillion-dollar social media platform at the centre of our presence in the digital world, recently tested an algorithm that scoured posts and status updates to flag people at risk of self-harm, notifying their family members long before anything bad could happen.</p>
<p>This wasn’t the only time the social media giant’s platform hosted machine learning algorithms pursuing the realms of human psychiatry. Woebot, a revolutionary chatbot that runs inside Facebook Messenger, was recently released by researchers from Stanford University. By having regular conversations with its users and tracking their mood via videos and word games, Woebot can function as your very own digital therapist, making assessments and recommending treatment based on your psychological condition. Similarly, Tess, an intelligent software agent that communicates with you via text messages, has also been used to administer psychotherapy to patients with depression, emotional instability and so on. It comes from the house of X2_AI and is priced at $50 a month.</p>
<p>These are just a few of the ways in which researchers and psychiatry professionals have recently used artificial intelligence as a key tool in diagnosing and treating mental health disorders. Researchers at Harvard University and the University of Vermont showed how depression can be diagnosed from the photos people upload online. Scientists at the University of Texas outlined the use of computer vision and artificial intelligence to help detect ADHD in children. The list goes on.</p>
<blockquote><p>“As a neuroscientist, I want to understand the brain. Beyond just the physical structures of neurons and the synapses, but how it works. How is it that we think? How is it that 2lbs of protein and water can produce this amazing, complex organ that literally drives humanity? Ultimately, behavior is what the brain is for. We, as the scientific and medical community, are studying behavior with the same types of computational approaches that we use to study the physical attributes and workings of the brain.” &#8211; Guillermo Cecchi, Biometaphorical Computing at IBM Research</p></blockquote>
<p>Despite our best efforts to change this, there is no denying that psychological disorders remain heavily stigmatized. People in need of attention from a mental health professional often fear being ridiculed and judged, a consequence of having to share one’s deepest secrets with the person on the other side of the couch. With machines, however, people tend to feel far more comfortable sharing their true feelings and innermost secrets, knowing that the thing on the other side is not there to judge, only to help. Can artificial intelligence become the next big thing for psychology? Only time will tell.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-is-here-and-it-wants-to-revolutionize-psychiatry/">Artificial Intelligence Is Here And It Wants To Revolutionize Psychiatry</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-is-here-and-it-wants-to-revolutionize-psychiatry/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>How to Spot a Machine Learning Opportunity, Even If You Aren’t a Data Scientist</title>
		<link>https://www.aiuniverse.xyz/how-to-spot-a-machine-learning-opportunity-even-if-you-arent-a-data-scientist/</link>
					<comments>https://www.aiuniverse.xyz/how-to-spot-a-machine-learning-opportunity-even-if-you-arent-a-data-scientist/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 21 Oct 2017 06:20:59 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Data scientist]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1523</guid>

					<description><![CDATA[<p>Source &#8211; hbr.org Artificial intelligence is no longer just a niche subfield of computer science. Tech giants have been using AI for years: Machine learning algorithms power Amazon <a class="read-more-link" href="https://www.aiuniverse.xyz/how-to-spot-a-machine-learning-opportunity-even-if-you-arent-a-data-scientist/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-to-spot-a-machine-learning-opportunity-even-if-you-arent-a-data-scientist/">How to Spot a Machine Learning Opportunity, Even If You Aren’t a Data Scientist</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; hbr.org</p>
<p>Artificial intelligence is no longer just a niche subfield of computer science. Tech giants have been using AI for years: Machine learning algorithms power Amazon product recommendations, Google Maps, and the content that Facebook, Instagram, and Twitter display in social media feeds. But William Gibson’s adage applies well to AI adoption: The future is already here, it’s just not evenly distributed.</p>
<p>The average company faces many challenges in getting started with machine learning, including a shortage of data scientists. But just as important is a shortage of executives and nontechnical employees able to spot AI opportunities. And spotting those opportunities doesn’t require a PhD in statistics or even the ability to write code. (It will, spoiler alert, require a brief trip back to high school algebra.)</p>
<p>Having an intuition for how machine learning algorithms work – even in the most general sense – is becoming an important business skill. Machine learning scientists can’t work in a vacuum; business stakeholders should help them identify problems worth solving and allocate subject matter experts to distill their knowledge into labels for data sets, provide feedback on output, and set the objectives for algorithmic success.</p>
<p>As Andrew Ng has written: “Almost all of AI’s recent progress is through one type, in which some input data (A) is used to quickly generate some simple response (B).”</p>
<p>But how does this work? Think back to high school math — I promise this will be brief — when you first learned the equation for a straight line: y = mx + b. Algebraic equations like this represent the relationship between two variables, x and y. In high school algebra, you’d be told what m and b are, be given an input value for x, and then be asked to plug them into the equation to solve for y. In this case, you start with the equation and then calculate particular values.</p>
<p>Supervised learning reverses this process, solving for m and b, given a set of x’s and y’s. In supervised learning, you start with many particulars — the data — and infer the general equation. And the learning part means you can update the equation as you see more x’s and y’s, changing the slope of the line to better fit the data. The equation almost never identifies the relationship between each x and y with 100% accuracy, but the generalization is powerful because later on you can use it to do algebra on new data. Once you’ve found a slope that captures a relationship between x and y reliably, if you are given a new x value, you can make an educated guess about the corresponding value of y.</p>
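<p>The line-fitting described above can be sketched in a few lines of Python. The numbers here are invented purely for illustration; <code>np.polyfit</code> does the least-squares work of recovering m and b from the observed pairs.</p>

```python
# A minimal sketch of "solving for m and b, given a set of x's and y's".
# The data roughly follows y = 2x + 1, with a little noise added by hand.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# polyfit with deg=1 finds the slope m and intercept b that best fit the data.
m, b = np.polyfit(x, y, deg=1)

# "Learning" means refitting as new (x, y) pairs arrive, which shifts the line.
def predict(new_x):
    """Educated guess for y, given a new x, using the learned line."""
    return m * new_x + b

print(f"m = {m:.2f}, b = {b:.2f}")          # close to m = 2, b = 1
print(f"guess for x = 5: {predict(5):.2f}")
```

The fit will never hit every point exactly, which is the article’s point: the generalization, not a perfect match, is what lets you guess y for an x you have never seen.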
<p>As you might imagine, many exciting machine learning problems can’t be reduced to a simple equation like y = mx + b. But at their essence, supervised machine learning algorithms are also solving for complex versions of m, based on labeled values for x and y, so they can predict future y’s from future x’s. If you’ve ever taken a statistics course or worked with predictive analytics, this should all sound familiar: It’s the idea behind linear regression, one of the simpler forms of supervised learning.</p>
<p>To return to Ng’s formulation, supervised learning requires you to have examples of both the input data and the response, both the x’s and the y’s. If you have both of those, supervised learning lets you come up with an equation that approximates that relationship, so in the future you can guess y values for any new value of x.</p>
<p>So the question of how to identify AI opportunities starts with asking: What are some outcomes worth guessing? And do we have the data necessary to do supervised learning?</p>
<p>For example, let’s say a data scientist is tasked with predicting real estate prices for a neighborhood. After analyzing the data, she finds that housing price (y) is tightly correlated with the size of the house (x). So she’d take many data points containing both a house’s size and its price, use statistics to estimate the slope (m), and then use the equation y = mx + b to predict the price of a given house from its size. This is linear regression, and it remains incredibly powerful.</p>
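<p>As a sketch of the housing example, the slope can be estimated with the textbook least-squares formulas rather than a library call. The sizes and prices below are made-up numbers, not real market data.</p>

```python
# Estimating m and b for price = m * size + b with closed-form least squares.
# All figures are invented for illustration only.
import numpy as np

size = np.array([850, 1100, 1400, 1750, 2300])   # square feet (made up)
price = np.array([155, 205, 250, 310, 400])      # thousands of dollars (made up)

# m = cov(x, y) / var(x); b = mean(y) - m * mean(x)
m = np.sum((size - size.mean()) * (price - price.mean())) / np.sum((size - size.mean()) ** 2)
b = price.mean() - m * size.mean()

# Predict the price of a new 1,600 sq ft listing from its size alone.
estimated = m * 1600 + b
print(f"price for 1600 sq ft: about {estimated:.0f}k (m = {m:.3f}, b = {b:.1f})")
```

This is exactly the "start with many particulars, infer the general equation" move from the algebra discussion, applied to one input feature.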
<p>Organizations use similar techniques to predict future product sales, investment portfolio risk, or customer churn. Again, the statistics behind different algorithms vary in complexity. Some techniques output simple point predictions (We think y will happen!) and others output a range of possible predictions with affiliated confidence rates (There’s a 70% chance y will happen, but if we change one assumption, our confidence falls to 60%).</p>
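<p>One generic way to turn a point prediction into a range, sketched here with invented data, is to bootstrap: refit the model on resampled data many times and report the spread of the resulting predictions. The 70% band below mirrors the confidence figure in the text; the bootstrap itself is a standard statistical technique, not one the article prescribes.</p>

```python
# Point prediction vs. a range: bootstrap the fit and look at the spread.
# Data is synthetic: a line with slope 3 plus random noise.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = 3.0 * x + rng.normal(0, 1.5, size=x.size)

# Refit on resampled data 1,000 times; predict y at x = 10 each time.
preds = []
for _ in range(1000):
    idx = rng.integers(0, x.size, size=x.size)   # sample with replacement
    m, b = np.polyfit(x[idx], y[idx], deg=1)
    preds.append(m * 10 + b)

point = float(np.mean(preds))                    # "we think y will happen"
low, high = np.percentile(preds, [15, 85])       # a 70% prediction band
print(f"point: {point:.1f}; 70% of fits land in [{low:.1f}, {high:.1f}]")
```

The point estimate answers "what do we think y is?"; the band answers the harder business question, "how sure are we?"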
<p>These are all examples of prediction problems, but supervised learning is also used for classification.</p>
<p>Classification tasks clump data into buckets. Here a data scientist looks for features in data that are reliable proxies for categories she wants to separate: If data has feature x, it goes into bucket one; if not, it goes into bucket two. You can still think of this as using x’s to predict y’s, but in this case y isn’t a number but a type.</p>
<p>Organizations use classification algorithms to filter spam, diagnose abnormalities on X-rays, identify relevant documents for a lawsuit, sort résumés for a job, or segment customers. But classification gains its true power when the number of classes increases. It can be extended beyond binary choices like “Is it spam or not?” to include many different buckets. Perception tasks, like training a computer to recognize objects in images, are also classification tasks; they just have many output classes (for example, the various animal species names) instead of just Bucket 1 and Bucket 2. This makes supervised learning systems look smarter than they are, as we assume their ability to learn concepts mirrors our own. In fact, they’re just sorting data into buckets 1, 2, 3…n, according to the “m” learned for the function.</p>
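<p>A toy nearest-centroid classifier makes the bucketing idea concrete: “training” just memorises the centre of each class, and a new point goes into the bucket whose centroid is nearest. The points and class names below are invented; real classifiers are far more elaborate, but the x-to-bucket shape is the same.</p>

```python
# Multi-class "bucketing" with a toy nearest-centroid classifier.
# All training data and labels are invented for illustration.
import numpy as np

# Labelled training data: 2-D feature vectors and their bucket names.
train_x = np.array([[1, 1], [1, 2], [2, 1],    # "cat" cluster
                    [8, 8], [8, 9], [9, 8],    # "dog" cluster
                    [1, 9], [2, 9], [1, 8]])   # "bird" cluster
train_y = np.array(["cat"] * 3 + ["dog"] * 3 + ["bird"] * 3)

# "Training": memorise the mean point (centroid) of each class.
classes = np.unique(train_y)
centroids = np.array([train_x[train_y == c].mean(axis=0) for c in classes])

def classify(point):
    """Put a new point in the bucket with the nearest class centroid."""
    dists = np.linalg.norm(centroids - np.asarray(point), axis=1)
    return classes[np.argmin(dists)]

print(classify([2, 2]))   # lands in the "cat" bucket
print(classify([7, 9]))   # lands in the "dog" bucket
```

Note that nothing here "understands" cats or dogs; adding a fourth class is just another centroid, which is the point the paragraph above makes about perception tasks.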
<p>So far, this all feels rather abstract. How can you bring it down to earth and learn how to identify these mathematical structures in your everyday work?</p>
<p>There are a few ways you can determine whether a task presents a good supervised learning opportunity.</p>
<p>First, write down what you do in your job. Break apart your activities into: things you do daily or regularly versus things you do sporadically; things that have become second nature versus things that require patient deliberation or lots of thought; and things that are part of a process versus things you do on your own.</p>
<p>For those tasks that you perform regularly, on your own, and that feel automatic, identify how many others in your organization do similar tasks and how many people have done this historically.</p>
<p>Examine the nature of the task. Does it include predicting something or bucketing something into categories?</p>
<p>Ask yourself: If 10 colleagues in your organization performed the task, would they all agree on the answer? If humans can’t agree on whether something is true or false, computers can’t reliably turn those judgment calls into statistical patterns.</p>
<p>How long have people in the organization been doing something similar to this task? If it’s been a long time, has the organization kept a record of successfully completed tasks? If yes, this could be used as a training data set for your supervised learning algorithm. If no, you may need to start collecting this data today, and then you can keep a human in the loop to train the algorithm over time.</p>
<p>Next, sit down with a data science team and tell them about the task. Walk them through your thought process and tell them what aspects of information you focus on when you complete your task. This will help them determine if automation is feasible and tease out the aspects of the data that will be most predictive of the desired output.</p>
<p>Ask yourself, if this were automated, how might that change the products we offer to our customers? Ask, what is the worst thing that could happen to the business if this were to be automated? And finally, ask, what is the worst thing that could happen to the business if the algorithm outputs the wrong answer or an answer with a 65% or 70% accuracy rate? What is the accuracy threshold the business requires to go ahead and automate this task?</p>
<p>Succeeding with supervised learning entails a shift in the perspective on how work gets done. It entails using past work — all that human judgment and subject matter expertise — to create an algorithm that applies that expertise to future work. When used well, this makes employees more productive and creates new value. But it starts with identifying problems worth solving and thinking about them in terms of inputs and outputs, x’s and y’s.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-to-spot-a-machine-learning-opportunity-even-if-you-arent-a-data-scientist/">How to Spot a Machine Learning Opportunity, Even If You Aren’t a Data Scientist</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-to-spot-a-machine-learning-opportunity-even-if-you-arent-a-data-scientist/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
