<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>feature Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/feature/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/feature/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 24 Jul 2020 07:13:55 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Can automated feature engineering produce machine learning that finally lives up to its name?</title>
		<link>https://www.aiuniverse.xyz/can-automated-feature-engineering-produce-machine-learning-that-finally-lives-up-to-its-name/</link>
					<comments>https://www.aiuniverse.xyz/can-automated-feature-engineering-produce-machine-learning-that-finally-lives-up-to-its-name/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 24 Jul 2020 07:13:45 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Automated]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[ENGINEERING]]></category>
		<category><![CDATA[feature]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10444</guid>

					<description><![CDATA[<p>Source: itproportal.com Automated machine learning (ML) sounds like the stuff of business leaders’ dreams. Take the question, ‘which customers should we focus our marketing budget on this <a class="read-more-link" href="https://www.aiuniverse.xyz/can-automated-feature-engineering-produce-machine-learning-that-finally-lives-up-to-its-name/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/can-automated-feature-engineering-produce-machine-learning-that-finally-lives-up-to-its-name/">Can automated feature engineering produce machine learning that finally lives up to its name?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: itproportal.com</p>



<p>Automated machine learning (ML) sounds like the stuff of business leaders’ dreams.</p>



<p>Take the question, ‘which customers should we focus our marketing budget on this year?’ ML can now deliver robust answers to these types of business questions even faster than before, through greater use of automation.</p>



<p>Data in at one end, seriously useful business insight out of the other.</p>



<p>That is partly why Forbes predicts businesses will be investing $125bn a year in Artificial Intelligence and Machine Learning by 2025.</p>



<p>But even though numerous vendors boast of “AutoML” capabilities, the reality is that the act of developing ML models is still very much driven by humans and requires an awful lot of manual trial and error, performed by (expensive) experts.</p>



<p>Whilst the human element will never completely disappear, new automation techniques will help to reduce the vast amount of labor-intensive work required. Not only will this reduce the overall cost and effort, but it should also reduce the levels of skill and experience required to build reliable ML models.</p>



<p>By today’s standards, it is certainly an unfortunate fact that manual effort still accounts for 80 percent of the machine learning development process. The most important part of this manual effort is the feature engineering process, where different data elements are combined and enriched to generate the most potent formula for predicting future events.</p>



<p>In the case of working out which customers might churn in the next year, for example, the data may include the size of their last discount. But prediction accuracy would improve if further features were engineered such as the time since the last discount, the average time between discounts and how the discount compares to those offered to other customers.</p>
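<p>To make this concrete, here is a minimal sketch in Python with pandas of how those richer discount features might be derived. The table, column names and snapshot date are invented for illustration, not taken from any real system:</p>

```python
import pandas as pd

# Hypothetical discount history: one row per discount offered to a customer.
discounts = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "discount_pct": [10, 15, 5, 20, 25],
    "date": pd.to_datetime(["2019-01-10", "2019-06-01", "2019-11-20",
                            "2019-03-05", "2019-09-15"]),
})
as_of = pd.Timestamp("2020-01-01")  # snapshot date for the churn model

rows = {}
for cid, grp in discounts.groupby("customer_id"):
    grp = grp.sort_values("date")
    gaps = grp["date"].diff().dt.days  # days between successive discounts
    rows[cid] = {
        "last_discount_pct": grp["discount_pct"].iloc[-1],
        "days_since_last_discount": (as_of - grp["date"].iloc[-1]).days,
        "avg_days_between_discounts": gaps.mean(),
    }
features = pd.DataFrame.from_dict(rows, orient="index")

# How does each customer's last discount compare to everyone else's?
features["discount_vs_peers"] = (
    features["last_discount_pct"] - features["last_discount_pct"].mean()
)
print(features)
```

<p>Each engineered column is exactly one of the candidate features described above; in a real project there would be hundreds of such candidates, and only testing them in a model reveals which ones matter.</p>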



<p>The challenge here is that nobody knows for sure whether these feature combinations will work until they have been developed, tested and fully assessed together as part of an ML model.</p>



<p>Specialist knowledge has been essential in these endeavors: you can’t produce a good algorithm without a subject matter expert knowing something about which features may be the most significant, or without experienced data scientists with deep knowledge of the ML process.</p>



<p>This need for expensive experts is one of the factors that have limited the application of ML to the organizations with the skills, patience and deep pockets to indulge lengthy developments, and to low-risk use cases with the clear potential for high levels of return on investment. But this is now starting to change.</p>



<p>One area of data science development that offers the potential to transform this endeavor is automated feature engineering: using a computer to shortcut one of the most manually intensive aspects of ML development.</p>



<p>The challenge of bringing automation to every stage of the ML workflow is one that my company, Peak Indicators, has been exploring for years. From this work, we created Tallinn ML, a platform providing all of the components required to build and deploy predictive models automatically, significantly reducing the reliance on highly-skilled data scientists.</p>



<p>Tallinn ML includes a unique feature-engineering engine that drastically cuts the time taken to develop new predictive algorithms by generating and testing thousands of different metrics as part of the data engineering, a process of trial and error that can take humans months or even years to complete.</p>
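<p>The core idea (generate many candidate feature transforms, then let the machine score each one) can be sketched in a few lines. This is a toy illustration using scikit-learn on synthetic data, not Tallinn ML's actual engine:</p>

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
a = rng.uniform(1, 10, 500)
b = rng.uniform(1, 10, 500)
y = (a * b > 25).astype(int)  # the true signal is the *product* of a and b

# Candidate feature transforms an automated engine might generate and test.
candidates = {
    "a": a, "b": b,
    "a+b": a + b, "a-b": a - b,
    "a*b": a * b, "a/b": a / b,
    "log(a)": np.log(a),
}

# Score each candidate feature on its own with 5-fold cross-validation.
scores = {
    name: cross_val_score(LogisticRegression(), col.reshape(-1, 1), y, cv=5).mean()
    for name, col in candidates.items()
}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

<p>On this toy data the engineered product feature comes out on top, because no raw column alone carries the signal; scaling the same loop to thousands of candidates is what an automated engine does.</p>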



<p>Earlier this year we applied it on Kaggle &#8211; Google’s online home for the world’s data scientists and machine-learning experts, a kind of Premier League of ML. Kaggle set an unusual challenge: could we develop an algorithm to predict which people were most likely to survive the world’s most infamous shipwreck &#8211; the Titanic?</p>



<p>Competitors were given a set of features, such as passenger age or gender, and asked to develop the most powerful algorithm to predict who would survive. Among Kaggle’s 1 million users are some of the world’s best-known researchers and data science teams. Peak’s Tallinn ML algorithm reached the top 5 percent for accuracy.</p>
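<p>For readers curious what a simple manual baseline looks like, here is a minimal scikit-learn sketch using the competition's typical columns. The handful of rows below are invented for illustration, not real passenger data:</p>

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Tiny made-up sample with the usual Titanic-style columns.
train = pd.DataFrame({
    "Sex": ["female", "male", "female", "male", "male", "female"],
    "Age": [29, 40, 2, 25, 60, 35],
    "Pclass": [1, 3, 2, 3, 1, 1],
    "Survived": [1, 0, 1, 0, 0, 1],
})

# One-hot encode the categorical column, then fit a simple forest.
X = pd.get_dummies(train[["Sex", "Age", "Pclass"]], columns=["Sex"])
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, train["Survived"])
print(model.predict(X))
```

<p>The manual effort in a real entry goes into engineering better features from these columns &#8211; exactly the step an automated engine takes over.</p>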



<p>While other world-class competitors developed their models through manual means, our model was produced automatically. It involved no coding and no manual trial and error. It proved that machine learning has now reached a new level of automation.</p>






<h3 class="wp-block-heading" id="the-business-impact-of-automated-ml">The business impact of automated ML</h3>



<p>So what difference does this make to business? Well, potentially a huge one.</p>



<p>The insights provided by predictive analytics and machine learning have been seen for some time as potentially revolutionary for business. Suddenly firms are far better able to answer crucial questions like:</p>



<ul class="wp-block-list"><li>What are the impacts of a particular marketing campaign likely to be for specific target customers?</li><li>Which of our employees are likely to leave in the next year?</li><li>Which transactions in an account are most likely to be fraudulent?</li><li>What seems to be causing a particular business problem?</li></ul>



<p>Those questions are just the start. Answering them reliably means resources can be put where they are most needed. Inefficiently-used time and money can be reallocated to more productive tasks. Robust new insights into what is needed next appear magically.</p>



<p>But making that promise a reality is difficult. As Gartner highlighted just last year, “doing predictive analytics is tough. Your team needs to possess the right skills, understand business priorities and deal with data accuracy”.</p>



<p>That meant that any business, according to Gartner’s research, had previously to ask an important question: “What’s the likelihood you’ll sink under the weight of your organization’s data or swim to successful results?”.</p>



<p>Now that question is no longer so pressing. An automated solution makes it far more likely an organization will swim, because it will eliminate a considerable amount of time and effort in ML projects, and significantly reduce the need for very high-level expertise. The chances of an organization sinking &#8211; or treading water &#8211; in a sea of data become far smaller.</p>



<p>Problems that previously took months to solve can now be addressed in a matter of hours or days, and it has become economically viable to use ML to solve a much more extensive range of problems. We expect to see more experimentation and innovation using ML across all areas of business, including use cases that previously didn’t justify the cost of data science projects lasting several months.</p>



<p>Trials of Tallinn ML at a global retail and consumer-goods company produced a predictive model in two hours that was 18 percent more accurate, and delivered 19 times fewer false positives, than one developed over a three-month period by a team of experienced data scientists.</p>



<p>Another at a global financial-services organization showed that Tallinn ML’s automated feature engineering improved the accuracy of employee-churn predictions by 51 percent.</p>



<p>Beyond these improvements in pace and accuracy, this new approach promises to bring the benefits of ML to a much wider range of organizations. Automating the entire ML workflow democratizes data science to the point that any organization with an IT manager and big data sets to explore can start to derive value from it.</p>



<p>ML and the ability for algorithms to improve automatically through experience has long been recognized for its potential to bring greater intelligence and automation to the world of business. But to date, it has relied on expert humans to set up the machines to do what they do best.</p>



<p>Fully automating the development of ML models means that, for the first time, ML can deliver on its full promise. Efficiency. Productivity. Speed. Precision in prediction. Seriously useful business insight. Genuinely letting the machine take the strain, and freeing up humans to do what they do best.</p>
<p>The post <a href="https://www.aiuniverse.xyz/can-automated-feature-engineering-produce-machine-learning-that-finally-lives-up-to-its-name/">Can automated feature engineering produce machine learning that finally lives up to its name?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/can-automated-feature-engineering-produce-machine-learning-that-finally-lives-up-to-its-name/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Zoom Removes Data-Mining LinkedIn Feature</title>
		<link>https://www.aiuniverse.xyz/zoom-removes-data-mining-linkedin-feature/</link>
					<comments>https://www.aiuniverse.xyz/zoom-removes-data-mining-linkedin-feature/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 03 Apr 2020 07:55:28 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[feature]]></category>
		<category><![CDATA[LinkedIn]]></category>
		<category><![CDATA[Web Security]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7932</guid>

					<description><![CDATA[<p>Source: threatpost.com Zoom has nixed a feature that came under fire for “undisclosed data mining” of users’ names and email addresses, used to match them with their <a class="read-more-link" href="https://www.aiuniverse.xyz/zoom-removes-data-mining-linkedin-feature/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/zoom-removes-data-mining-linkedin-feature/">Zoom Removes Data-Mining LinkedIn Feature</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: threatpost.com</p>



<p>Zoom has nixed a feature that came under fire for “undisclosed data mining” of users’ names and email addresses, used to match them with their LinkedIn profiles.</p>



<p>The feature, LinkedIn Sales Navigator, is a LinkedIn service used for sales prospecting. When users entered a web conference meeting, the tool automatically sent their user names and email addresses to an internal Zoom company system. This system would then match the data to their LinkedIn profiles, according to a New York Times investigation.</p>



<p>Per The New York Times, the tool also automatically allowed other meeting participants to covertly access this LinkedIn profile data, without Zoom asking for users’ permission or notifying them. That means if a user is in a Zoom meeting – even if they aren’t using their real names – other participants could collect information about their real names, locations, employer names and job titles.</p>



<p>The tool was removed on Thursday as part of several sweeping changes Zoom made in response to snowballing security and privacy concerns. Zoom founder Eric Yuan said in a Wednesday post responding to the concerns that Zoom will freeze the development of new features and instead focus on security and privacy issues.</p>



<p>“Over the next 90 days, we are committed to dedicating the resources needed to better identify, address and fix issues proactively,” said Yuan. “We are also committed to being transparent throughout this process. We want to do what it takes to maintain your trust.”</p>



<p>With more employees working from home over the past few weeks due to the coronavirus pandemic, Zoom has ballooned in popularity to include 200 million daily meeting participants in March. To put that into context, the maximum number of daily meeting participants on Zoom in December was 10 million.</p>



<p>But questions over what data Zoom collects – and how it is secured – have also increased. On the privacy front, Zoom this week removed a feature in its iOS web conferencing app that was sharing analytics data with Facebook, after a report revealing the practice sparked outrage. According to the Motherboard report last week that originally disclosed the privacy issue, the transferred information included data on when a user opened the app, a user’s time zone, device OS, device model and carrier, screen size, processor cores and disk space.</p>



<p>The issue left the public — including New York attorney general, Letitia James — demanding more information about how Zoom secures user data. Some have even prohibited use of the video-conferencing app — including, according to Reuters, Elon Musk’s SpaceX rocket company, which cited “significant privacy and security concerns.”</p>



<p>Yuan said Wednesday, in response to these privacy concerns, that Zoom will prepare a transparency report detailing information related to data, records or content. In addition, he said, Zoom has now updated its privacy policy “to be more clear and transparent” around what data is collected and how it is used. The policy now explicitly clarifies that Zoom does not sell users’ data and will not do so going forward.</p>



<p>On the security side of things, Zoom has now patched several recently-disclosed vulnerabilities – including two zero-day flaws uncovered this week in the conferencing platform’s macOS client version, and a UNC path injection vulnerability in the Zoom Windows client, which could enable attackers to steal Windows credentials of users.</p>



<p>Moving forward, Yuan said Zoom would be “enhancing” its current bug-bounty program, and creating white-box penetration tests to “further identify and address issues.”</p>



<p>“Transparency has always been a core part of our culture,” said Yuan. “I am committed to being open and honest with you about areas where we are strengthening our platform and areas where users can take steps of their own to best use and protect themselves on the platform.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/zoom-removes-data-mining-linkedin-feature/">Zoom Removes Data-Mining LinkedIn Feature</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/zoom-removes-data-mining-linkedin-feature/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Preventing the “Climapocalypse” Using Data Science</title>
		<link>https://www.aiuniverse.xyz/preventing-the-climapocalypse-using-data-science/</link>
					<comments>https://www.aiuniverse.xyz/preventing-the-climapocalypse-using-data-science/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 05 Aug 2019 12:44:58 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[analyzed datasets]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Climapocalypse]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[feature]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4270</guid>

					<description><![CDATA[<p>Source: towardsdatascience.com In recent media reports, the threat of climate change has supposedly become so great that it is said to be capable of wiping out all <a class="read-more-link" href="https://www.aiuniverse.xyz/preventing-the-climapocalypse-using-data-science/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/preventing-the-climapocalypse-using-data-science/">Preventing the “Climapocalypse” Using Data Science</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: towardsdatascience.com</p>



<p>In recent media reports, the threat of climate change has supposedly become so great that it is said to be capable of wiping out all of humanity, a “climapocalypse” — the data do not support this claim. As a practicing climatologist, I have personally handled and analyzed datasets with large time spans (over 100 years) and spatial extents (global scale), and while there are signals of change within the data, the possibility of a species-ending or even nation-ending event is far from reality. Climate change is guaranteed to place new burdens upon humanity, our infrastructure, and our socio-economic status quo, but we need to be realistic when we communicate its impacts to individuals not familiar with the science. Of course, the only meaningful way of communicating this information is to use the data we have available, and even this must be done with the utmost caution.</p>



<p>To understand where I am coming from with this article, go to Google and search “climate apocalypse”, then take a gander at how many results are returned for this query. At the time of writing, over 9 million results were returned, and the highlighted articles vary depending on the current trending “hot topic” related to weather or climate. If you navigate to Google Trends, you can see that searches for the term “climate change” peak whenever there is an extreme weather event. From a non-political standpoint, one could say that this is natural. A cluster of tornadoes moves through an area and wreaks havoc, causing millions in damages — people want to know how it happened, where it happened, and when it will happen again. The number of articles pertaining to the “climate apocalypse” also skyrockets around these times, with media outlets publishing dreary reports of a warming world in which millions more will suffer from the increased frequency of these types of events.</p>



<p>Presenting the information in this way not only has blatant political motivations that skew the truth, but it also omits the data and its message, as well as an abundance of scientific research on the frequency of these events. For tornadoes, droughts, flood-inducing precipitation events, and tropical cyclones, research has shown that in a warming world these events will become less frequent but more intense. While many hear this and recall the overdramatized storm from the movie <em>The Day After Tomorrow</em>, such an event is unlikely to happen, and even if it did, humanity would survive it. This isn’t just the wishful thinking of an overly optimistic scientist; it can be observed in the data. In 1900 the worst hurricane in United States history made landfall in Galveston, Texas. Fatalities were high and property damage was high, but humanity was not annihilated by this event. Building codes then were much less strict and far less safe than today’s, and the Galveston hurricane is reported to have been close to the strength of Hurricane Harvey; both were category 4 hurricanes. The difference is that the Galveston hurricane killed 6,000 to 12,000 people, while Harvey killed 68. The loss of lives is most definitely tragic, but the sharp decline between these two events is evidence that our understanding of the impacts of extreme weather has yielded improved infrastructure, hazard management, and emergency response. Nevertheless, there were hundreds of articles claiming Harvey as the beginning of the end of humanity through climate-change-induced weather, which was far from the truth.</p>



<p>From a data science perspective, advances in feature engineering, artificial intelligence, and computational methods are providing us with new ways of analyzing climate data. Feature engineering techniques such as outlier analysis, binning, k-means, correlation matrices, and even linear discriminant analysis allow analysts and researchers to understand which variables within a dataset contribute the most to a phenomenon of interest. This information can then be used to refine inferential models that forecast into the future. For artificial intelligence algorithms, it is crucial to use the best features from a dataset. Feeding a simple sigmoid-function-based classifier temperature data from a weather station installed in a paved urban environment, or in a shaded field, can lead to inaccurate and imprecise output from an algorithm. For other methods such as convolutional and recurrent neural networks, this problem still persists, as these methods can only produce results based on the data provided to them. One technique that is on the rise and will likely see more widespread use within the climate community is reservoir computing, which combines the best of big data techniques and machine learning to create forecast products from the information provided to it. All of these methods are only as good as the data they are given, and while it’s true that the majority of our data support global warming, they do not support a climate apocalypse, even if the planet continues to warm at its current rate.</p>
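<p>As a small illustration of the simplest of those techniques, ranking candidate variables by a correlation matrix, consider this Python sketch on a made-up daily climate-style dataset (the variable names and numbers are invented for the example):</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 365  # one year of daily observations

# Synthetic dataset: by construction, only CO2 drives the target.
df = pd.DataFrame({
    "co2_ppm": np.linspace(380, 415, n) + rng.normal(0, 0.5, n),
    "solar_irradiance": rng.normal(1361, 1, n),
    "station_elevation_m": rng.normal(250, 10, n),  # irrelevant by design
})
df["temp_anomaly"] = 0.01 * (df["co2_ppm"] - 380) + rng.normal(0, 0.05, n)

# Rank candidate features by absolute correlation with the target.
ranked = (
    df.corr()["temp_anomaly"]
    .drop("temp_anomaly")
    .abs()
    .sort_values(ascending=False)
)
print(ranked)
```

<p>The ranking correctly surfaces the variable that carries the signal and pushes the irrelevant one to the bottom; on real station data, this same step is what flags a badly sited sensor before it pollutes a model.</p>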



<p>Being in climatology, or any natural science, these days is interesting. The majority of the instruction or advice we receive from seniors within the field supports using data and previous literature to develop new insight. However, if you are in a field that uses a lot of data from instrumentation, you are likely also developing yourself as a data scientist — or you should be. Certain aspects of climatology will require more data science skills than ever before, and those unaware of these techniques will likely build poor models that misinform the public. Understanding data and how to manipulate it is a necessary component of creating useful information, advancing the field, and helping communities build more sustainable and environmentally resilient infrastructure. It will also be necessary for debunking exaggerated narratives such as the “climapocalypse” in the near future.</p>
<p>The post <a href="https://www.aiuniverse.xyz/preventing-the-climapocalypse-using-data-science/">Preventing the “Climapocalypse” Using Data Science</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/preventing-the-climapocalypse-using-data-science/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
