<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>What Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/what/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/what/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 15 Jul 2021 10:34:10 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>What Is The Definition Of Big Data?</title>
		<link>https://www.aiuniverse.xyz/what-is-the-definition-of-big-data/</link>
					<comments>https://www.aiuniverse.xyz/what-is-the-definition-of-big-data/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 15 Jul 2021 10:34:07 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[Definition]]></category>
		<category><![CDATA[What]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=15023</guid>

					<description><![CDATA[<p>Source &#8211; https://timesnewsexpress.com/ Did you know that a jet engine can generate more than ten terabytes of data in only 30 minutes of flight time? What’s more, <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-the-definition-of-big-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-the-definition-of-big-data/">What Is The Definition Of Big Data?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://timesnewsexpress.com/</p>



<p>Did you know that a jet engine can generate more than ten terabytes of data in only 30 minutes of flight time? And how many flights are there each day? That adds up to several petabytes of data every day. The New York Stock Exchange produces around one terabyte of new trading data each day. Facebook photo and video uploads, posts, and comments create more than 500 terabytes of new data every day. That is a lot of data, and it is what we call Big Data.</p>



<p>Big Data is becoming an integral part of our lives. Everybody uses enterprise technology, and those enterprises use the data we give them, constantly analyzing it to increase their efficiency and develop new products.</p>



<p><strong>What Software Is Used For Big Data?</strong></p>



<p>Processing the masses of digital data coming from various channels requires specialized computing tools, most of which are built on open-source foundations. Big Data analysis can be extremely helpful for your business, from boosting sales and understanding customers to improving internal operations. However, to convert raw data into meaningful information, you need solid analytical tools. Here is a selection of seven Big Data tools for your data scientists and your business.</p>



<p><strong>Hadoop</strong></p>



<p>Hadoop is an open-source framework for building applications capable of storing and processing an enormous mass of data in batch mode. This free platform was inspired by MapReduce, BigTable, and the Google File System. Concretely, Hadoop consists of a component intended for data storage, the Hadoop Distributed File System (HDFS), and a component that handles data processing: MapReduce. Hadoop was designed to handle large amounts of data by splitting it into blocks distributed among the nodes of a cluster. It is probably the tool most used by Chief Data Officers.</p>



<p>Several cloud computing services, such as Azure HDInsight from Microsoft Azure or Amazon Elastic Compute Cloud, allow Hadoop to store and analyze data. On Azure HDInsight, organizations are billed based on the number of nodes running.</p>



<p><strong>Storm</strong></p>



<p>Storm is an open-source real-time big data processing system that can be used by both small and large organizations. Storm works with any programming language. It keeps processing data even if a connected node in the cluster stops working or if messages are lost. Storm is also well suited for distributed RPC and online machine learning. It is a good choice among big data tools because it integrates with current technologies.</p>



<p><strong>Hadoop MapReduce</strong></p>



<p>Hadoop MapReduce is a programming model and software framework for building data processing applications. Originally developed at Google, MapReduce enables fast, parallel processing of enormous data sets on clusters of nodes.</p>



<p>The framework has two main functions. First, the map function splits up the data to be processed. Second, the reduce function aggregates and analyzes the intermediate results.</p>
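

<p>To make the two phases concrete, here is a minimal word-count sketch in the style of Hadoop Streaming, which lets you write the map and reduce steps as plain Python scripts reading from standard input. The file name and the way the script is invoked are illustrative assumptions for this example, not part of any particular Hadoop installation.</p>



<pre class="wp-block-code"><code># wordcount.py - a minimal MapReduce word count in the Hadoop Streaming style.
# The map function emits (word, 1) pairs; the reduce function sums them.
import sys
from itertools import groupby

def mapper(stream):
    # Map phase: emit one "word\t1" record per word.
    for line in stream:
        for word in line.strip().split():
            sys.stdout.write(word.lower() + "\t1\n")

def reducer(stream):
    # Reduce phase: Hadoop Streaming sorts records by key first, so
    # consecutive records for the same word can be summed with groupby.
    records = (line.rstrip("\n").split("\t", 1) for line in stream)
    for word, group in groupby(records, key=lambda kv: kv[0]):
        total = sum(int(count) for _, count in group)
        sys.stdout.write(word + "\t" + str(total) + "\n")

if __name__ == "__main__":
    if sys.argv[1] == "map":
        mapper(sys.stdin)
    else:
        reducer(sys.stdin)</code></pre>



<p>The same script can be tested locally, without a cluster, by piping: <code>cat input.txt | python wordcount.py map | sort | python wordcount.py reduce</code>. On a cluster, Hadoop runs the map phase in parallel on each block of the distributed file and shuffles the sorted output to the reducers.</p>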



<p><strong>Cassandra</strong></p>



<p>Cassandra can manage enormous data sets spread across multiple server clusters and in the cloud. Facebook originally created it because it needed a sufficiently powerful database for its inbox search feature. Today, many organizations with huge datasets, such as Netflix, eBay, Twitter, and Reddit, use this big data tool.</p>



<p><strong>OpenRefine</strong></p>



<p>OpenRefine is an open-source tool designed for messy data. It allows you to quickly clean up datasets and transform them into a usable format. Even users without technical skills can use it. OpenRefine also lets you create links between datasets on the fly.</p>



<p><strong>RapidMiner</strong></p>



<p>RapidMiner is an open-source tool capable of handling unstructured data such as text files, traffic logs, and images. Concretely, it is a data science platform based on visual programming. Capabilities such as data manipulation, analysis, model building, and rapid integration into business processes are among RapidMiner’s advantages.</p>



<p><strong>MongoDB</strong></p>



<p>MongoDB is an open-source NoSQL database widely used for its high performance, high availability, and scalability. It is well suited for big data processing thanks to its feature set and its support for approachable programming languages such as JavaScript, Ruby, and Python. MongoDB is easy to install, configure, maintain, and use.</p>
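

<p>As a small illustration of how approachable MongoDB is from Python, the sketch below inserts and queries a few documents with the official pymongo driver. The connection string and the database and collection names are placeholders invented for this example.</p>



<pre class="wp-block-code"><code># A minimal pymongo sketch: store a few documents and query them.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # placeholder URI
events = client["demo_db"]["events"]                # placeholder names

events.insert_many([
    {"user": "alice", "action": "login",  "duration_ms": 120},
    {"user": "bob",   "action": "upload", "duration_ms": 950},
    {"user": "alice", "action": "upload", "duration_ms": 430},
])

# Documents are schemaless, so ad hoc queries need no table definition.
for doc in events.find({"user": "alice"}):
    print(doc["action"], doc["duration_ms"])

# Aggregations run server-side, which matters at big data scale.
pipeline = [{"$group": {"_id": "$action", "avg_ms": {"$avg": "$duration_ms"}}}]
for row in events.aggregate(pipeline):
    print(row)</code></pre>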
<p>The post <a href="https://www.aiuniverse.xyz/what-is-the-definition-of-big-data/">What Is The Definition Of Big Data?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-the-definition-of-big-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What Is Data Science And What Techniques Do The Data Scientists Use?</title>
		<link>https://www.aiuniverse.xyz/what-is-data-science-and-what-techniques-do-the-data-scientists-use/</link>
					<comments>https://www.aiuniverse.xyz/what-is-data-science-and-what-techniques-do-the-data-scientists-use/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 03 Mar 2021 09:10:27 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[data scientists]]></category>
		<category><![CDATA[techniques]]></category>
		<category><![CDATA[What]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13193</guid>

					<description><![CDATA[<p>Source &#8211; https://aithority.com/ What Is Data Science? The terminology came into the picture when the amount of data had started expanding in the starting years of the <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-data-science-and-what-techniques-do-the-data-scientists-use/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-data-science-and-what-techniques-do-the-data-scientists-use/">What Is Data Science And What Techniques Do The Data Scientists Use?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://aithority.com/</p>



<h4 class="wp-block-heading"><strong>What Is Data Science?</strong></h4>



<p>The terminology came into the picture when the amount of data started expanding in the early years of the 21st century. As data grew, a new need emerged: selecting only the data required for a specific task. The primary function of data science is to extract knowledge and insights from all kinds of data. While data mining is a task that involves finding patterns and relations in large data sets, data science is a broader concept of finding, analyzing, and providing insights as an outcome.</p>



<p>In short, data science is the parent category of computational studies, encompassing machine learning and big data.</p>



<p>Data science is closely related to statistics, but it goes well beyond the concepts of mathematics. Statistics is the collection and interpretation of quantitative data, where assumptions must be accounted for (like in any other pure science field). Data science is an applied branch of statistics dealing with huge databases, which requires a background in computer science. And because data scientists deal with such an immense amount of data, there is less need to rely on assumptions. In-depth knowledge of mathematics, programming languages, ML, graphic design, and the domain of the business is essential to become a successful data scientist.</p>



<h4 class="wp-block-heading"><strong>How Does It Work?</strong></h4>



<p>Several practical applications provide personalized solutions for business problems. The goals and workings of data science depend on the requirements of a business. Companies expect predictions from the extracted data: estimating a value based on given inputs. Via prediction graphs and forecasting, companies can retrieve actionable insights. There is also a need to classify data, for example to recognize whether or not a given message is spam; classification reduces work in later cases. A related task is to detect patterns and group data so that searching becomes more convenient.</p>



<h4 class="wp-block-heading"><strong>Commonly Used Techniques In The Market</strong></h4>



<p>Data Science is a vast field; it is very difficult to name all the algorithms data scientists use today. Those techniques are generally categorized by function as follows:</p>



<h5 class="wp-block-heading"><strong>Classification –</strong>&nbsp;The act of putting data into classes on both structured and unstructured data (unstructured data is not easy to process, at times distorted, and requires more storage).</h5>



<p>Further in this category, there are 7 commonly used algorithms, arranged in ascending order of efficiency. Each one has its pros and cons, so choose according to your needs.</p>



<p><em>Logistic Regression&nbsp;</em>is based on binary probability and is most suitable for larger samples: the bigger the data, the better it performs. Even though it is a type of regression, it is used as a classifier.</p>



<p>The&nbsp;<em>Naïve&nbsp;Bayes&nbsp;</em>algorithm works best on small amounts of data and relatively simple tasks such as document classification and spam filtering. Many avoid it for bigger data because the algorithm turns out to be a poor estimator.</p>



<p><em>Stochastic Gradient Descent</em>, in simple terms, is an algorithm that keeps updating the model after every example to minimize error. One major problem is that the gradient can change drastically even with a small change in the input.</p>



<p><em>K-Nearest Neighbours&nbsp;</em>is commonly used for large data sets and often acts as a first step before further work on unstructured data. It does not build a separate model for classification; it simply labels a point according to the&nbsp;<em>K</em> training examples nearest to it. The main work lies in choosing the K that gives the best fit for the data.</p>



<p><em>The Decision Tree&nbsp;</em>provides a simple, visualizable model but can be very unstable, as the whole tree can change with a small variation in the data. Given attributes and classes, it produces a sequence of rules for classifying the data.</p>



<p><em>Random Forest&nbsp;</em>is the most used technique for classification. It is a step ahead of the decision tree, applying the concept of the latter to various subsets of the data. Owing to its more complicated algorithm, real-time analysis is slower and harder to implement.</p>



<p><em>Support Vector Machine (SVM)&nbsp;</em>represents the training data as points in space, separated by as wide a margin as possible. It is very effective in high-dimensional spaces and very memory efficient. But for direct probability estimates, companies have to use an expensive five-fold cross-validation.</p>
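

<p>To see several of the classifiers above side by side, the short scikit-learn sketch below trains each one on the same synthetic data set and prints its test accuracy. It is an illustrative comparison on artificial data, not a benchmark, and the hyperparameters are left mostly at their defaults.</p>



<pre class="wp-block-code"><code># Compare the classifiers discussed above on one synthetic data set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "Stochastic Gradient Descent": SGDClassifier(random_state=0),
    "K-Nearest Neighbours": KNeighborsClassifier(n_neighbors=5),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    # Probability estimates from SVC would need probability=True, which
    # triggers the costly internal five-fold cross-validation noted above.
    "Support Vector Machine": SVC(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))</code></pre>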



<h5 class="wp-block-heading"><strong>Feature Selection</strong>&nbsp;–&nbsp;<strong>Finding the best set of features to build a model</strong></h5>



<p><em>Filtering </em>scores each feature with univariate statistics, which proves to be cheaper for high-dimensional data. The chi-square test, Fisher score, and correlation coefficient are some of the algorithms of this technique.</p>



<p><em>Wrapper methods&nbsp;</em>search the space of all possible subsets of features against the criterion you introduce. This is more effective than filtering but costs a lot more.</p>



<p><em>Embedding&nbsp;</em>maintains cost-effective computation by using a mix of filtering and wrapping, identifying the features that contribute the most to the model.</p>



<p><em>The hybrid method&nbsp;</em>alternates between the above approaches within one algorithm, which keeps the cost to a minimum with as few errors as possible.</p>
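

<p>As a concrete instance of the filtering approach described above, the scikit-learn sketch below scores features with the univariate chi-square test and keeps the top five. The data set and the choice of five features are arbitrary choices made for the example.</p>



<pre class="wp-block-code"><code># Filter-style feature selection: rank features with a univariate
# chi-square test and keep only the k best.
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_digits(return_X_y=True)        # 64 pixel features, all non-negative
selector = SelectKBest(chi2, k=5)          # chi2 requires non-negative inputs
X_reduced = selector.fit_transform(X, y)

print("original shape:", X.shape)          # (1797, 64)
print("reduced shape:", X_reduced.shape)   # (1797, 5)
print("kept feature indices:", selector.get_support(indices=True))</code></pre>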
<p>The post <a href="https://www.aiuniverse.xyz/what-is-data-science-and-what-techniques-do-the-data-scientists-use/">What Is Data Science And What Techniques Do The Data Scientists Use?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-data-science-and-what-techniques-do-the-data-scientists-use/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What to Know About Machine Learning as a Service in 2021</title>
		<link>https://www.aiuniverse.xyz/what-to-know-about-machine-learning-as-a-service-in-2021/</link>
					<comments>https://www.aiuniverse.xyz/what-to-know-about-machine-learning-as-a-service-in-2021/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 02 Mar 2021 11:15:01 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[2021]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[service]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[What]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13169</guid>

					<description><![CDATA[<p>Source &#8211; https://www.iotforall.com/ Having worked as a software developer and with software developers for over a decade now, one of the things I have learned to appreciate <a class="read-more-link" href="https://www.aiuniverse.xyz/what-to-know-about-machine-learning-as-a-service-in-2021/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-to-know-about-machine-learning-as-a-service-in-2021/">What to Know About Machine Learning as a Service in 2021</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.iotforall.com/</p>



<p>Having worked as a software developer and with software developers for over a decade now, one of the things I have learned to appreciate is just how much developers dislike inefficiency. Anything we can do to automate our jobs and make them faster and easier will inevitably be done. Think back to how much work it was to build and host your own website a decade ago versus now. Manual build and deploy steps have gradually been replaced with automated builds, testing, and deployments across multiple environments with fantastic scalability.</p>



<p>As a technology moves along the hype cycle into maturity, frameworks, tooling, and methodologies rise and fall until we begin arriving at the things that truly make technology useful and efficient. Machine Learning (ML) has seen an explosion of development in the last few years and shows no signs of slowing. Just as in other software development areas, machine learning is beginning to find its stride in the development track, making it much more accessible than ever before, thanks to MLaaS.</p>



<h2 class="wp-block-heading" id="h-what-is-machine-learning-as-a-service-mlaas">What is Machine Learning as a Service (MLaaS)?</h2>



<p>So, what is machine learning as a service? Simply put, MLaaS is when you use someone else’s tooling and infrastructure to enable machine learning development or deployment, usually at a price. MLaaS is a more specific version of&nbsp;Software as a Service&nbsp;(SaaS). In the olden days, if you wanted network storage, you bought or built a server, put it in a server rack, and attached it to your network. Now you can pay to use someone else’s server and let them handle redundancy, scalability, and maintenance, so you don’t have to.</p>



<p>Using these other servers is where much of the efficiency in as-a-service offerings comes from; they help customers accelerate solutions. Economies of scale often make this approach faster to set up, easier to maintain, and generally more cost-effective over time. In the same way, if you wanted to do machine learning development a few years ago, you had to jump through several hoops. First, you needed to hire a machine learning expert. Second, put $2000+ into a high-end GPU-packed Linux box. Third, try to piece together several disparate frameworks and tools, hoping you didn’t have any conflicting dependencies. Lastly, wrangle your data into some custom format until you could get a model training. While that may still be the right solution in some cases, for many, we have much better options now.</p>



<h2 class="wp-block-heading" id="h-ml-services">ML Services</h2>



<p>Not surprisingly, the most prominent players in the cloud computing industry are also some of the most prominent players in the MLaaS space. Amazon Web Services, Google Cloud Platform, and Microsoft Azure are updating and releasing new and improved machine learning tooling at a lightning-fast pace.</p>



<p>The types of services that these cloud providers offer include:</p>



<ul class="wp-block-list"><li>Virtual machines for training models</li><li>Data storage</li><li>Data versioning</li><li>Data labeling/ground-truthing tools</li><li>Hosting options for models</li><li>Pre-trained models for deployment such as:<ul><li>Models for fraud detection</li><li>Models for detecting various objects in images</li><li>Models for doing sentiment analysis on text</li><li>Recommendation engines</li><li>Anomaly detection</li></ul></li><li>Development environments for data scientists and software developers</li></ul>



<p>On top of these general offerings, we see a surge in particular offerings targeted at use cases in certain industries. One example is Amazon’s Lookout for Equipment. This offering is targeted at the industrial sector, which has generally struggled with adopting machine learning. This industry’s struggle is partly due to the lack of experts available to get companies started and the high cost of entry into the ML space. Specific services like these reduce the need for in-house expertise, lower the barrier to entry, and start at a low cost. AWS has gone so far with this that they offer devices, such as Monitron, that work with their cloud infrastructure to reduce these barriers to entry further. </p>



<p>Along with the big names in cloud computing, we see very specialized companies entering this space and providing solutions that were hard to imagine 5 years ago. One great example of this is Edge Impulse. They are focused on bringing machine learning to edge devices, which has traditionally been incredibly difficult and required both a high level of expertise in embedded systems and machine learning. With their services, what used to take weeks of development time can be reduced down to days or even hours. </p>



<p>With these types of technologies, it is no wonder that companies are further embracing machine learning. A recent article in Forbes highlights some of the significant shifts in the industry and points to some of the challenges ahead for companies.</p>



<p>With everyone scrambling to get a piece of the pie, it can leave companies with a lot of questions, including:</p>



<ul class="wp-block-list"><li>Should I use a service platform or do the work in-house?</li><li>If I do use a platform, which one is the best?</li><li>Should I use a pre-canned specific solution or do something more general?</li><li>How expensive is all of this?</li></ul>



<p>While we can’t answer every question in this post, let’s look at a few high-level things to consider.</p>



<h2 class="wp-block-heading" id="h-how-to-get-started-with-mlaas">How to Get Started with MLaaS</h2>



<p>There are a few high-level trade-offs to be considered.</p>



<ul class="wp-block-list"><li>Generally, an MLaaS solution improves speed but tends to decrease flexibility in frameworks, versions, or the ability to adapt and tweak models to get the best solution.</li><li>Depending on the amount of training you need to do, sometimes building in-house infrastructure may be a cheaper option.</li><li>While MLaaS solutions tend to improve the speed of getting started, they can also be slower during actual development due to the large amounts of data moving around the cloud.</li><li>Some solutions will promise the world but need to be vetted by someone with some machine learning experience and domain knowledge of your problem. Be wary of silver bullet sales pitches.</li><li>Make sure you are considering the full machine learning process. If the service doesn’t work with the way you collect and store data, that is a problem. The ability to easily deploy a trained model from this service is an important consideration. If you have no way to monitor your model’s performance, that can be a significant issue for your solution’s long-term success.</li></ul>



<p>There are also a few high-level questions to answer to help decide how to move forward.&nbsp;</p>



<h3 class="wp-block-heading" id="h-how-unique-is-the-problem">How Unique Is the Problem?</h3>



<p>Some problems in machine learning are pretty well understood and have solutions to them. If you are looking to find people in an image, implement fraud detection, or recommend products to a user, you can probably find something off the shelf that will help you accelerate your solution quickly. However, if you work in a unique domain, such as optimizing feeding patterns on your grasshopper farm, you may struggle to find a solution that cleanly fits your needs. The more specific the service offering is, the more closely your problem will need to match it. More general services, such as using Amazon SageMaker to create your model from scratch, will take more time and expertise but will ultimately be more flexible.</p>



<h3 class="wp-block-heading" id="h-is-in-house-technical-expertise-available">Is In-House Technical Expertise Available?</h3>



<p>If you have a team of data scientists and developers already working on the problem, their expertise may be able to provide better solutions at a lower cost than trying to move to an MLaaS solution. This is particularly true if the problem you are trying to solve has many nuances or needs a lot of flexibility. Often, an in-house expert will be able to quickly assess a service to know if it will work in your particular environment. If you don’t have this expertise, it will likely be wise to engage a third party to evaluate the right solution.</p>



<h3 class="wp-block-heading" id="h-is-there-in-house-infrastructure">Is There In-House Infrastructure?</h3>



<p>If you already have a lot of in-house infrastructure to support data storage, training, and deployment, it is probably worth leveraging it. However, if you want to integrate some of it with external machine learning services, be careful about which kinds of third-party integration each tool actually allows. This can be a major headache even when a solution claims to offer third-party integration; many times, these integrations are cumbersome, buggy, and fragile.</p>



<h4 class="wp-block-heading" id="h-cloud-or-edge-where-should-the-model-run">Cloud or Edge: Where Should the Model Run?</h4>



<p>This question is going to drive a lot of decisions. Generally, running models in the cloud is easier. However, it comes with a lot of limitations that may or may not be an issue. For instance, if you have a model that inspects the quality of a part on a manufacturing line, you may not have enough time for data to be collected, sent up to the cloud, processed, and sent back while still maintaining your cycle time. If this is a safety-critical application, you can’t depend on a wireless connection all the time to get results. While cloud providers are working hard to make sure their services can move into this domain, it may be better to find tooling specifically targeted to your edge application. </p>



<h3 class="wp-block-heading" id="h-how-sensitive-is-the-data-or-application">How Sensitive Is the Data or Application?</h3>



<p>If you are working with highly sensitive data, this needs to be a significant consideration on how you choose to work with machine learning. Cloud platforms are becoming more and more secure and providing better options for end-to-end security than ever before. However, anytime data moves from one location to another, there is always increased risk. Each service being considered should be carefully scrutinized to know if or how it should be used in your scenario.</p>



<h3 class="wp-block-heading" id="h-will-the-problem-statement-change-significantly-in-the-future-roadmap">Will the Problem Statement Change Significantly in the Future Roadmap?</h3>



<p>Rarely do you train up a model that will work perfectly from now into eternity. Inputs change. Business problems change. Customers’ needs change. Tying yourself to a very specific machine learning service might work great today but could be a hindrance down the road. Although we can’t predict the future, having a good roadmap of where you think your product or problem is going can help you make informed decisions today.&nbsp;</p>



<p>Once you’ve answered some of these questions, you are ready to explore the options that are out there. Keep in mind that there will be trade-offs with any service that you use. Understanding what you are gaining and what you are losing is key to finding the right solution.</p>



<h2 class="wp-block-heading" id="h-set-yourself-up-for-machine-learning-success">Set Yourself Up for Machine Learning Success</h2>



<p>To set yourself up for success in machine learning:</p>



<ul class="wp-block-list"><li>Adopt a fail-fast methodology. What is the bare minimum you can try to see if a particular service will fit your need? Experiment quickly and move on quickly if things aren’t going in the right direction.</li><li>Take advantage of free tier offerings, trials, and demos. Most machine learning service providers want you to buy their products and try to make the barrier to entry lower through low to no-cost trial periods. Try it out. If you don’t like it, try something different.</li><li>Never trust a machine learning sales pitch. Machine learning can often be a black box that feels like magic. It can often be too easy for a sales demo to cherry-pick the right data to make their service look even more magical. Whenever you can, try the product for yourself and look for successful real-world use cases.</li><li>Think about your problem holistically. If you are using multiple models, make sure the service you choose will support all of them. If you need other services like monitoring, data storage, or an API to hit a machine learning endpoint, it is probably better to choose a platform that provides all of these things, so you don’t have to learn more technologies and maintain different accounts.</li><li>Try to understand what you don’t know and don’t be afraid to ask for help. If you don’t know how something works, it is better to understand it sooner rather than later. A few hours with an expert consultant could save you thousands or hundreds of thousands of dollars down the road. Know your limits and approach problems humbly.</li></ul>



<h2 class="wp-block-heading" id="h-learn-from-experience">Learn from Experience</h2>



<p>Based on the trends over the last few years and the projections moving forward, I suspect many more machine learning services will hit the market in 2021 and beyond. Some will last. Some will fail. Navigating through all of them can be a big undertaking. Finding the right solution could be just what your business needs to get to market sooner or the golden ticket that sets you apart from the competition. There are risks, but the market is showing that there is also great reward. Picking the right service or set of services will start you off on the right foot and offer much greater efficiency than trying to do it yourself.&nbsp;</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-to-know-about-machine-learning-as-a-service-in-2021/">What to Know About Machine Learning as a Service in 2021</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-to-know-about-machine-learning-as-a-service-in-2021/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What Is Meta-Learning via Learned Losses (with Python Code)</title>
		<link>https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/</link>
					<comments>https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 01 Mar 2021 07:07:28 +0000</pubDate>
				<category><![CDATA[Python]]></category>
		<category><![CDATA[Code]]></category>
		<category><![CDATA[Learned]]></category>
		<category><![CDATA[Losses]]></category>
		<category><![CDATA[meta-learning]]></category>
		<category><![CDATA[What]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13145</guid>

					<description><![CDATA[<p>Source &#8211; https://analyticsindiamag.com/ Facebook AI Research (FAIR) work on meta-learning is broadly classified into two types: first, methods that can learn representations for generalization; second, methods that <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/">What Is Meta-Learning via Learned Losses (with Python Code)</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://analyticsindiamag.com/</p>



<p>Facebook AI Research (FAIR) work on meta-learning is broadly classified into two types: first, methods that can learn representations for generalization; second, methods that can learn to optimize models. We thoroughly discussed the first type in our previous article on MBIRL. In this post, we give a brief introduction to the second type. Last month, at the International Conference on Pattern Recognition (ICPR), Italy, January 10-15, 2021, a group of researchers, <em>S. Bechtle, A. Molchanov, Y. Chebotar, E. Grefenstette, L. Righetti, G. S. Sukhatme, and F. Meier</em>, presented a research paper focusing on automating the “meta-training” process: <strong>Meta Learning via Learned Loss</strong>.</p>



<p><strong>Motivation Behind ML</strong><strong><sup>3</sup></strong></p>



<p>In meta-learning, the goal is to efficiently optimize a function <em>f<sub>θ</sub></em>, which can be a regressor or a classifier, by finding the optimal value of <em>θ</em>. <em>L</em> is the loss function and <em>h</em> is a gradient transform. The majority of work in deep learning learns the function <em>f</em> directly from data, and some meta-learning work focuses on the parameter-update rule. In the <strong>ML<sup>3</sup></strong> approach, the authors instead target loss learning. Loss functions are architecture-independent and common to all learning problems, so a learned loss function requires little additional engineering and optimization, and it allows extra information to be added during meta-training.</p>



<p>The key idea of the proposed framework is to develop a meta-training pipeline that not only optimizes the performance of a model but also generalizes across different tasks and model architectures. The learned loss functions efficiently optimize models for new tasks. The main contributions of the ML<sup>3</sup>&nbsp;framework are:</p>



<p>i) It is capable of learning adaptive, high-dimensional loss functions via backpropagation and gradient descent.</p>



<p>ii) The framework is very flexible, as it can incorporate additional information at meta-train time, and it generalizes across regression, classification, model-based reinforcement learning, and model-free reinforcement learning.</p>



<p><strong>The Model Architecture of ML</strong><strong><sup>3</sup></strong></p>



<p>Learning a loss function is a bi-level optimization problem, i.e., it contains two optimization loops: an inner one and an outer one. The inner loop trains the model, or <em>optimizee</em>, with gradient descent using the learned meta-loss function, and the outer loop optimizes the meta-loss function by minimizing the task loss, i.e., the regression, classification, or reinforcement learning loss.</p>



<p>The process contains a function <em>f</em>, parameterized by <em>θ</em>, that takes a variable <em>x</em> and outputs <em>y</em>. It also learns a meta-loss network <em>M</em>, parameterized by <em>Φ</em>, that takes the input and output of <em>f</em> together with task-specific information <em>g</em> (for example, the ground-truth label for regression or classification, the final position in MBIRL, or the sampled reward in model-free reinforcement learning problems) and outputs the meta-loss <em>L</em>, which depends on both <em>Φ</em> and <em>θ</em>.</p>



<p>So, to update the function <em>f</em>, compute the gradient of the meta-loss <em>L</em> with respect to <em>θ</em> and take a gradient descent step with the learned loss: <em>θ ← θ − α∇<sub>θ</sub>L</em>.</p>



<p>Now, to update <em>M</em>, the loss network, formulate a task-specific loss that compares the output of the currently optimized <em>f</em> with the target information. Since <em>f</em> was updated using <em>L</em>, the task loss is itself a function of <em>Φ</em>, so we can perform a gradient update on <em>Φ</em> to optimize <em>M</em>. This finally forms a fully differentiable loss-learning framework used for training.</p>



<p>To use the learned loss at test time, directly update <em>f</em> by taking the gradient of the learned loss <em>L</em> with respect to the parameters of <em>f</em>.</p>
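

<p>The PyTorch sketch below illustrates this bi-level structure on a toy sine-regression task: an inner step updates the model with the learned meta-loss, and an outer step updates the meta-loss network so that the updated model does well on the task loss. It is a simplified illustration of the idea rather than the authors’ implementation; the network sizes, the inner step size, the single inner step per outer step, and mean squared error as the task loss are all assumptions made for this example.</p>



<pre class="wp-block-code"><code># Toy ML3-style loss learning: f is the optimizee, M is the meta-loss network.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # optimizee
M = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # meta-loss
meta_opt = torch.optim.Adam(M.parameters(), lr=1e-3)
alpha = 0.01  # inner-loop step size (an assumption for this toy)

x = torch.linspace(-3, 3, 64).unsqueeze(1)
y = torch.sin(x)  # task-specific target information g

for outer_step in range(1000):
    # Inner loop: the learned loss drives a differentiable update of theta.
    learned_loss = M(torch.cat([f(x), y], dim=1)).mean()
    grads = torch.autograd.grad(learned_loss, list(f.parameters()),
                                create_graph=True)  # keep graph for outer step
    updated = [p - alpha * g for p, g in zip(f.parameters(), grads)]

    # Re-run the model with the updated weights, which are now a function
    # of phi (the meta-loss parameters), so gradients can flow back to M.
    h = torch.tanh(x @ updated[0].t() + updated[1])
    y_new = h @ updated[2].t() + updated[3]

    # Outer loop: the task loss (here MSE) of the updated model trains M.
    task_loss = ((y_new - y) ** 2).mean()
    meta_opt.zero_grad()
    task_loss.backward()
    meta_opt.step()

    # Write the detached updated weights back into the optimizee.
    with torch.no_grad():
        for p, new_p in zip(f.parameters(), updated):
            p.copy_(new_p.detach())</code></pre>



<p>At test time, in line with the description above, only the inner-loop portion is needed: freeze <em>M</em> and repeatedly update <em>f</em> with the gradient of the learned loss.</p>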



<p><strong>Applications of ML</strong><strong><sup>3</sup></strong></p>



<ol class="wp-block-list"><li>Regression problems.</li><li>Classification problems.</li><li>Shaping Loss during training e.g., Covexifying Loss, exploration signal. ML<sup>3</sup>&nbsp;provides a possibility to add additional information during meta-training.</li><li>Model-based Reinforcement Learning.</li><li>Model-free Reinforcement Learning.</li></ol>



<p><strong>Requirements &amp; Installation</strong></p>



<ol class="wp-block-list"><li>Python=3.7</li><li>Clone the Github repository via <em>git</em>.</li><li>Install all the dependencies of ML<sup>3</sup> via :</li></ol>



<p><strong>Paper Experiment Demos</strong></p>



<p>This section contains different experiments mentioned in the research paper.</p>



<p><strong>A. Loss Learning for Regression</strong></p>



<ol class="wp-block-list"><li>Run Sin function regression experiment by code below:</li></ol>



<ol class="wp-block-list" start="2"><li>Now, you can visualize the results by the following code:</li></ol>



<p>2.1 Import the required libraries, packages, and modules, and specify the path to the data saved during meta-training. The code snippet is available here.</p>



<p>2.2 Load the data saved during the experiment.</p>



<p>2.3 Visualize the performance of the meta loss when used to optimize the meta training tasks, as a function of (outer) meta training iterations.</p>



<p>2.4 Evaluating learned meta loss networks on test tasks. Plot the performance of the final meta loss network when used to optimize the new test tasks at meta test time. Here the x-axis represents the number of gradient descent steps. The code snippet is available here.</p>



<p><strong>C. Learning with extra information at the meta-train time</strong></p>



<p>This demo shows how extra information can be added during meta-training in order to shape the loss function. For the experiment, we again take the sine-function example. The script requires two arguments: the first is train\test, and the second indicates whether to use extra information, set via True\False (with\without extra info).</p>



<ol class="wp-block-list"><li>For training, the code is given below</li><li>To test the loss with extra information run:</li></ol>



<ol class="wp-block-list" start="3"><li>For comparison purposes, we have repeated the above two steps with argument as <em>False</em>. The full code is available here.</li><li>Comparison of results via visualization.</li></ol>



<p>Similarly, the experiment for meta-learning the loss with an additional goal can be run on the mountain-car task. The code is available here.</p>



<p><strong>EndNotes</strong></p>



<p>In this write-up we have given an overview of Meta Learning via Learned Loss (ML<sup>3</sup>), a gradient-based bi-level optimization algorithm capable of learning any parametric loss function, as long as the loss is differentiable with respect to its parameters. These learned loss functions can be used to efficiently optimize models for new tasks.</p>



<p><strong>Note :</strong>&nbsp;All the figures/images except the output of the code are taken from official sources of ML<sup>3</sup>.</p>



<ul class="wp-block-list"><li><strong>Colab Notebook ML<sup>3</sup> Demo</strong></li></ul>



<p>Official Code, Documentation &amp; Tutorial are available at:</p>



<ul class="wp-block-list"><li>Github </li><li>Website </li><li>Research Paper</li></ul>



<p>The post <a href="https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/">What Is Meta-Learning via Learned Losses (with Python Code)</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-meta-learning-via-learned-losses-with-python-code/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What is Python Used For?</title>
		<link>https://www.aiuniverse.xyz/what-is-python-used-for/</link>
					<comments>https://www.aiuniverse.xyz/what-is-python-used-for/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 26 Feb 2021 11:33:18 +0000</pubDate>
				<category><![CDATA[Python]]></category>
		<category><![CDATA[Foundation]]></category>
		<category><![CDATA[JetBrains]]></category>
		<category><![CDATA[Used]]></category>
		<category><![CDATA[What]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13124</guid>

					<description><![CDATA[<p>Source &#8211; https://www.i-programmer.info/ JetBrains and the Python Software Foundation have released the results of its latest survey to reveal the current state of the language, the ecosystem <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-python-used-for/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-python-used-for/">What is Python Used For?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.i-programmer.info/</p>



<p>JetBrains and the Python Software Foundation have released the results of their latest survey, revealing the current state of the language and the ecosystem around it, including insights into who uses Python and for what purposes.</p>



<p>The Python Developers Survey,&nbsp;conducted jointly by the Python Software Foundation and JetBrains, was inaugurated in 2017, so we now have the results of its fourth iteration. The number of respondents has increased year on year, and over 28,000 Python developers and enthusiasts from almost 200 countries/regions took part in the latest edition, conducted in October 2020.</p>



<p>Back in 2017 the proportion of respondents for whom Python was the main language was 79%; by 2018 it had risen to 84% (see Survey Results From More Python Developers), where it remained in 2019 (see Python Developer Survey); in 2020 it edged up to 85%.</p>



<p>With regard to the other languages used with Python, there are few changes from last year. JavaScript, which has been in the lead in every survey, starting at 50%, is still top, but has again seen its lead shrink.</p>



<p>Obviously the percentages here far exceed one hundred &#8211; this is because respondents could choose as many options as applied. However, the proportion who said they only use Python has increased over previous surveys &#8211; 15% this year compared to 12% last year and 6% before that.</p>



<p>In response to the question about whether Python was being used for work or other reasons, there was an increase, from 21% to 26%, in the option &#8220;For personal, educational or side projects&#8221; and a reduction in the other two &#8211; down 4% for &#8220;Both work and personal&#8221; and down 2% for &#8220;For work&#8221;, which fell to less than 1 in 5. This is largely explained by the fact that the proportion of students in the survey rose from 10% to 13%, with a further 7% choosing &#8220;Working student&#8221; for employment status. The sizable proportion of students influenced the age distribution of respondents, which peaked with 40% in the 21-29 age band and a total of 50% under 30 years, and also their experience. Over a third, 34%, claimed less than 1 year of coding experience, and 68% reported five years of coding experience or less. In terms of Python experience, the most popular response was 3-5 years (28%), and 74% reported five years of Python experience or less.</p>



<p>When respondents were asked &#8220;What do you use Python for?&#8221; and allowed to nominate multiple purposes, the mean number of choices was 3.9, and Data analysis topped the chart, with 54% of respondents including this use. Web development came next at 48%, with DevOps and Machine learning tying for 3rd place at 38%.</p>



<p>However, when asked &#8220;What do you use Python for most?&#8221;, Web development came top with 25%, and Data analysis, still in second place, was well behind with a share of 17%. Machine learning was still in third place at 13%, leaving DevOps with 10% in fourth place.</p>



<p>The post <a href="https://www.aiuniverse.xyz/what-is-python-used-for/">What is Python Used For?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-python-used-for/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What is the Difference Between AI, ML, and Deep Learning?</title>
		<link>https://www.aiuniverse.xyz/what-is-the-difference-between-ai-ml-and-deep-learning/</link>
					<comments>https://www.aiuniverse.xyz/what-is-the-difference-between-ai-ml-and-deep-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 25 Feb 2021 05:32:09 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Difference]]></category>
		<category><![CDATA[ML]]></category>
		<category><![CDATA[What]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13082</guid>

					<description><![CDATA[<p>Source &#8211; https://www.iotforall.com/ Artificial Intelligence, Machine Learning, and Deep Learning are terms that often overlap with each other and are easily confused. Let’s discuss all three in <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-the-difference-between-ai-ml-and-deep-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-the-difference-between-ai-ml-and-deep-learning/">What is the Difference Between AI, ML, and Deep Learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.iotforall.com/</p>



<p>Artificial Intelligence, Machine Learning, and Deep Learning are terms that often overlap with each other and are easily confused. Let’s discuss all three in detail and go through their applications and uses.</p>



<h2 class="wp-block-heading" id="h-artificial-intelligence"><strong>Artificial Intelligence</strong></h2>



<p>Have you ever noticed how effortlessly we size up the environment around us and keep learning from past experiences? Artificial Intelligence (AI) is a method of teaching a computer to do the same thing.</p>



<p>Artificial Intelligence is used to build tools, agents, bots, and robots that can predict human behavior and act as humans do. Tesla’s self-driving cars, Amazon’s Alexa, and Siri are all examples of Artificial Intelligence.</p>



<p>AI has three different levels:</p>



<p>First, Artificial Narrow Intelligence (ANI) is the only type of AI we have successfully accomplished to date. ANI is designed to perform singular tasks and is goal-oriented. It is very capable of completing the specific tasks it is programmed to do. A few examples of ANI are voice assistants, facial recognition, and self-driving features in cars.</p>



<p>Second, Artificial General Intelligence (AGI) is the concept of a machine with general intelligence that can mimic human intelligence and behaviors, with the ability to learn from data and apply its intelligence to solve any problem. Artificial General Intelligence can think, understand, and act in a somewhat similar way to a human in any given situation.</p>



<p>Artificial Superintelligence (ASI) is the hypothetical stage at which machines become self-aware and surpass human ability and intelligence. Practically speaking, we are far away from achieving this form of AI in real life.</p>



<h2 class="wp-block-heading" id="h-machine-learning">Machine Learning</h2>



<p>While Artificial Intelligence is a concept of imitating human abilities, Machine Learning is a subset of Artificial Intelligence that teaches a machine to learn from previous outcomes.</p>



<p>Machine learning models look for patterns in the data and try to draw the conclusions you or I would, based on previous outcomes and data. And once the algorithm gets really good at drawing conclusions, it starts applying that knowledge to new data sets and keeps improving.</p>



<p>In a nutshell, Artificial Intelligence is the science of computers copying human behavior, while Machine Learning is the method behind how machines learn from data.</p>



<h2 class="wp-block-heading" id="h-types-of-machine-learning">Types of Machine Learning</h2>



<p>Supervised learning is when a large amount of labeled data is fed to the algorithm, and the variables that the algorithm needs to assess for correlations are also defined. However, supervised learning needs a vast pool of data to master a task.</p>



<p>Unsupervised learning has the algorithm look for patterns in data sets that don’t have labeled responses. You would use this technique to explore your data when you don’t yet have a specific goal. The algorithm scans the data sets and starts segregating the data into groups based on the characteristics they share.</p>



<p>The mix of supervised and unsupervised learning is called semi-supervised learning. In semi-supervised learning, mostly labeled data is fed to the algorithm, yet the model is free to explore and develop its own understanding of the data set.</p>



<p>Reinforcement learning is teaching a machine to complete a multi-step process with clearly defined rules. The algorithm makes its own decisions along the way and receives rewards or penalties for the actions it takes.</p>
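

<p>To ground the first two categories, the short scikit-learn sketch below fits a supervised classifier to labeled data and an unsupervised clustering model to the same data with the labels withheld. The data set is a stand-in chosen purely for illustration.</p>



<pre class="wp-block-code"><code># Supervised vs. unsupervised learning on the same data set.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the labels y are provided, and the model learns to predict them.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised training accuracy:", clf.score(X, y))

# Unsupervised: no labels are given; the model groups similar samples itself.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("first ten cluster assignments:", clusters[:10])</code></pre>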



<h2 class="wp-block-heading" id="h-deep-learning">Deep Learning</h2>



<p>It would not be an exaggeration to say that deep learning is a technique for implementing machine learning. Deep learning is a subset of machine learning that uses deep neural networks, imitating the network of neurons in a brain, to allow machines to make accurate decisions without human help.</p>



<p>However, deep learning is sometimes seen as an evolution of machine learning.&nbsp;The depth of a model is represented by the number of layers it has. Deep learning is the new state of the art in terms of Artificial Intelligence. In deep learning, the training is done through a neural network.</p>
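

<p>Since the depth of a model is the number of layers it has, here is a minimal PyTorch sketch of a neural network whose depth is a parameter. The layer width and the depth of six are arbitrary values chosen for illustration.</p>



<pre class="wp-block-code"><code># A deep neural network where "depth" is literally the number of layers.
import torch.nn as nn

def make_deep_net(in_dim, hidden, out_dim, depth):
    layers = [nn.Linear(in_dim, hidden), nn.ReLU()]
    for _ in range(depth - 2):  # the stacked hidden layers are the "deep" part
        layers += [nn.Linear(hidden, hidden), nn.ReLU()]
    layers.append(nn.Linear(hidden, out_dim))
    return nn.Sequential(*layers)

net = make_deep_net(in_dim=10, hidden=64, out_dim=2, depth=6)
print(net)  # six Linear layers stacked with ReLU activations
print(sum(p.numel() for p in net.parameters()), "trainable parameters")</code></pre>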



<p>Deep learning has empowered many practical applications in Artificial Intelligence. Self-driving cars, better healthcare, even better product recommendations are all here today or on the horizon.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-the-difference-between-ai-ml-and-deep-learning/">What is the Difference Between AI, ML, and Deep Learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-the-difference-between-ai-ml-and-deep-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>WHAT CAUSED THE DOWNFALL OF HADOOP IN BIG DATA DOMAIN?</title>
		<link>https://www.aiuniverse.xyz/what-caused-the-downfall-of-hadoop-in-big-data-domain/</link>
					<comments>https://www.aiuniverse.xyz/what-caused-the-downfall-of-hadoop-in-big-data-domain/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 19 Feb 2021 06:09:05 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[DOMAIN]]></category>
		<category><![CDATA[DOWNFALL]]></category>
		<category><![CDATA[Hadoop]]></category>
		<category><![CDATA[What]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12944</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ While Hadoop emerged as a favorite among big data technologies, it could not keep up with the hype! Hadoop is one of the most popular <a class="read-more-link" href="https://www.aiuniverse.xyz/what-caused-the-downfall-of-hadoop-in-big-data-domain/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-caused-the-downfall-of-hadoop-in-big-data-domain/">WHAT CAUSED THE DOWNFALL OF HADOOP IN BIG DATA DOMAIN?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">While Hadoop emerged as favorite for Big Data Technologies, it could not keep up with the hype!</h2>



<p>Hadoop is one of the most popular open-source big data frameworks from Apache, used in the big data community for data processing activities. Debuting in 2006 as Hadoop version 0.1.0, it was first developed by Doug Cutting and Mike Cafarella, two software engineers who had set out to improve web indexing in 2002. It was built upon Google’s File System paper and grew out of the Apache Nutch project. Since then, Hadoop has been used by Facebook, Yahoo, Google, Twitter, LinkedIn, and many more.</p>



<p>With the rising importance of big data in industry, many business activities revolve around data. Hadoop is great for MapReduce data analysis on huge amounts of data. Some of its specific use cases include data searching, data analysis, data reporting, large-scale indexing of files, and other big data functions. It can also store and process any file data, be it large or small, plain text files or binary files like images, and even multiple data versions across different time periods. It stores data using the Hadoop Distributed File System and processes it using the MapReduce programming model. Since it runs on cheap servers and keeps the cost of storing and processing data low, Hadoop has been a huge hit in the business sector.</p>



<p>Hadoop has three components, viz.,</p>



<p><strong>•&nbsp;</strong>Hadoop HDFS – Hadoop Distributed File System (HDFS) is the storage unit of Hadoop.</p>



<p><strong>•&nbsp;</strong>Hadoop MapReduce – Hadoop MapReduce is the processing unit of Hadoop.</p>



<p><strong>•&nbsp;</strong>Hadoop YARN&nbsp;– Hadoop YARN is a resource management unit of Hadoop.</p>



<p>Hadoop seemed highly promising a decade ago. In 2008, Cloudera became the first dedicated Hadoop company, followed by MapR in 2009 and Hortonworks in 2011.&nbsp;Hadoop was a huge hit among Fortune 500 companies fascinated by big data’s potential to generate a competitive advantage. However, as data analytics became mainstream, Hadoop faltered, as it offered very little in the way of analytic capabilities.&nbsp;Further, as businesses migrated to the cloud, they soon found alternatives to HDFS and the Hadoop processing engine.</p>



<p>Every cloud vendor offered unique big data services capable of doing things that were previously only possible on Hadoop, in a more efficient and hassle-free manner. Users were no longer bothered by the administration, security, and maintenance issues they faced with Hadoop. The security issues arise mainly because Hadoop is written in&nbsp;Java, a widely used programming language that has been heavily exploited by cybercriminals&nbsp;and has, as a result, been a bull’s eye for numerous security breaches.</p>



<p>A 2015 study from Gartner found that 54% of companies had no plans to invest in Hadoop. The study also noted that among&nbsp;those not investing, 49% were still trying to figure out how to use it for value, while 57% said that the skills gap was the major reason. The latter is another key reason behind the downfall of Hadoop. Most companies had jumped on the bandwagon because of the hype surrounding it. Some of them did not have enough data to warrant a Hadoop rollout, or started leveraging big data technologies without estimating the amount of data they would actually need to process. And while file-intensive MapReduce was a great piece of software for simple requests, it could not do much for iterative workloads. This is why it is a bad option for machine learning, too: machine learning relies on a cyclic flow of data, whereas in Hadoop data flows through a chain of stages where the output of one stage becomes the input of the next. Machine learning is therefore not practical in Hadoop unless it is tied to a third-party library.</p>



<p>It was also an inefficient solution for smaller datasets. In other words, while it is well suited to a small number of large files, an application dealing with a large number of small files makes Hadoop fail again! The flood of small files overloads the NameNode, which stores the namespace for the entire system in memory, making it difficult for Hadoop to function. It is also not suitable for non-parallel data processing.</p>
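<p>A back-of-the-envelope calculation shows why (assuming the commonly cited rule of thumb of roughly 150 bytes of NameNode heap per namespace object; the exact figure varies): the same 10 GB of data costs gigabytes of NameNode memory as tiny files, but only kilobytes as block-sized files.</p>

<pre class="wp-block-code"><code># The small-files problem in numbers. Rule of thumb (approximate): each
# file and each block costs the NameNode ~150 bytes of heap, because the
# entire namespace is held in memory.
BYTES_PER_OBJECT = 150            # rough heuristic, not an exact figure

DATA = 10 * 1024**3               # 10 GB of raw data
BLOCK = 128 * 1024**2             # default HDFS block size

def namenode_heap(num_files, blocks_per_file=1):
    # Every file contributes one file object plus its block objects.
    return num_files * (1 + blocks_per_file) * BYTES_PER_OBJECT

as_small_files = DATA // 1024     # stored as 1 KB files: ~10.5 million files
as_large_files = DATA // BLOCK    # stored as 128 MB files: just 80 files

print(f"small files: {namenode_heap(as_small_files) / 1024**3:.1f} GB of heap")
print(f"large files: {namenode_heap(as_large_files) / 1024:.1f} KB of heap")</code></pre>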



<p>At the same time, Cloudera and Hortonworks were seeing adoption shrink year after year, which led to the eventual merger of the two companies in 2019.</p>



<p>Lastly, another major reason behind the downfall of Hadoop is the fact that it is a batch processing engine. Batch processes run in the background and do not involve any interaction with the user. Engines built for batch work are not efficient at stream processing, and they cannot produce output in real time with low latency, which is a must for real-time data analysis.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-caused-the-downfall-of-hadoop-in-big-data-domain/">WHAT CAUSED THE DOWNFALL OF HADOOP IN BIG DATA DOMAIN?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-caused-the-downfall-of-hadoop-in-big-data-domain/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What is Artificial Intelligence? How Does AI Work?</title>
		<link>https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/</link>
					<comments>https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 19 Feb 2021 05:41:11 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[How]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[What]]></category>
		<category><![CDATA[work]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12931</guid>

					<description><![CDATA[<p>Source &#8211; https://www.business2community.com/ “Depending on who you ask, AI is either man’s greatest invention since the discovery of fire”, as Google’s CEO said at Google’s I/O 2017 <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/">What is Artificial Intelligence? How Does AI Work?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.business2community.com/</p>



<p>“Depending on who you ask, AI is either man’s greatest invention since the discovery of fire”, as Google’s CEO said at Google’s I/O 2017 keynote, or it is a technology that might one day make man superfluous. What’s inarguable is that major companies have embraced AI as if it were one of the most important discoveries ever made. In the US, Amazon, Apple, Microsoft, Facebook, IBM, SAS, and Adobe have all infused AI and machine learning throughout their operations, while in China the big four – Baidu, Alibaba, Tencent, Xiaomi – are coordinating with the government, each working on unique and almost siloed AI initiatives.</p>



<p>In her article Understanding Three Types of Artificial Intelligence, Anjali UJ explains, “The term AI was coined by John McCarthy, an American computer scientist in 1956.” Anjali describes the following three types of AI:</p>



<ol class="wp-block-list"><li>Narrow Artificial Intelligence: AI that has been trained for a narrow task.</li><li>Artificial General Intelligence: AI containing generalized cognitive abilities, which understand and reason the environment the way humans do.</li><li>Artificial Super Intelligence: AI that surpasses human intelligence and allows machines to mimic human thought.</li></ol>



<p>AI is not a new technology; in reality, it’s decades old. In his MIT Technology Review article Is AI Riding a One-Trick Pony?, James Somers states, “Just about every AI advance you’ve heard of depends on a breakthrough that’s three decades old.” Recent advances in chip technology, along with improvements in hardware, software, and electronics, have turned AI’s enormous potential into reality.</p>



<h2 class="wp-block-heading"><strong>Neural Nets</strong></h2>



<p>AI is founded on Artificial Neural Networks (ANN), or just “Neural Nets”, which are non-linear statistical data modeling tools used when the true nature of the relationship between input and output is unknown. In his article Machine Learning Applications for Data Center Optimization, Jim Gao describes neural nets as “a class of machine learning algorithms that mimic cognitive behavior via interactions between artificial neurons.” Neural nets search for patterns and interactions between features to automatically generate a best-fit model.</p>
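<p>A minimal sketch of the idea (illustrative only, with random weights standing in for trained ones): each layer of artificial neurons computes a weighted sum of its inputs and passes the result through a non-linear activation, and it is precisely that non-linearity that lets the network model relationships whose true form is unknown.</p>

<pre class="wp-block-code"><code>import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One layer of artificial neurons: weighted sum + non-linear activation.
    return np.tanh(x @ w + b)

x = rng.normal(size=(4, 3))                      # 4 samples, 3 input features
w1, b1 = rng.normal(size=(3, 5)), np.zeros(5)    # hidden layer: 5 neurons
w2, b2 = rng.normal(size=(5, 1)), np.zeros(1)    # output layer: 1 neuron

hidden = layer(x, w1, b1)       # interactions between features emerge here
output = layer(hidden, w2, b2)
print(output.shape)             # (4, 1): one prediction per sample

# Training (supervised, unsupervised, or reinforcement) is the process of
# adjusting w1, b1, w2, b2 so the outputs fit the data.</code></pre>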



<p>They do not require the user to predefine a model’s feature interactions. Speech recognition, image processing, chatbots, recommendation systems, and autonomous software agents are common examples of machine learning. There are three types of training in neural networks: supervised, which is the most common, as well as unsupervised training and reinforcement learning. AI can be broken down into three areas:</p>



<h2 class="wp-block-heading"><strong>Machine Learning</strong></h2>



<p>A branch of computer science, machine learning explores the composition and application of algorithms that learn from data. These algorithms build models based on inputs and use those results to predict or determine actions and results, rather than following strict instructions.</p>



<p>In supervised learning, the computer is provided with example inputs as well as the desired outputs, and the goal is to learn a general rule that maps inputs to outputs. With unsupervised learning, however, labeled data isn’t provided to the learning algorithm, and it must find the input’s structure on its own. In reinforcement learning, the computer uses trial and error to solve a problem. Like Pavlov’s dog, the computer is rewarded for the good actions it performs, and the goal of the program is to maximize reward.</p>
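<p>The first two paradigms can be sketched in a few lines with scikit-learn (a toy illustration; reinforcement learning is omitted because it additionally requires an environment that hands out rewards):</p>

<pre class="wp-block-code"><code>from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=200, centers=3, random_state=0)  # toy data

# Supervised: example inputs X *and* desired outputs y are provided;
# the model learns a general rule mapping one to the other.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels are given; the algorithm must find the
# structure of the inputs on its own.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments:", km.labels_[:10])</code></pre>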



<h2 class="wp-block-heading"><strong>Deep learning</strong></h2>



<p>A subset of machine learning, deep learning utilizes multi-layered neural nets to perform classification tasks directly from image, text, and/or sound data. In some cases, deep learning models are already exceeding human-level performance. Google Meet’s ability to transcribe a human voice during a live conference call is an example of deep learning’s impressive capabilities.</p>



<p>ML and deep learning are useful for marketing personalization, customer recommendations, spam filtering, fraud detection, network security, optical character recognition (OCR), computer vision, voice recognition, predictive asset maintenance, sentiment analysis, language translation, and online search, among other things.</p>



<h2 class="wp-block-heading"><strong>7 Patterns of AI</strong></h2>



<p>In her Forbes article The Seven Patterns of AI, Kathleen Walch lays out a theory that, regardless of the application, there are seven commonalities across all AI applications. These are “hyperpersonalization, autonomous systems, predictive analytics and decision support, conversational/human interactions, patterns and anomalies, recognition systems, and goal-driven systems.” Walch adds that, while each pattern might require its own programming and approach to pattern recognition, the patterns can be combined with one another, and each follows a fairly standard set of rules.</p>



<p>The ‘Hyperpersonalization Pattern’ can be boiled down to the slogan, ‘Treat each customer as an individual’. ‘Autonomous systems’ will reduce the need for manual labor. Predictive analytics portends “some future value for data, predicting behavior, predicting failure, assisted problem resolution, identifying and selecting best fit, identifying matches in data, optimization activities, giving advice, and intelligent navigation,” says Walch. The ‘Conversational Pattern’ includes chatbots, which allow humans to communicate with machines via voice, text, or image.</p>



<p>The ‘Patterns and Anomalies’ type uses machine learning to discern patterns in data and attempts to discover higher-order connections between data points, explains Walch. The recognition pattern helps identify and classify objects within image, video, audio, text, or other highly unstructured data, notes Walch. The ‘Goal-Driven Systems Pattern’ harnesses the power of reinforcement learning to help computers beat humans at some of the most complex games imaginable, including&nbsp;<em>Go&nbsp;</em>and&nbsp;<em>Dota 2</em>, a complicated multiplayer online battle arena video game.</p>



<h2 class="wp-block-heading"><strong><sup>Conclusion</sup></strong></h2>



<p>A few years ago, the AI hype had reached such a fever pitch that companies needed only to add ‘AI’, ‘ML’, or ‘Deep Learning’ to their pitch decks and funding flooded through the door. Businesses are still investing in AI-powered solutions like AIOps to reduce IT operations costs. But today, investors are a little wiser to the fact that not all that glitters is AI gold, and many companies who pitched themselves as AI experts really didn’t know the difference between a neural net and a&nbsp;<em>k</em>-means algorithm.</p>



<p>Jumping head-first into AI is a recipe for disaster. Only “1 in 3 AI projects are successful and it takes more than 6 months to go from concept to production, with a significant portion of them never making it to production—creating an AI dilemma for organizations,” says Databricks. Not only is AI old, it is also a difficult technology to implement. Anyone delving into AI needs a strong understanding of the technology: what it is, where it came from, and what limitations might hold it back. Although AI is an exceptional technology, the waters are deep, and it is far from the panacea that many software companies claim it to be. AI has endured not one but two AI winters. CEOs looking to make a substantial investment in AI should be well aware of the old saying that ‘a fool and his money are easily parted’; that fool could be an AI fool, too.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/">What is Artificial Intelligence? How Does AI Work?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What if artificial intelligence decided how to allocate stimulus money?</title>
		<link>https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/</link>
					<comments>https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 13 Feb 2021 06:31:01 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[allocate]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[decided]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[stimulus]]></category>
		<category><![CDATA[What]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12877</guid>

					<description><![CDATA[<p>Source &#8211; https://www.livemint.com/ New Treasury Department software points the way. But research suggests that it’s impossible to show that an artificial &#8216;superintelligence&#8217; can be contained If, like me, you’re worried about <a class="read-more-link" href="https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/">What if artificial intelligence decided how to allocate stimulus money?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.livemint.com/</p>



<p>New Treasury Department software points the way. But research suggests that it’s impossible to show that an artificial &#8216;superintelligence&#8217; can be contained</p>



<p>If, like me, you’re worried about how members of Congress are supposed to vote on a stimulus bill so lengthy and complex that nobody can possibly know all the details, fear not — the Treasury Department will soon be riding to the rescue.</p>



<p>But that scares me a little too.</p>



<p>Let me explain. For the past few months, the department’s Bureau of the Fiscal Service has been testing software designed to scan legislation and correctly allocate funds to various agencies and programs in accordance with congressional intent — a process known as issuing Treasury warrants. Right now, human beings must read each bill line by line to work out where the money goes. If the program can be made to work, the savings will be significant.</p>



<p>Alas, there’s a big challenge. Plenty of tools exist for extracting data from HTML files (and, of course, XML files), but Congress initially publishes legislation only in PDF form; XML or HTML versions often arrive only weeks later. As many a business knows, scraping data from PDFs generally requires human intervention, which opens the door to copy errors. The trouble is that PDFs have no standard data format. Even “simple&#8221; extraction methods are generally designed to work only if the data in question is already presented within the PDF in tabular form.</p>
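<p>A minimal sketch of the kind of scraping involved, using the pypdf library (one option among several; the filename is hypothetical). Note that the output is raw, position-free text: recovering tables, agencies, and dollar allocations from it is exactly the hard, error-prone part.</p>

<pre class="wp-block-code"><code># Naive PDF text extraction with pypdf (pip install pypdf).
# "stimulus_bill.pdf" is a hypothetical input file.
from pypdf import PdfReader

reader = PdfReader("stimulus_bill.pdf")
for page in reader.pages:
    text = page.extract_text() or ""     # layout and table structure are lost
    for line in text.splitlines():
        if "$" in line:                  # crude scan for dollar amounts
            print(line.strip())</code></pre>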



<p>Treasury’s ambitious hope, however, is that its software, when fully operational, will be able to scan new legislation in its natural language form, figure out where the money is supposed to go and issue the appropriate warrants far more swiftly than humans could. The faster the warrants are issued, the sooner the agency that’s supposed to receive the money can start spending.</p>



<p>Pretty cool stuff.</p>



<p>Yet this snapshot of the future inspires a wicked train of thought. Suppose that the Treasury Department software — which you are free to describe as artificial intelligence or not, depending on your taste — is later replaced by a better program, then by a better one and finally by one that can mimic the working general intelligence of the human mind.</p>



<p>What’s to stop this future AI from deciding on its own that Congress was wrong to give another billion to Agency A when, in the judgment of the program, Agency B needs it more? The program makes a tiny adjustment in a gigantic spending bill, and given that nobody’s actually read it, nobody’s the wiser.</p>



<p>Sounds improbable, right? HAL 9000 meets “Person of Interest&#8221; meets Skynet?</p>



<p>Not so fast.</p>



<p>For technophiles like me, recent achievements in AI are exciting, even breathtaking. AI is credited with reorganizing supply chains to help overcome disruptions caused by the pandemic. Deep learning systems may be able to discover coronary plaques more accurately than clinicians.</p>



<p>So why worry? After all, most of those in the field, including my professors when I studied artificial intelligence as an undergraduate, are confident that tight programming will keep even the most advanced artificial intelligence from escaping the bounds set by its creators. (Think Isaac Asimov’s Laws of Robotics.)</p>



<p>But there have long been dissenters, even among the experts. The prospect of an out-of-control AI has haunted researchers in the field for almost as long as it’s haunted science fiction writers. One thinks of Joseph Weizenbaum’s “Computer Power and Human Reason,&#8221; published back in 1976, or even Norbert Wiener’s classic “God and Golem, Inc.,&#8221; based on lectures the author delivered in 1962.</p>



<p>All of which brings us to an unnerving paper published last month by six AI researchers who argue that it is impossible to show that an artificial “superintelligence&#8221; can be contained. The authors are an international group, representing universities in Germany, Spain, and Chile, as well as the U.S. According to their analysis, no matter how tightly an AI may be programmed, if it indeed possesses generalized reasoning skills “far surpassing&#8221; those of the most gifted humans, then what they call “total containment&#8221; is impossible to establish by formal proof.</p>



<p>Using what is known as computability theory, they hypothesize a superintelligent AI that incorporates a fundamental command never to harm humans. (Asimov again.) The programming will then require a function that decides whether a particular action will harm humans or not. They proceed to show that even if it’s possible “to articulate in a precise programming language&#8221; a perfect set of “control strategies&#8221; to implement this function, there’s no way to know for sure whether the strategies will in fact constrain the AI. (The proof, although technical, is rather elegant, and fun to read.)</p>
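<p>The flavor of the argument can be conveyed in a few lines (an illustration in the spirit of the classic halting-problem diagonalization, on which the paper builds; the names below are hypothetical, not the authors’): any total, always-correct harm-deciding function could be turned against itself.</p>

<pre class="wp-block-code"><code># Illustrative sketch only. Suppose a perfect, always-terminating decider
# existed (hypothetical -- no such total, correct function can exist):
def is_harmful(program_source: str, inp: str) -> bool:
    """Hypothetically: True iff running program_source on inp harms humans."""
    raise NotImplementedError

# This adversarial program does the opposite of whatever the decider
# predicts about the program itself (cause_harm is a hypothetical act):
CONTRARIAN = """
if is_harmful(CONTRARIAN, CONTRARIAN):
    pass          # predicted harmful, so behave harmlessly
else:
    cause_harm()  # predicted harmless, so behave harmfully
"""

# Either answer is_harmful gives about CONTRARIAN is wrong. So no such
# decider exists, and "total containment" cannot be formally guaranteed.</code></pre>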



<p>Don’t get me wrong: I’m not arguing that the Treasury Department should abandon its quest for a system that extracts data from PDFs, any more than I’m suggesting that any of the countless researchers working on various aspects of AI should halt. I continue to find the prospect of true artificial intelligence as exciting as ever.</p>



<p>What concerns me, however, is the way that public critiques of AI tend to pick around the edges rather than go to the heart of the matter. We often charge nascent AI systems with enhancing bias — for example, by exacerbating rather than correcting disparities in the distribution of health care. Such issues are of undeniable public importance. But as the authors of the paper on computability remind us, you don’t have to be either a technophobe or a fan of apocalyptic steampunk sci-fi to see that the time for public conversation about the containability of AI is now, not later.</p>



<p>The post <a href="https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/">What if artificial intelligence decided how to allocate stimulus money?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
