<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>model Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/model/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/model/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Wed, 30 Jun 2021 09:45:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>When A Good Machine Learning Model Is So Bad</title>
		<link>https://www.aiuniverse.xyz/when-a-good-machine-learning-model-is-so-bad/</link>
					<comments>https://www.aiuniverse.xyz/when-a-good-machine-learning-model-is-so-bad/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 30 Jun 2021 09:45:54 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Bad]]></category>
		<category><![CDATA[Good]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[model]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14666</guid>

					<description><![CDATA[<p>Source &#8211; https://www.informationweek.com/ IT teams must work with managers who oversee data scientists, data engineers, and analysts to develop points of intervention that complement model ensemble techniques. <a class="read-more-link" href="https://www.aiuniverse.xyz/when-a-good-machine-learning-model-is-so-bad/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/when-a-good-machine-learning-model-is-so-bad/">When A Good Machine Learning Model Is So Bad</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.informationweek.com/</p>



<p><strong>IT teams must work with managers who oversee data scientists, data engineers, and analysts to develop points of intervention that complement model ensemble techniques.</strong></p>



<p>Most managers feel euphoria when implementing a technology meant to enhance the workflow of a team or an organization. But they often overlook the details that help implement the technology successfully. The same sentiment can occur for managers who oversee data scientists, data engineers, and analysts examining machine learning initiatives.</p>



<p>Every organization seems to be in love with machine learning. Because love is blind, so to speak, IT teams become the first line of defense in protecting that euphoric feeling. They can start that protection by helping managers appreciate how models fit observations from data sources. Appreciating the statistical balance in data models is essential for establishing management that minimizes errors leading to poor real-world decisions. Overfitting and underfitting are the key parts of that discussion.</p>



<p>Overfitting and underfitting describe how a model&#8217;s performance on training data compares to its performance on production data. An analyst can see good performance on the training data but then experience poor generalization on a new data sample or, even worse, in production.</p>



<p>So how does all of this work in practice? Overfitting means the model treats noise in the training data as a reliable signal, when in reality the noise distorts it. The model then makes poor predictions on any new dataset that does not contain the same noise, namely the production data. From a statistics standpoint, overfitting occurs when the model or algorithm shows low bias but high variance.</p>



<p>Underfitting introduces a different performance issue. Intuitively, underfitting means the model or algorithm does not capture the data well enough to learn the statistical relationships within it. From a statistics perspective, underfitting occurs when the model or algorithm shows low variance but high bias.</p>



<p>Both conditions degrade generalization and lead to poor decisions. Generalization is the capacity of a machine learning model to make accurate predictions on unseen data. Getting the right generalization is at the heart of establishing a good machine learning model.</p>



<p>One avenue for analysts is to examine the training data to determine whether additional observations are possible, to avoid feeding unbalanced datasets to models. I explained unbalanced datasets in a previous post.</p>



<p>But there are limits to adding observations or features. There are phenomena in which adding more data yields no further performance improvement. One example is the Hughes phenomenon: as the number of features increases, a classifier&#8217;s performance improves up to an optimal number of features, then degrades as more features are added while the training-set size stays fixed. The Hughes phenomenon should remind data professionals of the curse of dimensionality. In high-dimensional models, the number of possible unique rows grows exponentially, and variance increases with the additional features as well. The result is a model with more opportunities to overfit, making accurate generalization harder to establish and development less efficient.</p>



<p>Thus, the most likely efforts will involve finding a balance between bias and variance. Having both low bias and low variance is the desired objective but is usually impractical or impossible to achieve. Analysts should rely on cross-validation, along with ensemble techniques such as gradient boosting, to minimize the likelihood of deploying a poor model.</p>
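<p>The overfitting trade-off described above can be made concrete with a small sketch. Everything below is illustrative only: the synthetic data, the memorizing model, and the simple linear fit are invented for this example, not taken from any real project. A model that memorizes training noise scores perfectly on its training data yet poorly on new data, while a simpler model generalizes.</p>

```python
import random

# Illustrative sketch: a model that memorizes training noise (overfitting)
# versus one that captures only the underlying trend (better generalization).
random.seed(0)

def make_data(n):
    # Underlying relationship y = 2x, plus noise unique to each sample.
    return [(x, 2 * x + random.gauss(0, 1.0))
            for x in (random.random() for _ in range(n))]

train = make_data(50)
test = make_data(50)

def mse(model, data):
    # Mean squared error of a prediction function over a dataset.
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# "Overfit" model: memorizes every training point exactly, and falls back
# to the nearest training x for unseen inputs. Low bias, high variance.
def memorizer(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

# "Balanced" model: a through-origin linear fit of the training trend.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
def linear(x):
    return slope * x

print("memorizer train MSE:", mse(memorizer, train))  # ~0: noise memorized
print("memorizer test MSE: ", mse(memorizer, test))   # noise does not transfer
print("linear train MSE:   ", mse(linear, train))
print("linear test MSE:    ", mse(linear, test))
```

The gap between the memorizer&#8217;s training and test error is exactly the train-versus-production gap the article describes; cross-validation exposes it before a model reaches production.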



<p>IT teams must work with managers who oversee data scientists, data engineers, and analysts to develop points of intervention that complement model ensemble techniques. The interaction can also lead to robust management processes, such as observability for incident detection and root-cause reporting. The result is a system that minimizes operational downtime related to data issues. It also produces a process point for managing the balance of bias and variance, which protects model accuracy and yields fair outcomes.</p>



<p>Separating signal from noise does not by itself make an outcome ethical; good judgment must ensure that ethics carry through to the outcome. Such outcomes are certainly worth a euphoric feeling.</p>



<p>The post <a href="https://www.aiuniverse.xyz/when-a-good-machine-learning-model-is-so-bad/">When A Good Machine Learning Model Is So Bad</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/when-a-good-machine-learning-model-is-so-bad/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>DATA ANNOTATION: CHANGING THE TAILWIND OF ML MODEL TRAINING</title>
		<link>https://www.aiuniverse.xyz/data-annotation-changing-the-tailwind-of-ml-model-training/</link>
					<comments>https://www.aiuniverse.xyz/data-annotation-changing-the-tailwind-of-ml-model-training/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 22 Jun 2021 05:24:53 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[annotation]]></category>
		<category><![CDATA[CHANGING]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[ML]]></category>
		<category><![CDATA[model]]></category>
		<category><![CDATA[TAILWIND]]></category>
		<category><![CDATA[training]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14446</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ Data annotation is the process of labeling data to make it easy for machines to access it. Why did humans start making machines? The <a class="read-more-link" href="https://www.aiuniverse.xyz/data-annotation-changing-the-tailwind-of-ml-model-training/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/data-annotation-changing-the-tailwind-of-ml-model-training/">DATA ANNOTATION: CHANGING THE TAILWIND OF ML MODEL TRAINING</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">Data annotation is the process of labeling data to make it easy for machines to access it.</h2>



<p>Why did humans start making machines? The immediate answer would be: to make mechanical and computerised models that work like humans. Yes, humans wanted machines to imitate whatever they do. The purpose of artificial intelligence is no different. If we look at the things that artificial intelligence-powered machines are doing for us today, most of them try to minimize our work by taking over routine, time-consuming jobs. To make machine learning models capable, they must be trained on datasets. That is where data annotation makes its debut.</p>



<p>Artificial intelligence and machine learning have changed the way we live. From product recommendations and search engine results to self-driving cars and autonomous drones, everything is powered by artificial intelligence. However, none of this would be possible without data annotation. Today, we are building a future in which automation is everything. To create such automated applications and machines, models must be trained on properly prepared datasets. Because the datasets are very large and purely manual training won&#8217;t scale, artificial intelligence companies use data annotation to label content before using it for machine learning model training. By applying data annotation, machine learning models are fed well-prepared, labelled datasets. In this article, we take you through the basics of data annotation, explain its types, and list the use cases.</p>






<h4 class="wp-block-heading"><strong>What is data annotation?</strong></h4>



<p>In simple terms, data annotation is the process of labelling data to make it easy for machines to access it. Data annotation is especially important for supervised machine learning, as models rely on labelled datasets to process, understand, and learn from input patterns to arrive at desired outputs.</p>



<p>Data comes in various forms: text, images, video, documents, and so on. But such diverse types can&#8217;t be fed into a machine learning model without first being segregated and sorted. Data annotation therefore acts as an intermediary step that mitigates training issues. By using data annotation, companies can train their machine learning models with the right tools and techniques. In a machine learning pipeline, data annotation takes place before the information gets fed to the system. The process is similar to how we teach kids: to teach them about a ball, we show them either a picture or a real ball. Similarly, data annotation labels the object as &#8216;ball&#8217; in the dataset and feeds it to the machine learning model. Some of the uses of data annotation are as follows:</p>



<ul class="wp-block-list"><li>A machine learning model trained on annotated data achieves higher accuracy.</li><li>Machine learning models trained with annotated data deliver a seamless experience for end-users.</li><li>Virtual assistants and chatbots use the trained dataset to answer users&#8217; queries.</li><li>In search engine recommendations, a machine learning model trained with annotated data provides comprehensive results.</li><li>Besides helping at large scale, data annotation can support localized labelling based on geolocation, locally labelling information, images, and other content.</li></ul>
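<p>The &#8216;ball&#8217; example above can be sketched as a single annotated record. The schema below is hypothetical: the field names and the bounding-box format are invented for illustration, and real labeling tools (COCO, Pascal VOC, and others) define their own formats.</p>

```python
import json

# Hypothetical annotation record for the 'ball' example; field names and the
# [x, y, width, height] bounding box are invented for illustration.
record = {
    "image": "photos/0001.jpg",  # path to the raw, unlabeled asset
    "annotations": [
        {
            "label": "ball",              # class name a human annotator assigned
            "bbox": [34, 50, 120, 140],   # region of the image that was labeled
        }
    ],
}

# Labeling tools typically serialize records like this to disk; a training
# pipeline then reads them back and pairs each image with its labels.
serialized = json.dumps(record)
restored = json.loads(serialized)
print(restored["annotations"][0]["label"])  # the label a model trains against
```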



<h4 class="wp-block-heading"><strong>What is human-annotated data?</strong></h4>



<p>Despite the sophistication technology enjoys, it would be nothing without human help. It is no different when training a machine learning model. Humans contribute enormously to making machines learn how the world functions. Therefore, data annotation loops humans into the training process to improve performance.</p>



<p>But why is human-annotated data important in machine learning? Humans have special talents, judgement and intuition, that machines don&#8217;t possess. Recent developments in the technology industry point toward machines that can think like humans. That is where human-annotated data comes into the picture. Human-annotated data introduces subjectivity, intent, and clarification, helping machines determine whether a search result is relevant.</p>



<h4 class="wp-block-heading"><strong>Types of data annotation</strong></h4>



<p><strong>Text annotation:</strong> Today, most companies are moving to automated, especially text-based, models to power their systems. Owing to this increasing adoption, text annotation has recently become the centre of attention. Text annotation includes a wide variety of annotations such as sentiment, intent, and query.</p>



<p><strong>Video annotation:</strong> When it comes to video annotation, humans are a valuable source for training datasets. For example, companies use human input in search engine results: they collect preferences from many people and promote similar content to others.</p>



<p><strong>Image annotation:</strong> Image annotation is very important in training a dataset. Many technologies, including computer vision, robotic vision, and facial recognition, rely on image annotation to label and interpret images. To train models with image data, metadata must be assigned to the images in the form of identifiers, captions, or keywords.</p>



<p><strong>Audio annotation:</strong> Audio annotation is quite different from the other types. It goes a step deeper, transcribing and time-stamping speech data, including specific pronunciation and intonation.</p>
<p>The post <a href="https://www.aiuniverse.xyz/data-annotation-changing-the-tailwind-of-ml-model-training/">DATA ANNOTATION: CHANGING THE TAILWIND OF ML MODEL TRAINING</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/data-annotation-changing-the-tailwind-of-ml-model-training/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Infovista unveils Artificial Intelligence Model</title>
		<link>https://www.aiuniverse.xyz/infovista-unveils-artificial-intelligence-model/</link>
					<comments>https://www.aiuniverse.xyz/infovista-unveils-artificial-intelligence-model/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 19 Jun 2021 05:44:45 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Infovista]]></category>
		<category><![CDATA[model]]></category>
		<category><![CDATA[unveils]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14423</guid>

					<description><![CDATA[<p>Source &#8211; https://www.itp.net/ The AI-based propagation model enables mobile operators to drive 5G planning and delivery Infovista, the network lifecycle automation company, has announced the availability of <a class="read-more-link" href="https://www.aiuniverse.xyz/infovista-unveils-artificial-intelligence-model/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/infovista-unveils-artificial-intelligence-model/">Infovista unveils Artificial Intelligence Model</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.itp.net/</p>



<p>The AI-based propagation model enables mobile operators to drive 5G planning and delivery</p>



<p>Infovista, the network lifecycle automation company, has announced the availability of its Artificial Intelligence Model (AIM). The commercially available AI-based propagation model aims to transform the way wireless networks can be planned and optimised.</p>



<p>Régis Lerbour, VP Product &amp; R&amp;D, RAN Engineering at Infovista, said, “Operators are at different stages within the 5G rollout, but the majority are still faced with the massive task of selecting, testing and commissioning new sites.</p>



<p>“Our AI-based propagation model, successfully presented to our customers at Infovista RAN Summit, is, by design, cloud-ready and scalable to increase agility and the ability to adapt the network more dynamically, thus offering a way to automate and accelerate the planning and roll-out of 5G networks.”</p>



<p>Infovista’s AIM has been built around machine learning frameworks such as TensorFlow to focus on training and inference of deep neural networks. The project utilised over 10 million data points collected by the company during the last 15 years and spans multiple sub-6 GHz and millimetre wave bands, geographic locations, antenna heights, weather conditions, seasonal foliage variations and hundreds of additional variables – across urban, mixed and rural environments. </p>



<p>The company emphasised that the AI-model predictions have been extensively validated against real-world measurement sampling data and are proven to deliver network plans that are 25% more accurate compared to those delivered using traditional propagation models. The initial testing shows that this improved accuracy translates into up to 20% CAPEX savings when it comes to radio site investments.</p>



<p>AIM avoids labour-intensive and repetitive calibration and parameter manipulation. It fully fits with the Network Lifecycle Automation vision of Infovista that aims to expand the reach of automation beyond network and service operations, into planning, testing and deployment, and reporting and monetization.</p>



<p>AIM is embedded into Infovista’s Planet software, which also includes an integrated feed of crowdsourced subscriber-centric data available in all geographies. Combining both provides mobile operators with higher accuracy and more efficient network planning workflows.</p>



<p>“Over time the combination of AIM with crowdsourced data will mean the new platform will enable operators to fully automate network planning thus allowing them to deploy in new frequency bands faster than ever,” Lerbour added. “Automated data collection and processing contribute to significantly reducing the cost of propagation model calibration and optimizing drive testing, helping accelerate 5G deployments to new levels.”</p>



<p>The post <a href="https://www.aiuniverse.xyz/infovista-unveils-artificial-intelligence-model/">Infovista unveils Artificial Intelligence Model</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/infovista-unveils-artificial-intelligence-model/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Model behavior: Waite teaching machine learning via March Madness</title>
		<link>https://www.aiuniverse.xyz/model-behavior-waite-teaching-machine-learning-via-march-madness/</link>
					<comments>https://www.aiuniverse.xyz/model-behavior-waite-teaching-machine-learning-via-march-madness/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 03 Apr 2021 06:27:40 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Behavior]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Madness]]></category>
		<category><![CDATA[March]]></category>
		<category><![CDATA[model]]></category>
		<category><![CDATA[teaching]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13896</guid>

					<description><![CDATA[<p>Source &#8211; https://news.unl.edu/ Zig, or go Zags? Favor new blood or blue blood? Dance with Cinderella or a&#160;stepsister? Every March Madness bracket is a bet (often literally) <a class="read-more-link" href="https://www.aiuniverse.xyz/model-behavior-waite-teaching-machine-learning-via-march-madness/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/model-behavior-waite-teaching-machine-learning-via-march-madness/">Model behavior: Waite teaching machine learning via March Madness</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://news.unl.edu/</p>



<p>Zig, or go Zags? Favor new blood or blue blood? Dance with Cinderella or a&nbsp;stepsister?</p>



<p>Every March Madness bracket is a bet (often literally) on one of 9.2 quintillion possible permutations of winners and losers, front-runners and dark horses, drowsy blowouts and rousing&nbsp;upsets.</p>



<p>While out walking his dog, the University of Nebraska–Lincoln’s Matt Waite realized that the annual rite of spring and college basketball was also an ideal opportunity to apply some lessons he was teaching in Sports Media and Communication 460: Advanced Sports Data Analysis. Some of the 19 undergrads in the class might be unfamiliar with, if not openly wary of, the quantitative realm, but most <em>were</em> already planning to fill out brackets. So he decided to turn the ritual exercise into a class exercise.</p>



<p>“To tell you the truth, it wasn’t on the syllabus when I started the class,” said Waite, professor of practice of journalism and mass communications. “Between the way that the course schedule was working out, the progression that the students were making, and the timing of the tournament, it just sort of all came&nbsp;together.</p>



<p>“That’s something that I really, really try to do in my sports data classes, is make examples of the&nbsp;moment.”</p>



<p>To Waite’s mind, March Madness is especially suited to teaching the fundamentals of machine learning — in the simplest terms, feeding data into a computer algorithm for the sake of training it to predict future outcomes. Analytically inclined college basketball fans and bettors have increasingly looked to machine learning for an assist when filling out their brackets. Waite has even built his own models on the foundations of books like “Basketball on Paper” and other sacred texts of&nbsp;analytics.</p>



<p>“I wanted to have sports communicators dip their toes into the waters of machine learning and predictive analytics — where the tools of doing this have become easy enough to use, but understanding what’s going into the algorithm, and what’s coming out of it, takes some work,” he said. “But once you have some key concepts, you can communicate with it. You can tell stories with the&nbsp;output.”</p>



<p>Waite began by giving his&nbsp;SPMC&nbsp;460 students access to the box scores of every men’s college basketball game going back to the 2014-15 season. (He tried to do the same for the women’s tournament, but despite his ongoing efforts, a lack of available data made it unworkable. “There is sexism in sports data, just as there is in sports in general. Game-level statistics for women’s basketball are vastly more difficult to get your hands on than men’s,” Waite&nbsp;said.)</p>



<p>Those box scores were stuffed with the raw statistics used to calculate more advanced metrics that have historically proven predictive of successful teams: average margin of victory, points scored per possession, shooting percentages, turnovers, offensive rebounding rates, and so on. But it was up to each student to decide which statistics they would feed into an algorithm, and which of three algorithms would consume those&nbsp;stats.</p>



<p>“Machine learning is not magic, and the algorithms are doing a very specific thing: using input that you give them and coming up with answers,” Waite said. “And you, as a human being, need to be able to evaluate&nbsp;those.”</p>



<p>With those fateful decisions made, the students tested their inputs and algorithms by asking the latter to predict the winners of games that had already been played but whose outcomes were a mystery to the machine. After some fine-tuning, the students were ready to run their newly trained algorithms through the bracket-busting gauntlet of March Madness, picking all 63 games (not including the so-called First Four) ahead of&nbsp;time.</p>



<p>“My goal was to let them run wild, see where they got, and then talk about where it went wrong after it happened,” Waite&nbsp;said.</p>



<p>Or, in the case of a few students, where it’s gone especially&nbsp;right.</p>



<p>“I’ve got a handful of folks who are just absolute basketball maniacs and were skeptical that some computer was going to tell them better than they knew,” Waite said. “I have a handful who have absolutely no interest in basketball whatsoever. I had to literally explain the rules of basketball to them, and what these statistics are, for them to even be able to function with this. And the irony is (that) two of those folks are in the top five of the&nbsp;class.”</p>



<p>Thomas Baker, a junior who leads the pack with a bracket in the 99th percentile of those submitted to&nbsp;ESPN.com, is an “absolute hoop-head” who can “rattle off names and their season narratives” at the drop of a basketball, Waite said. Baker put that Bilas-esque knowledge to use by occasionally disregarding an algorithm-based prediction. But he also chose a relatively sophisticated algorithm: a so-called random forest that, true to its name, consists of many decision-tree analyses that proceed in a random fashion to limit the possibility of statistical&nbsp;bias.</p>



<p>“The decision tree learns where to make splits based on the amount of similarity in data,” Waite said. “So you might take all of the teams that shoot better than 40% from the 3-point line and put them over in this group. The teams that shoot worse than that, we’re going to put them over in that group. Then those groups get split by something. And then those (subsequent) groups get split by something (else). So on and so forth, until you get to the end, where if you have a team that matches all of these particular parameters, the model says there’s a 58% chance that they’re going to win the&nbsp;game.”</p>
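<p>The splitting logic Waite describes can be sketched in a few lines. Everything below is hypothetical: the shooting percentages, the win model, and the 0.40 threshold are made up for illustration, not taken from the class. It shows one decision split, then a tiny bootstrap-averaged &#8220;forest&#8221; of such splits.</p>

```python
import random

# Illustrative sketch (made-up numbers): one decision-tree split on
# three-point percentage, then a tiny bootstrap-averaged "random forest".
random.seed(1)

# (three_point_pct, won) for hypothetical past games: better shooting
# makes a win more likely, with some noise.
games = [(pct, pct + random.gauss(0, 0.05) > 0.37)
         for pct in (random.uniform(0.25, 0.50) for _ in range(200))]

def tree_predict(sample, threshold, pct):
    """One split: the win rate among sampled teams that land on the same
    side of the threshold as the team we are predicting for."""
    side = [won for p, won in sample if (p > threshold) == (pct > threshold)]
    return sum(side) / len(side) if side else 0.5

def forest_predict(data, pct, n_trees=25):
    """Tiny 'random forest': each tree sees a bootstrap resample of the games
    and a slightly jittered threshold; the forest averages the trees' answers,
    which limits the bias any single split would introduce."""
    estimates = []
    for _ in range(n_trees):
        sample = [random.choice(data) for _ in data]
        threshold = 0.40 + random.gauss(0, 0.02)
        estimates.append(tree_predict(sample, threshold, pct))
    return sum(estimates) / len(estimates)

# A team shooting 45% from three lands in the strong-shooting group, so the
# forest's estimated win chance comes out well above 50%.
print(forest_predict(games, 0.45))
```

A real random forest splits on many statistics in sequence rather than one threshold, but the averaging-over-randomized-trees idea is the same.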



<p>Kaitlynn Johnson, a senior in fourth place and the 96th percentile, could hardly be more different — a total college basketball novice who built “maybe the most simplistic model,” input only some basic shooting stats, and dutifully followed every prediction. Still, Waite said, anyone who’s spent as much time as he has with brackets might have predicted the seemingly unpredictable success of a rookie&nbsp;predictor.</p>



<p>“Before this even got going, I honestly predicted that somebody like that was going to be near the top,” Waite said. “Because it happens in every bracket pool. If you’ve ever filled out a bracket in an office, you know there’s somebody in there who’s like, ‘I don’t know anything about basketball, but those uniforms are cool! Let’s pick those.’ Or, ‘I like Wildcats more than Blue Devils, so I’ll take them.’ And they always seem to do really well. So I saw her coming a mile&nbsp;away.”</p>



<p>As for Waite himself? He’s just glad to no longer be bringing up the rear, where he spent about half of the tournament. Riding a hot streak that began in the Sweet 16, he’s ascended to a respectable 11th place and breached the 60th percentile on&nbsp;ESPN.com. If nothing else, he said, his marginal March should at least help him illustrate an important point to the class: that while the machine needs a properly educated ghost to guide it, that education goes only so far — and even the best-informed ghosts can be&nbsp;busted.</p>



<p>“There is a certain amount of humility and, I would even say, naivete that needs to go into this, where there is such a thing as the curse of knowledge,” Waite said. “I read the canonical basketball analysis book and tried, as close as I could, to implement the analysis steps into a model. I spent hours and hours on mine, used the fanciest algorithms that I could — and immediately just got my head kicked in. Meanwhile, somebody who didn’t know what a field goal was three weeks ago came up with a very simple and, truthfully, elegant model, and is crushing&nbsp;it.”</p>



<p>And if, in the process of tracking their brackets and retracing their missteps and claiming bragging rights for the rest of the semester, the future media professionals forget or even begin losing some of their lingering aversion to numbers? So much the better, Waite&nbsp;said.</p>



<p>“The students I’ve got are not computer scientists; they’re not statistics majors,” he said. “They’ve (often) avoided math as much as possible. So, for me, the trick is trying to make this as relevant as possible, and draw them in that way. You know, it’s sort of the spoonful of&nbsp;sugar.</p>



<p>“We’re using the tournament to introduce some pretty complex topics in an environment that is easy to understand, in a way that’s accessible, using something that they’re doing anyway. If you can bring those things together, I think you’re in good&nbsp;territory.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/model-behavior-waite-teaching-machine-learning-via-march-madness/">Model behavior: Waite teaching machine learning via March Madness</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/model-behavior-waite-teaching-machine-learning-via-march-madness/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning Model Detects Electrolyte Imbalance via ECG</title>
		<link>https://www.aiuniverse.xyz/deep-learning-model-detects-electrolyte-imbalance-via-ecg/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-model-detects-electrolyte-imbalance-via-ecg/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Mar 2021 06:35:49 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Detects]]></category>
		<category><![CDATA[ECG]]></category>
		<category><![CDATA[Electrolyte]]></category>
		<category><![CDATA[Imbalance]]></category>
		<category><![CDATA[model]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13603</guid>

					<description><![CDATA[<p>Source &#8211; Researchers may have developed a deep learning model that is effective at detecting electrolyte imbalance via electrocardiography (ECG). “The detection and monitoring of electrolyte imbalance <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-model-detects-electrolyte-imbalance-via-ecg/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-model-detects-electrolyte-imbalance-via-ecg/">Deep Learning Model Detects Electrolyte Imbalance via ECG</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; </p>



<p>Researchers may have developed a deep learning model that is effective at detecting electrolyte imbalance via electrocardiography (ECG).</p>



<p>“The detection and monitoring of electrolyte imbalance is essential for appropriate management of many metabolic diseases; however, there is no tool that detects such imbalances reliably and noninvasively,” the team wrote in their abstract.</p>



<p>Aiming to develop a deep learning model using ECG, the researchers conducted a retrospective cohort study across two hospitals. The patient sample included 92,140 patients who underwent a laboratory electrolyte exam and an ECG within 30 minutes of each other. The deep learning model was created using 83,449 ECGs from more than 48,000 of the patients (the internal validation cohort consisted of 12,091 ECGs from 12,091 patients), and the team conducted an external validation with the ECGs of more than 31,000 patients from another hospital. The researchers then evaluated the area under the receiver operating characteristic curve (AUC) of their deep learning model with the use of 12-lead ECG for detecting electrolyte imbalances.</p>



<p>According to the analysis results, the AUC for hyperkalemia was 0.945, and for hypokalemia it was 0.866. For hypernatremia, it was 0.944, and for hyponatremia, 0.885. For hypercalcemia, the AUC was 0.905, and for hypocalcemia, it was 0.901. During the external validation, the AUCs for hyperkalemia, hypokalemia, hypernatremia, hyponatremia, hypercalcemia, and hypocalcemia were 0.873, 0.857, 0.839, 0.856, 0.831, and 0.813, respectively. The authors also reported that the model helped visualize the ECG regions important for detecting electrolyte imbalances.</p>
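<p>For reference, the AUC values above can be read as the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal sketch of that computation, using hypothetical labels and scores rather than the study&#8217;s data:</p>

```python
# Illustrative sketch (not the study's code): rank-based AUC, computed as
# the fraction of positive/negative pairs in which the positive case gets
# the higher model score (ties count as half).

def auc(y_true, y_score):
    """Pairwise comparison of positive vs. negative scores."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labels (1 = electrolyte imbalance present) and model scores.
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.3, 0.8, 0.9, 0.2, 0.7, 0.4, 0.6]
print(auc(y_true, y_score))  # 1.0: every positive outscores every negative
```

<p>A value of 0.5 corresponds to random guessing, while 1.0 means the scores separate the two classes perfectly.</p>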



<p>“To the best of our knowledge, this study is the first to develop an artificial intelligence algorithm for detecting electrolyte imbalance and to show the interpretable patterns of decision making using artificial intelligence in the biosignal domain,” the authors wrote.</p>



<p>Some of the study limitations included the use of 4 common electrolytes (to the exclusion of others), the use of retrospective data, the limited number of centers, a lack of adjustment for certain comorbidities, and the limited combinations of ECG leads.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-model-detects-electrolyte-imbalance-via-ecg/">Deep Learning Model Detects Electrolyte Imbalance via ECG</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-model-detects-electrolyte-imbalance-via-ecg/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>MODEL SEARCH: A PLATFORM FOR FINDING OPTIMAL ML MODELS</title>
		<link>https://www.aiuniverse.xyz/model-search-a-platform-for-finding-optimal-ml-models/</link>
					<comments>https://www.aiuniverse.xyz/model-search-a-platform-for-finding-optimal-ml-models/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 15 Mar 2021 06:25:04 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[finding]]></category>
		<category><![CDATA[ML]]></category>
		<category><![CDATA[model]]></category>
		<category><![CDATA[Models]]></category>
		<category><![CDATA[Optimal]]></category>
		<category><![CDATA[platform]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13481</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ As known to many, Google has recently released Model Search which is an open-source platform. This caters to developing efficient and best machine learning <a class="read-more-link" href="https://www.aiuniverse.xyz/model-search-a-platform-for-finding-optimal-ml-models/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/model-search-a-platform-for-finding-optimal-ml-models/">MODEL SEARCH: A PLATFORM FOR FINDING OPTIMAL ML MODELS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<p>As many will know, Google has recently released Model Search, an open-source platform for automatically developing efficient, high-quality machine learning models. Rather than focusing on a particular domain, Model Search is domain agnostic and highly flexible: it can find an architecture that fits a given dataset and problem, while minimizing the time, effort and resources that go into coding.</p>



<p>Model Search is built on TensorFlow and is flexible enough to run either on a single machine or in a distributed setting. It is equipped with multiple trainers, a search algorithm, a transfer learning algorithm, and a database for storing the evaluated models.</p>



<h3 class="wp-block-heading"><strong>The Architecture of Model Search</strong></h3>



<p>The architecture of Model Search is based on four foundational components:</p>



<ol class="wp-block-list"><li><strong>Model Trainers:</strong> These components train and evaluate the various models asynchronously.</li><li><strong>Search Algorithms:</strong> The search algorithm selects the best trained architectures. The user can also add “mutations” to them, which are sent to the trainers for further evaluation.</li><li><strong>Transfer Learning Algorithm:</strong> Model Search reuses knowledge across different experiments in two ways: knowledge distillation, which improves accuracy by adding a loss term that matches the predictions of the high-performing models, and weight sharing, which bootstraps some of the network’s parameters from previously trained candidates.</li><li><strong>Model Database:</strong> This is where the results of the experiments are persisted so they can be reused across search cycles.</li></ol>
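<p>The knowledge-distillation idea in the list above can be illustrated with a toy loss function (the numbers and function names below are illustrative assumptions, not Model Search internals):</p>

```python
# Hedged sketch of knowledge distillation: the candidate model's loss gains
# a term that penalizes divergence from a high-performing "teacher" model's
# predictions, blended with the ordinary ground-truth error.

def distillation_loss(y_true, student_pred, teacher_pred, alpha=0.5):
    """Blend ground-truth error with a term matching the teacher's outputs."""
    n = len(y_true)
    hard = sum((y - p) ** 2 for y, p in zip(y_true, student_pred)) / n
    soft = sum((t - p) ** 2 for t, p in zip(teacher_pred, student_pred)) / n
    return (1 - alpha) * hard + alpha * soft

# Toy predictions: the soft term pulls the student toward the teacher.
y_true = [1.0, 0.0, 1.0]
teacher = [0.9, 0.2, 0.8]
student = [0.6, 0.4, 0.5]
print(round(distillation_loss(y_true, student, teacher), 3))
```

<p>With <code>alpha=0</code> the loss reduces to the ordinary training loss; larger values weight agreement with the teacher more heavily.</p>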



<h3 class="wp-block-heading"><strong>What Makes Model Search So Unique?</strong></h3>



<ul class="wp-block-list"><li>The aspect that draws the most attention to Model Search is its ability to run training and evaluation experiments for AI models adaptively and asynchronously, which lets the trainers share the knowledge gained from their experiments. At the beginning of every cycle, the search algorithm reviews all the completed trials and then decides what to try next, using beam search. It then invokes a mutation and assigns the resulting model back to a trainer.</li><li>When users run Model Search, they can compare the many models found during the search. The platform also allows users to create their own search space, so they can customize the architectural elements in their models.</li><li>The researchers have also claimed that Model Search improves upon production models with&nbsp;minimal iterations.</li></ul>
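<p>The cycle described in the first point can be sketched schematically as follows (all names and the scoring function are illustrative assumptions, not the actual Model Search API):</p>

```python
# Schematic sketch of an asynchronous-style search cycle: review completed
# trials, keep the best candidates via beam search, mutate them, and hand
# the mutants back to trainers for evaluation.
import random

def evaluate(arch):
    # Stand-in "trainer": score a candidate architecture (toy objective).
    return -abs(sum(arch) - 10)

def mutate(arch):
    # Apply a small random "mutation" to one architectural element.
    child = list(arch)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

def search(beam_width=3, cycles=20):
    # Seed the search with random candidate architectures.
    candidates = [[random.randint(1, 5) for _ in range(4)] for _ in range(beam_width)]
    trials = [(a, evaluate(a)) for a in candidates]
    for _ in range(cycles):
        # Beam search over completed trials: keep the best candidates...
        beam = sorted(trials, key=lambda t: t[1], reverse=True)[:beam_width]
        # ...then mutate each survivor and send it back to a "trainer".
        trials = beam + [(c, evaluate(c)) for c in (mutate(a) for a, _ in beam)]
    return max(trials, key=lambda t: t[1])

best_arch, best_score = search()
```

<p>In the real platform the trainers run asynchronously and the database persists every trial; this single-threaded loop only shows the select-mutate-retrain shape of each cycle.</p>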



<p>All in all, the Model Search code aims to give researchers a&nbsp;flexible,&nbsp;domain-agnostic&nbsp;framework for developing efficient, high-quality machine learning models. The framework can build models with state-of-the-art performance, and it can handle well-known problems when provided with a search space composed of standard building blocks.</p>
<p>The post <a href="https://www.aiuniverse.xyz/model-search-a-platform-for-finding-optimal-ml-models/">MODEL SEARCH: A PLATFORM FOR FINDING OPTIMAL ML MODELS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/model-search-a-platform-for-finding-optimal-ml-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning: Accelerating your model deployment</title>
		<link>https://www.aiuniverse.xyz/machine-learning-accelerating-your-model-deployment/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-accelerating-your-model-deployment/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 13 Feb 2021 06:26:42 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Accelerating]]></category>
		<category><![CDATA[Business]]></category>
		<category><![CDATA[deployment]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[model]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12874</guid>

					<description><![CDATA[<p>Source &#8211; https://www.marketscreener.com/ Machine learning: Accelerating your model deployment Business models rely on data to drive decisions and make projections for future growth and performance. Traditionally, business <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-accelerating-your-model-deployment/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-accelerating-your-model-deployment/">Machine learning: Accelerating your model deployment</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.marketscreener.com/</p>



<p><strong>Machine learning: Accelerating your model deployment</strong></p>



<p>Business models rely on data to drive decisions and make projections for future growth and performance. Traditionally, business analytics has been reactive &#8211; guiding decisions in response to past performance. But today&#8217;s leading companies are turning to machine learning (ML) and AI to harness their data for predictive analytics. This shift, however, comes with significant challenges.</p>



<p>According to IDC, almost 30% of AI and ML initiatives fail. The primary culprits are poor-quality data, lack of experience and the difficulty of operationalization. ML initiatives also demand substantial maintenance time: because data quality degrades over time, models must be repeatedly retrained with fresh data throughout the development cycle.</p>



<p>Let&#8217;s explore the challenges presented when developing ML models and how the Rackspace Technology Model Factory Framework simplifies and accelerates the process &#8211; so you can overcome these challenges.</p>

<h3 class="wp-block-heading"><strong>Machine learning challenges</strong></h3>



<p>Among the most difficult aspects of machine learning is the process of operationalizing developed ML models that accurately and rapidly generate insights to serve your business needs. You&#8217;ve probably experienced some of the most prominent hurdles, such as:</p>



<ul class="wp-block-list"><li>Inefficient coordination in <strong>lifecycle management</strong> between operations teams and ML engineers. According to Gartner, 60% of models don&#8217;t make it to production due to this disconnect.</li><li>A high degree of <strong>model sprawl</strong>, a complex situation where multiple models run simultaneously across different environments, with different datasets and hyperparameters. Keeping track of all these models and their associated artifacts can be challenging.</li><li>Models may be developed quickly, but deployment can often take months &#8211; limiting <strong>time to value</strong>. Organizations lack defined frameworks for data preparation, model training, deployment and monitoring, along with strong governance and security controls.</li><li>The <strong>DevOps model</strong> for application development doesn&#8217;t work for ML models. Its standardized linear approach breaks down because models must be retrained across their lifecycle with fresh datasets, as data ages and becomes less usable.</li></ul>



<p>The ML model lifecycle is fairly complex, starting with data ingestion, transformation and validation so that the data fits the needs of the initiative. A model is then developed and validated, followed by training. Depending on the length of development time, you may need to repeatedly perform training as a model moves across development, testing and deployment environments. After training, the model is set into production, where it begins serving business objectives. Through this stage, the model&#8217;s performance is logged and monitored to ensure suitability.</p>

<h3 class="wp-block-heading"><strong>Rapidly Build Models with Amazon SageMaker</strong></h3>



<p>Among the available tools to help you accelerate this process is Amazon SageMaker. This ML platform from Amazon Web Services (AWS) offers a comprehensive set of capabilities for rapidly developing, training and running your ML models in the cloud or at the edge. The Amazon SageMaker stack comes packaged with models for <strong>AI services</strong> such as computer vision, speech and recommendation engine capabilities, as well as models for <strong>ML services</strong> that help you deploy deep learning capabilities. It also supports leading ML frameworks, interfaces and infrastructure options.</p>



<p>But employing the right toolsets is only half the story. Significant improvements in ML model deployment can only be achieved when you also consider improving the efficiency of lifecycle management across the teams that work on them. Different teams across organizations prefer different sets of tooling and frameworks, which can introduce lag through a model lifecycle. An open and modular solution &#8211; agnostic of the platform, tooling or ML framework &#8211; allows for easy tailoring and integration into proven AWS solutions. A solution such as this will allow your teams to use the tools they are comfortable with.</p>



<p>That&#8217;s where the&nbsp;<strong>Rackspace Technology Model Factory Framework</strong>&nbsp;comes in, by providing a CI/CD pipeline for your models that makes them easier to deploy and track.</p>



<p>Let&#8217;s take a closer look at exactly how it improves efficiency and speed across model development, deployment, monitoring and governance, to accelerate getting ML models into production.</p>

<h3 class="wp-block-heading"><strong>End-to-end ML blueprint</strong></h3>



<p>When in development, ML models flow from data science teams to operational teams. As previously noted, preferential variances across these teams can introduce a large amount of lag in the absence of standardization.</p>



<p>The Rackspace Technology Model Factory Framework provides a model lifecycle management solution in the form of a modular architectural pattern, built using open source tools that are platform, tooling and framework agnostic. It is designed to improve the collaboration between your data scientists and operations teams so they can rapidly develop models, automate packaging and deploy to multiple environments.</p>



<p>The framework allows integration with AWS services and industry-standard automation tools such as Jenkins, Airflow and Kubeflow. It supports a variety of frameworks such as TensorFlow, scikit-learn, Spark ML, spaCy, and PyTorch, and it can be deployed into different hosting platforms such as Kubernetes or Amazon SageMaker.</p>

<h3 class="wp-block-heading"><strong>Benefits of the Rackspace Technology Model Factory Framework</strong></h3>



<p>The Rackspace Technology Model Factory Framework affords large gains in efficiency, cutting the ML lifecycle from an average of 15 or more steps to as few as five. Employing a single source of truth for management, it also automates the handoff process across teams and simplifies maintenance and troubleshooting.</p>



<p>From the perspective of data scientists, the Model Factory Framework makes their code standardized and reproducible across environments, and it enables experiment and training tracking. It can also deliver up to 60% compute cost savings through scripted access to spot instance training. For operations teams, the framework offers built-in tools for diagnostics, performance monitoring and model drift mitigation. It also offers a model registry to track model versions over time. Overall, this helps your organization improve its model deployment time and reduce effort, accelerating time to business insights and ROI.</p>

<h3 class="wp-block-heading"><strong>Solution overview &#8211; from development and deployment to monitoring and governance</strong></h3>



<p>The Model Factory Framework employs a curated set of notebook templates and proprietary domain-specific languages, simplifying onboarding, reproducing work across environments, tracking experiments, tuning hyperparameters, and consistently packaging models and code regardless of domain.</p>



<p>Once packaged, the framework can execute the end-to-end pipeline which will run the pre-processing, feature engineering and training jobs, log generated metrics and artifacts, and deploy the model across multiple environments.</p>



<ul class="wp-block-list"><li><strong>Development:</strong>&nbsp;The Model Factory Framework supports multiple avenues of development. Users can develop locally, integrate with a notebooks server through their Integrated Development Environments (IDEs), or use SageMaker Notebooks. They can even utilize automated environment deployment using AWS tooling such as AWS CodeStar.</li><li><strong>Deployment:</strong>&nbsp;Multiple platform backends are supported for the same model code: models can be deployed to Amazon SageMaker, Amazon EMR, Amazon ECS and Amazon EKS. Revision histories are tracked, including artifacts and notebooks, with real-time, batch and streaming inference pipelines.</li><li><strong>Monitoring:</strong>&nbsp;Model requests and responses are monitored for detailed analysis, enabling teams to address model and data drift.</li><li><strong>Governance:</strong>&nbsp;Data and model artifacts are clearly separated, and access can be controlled using AWS IAM and bucket policies covering model feature stores, models and associated pipeline artifacts. The framework also supports role-based access control through Amazon Cognito, traceability with Data Version Control, and auditing and accounting through extensive tagging.</li></ul>



<p>Using a combination of proven accelerators, AWS native tools and the Model Factory Framework, companies can experience significant acceleration in model development automation, reducing lag and effort and experiencing improvements in time to insights and ROI.</p>



<p>If your organization is interested in utilizing the Model Factory Framework to simplify and accelerate your ML use cases, visit our AI and ML pages for further info, including customer stories, details of supported platforms and other helpful resources.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-accelerating-your-model-deployment/">Machine learning: Accelerating your model deployment</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-accelerating-your-model-deployment/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>New Survey Finds Model-Driven Culture Is Critical for Data Science Success</title>
		<link>https://www.aiuniverse.xyz/new-survey-finds-model-driven-culture-is-critical-for-data-science-success/</link>
					<comments>https://www.aiuniverse.xyz/new-survey-finds-model-driven-culture-is-critical-for-data-science-success/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 11 Feb 2021 08:33:10 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Critical]]></category>
		<category><![CDATA[Culture]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Driven]]></category>
		<category><![CDATA[model]]></category>
		<category><![CDATA[Survey]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12840</guid>

					<description><![CDATA[<p>Source &#8211; https://aithority.com/ While companies continue to realize the importance of data science and its ability to positively impact revenue, scaling it across an organization continues to be a <a class="read-more-link" href="https://www.aiuniverse.xyz/new-survey-finds-model-driven-culture-is-critical-for-data-science-success/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/new-survey-finds-model-driven-culture-is-critical-for-data-science-success/">New Survey Finds Model-Driven Culture Is Critical for Data Science Success</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://aithority.com/</p>



<p>While companies continue to realize the importance of data science and its ability to positively impact revenue, scaling it across an organization continues to be a challenge. A new survey released today reveals a leading factor in success: creating a positive, model-driven business culture among employees. This insight is one of the findings from a survey of data and analytics professionals sponsored by Domino Data Lab, provider of the leading open enterprise data science management platform trusted by over 20% of the Fortune 100.</p>



<p>Conducted by DataIQ, the leading membership-based forum for connecting, educating and supporting the data and analytics community, the survey curated a research panel of influential data and analytics professionals across a wide range of industry sectors and company sizes in the UK. Seniority ranged from senior managers and heads of department to global directors and chief officers.</p>



<p>The survey found that one in four businesses&nbsp;expect data science to impact topline revenue by more than 11 percent. However, the survey indicates a challenge with company culture, suggesting that a positive, model-driven culture is difficult to build and still needs to be developed: 39 percent want a clearer definition of needs from stakeholders, 38 percent recognize the need to train business users in data science concepts, and 32 percent identify the need for a more positive relationship with stakeholders.</p>



<p>“Many companies begin their data science journey by hiring a few data scientists, but overlook the importance of building a model-driven culture that aligns with business users and their needs,” said Nick Elprin, CEO of Domino Data Lab. “This survey highlights the impact that the lack of positive culture can have on identifying proper use cases, setting appropriate expectations, and ultimately delivering a measurable impact to the business. Understanding these challenges is important for companies at all stages of maturity so they can course correct and successfully scale data science operations across their organizations.”</p>



<p>Additionally, 40 percent of respondents indicate that weak understanding or support for data science in the business is one of their biggest challenges. One out of three organizations (34%) indicate that conflict between data science and IT is one of their biggest challenges. Even companies that describe themselves at the “advanced” and “reaching maturity” levels in terms of their adoption of data science and analytics are not free of culture conflict: about half of both groups (52 percent and 50 percent, respectively) indicate that conflict between data science and IT is their biggest challenge.</p>



<p>Some other findings from the survey include:</p>



<ul class="wp-block-list"><li><strong>More than half of all organizations (57 percent)</strong>&nbsp;expect a revenue uplift of under five percent, showing that the failure to embrace data science contributes to low expectations.</li><li><strong>One out of five businesses (21 percent)</strong>&nbsp;are gaining a major competitive advantage through the use of data and analytics tools across their enterprise.</li><li><strong>Sixty-seven percent&nbsp;</strong>have grouped their data scientists together as a central function or department (e.g., a Center of Excellence), rather than federating them across the business.</li><li><strong>One out of three organizations (32 percent)&nbsp;</strong>need months to get models into production. This latency must be addressed, because market conditions can change quickly and models trained using outdated data will make suboptimal recommendations.</li><li><strong>One in 10 organizations (10 percent)&nbsp;</strong>have adopted a superior automated form of model monitoring that provides proactive alerts when models are starting to decay. Data scientists can then address potential model issues before they impact business results.</li></ul>



<p>“For data science to deliver real value to the organization, a positive culture needs to be created in which business stakeholders and data science practitioners have a close bond and common goals,” said David Reed, Knowledge and Strategy Director at DataIQ. “As the survey results show, that’s easier said than done. Four in ten organizations identify a weak understanding or support for data science by the business as their biggest challenge, which creates a vicious circle that leads to one in eight failing to create compelling use cases.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/new-survey-finds-model-driven-culture-is-critical-for-data-science-success/">New Survey Finds Model-Driven Culture Is Critical for Data Science Success</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/new-survey-finds-model-driven-culture-is-critical-for-data-science-success/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine Learning Model Shows Higher COVID-19 Cases Than Reported</title>
		<link>https://www.aiuniverse.xyz/machine-learning-model-shows-higher-covid-19-cases-than-reported/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-model-shows-higher-covid-19-cases-than-reported/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 10 Feb 2021 05:56:07 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Cases]]></category>
		<category><![CDATA[COVID-19]]></category>
		<category><![CDATA[Higher]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[model]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12791</guid>

					<description><![CDATA[<p>Source &#8211; https://healthitanalytics.com/ A machine learning model estimated that the number of US COVID-19 cases is nearly three times greater than reported. Since the pandemic began, experts <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-model-shows-higher-covid-19-cases-than-reported/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-model-shows-higher-covid-19-cases-than-reported/">Machine Learning Model Shows Higher COVID-19 Cases Than Reported</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://healthitanalytics.com/</p>



<p>A machine learning model estimated that the number of US COVID-19 cases is nearly three times greater than reported.</p>



<p>Since the pandemic began, experts have looked to daily counts of laboratory-confirmed COVID-19 cases and deaths in an effort to contain the virus. Now, a machine learning algorithm has revealed that these numbers may be higher than reported.</p>



<p>In a study published in <em>PLOS ONE</em>, researchers estimate that the number of COVID-19 cases in the US since the pandemic started is nearly three times that of confirmed cases. The machine learning algorithm provides daily updated estimates of total infections to date, as well as how many people are currently infected across the US and in 50 countries hardest hit by the pandemic.</p>



<p>According to the model, as of February 4, 2021, more than 71 million people in the US had contracted COVID-19. This is significantly greater than the 26.7 million confirmed cases publicly reported.</p>



<p>Of those 71 million Americans estimated to have had COVID-19, seven million had current infections and were potentially contagious on February 4, the algorithm showed.</p>



<p>The study is based on calculations completed in September. At that time, the number of actual cumulative cases in 25 of the 50 hardest-hit countries was five to 20 times greater than the confirmed case numbers then suggested.</p>



<p>The current information available from the online algorithm shows that the estimates are now closer to the reported numbers, but still considerably higher. On February 4, Brazil had more than 36 million cumulative cases as estimated by the algorithm &#8211; almost four times the 9.4 million confirmed cases reported.</p>



<p>France had 14 million versus the 3.2 million reported, while the UK had almost 25 million instead of about four million. The machine learning algorithm also showed that Mexico had nearly 15 times its reported number of cases, at 27.6 million cases instead of 1.9 million confirmed cases.</p>



<p>“The estimates of actual infections reveal for the first time the true severity of COVID-19 across the US and in countries worldwide,” said Jungsik Noh, PhD, a UT Southwestern assistant professor in the Lyda Hill Department of Informatics and first author of the study.</p>



<p>To run its daily updates, the model uses COVID-19 death data from Johns Hopkins University and The COVID Tracking Project, a volunteer organization that aims to help track COVID-19.</p>



<p>The algorithm uses the number of reported deaths, which is thought to be more accurate than the number of lab-confirmed cases, as the basis of its calculations. The model then assumes an infection fatality rate of 0.66 percent, based on an earlier study of the pandemic in China, and considers factors like the average number of days from the onset of symptoms to death or recovery.</p>
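<p>A simplified sketch of that back-calculation, using the 0.66 percent infection fatality rate cited in the article; the onset-to-death delay and the daily death counts below are illustrative assumptions, not the study&#8217;s actual pipeline or data:</p>

```python
# Simplified back-calculation: deaths reported on day t imply roughly
# deaths / IFR infections that began around day t - delay.

IFR = 0.0066          # infection fatality rate assumed in the article
ONSET_TO_DEATH = 18   # illustrative delay in days (assumption)

def estimate_infections(daily_deaths, ifr=IFR, delay=ONSET_TO_DEATH):
    """Map each reporting day's deaths back to an inferred infection count."""
    return {day - delay: deaths / ifr for day, deaths in daily_deaths.items()}

# Hypothetical reported deaths, keyed by day index.
deaths = {30: 33, 31: 66}
est = estimate_infections(deaths)
print(round(est[12]), round(est[13]))  # 5000 10000
```

<p>The study&#8217;s model layers further adjustments on top of this idea (recovery times, currently-infectious windows, and comparison against confirmed counts), but the deaths-divided-by-fatality-rate step is the core of the estimate.</p>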



<p>The algorithm also compares its estimate with the number of confirmed cases to calculate a ratio of confirmed-to-estimated infections.</p>



<p>Experts are still uncertain about the death rate of COVID-19, so the algorithm’s estimates are rough. However, researchers believe that the model’s estimates are more accurate and leave out fewer cases than the confirmed ones currently used to guide public health policies. The team noted that it’s critical to have a more comprehensive estimate of the prevalence of the disease.</p>



<p>“These are critical statistics about the severity of COVID-19 in each region. Knowing the true severity in different regions will help us effectively fight against the virus spreading,” said Noh.</p>



<p>“The currently infected population is the cause of future infections and deaths. Its actual size in a region is a crucial variable required when determining the severity of COVID-19 and building strategies against regional outbreaks.”</p>



<p>The study showed that in the US, infections vary significantly by state. The algorithm’s projections for February 4 revealed that California has had almost seven million infections since the pandemic started, compared with 5.7 million in New York. Additionally, the model estimated that California had 1.3 million active cases on that date, impacting 3.4 percent of the state’s population.</p>



<p>Researchers checked their findings by comparing results with existing prevalence rates found in several studies that used blood tests to check for antibodies to the virus causing COVID-19. For most of the areas tested, the algorithm’s estimates of infections closely corresponded to the percentage of people who had tested positive for the antibodies.</p>



<p>The team expects that their machine learning model can help inform public health policies during the pandemic.</p>



<p>“Our framework estimates the actual fraction of currently infected people in each region. To our knowledge this is the first model to provide this prediction. The estimated number of current infections can serve as an initial target in planning effective contact tracing,” researchers concluded.</p>



<p>“Since the developed pipeline requires simple input, it is widely applicable to more granular analyses of specific regions or communities, for which the number of confirmed cases and deaths are being tracked.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-model-shows-higher-covid-19-cases-than-reported/">Machine Learning Model Shows Higher COVID-19 Cases Than Reported</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-model-shows-higher-covid-19-cases-than-reported/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Promising Technologies Predicted by Deep Learning-based Model</title>
		<link>https://www.aiuniverse.xyz/promising-technologies-predicted-by-deep-learning-based-model/</link>
					<comments>https://www.aiuniverse.xyz/promising-technologies-predicted-by-deep-learning-based-model/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 06 Feb 2020 06:20:35 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[model]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6585</guid>

					<description><![CDATA[<p>Source: businesskorea.co.kr The Korea Institute of Science and Technology Information (KISTI) and the Data Science Lab of Myongji University have selected the 10 most promising technological fields <a class="read-more-link" href="https://www.aiuniverse.xyz/promising-technologies-predicted-by-deep-learning-based-model/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/promising-technologies-predicted-by-deep-learning-based-model/">Promising Technologies Predicted by Deep Learning-based Model</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: businesskorea.co.kr</p>



<p>The Korea Institute of Science and Technology Information (KISTI) and the Data Science Lab of Myongji University have selected the 10 most promising technological fields using big data and artificial intelligence. The fields expected to show very rapid growth through the mid-2020s include autonomous driving, energy, machine vision, biotechnology, and robotics. The prediction is based on their deep learning-based future prediction model, which has an accuracy of over 86 percent.</p>



<p>They used 16 million pieces of data published worldwide over the past 12 years to develop the prediction model. The data was classified into 4,500 subject categories, and AI and deep learning techniques were employed to quantify each category&#8217;s network structure, research content, and research fields.</p>

<p>The 10 fields include renewable energy storage and conversion for hydrogen energy utilization. This technique, which produces hydrogen from water electrolyzed by renewable energy for use in fuel cells, is expected to contribute to renewable energy storage and the reduction of greenhouse gas emissions.</p>

<p>The other fields include the development of advanced and eco-friendly air conditioning and heating system materials. Examples expected to contribute to reducing greenhouse gas emissions include nano-adsorbents for use in adsorption air conditioners and heaters, which are predicted to replace electric air conditioners and heaters.</p>

<p>Carbon dioxide capture and utilization, meanwhile, captures carbon dioxide and turns it into resources for use in biofuels, chemical products, construction materials, and so on. It can create added value in various forms as well as reduce carbon emissions.</p>
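<p>To illustrate the classification step in spirit, here is a toy stand-in: a keyword-overlap classifier that assigns a document to one of a few subject categories. The category names and keywords are hypothetical, and the real pipeline used deep learning over 4,500 categories rather than keyword matching.</p>

```python
# Toy illustration only: a keyword-based classifier standing in for the
# KISTI/Myongji deep learning pipeline. Categories and keywords are
# hypothetical examples, not the study's actual taxonomy.

CATEGORY_KEYWORDS = {
    "autonomous driving": {"vehicle", "driving", "traffic", "sensor"},
    "machine vision": {"image", "vision", "camera", "classification"},
    "energy": {"hydrogen", "battery", "solar", "grid"},
}

def classify(text):
    """Assign a document to the category whose keyword set overlaps
    most with the document's tokens; returns None if nothing matches."""
    tokens = set(text.lower().split())
    best, best_score = None, 0
    for category, keywords in CATEGORY_KEYWORDS.items():
        score = len(tokens & keywords)
        if score > best_score:
            best, best_score = category, score
    return best
```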



<p>Vehicle control technology for improved autonomous driving aims to better control vehicle behavior and ensure safety by recognizing fast-changing traffic situations with greater accuracy and precision. Enhanced data processing performance and intellectualization are key to this development.</p>



<p>AI-based machine vision can be defined as automated decision making based on image acquisition and processing. These days, the scope of application of this technology is expanding very rapidly with the development of deep learning-based image processing and classification techniques and Industry 4.0 technologies such as smart factory operation.</p>



<p>Ultra-high-performance concrete development is to improve the salt resistance and durability of concrete and better prevent its carbonation so that buildings and structures can be used for extended periods.</p>



<p>Biodiversity research is a field comprehensively covering species exploration, research on interactions between organisms in the same habitats, research on genetic variations related to genes and individual organisms, etc.</p>



<p>High-voltage direct current transmission converts produced AC power into DC power, transmits it at a high voltage, and then supplies electric power after reconversion into AC power. It is an advanced power transmission technique that ensures stability and reduces power loss, and demand for it is soaring in connection with cross-border power grid construction, renewable energy system linkage, and more.</p>



<p>Humanoid robot development works on controllable humanoids, including two-legged robots, so that they can do various jobs in place of humans. In this field, intellectualization is progressing rapidly in areas such as incident recognition, decision-making and prediction, and hazard avoidance.</p>



<p>Lastly, hyperspectral imaging allows an object or a substance to be distinguished or detected more easily by acquiring spectrum data on a fragmented band per image pixel. It is developing at a rapid pace in combination with ultraspectral imaging, machine learning-based big data analysis, micro image sensors, and the like.</p>
<p>The post <a href="https://www.aiuniverse.xyz/promising-technologies-predicted-by-deep-learning-based-model/">Promising Technologies Predicted by Deep Learning-based Model</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/promising-technologies-predicted-by-deep-learning-based-model/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
