<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>needs Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/needs/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/needs/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 28 Jun 2021 09:00:43 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>The Data Paradox: Artificial Intelligence Needs Data; Data Needs AI</title>
		<link>https://www.aiuniverse.xyz/the-data-paradox-artificial-intelligence-needs-data-data-needs-ai/</link>
					<comments>https://www.aiuniverse.xyz/the-data-paradox-artificial-intelligence-needs-data-data-needs-ai/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 28 Jun 2021 09:00:41 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[needs]]></category>
		<category><![CDATA[Paradox]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14608</guid>

					<description><![CDATA[<p>Source &#8211; https://www.forbes.com/ Artificial intelligence is a data hog; effectively building and deploying AI and machine learning systems require large data sets. “The development of a machine <a class="read-more-link" href="https://www.aiuniverse.xyz/the-data-paradox-artificial-intelligence-needs-data-data-needs-ai/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-data-paradox-artificial-intelligence-needs-data-data-needs-ai/">The Data Paradox: Artificial Intelligence Needs Data; Data Needs AI</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.forbes.com/</p>



<p>Artificial intelligence is a data hog; effectively building and deploying AI and machine learning systems require large data sets. “The development of a machine learning algorithm depends on large volumes of data, from which the learning process draws many entities, relationships, and clusters,” says Philip Russom of TDWI. “To broaden and enrich the correlations made by the algorithm, machine learning needs data from diverse sources, in diverse formats, about diverse business processes.”</p>



<p>At the same time, AI itself can be instrumental in identifying and preparing the data needed to increase the value of AI-driven or analytics-driven systems. Companies have needed cadres of data scientists or high-level analysts to put AI and machine learning algorithms in place; AI itself may ultimately help automate such roles to a large degree.</p>



<p>“A new generation of enterprise analytics is emerging, and it incorporates some degree of both automation and contextual information,” according to Tom Davenport and Joey Fitts, writing in Harvard Business Review. AI-enhanced analytics systems “can prepare insights and recommendations that can be delivered directly to decision makers without requiring an analyst to prepare them in advance.”</p>



<p>Business intelligence analysts and quantitative professionals “will still have important tasks to perform, but many will no longer have to provide support and training to amateur data users,” according to Davenport and Fitts. “Small to mid-size businesses that haven’t been able to afford data scientists will be able to analyze their own data with higher precision and clearer insight. All that will matter to organizations’ analytical prowess will be a cultural appetite for data, a set of transactional systems that generate data to be analyzed, and a willingness to invest in and deploy these new technologies.”</p>



<p>Of course, the ability to effectively automate data science tasks depends on industry and circumstances. As Matt Przybyla, a senior data scientist who writes for Towards Data Science, points out, AI and machine learning initiatives often still need trained human guidance, especially when the output is critical to the tasks at hand. “Sure, use an automated data science platform if you already have a data analyst on your team. Or, use the automated solution for predictions that are not harmful if incorrect. Categorizing clothes incorrectly is not the worst thing that can happen, but when you are in the health or finance industry and you classify a disease or large sums of money incorrectly, the harm is undeniable.”</p>



<p>While automated AI data science tools or platforms may be easy and powerful, they also may leave businesses with unanswered questions. “Imagine you are not a data scientist and have not had an academic background in the various types of machine learning algorithms,” Przybyla continues. “You will have to explain these platform model results and implement the suggestions or predictions with regards to your company’s integrations, which could prove to be time-consuming and difficult.”</p>



<p>The post <a href="https://www.aiuniverse.xyz/the-data-paradox-artificial-intelligence-needs-data-data-needs-ai/">The Data Paradox: Artificial Intelligence Needs Data; Data Needs AI</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-data-paradox-artificial-intelligence-needs-data-data-needs-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>US Needs to Defend Its Artificial Intelligence Better, Says Pentagon No. 2</title>
		<link>https://www.aiuniverse.xyz/us-needs-to-defend-its-artificial-intelligence-better-says-pentagon-no-2/</link>
					<comments>https://www.aiuniverse.xyz/us-needs-to-defend-its-artificial-intelligence-better-says-pentagon-no-2/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 23 Jun 2021 11:05:34 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Defend]]></category>
		<category><![CDATA[needs]]></category>
		<category><![CDATA[Pentagon]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14486</guid>

					<description><![CDATA[<p>Source &#8211; https://www.defenseone.com/ AI safety is often overlooked in the private sector, but Deputy Secretary Kathleen Hicks wants the Defense Department to lead a cultural change. As <a class="read-more-link" href="https://www.aiuniverse.xyz/us-needs-to-defend-its-artificial-intelligence-better-says-pentagon-no-2/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/us-needs-to-defend-its-artificial-intelligence-better-says-pentagon-no-2/">US Needs to Defend Its Artificial Intelligence Better, Says Pentagon No. 2</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.defenseone.com/</p>



<p>AI safety is often overlooked in the private sector, but Deputy Secretary Kathleen Hicks wants the Defense Department to lead a cultural change.</p>



<p>As the Pentagon rapidly builds and adopts artificial intelligence tools, Deputy Defense Secretary Kathleen Hicks said military leaders increasingly are worried about a second-order problem: AI safety.</p>



<p>AI safety broadly refers to making sure that artificial intelligence programs don’t wind up causing problems, no matter whether&nbsp;they were based on corrupted or incomplete data, were poorly designed, or were hacked by attackers.&nbsp;</p>



<p>AI safety is often seen as an afterthought as companies rush to build, sell, and adopt machine learning tools. But the Department of Defense is obligated to put a little more attention into the issue, Hicks said Monday at the Defense One Tech Summit. </p>



<p>“As you look at testing evaluation and validation and verification approaches, these are areas where we know—whether you&#8217;re in the commercial sector, the government sector, and certainly if you look abroad, there is not a lot happening in terms of safety,” she said. “Here I think the department can be a leader. We&#8217;ve been a leader on the [adoption of AI ethical] principles, and I think we can continue to lead on AI by demonstrating that we have an approach that&#8217;s worked for us.”</p>



<p>While multiple private companies have adopted AI ethics principles, the principles adopted by the Defense Department in 2020 were considerably more strict and detailed. </p>



<p>While AI safety has yet to cause big headlines, the wide implementation of new machine learning programs and processes presents a rich attack surface for adversaries, according to Neil Serebryany, founder &amp; CEO of AI safety company CalypsoAI. His company scans academic research papers, the dark web, and other sources to find threats to deployed AI programs. Its clients include the Air Force and the Department of Homeland Security.</p>



<p>“Over the last five years, we’ve seen a more-than-5,000-percent rise in the number of new attacks discovered and new ways to break systems,” said Serebryany. Many of those attacks focus on the big data sources that feed AI algorithms. It’s “very hard for a data practitioner to know if they have been breached or have not been breached.”</p>



<p>A report out this month from Georgetown&#8217;s Center for Security and Emerging Technology notes, &#8220;Right now, it is hard to verify that the well of machine learning is free from malicious interference. In fact, there are good reasons to be worried. Attackers can poison the well’s three main resources—machine learning tools, pretrained machine learning models, and datasets for training—in ways that are extremely difficult to detect.&#8221;</p>



<p>The Defense Department is grappling with AI safety as it rushes to adopt tools in new ways. Within the next three months, the military will dispatch several teams across its combatant commands to determine how to integrate their data with the rest of the department, speed up AI deployment, and examine “how to bring AI and data to the tactical edge,” for U.S. troops, said Hicks.&nbsp;</p>



<p>“I think we have to have a cultural change where we&#8217;re thinking about safety across all of our components. We&#8217;re putting in place [verification and validation and testing and experimentation] approaches that can really ensure that we&#8217;re getting the safest capabilities forward,” she said.&nbsp;</p>



<p>The Defense Department, she said, would look beyond just educating the technical workforce on safety issues and would also reach out to “everyone throughout the department.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/us-needs-to-defend-its-artificial-intelligence-better-says-pentagon-no-2/">US Needs to Defend Its Artificial Intelligence Better, Says Pentagon No. 2</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/us-needs-to-defend-its-artificial-intelligence-better-says-pentagon-no-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ARTIFICIAL INTELLIGENCE NEEDS A HUMANITARIAN OUTLOOK. HERE’S WHY!</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-needs-a-humanitarian-outlook-heres-why/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-needs-a-humanitarian-outlook-heres-why/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 15 Mar 2021 06:38:09 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[HERE’S]]></category>
		<category><![CDATA[Humanitarian]]></category>
		<category><![CDATA[needs]]></category>
		<category><![CDATA[OUTLOOK]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13487</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ When it comes to artificial intelligence, it’s time to change the way you think. Artificial Intelligence and robotics are transforming every industry and since it’s <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-needs-a-humanitarian-outlook-heres-why/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-needs-a-humanitarian-outlook-heres-why/">ARTIFICIAL INTELLIGENCE NEEDS A HUMANITARIAN OUTLOOK. HERE’S WHY!</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">When it comes to artificial intelligence, it’s time to change the way you think.</h2>



<p>Artificial Intelligence and robotics are transforming every industry, and since the technology is still in an early phase of development, its applications are nearly endless. It is estimated that AI will contribute $150 trillion to the global economy in the coming years. At this rate, almost all humanitarian efforts around the world will come to depend on advanced artificial intelligence and robotics technology.</p>



<p>But before we assess the potential of humanitarian artificial intelligence and robotics, we have to answer the questions people most commonly pose about the future of human and machine coexistence. After all, that future is not far away. People are already asking whether robots will take over all the jobs, or whether artificial intelligence will create an unjust society.</p>



<p>In 1962, Doug Engelbart, the inventor of the computer mouse, proposed that the future of humanity should depend not on robots replacing humans but on machines augmenting them. His work on creating the first mouse was challenging for its time, but after Apple and Microsoft commercialized the device, it changed modern life by transforming human productivity. We still use that technology to do our work today.</p>



<p>Just like the era of the mouse, artificial intelligence and robotics represent a potentially big step forward in productivity; they will certainly enable us to do more. As with any technology, some disruption is bound to happen: some jobs may be lost, and the gap between rich and poor may widen. But in the long run, humanity will be much better off with AI and robotics, just as one can no longer imagine life before the internet.</p>



<p>Let’s take the example of those jobs that require people working in hazardous conditions like poor lighting, toxic chemicals, and heavy lifting. In such situations, we can remove humans from the risky environment and employ robots. By using robots to perform in such dangerous conditions, humans can put their focus and capabilities elsewhere.</p>



<p>Another example to consider is robot-enabled surgery. Surgical robots are advanced enough to facilitate remote operations and to overcome the limitations of minimally invasive surgery. Development in the field of intelligence amplification is also under way, which will open new opportunities for people with physical impairments.</p>



<p>Advancements like these in artificial intelligence and robotics are not meant to harm anyone. In fact, they will result in the creation of more jobs, and human capabilities and potential will grow as we are freed to focus on better and more important things.</p>



<p>For artificial intelligence to grow out of its initial stages and for robotics to flourish, we need to encourage ambitious pursuit of this field of technology. At the same time, it is imperative to shape these emerging technologies to be socially responsible and beneficial. Artificial intelligence is omnipresent: it has given birth to branches like machine learning, deep learning, NLP, expert systems, and fuzzy logic, which numerous companies use to make the world a better place. Think of a future where machines do not replace us but assist us, freeing our time for better pursuits.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-needs-a-humanitarian-outlook-heres-why/">ARTIFICIAL INTELLIGENCE NEEDS A HUMANITARIAN OUTLOOK. HERE’S WHY!</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-needs-a-humanitarian-outlook-heres-why/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>To Predict Mortality After MI, Machine Learning Needs Better Intel</title>
		<link>https://www.aiuniverse.xyz/to-predict-mortality-after-mi-machine-learning-needs-better-intel/</link>
					<comments>https://www.aiuniverse.xyz/to-predict-mortality-after-mi-machine-learning-needs-better-intel/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 13 Mar 2021 06:40:08 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[better]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[MI]]></category>
		<category><![CDATA[Mortality]]></category>
		<category><![CDATA[needs]]></category>
		<category><![CDATA[predict]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13445</guid>

					<description><![CDATA[<p>Source &#8211; https://www.tctmd.com/ In order for AI-based algorithms to perform better, data sets need to become less crude, study author says. Squelching some of the mounting excitement <a class="read-more-link" href="https://www.aiuniverse.xyz/to-predict-mortality-after-mi-machine-learning-needs-better-intel/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/to-predict-mortality-after-mi-machine-learning-needs-better-intel/">To Predict Mortality After MI, Machine Learning Needs Better Intel</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.tctmd.com/</p>



<p>In order for AI-based algorithms to perform better, data sets need to become less crude, study author says.</p>



<p>Squelching some of the mounting excitement over artificial intelligence, a new study shows no improvement in predicting in-hospital mortality after acute MI with machine learning over standard logistic regression models.</p>



<p>“Existing models were not perfect, and our thought was using advanced models we could derive additional insights from these presumably rich data sets,” lead author Rohan Khera, MBBS (Yale School of Medicine, New Haven, CT), told TCTMD. “But we were unable to discern any additional information, suggesting that our current way of abstracting data into fixed fields, like we do in registries, does not capture the entirety of the patient phenotype. And patients still have a lot of features that we probably capture in our day-to-day clinical care that are not put into these structured fields in a registry.”</p>



<p>It’s not that the data show a problem with machine learning, echoed Ann Marie Navar, MD, PhD (UT Southwestern Medical Center, Dallas, TX), who co-authored an editorial accompanying the study. “It&#8217;s as much a reflection that our current statistical tools for more traditional risk prediction are actually pretty good,” she told TCTMD. “So it&#8217;s kind of hard to build a better mouse trap there.”</p>



<p>For the study, published online this week in&nbsp;<em>JAMA Cardiology</em>, Khera and colleagues compared the predictive values of several machine-learning-based models with logistic regression for in-hospital death among 755,402 patients who were hospitalized for acute MI between 2011 and 2016 and enrolled in the American College of Cardiology Chest Pain &#8211; MI Registry. Overall in-hospital mortality was 4.4%.</p>



<p>Model performance, including area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity, was similar for logistic regression and all machine learning-based algorithms.</p>
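<p>As a concrete illustration of the AUROC metric the study compared models on (a toy sketch with made-up labels and risk scores, not the registry models themselves), the area under the ROC curve can be computed as the probability that a randomly chosen positive case outranks a randomly chosen negative one:</p>

```python
def auroc(y_true, y_score):
    """Area under the ROC curve via its rank interpretation: the
    probability that a randomly chosen positive scores higher than a
    randomly chosen negative (ties count half)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy in-hospital mortality labels and predicted risks (illustrative only).
labels = [0, 0, 1, 0, 1, 1]
scores = [0.10, 0.30, 0.35, 0.40, 0.80, 0.90]
print(auroc(labels, scores))  # 0.888... (8 of 9 positive-negative pairs ranked correctly)
```

<p>A value of 0.5 is chance-level discrimination and 1.0 is perfect ranking, which is why similar AUROCs across models signal similar discriminative ability.</p>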



<p>Notably, the XGBoost and meta-classifier models showed near-perfect calibration in independent validation, reclassifying 27% and 25%, respectively, of patients deemed low risk by logistic regression as moderate-to-high risk, a result more consistent with observed events.</p>



<p>“The general conclusion that we draw is that our data streams have to become better for us to be able to leverage them completely for all clinical applications,” Khera said. “Our current data are very crude—they&#8217;re manually abstracted into a fixed number of data fields—and our assumption that a model that does a little better at detecting relationships in these few variables will do better is probably not the case.”</p>



<p>If currently available models work, “why would you replace it with something else that has more computational power but requires more coding skill and everything involved?” Khera asked. “If both the skill set and the computational power needed to develop such models are higher, it only makes sense to develop them if your application markedly improves the predictions, the quality of understanding, or the discovery of new signatures of patients.”</p>



<p>This means that healthcare systems have work to do, he continued. “Hospitals and healthcare systems should band together to participate in rich data-sharing platforms that can allow us to aggregate this rich information from individual hospitals into a common consortium,” Khera said, noting that current electronic health record (EHR) research is often single institution based. “What registries offer at the other end of the spectrum is you could have a thousand hospitals contributing their data.”</p>



<p>Similarly, he called for national cardiovascular societies “to now go to the next level by incorporating these rich signals from the EHR directly into a higher dimensional registry rather than these manually extracted registries.”</p>



<p><strong>In the ‘Gray Area’</strong></p>



<p>In their editorial, Navar, along with Matthew M. Engelhard, MD, PhD, and Michael J. Pencina, PhD (both Duke University School of Medicine, Durham, NC), write that “when working with images, text, or time series, machine learning is almost sure to add value, whereas when working with a few weakly correlated clinical variables, logistic regression is likely to do just as well. In the substantial gray area between these extremes, judgment and experimentation are required.”</p>



<p>This study falls in this category while also hinting at the potential benefits of machine learning. “When correctly applied, it might lead to more meaningful gains in calibration than discrimination,” they say. “This is an important finding, because the role of calibration is increasingly recognized as key for unbiased clinical decision-making, especially when threshold-based classification rules are used. The correctly applied caveat is also important; unfortunately, many developers of machine learning models treat calibration as an afterthought.”</p>



<p>Navar explained that the importance of calibration is dependent on how the model is being used. For example, if it is being deployed to find the patients within the top 10% highest risk in order to best dole out a targeted intervention, discrimination is more important, she said. “But if you have a model to tell somebody that their chance of a heart attack in the next few years is 20% or 10% or 15% and you&#8217;re giving that actual number to a patient, you kind of want to make sure that number is as close to right as possible.” Calibration is also vital for cost-effective models, Navar added.</p>
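<p>Navar’s point about calibration can be sketched in a few lines (a toy expected-calibration-error computation on invented numbers, not the method used in the study): bin patients by predicted risk and compare each bin’s mean predicted probability with its observed event rate.</p>

```python
def expected_calibration_error(y_true, y_prob, n_bins=5):
    """Bin predictions by probability, then return the count-weighted
    average gap between each bin's mean predicted risk and its
    observed event rate."""
    bins = [[] for _ in range(n_bins)]
    for y, p in zip(y_true, y_prob):
        bins[min(int(p * n_bins), n_bins - 1)].append((y, p))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        observed = sum(y for y, _ in b) / len(b)   # event rate in bin
        predicted = sum(p for _, p in b) / len(b)  # mean predicted risk
        ece += abs(observed - predicted) * len(b) / len(y_true)
    return ece

# A model that says "90% risk" for patients who never have events is
# badly calibrated, even if its ranking of patients is fine.
print(expected_calibration_error([0, 0, 0, 0], [0.9, 0.9, 0.9, 0.9]))  # ≈ 0.9
```

<p>A well-calibrated model scores near zero: when it tells a patient “20% risk,” roughly 20% of such patients actually have the event, which is exactly the property Navar says matters when the number itself is handed to a patient.</p>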



<p>In this case, for risk prediction, “a traditional modeling approach is really nice because you can see what is going on with all the different variables, you can cross that to what you know about the biology and the epidemiology of whatever it is that you&#8217;re looking at, and then providers can see it,” she said. “We can see how, if we&#8217;re using a model, blood pressure goes up, risk goes up; someone&#8217;s a smoker, risk goes up; and that&#8217;s not always so obvious if you just package up a machine-learning model and just deploy it to a physician without them being able to see what&#8217;s going on underneath the hood.”</p>



<p>For now, this advantage gives traditional models the “upper hand,” Navar said. “But that doesn&#8217;t mean that the insights from those machine learning models are wrong. It just means that the other models are a little bit easier to use.”</p>



<p>“Recent feats of machine learning in clinical medicine have seized our collective attention, and more are sure to follow,” the editorial concludes. “As medical professionals, we should continue building familiarity with these technologies and embrace them when benefits are likely to outweigh the costs, including when working with complex data. However, we must also recognize that for many clinical prediction tasks, the simpler approach—the generalized linear model—may be all that we need.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/to-predict-mortality-after-mi-machine-learning-needs-better-intel/">To Predict Mortality After MI, Machine Learning Needs Better Intel</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/to-predict-mortality-after-mi-machine-learning-needs-better-intel/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why microservices needs AIOps</title>
		<link>https://www.aiuniverse.xyz/why-microservices-needs-aiops/</link>
					<comments>https://www.aiuniverse.xyz/why-microservices-needs-aiops/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 11 Jun 2019 11:02:57 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[AIOps]]></category>
		<category><![CDATA[needs]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3729</guid>

					<description><![CDATA[<p>Source:- itproportal.com Microservices – a legitimate trend or just hype? Undoubtedly microservices are part of a legitimate trend, they are a specific instance of a more general trend <a class="read-more-link" href="https://www.aiuniverse.xyz/why-microservices-needs-aiops/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-microservices-needs-aiops/">Why microservices needs AIOps</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- itproportal.com</p>
<h2 id="microservices-x2013-a-legitimate-trend-or-just-hype">Microservices – a legitimate trend or just hype?</h2>
<p>Undoubtedly microservices are part of a legitimate trend, they are a specific instance of a more general trend towards making IT systems more modular and independent. As a consequence of modularisation, there is also a desire within IT to miniaturise components – in essence, building applications with less code and less functionality. Historically, system components would be around for years. However, due to the nature of microservices they now only exist for a very short amount of time – many of them might only have a life span of microseconds or minutes. So when we look at the bigger picture, we can say that the deployment of microservices is very much in line with a wider IT trend.</p>
<h2 id="doesn-x2019-t-microservices-lead-to-greater-complexity">Doesn’t microservices lead to greater complexity?</h2>
<p>In short, yes, but let’s first understand why there is a drive towards modularity and microservices in the first place. The drive exists because it was believed that the more modular the system, the more easily DevOps teams could adapt it to evolving business requirements. Ultimately, a team only needs to make local changes, without having to worry too much about what’s occurring in the rest of the system. Even from a performance and execution perspective, the fact that you’ve got so many different pieces makes it easier to fit them into all kinds of architectures.</p>
<p>If your business decides to move a lot of infrastructure to the cloud, you have a lot more freedom when deciding which components remain in-house and which are taken to the cloud, or how you want to distribute those components over various cloud architectures. As opposed to managing a relatively monolithic system, where you will have far more constraints. There are a number of contributing factors as to why enterprises have moved to a microservices environment, but, in many respects, it comes down to gaining more agility and flexibility in regard to development and infrastructure.</p>
<p>However, there is no free lunch. The more agility you build into the system, and the easier you make it to develop, change, or architecturally distribute the system, the more complex it becomes. And when I talk about complexity, I mean something quite precise: you effectively increase the entropy of the system design in a very real sense. When you have a system built out of a few monolithic parts, it’s possible to infer the state of the system as a whole from a few vantage points. But if you’ve got lots of independent parts that are working in sync, but loosely coupled together, it becomes harder to predict the state of the system from a few snapshots – you have to look at almost all of the components to see what’s happening end-to-end and acquire an accurate picture.</p>
<h2 id="the-consequences-of-an-entropic-system">The consequences of an entropic system</h2>
<p>The move to modularisation increases the entropy of the system. When you have a high-entropy system, every data point contains a lot of information, whereas in a low-entropy system many of the data points give you very little information. So we have bought greater agility at the cost of a more difficult systems-management task. The other side effect is that many traditional monitoring tools presupposed a low-entropy world, so they are not equipped to deal with high-entropy systems.</p>
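<p>The entropy argument above can be made concrete with Shannon entropy (an illustrative sketch; the state distributions below are invented, not measurements of any real system):</p>

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: the average information per observation."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A monolith that sits in one state almost all the time is low-entropy:
# each monitoring data point tells you little you did not already know.
low = shannon_entropy([0.97, 0.01, 0.01, 0.01])    # ~0.24 bits per observation

# Many loosely coupled services spread evenly across states are
# high-entropy: every data point carries far more information.
high = shannon_entropy([0.25, 0.25, 0.25, 0.25])   # 2.0 bits per observation
```

<p>This is why snapshot-style monitoring that worked for low-entropy monoliths breaks down here: recovering the end-to-end state of a high-entropy system requires observing far more of its components.</p>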
<p>Without doubt there is a lot of complexity within a microservices architecture, nobody claims that microservices makes things simpler for development and operations teams. However, the complexity is manageable.</p>
<h2 id="so-how-does-aiops-fit-within-microservices">So how does AIOps fit within microservices?</h2>
<p>Firstly, enterprises use big data platforms to gather data points, which is fine up to a point: you absolutely need somewhere to put all of this information. Unfortunately, many enterprises believe that’s the end of the story – we’ve got all the data and can access it, job done. Of course, all you’ve really done is assemble the data into a big haystack; now you need to start looking for the needle. And this is where AIOps comes into play.</p>
<p>The basic premise here is that there are patterns and events which disrupt the normal end-to-end behaviour of the system, and we still need to figure out what is causing the disruptions in order to fix whatever is ailing the system. Because of the complexity and high entropy of the system, seeing and analysing those patterns simply exceeds the capabilities of human operators. Yes, there may be a mathematical curve that describes what’s going on under the hood, but it is so complex that human beings cannot come up with the equation to make sense of it, and hence it is very difficult for them to figure out how to deal with it.</p>
<p>AIOps enables enterprises to work with the data that is being collected in large databases and see that a curve exists, and then come up with the equation that describes the curve. AIOps processes data and then has the capacity to see patterns and provide an analytical solution that human operators can use to solve problems.</p>
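<p>The "find the curve, then flag the disruption" idea can be sketched in miniature. The example below is a hypothetical illustration, not a vendor's algorithm: it fits a least-squares baseline to a latency series and flags samples that deviate far from the fitted curve.</p>

```python
import statistics

def fit_baseline(values):
    """Least-squares linear fit y = a + b*t over equally spaced samples."""
    n = len(values)
    t_mean = (n - 1) / 2
    y_mean = statistics.fmean(values)
    b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values)) / \
        sum((t - t_mean) ** 2 for t in range(n))
    return y_mean - b * t_mean, b

def anomalies(values, threshold=2.0):
    """Indices whose residual against the fitted curve exceeds
    threshold standard deviations."""
    a, b = fit_baseline(values)
    residuals = [y - (a + b * t) for t, y in enumerate(values)]
    sigma = statistics.stdev(residuals)
    return [t for t, r in enumerate(residuals) if abs(r) > threshold * sigma]

# Hypothetical latency series with one disruption at index 7
latency_ms = [100, 102, 101, 103, 104, 102, 105, 190, 104, 106]
print(anomalies(latency_ms))  # index 7 is flagged
```

<p>A production AIOps platform would of course use far richer models than a straight line, but the division of labour is the same: the machine derives the baseline, and the operator is handed the deviation.</p>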
<h2 id="what-about-the-cloud-and-microservices">What about the cloud and microservices?</h2>
<p>For microservices in general, things work a lot better if you have automated orchestration. The configuration of any orchestration engine should be in response to a specific business or technical requirement, and in this role the AIOps technology essentially informs that business situation. Ideally, AIOps allows the enterprise to rapidly see the pattern, do the analysis and come up with the solution, which is then fed to the orchestration engine.</p>
<p>In an IT environment where applications are running and orchestration engines are manipulating the stack while applications are executing, we have a lot of activity which in turn results in massive complexity. In this instance the AIOps capability needs to view the IT environment in its entirety, which will include the impact of the orchestration engine, and this is true whether the engine sits in the cloud or not.</p>
<p>Cloud is an important economic and architectural development, but it does make life harder for people trying to manage the environment as information is more difficult to extract. However, the cloud doesn’t fundamentally alter the role AIOps plays in the support of IT systems.</p>
<h2 id="what-about-traditionalists-and-microservices">What about traditionalists and microservices?</h2>
<p>With every major technology wave there are sceptics, and their scepticism has been proven right a number of times; not all technology waves have borne fruit. In this case, though, I think they're wrong. Microservices may look like a trend in how technology is packaged, but it is part of a longer-term trend in how technology is deployed.</p>
<p>Having said that, what positive role do the sceptics play? There's certainly little attention given by most DevOps teams to management. If the sceptic is wagging a finger, highlighting the need for assurance and requesting more monitoring and better understanding of root-cause problems, then they play a fundamental role not just in keeping things honest but in ensuring that microservice projects are a success &#8211; both in terms of delivering functionality and of economic sustainability. If you don't take the cost of management into account early on, you're going to have a very distorted picture of the economics of the situation.</p>
<p>Traditionalists tend to work against the trend and deploy new technologies exclusively for new applications. However, a lot of the value from these new technologies can come through re-engineering existing applications, because in many cases it was the problems inherent in existing applications which brought about these new technologies. So what ends up happening is that new applications are built, but the old applications which the new solutions were supposed to replace continue to limp along and don’t deliver good customer service.</p>
<h2 id="how-might-microservices-help-aiops-evolve">How might microservices help AIOps evolve?</h2>
<p>I think microservices will underscore the necessity of deploying AIOps. There is not even an option for a community of human operators to manage new systems. You need AIOps to ensure that microservices transitions from leading edge technology to being mainstream. For the broader market, microservices will help legitimise AIOps.</p>
<p>With regard to AIOps itself, the kind of analysis one needs to understand causality in a microservice setting makes heavy demands on topology, or graphs, because you're looking at a whole series of components with all sorts of complex, changing connections. A lot of the mystery that needs to be unravelled is the causal path from one microservice to another: what are the connections between the microservices?</p>
<p>In theory AIOps takes into account topological analysis, and many vendors like Moogsoft have developed a number of interesting algorithms to deal with topology, but more work needs to be done. So I think we will see topological, graph-based analytics become one of the central pieces of AIOps. It’s not that AIOps has not given attention to topology, but I think that topology will move to centre stage in order to cope with the particular kind of complexity that microservices bring to the table.</p>
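<p>The graph-based causality idea described above can be sketched with nothing more than a service call graph and a breadth-first search. The service names and topology below are invented for illustration; real AIOps platforms derive the graph from traces and use far more sophisticated analytics on top of it.</p>

```python
from collections import deque

# Hypothetical call graph: each service maps to its downstream dependencies
calls = {
    "frontend": ["checkout", "catalog"],
    "checkout": ["payments", "inventory"],
    "catalog": ["inventory"],
    "payments": [],
    "inventory": ["database"],
    "database": [],
}

def causal_path(graph, symptom, suspect):
    """Shortest dependency chain from the service showing symptoms
    to the suspected root cause, found by breadth-first search."""
    queue = deque([[symptom]])
    seen = {symptom}
    while queue:
        path = queue.popleft()
        if path[-1] == suspect:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no dependency path: the suspect cannot explain the symptom

print(causal_path(calls, "frontend", "database"))
```

<p>Even this toy version shows why topology matters: a slow database explains a slow frontend only if a dependency path connects them, and ruling paths out is as valuable as finding them.</p>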
<p>The post <a href="https://www.aiuniverse.xyz/why-microservices-needs-aiops/">Why microservices needs AIOps</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-microservices-needs-aiops/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
