<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>machine-learning Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/machine-learning-2/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/machine-learning-2/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 01 Mar 2021 06:57:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Robust Data-Driven Machine-Learning Models for Subsurface Applications: Are We There Yet?</title>
		<link>https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/</link>
					<comments>https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 01 Mar 2021 06:57:54 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[data-driven]]></category>
		<category><![CDATA[machine-learning]]></category>
		<category><![CDATA[Models]]></category>
		<category><![CDATA[Robust]]></category>
		<category><![CDATA[Subsurface]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13139</guid>

					<description><![CDATA[<p>Source &#8211; https://jpt.spe.org/ Algorithms are taking over the world, or so we are led to believe, given their growing pervasiveness in multiple fields of human endeavor such <a class="read-more-link" href="https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/">Robust Data-Driven Machine-Learning Models for Subsurface Applications: Are We There Yet?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://jpt.spe.org/</p>






<p><strong>Srikanta Mishra</strong>&nbsp;and&nbsp;<strong>Jared Schuetter,&nbsp;</strong>Battelle Memorial Institute;&nbsp;<strong>Akhil Datta-Gupta,&nbsp;</strong>SPE, Texas A&amp;M University; and&nbsp;<strong>Grant Bromhal,</strong>&nbsp;National Energy Technology Laboratory, US Department of Energy</p>



<p>Algorithms are taking over the world, or so we are led to believe, given their growing pervasiveness in multiple fields of human endeavor such as consumer marketing, finance, design and manufacturing, health care, politics, sports, etc. The focus of this article is to examine where things stand in regard to the application of these techniques for managing subsurface energy resources in domains such as conventional and unconventional oil and gas, geologic carbon sequestration, and geothermal energy.</p>



<p>It is useful to start with some definitions to establish a common vocabulary.</p>



<ul class="wp-block-list"><li><strong>Data analytics (DA)</strong>—Sophisticated data collection and analysis to understand and model hidden patterns and relationships in complex, multivariate data sets</li><li><strong>Machine learning (ML)</strong>—Building a model between predictors and response, where an algorithm (often a black box) is used to infer the underlying input/output relationship from the data</li><li><strong>Artificial intelligence (AI)</strong>—Applying a predictive model with new data to make decisions without human intervention (and with the possibility of feedback for model updating)</li></ul>



<p>Thus, DA can be thought of as a broad framework that helps determine what happened (descriptive analytics), why it happened (diagnostic analytics), what will happen (predictive analytics), or how can we make something happen (prescriptive analytics) (Sankaran et al. 2019). Although DA is built upon a foundation of classical statistics and optimization, it has increasingly come to rely upon ML, especially for predictive and prescriptive analytics (Donoho 2017). While the terms DA, ML, and AI are often used interchangeably, it is important to recognize that ML is basically a subset of DA and a core enabling element of the broader application for the decision-making construct that is AI.</p>



<p>In recent years, there has been a proliferation of studies using ML for predictive analytics in the context of subsurface energy resources. Consider how the number of papers on ML in the OnePetro database has been increasing exponentially since 1990 <strong>(Fig. 1). </strong>These trends are also reflected in the number of technical sessions devoted to ML/AI topics in conferences organized by SPE, AAPG, and SEG, among others, as well as in books targeted to practitioners in these professions (Holdaway 2014; Mishra and Datta-Gupta 2017; Mohaghegh 2017; Misra et al. 2019).</p>



<p>Given these high levels of activity, our goal is to provide some observations and recommendations on the practice of data-driven model building using ML techniques. The observations are motivated by our belief that some geoscientists and petroleum engineers may be jumping the gun by applying these techniques in an ad hoc manner without any foundational understanding, whereas others may be holding off on using these methods because they do not have any formal ML training and could benefit from some concrete advice on the subject. The recommendations are conditioned by our experience in applying both conventional statistical modeling and data analytics approaches to practical problems. To that end, we ask and (try to) answer the following questions:</p>



<ul class="wp-block-list"><li>Why ML models and when?</li><li>One model or many?</li><li>Which predictors matter?</li><li>Can data-driven models become physics-informed?</li><li>What are some challenges going forward?</li></ul>



<h2 class="wp-block-heading">Why ML Models and When?</h2>



<p>Historically, subsurface science and engineering analyses have relied on mechanistic (physics-based) models, which include a causal understanding of input/output relationships. Unsurprisingly, experienced professionals are wary of purely data-driven black-box ML models that appear to be devoid of any such understanding. Nevertheless, the use of ML models is easy to justify if the relevant physics-based model is computationally intensive or immature, or if a suitable mechanistic modeling paradigm does not exist. Furthermore, Holm (2019) posits that, even though humans cannot assess how a black-box model arrives at a particular answer, such models can be useful in science and engineering in certain cases. The three cases that she identifies, and some corresponding oil and gas examples, follow.</p>



<ul class="wp-block-list"><li><strong>When the cost of a wrong answer is low relative to the value of a correct answer</strong>&nbsp;(e.g., using an ML-based proxy model to carry out initial explorations in the parameter space during history matching, with further refinements in the vicinity of the optimal solution applied using a full-physics model)</li><li><strong>When they produce the best results&nbsp;</strong>(e.g., using a large number of pregenerated images to seed a pattern-recognition algorithm for matching the observed pressure derivative signature to an underlying conceptual model during well-test analysis)</li><li><strong>As tools to inspire and guide human inquiry</strong>&nbsp;(e.g., using operational and historical data for electrical submersible pumps in unconventional wells to understand the factors and conditions responsible for equipment failure or suboptimal performance and perform preventative maintenance as needed)</li></ul>



<p>It should be noted that data-driven modeling does not preclude the use of conventional statistical models such as linear/linearized regression, principal component analysis for dimension reduction, or cluster analysis to identify natural groupings within the data (in addition to, or as an alternative to, black-box models). This sets up the data-modeling culture vs. algorithm-modeling culture debate as first noted by Breiman (2001). In our view, the two approaches can and should coexist, with ML methods being preferred if they are clearly superior in terms of predictive accuracy, albeit often at the cost of interpretability. If both approaches provide comparable results at comparable speeds, then conventional statistical models should be chosen because of their transparency.</p>



<h2 class="wp-block-heading">One Model or Many?</h2>



<p>Although the concept of a single correct model has been conventional wisdom for quite some time, the practice of geostatistics has influenced the growing acceptance that multiple plausible geologic models (and their equivalent dynamic reservoir models) can exist (Coburn et al. 2007). This issue of nonuniqueness can be extended readily to other application domains such as drilling, production, and predictive maintenance. The idea of an ensemble of acceptable models simply recognizes that every model—through its assumptions, architecture, and parameterization—has a unique way of characterizing the relationships between the predictors and the responses. Furthermore, multiple such models can provide very similar fits to training or test data, although their performance with respect to future predictions or identification of variable importance can be quite different.</p>



<p>Much like a “wisdom of crowds” sentiment for decision-making at the societal level, ensemble modeling approaches combine predictions from different models with the goal of improving predictions beyond what a single model can provide. They have also routinely appeared as top solutions to the well-known Kaggle data analysis competitions. Approaches for model aggregation may include a simple unweighted average of all model predictions or a weighted average based on model goodness of fit (e.g., root-mean-squared error or a similar error metric). Alternatively, multiple model predictions can be combined using a process called stacking, where a set of base models are used to predict the response of interest using the original inputs, and then their predictions are used as predictors in a final ML-based model, as shown in the work flow of <strong>Fig. 2</strong> (Schuetter et al. 2019).</p>
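

<p>As a rough illustration of the stacking idea, the sketch below uses scikit-learn&#8217;s <code>StackingRegressor</code> on synthetic data; the predictors, data, and model choices are placeholders, not the exact work flow of Fig. 2.</p>



<pre class="wp-block-code"><code># Minimal stacking sketch (illustrative only): base models predict the
# response from the original inputs, and their out-of-fold predictions
# become features for a final estimator.
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gbm", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=Ridge(),   # combines the base-model predictions
)
stack.fit(X_train, y_train)
print("Stacked R^2 on held-out data:", stack.score(X_test, y_test))</code></pre>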



<p>Given that there is no a priori way to choose the best ML algorithm for a problem at hand, at least in our experience, we recommend starting with a simple linear regression or classification model (ideally, no ML model should underperform this base model). This would be supplemented by one or more tree-based models [e.g., random forest (RF) or gradient boosting machine (GBM)] and one or more nontree-based models [e.g., support vector machine (SVM) or artificial neural network (ANN)]. Because of their architecture, tree-based models can be quite robust, sidestepping many issues that tend to plague conventional statistical models (e.g., monotone transformation of predictors, collinearity, sensitivity to outliers, and normality assumptions). They also tend to produce good performance without excessive tuning, so they are generally easy to train and use. Models such as SVM and ANN require more effort to implement—in the former case, because of the need to be more careful with predictor representation and outliers, and, in the latter case, because of the large number of tuning parameters and resources required; however, they traditionally also have shown better performance.</p>
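

<p>A minimal sketch of that starting suite follows, again with placeholder data: a linear baseline, one tree-based learner, and one nontree-based learner, compared by cross-validated RMSE.</p>



<pre class="wp-block-code"><code># Sketch of the recommended starting suite, compared by cross-validated
# RMSE. Synthetic data stands in for real subsurface predictors.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=400, n_features=10, noise=15.0, random_state=1)

models = {
    "linear baseline": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=1),
    "SVM": SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    # Ideally, no ML model should underperform the linear baseline.
    print(f"{name}: RMSE = {-scores.mean():.2f}")</code></pre>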



<p>The suite of acceptable models, based on a goodness-of-fit threshold, would then be combined using the model aggregation concepts described earlier. The benefits would be robust predictions as well as ranking of variable interactions that integrate multiple perspectives.</p>



<h2 class="wp-block-heading">Which Predictors Matter?</h2>



<p>For black-box models, we strongly believe that it is not just sufficient to obtain the model prediction (i.e., what will happen) but also necessary to understand how the predictors are affecting the response (i.e., why will it happen). At some point, every model should require human review to understand what it does because (a) all models are wrong (thanks, George Box), (b) all models are based on assumptions, and (c) humans have a tendency to be overconfident in models and use them even when those assumptions are violated. To that end, answering the question “Which predictors matter?” can help provide some inkling into the inner workings of the black-box model and, thus, addresses the issue of model interpretability. In fact, one of the biggest pushbacks against the widespread adoption of ML models is the perception of lack of transparency in the black-box modeling paradigm (Holm 2019). Therefore, it is important to ensure that a robust approach toward determining (and communicating) variable importance is an integral element of the work flow for data-driven modeling using ML methods.</p>



<p>A review of the subsurface ML modeling literature suggests that ranking of input variables (predictors) with respect to their effect on the output variable of interest (response) seems to be carried out sporadically and mostly when the ML algorithm used in the study happens to include a built-in importance metric (as in the case of RF, GBM, or certain ANN implementations). In our experience, it is more useful to consider a model-agnostic variable-importance strategy, which also lends itself to the ensemble modeling construct. This can help create a meta-ranking of importance across multiple plausible models (much like using a panel of judges in a figure skating competition).</p>



<p>As Schuetter et al. (2018) have shown, the importance rankings may fluctuate from model to model, but, collectively, they provide a more robust perspective on the relative importance of predictors aggregated across multiple models. Some of those model-independent importance-ranking approaches, as explained in detail in Molnar (2020), are summarized in <strong>Table 1.</strong> We have found the Permute approach to be the most robust and easy to implement and explain without incurring any significant additional computational burden beyond the original model fitting process.</p>



<h2 class="wp-block-heading">Can Data-Driven Models Become Physics-Informed?</h2>



<p>Standard data-driven ML algorithms are trained solely on data. To ensure good predictive power, the training typically requires large amounts of data that may not be readily available, particularly during early stages of field development. Even if adequate data are available, the results often are difficult to interpret or may be physically unrealistic. To address these challenges, a new class of physics-informed ML is being actively investigated (Raissi et al. 2019). The loss function in a data-driven ML model (such as an ANN) typically consists of only the data misfit term. In contrast, in physics-informed neural network (PINN) modeling approaches, the models are trained to minimize the data misfit while accounting for the underlying physics, typically described by governing partial differential equations. This ensures physically consistent predictions and lower data requirements because the solution space is constrained by physical laws. For subsurface flow and transport modeling using PINN, the residual of the governing mass-balance equations is typically used as the additional term in the loss function.</p>



<p>For illustrative purposes, <strong>Fig. 3</strong> shows 3D pressure maps in an unconventional reservoir generated using the PINN approach and a comparison with a standard neural network (NN) approach. To train the PINN, the loss function here is set as <em>L</em> = <em>L</em><sub>d</sub> + <em>L</em><sub>r</sub>, where <em>L</em><sub>d</sub> is the data misfit in terms of initial pressure, boundary pressure, and gas production rate and <em>L</em><sub>r</sub> is the residual with respect to the governing mass-balance equation, which is specified using a computationally efficient Eikonal form of the original equations (Zhang et al. 2016). Almost identical results are obtained using the PINN and the standard NN in terms of matching the gas production rate. However, the pressure maps generated using the PINN show close agreement with 3D numerical simulation results, whereas the standard NN shows pressure depletion over a much larger region. Furthermore, the predictions using the PINN are two orders of magnitude faster than the 3D numerical simulator for this example.</p>
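

<p>A minimal PINN-style training step in PyTorch is sketched below. The composite loss mirrors <em>L</em> = <em>L</em><sub>d</sub> + <em>L</em><sub>r</sub>, but the toy governing equation (du/dt = -ku) is a placeholder, not the Eikonal formulation of Zhang et al. (2016).</p>



<pre class="wp-block-code"><code># PINN-style loss: data misfit plus the residual of a governing equation
# evaluated with automatic differentiation at collocation points.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
k = 2.0

t_data = torch.rand(64, 1)                 # points with observations
u_data = torch.exp(-k * t_data)            # synthetic "measurements"
t_col = torch.rand(256, 1, requires_grad=True)  # collocation points

for step in range(2000):
    opt.zero_grad()
    L_d = torch.mean((net(t_data) - u_data) ** 2)     # data misfit term
    u = net(t_col)
    du_dt = torch.autograd.grad(u.sum(), t_col, create_graph=True)[0]
    L_r = torch.mean((du_dt + k * u) ** 2)            # physics residual term
    loss = L_d + L_r                                  # L = L_d + L_r
    loss.backward()
    opt.step()</code></pre>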



<h2 class="wp-block-heading">What Are Some Key Challenges Going Forward?</h2>



<p>Next, we address some of the lingering questions and comments that have commonly been raised during the first author’s SPE Distinguished Lecture question-and-answer sessions, in industry/research-oriented technical forums related to ML, and in conversations with decision-makers and stakeholders.</p>



<p>“Our ML models are not very good.” Consumer marketing and social-media entities (e.g., Google, Facebook, Netflix) are forced to use ML/AI models to predict human behavior because there is no mechanistic modeling alternative. There is a general (but mistaken) perception in our industry that these models must be highly accurate (because they are used so often), whereas subsurface ML models can show higher errors depending on the problem being solved, the size of the training data set, and the inclusion of relevant causal variables. We need to manage the (misplaced) expectation that subsurface ML models must provide near-perfect fits to data and focus more on how the data-driven model can complement physics-based models and add value for decision-making. Also, the application of ML models in predictive mode for a different set of geological conditions (spatially) or for a future period where a different flow regime might be valid (temporally) should be treated with caution because data-driven models have limited ability to project the unseen. In other words, the past may not always be prologue for such models.</p>



<p>“If I don’t understand the model, how can I believe it?” This common human reaction to anything that lies beyond one’s sphere of knowledge can be countered by a multipronged approach: (a) articulating the extent to which the predictors span the space of the most relevant causal variables for the problem of interest, (b) demonstrating the robustness of the model with both training and (cross) validation data sets, (c) explaining how the predictors affect the response to provide insights into the inner workings of the model by using variable importance and conditional sensitivity analysis (Mishra and Datta-Gupta 2017), and (d) supplementing this understanding of input/output relationships through creative visualizations.</p>



<p>“We are still looking for the ‘Aha!’ moment.” Another common refrain against ML models is that they fail to produce profound insights on system behavior that were not known before. There are times when a data-driven model will produce insights that are novel, whereas, in other situations, it will merely substantiate conventional subject-matter expertise on key factors affecting the system response. The value of the ML model in either case lies in providing a quantitative data-driven framework for describing the input/output relationships, which should prove useful to the domain expert whenever a physics-based model takes too long to run, requires more data than is readily available, or is at an immature or evolving state.</p>



<p>“My staff need to learn data science, but how?” There appears to be a grassroots trend where petroleum engineers and geoscientists are trying to reinvent themselves by informally picking up some knowledge of machine learning and statistics from open sources such as YouTube videos, code and scripts from GitHub, and online courses. Following Donoho (2017), we believe that becoming a citizen data scientist (i.e., one who learns from data) requires more—that is, formally supplementing one’s domain expertise with knowledge of conventional data analysis (from statistics), programming in Python/R (from computer science), and machine learning (from both statistics and computer science). Organizations, therefore, should promote a training regime for their subsurface engineers and scientists that provides such competencies.</p>



<p>In the context of technology advancement and workforce upskilling, it is worth pointing out a recently launched initiative by the US Department of Energy known as Science-Informed Machine Learning for Accelerating Real-Time Decisions in Subsurface Applications (SMART) (https://edx.netl.doe.gov/smart/). This initiative is funded by DOE’s Carbon Storage and Upstream Oil and Gas Program and has three main focus areas:</p>



<ul class="wp-block-list"><li><strong>Real-time visualization</strong>—to enable dramatic improvements in the visualization of key subsurface features and flows by exploiting machine learning to substantially increase speed and enhance detail</li><li><strong>Real-time forecasting</strong>—to transform reservoir management by rapid analysis of real-time data and rapid forward prediction under uncertainty to inform operational decisions</li><li><strong>Virtual learning</strong>—to develop a computer-based experiential learning environment to improve field development and monitoring strategies</li></ul>



<p>The SMART team is engaging with university, national laboratory, and industry partners and is building on ongoing and historical data collected from DOE-supported field laboratories and regional partnerships and initiatives since the early 2000s. A key area of experimentation within SMART is the use of deep-learning techniques (e.g., convolutional and graph neural networks, autoencoders/decoders, long short-term memory) for building 3D spatiotemporal data-driven models on the basis of field observations or synthetic data.</p>



<h2 class="wp-block-heading">Epilogue</h2>



<p>The buzz surrounding DA and AI/ML from multiple business, health, social, and applied science domains has found its way into several oil and gas (and related subsurface science and engineering) applications. Within our area of work, there is significant ongoing activity related to technology adaptation and development, as well as both informal and formal upskilling of geoenergy professionals to create citizen data scientists. The current status of this field, however, can best be classified as somewhat immature; it reminds us of the situation with geostatistics in the early 1990s, when the potential of the technology was beginning to be realized by the industry but was not yet fully adopted for mainstream applications.</p>



<p>To that end, we have highlighted several issues that should be properly addressed to make data-driven models more robust (i.e., accurate, efficient, understandable, and useful) while promoting foundational understanding of ML-related technologies among petroleum engineers and geoscientists. We believe that the appropriate mindset is not to treat these data-driven modeling problems as mere curve-fitting exercises using very flexible and powerful algorithms that are easily abused into overfitting, but to try to extract insights from data that can be translated into actionable information for making better decisions. As the poet T.S. Eliot said: “Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?” By extension, where is the information that is hiding in our data? May these thoughts help guide our journey toward better ML-based data-driven models for subsurface energy resource applications.</p>
<p>The post <a href="https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/">Robust Data-Driven Machine-Learning Models for Subsurface Applications: Are We There Yet?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Industry News: A machine-learning approach to finding treatment options for COVID-19</title>
		<link>https://www.aiuniverse.xyz/industry-news-a-machine-learning-approach-to-finding-treatment-options-for-covid-19/</link>
					<comments>https://www.aiuniverse.xyz/industry-news-a-machine-learning-approach-to-finding-treatment-options-for-covid-19/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 19 Feb 2021 05:37:52 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Approach]]></category>
		<category><![CDATA[COVID-19]]></category>
		<category><![CDATA[finding]]></category>
		<category><![CDATA[industry]]></category>
		<category><![CDATA[machine-learning]]></category>
		<category><![CDATA[treatment]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12928</guid>

					<description><![CDATA[<p>Source &#8211; https://www.selectscience.net/ Researchers have developed a system to identify drugs that might be repurposed to fight the coronavirus in elderly patients When the COVID-19 pandemic struck <a class="read-more-link" href="https://www.aiuniverse.xyz/industry-news-a-machine-learning-approach-to-finding-treatment-options-for-covid-19/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/industry-news-a-machine-learning-approach-to-finding-treatment-options-for-covid-19/">Industry News: A machine-learning approach to finding treatment options for COVID-19</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.selectscience.net/</p>



<p>Researchers have developed a system to identify drugs that might be repurposed to fight the coronavirus in elderly patients</p>



<p><strong>When the COVID-19 pandemic struck in early 2020, doctors and researchers rushed to find effective treatments. There was little time to spare. “Making new drugs takes forever,” says Caroline Uhler, a computational biologist in MIT’s Department of Electrical Engineering and Computer Science and the Institute for Data, Systems and Society, and an associate member of the Broad Institute of MIT and Harvard. “Really, the only expedient option is to repurpose existing drugs.”</strong></p>



<p>Uhler’s team has now developed a machine learning-based approach to identify drugs already on the market that could potentially be repurposed to fight COVID-19, particularly in the elderly. The system accounts for changes in gene expression in lung cells caused by both the disease and aging. That combination could allow medical experts to more quickly seek drugs for clinical testing in elderly patients, who tend to experience more severe symptoms. The researchers pinpointed the protein RIPK1 as a promising target for COVID-19 drugs, and they identified three approved drugs that act on the expression of RIPK1.</p>



<p>The research appears today in the journal Nature Communications. Co-authors include MIT PhD students Anastasiya Belyaeva, Adityanarayanan Radhakrishnan, Chandler Squires, and Karren Dai Yang, as well as PhD student Louis Cammarata of Harvard University and long-term collaborator G.V. Shivashankar of ETH Zurich in Switzerland.</p>



<p>Early in the pandemic, it became clear that COVID-19 harmed older patients more than younger ones, on average. Uhler’s team wondered why. “The prevalent hypothesis is the aging immune system,” she says. But Uhler and Shivashankar suggested an additional factor: “One of the main changes in the lung that happens through aging is that it becomes stiffer.”</p>



<p>The stiffening lung tissue shows different patterns of gene expression than in younger people, even in response to the same signal. “Earlier work by the Shivashankar lab showed that if you stimulate cells on a stiffer substrate with a cytokine, similar to what the virus does, they actually turn on different genes,” says Uhler. “So, that motivated this hypothesis. We need to look at aging together with SARS-CoV-2 — what are the genes at the intersection of these two pathways?” To select approved drugs that might act on these pathways, the team turned to big data and artificial intelligence.</p>



<p>The researchers zeroed in on the most promising drug repurposing candidates in three broad steps. First, they generated a large list of possible drugs using a machine-learning technique called an autoencoder. Next, they mapped the network of genes and proteins involved in both aging and SARS-CoV-2 infection. Finally, they used statistical algorithms to understand causality in that network, allowing them to pinpoint “upstream” genes that caused cascading effects throughout the network. In principle, drugs targeting those upstream genes and proteins should be promising candidates for clinical trials.</p>



<p>To generate an initial list of potential drugs, the team’s autoencoder relied on two key datasets of gene expression patterns. One dataset showed how expression in various cell types responded to a range of drugs already on the market, and the other showed how expression responded to infection with SARS-CoV-2. The autoencoder scoured the datasets to highlight drugs whose impacts on gene expression appeared to counteract the effects of SARS-CoV-2. “This application of autoencoders was challenging and required foundational insights into the working of these neural networks, which we developed in a paper recently published in PNAS,” notes Radhakrishnan.</p>
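

<p>As a rough illustration of the autoencoder component, the sketch below compresses expression profiles into a low-dimensional embedding where signatures can be compared; the dimensions and architecture are placeholders, not the published model.</p>



<pre class="wp-block-code"><code># Minimal autoencoder sketch in PyTorch: learn a compressed embedding of
# expression profiles in which drug-response and infection-response
# signatures can be compared.
import torch

n_genes, latent = 978, 32   # hypothetical profile and embedding sizes

class AutoEncoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Linear(n_genes, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, latent),
        )
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(latent, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, n_genes),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
profiles = torch.randn(128, n_genes)     # stand-in expression data

for epoch in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(profiles), profiles)
    loss.backward()
    opt.step()

embeddings = model.encoder(profiles)     # compare signatures here</code></pre>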



<p>Next, the researchers narrowed the list of potential drugs by homing in on key genetic pathways. They mapped the interactions of proteins involved in the aging and SARS-CoV-2 infection pathways. Then they identified areas of overlap among the two maps. That effort pinpointed the precise gene expression network that a drug would need to target to combat COVID-19 in elderly patients.</p>



<p>“At this point, we had an undirected network,” says Belyaeva, meaning the researchers had yet to identify which genes and proteins were “upstream” (i.e. they have cascading effects on the expression of other genes) and which were “downstream” (i.e. their expression is altered by prior changes in the network). An ideal drug candidate would target the genes at the upstream end of the network to minimize the impacts of infection.</p>



<p>“We want to identify a drug that has an effect on all of these differentially expressed genes downstream,” says Belyaeva. So the team used algorithms that infer causality in interacting systems to turn their undirected network into a causal network. The final causal network identified RIPK1 as a target gene/protein for potential COVID-19 drugs, since it has numerous downstream effects. The researchers identified a list of approved drugs that act on RIPK1 and may have potential to treat COVID-19. Previously, these drugs had been approved for use against cancer. Other drugs that were also identified, including ribavirin and quinapril, are already in clinical trials for COVID-19.</p>



<p>Uhler plans to share the team’s findings with pharmaceutical companies. She emphasizes that before any of the drugs they identified can be approved for repurposed use in elderly COVID-19 patients, clinical testing is needed to determine efficacy. While this particular study focused on COVID-19, the researchers say their framework is extendable. “I’m really excited that this platform can be more generally applied to other infections or diseases,” says Belyaeva. Radhakrishnan emphasizes the importance of gathering information on how various diseases impact gene expression. “The more data we have in this space, the better this could work,” he says.</p>



<p>This research was supported, in part, by the Office of Naval Research, the National Science Foundation, the Simons Foundation, IBM, and the MIT Jameel Clinic for Machine Learning and Health.</p>
<p>The post <a href="https://www.aiuniverse.xyz/industry-news-a-machine-learning-approach-to-finding-treatment-options-for-covid-19/">Industry News: A machine-learning approach to finding treatment options for COVID-19</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/industry-news-a-machine-learning-approach-to-finding-treatment-options-for-covid-19/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine-learning method can crunch data to find new uses for existing drugs</title>
		<link>https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 05 Jan 2021 05:32:55 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[machine-learning]]></category>
		<category><![CDATA[scientists]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12501</guid>

					<description><![CDATA[<p>Source: news-medical.net Scientists have developed a machine-learning method that crunches massive amounts of data to help determine which existing medications could improve outcomes in diseases for which <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/">Machine-learning method can crunch data to find new uses for existing drugs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: news-medical.net</p>



<p>Scientists have developed a machine-learning method that crunches massive amounts of data to help determine which existing medications could improve outcomes in diseases for which they are not prescribed.</p>



<p>The intent of this work is to speed up drug repurposing, which is not a new concept &#8211; think Botox injections, first approved to treat crossed eyes and now a migraine treatment and top cosmetic strategy to reduce the appearance of wrinkles.</p>



<p>But getting to those new uses typically involves a mix of serendipity and time-consuming and expensive randomized clinical trials to ensure that a drug deemed effective for one disorder will be useful as a treatment for something else.</p>



<p>The Ohio State University researchers created a framework that combines enormous patient care-related datasets with high-powered computation to arrive at repurposed drug candidates and the estimated effects of those existing medications on a defined set of outcomes.</p>



<p>Though this study focused on proposed repurposing of drugs to prevent heart failure and stroke in patients with coronary artery disease, the framework is flexible &#8211; and could be applied to most diseases.</p>



<p>Drug repurposing is an attractive pursuit because it could lower the risk associated with safety testing of new medications and dramatically reduce the time it takes to get a drug into the marketplace for clinical use.</p>



<p>Randomized clinical trials are the gold standard for determining a drug&#8217;s effectiveness against a disease, but Zhang noted that machine learning can account for hundreds &#8211; or thousands &#8211; of human differences within a large population that could influence how medicine works in the body. These factors, or confounders, ranging from age, sex and race to disease severity and the presence of other illnesses, function as parameters in the deep learning computer algorithm on which the framework is based.</p>



<p>That information comes from &#8220;real-world evidence,&#8221; which is longitudinal observational data about millions of patients captured by electronic medical records or insurance claims and prescription data.</p>



<p>&#8220;Real-world data has so many confounders. This is the reason we have to introduce the deep learning algorithm, which can handle multiple parameters,&#8221; said Zhang, who leads the Artificial Intelligence in Medicine Lab and is a core faculty member in the Translational Data Analytics Institute at Ohio State. &#8220;If we have hundreds or thousands of confounders, no human being can work with that. So we have to use artificial intelligence to solve the problem.</p>



<p>&#8220;We are the first team to introduce use of the deep learning algorithm to handle the real-world data, control for multiple confounders, and emulate clinical trials,&#8221; Zhang said.</p>



<p>The research team used insurance claims data on nearly 1.2 million heart-disease patients, which provided information on their assigned treatment, disease outcomes and various values for potential confounders. The deep learning algorithm also has the power to take into account the passage of time in each patient&#8217;s experience &#8211; for every visit, prescription and diagnostic test. The model input for drugs is based on their active ingredients.</p>



<p>Applying what is called causal inference theory, the researchers categorized, for the purposes of this analysis, the active drug and placebo patient groups that would be found in a clinical trial. The model tracked patients for two years &#8211; and compared their disease status at that end point to whether or not they took medications, which drugs they took and when they started the regimen.</p>
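

<p>For intuition, the toy sketch below adjusts for confounders with inverse-propensity weighting, a standard causal-inference technique; it stands in for, and is far simpler than, the deep-learning model the team actually used.</p>



<pre class="wp-block-code"><code># Simplified trial emulation with inverse-propensity weighting: model the
# probability of treatment given confounders, then reweight outcomes so
# treated and untreated groups are comparable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))            # confounders (age, sex, ...)
p_treat = 1 / (1 + np.exp(-X[:, 0]))       # treatment depends on a confounder
treated = rng.binomial(1, p_treat).astype(bool)
outcome = 0.5 * treated + X[:, 0] + rng.normal(size=5000)  # true effect 0.5

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
w = np.where(treated, 1 / ps, 1 / (1 - ps))   # inverse-propensity weights

# Weighted difference in outcomes estimates the average treatment effect.
ate = (np.average(outcome[treated], weights=w[treated])
       - np.average(outcome[~treated], weights=w[~treated]))
print(f"Estimated treatment effect: {ate:.2f} (true value 0.5)")</code></pre>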



<p>&#8220;With causal inference, we can address the problem of having multiple treatments. We don&#8217;t answer whether drug A or drug B works for this disease or not, but figure out which treatment will have the better performance,&#8221; Zhang said.</p>



<p>Their hypothesis: that the model would identify drugs that could lower the risk for heart failure and stroke in coronary artery disease patients.</p>



<p>The model yielded nine drugs considered likely to provide those therapeutic benefits, three of which are currently in use &#8211; meaning the analysis identified six candidates for drug repurposing. Among other findings, the analysis suggested that a diabetes medication, metformin, and escitalopram, used to treat depression and anxiety, could lower risk for heart failure and stroke in the model patient population. As it turns out, both of those drugs are currently being tested for their effectiveness against heart disease.</p>



<p>Zhang stressed that what the team found in this case study is less important than how they got there.</p>



<p>&#8220;My motivation is applying this, along with other experts, to find drugs for diseases without any current treatment. This is very flexible, and we can adjust case-by-case,&#8221; he said. &#8220;The general model could be applied to any disease if you can define the disease outcome.&#8221;</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/">Machine-learning method can crunch data to find new uses for existing drugs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-method-can-crunch-data-to-find-new-uses-for-existing-drugs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Facebook Open-Sources Machine-Learning Privacy Library Opacus</title>
		<link>https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/</link>
					<comments>https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 14 Oct 2020 05:13:29 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI research]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[machine-learning]]></category>
		<category><![CDATA[PyTorch]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12186</guid>

					<description><![CDATA[<p>Source: infoq.com Facebook AI Research (FAIR) has announced the release of Opacus, a high-speed library for applying differential privacy techniques when training deep-learning models using the PyTorch <a class="read-more-link" href="https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/">Facebook Open-Sources Machine-Learning Privacy Library Opacus</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: infoq.com</p>



<p>Facebook AI Research (FAIR) has announced the release of Opacus, a high-speed library for applying differential privacy techniques when training deep-learning models using the PyTorch framework. Opacus can achieve an order-of-magnitude speedup compared to other privacy libraries.</p>



<p>The library was described on the FAIR blog. Opacus provides an API and implementation of a PrivacyEngine, which attaches directly to the PyTorch optimizer during training. By using hooks in the PyTorch Autograd component, Opacus can efficiently calculate per-sample gradients, a key operation for differential privacy. Training produces a standard PyTorch model which can be deployed without changing existing model-serving code. According to FAIR,</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>[W]e hope to provide an easier path for researchers and engineers to adopt differential privacy in ML, as well as to accelerate DP research in the field.</p></blockquote>
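

<p>A minimal sketch of the attachment step described above, based on the Opacus documentation at the time of release (the 0.x API; argument names may differ in later versions):</p>



<pre class="wp-block-code"><code># Sketch of attaching Opacus' PrivacyEngine to a PyTorch optimizer,
# following the library's 0.x documentation.
import torch
from opacus import PrivacyEngine

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

privacy_engine = PrivacyEngine(
    model,
    batch_size=64,
    sample_size=50_000,            # number of training examples
    alphas=[1 + x / 10.0 for x in range(1, 100)],  # RDP orders
    noise_multiplier=1.3,
    max_grad_norm=1.0,             # per-sample gradient clipping bound
)
privacy_engine.attach(optimizer)   # training loop then proceeds as usual

# After training, query the privacy budget spent so far:
epsilon, best_alpha = privacy_engine.get_privacy_spent(1e-5)</code></pre>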



<p>Differential privacy (DP) is a mathematical definition of data privacy. The core concept of DP is to add noise to a query operation on a dataset so that removing a single data element from the dataset has a very low probability of altering the results of that query. The bound on how much any one element can shift the query&#8217;s output distribution is quantified by the privacy budget. Each successive query expends part of the total privacy budget of the dataset; once that budget is exhausted, further queries cannot be performed while still guaranteeing privacy.</p>



<p>When this concept is applied to machine learning, it is typically applied during the training step, effectively guaranteeing that the model does not learn &#8220;too much&#8221; about specific input samples. Because most deep-learning frameworks use a training process called stochastic gradient descent (SGD), the privacy-preserving version is called DP-SGD. During the back-propagation step, normal SGD computes a single gradient tensor for an entire input &#8220;minibatch&#8221;, which is then used to update model parameters. However, DP-SGD requires computing the gradient for the individual samples in the minibatch. The implementation of this step is the key to the speed gains for Opacus.</p>



<p>For computing the individual gradients, Opacus uses an efficient algorithm developed by Ian Goodfellow, inventor of the generative adversarial network (GAN) model. Applying this technique, Opacus computes the gradient for each input sample. Each gradient is clipped to a maximum magnitude, ensuring privacy for outliers in the data. The gradients are aggregated to a single tensor, and noise is added to the result before model parameters are updated. Because each training step constitutes a &#8220;query&#8221; of the input data, and thus an expenditure of privacy budget, Opacus tracks this, providing real-time monitoring and the option to stop training when the budget is expended.</p>
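

<p>Conceptually, the DP-SGD update looks like the naive loop below: clip each per-sample gradient, aggregate, add noise, then step. Opacus avoids the per-sample loop by computing all per-sample gradients in one vectorized pass via the Autograd hooks described earlier; this sketch illustrates the idea only.</p>



<pre class="wp-block-code"><code># Naive DP-SGD step (what Opacus automates efficiently).
import torch

def dp_sgd_step(model, loss_fn, xb, yb, lr=0.1, clip=1.0, noise_mult=1.1):
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xb, yb):                  # one sample at a time
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
        scale = torch.clamp(clip / (norm + 1e-6), max=1.0)  # clip magnitude
        for g, p in zip(grads, model.parameters()):
            g.add_(p.grad * scale)
    with torch.no_grad():
        for g, p in zip(grads, model.parameters()):
            g.add_(noise_mult * clip * torch.randn_like(g))  # add noise
            p.sub_(lr * g / len(xb))          # average and update weights</code></pre>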



<p>In developing Opacus, FAIR and the PyTorch team collaborated with OpenMined, an open-source community dedicated to developing privacy techniques for ML and AI. OpenMined had previously contributed to Facebook&#8217;s CrypTen, a framework for ML privacy research, and developed its own projects, including a DP library called PySyft and a federated-learning platform called PyGrid. According to FAIR&#8217;s blog post, Opacus will now become one of the core dependencies of OpenMined&#8217;s libraries. PyTorch&#8217;s major competitor, Google&#8217;s deep-learning framework TensorFlow, released a DP library in early 2019. However, the library is not compatible with the newer 2.x versions of TensorFlow.</p>
<p>The post <a href="https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/">Facebook Open-Sources Machine-Learning Privacy Library Opacus</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/facebook-open-sources-machine-learning-privacy-library-opacus/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine-learning technique could improve fusion energy outputs</title>
		<link>https://www.aiuniverse.xyz/machine-learning-technique-could-improve-fusion-energy-outputs/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-technique-could-improve-fusion-energy-outputs/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 13 Oct 2020 11:31:16 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[could]]></category>
		<category><![CDATA[fusion energy]]></category>
		<category><![CDATA[Future]]></category>
		<category><![CDATA[machine-learning]]></category>
		<category><![CDATA[technique]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12170</guid>

					<description><![CDATA[<p>Source: phys.org Machine-learning techniques, best known for teaching self-driving cars to stop at red lights, may soon help researchers around the world improve their control over the <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-technique-could-improve-fusion-energy-outputs/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-technique-could-improve-fusion-energy-outputs/">Machine-learning technique could improve fusion energy outputs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: phys.org</p>



<p>Machine-learning techniques, best known for teaching self-driving cars to stop at red lights, may soon help researchers around the world improve their control over the most complicated reaction known to science: nuclear fusion.</p>



<p>In a typical fusion reaction, hydrogen atoms are heated to form a gaseous cloud called a plasma, which releases energy as the particles bang into each other and fuse. Getting these reactions under better control could create huge amounts of environmentally clean energy from nuclear reactors in fusion power plants of the future.</p>



<p>&#8220;The connection between machine learning and fusion energy is not obvious,&#8221; said Sandia National Laboratories researcher Aidan Thompson, principal investigator for a three-year Department of Energy Office of Science award of $2.2 million annually to make that very connection. &#8220;Simply put, we have pioneered machine-learning&#8217;s use to improve simulations of the reactor&#8217;s wall material as it interacts with plasma. This has been beyond the scope of atomic-scale simulations of the past.&#8221;</p>



<p>The expected result should suggest procedural or structural modifications to improve nuclear energy output, he said.</p>



<p><strong>Power of machine learning in modeling nuclear fusion</strong></p>



<p>Machine learning is powerful because it uses mathematical and statistical means to figure out a situation, rather than analyze every piece of data in the desired category. For example, only a small number of dog photos are needed to teach a recognition system the concept of &#8220;dogginess&#8221;— in other words, &#8220;this is a dog&#8221;—rather than scanning every dog photo in existence.</p>



<p>Sandia&#8217;s machine-learning approach to nuclear fusion is the same, but more complicated.</p>



<p>&#8220;It is not a trivial problem to physically observe what is going on within a reactor&#8217;s walls as these structures are internally bombarded with hydrogen, helium, deuterium and tritium as parts of a super-heated plasma,&#8221; said Thompson.</p>



<p>He described components of the circling plasma striking and altering the composition of the retaining walls, and heavy atoms dislodging from the struck walls and altering the plasma. Reactions take place in nanoseconds at temperatures as hot as the sun. Trying to modify components by trial and error to improve outcomes is extraordinarily laborious.</p>



<p>Machine-learning algorithms, on the other hand, use computer-generated data without direct measurements from experiments and can yield information that eventually could be used to make plasma interactions with containment wall material less damaging and thus improve the overall energy output of fusion reactors.</p>



<p>&#8220;There is no other way of getting this information,&#8221; said Thompson.</p>



<p><strong>Small number of atoms predict the energy of many</strong></p>



<p>Thompson&#8217;s team expects that by using large datasets of quantum-mechanics calculations under extreme conditions as training data, they can build a machine-learning model that predicts the energy of any configuration of atoms.</p>



<p>This model, called a machine-learning interatomic potential, or MLIAP, can be inserted into huge classical molecular dynamics codes such as Sandia&#8217;s award-winning LAMMPS, or Large-scale Atomic/Molecular Massively Parallel Simulator, software. In this way, by interrogating only a relatively small number of atoms, they can extend the accuracy of quantum mechanics to the scale of millions of atoms needed to simulate the behavior of fusion energy materials.</p>
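

<p>In spirit, an MLIAP of this kind can be as simple as a linear fit from per-configuration descriptors to quantum-mechanical energies (the construction behind Sandia&#8217;s SNAP potentials, which LAMMPS supports). In the sketch below, random placeholder descriptors stand in for real bispectrum components computed from DFT training data.</p>



<pre class="wp-block-code"><code># Minimal MLIAP-style fit: configuration energy as a linear model over
# per-atom descriptors summed across each configuration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_configs, n_desc = 200, 55
D = rng.normal(size=(n_configs, n_desc))       # placeholder descriptors
true_coeffs = rng.normal(size=n_desc)
E = D @ true_coeffs + rng.normal(scale=0.01, size=n_configs)  # "DFT" energies

potential = Ridge(alpha=1e-6).fit(D, E)
# The fitted coefficients define the interatomic potential, which can
# then drive million-atom molecular dynamics in a code such as LAMMPS.
rmse = np.sqrt(np.mean((potential.predict(D) - E) ** 2))
print("Fit RMSE:", rmse)</code></pre>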



<p>&#8220;So why is what we are doing machine learning and not just bookkeeping lots of data?&#8221; asks Thompson rhetorically. &#8220;The short answer is, we generate equations from an infinite set of possible variables to build models that are grounded in physics but contain hundreds or thousands of parameters that keep us within range of our target.&#8221;</p>



<p>One catch is that the accuracy of the MLIAP model depends on the overlap between the training data and the actual atomic environments encountered by the application, said Thompson.</p>



<p>These environments may be various, requiring new training data and alteration of the machine-learning model. Recognizing and adjusting for overlaps is part of the work of the next few years.</p>



<p>&#8220;Our model at first will be used to interpret small experiments,&#8221; Thompson said. &#8220;Conversely, that experimental data will be used to validate our model, which can then be used to make predictions about what is happening in a full-scale fusion reactor.&#8221;</p>



<p>The target for giving fusion researchers access to the Sandia machine-learning models to build better fusion reactors is approximately three years, said Thompson.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-technique-could-improve-fusion-energy-outputs/">Machine-learning technique could improve fusion energy outputs</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-technique-could-improve-fusion-energy-outputs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How AI improves microservices testing automation</title>
		<link>https://www.aiuniverse.xyz/how-ai-improves-microservices-testing-automation/</link>
					<comments>https://www.aiuniverse.xyz/how-ai-improves-microservices-testing-automation/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 29 Aug 2020 05:04:55 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[machine-learning]]></category>
		<category><![CDATA[testing]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11281</guid>

					<description><![CDATA[<p>Source: techbeacon.com Organizations that&#160;adopt&#160;artificial intelligence (AI) in testing of microservices-based applications gain better accuracy, faster results, and greater operational efficiency. AI and machine-learning technologies have matured over the <a class="read-more-link" href="https://www.aiuniverse.xyz/how-ai-improves-microservices-testing-automation/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-ai-improves-microservices-testing-automation/">How AI improves microservices testing automation</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: techbeacon.com</p>



<p>Organizations that&nbsp;adopt&nbsp;artificial intelligence (AI) in testing of microservices-based applications gain better accuracy, faster results, and greater operational efficiency.</p>



<p>AI and machine-learning technologies have matured over the last few years, and today their application in automated testing can help in more ways than one. In fact, AI has redefined the way microservices-based applications are tested, especially when it comes to canary testing. </p>



<p>The introduction of AI in software testing helps developers and testers alike. It improves accuracy; the same steps can be performed accurately every time they&#8217;re needed. Automated testing can increase both the depth and scope of your tests, resulting in more thorough overall test coverage. You can also leverage AI to simulate a large number of users interacting with your application.</p>



<p>Here&#8217;s how AI-enabled automation can help you test as you scale microservices-based applications, as well as the challenges you&#8217;ll face and effective strategies you can adopt to overcome them. </p>



<h3 class="wp-block-heading">Why traditional testing strategies don&#8217;t work</h3>



<p>Traditionally, when creating monolithic applications, you&#8217;d test each unit of code with unit tests. As different components of the application are joined together, you typically test your application using integration testing first; system testing, regression testing, and user acceptance testing usually follow.</p>



<p>If the code passes all of these tests, the release&nbsp;goes out.</p>



<p>Testing microservices-based applications is not an easy task and is not the same as testing monoliths; you must be aware of not only the service you are testing but also its dependencies—the services that work with the services under test. </p>



<p>Owing to the granular nature of microservices architecture, boundaries that were previously hidden in a traditional&nbsp;application&nbsp;are exposed. You might have several different teams spread across geographical distances working simultaneously on different services;&nbsp;this makes coordination extremely challenging. It can be&nbsp;difficult to find a particular time window to perform end-to-end testing of the application as a whole.</p>



<p>The distributed nature of microservices-based development poses many challenges to&nbsp;testing your application. These&nbsp;include:</p>



<ul class="wp-block-list"><li><strong>Availability:</strong>&nbsp;Because of the distributed nature of microservices architecture, it is difficult to find a time when all microservices are available.</li><li><strong>Isolation:</strong>&nbsp;Microservices are designed to work in isolation together with other loosely coupled services. This implies that you should be able to test every component in isolation as well as testing them together.</li><li><strong>Knowledge gap:</strong>&nbsp;You should possess a strong knowledge of each microservice;&nbsp;this would help you to write effective test cases.</li><li><strong>Data:</strong>&nbsp;Each microservice can have its own copy of data. In other words,&nbsp;each&nbsp;can have its own copy of the database, which may be different from another microservice&#8217;s copy. As a result, data integrity poses a challenge.</li><li><strong>Transactionality:</strong>&nbsp;Unlike with a monolith, where transactionality is often assured at the database level, implementing transactionality between different microservices is challenging, because a transaction can consist of various service calls spread across different servers.</li></ul>



<p>Typically, a microservices-based application consists of several services, each of which can scale dynamically when needed. Every integration point adds a risk of failure, and bugs found after integration are costly to fix. Hence, you should have an effective strategy in place for testing microservices-based applications. </p>



<h4 class="wp-block-heading">How to build an effective testing strategy</h4>



<p>To build an automated testing process for a microservices-based application, you should follow the same best practices you would for any other type of testing:</p>



<ul class="wp-block-list"><li> Understand the customer&#8217;s expectations as far as test automation is concerned.</li><li>Set quality goals—and adhere to them.</li><li>Analyze the testing types that are right for you to achieve the goals.</li><li>Write tests according to the test pyramid (i.e., considering that the cost of the tests increases as you move up the pyramid).</li></ul>



<h3 class="wp-block-heading">AI-driven test automation: Embrace innovation</h3>



<p>Today&#8217;s software testers can take advantage of AI for test creation, test execution, and data analysis by using natural-language processing and advanced modeling techniques. AI-based software testing can help by increasing efficiency, facilitating faster releases, improving test accuracy and coverage, and allowing for easier test maintenance, particularly when it comes to managing your test data. </p>



<p>For efficient test maintenance, you need to know what is happening to your data at the time of test creation. Inadequate data modeling is one reason why test maintenance fails, becoming a bottleneck in your deployment pipeline. AI can help with efficient data modeling and with root-cause analysis.</p>



<p>Repeating tests manually each time the source code changes&nbsp;can be time-consuming and costly. Once you create automated tests, you can execute them&nbsp;repeatedly and quickly at little additional cost.</p>



<h3 class="wp-block-heading">Use AI for canary testing</h3>



<p>Canary testing&nbsp;helps reduce risk by gradually rolling out changes to a small group of users before presenting them&nbsp;to a larger audience, and it is particularly useful in the testing of microservices-based applications. In a typical&nbsp;application, changes to&nbsp;microservices happen independently of one another, so those&nbsp;microservices need to&nbsp;be verified independently as well.</p>



<p>AI can help automate&nbsp;canary testing of&nbsp;microservices-based applications. You can take advantage of AI techniques such as deep&nbsp;learning to identify the changes in the new code and the issues within&nbsp;it.&nbsp;AI can compare the experience of the small canary group&nbsp;with that of the existing users, and it can do so automatically, without&nbsp;a human in the loop.</p>
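

<p>A minimal sketch of what that automated comparison could look like, assuming the only signal tracked is request error rate: a two-proportion z-test decides whether the canary group is doing significantly worse than the control group. Real canary analysis would weigh many metrics at once; the numbers and threshold below are illustrative.</p>



<pre class="wp-block-code"><code>import math

def canary_regressed(ctrl_errors, ctrl_total, canary_errors, canary_total,
                     z_threshold=2.33):  # roughly one-sided 99% confidence
    """Return True if the canary's error rate is significantly higher."""
    p_ctrl = ctrl_errors / ctrl_total
    p_can = canary_errors / canary_total
    pooled = (ctrl_errors + canary_errors) / (ctrl_total + canary_total)
    se = math.sqrt(pooled * (1 - pooled) * (1 / ctrl_total + 1 / canary_total))
    if se == 0:
        return False
    z = (p_can - p_ctrl) / se
    return z > z_threshold

# 0.5% errors in the control slice vs 1.2% in the canary slice:
if canary_regressed(50, 10000, 12, 1000):
    print("roll back the canary")
else:
    print("promote the canary")</code></pre>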



<h3 class="wp-block-heading">Challenges in AI-based microservices testing</h3>



<p>AI-based testing does have some constraints. While you can automate functional and unit tests, integration tests remain difficult to automate because of their complexity.</p>



<p>Some of the other challenges in AI-based testing include the following:&nbsp;</p>



<h4 class="wp-block-heading">Skills</h4>



<p>Testing microservices-based applications with an AI-based approach requires extensive technical expertise from testers, and it is very different from what&nbsp;manual or automation testers&nbsp;are used to. Testers should be adept at using&nbsp;AI-based tools designed specifically for&nbsp;microservices-based applications.&nbsp;</p>



<h4 class="wp-block-heading">Use cases&nbsp;</h4>



<p>Learn how&nbsp;to determine the best use cases for AI in microservices&nbsp;test automation. One&nbsp;is creating your&nbsp;unit tests: you can take advantage of AI to perform static code analysis and determine the portions of code that are not covered by your unit tests.</p>
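

<p>As a toy stand-in for that kind of analysis, the sketch below uses Python&#8217;s standard ast module to list functions in a source file that no test file appears to reference. It is a crude static heuristic rather than actual AI, but it shows the kind of raw signal an AI-assisted tool could build on. The file names are hypothetical.</p>



<pre class="wp-block-code"><code>import ast

def public_functions(source_path):
    """Collect names of top-level functions defined in a module."""
    with open(source_path) as f:
        tree = ast.parse(f.read())
    return {node.name for node in tree.body
            if isinstance(node, ast.FunctionDef)}

def referenced_names(test_path):
    """Collect every bare name referenced anywhere in the test file."""
    with open(test_path) as f:
        tree = ast.parse(f.read())
    return {node.id for node in ast.walk(tree)
            if isinstance(node, ast.Name)}

# Hypothetical file names, for illustration only.
untested = public_functions("order_service.py") - referenced_names("test_order_service.py")
for name in sorted(untested):
    print(f"no test appears to exercise: {name}")</code></pre>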



<p>You can also use AI to update unit tests as soon as the source code changes, as well as for&nbsp;test creation, execution,&nbsp;data analysis, and API testing in microservices-based applications.</p>



<p>AI can help you understand the patterns and relationships in&nbsp;API calls and&nbsp;come up with more advanced patterns and inputs for testing the API. You can leverage an AI-powered continuous testing process to more efficiently detect the controls that have changed.</p>
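

<p>Here is a minimal sketch of that idea, assuming you have logs of API-call sequences from normal traffic: collect the consecutive call pairs (bigrams) seen in production, then flag test sequences containing transitions never observed before as candidates worth probing. A real system would use a richer sequence model; the calls shown are invented.</p>



<pre class="wp-block-code"><code>def bigrams(seq):
    return set(zip(seq, seq[1:]))

# Call sequences observed in normal production traffic (illustrative).
normal_traffic = [
    ["login", "list_items", "get_item", "logout"],
    ["login", "get_item", "add_to_cart", "checkout", "logout"],
]
known = set()
for seq in normal_traffic:
    known |= bigrams(seq)

def novel_transitions(seq):
    """Return API-call transitions never seen in normal traffic."""
    return bigrams(seq) - known

print(novel_transitions(["login", "checkout", "logout"]))
# {('login', 'checkout')} -- a shortcut worth adding to the test suite</code></pre>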



<h3 class="wp-block-heading">AI-based testing can&#8217;t do everything</h3>



<p>AI-based test automation of microservices can&nbsp;create more reliable tests, and in so doing&nbsp;slash the time needed for test creation, maintenance, and analysis. Such tests can in turn be used to check the service-to-service communication, test communication paths, etc.</p>



<p>You can also&nbsp;leverage deep-learning&nbsp;models and other AI techniques to empower your team to build tests faster and&nbsp;execute&nbsp;them at scale in the cloud.</p>



<p>Adopting AI for&nbsp;microservices&nbsp;test automation is no&nbsp;panacea. It won&#8217;t magically eliminate all problems associated with software testing. But it can help you make your software testing process smarter, more efficient, and faster—and thereby deliver business value consistently.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-ai-improves-microservices-testing-automation/">How AI improves microservices testing automation</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-ai-improves-microservices-testing-automation/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
<title>Global Data Science and Machine-Learning Platforms Market 2025 Promising Growth Opportunities &#038; Forecast During This Pandemic Season</title>
		<link>https://www.aiuniverse.xyz/global-data-science-and-machine-learning-platforms-market-2025-promising-growth-opportunities-forecast-during-this-pandamic-season/</link>
					<comments>https://www.aiuniverse.xyz/global-data-science-and-machine-learning-platforms-market-2025-promising-growth-opportunities-forecast-during-this-pandamic-season/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 25 Aug 2020 07:38:51 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Global Data]]></category>
		<category><![CDATA[influencers]]></category>
		<category><![CDATA[machine-learning]]></category>
		<category><![CDATA[nitty]]></category>
<category><![CDATA[Pandemic]]></category>
		<category><![CDATA[touchpoints]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11174</guid>

					<description><![CDATA[<p>Source:-thedailychronicle Overview and Executive Summary: Data Science and Machine-Learning Platforms Market. This well articulated research report offering is an in-depth reference citing primary information as well as <a class="read-more-link" href="https://www.aiuniverse.xyz/global-data-science-and-machine-learning-platforms-market-2025-promising-growth-opportunities-forecast-during-this-pandamic-season/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/global-data-science-and-machine-learning-platforms-market-2025-promising-growth-opportunities-forecast-during-this-pandamic-season/">Global Data Science and Machine-Learning Platforms Market 2025 Promising Growth Opportunities &#038; Forecast During This Pandamic Season</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: thedailychronicle</p>



<p><strong>Overview and Executive Summary: Data Science and Machine-Learning Platforms Market.</strong></p>



<p>This research report is an in-depth reference that cites primary information and traces fine-grained developments in the Data Science and Machine-Learning Platforms market. It offers a detailed overview of the market&#8217;s global outlook across diverse touchpoints, including market valuation by volume and value, dominant trends, disruptive events, drivers, restraints, threats, and challenges, as well as barrier analysis and opportunity assessment, so that it can serve as a ready reference for market participants seeking profitable revenue generation in the Data Science and Machine-Learning Platforms market.</p>



<p><strong>The study encompasses profiles of major companies operating in the Data Science and Machine-Learning Platforms Market. Key players profiled in the report include:<br>SAS<br>Alteryx<br>IBM<br>RapidMiner<br>KNIME<br>Microsoft<br>Dataiku<br>Databricks<br>TIBCO Software<br>MathWorks<br>H2O.ai<br>Anaconda<br>SAP<br>Google<br>Domino Data Lab<br>Angoss<br>Lexalytics<br>Rapid Insight</strong></p>



<p>A close review of vital influencers, comprising growth statistics, research methodologies, case-study references, consumption and production trends, and pricing brackets, along with crucial data on production patterns, import and export valuations, production practices, and the supply-chain network, remains a major point of discussion in the Data Science and Machine-Learning Platforms market report.</p>



<p>The report specifically highlights leading players along with the marketing decisions and industry best practices that collectively drive remunerative business in the Data Science and Machine-Learning Platforms market. It also discusses the further scope of market growth and the likely forecast, giving leading players and participants striving for a profitable growth trail a better understanding of the market during 2020-24.</p>



<p>Understanding the Regional Scope of the Data Science and Machine-Learning Platforms Market:<br>The market recorded a valuation of xx million US dollars in 2019 and is likely to reach xx million US dollars by the end of the forecast tenure in 2024, clocking an impressive CAGR of xx% through the forecast period.</p>



<p>–<strong> North America (U.S., Canada, Mexico)<br>– Europe (U.K., France, Germany, Spain, Italy, Central &amp; Eastern Europe, CIS)<br>– Asia Pacific (China, Japan, South Korea, ASEAN, India, Rest of Asia Pacific)<br>– Latin America (Brazil, Rest of L.A.)<br>– Middle East and Africa (Turkey, GCC, Rest of Middle East)</strong></p>



<p><strong>What to Expect from the Data Science and Machine-Learning Platforms Market Report</strong></p>



<p><strong>• The report surveys the market and makes forecasts of market volume and value<br>• A thorough evaluation of material sources and downstream purchase developments is echoed in the report</strong></p>



<p>With unfailing market-gauging skills, the publisher has been excelling in curating tailored business-intelligence data across industry verticals. Constantly striving to expand its skills, its strength lies in dedicated analysts with a dynamic problem-solving intent, ever willing to push boundaries to scale new heights in market interpretation.</p>
<p>The post <a href="https://www.aiuniverse.xyz/global-data-science-and-machine-learning-platforms-market-2025-promising-growth-opportunities-forecast-during-this-pandamic-season/">Global Data Science and Machine-Learning Platforms Market 2025 Promising Growth Opportunities &#038; Forecast During This Pandamic Season</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/global-data-science-and-machine-learning-platforms-market-2025-promising-growth-opportunities-forecast-during-this-pandamic-season/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What Robots Need to Succeed: Machine-Learning to Teach Effectively</title>
		<link>https://www.aiuniverse.xyz/what-robots-need-to-succeed-machine-learning-to-teach-effectively/</link>
					<comments>https://www.aiuniverse.xyz/what-robots-need-to-succeed-machine-learning-to-teach-effectively/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 01 Aug 2020 05:29:35 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[machine-learning]]></category>
		<category><![CDATA[Robots]]></category>
		<category><![CDATA[teach]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10641</guid>

<description><![CDATA[<p>Source: roboticsbusinessreview.com The mid-twentieth-century sociologist David Riesman was perhaps the first to wonder with unease what people would do with all of their free time once <a class="read-more-link" href="https://www.aiuniverse.xyz/what-robots-need-to-succeed-machine-learning-to-teach-effectively/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-robots-need-to-succeed-machine-learning-to-teach-effectively/">What Robots Need to Succeed: Machine-Learning to Teach Effectively</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: roboticsbusinessreview.com</p>



<p>The mid-twentieth-century sociologist David Riesman was perhaps the first to wonder with unease what people would do with all of their free time once the encroaching machine automation of the 1960s liberated humans from menial chores and decision-making. His prosperous, if anxious, vision of the future only half came to pass, however, as the complexities of life expanded to continually fill the days of both man and machine. Work alleviated by industrious machines, such as robotics systems, in the ensuing decades only freed humans to create increasingly elaborate new tasks to labor over. Rather than giving us more free time, the machines gave us more time to work.</p>



<p><strong>Machine Learning</strong></p>



<p>Today, the primary man-made assistants helping humans with their work are decreasingly likely to take the form of an assembly line of robot limbs or the robotic butlers first dreamed up during the era of the Space Race. Three quarters of a century later, it is robotic minds, not necessarily bodies, that are in demand within nearly every sector of business. But humans can teach artificial intelligence only so much, at least not at the scale required. Enter Machine Learning, the field of study in which algorithms and physical machines are taught using enormous caches of data. Machine Learning has many different disciplines, with Deep Learning a major subset.</p>



<p><strong>Deep Learning ‘Arrives’</strong></p>



<p>Deep Learning utilizes layers of neural networks to learn patterns from datasets. The field was first conceived 20 to 30 years ago but did not achieve popularity then because of the limitations of computational power at the time. Today, Deep Learning is finally experiencing its star turn, driven by the explosive potential of deep-neural-network algorithms and by hardware advancements. Deep Learning requires enormous amounts of computational power, but it can ultimately be very powerful if one has enough computational capacity and the required datasets.</p>
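

<p>For the curious, here is a minimal sketch of the core idea in plain NumPy: a tiny two-layer network trained by gradient descent to fit XOR, the classic pattern a single layer cannot learn. Real Deep Learning stacks many more layers and trains on vastly larger datasets.</p>



<pre class="wp-block-code"><code>import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

# Two layers of weights: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: gradients of squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]</code></pre>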



<p>So who teaches the machines? Who decides what AI needs to know? First, engineers and scientists decide how AI learns. Domain experts then advise on how robots need to function and operate within the scope of the task that is being addressed, be that assisting warehouse logistics experts, security consultants, etc.</p>



<p><strong>Planning and Learning</strong></p>



<p>When it comes to AI receiving these inputs, it is important to make the distinction between Planning and Learning. Planning involves scenarios in which all the variables are already known, and the robot just has to work out at what pace to move each joint to complete a task such as grabbing an object. Learning, on the other hand, involves a more unstructured, dynamic environment in which the robot has to anticipate countless different inputs and react accordingly.</p>
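

<p>A toy sketch of the planning side, where everything is known up front: given start and goal joint angles and a per-step speed limit, the whole trajectory can be computed outright, with no learning involved. The joint values are invented for illustration.</p>



<pre class="wp-block-code"><code>def plan_joint_trajectory(start, goal, max_step=5.0):
    """Linearly interpolate joint angles, capped at max_step degrees per tick.

    Everything is known in advance, so this is planning, not learning.
    """
    deltas = [g - s for s, g in zip(start, goal)]
    biggest = max(abs(d) for d in deltas)
    steps = max(1, int(-(-biggest // max_step)))  # ceiling division
    return [
        [s + d * t / steps for s, d in zip(start, deltas)]
        for t in range(1, steps + 1)
    ]

# Move a 3-joint arm from rest to a grasp pose (angles in degrees).
for pose in plan_joint_trajectory([0.0, 0.0, 0.0], [30.0, -45.0, 12.0]):
    print([round(a, 1) for a in pose])</code></pre>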



<p>Learning can take place via Demonstrations (physically training the robot’s movements through guided practice), Simulations (3D artificial environments), or even by feeding the robot videos or data of a person or another robot performing the task it is hoping to master. The latter is a form of Training Data: labeled or annotated datasets that an AI algorithm can use to recognize patterns and learn from them. Training Data is increasingly necessary for today’s complex Machine Learning behaviors; for ML algorithms to pick up patterns in data, ML teams need to feed them large amounts of it.</p>



<p><strong>Accuracy and Abundance</strong></p>



<p>Accuracy and abundance of data are critical. A diet of inaccurate or corrupted data will leave the algorithm unable to learn correctly, or drawing the wrong conclusions. And a model can only answer in terms of what it has seen: if your dataset is focused on Chihuahuas and you input a picture of a blueberry muffin, you will still get a Chihuahua, because the muffin lies outside the distribution the model was trained on. This is the problem of improper data distribution.</p>
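

<p>The sketch below shows why, assuming a softmax classifier: the output probabilities are forced to sum to one over the known classes, so a muffin still lands on some dog breed. Checking the maximum probability is one simple, imperfect guard against such out-of-distribution inputs.</p>



<pre class="wp-block-code"><code>import numpy as np

CLASSES = ["chihuahua", "pug", "beagle"]  # the only labels the model knows

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def classify(logits, min_confidence=0.7):
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(probs.argmax())
    # Softmax always picks one of the trained classes; a low maximum
    # probability is a hint the input may be out of distribution.
    if probs[best] >= min_confidence:
        return CLASSES[best]
    return "unknown: possible out-of-distribution input"

print(classify([4.0, 1.0, 0.5]))   # confident chihuahua
print(classify([1.1, 1.0, 0.9]))   # muffin-like: near-uniform logits</code></pre>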



<p>Insufficient training data will result in a stunted learning curve and a model that may never reach the full potential it was designed for. Enough data to encompass the majority of imagined scenarios and edge cases alike is critical for true learning to take place.</p>



<p><strong>Hard at Work</strong></p>



<p>Machine Learning is currently being deployed across a wide array of industries and types of applications, including those involving robotics systems. For example, unmanned vehicles are currently assisting the construction industry, deployed across live worksites. Construction companies use data training platforms such as Superb AI to create and manage datasets that can teach ML models to avoid humans and animals, and to engage in assembling and building.</p>



<p>In the medical sector, research labs at renowned international universities deploy training data to help computer vision models recognize tumors within MRIs and CT scans. These models can eventually be used not only to diagnose and prevent diseases accurately, but also to train medical robots for surgery and other life-saving procedures. Even the best doctor in the world has a bad night’s sleep sometimes, which can dull focus the next day. But a properly trained robotic tumor-hunting assistant can perform at peak efficiency every day.</p>



<p><strong>Living Up to the Potential</strong></p>



<p>So what’s at stake here? There’s a tremendous opportunity for training data, Machine Learning, and Artificial Intelligence to help robots live up to the potential that Riesman imagined all those decades ago. Technology companies pursuing complex Machine Learning initiatives have a responsibility to educate and build trust with the general public, so that these advancements can be permitted to truly help humanity level up. If the world can deploy well-trained, well-built, and well-purposed AI, coupled with advanced robotics, then we may very well live to see some of that leisure time Riesman was so nervous about. I think most people today would agree that we could certainly use it.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-robots-need-to-succeed-machine-learning-to-teach-effectively/">What Robots Need to Succeed: Machine-Learning to Teach Effectively</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-robots-need-to-succeed-machine-learning-to-teach-effectively/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
