<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>data-driven Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/data-driven/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/data-driven/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 01 Mar 2021 06:57:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Robust Data-Driven Machine-Learning Models for Subsurface Applications: Are We There Yet?</title>
		<link>https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/</link>
					<comments>https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 01 Mar 2021 06:57:54 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[data-driven]]></category>
		<category><![CDATA[machine-learning]]></category>
		<category><![CDATA[Models]]></category>
		<category><![CDATA[Robust]]></category>
		<category><![CDATA[Subsurface]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13139</guid>

					<description><![CDATA[<p>Source &#8211; https://jpt.spe.org/ Algorithms are taking over the world, or so we are led to believe, given their growing pervasiveness in multiple fields of human endeavor such <a class="read-more-link" href="https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/">Robust Data-Driven Machine-Learning Models for Subsurface Applications: Are We There Yet?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://jpt.spe.org/</p>



<p>Algorithms are taking over the world, or so we are led to believe, given their growing pervasiveness in multiple fields of human endeavor such as consumer marketing, finance, design and manufacturing, health care, politics, sports, etc. The focus of this article is to examine where things stand in regard to the application of these techniques for managing subsurface energy resources in domains such as conventional and unconventional oil and gas, geologic carbon sequestration, and geothermal energy.</p>



<p><strong>Srikanta Mishra</strong>&nbsp;and&nbsp;<strong>Jared Schuetter,&nbsp;</strong>Battelle Memorial Institute;&nbsp;<strong>Akhil Datta-Gupta,&nbsp;</strong>SPE, Texas A&amp;M University; and&nbsp;<strong>Grant Bromhal,</strong>&nbsp;National Energy Technology Laboratory, US Department of Energy</p>






<p>It is useful to start with some definitions to establish a common vocabulary.</p>



<ul class="wp-block-list"><li><strong>Data analytics (DA)</strong>—Sophisticated data collection and analysis to understand and model hidden patterns and relationships in complex, multivariate data sets</li><li><strong>Machine learning (ML)</strong>—Building a model between predictors and response, where an algorithm (often a black box) is used to infer the underlying input/output relationship from the data</li><li><strong>Artificial intelligence (AI)</strong>—Applying a predictive model with new data to make decisions without human intervention (and with the possibility of feedback for model updating)</li></ul>



<p>Thus, DA can be thought of as a broad framework that helps determine what happened (descriptive analytics), why it happened (diagnostic analytics), what will happen (predictive analytics), or how we can make something happen (prescriptive analytics) (Sankaran et al. 2019). Although DA is built upon a foundation of classical statistics and optimization, it has increasingly come to rely upon ML, especially for predictive and prescriptive analytics (Donoho 2017). While the terms DA, ML, and AI are often used interchangeably, it is important to recognize that ML is essentially a subset of DA and a core enabling element of the broader decision-making construct that is AI.</p>



<p>In recent years, there has been a proliferation of studies using ML for predictive analytics in the context of subsurface energy resources. Consider how the number of papers on ML in the OnePetro database has been increasing exponentially since 1990 <strong>(Fig. 1).</strong> These trends are also reflected in the number of technical sessions devoted to ML/AI topics in conferences organized by SPE, AAPG, and SEG, among others, as well as in books targeted to practitioners in these professions (Holdaway 2014; Mishra and Datta-Gupta 2017; Mohaghegh 2017; Misra et al. 2019).</p>



<p>Given these high levels of activity, our goal is to provide some observations and recommendations on the practice of data-driven model building using ML techniques. The observations are motivated by our belief that some geoscientists and petroleum engineers may be jumping the gun by applying these techniques in an ad hoc manner without any foundational understanding, whereas others may be holding off on using these methods because they do not have any formal ML training and could benefit from some concrete advice on the subject. The recommendations are conditioned by our experience in applying both conventional statistical modeling and data analytics approaches to practical problems. To that end, we ask and (try to) answer the following questions:</p>



<ul class="wp-block-list"><li>Why ML models and when?</li><li>One model or many?</li><li>Which predictors matter?</li><li>Can data-driven models become physics-informed?</li><li>What are some challenges going forward?</li></ul>



<h2 class="wp-block-heading">Why ML Models and When?</h2>



<p>Historically, subsurface science and engineering analyses have relied on mechanistic (physics-based) models, which include a causal understanding of input/output relationships. Unsurprisingly, experienced professionals are wary of purely data-driven black-box ML models that appear to be devoid of any such understanding. Nevertheless, the use of ML models is easy to justify if the relevant physics-based model is computation intensive or immature or a suitable mechanistic modeling paradigm does not exist. Furthermore, Holm (2019) posits that, even though humans cannot assess how a black-box model arrives at a particular answer, such models can be useful in science and engineering in certain cases. The three cases that she identifies, and some corresponding oil and gas examples, follow.</p>



<ul class="wp-block-list"><li><strong>When the cost of a wrong answer is low relative to the value of a correct answer</strong>&nbsp;(e.g., using an ML-based proxy model to carry out initial explorations in the parameter space during history matching, with further refinements in the vicinity of the optimal solution applied using a full-physics model)</li><li><strong>When they produce the best results&nbsp;</strong>(e.g., using a large number of pregenerated images to seed a pattern-recognition algorithm for matching the observed pressure derivative signature to an underlying conceptual model during well-test analysis)</li><li><strong>As tools to inspire and guide human inquiry</strong>&nbsp;(e.g., using operational and historical data for electrical submersible pumps in unconventional wells to understand the factors and conditions responsible for equipment failure or suboptimal performance and perform preventative maintenance as needed)</li></ul>



<p>It should be noted that data-driven modeling does not preclude the use of conventional statistical models such as linear/linearized regression, principal component analysis for dimension reduction, or cluster analysis to identify natural groupings within the data (in addition to, or as an alternative to, black-box models). This sets up the data-modeling culture vs. algorithm-modeling culture debate as first noted by Breiman (2001). In our view, the two approaches can and should coexist, with ML methods being preferred if they are clearly superior in terms of predictive accuracy, albeit often at the cost of interpretability. If both approaches provide comparable results at comparable speeds, then conventional statistical models should be chosen because of their transparency.</p>
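<p>As an illustration of this coexistence, the following sketch (our own, built with scikit-learn on synthetic data; the features, response, and 10% tolerance are all hypothetical) fits both a transparent linear model and a black-box random forest, and falls back to the simpler model unless the ML model is clearly more accurate:</p>

```python
# Compare a transparent linear model against a black-box random forest
# and prefer the simpler model when the results are comparable.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))                               # hypothetical predictors
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=500)   # near-linear response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lin = LinearRegression().fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

rmse_lin = mean_squared_error(y_te, lin.predict(X_te)) ** 0.5
rmse_rf = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5

# Choose the transparent model unless the ML model is clearly superior
# (the 10% tolerance here is an arbitrary illustrative threshold).
chosen = "linear" if rmse_lin <= 1.1 * rmse_rf else "random forest"
print(rmse_lin, rmse_rf, chosen)
```

<p>How much extra accuracy justifies giving up interpretability is, of course, a problem-specific judgment call rather than a fixed threshold.</p>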



<h2 class="wp-block-heading">One Model or Many?</h2>



<p>Although the concept of a single correct model has been conventional wisdom for quite some time, the practice of geostatistics has influenced the growing acceptance that multiple plausible geologic models (and their equivalent dynamic reservoir models) can exist (Coburn et al. 2007). This issue of nonuniqueness can be extended readily to other application domains such as drilling, production, and predictive maintenance. The idea of an ensemble of acceptable models simply recognizes that every model—through its assumptions, architecture, and parameterization—has a unique way of characterizing the relationships between the predictors and the responses. Furthermore, multiple such models can provide very similar fits to training or test data, although their performance with respect to future predictions or identification of variable importance can be quite different.</p>



<p>Much like a “wisdom of crowds” sentiment for decision-making at the societal level, ensemble modeling approaches combine predictions from different models with the goal of improving predictions beyond what a single model can provide. They have also routinely appeared as top solutions to the well-known Kaggle data analysis competitions. Approaches for model aggregation may include a simple unweighted average of all model predictions or a weighted average based on model goodness of fit (e.g., root-mean-squared error or a similar error metric). Alternatively, multiple model predictions can be combined using a process called stacking, where a set of base models are used to predict the response of interest using the original inputs, and then their predictions are used as predictors in a final ML-based model, as shown in the work flow of <strong>Fig. 2</strong> (Schuetter et al. 2019).</p>
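<p>A minimal stacking sketch in the spirit of the work flow of <strong>Fig. 2</strong> (our construction, not the authors&#8217; exact implementation; the data set, base learners, and meta-model are placeholders) could look like:</p>

```python
# Stacking: base learners predict the response; their out-of-fold
# predictions become inputs to a final meta-model.
from sklearn.datasets import make_regression
from sklearn.ensemble import (RandomForestRegressor,
                              GradientBoostingRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=400, n_features=6, n_informative=5,
                       noise=10.0, random_state=0)

base_models = [
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("gbm", GradientBoostingRegressor(random_state=0)),
]
# The final estimator learns how to weight the base-model predictions.
stack = StackingRegressor(estimators=base_models, final_estimator=Ridge())

score = cross_val_score(stack, X, y, cv=3, scoring="r2").mean()
print(round(score, 3))
```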



<p>Given that there is no a priori way to choose the best ML algorithm for a problem at hand, at least in our experience, we recommend starting with a simple linear regression or classification model (ideally, no ML model should underperform this base model). This would be supplemented by one or more tree-based models [e.g., random forest (RF) or gradient boosting machine (GBM)] and one or more nontree-based models [e.g., support vector machine (SVM) or artificial neural network (ANN)]. Because of their architecture, tree-based models can be quite robust, sidestepping many issues that tend to plague conventional statistical models (e.g., monotone transformation of predictors, collinearity, sensitivity to outliers, and normality assumptions). They also tend to produce good performance without excessive tuning, so they are generally easy to train and use. Models such as SVM and ANN require more effort to implement—in the former case, because of the need to be more careful with predictor representation and outliers, and, in the latter case, because of the large number of tuning parameters and resources required; however, they have also traditionally shown better performance.</p>



<p>The suite of acceptable models, based on a goodness-of-fit threshold, would then be combined using the model aggregation concepts described earlier. The benefits would be robust predictions as well as ranking of variable interactions that integrate multiple perspectives.</p>
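<p>The simpler aggregation route, a weighted average based on goodness of fit, can be sketched as follows (our illustration; the RMSE values and predictions are made up, and inverse-RMSE weighting is one assumed scheme among several):</p>

```python
# Weighted-average ensemble: models with lower validation RMSE
# receive proportionally larger weights.
import numpy as np

rmse = np.array([0.8, 1.0, 1.2])    # hypothetical validation RMSEs
preds = np.array([
    [10.2, 11.1, 9.8],              # model A test-set predictions
    [10.0, 11.4, 9.5],              # model B
    [ 9.7, 11.0, 10.1],             # model C
])

w = (1 / rmse) / (1 / rmse).sum()   # normalized inverse-RMSE weights
ensemble = w @ preds                # weighted-average prediction
print(w.round(3), ensemble.round(2))
```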



<h2 class="wp-block-heading">Which Predictors Matter?</h2>



<p>For black-box models, we strongly believe that it is not sufficient merely to obtain the model prediction (i.e., what will happen); it is also necessary to understand how the predictors are affecting the response (i.e., why it will happen). At some point, every model should require human review to understand what it does because (a) all models are wrong (thanks, George Box), (b) all models are based on assumptions, and (c) humans have a tendency to be overconfident in models and use them even when those assumptions are violated. To that end, answering the question “Which predictors matter?” can help provide some inkling into the inner workings of the black-box model and, thus, addresses the issue of model interpretability. In fact, one of the biggest pushbacks against the widespread adoption of ML models is the perceived lack of transparency in the black-box modeling paradigm (Holm 2019). Therefore, it is important to ensure that a robust approach toward determining (and communicating) variable importance is an integral element of the work flow for data-driven modeling using ML methods.</p>



<p>A review of the subsurface ML modeling literature suggests that ranking of input variables (predictors) with respect to their effect on the output variable of interest (response) seems to be carried out sporadically and mostly when the ML algorithm used in the study happens to include a built-in importance metric (as in the case of RF, GBM, or certain ANN implementations). In our experience, it is more useful to consider a model-agnostic variable-importance strategy, which also lends itself to the ensemble modeling construct. This can help create a meta-ranking of importance across multiple plausible models (much like using a panel of judges in a figure skating competition).</p>



<p>As Schuetter et al. (2018) have shown, the importance rankings may fluctuate from model to model, but, collectively, they provide a more robust perspective on the relative importance of predictors aggregated across multiple models. Some of those model-independent importance-ranking approaches, as explained in detail in Molnar (2020), are summarized in <strong>Table 1.</strong> We have found the Permute approach to be the most robust and easy to implement and explain without incurring any significant additional computational burden beyond the original model fitting process.</p>
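<p>The Permute approach can be sketched with scikit-learn&#8217;s model-agnostic implementation (our example on synthetic data; the predictors and settings are hypothetical):</p>

```python
# Permutation importance: shuffle one predictor at a time and measure
# how much the model's score degrades; works for any fitted model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = 4 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=400)  # feature 2 is irrelevant

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

ranking = np.argsort(result.importances_mean)[::-1]
print(ranking)  # strongest predictor first
```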



<h2 class="wp-block-heading">Can Data-Driven Models Become Physics-Informed?</h2>



<p>Standard data-driven ML algorithms are trained solely based on data. To ensure good predictive power, the training typically requires large amounts of data that may not be readily available, particularly during early stages of field development. Even if adequate data are available, there often is difficulty in interpreting the results, or the results may be physically unrealistic. To address these challenges, a new class of physics-informed ML is being actively investigated (Raissi et al. 2019). The loss function in a data-driven ML model (such as an ANN) typically consists of only the data misfit term. In contrast, in the physics-informed neural network (PINN) modeling approaches, the models are trained to minimize the data misfit while accounting for the underlying physics, typically described by governing partial differential equations. This ensures physically consistent predictions and lower data requirements because the solution space is constrained by physical laws. For subsurface flow and transport modeling using PINN, the residual of the governing mass balance equations is typically used as the additional term in the loss function.</p>



<p>For illustrative purposes, <strong>Fig. 3</strong> shows 3D pressure maps in an unconventional reservoir generated using the PINN approach and a comparison with a standard neural network (NN) approach. To train the PINN, the loss function here is set as <em>L</em> = <em>L</em><sub>d</sub> + <em>L</em><sub>r</sub>, where <em>L</em><sub>d</sub> is the data misfit in terms of initial pressure, boundary pressure, and gas production rate and <em>L</em><sub>r</sub> is the residual with respect to the governing mass-balance equation, which is specified using a computationally efficient Eikonal form of the original equations (Zhang et al. 2016). Almost identical results are obtained using the PINN and the standard NN in terms of matching the gas production rate. However, the pressure maps generated using the PINN show close agreement with 3D numerical simulation results, whereas the standard NN shows pressure depletion over a much larger region. Furthermore, the predictions using the PINN are two orders of magnitude faster than the 3D numerical simulator for this example.</p>
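<p>The composite loss L = Ld + Lr can be illustrated with a deliberately tiny toy problem (ours, far simpler than the reservoir application above): fitting a quadratic u(x) = a + bx + cx&#178; to sparse noisy data while penalizing the residual of an assumed governing equation u&#8242;&#8242;(x) + q = 0, solved here as one stacked least-squares system:</p>

```python
# Composite loss L = L_d + L_r as stacked least squares: data-misfit rows
# plus a weighted physics-residual row constraining the curvature.
import numpy as np

q = 2.0                                  # hypothetical constant source term
x_d = np.array([0.0, 0.5, 1.0])          # sparse "measurements"
u_d = np.array([0.02, 0.24, -0.01])      # noisy samples of u = x*(1 - x)

# Data-misfit rows: u(x_i) should match u_d[i]
A_data = np.column_stack([np.ones_like(x_d), x_d, x_d**2])
# Physics row: for a quadratic, u'' = 2c, so the residual is 2c + q
w = 10.0                                 # assumed physics weight
A_phys = np.array([[0.0, 0.0, 2.0 * w]])
b_phys = np.array([-q * w])

A = np.vstack([A_data, A_phys])
b = np.concatenate([u_d, b_phys])
a_hat, b_hat, c_hat = np.linalg.lstsq(A, b, rcond=None)[0]
print(round(c_hat, 2))  # physics pushes the curvature toward -q/2 = -1
```

<p>In an actual PINN, the same two-term structure appears, but the residual is evaluated by automatic differentiation of a neural network at collocation points rather than analytically.</p>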



<h2 class="wp-block-heading">What Are Some Key Challenges Going Forward?</h2>



<p>Next, we address some of the lingering questions and comments that have commonly been raised during the first author’s SPE Distinguished Lecture question-and-answer sessions, in industry- and research-oriented technical forums related to ML, and in conversations with decision-makers and stakeholders.</p>



<p>“Our ML models are not very good.” Consumer marketing and social-media entities (e.g., Google, Facebook, Netflix) are forced to use ML/AI models to predict human behavior because there is no mechanistic modeling alternative. There is a general (but mistaken) perception in our industry that these models must be highly accurate (because they are used so often), whereas subsurface ML models can show higher errors depending on the problem being solved, the size of the training data set, and the inclusion of relevant causal variables. We need to manage the (misplaced) expectation that subsurface ML models must provide near-perfect fits to data and focus more on how the data-driven model can complement physics-based models and add value for decision-making. Also, the application of ML models in predictive mode for a different set of geological conditions (spatially), or extended into the future where a different flow regime might be valid (temporally), should be treated with caution because data-driven models have limited ability to project the unseen. In other words, the past may not always be prologue for such models.</p>



<p>“If I don’t understand the model, how can I believe it?” This common human reaction to anything that lies beyond one’s sphere of knowledge can be countered by a multipronged approach: (a) articulating the extent to which the predictors span the space of the most relevant causal variables for the problem of interest, (b) demonstrating the robustness of the model with both training and (cross) validation data sets, (c) explaining how the predictors affect the response to provide insights into the inner workings of the model by using variable importance and conditional sensitivity analysis (Mishra and Datta-Gupta 2017), and (d) supplementing this understanding of input/output relationships through creative visualizations.</p>
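<p>Item (b) of this multipronged approach can be sketched with k-fold cross-validation (our example on synthetic data; the model and settings are placeholders), where a small spread in scores across folds supports a claim of robustness:</p>

```python
# Demonstrate model robustness with k-fold cross-validation rather than
# a single train/test split: report both the mean score and its spread.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=300, n_features=5, n_informative=5,
                       noise=5.0, random_state=1)
model = GradientBoostingRegressor(random_state=1)

cv = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
# A high mean with a small standard deviation across folds is stronger
# evidence of robustness than one good hold-out score.
print(round(scores.mean(), 3), round(scores.std(), 3))
```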



<p>“We are still looking for the ‘Aha!’ moment.” Another common refrain against ML models is that they fail to produce profound insights on system behavior that were not known before. There are times when a data-driven model will produce novel insights, whereas, in other situations, it will merely substantiate conventional subject-matter expertise on key factors affecting the system response. The value of the ML model in either case lies in providing a quantitative data-driven framework for describing the input/output relationships, which should prove useful to the domain expert whenever a physics-based model takes too long to run, requires more data than is readily available, or is at an immature or evolving state.</p>



<p>“My staff need to learn data science, but how?” There appears to be a grassroots trend where petroleum engineers and geoscientists are trying to reinvent themselves by informally picking up some knowledge of machine learning and statistics from open sources such as YouTube videos, code and scripts from GitHub, and online courses. Following Donoho (2017), we believe that becoming a citizen data scientist (i.e., one who learns from data) requires more—that is, formally supplementing one’s domain expertise with knowledge of conventional data analysis (from statistics), programming in Python/R (from computer science), and machine learning (from both statistics and computer science). Organizations, therefore, should promote a training regime for their subsurface engineers and scientists that provides such competencies.</p>



<p>In the context of technology advancement and workforce upskilling, it is worth pointing out a recently launched initiative by the US Department of Energy known as Science-Informed Machine Learning for Accelerating Real-Time Decisions in Subsurface Applications (SMART) (https://edx.netl.doe.gov/smart/). This initiative is funded by DOE’s Carbon Storage and Upstream Oil and Gas Program and has three main focus areas:</p>



<ul class="wp-block-list"><li><strong>Real-time visualization</strong>—to enable dramatic improvements in the visualization of key subsurface features and flows by exploiting machine learning to substantially increase speed and enhance detail</li><li><strong>Real-time forecasting</strong>—to transform reservoir management by rapid analysis of real-time data and rapid forward prediction under uncertainty to inform operational decisions</li><li><strong>Virtual learning</strong>—to develop a computer-based experiential learning environment to improve field development and monitoring strategies</li></ul>



<p>The SMART team is engaging with university, national laboratory, and industry partners and is building on ongoing and historical data collected from DOE-supported field laboratories and regional partnerships and initiatives since the early 2000s. A key area of experimentation within SMART is the use of deep-learning techniques (e.g., convolutional and graph neural networks, autoencoders/decoders, long short-term memory) for building 3D spatiotemporal data-driven models on the basis of field observations or synthetic data.</p>



<h2 class="wp-block-heading">Epilogue</h2>



<p>The buzz surrounding DA and AI/ML from multiple business, health, social, and applied science domains has found its way into several oil and gas (and related subsurface science and engineering) applications. Within our area of work, there is significant ongoing activity related to technology adaptation and development, as well as both informal and formal upskilling of geoenergy professionals to create citizen data scientists. The current status of this field, however, can best be classified as somewhat immature; it reminds us of the situation with geostatistics in the early 1990s, when the potential of the technology was beginning to be realized by the industry but was not yet fully adopted for mainstream applications.</p>



<p>To that end, we have highlighted several issues that should be properly addressed for making data-driven models more robust (i.e., accurate, efficient, understandable, and useful) while promoting foundational understanding of ML-related technologies among petroleum engineers and geoscientists. We believe that the appropriate mindset is not to treat these data-driven modeling problems as mere curve-fitting exercises using very flexible and powerful algorithms that can easily be abused into overfitting, but rather to extract insights from the data that can be translated into actionable information for making better decisions. As the poet T.S. Eliot said: “Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?” By extension, where is the information that is hiding in our data? May these thoughts help guide our journey toward better ML-based data-driven models for subsurface energy resource applications.</p>
<p>The post <a href="https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/">Robust Data-Driven Machine-Learning Models for Subsurface Applications: Are We There Yet?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/robust-data-driven-machine-learning-models-for-subsurface-applications-are-we-there-yet/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Realising value from onboard data</title>
		<link>https://www.aiuniverse.xyz/realising-value-from-onboard-data/</link>
					<comments>https://www.aiuniverse.xyz/realising-value-from-onboard-data/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 08 Oct 2020 06:15:36 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[data-driven]]></category>
		<category><![CDATA[data-mining]]></category>
		<category><![CDATA[digital information]]></category>
		<category><![CDATA[digital system]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12032</guid>

					<description><![CDATA[<p>Source: rivieramm.com Vessel owners are improving their ship maintenance through improved monitoring of onboard system performance, with real-time and packaged data from the main engines, diesel generators, <a class="read-more-link" href="https://www.aiuniverse.xyz/realising-value-from-onboard-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/realising-value-from-onboard-data/">Realising value from onboard data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: rivieramm.com</p>



<p>Vessel owners are improving their ship maintenance through improved monitoring of onboard system performance, with real-time and packaged data from the main engines, diesel generators, boilers and emissions-abatement technology.</p>



<p>NYK Bulkship (Asia) operations director Capt K K Mukherjee explained some of the main benefits of deep-level sensor networks and condition monitoring during Riviera Maritime Media’s <em>Extending intelligent monitoring of onboard machinery</em> webinar on 9 September.</p>



<p>He said shipowners and managers should invest in intelligent systems to understand the huge amount of hidden data in a ship. This should involve installing networks of sensors, if these are not already commissioned; centralising onboard data capture and processing; sending information over a secure virtual private network from the ship to the shore-based management office; and deploying data analytics for trend analysis and real-time monitoring.</p>



<p>“This can lead to data mining during inventory control and various optimisation of the onboard operations in relation to navigation or running machinery,” said Capt Mukherjee.</p>



<p>“You have to have an intention or mission to transform your digital resources and not to waste them,” he said.</p>



<p>Online and remote monitoring can be used for maintenance management, system optimisation, operational advice, reporting and spare parts management of some onboard systems, said Capt Mukherjee. The systems most commonly monitored by ship operators are the main engines, diesel generators, boilers, scrubber systems, selective catalytic reduction (SCR) devices, voyage data recorders and vessel navigation aids. Capt Mukherjee said remote monitoring provides owners with early warning of issues and alerts if there are performance problems.</p>
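<p>A simple early-warning check of the kind described can be sketched as a rolling-statistics rule on a sensor stream (our construction, not NYK&#8217;s actual system; the data, window size and threshold are all hypothetical):</p>

```python
# Flag readings that deviate strongly from the recent trend, e.g. on a
# main-engine exhaust-temperature stream.
import numpy as np

def early_warning(readings, window=20, z_thresh=3.0):
    """Return indices where a reading departs from the rolling baseline."""
    readings = np.asarray(readings, dtype=float)
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(readings[i] - mu) > z_thresh * sigma:
            alerts.append(i)
    return alerts

rng = np.random.default_rng(0)
temps = 350 + rng.normal(0, 1.5, size=100)  # synthetic steady operation
temps[70] += 15                             # injected anomaly
alerts = early_warning(temps)
print(alerts)  # the injected spike at index 70 should be flagged
```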



<p>For example, NYK monitors main engine loads and revolutions. It watches fuel oil and cylinder oil consumption, exhaust gases and auxiliary blowers. This Japanese owner then monitors consumption and load sharing of the ship’s diesel generators and voyage information from the bridge.</p>



<p>Capt Mukherjee said the future for condition monitoring and cognitive maintenance will involve virtual reality and digital twins. These will help in repair and maintenance over a ship’s lifecycle and enable owners to identify areas that need action and improvement, he said.</p>



<p>Steel Ships chief executive Dr Ranjan Varghese explained during Riviera’s Vessel Optimisation Webinar Week why regular onboard system monitoring and the deployment of a maintenance decision support system were becoming more important as the global coronavirus pandemic limits travel between shore and ships.</p>



<p>“Cognitive maintenance is the only way forward. We are having serious problems with getting a complete crew on board,” Dr Varghese said.</p>



<p>Implementing the cognitive maintenance system is already producing results for Steel Ships. “We have increased availability and maintainability by 15%,” he said. “Failures have reduced by 30% and energy consumption by 6% to 10%. The reduction of spare parts consumption is between 7% and 15%.”</p>



<p>This all leads to operational expenditure reductions through optimisation and intelligence. “The ultimate goal is to keep the lifecycle cost of the vessel as low as possible. At the same time safety is not compromised,” said Dr Varghese. He noted how safety remained the most important element of shipping operations.</p>



<p>Data can be drawn from many sources on board ships and used for different purposes, but this data needs to be processed and delivered to the correct people in a timely manner.</p>



<p>“There are tonnes of data coming from the vessel,” continued Dr Varghese, “all kinds of data for different parties, consumers, charterers, agents, owners and shipmanagers.”</p>



<p>He explained that to achieve fleetwide efficiency, all this data must be distilled properly. “It needs to be as easy to understand by the senior managers and the non-technical people as it is by the staff who are technically managing the ships.”</p>



<p><strong>New technologies</strong></p>



<p>World Maritime University (WMU) associate professor (safety and security) Dimitrios Dalaklis said new technologies were enhancing performance and the condition of ships. These include smart sensors, fleet digitalisation, cloud computing, internet-of-things and digital twins.</p>



<p>When combined, these solutions can contribute towards improved safety, logistics, fuel costs and lower emissions. “Take the concept of a digital twin for example,” Mr Dalaklis explained. “We can now create a theoretical model and manipulate it in real time to make changes that have an almost instantaneous result in the real world.”</p>



<p>This optimises the decision-making process by “using highly accurate data, saving costs and having a huge impact on efficiency, both during the development stage and when the model becomes a reality,” Mr Dalaklis said.</p>



<p><strong>Class platform</strong></p>



<p>Class society ABS has launched a platform for digital information from ship fleets. ABS My Digital Fleet provides data-driven insights for shipowners to improve fleet efficiency, reduce costs and help manage risks. This web-based platform fuses multiple data sources into a centralised digital system.</p>



<p>Applications within this platform can deliver real-time alerts to owners, enabling managers to see an asset’s performance in terms of regulatory compliance, fuel efficiency, and structural and mechanical integrity.</p>



<p>“ABS My Digital Fleet aggregates these data sources into one online environment and derives insights by leveraging emerging technology, such as artificial intelligence,” said ABS chairman, president and CEO Christopher Wiernicki.</p>



<p><strong>Real-time installations</strong></p>



<p>Höegh Autoliners partnered with MAN Energy Solutions and Kongsberg Digital to optimise engines and maintenance on a fleet of vehicle carriers in September 2020. MAN and Kongsberg cemented a memorandum of understanding (MoU) they signed in October 2019 with a firm agreement to collaborate in developing digitalisation solutions in the maritime industry, with Höegh Autoliners being the initial project.</p>



<p>These partners are validating real-time engine monitoring and digital assistance to optimise performance on a fleet of car carriers on fixed trade routes worldwide.</p>



<p>“For us at Höegh Autoliners, this collaboration is an important step towards the utilisation of digital solutions in optimising the running and maintenance of our engines in a safe and effective way,” said head of technical operations Geir Frode Abelsen.</p>



<p>Part of this project involves MAN’s PrimeServ Assist for engine data analysis. Kongsberg Digital is also providing its data infrastructure solution, Vessel Insight, for real-time data transfer.</p>



<p>Hanson Marine is investing in real-time monitoring of machinery on its new aggregate dredger,&nbsp;<em>Hanson Thames</em>. This is under construction by Damen Shipyards in Galati, Romania, and is expected to be delivered in Q1 2021.</p>



<p><em>Hanson Thames</em>&nbsp;will feature Royston Diesel Power’s electronic fuel management system (EFMS) Enginei, as part of a comprehensive suite of advanced digital technologies.</p>



<p>Enginei uses Coriolis flow meters and sensors to accurately monitor the fuel consumed by each vessel engine, tracked against GPS data, voyage details and operational mode. The data is collected, processed and relayed to bridge- and engineroom-mounted touchscreen monitors, enabling the ship’s master to adjust vessel speed and take other actions to reduce fuel consumption and emissions and to plan equipment maintenance.</p>
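<p>As an illustration of the kind of calculation such a fuel management system performs, a minimal sketch (not Royston’s actual implementation; function and parameter names are hypothetical) might combine a flow-meter reading with GPS speed over ground to derive fuel burned per nautical mile:</p>

```python
# Hypothetical sketch: derive fuel burned per nautical mile from a
# Coriolis-meter mass-flow reading and GPS speed over ground.

def fuel_per_nm(flow_kg_per_h: float, speed_knots: float) -> float:
    """Fuel consumption in kg per nautical mile at the current speed."""
    if speed_knots <= 0:
        raise ValueError("vessel must be making way")
    # knots are nautical miles per hour, so (kg/h) / (nm/h) = kg/nm
    return flow_kg_per_h / speed_knots

# Example: 420 kg/h of fuel at 12 knots is 35 kg per nautical mile;
# slowing to 10 knots at 300 kg/h would cut that to 30 kg/nm.
rate = fuel_per_nm(420.0, 12.0)
```

<p>A dashboard comparing this figure across speeds and operational modes is what would let a master trade speed against consumption.</p>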



<p><strong>Future applications of AR/VR and digital twins</strong></p>



<p>NYK Bulkship (Asia) operations director Capt K K Mukherjee expects developments in augmented and virtual reality and digital twins to enhance fleet management in the following areas:</p>



<ul class="wp-block-list"><li>Training of personnel before they arrive on the vessel.</li><li>Easing maintenance activities.</li><li>Identifying trends in problems.</li><li>Improving asset management.</li><li>Documentation, record keeping and maintenance activities.</li><li>Providing evidence of the ship’s history of repair and maintenance over its lifecycle.</li><li>Identifying potential areas or systems that need improvement.</li><li>Implementation of various modes of autonomous shipping.</li></ul>
<p>The post <a href="https://www.aiuniverse.xyz/realising-value-from-onboard-data/">Realising value from onboard data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/realising-value-from-onboard-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>UNLOCKING BIG DATA VALUE FROM DATA GENERATION TO DATA</title>
		<link>https://www.aiuniverse.xyz/unlocking-big-data-value-from-data-generation-to-data/</link>
					<comments>https://www.aiuniverse.xyz/unlocking-big-data-value-from-data-generation-to-data/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 05 Aug 2020 05:34:34 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[could]]></category>
		<category><![CDATA[data analytics]]></category>
		<category><![CDATA[data-driven]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10696</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net The huge reserves of Big Data and Analytics could make enterprises earn unlimited Revenues. Data is valuable, so much that the fastest-growing companies are adopting <a class="read-more-link" href="https://www.aiuniverse.xyz/unlocking-big-data-value-from-data-generation-to-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/unlocking-big-data-value-from-data-generation-to-data/">UNLOCKING BIG DATA VALUE FROM DATA GENERATION TO DATA</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>The huge reserves of Big Data and Analytics could make enterprises earn unlimited Revenues.</p>



<p>Data is valuable, so much so that the fastest-growing companies are adopting data monetization and making it a vital component of their strategy. Every modern enterprise is a data-driven company. Data is everywhere in the enterprise, harnessed from strategic partners, supply chains, operations, customers and competitors; what is critical is the insight derived from it, which substantially increases its value.</p>



<p>How do enterprises value their data? That is one of the most pressing questions the C-suite asks as the volume of big data grows exponentially. Interestingly, there is no definitive answer. Data, like air, is free-flowing, so it is no wonder that data monetization focuses on increasing the economic value of data.</p>



<h4 class="wp-block-heading"><strong>Leveraging from Data Monetization</strong></h4>



<p>There are two primary paths to data monetization. The first is internal, focusing on leveraging data to improve an enterprise’s operations, productivity, and products and services.</p>



<p>The second path is external: creating new revenue streams by making data available to customers and partners.</p>



<p>The ‘<strong>Golden Rules of Data Exchange</strong>’ for gaining monetary rewards from data include:</p>



<p><strong>1. Understanding the Role and Value of Data in the Business</strong><br>Smart data utilization also helps in managing risk and provides assurance that the business is compliant with laws and regulations. But it can only serve this purpose effectively if you know where your data resides, how relevant it is and how valuable it could be.</p>



<p><strong>2. Getting Data in Order</strong><br>Before thinking about monetizing data, companies need to discover what kind of data they hold about their partners, customers, products, assets or transactions and what publicly available data can be called on to increase the value of their proprietary data.</p>



<p><strong>3. Embed data monetization into Business Strategy</strong></p>



<p>Executives should evaluate their key business goals and strategic initiatives through the lens of how data can support them. Once you understand the quality of data and have tied it to business strategy then you can put the right structures in place to monetize it.</p>



<p><strong>4. The Potential for Data to Deliver Value is Enormous</strong></p>



<p>Sometimes, though, it’s hard for companies to imagine quite what the opportunities could be because they are so used to pursuing growth through established strategies and revenue streams. That’s why all companies should be open to learning from other businesses and partnering in ways that make sense from a data point of view.</p>



<h4 class="wp-block-heading"><strong>Communicate Data’s Value to Foster Growth</strong></h4>



<p>Monetizing data is still a relatively new experience for many organizations, and even when successful initiatives are in place they aren’t always known to the business as a whole. As data becomes more and more important, companies will need both to communicate and educate internal and external stakeholders so they fully grasp the value data can deliver.</p>
<p>The post <a href="https://www.aiuniverse.xyz/unlocking-big-data-value-from-data-generation-to-data/">UNLOCKING BIG DATA VALUE FROM DATA GENERATION TO DATA</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/unlocking-big-data-value-from-data-generation-to-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Big Data – How Businesses Can Manage Data Aggregation Successfully</title>
		<link>https://www.aiuniverse.xyz/big-data-how-businesses-can-manage-data-aggregation-successfully/</link>
					<comments>https://www.aiuniverse.xyz/big-data-how-businesses-can-manage-data-aggregation-successfully/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 29 Jul 2020 05:30:48 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[automating]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[Businesses]]></category>
		<category><![CDATA[data-driven]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10541</guid>

					<description><![CDATA[<p>Source: enterprisetalk.com There is an enormous amount of data available for companies for business insights and analysis. Businesses now have tools to enable them to aggregate them <a class="read-more-link" href="https://www.aiuniverse.xyz/big-data-how-businesses-can-manage-data-aggregation-successfully/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/big-data-how-businesses-can-manage-data-aggregation-successfully/">Big Data – How Businesses Can Manage Data Aggregation Successfully</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: enterprisetalk.com</p>



<p>There is an enormous amount of data available to companies for business insights and analysis. Businesses now have tools that enable them to aggregate it into meaningful reports for decision-making. Combining big data in a meaningful way is tricky, but bigger brands are tackling it well.</p>



<p>A majority of organizations are aiming for a data-driven approach, and are seeing success in their efforts. According to a 2014 study by Dell EMC, 1.7 megabytes of data would be produced every second for every person by 2020.</p>



<p>Businesses need to follow proven practices for data aggregation to reduce the associated data management challenges. A major issue is ensuring that insights are drawn from reliable raw data, which organizations can accomplish by structuring and normalizing that data. The following practices are a good place to start.</p>



<p>Fundamentally, businesses need to define their short-term and long-term analytics objectives. For instance, a company today may be trying to understand its consumers’ buying preferences; later, it may want to aggregate data from different sources to identify audience interests in order to sell more insightfully. Whatever the purpose, there is likely to be both an immediate and a long-term focus that will shape the business’s data aggregation requirements, and the strategy should reflect that.</p>



<p>Organizations that purchase data from third parties need to ensure that their privacy standards and governance are compatible. Healthcare data is a good example: when patient data is acquired from an external source for analysis or treatment, it must be supplied in an anonymized format to protect patient privacy.</p>
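<p>A minimal sketch of what an “anonymized format” can mean in practice: direct identifiers are replaced with non-reversible tokens before records leave the data owner. The field names and salting scheme below are illustrative assumptions, not a reference to any specific regulation or product:</p>

```python
# Hypothetical sketch: pseudonymize patient records before sharing them
# with an external analytics partner. Clinical fields are kept; direct
# identifiers are replaced with salted, non-reversible hash tokens.
import hashlib

SALT = b"org-secret-salt"  # held by the data owner, never shared

def pseudonymize(record: dict) -> dict:
    out = dict(record)
    for field in ("patient_id", "name"):  # illustrative identifier fields
        raw = out.pop(field, None)
        if raw is not None:
            digest = hashlib.sha256(SALT + str(raw).encode()).hexdigest()
            out[field + "_token"] = digest[:16]  # stable per input, not reversible
    return out

shared = pseudonymize({"patient_id": "P-1001", "name": "Jane Doe", "diagnosis": "J45"})
```

<p>Because the same input always yields the same token, the recipient can still join and analyse records without ever seeing who they describe.</p>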



<p>Furthermore, businesses need to determine how data will be accumulated and how users will access it. Aggregated data may be used by specific functional areas in a company or by different departments across the board. This is a critical factor because it drives the choice between keeping aggregated data in a large repository with varied access options and keeping it in a small database customized to the needs of a specific user group.</p>



<p>In essence, automating data integration helps. Wherever the data is being aggregated, organizations need a straightforward way to vet it and integrate it into the target data source, avoiding hand-coded integration interfaces. The preferred tactics for data integration are therefore standard APIs and automated integration tools, which perform secure data integration for business functions.</p>
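<p>The automated, API-driven integration described above can be sketched as a small pipeline: records fetched from a source system are mapped to a target schema, type-normalized and gap-filled, instead of being hand-coded field by field. All names here are hypothetical, and <code>fetch_orders()</code> stands in for a real API call:</p>

```python
# Hypothetical sketch of automated data integration: map source fields to a
# target schema, normalize types, and fill gaps with predictable defaults.

def fetch_orders():
    # stand-in for an HTTP GET against a source system's API
    return [
        {"OrderID": 1, "Total": "19.99", "Region": "EU"},
        {"OrderID": 2, "Total": "5.00"},  # missing Region
    ]

FIELD_MAP = {"OrderID": "order_id", "Total": "total", "Region": "region"}

def integrate(raw_records, defaults=None):
    defaults = defaults or {"region": "UNKNOWN"}
    clean = []
    for rec in raw_records:
        # keep only mapped fields, renamed to the target schema
        row = {FIELD_MAP[k]: v for k, v in rec.items() if k in FIELD_MAP}
        row["total"] = float(row["total"])      # normalize types
        for key, value in defaults.items():
            row.setdefault(key, value)          # fill gaps predictably
        clean.append(row)
    return clean

rows = integrate(fetch_orders())
```

<p>Because the mapping and defaults are data, not code, adding a new source becomes a configuration change rather than another hand-written interface.</p>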
<p>The post <a href="https://www.aiuniverse.xyz/big-data-how-businesses-can-manage-data-aggregation-successfully/">Big Data – How Businesses Can Manage Data Aggregation Successfully</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/big-data-how-businesses-can-manage-data-aggregation-successfully/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>DEMOCRATIZING DATA-DRIVEN PROCESSES THROUGH AUTOML FOR BETTER BUSINESS PROSPECTS</title>
		<link>https://www.aiuniverse.xyz/democratizing-data-driven-processes-through-automl-for-better-business-prospects/</link>
					<comments>https://www.aiuniverse.xyz/democratizing-data-driven-processes-through-automl-for-better-business-prospects/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 18 May 2020 06:41:00 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[AutoML]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[data-driven]]></category>
		<category><![CDATA[deployment]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8840</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net Data Science and Machine Learning are among the most deployed and useful technologies of the current marketplace. And as the utility increases, the new wave <a class="read-more-link" href="https://www.aiuniverse.xyz/democratizing-data-driven-processes-through-automl-for-better-business-prospects/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/democratizing-data-driven-processes-through-automl-for-better-business-prospects/">DEMOCRATIZING DATA-DRIVEN PROCESSES THROUGH AUTOML FOR BETTER BUSINESS PROSPECTS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>Data Science and Machine Learning are among the most widely deployed and useful technologies in the current marketplace. As their utility grows, new waves of advancement bring further innovation to the industry. To add an extra edge to what data science and ML can achieve, we now have AutoML (Automated Machine Learning) platforms. AutoML is among the top trends in the contemporary data market, with most large technology companies investing in it: Google, Amazon and Microsoft have already embraced AutoML in their business processes to accelerate the effectiveness of their operations and products. Considered a quiet revolution in AI, the technology has transformed the data science landscape while offering a great deal to modern businesses.</p>



<h4 class="wp-block-heading"><strong>What exactly is AutoML?</strong></h4>



<p>Automated machine learning (AutoML) is the process of automating the end-to-end application of machine learning algorithms to real-world problems. One of its most notable features is that even people with no data science or ML expertise can use such a platform to achieve the desired outcomes.</p>
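<p>The core idea can be illustrated with a toy model-selection loop (a minimal sketch, not any vendor’s AutoML product): several candidate models are tried automatically and whichever predicts held-out data best is kept, with no modelling expertise required from the user:</p>

```python
# Toy AutoML-style model selection: score each candidate model on held-out
# data and keep the best. Real platforms also automate feature engineering
# and hyperparameter tuning; this only shows the selection step.

def mse(model, data):
    """Mean squared error of a model over (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

train = [(x, 2 * x + 1) for x in range(10)]         # data follows y = 2x + 1
holdout = [(x, 2 * x + 1) for x in range(10, 15)]   # unseen evaluation data

candidates = {
    "constant-mean": lambda x, m=sum(y for _, y in train) / len(train): m,
    "identity": lambda x: x,
    "linear-2x+1": lambda x: 2 * x + 1,
}

# the "automation": pick the candidate with the lowest held-out error
best_name = min(candidates, key=lambda name: mse(candidates[name], holdout))
```

<p>Scaled up to hundreds of model families and hyperparameter settings, this search is what lets non-specialists get a usable model without hand-tuning.</p>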



<h4 class="wp-block-heading"><strong>But why do we need AutoML?</strong></h4>



<p>According to a Gartner survey, it takes around four years to take an AI project live, which cannot keep pace with rising demand and shifting market dynamics. And, according to statistics, large investments in data and AI projects succeed only 15% of the time. With AutoML platforms, however, small AI projects can be delivered in a short period of time.</p>



<p>Moreover, the soaring demand for machine learning systems does not by itself guarantee the successful deployment of ML models across a wide range of applications. Success requires a proficient team of seasoned data scientists to decide which model best fits a particular business problem, but the shortage of data science talent makes that hard to achieve. AutoML platforms address this by automating as many steps of the ML pipeline as possible, reducing human effort without compromising quality of performance.</p>



<h4 class="wp-block-heading">So how is it changing the landscape of modern businesses?</h4>



<p>Have you heard of Mercari? Mercari is a popular online shopping app in Japan. The company uses Google’s AutoML tooling to improve its image classification. Using a UI for uploading photos, Mercari’s app can identify and suggest brand names from over 12 major brands through a customized AutoML pipeline.</p>



<p>Leveraging Google’s AutoML platform enabled the company to customize ML models in successfully identifying over 50,000 images with an accuracy of 91.3%.</p>



<p>Moreover, the implementation of automated machine learning across physical retail stores is redefining their future, with business benefits including better sales forecasting among others. By analyzing current customer data and seasonal purchasing patterns, an AutoML platform can help retail businesses improve their sales prospects, subsequently reducing unused inventory costs and waste on unnecessary promotions.</p>



<p>While leveraging the AutoML to enhance business effectiveness and productivity, brands can also improve customer personalization through customization.</p>



<p>For any business across any industry, AutoML is bound to reduce costs and increase productivity for data scientists, even as the democratization of machine learning reduces demand for them. The technology also helps accelerate revenue and customer satisfaction. AutoML models with enhanced accuracy can improve other, less tangible business results too.</p>
<p>The post <a href="https://www.aiuniverse.xyz/democratizing-data-driven-processes-through-automl-for-better-business-prospects/">DEMOCRATIZING DATA-DRIVEN PROCESSES THROUGH AUTOML FOR BETTER BUSINESS PROSPECTS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/democratizing-data-driven-processes-through-automl-for-better-business-prospects/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Data Analytics Shortcuts Reduce the Need for Roomfuls of Data Scientists</title>
		<link>https://www.aiuniverse.xyz/data-analytics-shortcuts-reduce-the-need-for-roomfuls-of-data-scientists/</link>
					<comments>https://www.aiuniverse.xyz/data-analytics-shortcuts-reduce-the-need-for-roomfuls-of-data-scientists/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 14 Apr 2020 10:47:44 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[data analytics]]></category>
		<category><![CDATA[data scientists]]></category>
		<category><![CDATA[data-driven]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8157</guid>

					<description><![CDATA[<p>Source: rtinsights.com Every company aspires to be data-driven, but it takes expertise and investment over long periods of time to attain this. In an era of change <a class="read-more-link" href="https://www.aiuniverse.xyz/data-analytics-shortcuts-reduce-the-need-for-roomfuls-of-data-scientists/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/data-analytics-shortcuts-reduce-the-need-for-roomfuls-of-data-scientists/">Data Analytics Shortcuts Reduce the Need for Roomfuls of Data Scientists</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: rtinsights.com</p>



<p>Every company aspires to be data-driven, but it takes expertise and investment over long periods of time to attain this. In an era of change and disruption, the months — or perhaps years — it takes to build and deploy data analytics solutions may be too late for a struggling company. It also means hiring data scientists and analysts, often with PhDs — another onerous, time-consuming process. Is there a better way?</p>



<p>There is an emerging class of pre-built analytics that may help provide shortcuts around the trials and tribulations involved in attempting to build things out onsite. “Automation is removing the need for developers to be paired with traditional data scientists,” writes Nick Jordan in SD Times. “The vehicle that is accelerating this transition is the API or application programming interface, the mechanism by which different software platforms talk to each other.”</p>



<p>Data analytics and data science are increasingly becoming automated, and “as businesses make the leap from big data to AI, and automation becomes increasingly sophisticated — with every major cloud vendor already investing in some type of Auto ML initiative — fewer organizations will need the traditional data scientist, and data engineers will be able to harness the power of a Ph.D. through APIs,” Jordan adds.</p>



<p>In addition, companies are increasingly relying on analytics already embedded into their enterprise applications. Companies seeking to step up their analytics game need to reduce their software build times, reduce costs from ongoing development and maintenance, and increase user productivity, a recent report from Nucleus Research concludes.</p>



<p>When adding dashboards, reports, and other analytics features to software, embedded analytics may cut software build times by up to 85 percent, according to the study’s author, Daniel Elman of Nucleus Research. “Companies who eschewed internal builds in favor of an embedded solution were able to go live with hosted analytics functionality in an average of three weeks,” he writes, while “homegrown builds were projected to require, on average, six to eight months.”</p>



<p>An embedded analytics application is one that is built into and executed from within enterprise applications, Elman explains. This feature is valuable to companies since “data science skills are scarce, and every company cannot afford to find and hire developers with the qualified background to create and maintain a custom analytics solution.”<br>Over the next five years, more than half of all corporate job functions will include self-service analytics responsibilities such as insight discovery and reporting, Nucleus estimates.</p>



<p>Elman cites the example of an industrial equipment manufacturer which was outgrowing its data management and analytics framework. “The company was falling behind in deliveries and lacked the internal visibility to track progress and diagnose issues,” he said. The company plugged a pre-packaged analytics solution into its ERP environment, and “as a result of the deployment, the company was able to reduce late deliveries, with total orders delivered on-time increasing by 18 percent. The constant data integrated with the application ecosystem allowed the company to reduce the time to act on new leads by 15 percent. More generally, the organization was able to eliminate paper-based reporting and adopt a more data-driven culture of accountability and trust.”</p>



<p>There are cases where embedding the solution is not the most feasible approach, Elman cautions. “For example, in speaking with end-users, we found that applications requiring highly-specialized functionality or handling massive volumes of data are often better suited for standalone solutions or self-builds.”</p>



<p>In addition, care must be taken in fitting the right analytics approach to the business problem at hand. “Embedded solutions are more lightweight and tend to lack the back-end computation engine and complex data management architecture as compared to more traditional standalone analytics solutions,” Elman says. “As a result, embedded solutions are ideally suited for executing simpler processes like creating reports, producing graphics and data visualizations, and tracking key benchmarks and indicators with dashboards. For more complex processes such as big-data analytics — predictive, prescriptive, diagnostic — and machine learning, standalone solutions are the more feasible choice due to their more robust data handling capabilities and more labor-intensive functional options.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/data-analytics-shortcuts-reduce-the-need-for-roomfuls-of-data-scientists/">Data Analytics Shortcuts Reduce the Need for Roomfuls of Data Scientists</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/data-analytics-shortcuts-reduce-the-need-for-roomfuls-of-data-scientists/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>OmniSci and Z by HP Accelerate Data-Driven Workflows</title>
		<link>https://www.aiuniverse.xyz/omnisci-and-z-by-hp-accelerate-data-driven-workflows/</link>
					<comments>https://www.aiuniverse.xyz/omnisci-and-z-by-hp-accelerate-data-driven-workflows/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 11 Apr 2020 12:44:29 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[data analytics]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[data scientists]]></category>
		<category><![CDATA[data-driven]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[HP]]></category>
		<category><![CDATA[OmniSci]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8136</guid>

					<description><![CDATA[<p>Source: aithority.com OmniSci, the pioneer in accelerated analytics, and Z by HP announced a collaboration to bring the power of advanced data analytics to the HP Z8, the world’s <a class="read-more-link" href="https://www.aiuniverse.xyz/omnisci-and-z-by-hp-accelerate-data-driven-workflows/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/omnisci-and-z-by-hp-accelerate-data-driven-workflows/">OmniSci and Z by HP Accelerate Data-Driven Workflows</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: aithority.com</p>



<p>OmniSci, the pioneer in accelerated analytics, and Z by HP announced a collaboration to bring the power of advanced data analytics to the HP Z8, the world’s most powerful workstation. As a preloaded component in the Z8 Workstation Evaluation Program, OmniSci’s accelerated analytics platform gives data scientists and analysts the ability to directly interrogate massive datasets at their workstations, overcoming the cost and needless operational hurdles presented by cloud and server-based deployments.</p>



<p>OmniSci offers accelerated analytics at scale. The platform is capable of processing and visualizing billions of rows of data in milliseconds, enabling data scientists as well as geospatial and business analysts to gain new insights from vast collections of internal and/or external data. With the OmniSciDB SQL database engine and OmniSci Immerse data visualization interface preloaded into the Z8 workstation, users can instantly enjoy the extreme speed and interactivity of the OmniSci platform in a discrete, secure and personalized hardware solution that is ready to use.</p>



<p>“We are dedicated to delivering the best user experience in all our solutions, including mobile and desktop workstations for professional creators and power users,” stated Jared Dame, Director of AI and Data Science, Z by HP. “OmniSci offers the advanced performance necessary to satisfy the new era in intensive data analytics, making it an ideal option for our Z8 workstations.”</p>



<p>Designed for scientists, educators and other professionals, the Z8 is available with up to 56 cores, 3 TB of high-speed memory and 48 TB of storage, in addition to dual NVIDIA Quadro RTX 8000 graphics for exceptional rendering speed and clarity.</p>



<p>“Our relationship with HP allows us to deliver OmniSci’s accelerated analytics in yet another format. Now, data analysts and data scientists can take advantage of our solution in a dedicated and secure device that is ideally suited to their needs,” said Todd Mostak, OmniSci CEO and co-founder. “Our corporate mission is to make analytics instant, powerful, and effortless for everyone. Today’s announcement is yet another important step on that journey.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/omnisci-and-z-by-hp-accelerate-data-driven-workflows/">OmniSci and Z by HP Accelerate Data-Driven Workflows</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/omnisci-and-z-by-hp-accelerate-data-driven-workflows/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Columbia’s Veldkamp: Big Data Has Hard Limits</title>
		<link>https://www.aiuniverse.xyz/columbias-veldkamp-big-data-has-hard-limits/</link>
					<comments>https://www.aiuniverse.xyz/columbias-veldkamp-big-data-has-hard-limits/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 18 Feb 2020 07:17:57 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[data-driven]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6869</guid>

					<description><![CDATA[<p>Source: poetsandquants.com From Wall Street trading floors to Amazon’s huge knowledge base on customer preferences to Tesla’s self-driving software to Major League Baseball’s analytics-driven front offices, Big <a class="read-more-link" href="https://www.aiuniverse.xyz/columbias-veldkamp-big-data-has-hard-limits/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/columbias-veldkamp-big-data-has-hard-limits/">Columbia’s Veldkamp: Big Data Has Hard Limits</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: poetsandquants.com</p>



<p>From Wall Street trading floors to Amazon’s huge knowledge base on customer preferences to Tesla’s self-driving software to Major League Baseball’s analytics-driven front offices, Big Data—which, when connected to machine learning and/or artificial intelligence, aggregates massive datasets to analyze patterns of behavior and performance—appears to offer unlimited upside. As of 2015, information and communications technology were responsible for an estimated 6.5% of global GDP and some 100 million jobs.</p>



<p>As more and more data becomes accessible, there should be no limit to increases in the productivity users get from it.</p>



<p>But a recent paper by Poets&amp;Quants’ Professor of the Week, Laura L. Veldkamp of the Columbia Business School, along with Maryam Farboodi of MIT Sloan School of Management, questions the “sky’s the limit” conventional wisdom about Big Data.  In fact, the research says there are real limits to its potential gains, suggesting that its usage will produce “diminishing returns” in the long run.</p>



<h4 class="wp-block-heading">‘DATA-DRIVEN GROWTH WILL GRIND TO A HALT WITHOUT GAINS IN NON-DATA PRODUCTIVITY’</h4>



<p>“Just like capital accumulation, data accumulation alone cannot sustain growth,” Veldkamp and Farboodi write. “Without improvements in non-data productivity, data-driven growth will grind to a halt.”</p>



<p>The working paper, whose latest version was published last October, is called “A Growth Model of the Data Economy.”</p>



<p>Veldkamp and Farboodi begin by challenging the assumption that growth in the amount of data is equivalent to idea growth or technological change. The key features of data, they write, “are that it is user-generated and that it is used to predict uncertain outcomes.” But even a model that theoretically has perfect foresight doesn’t produce infinite profits.</p>



<p><strong>‘DATA IS A MEANS OF REDUCING UNCERTAINTY’</strong></p>



<p>Why? “With perfect forecasting, zero operational mistakes, profits are large, but not infinite….Data cannot sustain long-run growth because data, like all information, is a means of reducing uncertainty….,” Veldkamp and her co-author write. “Information has diminishing returns because its ability to reduce variance gets smaller and smaller as beliefs become more precise.” In other words, the more data you accumulate, the less value future data will add.</p>



<p>“Unless a perfect forecast gives a firm access to a pure, real, limitless arbitrage, the perfect forecast generates finite payoff,” the authors continue. “Information has diminishing returns because its ability to reduce variance gets smaller and smaller as beliefs become more precise…Without any other source of growth in the model, data-driven growth, like capital-driven growth, eventually grinds to a halt.”</p>
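<p>The diminishing-returns claim can be made concrete with a standard Bayesian updating sketch (an illustration with made-up numbers, not a calculation from Veldkamp and Farboodi’s paper): if each data point is an independent, equally noisy signal about some unknown quantity, the remaining uncertainty after n signals shrinks like &sigma;&sup2;/n, so each additional signal removes less uncertainty than the one before it.</p>

```python
# Diminishing returns of data: posterior variance of a Gaussian mean
# after n independent signals with noise variance sigma2 (flat prior).
# Illustrative numbers only, not taken from the paper's model.

def posterior_variance(n: int, sigma2: float = 1.0) -> float:
    """Remaining uncertainty after observing n i.i.d. signals."""
    return sigma2 / n

def marginal_reduction(n: int, sigma2: float = 1.0) -> float:
    """Extra uncertainty removed by the (n+1)-th signal."""
    return posterior_variance(n, sigma2) - posterior_variance(n + 1, sigma2)

# The marginal value of one more signal falls off rapidly with n.
for n in [1, 10, 100, 1000]:
    print(n, marginal_reduction(n))
```

<p>The first extra signal cuts the variance by 0.5; by the thousandth signal the cut is on the order of one millionth, which is the “ability to reduce variance gets smaller and smaller” pattern the authors describe.</p>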



<p>That “other” growth, of course, comes from economic activity, which generates more data points for researchers and practitioners to analyze and incorporate into their forecasts. But the data itself does not drive growth; that’s the result of economic activity and better tools to aggregate and analyze all the data it generates.</p>



<p><strong>CAPITAL HAS LIMITS AND SO DOES BIG DATA</strong></p>



<p>“The more productive capacity the data is matched with, the greater are the gains in output,” Veldkamp and Farboodi write.</p>



<p>In fact, they think the use of data is more comparable to how firms employed capital during the Industrial Revolution than to the technological innovation of the 21<sup>st</sup>&nbsp;Century post-industrial age. “When economies accumulate data alone, the aggregate growth economics are similar to an economy that accumulates capital alone,” the researchers note.</p>



<p>Like capital, data helps managers apply innovation to the real world and amplify its effect. And just as capital has limits, so does data, even Big Data, Veldkamp and Farboodi conclude.</p>



<p>Laura Veldkamp, 44, is the Cooperman Professor of Finance and Economics at Columbia Business School. Her research focuses on how individuals, investors, and firms get their information, how that information affects the decisions they make, and how those decisions affect the macroeconomy and asset prices. Lately she has examined the impact of the data economy. She teaches international finance to MBAs and finance theory and information frictions in finance to PhDs.</p>



<p>Veldkamp earned her bachelor’s degree in math and economics from Northwestern and got a PhD in economic analysis and policy from the Stanford Graduate School of Business. Having taught at NYU Stern School of Business for 15 years, she joined the Columbia faculty in 2018.</p>
<p>The post <a href="https://www.aiuniverse.xyz/columbias-veldkamp-big-data-has-hard-limits/">Columbia’s Veldkamp: Big Data Has Hard Limits</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/columbias-veldkamp-big-data-has-hard-limits/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How Indian enterprises are transforming into data driven businesses</title>
		<link>https://www.aiuniverse.xyz/how-indian-enterprises-are-transforming-into-data-driven-businesses/</link>
					<comments>https://www.aiuniverse.xyz/how-indian-enterprises-are-transforming-into-data-driven-businesses/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 03 Jan 2020 07:26:49 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[data-driven]]></category>
		<category><![CDATA[enterprises]]></category>
		<category><![CDATA[Indian]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[transforming]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5946</guid>

					<description><![CDATA[<p>Source: Realising the importance of data, Indian enterprises are leveraging it to enhance customer experience, employee productivity and business growth. According to Forrester, insights-driven companies will earn <a class="read-more-link" href="https://www.aiuniverse.xyz/how-indian-enterprises-are-transforming-into-data-driven-businesses/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-indian-enterprises-are-transforming-into-data-driven-businesses/">How Indian enterprises are transforming into data driven businesses</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: </p>



<p>Realising the importance of data, Indian enterprises are leveraging it to enhance customer experience, employee productivity and business growth. According to Forrester, insights-driven companies will earn $1.8 trillion by 2021.</p>



<p>This journey of maximising data starts with the building of a data lake.</p>



<p><strong>Insights drive productivity, cost efficiency, newer opportunities</strong></p>



<p>“A massive data lake aggregates data from all our systems and third party sources,” says Bharat Krishnamurthy, CTO, Exide Life Insurance. </p>



<p>Krishnamurthy combined that data with a machine learning model to predict the documents required from customers to process an insurance application.</p>



<p>This seamless experience extended to his field agents who could process the documents without any lag. In Exide Life Insurance’s case, the machine learning model also helped predict the persistency of customers in paying premiums for the next year.</p>



<p>Healthcare has been equally quick to implement an insight-driven culture, arriving at decisions faster and reducing the cost of patient care.</p>



<p>Santosh Rathi, VP at Columbia Asia, saved Rs 7 crore last year through optimization of medical assets based on the data generated by the healthcare chain.</p>



<p>The dataset consists of a particular piece of equipment’s business utilization from the platform, its total cost of ownership, and its cost of maintenance, all rigorously monitored month-on-month. This helps Rathi decide whether to keep, shift, or discard a particular piece of medical equipment, thereby driving cost optimization for the business.</p>



<p>“Now I can see a trend where a particular manufacturer gives me that kind of cost vis-a-vis clinical operation, doctor’s ease of use, and the reliability of the doctor with respect to the equipment,” Rathi says.</p>



<p>Insights derived from structured data also help the manufacturing sector come up with newer initiatives, explains Beena Nayar, Head-IT, Forbes Marshall.</p>



<p>“Several years of data has been captured through IoT enabled sensors and different technologies. We are in the process of building a data lake and analyzing it. We have built one level of analytics, now we focus on the next to enhance it,” she says.</p>



<p>Building predictive analytics can help monitor the parameters of the company’s flagship assets and bring in corrections in real time for efficiency.</p>



<p><strong>Data-driven journey is a bumpy ride</strong></p>



<p>Though the benefits to be derived are enormous, Krishnamurthy lists some of the practical challenges that enterprises encounter in their journey to leverage data.</p>



<p>Data consolidation is the first hurdle. “One of the challenges is to have a single view of the data. The challenge is also to ensure that every system across the organization represents data in a uniform way,” he explains.</p>



<p>Krishnamurthy points out that it is important to have a common data dictionary across the organization, so that every department, be it finance, sales, analytics, or marketing, refers to a particular terminology with the exact same definition.</p>



<p>Another major challenge in the whole exercise is ensuring security and access control around this data. “It is a continuously changing ecosystem, with new sources of data coming in all the time, new partnerships being made, and third-party data sources contributing to the database,” he says.</p>



<p>Technology leaders believe that data consolidation and providing a uniform view is a change-management exercise in itself. It is therefore essential to build a suitable environment for such experiments to thrive.</p>



<p>“In order for the culture of the larger organization to change, it is imperative that data is democratized and made available to everyone in a form which makes sense to the individual and meets their specific requirements,” says Vishal Bhasin, SVP-Technology, Viacom18 Media.</p>



<p>To foster the process further, Bhasin set up detailed workshops with different stakeholders to understand the exact requirements from executives, business owners and analysts, operations team, and data scientists.</p>



<p>“After diligently understanding the ask, we curate the data models and generate customized dashboards for different user groups,” he says.</p>



<p>In addition to descriptive analytics, the data engineering pipeline and unified analytics layer also support predictive analytics, fostering a data-driven decision-making culture.</p>



<p>Though the idea of transforming to a data-driven model seems enticing to enterprises, a lot of IT leaders struggle with the availability of relevant skill sets. The requirement can be narrowed to data science, data engineering, and a sound understanding of the business, says Krishnamurthy. This sets the base for artificial intelligence and machine learning implementation in the organization.</p>



<p>Data science entails having core ML or AI skills and understanding the models and algorithms. An edge above others would, however, lie in understanding the underlying data.</p>



<p>Data engineering involves transforming data from across the organization into a form that can be used by machine learning algorithms. Understanding the business well enough is critical to guide data scientists in defining the problem, concludes Krishnamurthy.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-indian-enterprises-are-transforming-into-data-driven-businesses/">How Indian enterprises are transforming into data driven businesses</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-indian-enterprises-are-transforming-into-data-driven-businesses/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Digging deeper: Health data mining platforms surge ahead</title>
		<link>https://www.aiuniverse.xyz/digging-deeper-health-data-mining-platforms-surge-ahead/</link>
					<comments>https://www.aiuniverse.xyz/digging-deeper-health-data-mining-platforms-surge-ahead/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 24 Jul 2019 14:05:44 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[Company]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[data-driven]]></category>
		<category><![CDATA[Health]]></category>
		<category><![CDATA[health data]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[platforms]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4138</guid>

					<description><![CDATA[<p>Source: benefitspro.com Few areas of the corporate world are fraught with the conflicting objectives found in employee health. Company health plans are designed to maintain employee health. <a class="read-more-link" href="https://www.aiuniverse.xyz/digging-deeper-health-data-mining-platforms-surge-ahead/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/digging-deeper-health-data-mining-platforms-surge-ahead/">Digging deeper: Health data mining platforms surge ahead</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: benefitspro.com</p>



<p>Few areas of the corporate world are fraught with the conflicting objectives found in employee health. Company health plans are designed to maintain employee health. Healthy employees are more productive. And generous health coverage bolsters recruitment and retention, a key goal in a full-employment economy.</p>



<p>But benefits are also a nagging cost center. And many employers are uncertain about the legality of analyzing the health data their plans create.</p>



<p>How to balance these often conflicting priorities?</p>



<p>Enter the latest potential solution: data mining platforms and consultants.</p>



<p>Data warehouses are nothing new. That’s where employers first turned when they needed a black hole in space to store the enormous volumes of data generated every minute of the workday. But now, as the Big Data industry rushes ahead, driven by its own data, employers suddenly have myriad options for managing and mining those bits stashed away on the cloud. The question is: How do I get the answers I need from my data in a timely fashion with actionable recommendations? Oh, and without running afoul of privacy concerns?</p>



<p>That’s where vendors like Springbuk, Segal Group, Artemis Health and others come in. Their promise to employers: We’ll help you quickly find out what you’re looking for in your data. And we will also bring to your attention trends and issues you didn’t know existed that can generate a better return on your health plan investment.</p>



<p>Employers are signing on for these services despite concerns about just how deeply they can mine health data. A recent Accenture survey found that only 30 percent of respondents were “very confident that they are using new sources of workforce data in a highly responsible way.” But 62 percent said they were already “using new technologies and sources of workforce data today,” and three-quarters were eager to analyze their employee data to grow and transform their businesses, and to unlock their employees’ full potential.</p>



<p>And apart from legal protections around privacy, which remain uncertain, workers don’t seem to be fearful of the Big Data/Big Brother syndrome. More than 90&nbsp;percent of employees responding to the survey said they were fine with collection of personal data, as long as it “improves their performance or well-being or provides other personal benefits.”</p>



<h4 class="wp-block-heading">Connecting with employers</h4>



<p>Platform builders are finding two routes to employers: directly through their own sales teams, and through broker channels. Says Springbuk CEO and co-founder Rod Reasen, “We do believe in the value proposition of the broker model. They represent a strong advocacy at the local level that still exists. They like to be able to offer a tool like ours to get into data warehouses and analyze what’s there for their clients.”</p>



<p>Springbuk just unveiled an upgrade of its health intelligence platform that its executives say will both greatly reduce the time required to mine specific intelligence from health data, and provide clients with customized, curated data-based reports on topics ranging from risk mitigation, care efficiency and drug savings, to steerage procedures and potentially unnecessary procedures.</p>



<p>The upgrade further enhances the platform’s ability to identify members within a plan population “who are at risk of developing health conditions and then get actionable information including appropriate treatment, disease management resources and risk mitigation strategies. At-risk employees are identified based on a proprietary algorithm using a database of existing claims,” the company says.</p>



<p>In other words, the platform both responds rapidly to employer queries about employee health, and anticipates, explores, and issues reports on trends that clients may not be aware of.</p>



<p>“The message around health intelligence, as opposed to health data, is resonating with employers,” says Reasen. “I just got off a call with a very large organization that represents hundreds of thousands of lives. They want to know why they bought a data warehouse. ‘We thought we’d have access to a lot of information, but actually it’s just access to a lot of data.’ Our health intelligence platform goes beyond a data warehouse to provide truly actionable intelligence.”</p>



<p>“It’s a question of data mining versus data reporting,” says Segal’s David Searles, vice president and the executive who developed Segal’s data analytics business. “Data mining creates new information from the data. The mining can say, for example, that you have 20&nbsp;percent of your diabetics who aren’t getting their tests done. It is creating new actionable information from the data you are presented with.”</p>



<h4 class="wp-block-heading">Data-driven decisions</h4>



<p>These new platforms can swiftly adapt to shifting priorities among employees. As opioid abuse continues to take a toll on employee health and the cost of insurance, Segal was asked to examine one client’s population to identify total savings potential for opioid abuse prevention management.</p>



<p>“The client wanted us to quantify enhanced opioid criteria savings to medical and prescription drug programs,” he says. “We analyzed the data and discovered that, by limiting first fills of opioid prescriptions to a 7-day supply, ER-related opioid visits decreased 35.3 percent.”</p>



<p>Demand is strong to mine employee data to evaluate workplace wellness programs. Generally, employers want to reduce their wellness offerings to those programs that engage employees and produce better health outcomes.</p>



<p>“We get a lot of requests to examine the data for return on investments in various programs, both wellness and disease management. While you can’t really do an ROI accurately–no one agrees on a consistent methodology for it–you can determine the effectiveness of the program by looking at the change in biometric data of the participants,” Searles says.</p>



<p>“What we are pushing toward is this: Plan sponsors should use data to actively manage their health plans. They should evaluate their employee profiles. Let’s target a program that addresses conditions that are driving trends.”</p>



<p>One example would be designing a treatment plan for diabetics with coronary disease. “They should be highly motivated and will incur large claims if they don’t improve their condition,” he says.</p>



<p>But first, the company needs to know whether its workforce includes enough diabetics with coronary disease to justify creating such a program. And that’s where the emerging data mining platforms shine.</p>



<p>The new mining platforms have immeasurably reduced the time required for an employer to find the desired information. Springbuk’s Reasen says the chief medical officer for one client told him “it would have taken him a month to come up with the exact [report] we came up with in seconds.”</p>



<p>He adds: “When a user steps in front of a data warehouse, we are asking them to spend time and use their knowledge to extract information. We all have the same amount of time. How do we use it?”</p>



<p>Strategic benefits firm Sequoia Consulting Group is a Springbuk broker client. CEO Greg Golub&nbsp;says his clients especially value the executive reports the platform produces.</p>



<p>“Springbuk is very effective at producing executive reports, tailored for the CFO or HR leader. They do a good job of synthesizing the information into an actionable report. They are focusing on the right stuff.”</p>



<p>Sequoia’s chief marketing officer, Michele Floriani,&nbsp;says being able to offer Springbuk reports to clients has led to positive feedback. “We offer it as a service to our self-insured clients and together we make use of the output and insights to make annual and longer term strategy decisions on plan design. It’s really wonderful. That’s the value to us. It focuses on what matters to the client.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/digging-deeper-health-data-mining-platforms-surge-ahead/">Digging deeper: Health data mining platforms surge ahead</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/digging-deeper-health-data-mining-platforms-surge-ahead/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
