<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Digital Business Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/digital-business/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/digital-business/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 24 Dec 2020 06:09:05 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>IMPROVE MACHINE LEARNING PERFORMANCE WITH THESE 5 STRATEGIES</title>
		<link>https://www.aiuniverse.xyz/improve-machine-learning-performance-with-these-5-strategies/</link>
					<comments>https://www.aiuniverse.xyz/improve-machine-learning-performance-with-these-5-strategies/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 24 Dec 2020 06:09:04 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[Digital Business]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12469</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net. Machine learning is compute-intensive. Advances in innovation to capture and process a lot of data have left us suffocating in information. This makes it hard <a class="read-more-link" href="https://www.aiuniverse.xyz/improve-machine-learning-performance-with-these-5-strategies/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/improve-machine-learning-performance-with-these-5-strategies/">IMPROVE MACHINE LEARNING PERFORMANCE WITH THESE 5 STRATEGIES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<h3 class="wp-block-heading">Machine learning is compute-intensive</h3>



<p>Advances in technology for capturing and processing large volumes of data have left us awash in information, making it hard to extract insights as fast as the data arrives. This is where machine learning offers real value to a digital business.</p>



<p>We need strategies to improve machine learning performance more effectively. If we put effort in the wrong direction, we make little progress and waste a great deal of time. We also need realistic expectations for the path we choose, for instance, how much accuracy can actually be improved.</p>



<h4 class="wp-block-heading">Articulate the issue</h4>



<p>There are generally two kinds of organizations that engage in machine learning: those that build applications with a trained ML model inside as their core business proposition, and those that apply ML to improve existing business workflows. In the latter case, articulating the problem is the initial challenge. A broad goal such as reducing costs or increasing revenue should be narrowed down to the point where it becomes solvable with the right data.</p>



<p>For example, if you need to minimize the churn rate, data may help you detect customers at high flight risk by analyzing their activities on a website, a SaaS application, or even social media. While you can rely on traditional metrics and make assumptions, an algorithm may uncover hidden dependencies between the data in customers’ profiles and their likelihood of leaving.</p>
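<p>As a rough illustration of the idea (not part of the original article), a churn score like the one described above can be sketched as a tiny logistic-regression model. The activity features and data below are invented for the example; a real project would use a proper ML library and far more signals.</p>

```python
import math

def train_churn_model(rows, labels, epochs=2000, lr=0.1):
    """Fit a tiny logistic-regression churn model with gradient descent.

    rows   -- feature vectors, e.g. [logins_per_week, tickets_opened]
    labels -- 1 if the customer later churned, else 0
    """
    n = len(rows[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted churn probability
            err = p - y                       # gradient of the log loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def churn_risk(w, b, x):
    """Predicted probability that customer x will leave."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical activity data: [logins per week, support tickets opened]
history = [[9, 0], [8, 1], [7, 0], [1, 4], [2, 5], [0, 3]]
churned = [0, 0, 0, 1, 1, 1]

w, b = train_churn_model(history, churned)
print(round(churn_risk(w, b, [8, 0]), 2))  # active customer: low risk
print(round(churn_risk(w, b, [1, 5]), 2))  # disengaged customer: high risk
```

<p>The point is not the model itself but the framing: once the problem is narrowed to "predict who leaves, given these activities", it becomes solvable with data.</p>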



<h4 class="wp-block-heading">Resource Management</h4>



<p>Resource management has become a significant part of a data scientist’s duties. For instance, sharing an on-premises GPU server among a team of five data scientists is a challenge in itself: a lot of time is spent working out how to share those GPUs simply and effectively. Allocating compute resources for machine learning can be a major pain point and takes time away from actual data science tasks.</p>



<h4 class="wp-block-heading">Focus on Quality of Data</h4>



<p>Data science is a broad field of practices aimed at extracting meaningful insights from data in any form. Furthermore, using data science in decision-making is a good way to avoid bias. However, that may be trickier than you think. Even Google has recently fallen into the trap of showing ads for higher-paying jobs to men more often than to women. Obviously, it isn’t that Google’s data scientists are sexist; rather, the data the algorithm uses is biased because it was gathered from our interactions on the web.</p>



<h4 class="wp-block-heading">Embrace Hybrid Cloud</h4>



<p>Machine learning is compute-intensive, and a scalable machine learning foundation should be compute-agnostic. Combining public clouds, private clouds, and on-premises resources offers flexibility and agility for running AI workloads. Because workload types vary significantly, companies that build a hybrid cloud infrastructure can allocate resources more flexibly and in custom sizes. Public cloud can lower CapEx and offer the scalability required for periods of high compute demand, while for companies with strict security requirements the addition of private cloud is essential and can lower OpEx over the long term. Hybrid cloud helps you achieve the control and flexibility necessary to improve resource planning.</p>



<h4 class="wp-block-heading">Be prepared to Iterate</h4>



<p>Most models are built on a static subset of data, so they capture the conditions of the time frame when the data was gathered. Once you have one or more models deployed, they become dated over time and give less accurate predictions. Depending on how quickly the patterns in your business environment change, you will need to retrain or replace models more or less regularly.</p>
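<p>To make the retraining idea concrete, here is a minimal, hypothetical check one might run against a deployed model once verified outcomes trickle back in; the 5% tolerance is an assumption to tune per use case, not a value from the article.</p>

```python
def needs_retraining(recent_correct, baseline_accuracy, tolerance=0.05):
    """Flag a deployed model once its live accuracy decays past a tolerance.

    recent_correct    -- 1/0 outcomes for the latest scored-and-verified cases
    baseline_accuracy -- accuracy measured when the model was first deployed
    """
    if not recent_correct:
        return False  # nothing verified yet, nothing to judge
    live_accuracy = sum(recent_correct) / len(recent_correct)
    return live_accuracy < baseline_accuracy - tolerance

# Model launched at 90% accuracy; lately only 31 of 40 predictions were right.
recent = [1] * 31 + [0] * 9
print(needs_retraining(recent, 0.90))  # 0.775 < 0.85, so True
```

<p>In practice this check would run on a schedule, with the retraining itself triggered automatically or queued for a data scientist to review.</p>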
<p>The post <a href="https://www.aiuniverse.xyz/improve-machine-learning-performance-with-these-5-strategies/">IMPROVE MACHINE LEARNING PERFORMANCE WITH THESE 5 STRATEGIES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/improve-machine-learning-performance-with-these-5-strategies/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Big data centres on regional areas</title>
		<link>https://www.aiuniverse.xyz/big-data-centres-on-regional-areas/</link>
					<comments>https://www.aiuniverse.xyz/big-data-centres-on-regional-areas/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 13 Aug 2019 18:06:30 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[Digital Business]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<category><![CDATA[energy]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[Schneider Electric]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4340</guid>

					<description><![CDATA[<p>Source: afr.com Australia’s data centres are concentrated in Sydney, Melbourne and Brisbane, but that is starting to change as storage and cloud providers and their customers see <a class="read-more-link" href="https://www.aiuniverse.xyz/big-data-centres-on-regional-areas/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/big-data-centres-on-regional-areas/">Big data centres on regional areas</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: afr.com</p>



<p>Australia’s data centres are concentrated in Sydney, Melbourne and Brisbane, but that is starting to change as storage and cloud providers and their customers see the advantages of situating them in regional Australia.</p>

<p>For example, Australian-owned cloud, data centre and connectivity provider iseek is set to open a data centre in Townsville shortly.</p>



<p>Founder and managing director Jason Gomersall says the decision to build in Townsville was prompted by a number of local organisations wanting to have a data centre nearby.</p>



<p>“In theory it doesn&#8217;t matter where it is but we&#8217;re finding government organisations in particular, and also corporates, are starting to care more and more about where their data is hosted and who it’s hosted with,” Gomersall says. “It’s just the security of knowing where it is and who’s got it.”</p>

<p>Regional data centres can also boost the resilience of Australia’s data by widening the geographic area over which it is spread, he says.</p>



<p>“The internet as such was designed to distribute information globally and does that very efficiently and effectively. Then ironically we go and then concentrate all our data in one geographic location. It seems a little bit counterintuitive to me.</p>



<p>“From a policy perspective, I think the government should be looking at how they sort of spread the data centre load around the nation.”</p>



<h4 class="wp-block-heading">Developing skills</h4>



<p>Gomersall says the cost of connecting regional data centres to capital cities had been an impediment in the past, but this is now being addressed with better infrastructure.</p>



<p>Regional data centres can also play a significant role in local job creation and keeping and developing IT skills in the regions.</p>



<p>In the short term the Townsville data centre will create a handful of jobs, but iseek is already starting to consider how it will expand that workforce. Data centres also create indirect jobs, such as those in the cloud and hosted services that run within them.</p>



<p>“If we take a five to 10-year view on this – and when you build a data centre you&#8217;re taking a 20 year-plus view – I see significant job creation,” Gomersall says.</p>



<p>“We&#8217;re going to create the skills and do the training to build those skills in those areas.”</p>



<p>Joe Craparotta, vice-president secure power at Schneider Electric, says improvements in Australia’s telecommunications infrastructure outside the major carriers have made regional data centres more feasible, while the rising cost of land in capital cities has made them more desirable.</p>



<p>Another advantage of housing them in cooler climates such as Toowoomba, where the $40 million Pulse Data Centre supported by Schneider Electric, Telstra and the Queensland government opened last year, is that they draw less power to keep cool.</p>



<p>“The running cost of the data centre can be a lot more efficient and a lot more optimised than a metro data centre,” Craparotta says. “There’s almost free cooling for a big chunk of the year.”</p>



<h4 class="wp-block-heading">Close to power source</h4>



<p>Additionally, they can often have access to good power, being close to the source of generation.</p>



<p>Craparotta says keeping IT skills in the regions will become more important as regional Australia becomes more digital and the use of the internet of things increases, creating further benefits for local industry.</p>



<p>“They have a facility locally they can rely on to help them digitise, either their farm – from paddock to plate – or the local university or school, or just the local businesses in general.”</p>



<p>While he is not suggesting that regional data centres will replace metro data centres, he does expect Australia will have a mix of both.</p>



<p>“I would expect it to be aligned with the population, so I would expect that there&#8217;s more regional data centres that are built over the next five years to accommodate the third of the population that is outside metro area,” Craparotta says.</p>



<p>“I’d predict in the next two to three years, we will see a much faster acceleration as the 35 per cent of Australia that lives outside the metro areas becomes more digital and more reliant on those services.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/big-data-centres-on-regional-areas/">Big data centres on regional areas</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/big-data-centres-on-regional-areas/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>6 Design Principles for Artificial Intelligence in Digital Business</title>
		<link>https://www.aiuniverse.xyz/6-design-principles-for-artificial-intelligence-in-digital-business/</link>
					<comments>https://www.aiuniverse.xyz/6-design-principles-for-artificial-intelligence-in-digital-business/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 26 Apr 2019 05:34:28 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Act autonomously]]></category>
		<category><![CDATA[AI applications]]></category>
		<category><![CDATA[CIOs]]></category>
		<category><![CDATA[Digital Business]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3448</guid>

					<description><![CDATA[<p>Source:- gartner.com. CIOs can make the most of artificial intelligence by applying it to strategic digital business objectives. Artificial intelligence (AI) can augment or automate decisions and tasks today <a class="read-more-link" href="https://www.aiuniverse.xyz/6-design-principles-for-artificial-intelligence-in-digital-business/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/6-design-principles-for-artificial-intelligence-in-digital-business/">6 Design Principles for Artificial Intelligence in Digital Business</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- gartner.com.</p>
<div class="entry-summary">
<p>CIOs can make the most of artificial intelligence by applying it to strategic digital business objectives.</p>
</div>
<div class="entry-content">
<p>Artificial intelligence (AI) can augment or automate decisions and tasks today performed by humans, making it indispensable for digital business transformation. With AI, organizations can reduce labor costs, generate new business models, and improve processes or customer service. However, most AI technologies remain immature.</p>
<p>“To overcome this hurdle, CIOs must ensure that applications intended to serve a strategic business purpose, such as increasing revenue or scaling services, are designed for strategic plans,” says Jorge Lopez, Distinguished Vice President Analyst, Gartner.</p>
<p><span class="open-quote">“</span>AI generates insights that lead directly to business execution<span class="close-quote">”</span></p>
<p>Lopez outlines six design principles that will help CIOs and organizations evaluate all proposed AI applications with strategic intent — that is, applications intended to help achieve business results, not just operational improvements. Applications do not have to follow all six principles; however, designs that show two or fewer principles should be reconsidered.</p>
<h2>Design principle No. 1: Anticipate the future</h2>
<p>In digital business, AI generates insights that lead directly to business execution. A strategic AI application can produce granular insights into what customers, markets or other entities are likely to do in specific future situations and what the enterprise can do to influence them. The more trustworthy the insights, the more enterprises will rely on them to guide future execution systems.</p>
<h2>Design principle No. 2: Act autonomously</h2>
<p>AI applications provide value by automating existing manual processes, but can also go a step further by enabling autonomous operation of the business. A strategic AI application that acts autonomously can operate without human direction, producing significant productivity gains as it augments the work done by humans and frees them for more personalized tasks.</p>
<p>When designing AI applications for autonomous operations, ensure the AI applications are located as close as possible to the work being done, have near-real-time understanding of what’s going on and have the intelligence to make decisions on the spot.</p>
<h2>Design principle No. 3: Connect to the customer</h2>
<p>Digital businesses thrive on knowledge of markets and customers. To support digital business initiatives, AI applications must get as close to customers as possible. CIOs should take cues from digital giants that use their popular technologies powered by AI to get between companies and their customers.</p>
<p>For example, consumers often use Amazon’s Alexa and Apple’s Siri to access the capabilities of platforms from other companies. As a result, Amazon and Apple can gather better data about customers than the companies that provide the service. Similarly, CIOs should think about strategic AI applications that enable their organization to capture critical information to help build more intimate customer relationships over time.</p>
<h2>Design principle No. 4: Elevate the physical</h2>
<p>Strategic AI applications should make a difference in the physical world. AI can have a physical impact by enhancing the power of other advanced technologies. For example, 3D printing continues to grow in sophistication. GE Aviation now creates fan blades, a critical part for jet engines, using 3D printing. Adding AI can extend 3D printing to even more complex use cases, such as adjusting the printing process to accommodate manufacturing where many variables must be controlled.</p>
<h2>Design principle No. 5: Detect the invisible</h2>
<p>AI can manage operations in ways that humans cannot, and strategic AI applications should take advantage of this ability. Strategic AI applications can make decisions much faster than humans about increasingly complex situations. For example, high-speed trading applications can already move money around in nanoseconds. They are powered by algorithms that take into account variables such as stock prices, weather and political developments. This enables traders to execute millions of orders in a matter of seconds, giving their organization a huge advantage.</p>
<h2>Design principle No. 6: Manage risk</h2>
<p>Security, risk and privacy form the biggest barriers to the development of AI applications and are even more of an issue when AI applications serve a strategic business purpose. A mistake doesn’t just disrupt operations; it harms the brand or the enterprise. As a result, CIOs should define behavior limits. These limits reduce the risk of concept drift and prevent any damage the application could do.</p>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/6-design-principles-for-artificial-intelligence-in-digital-business/">6 Design Principles for Artificial Intelligence in Digital Business</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/6-design-principles-for-artificial-intelligence-in-digital-business/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>Building a data science pipeline: Benefits, cautions</title>
		<link>https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/</link>
					<comments>https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 30 Jun 2018 05:50:42 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[application development]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Digital Business]]></category>
		<category><![CDATA[IT]]></category>
		<category><![CDATA[software development]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2543</guid>

					<description><![CDATA[<p>Source &#8211; techtarget.com Enterprises are adopting data science pipelines for artificial intelligence, machine learning and plain old statistics. A data science pipeline &#8212; a sequence of actions <a class="read-more-link" href="https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/">Building a data science pipeline: Benefits, cautions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; techtarget.com</p>
<p>Enterprises are adopting data science pipelines for artificial intelligence, machine learning and plain old statistics. A data science pipeline &#8212; a sequence of actions for processing data &#8212; will help companies be more competitive in a digital, fast-moving economy.</p>
<p>Before CIOs take this approach, however, it&#8217;s important to consider some of the key differences between data science development workflows and traditional application development workflows.</p>
<p>Data science development pipelines used for building predictive and data science models are inherently experimental and don&#8217;t always pan out in the same way as other software development processes, such as Agile and DevOps. Because data science models break and lose accuracy in different ways than traditional IT apps do, a data science pipeline needs to be scrutinized to assure the model reflects what the business is hoping to achieve.</p>
<p>At the recent Rev Data Science Leaders Summit in San Francisco, leading experts explored some of these important distinctions, and elaborated on ways that IT leaders can responsibly implement a data science pipeline. Most significantly, data science development pipelines need accountability, transparency and auditability. In addition, CIOs need to implement mechanisms for addressing the degradation of a model over time, or &#8220;model drift.&#8221; Having the right teams in place in the data science pipeline is also critical: Data science generalists work best in the early stages, while specialists add value to more mature data science processes.</p>
<section class="section main-article-chapter" data-menu-title="Data science at Moody's">
<h3 class="section-title">Data science at Moody&#8217;s</h3>
<p>CIOs might want to take note from Moody&#8217;s, the financial analytics giant, which was an early pioneer in using predictive modeling to assess the risks of bonds and investment portfolios. Jacob Grotta, managing director at Moody&#8217;s Analytics, said the company has streamlined the data science pipeline it uses to create models in order to be able to quickly adapt to changing business and economic conditions.</p>
<p>&#8220;As soon as a new model is built, it is at its peak performance, and over time, they get worse,&#8221; Grotta said. Declining model performance can have significant impacts. For example, in the finance industry, a model that doesn&#8217;t accurately predict mortgage default rates puts a bank in jeopardy.</p>
</section>
<section class="section main-article-chapter" data-menu-title="Watch out for assumptions">
<h3 class="section-title">Watch out for assumptions</h3>
<p>Grotta said it is important to keep in mind that data science models are created by and represent the assumptions of the data scientists behind them. Before the 2008 financial crisis, a firm approached Grotta with a new model for predicting the value of mortgage-backed derivatives, he said. When he asked what would happen if the prices of houses went down, the firm responded that the model predicted the market would be fine. But it didn&#8217;t have any data to support this. Mistakes like these cost the economy almost $14 trillion by some estimates.</p>
<p>The expectation among companies often is that someone understands what the model does and its inherent risks. But these unverified assumptions can create blind spots for even the most accurate models. Grotta said it is a good practice to create lines of defense against these sorts of blind spots.</p>
<p>The first line of defense is to encourage the data modelers to be honest about what they do and don&#8217;t know and to be clear on the questions they are being asked to solve. &#8220;It is not an easy thing for people to do,&#8221; Grotta said.</p>
<p>A second line of defense is verification and validation. Model verification involves checking to see that someone implemented the model correctly, and whether mistakes were made while coding it. Model validation, in contrast, is an independent challenge process to help a person developing a model to identify what assumptions went into the data. Ultimately, Grotta said, the only way to know if the modeler&#8217;s assumptions are accurate or not is to wait for the future.</p>
<p>A third line of defense is an internal audit or governance process. This involves making the results of these models explainable to front-line business managers. Grotta said he was working with a bank recently that protested its bank managers would not use a model if they didn&#8217;t understand what was driving its results. But he said the managers were right to do this. Having a governance process and ensuring information flows up and down the organization is extremely important, Grotta said.</p>
</section>
<section class="section main-article-chapter" data-menu-title="Baking in accountability">
<h3 class="section-title">Baking in accountability</h3>
<p>Models degrade or &#8220;drift&#8221; over time, which is part of the reason organizations need to streamline their model development processes. It can take years to craft a new model. &#8220;By that time, you might have to go back and rebuild it,&#8221; Grotta said. Critical models must be revalidated every year.</p>
<p>To address this challenge, CIOs should think about creating a data science pipeline with an auditable, repeatable and transparent process. This promises to allow organizations to bring the same kind of iterative agility to model development that Agile and DevOps have brought to software development.</p>
<p>Transparent means that upstream and downstream people understand the model drivers. It is repeatable in that someone can repeat the process around creating it. It is auditable in the sense that there is a program in place to think about how to manage the process, take in new information, and get the model through the monitoring process. There are varying levels of this kind of agility today, but Grotta believes it is important for organizations to make it easy to update data science models in order to stay competitive.</p>
</section>
<section class="section main-article-chapter" data-menu-title="How to keep up with model drift">
<h3 class="section-title">How to keep up with model drift</h3>
<p>Nick Elprin, CEO and co-founder of Domino Data Lab, a data science platform vendor, agreed that model drift is a problem that must be addressed head on when building a data science development pipeline. In some cases, the drift might be due to changes in the environment, like changing customer preferences or behavior. In other cases, drift could be caused by more adversarial factors. For example, criminals might adopt new strategies for defeating a new fraud detection model.</p>
<p>In order to keep up with this drift, CIOs need to include a process for monitoring the effectiveness of their data models over time and establishing thresholds for replacing these models when performance degrades.</p>
<p>With traditional software monitoring, IT service management teams track metrics related to CPU, network and memory usage. With data science, CIOs need to capture metrics related to the accuracy of model results. &#8220;Software for [data science] production models needs to look at the output they are getting from those models, and if drift has occurred, that should raise an alarm to retrain it,&#8221; Elprin said.</p>
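<p>One common way to raise such an alarm, without waiting for labeled outcomes, is to compare the distribution of model scores over time. The sketch below uses the population stability index (PSI); the bucket edges and the 0.2 alarm threshold are conventional rules of thumb, not values from the article.</p>

```python
import math

def psi(baseline_scores, live_scores, cuts=(0.2, 0.4, 0.6, 0.8)):
    """Population stability index between two batches of model scores.

    Rule of thumb (an assumption to tune per model): PSI > 0.2 means the
    score distribution has shifted enough to raise a retraining alarm.
    """
    def proportions(scores):
        counts = [0] * (len(cuts) + 1)
        for s in scores:
            counts[sum(s > c for c in cuts)] += 1
        # Floor each bucket so empty buckets don't produce log(0).
        return [max(c / len(scores), 1e-4) for c in counts]

    expected = proportions(baseline_scores)
    actual = proportions(live_scores)
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual))

# Scores captured at deployment vs. scores from last week (hypothetical).
baseline = [0.1, 0.15, 0.3, 0.5, 0.55, 0.7, 0.9, 0.95]
drifted  = [0.7, 0.75, 0.8, 0.85, 0.9, 0.9, 0.95, 0.99]

print(psi(baseline, drifted) > 0.2)  # True: the score distribution shifted
```

<p>Because PSI only inspects the model's outputs, it can flag drift long before enough ground-truth labels arrive to measure accuracy directly.</p>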
</section>
<section class="section main-article-chapter" data-menu-title="Fashion-forward data science">
<h3 class="section-title">Fashion-forward data science</h3>
<p>At Stitch Fix, a personal shopping service, the company&#8217;s data science pipeline allows it to sell clothes online at full price. Using data science in various ways allows the company to find new ways to add value against deep-discount giants like Amazon, said Eric Colson, chief algorithms officer at Stitch Fix.</p>
<p>For example, the data science team has used natural language processing to improve its recommendation engines and buy inventory. Stitch Fix also uses genetic algorithms &#8212; algorithms that are designed to mimic evolution and iteratively select the best results following a set of randomized changes. These are used to streamline the process for designing clothes, coming up with countless iterations: Fashion designers then vet the designs.</p>
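<p>To illustrate the mechanism described here (selection plus randomized changes), below is a toy genetic algorithm. The "design" encoding, fitness function and parameters are invented for the example and have nothing to do with Stitch Fix's actual system.</p>

```python
import random

random.seed(7)  # make the toy run repeatable

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]  # stand-in for a "well-liked" design profile

def fitness(design):
    """Score a candidate design by agreement with the target profile."""
    return sum(d == t for d, t in zip(design, TARGET))

def evolve(pop_size=30, genes=len(TARGET), generations=40, mutation=0.1):
    """Toy genetic algorithm: keep the fitter half, mutate copies of it."""
    population = [[random.randint(0, 1) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]        # selection
        children = [[1 - g if random.random() < mutation else g  # mutation
                     for g in parent]
                    for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "of a possible", len(TARGET))
```

<p>Each generation discards the weaker half of the candidates and perturbs the stronger half, so the population drifts toward higher-scoring designs; in the article's setting, human fashion designers play the role of the final fitness check.</p>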
<p>This kind of digital innovation, however, was only possible, he said, because the company created an efficient data science pipeline. He added that it was also critical that the data science team is considered a top-level department at Stitch Fix and reports directly to the CEO.</p>
</section>
<section class="section main-article-chapter" data-menu-title="Specialists or generalists?">
<h3 class="section-title">Specialists or generalists?</h3>
<p>One important consideration for CIOs in constructing the data science development pipeline is whether to recruit data science specialists or generalists. Specialists are good at optimizing one step in a complex data science pipeline. Generalists can execute all the different tasks in a data science pipeline. In the early stages of a data science initiative, generalists can adapt to changes in the workflow more easily, Colson said.</p>
<p>Some of these different tasks include feature engineering, model training, extracting, transforming and loading (ETL) data, API integration, and application development. It is tempting to staff each of these tasks with specialists to improve individual performance. &#8220;This may be true of assembly lines, but with data science, you don&#8217;t know what you are building, and you need to iterate,&#8221; Colson said. The process of iteration requires fluidity, and if the different roles are staffed with different people, there will be longer wait times when a change is made.</p>
<p>In the beginning, at least, companies will benefit more from generalists. But once data science processes have been established, after a few years, specialists may become more efficient.</p>
</section>
<section class="section main-article-chapter" data-menu-title="Align data science with business">
<h3 class="section-title">Align data science with business</h3>
<p>Today a lot of data science models are built in silos that are disconnected from normal business operations, Domino&#8217;s Elprin said. To make data science effective, it must be integrated into existing business processes. This comes from aligning data science projects with business initiatives. This might involve things like reducing the cost of fraudulent claims or improving customer engagement.</p>
</section>
<p>In less effective organizations, management tends to start with the data the company has collected and wonder what a data science team can do with it. In more effective organizations, data science is driven by business objectives.</p>
<p>&#8220;Getting to digital transformation requires top-down buy-in to say this is important,&#8221; Elprin said. &#8220;The most successful organizations find ways to get quick wins to build political capital. Instead of twelve-month projects, quick wins will demonstrate value and get more concrete engagement.&#8221;</p>
</div>
</section>
</div>
</section>
</section>
<p>The post <a href="https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/">Building a data science pipeline: Benefits, cautions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>3 megatrends for digital business into the next decade</title>
		<link>https://www.aiuniverse.xyz/3-megatrends-for-digital-business-into-the-next-decade/</link>
					<comments>https://www.aiuniverse.xyz/3-megatrends-for-digital-business-into-the-next-decade/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 17 Aug 2017 08:21:55 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Digital Business]]></category>
		<category><![CDATA[IT leaders]]></category>
		<category><![CDATA[IT trends]]></category>
		<category><![CDATA[Reinforcement Learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=648</guid>

					<description><![CDATA[<p>Source &#8211; enterpriseinnovation.net Artificial intelligence (AI) everywhere, transparently immersive experiences and digital platforms are the trends that will provide unrivaled intelligence, create profoundly new experiences and offer platforms <a class="read-more-link" href="https://www.aiuniverse.xyz/3-megatrends-for-digital-business-into-the-next-decade/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/3-megatrends-for-digital-business-into-the-next-decade/">3 megatrends for digital business into the next decade</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>enterpriseinnovation.net</strong></p>
<p>Artificial intelligence (AI) everywhere, transparently immersive experiences and digital platforms are the trends that will provide unrivaled intelligence, create profoundly new experiences and offer platforms that allow organizations to connect with new business ecosystems, according to Gartner’s Hype Cycle for Emerging Technologies 2017.</p>
<p>&#8220;In addition to the potential impact on businesses, these trends provide a significant opportunity for enterprise architecture leaders to help senior business and IT leaders respond to digital business opportunities and threats by creating signature-ready actionable and diagnostic deliverables that guide investment decisions,&#8221; said Mike J. Walker, research director at Gartner.</p>
<p>Artificial intelligence technologies will be the most disruptive class of technologies over the next 10 years due to radical computational power, near-endless amounts of data, and unprecedented advances in deep neural networks; these will enable organizations with AI technologies to harness data in order to adapt to new situations and solve problems that no one has ever encountered previously.</p>
<p>Enterprises seeking leverage in this theme should consider the following technologies: Deep Learning, Deep Reinforcement Learning, Artificial General Intelligence, Autonomous Vehicles, Cognitive Computing, Commercial UAVs (Drones), Conversational User Interfaces, Enterprise Taxonomy and Ontology Management, Machine Learning, Smart Dust, Smart Robots and Smart Workspace.</p>
<p>Also, technology will continue to become more human-centric to the point where it will introduce transparency between people, businesses and things. This relationship will become much more entwined as the evolution of technology becomes more adaptive, contextual and fluid within the workplace, at home, and in interacting with businesses and other people.</p>
<p>Critical technologies to be considered include 4D Printing, Augmented Reality (AR), Computer-Brain Interface, Connected Home, Human Augmentation, Nanotube Electronics, Virtual Reality (VR) and Volumetric Displays.</p>
<p>Further, emerging technologies require revolutionizing the enabling foundations that provide the volume of data needed, advanced compute power, and ubiquity-enabling ecosystems. The shift from compartmentalized technical infrastructure to ecosystem-enabling platforms is laying the foundations for entirely new business models that are forming the bridge between humans and technology.</p>
<p>Key platform-enabling technologies to track include 5G, Digital Twin, Edge Computing, Blockchain, IoT Platform, Neuromorphic Hardware, Quantum Computing, Serverless PaaS and Software-Defined Security.</p>
<p>The post <a href="https://www.aiuniverse.xyz/3-megatrends-for-digital-business-into-the-next-decade/">3 megatrends for digital business into the next decade</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/3-megatrends-for-digital-business-into-the-next-decade/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Hacker Hunting: Combatting Cybercrooks with Big Data</title>
		<link>https://www.aiuniverse.xyz/hacker-hunting-combatting-cybercrooks-with-big-data/</link>
					<comments>https://www.aiuniverse.xyz/hacker-hunting-combatting-cybercrooks-with-big-data/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 14 Jul 2017 09:52:59 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[commercial tools]]></category>
		<category><![CDATA[Cybercrooks]]></category>
		<category><![CDATA[data scientists]]></category>
		<category><![CDATA[Digital Business]]></category>
		<category><![CDATA[Hacker]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=55</guid>

					<description><![CDATA[<p>Source &#8211; datanami.com When it comes to cybersecurity, the big data explosion represents both a liability and an asset. On the one hand, big collections of data represent <a class="read-more-link" href="https://www.aiuniverse.xyz/hacker-hunting-combatting-cybercrooks-with-big-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/hacker-hunting-combatting-cybercrooks-with-big-data/">Hacker Hunting: Combatting Cybercrooks with Big Data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>datanami.com</strong></p>
<p>When it comes to cybersecurity, the big data explosion represents both a liability and an asset. On the one hand, big collections of data represent a treasure trove that hackers would love to get their dirty little mitts on. But on the other hand, the capability to collect, store, and analyze huge reams of data gives the good guys a powerful tool to thwart the bad guys.</p>
<p>These are two ends of the same stick. If you’re building a data lake atop Hadoop, Amazon S3, MongoDB or any other big data platform, you sure as heck better be securing that data. After all, if that data is valuable to you – which is hopefully why you’re collecting it in the first place – then it will likely have value to some digital bottom dweller who steals for a living.</p>
<p>There are various open source and commercial tools designed to help you lock down access to that data. But there’s another element of big data security, and it involves using advanced analytics to better detect when bad guys (or bad software) are trying to do us harm.</p>
<p>Here’s how the good guys are using big data analytics to go after the black hats of the world.</p>
<h3>Machine Learning De Rigueur</h3>
<p>Machine learning techniques have been reducing spam in our email inboxes for years. Now that same technology is being applied by leaders in the cybersecurity world, including McAfee, which started using it widely with a major update to its Internet security software last fall.</p>
<p>“I do think it’s important to know that we have been shipping machine learning in our product at scale for many, many months,” says McAfee CTO Steve Grobman. “We did a massive launch last year in late October, November that was really retooling our detection technology to take full advantage of machine learning capabilities.”<img decoding="async" class="alignright  wp-image-15744" src="https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/04/Machine-Learning_lightbulb.png" sizes="(max-width: 247px) 100vw, 247px" srcset="https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/04/Machine-Learning_lightbulb.png 1249w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/04/Machine-Learning_lightbulb-300x175.png 300w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/04/Machine-Learning_lightbulb-768x448.png 768w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/04/Machine-Learning_lightbulb-1024x598.png 1024w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/04/Machine-Learning_lightbulb-200x117.png 200w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/04/Machine-Learning_lightbulb-100x58.png 100w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/04/Machine-Learning_lightbulb-120x70.png 120w" alt="" width="247" height="144" /></p>
<p>That ML capability extends from the data lakes that McAfee uses to store examples of malware and infiltration techniques, all the way out to the millions of end-points that it protects.</p>
<p>“Our consumer product line, as well as our most current enterprise product, both have these technologies built into them,” Grobman tells <em>Datanami</em>. “We’re able to classify a much larger set of malicious scenarios using non-deterministic machine learning capabilities to find either new forms of malware or threat scenarios as compared to traditional techniques.”</p>
<h3>Speed to Detection</h3>
<p>While established security firms like McAfee play large roles in cybersecurity, organizations are also turning to big data analytic software companies to get a leg up on the detection of bad stuff going on in their networks.</p>
<p>One of those analytics firms seeing increased uptake for security use cases is Pentaho. Chuck Yarbrough, a vice president with the Hitachi subsidiary, says he’s seeing an uptick in customers using its big data software to bolster their company’s security.</p>
<p>“The reality is, you’re going to get intrusions,” Yarbrough says. “There are tools to help prevent intrusions….But oftentimes the challenge is to recognize that you’ve had an intrusion, and being able to figure that out as quickly as possible.”</p>
<p><img decoding="async" class=" wp-image-2394 alignleft" src="https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2013/10/datablending_200.png" sizes="(max-width: 166px) 100vw, 166px" srcset="https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2013/10/datablending_200.png 200w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2013/10/datablending_200-149x200.png 149w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2013/10/datablending_200-74x100.png 74w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2013/10/datablending_200-120x160.png 120w" alt="" width="166" height="222" />On average it takes organizations 205 days to discover that they’ve had an intrusion into their internal systems, Yarbrough says. That means these ticks get nearly seven months to poke their nasty little heads into organizations’ digital crevices before the host figures out that something’s amiss. That’s just too long.</p>
<p>“The amount of time it takes to identify a problem is crazy,” Yarbrough says. “How can we bring that number down? Ultimately we want to get it down to hours or minutes, and part of that is blending multiple data sets together.”</p>
<p>The typical security engagement with Pentaho starts small with just a few data sources and analysts prepping the data. Then the customers will combine data sets, such as bringing employee social data or badge swipe data, to bear against log files recording network activity.</p>
<p>Once the data is landed, prepped, and mixed, the data scientists use machine learning algorithms, usually in R or Python, to discover anomalies that could indicate a breach. The Pentaho software helps to operationalize this entire data pipeline and run it repeatedly on Hadoop or other big data platforms.</p>
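<p>A minimal, pure-Python sketch of that anomaly-flagging step (illustrative only, not Pentaho&#8217;s API). It uses a robust z-score based on the median absolute deviation, so a single extreme event cannot mask itself by inflating the baseline the way it would inflate a mean and standard deviation.</p>

```python
import statistics

def flag_anomalies(values, threshold=3.5):
    """Return indices of values whose robust z-score, computed from the
    median absolute deviation (MAD), exceeds the threshold."""
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    if mad == 0:
        return []  # no spread at all: nothing stands out
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# e.g. bytes transferred per session; one session moved far more than usual
traffic = [120, 131, 118, 125, 122, 119, 5000, 127, 124]
suspects = flag_anomalies(traffic)  # -> [6]
```

<p>In practice the scored feature would be blended from several of the data sets mentioned above (log files, badge swipes, and so on) rather than a single column.</p>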
<p>&#8220;There are a lot of security systems out there that do parsing of data all the time, and they&#8217;re really good at it,&#8221; Yarbrough says. &#8220;But now we can parse that data in an appropriate way and blend in additional data sets to help it add that context. We build the data model on the fly so that either the data scientists or a security analyst can do some level of interactive analytics against it to find the cause, or the potential problems, and then be able to take action.&#8221;</p>
<h3>Context Is Key</h3>
<p>Finding context in the data is critical to stopping cybercrimes, such as fraud, says Poornima Ramaswamy, vice president of business analytics and insights at Cognizant Digital Business, a technology consultancy.</p>
<p>“We are basically trying to combine human science and data science to be able to create analytical solutions that are more human-based, deeply contextual, and can actually truly address the problems that we face,” Ramaswamy says.</p>
<div id="attachment_16808" class="wp-caption alignright"><img loading="lazy" decoding="async" class="wp-image-16808" src="https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/woman_stealing_shutterstock_Nestor-Rizhniak.jpg" sizes="auto, (max-width: 240px) 100vw, 240px" srcset="https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/woman_stealing_shutterstock_Nestor-Rizhniak.jpg 1000w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/woman_stealing_shutterstock_Nestor-Rizhniak-300x200.jpg 300w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/woman_stealing_shutterstock_Nestor-Rizhniak-768x511.jpg 768w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/woman_stealing_shutterstock_Nestor-Rizhniak-200x133.jpg 200w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/woman_stealing_shutterstock_Nestor-Rizhniak-100x67.jpg 100w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/woman_stealing_shutterstock_Nestor-Rizhniak-120x80.jpg 120w" alt="" width="240" height="160" /></div>
<p>For example, fraud conducted in-store is much different from fraud conducted over the Internet. In engagements with big box retailers, Cognizant collects data about the movement of individuals within the store, and combines that with other data to come up with a solution.</p>
<p>&#8220;We overlay the contextual data…on top of big data to look for patterns of human behavior,&#8221; Ramaswamy tells <em>Datanami</em>. &#8220;With this approach we can apply the contextual data on top of the data to get more human-based segmentation, and then your analytics and insights are a lot more realistic.&#8221;</p>
<p>A recent Cognizant project involved researching elements around fraud, and included interviews with 11 actual fraudsters to get more insight into their mindsets. What the research discovered is that cyber thieves don’t like to take unnecessary risks.</p>
<p>“The biggest characteristics for fraud are speed and liquidity and efficiency,” she says. “There’s very little [research] in looking at that end-to-end chain and looking at it as an opportunity to make it less attractive, where you’re not going to be able to liquidate the assets.”</p>
<h3>Bad Guys Using It, Too</h3>
<p>Big data analytics is helping the good guys, but the cybercriminals are getting hip to the techniques – and some are starting to use it themselves.</p>
<p>According to McAfee’s Grobman, hackers are starting to use machine learning “poisoning” techniques that basically involve throwing a lot of white noise at the good guys’ data receptors with the goal of confusing the model – and thereby throwing the good guys off their trail.</p>
<div id="attachment_16809" class="wp-caption alignleft"><img loading="lazy" decoding="async" class="wp-image-16809" src="https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/hacker_shutterstock_HQuality.jpg" sizes="auto, (max-width: 255px) 100vw, 255px" srcset="https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/hacker_shutterstock_HQuality.jpg 1000w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/hacker_shutterstock_HQuality-300x158.jpg 300w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/hacker_shutterstock_HQuality-768x405.jpg 768w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/hacker_shutterstock_HQuality-200x105.jpg 200w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/hacker_shutterstock_HQuality-100x53.jpg 100w, https://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2017/07/hacker_shutterstock_HQuality-120x63.jpg 120w" alt="" width="255" height="135" /></div>
<p>“They’re making it very difficult to tease out the signal within a very noisy set of data,” he says. “An attacker can craft things that look like an attack but are actually benign in order to intentionally create false positives that become very expensive things for the company to deal with, and are forced to lower the sensitivity.”</p>
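<p>That tactic can be illustrated with synthetic numbers: a naive detector flags events scoring above a fixed cutoff, and an attacker who floods it with benign events crafted to score like attacks inflates the false-positive rate, pressuring defenders to lower the detector&#8217;s sensitivity. All of the data below is made up.</p>

```python
import random

rng = random.Random(1)
benign = [rng.gauss(2, 1) for _ in range(1000)]   # normal activity scores
attacks = [rng.gauss(8, 1) for _ in range(10)]    # real attack scores

def false_positive_rate(events, cutoff):
    # fraction of benign-origin events the detector would flag as attacks
    return sum(e > cutoff for e in events) / len(events)

fpr_before = false_positive_rate(benign, cutoff=5.0)

# Attacker injects benign events crafted to score near the attack range.
noise = [rng.gauss(7, 1) for _ in range(500)]
fpr_after = false_positive_rate(benign + noise, cutoff=5.0)
```

<p>Each false positive the crafted noise generates is an expensive investigation for the defender, which is exactly the pressure Grobman describes.</p>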
<p>You can expect these evasion tactics to become more common in the years to come, as security professionals increasingly rely on machine learning and artificial intelligence to automate rote tasks.</p>
<p>“It’s absolutely something that’s beginning to be analyzed,” Grobman says. “We’re just now hitting the saturation point where there’s enough AI and ML in the industry for the counter measures” to work.</p>
<p>McAfee has also observed bad actors using big data technology and techniques to make their attacks more effective and produce a higher return on investment.</p>
<p>&#8220;If you look at what some of these algorithms and technologies are good at, one of the things is classification,&#8221; he says. &#8220;One of the things an attacker can do is evaluate many potential victims, then use a machine learning classifier to choose victims who have a high probability of being easy to breach, or a high probability of yielding high-value data.&#8221;</p>
<p>Think of it as Netflix for digital ne’er-do-wells. “Just as Netflix wants to suggest to you the movie you’d like the most, an attacker wants to not waste their time on victims that are difficult to break or unlikely to yield high value,” Grobman says.</p>
<p>The rapid evolution of digital technology has benefitted people in many ways. The incredible power of personalization has resulted in consumers now expecting what Forrester analyst Mike Gualtieri adeptly calls &#8220;the celebrity experience.&#8221; And thanks to machine learning, we&#8217;re able to spot bad guys lurking in our networks faster than before.</p>
<p>The post <a href="https://www.aiuniverse.xyz/hacker-hunting-combatting-cybercrooks-with-big-data/">Hacker Hunting: Combatting Cybercrooks with Big Data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/hacker-hunting-combatting-cybercrooks-with-big-data/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
			</item>
	</channel>
</rss>
