<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Information Technology Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/information-technology/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/information-technology/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 23 Sep 2019 10:26:23 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>The best AI-driven data science platform you’ve never heard of</title>
		<link>https://www.aiuniverse.xyz/the-best-ai-driven-data-science-platform-youve-never-heard-of/</link>
					<comments>https://www.aiuniverse.xyz/the-best-ai-driven-data-science-platform-youve-never-heard-of/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 23 Sep 2019 10:26:22 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Information Technology]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4535</guid>

					<description><![CDATA[<p>Source: When O’Reilly’s Strata Data Conference opens at the Javits Center in New York City later this week, some of the software vendors who helped usher in the big data era won’t be there. Take Alpine Data Labs, ClearStory Data, Hortonworks, MapR, Platfora and others, all of which have been acquired by larger vendors. But there’s one thriving data science and <a class="read-more-link" href="https://www.aiuniverse.xyz/the-best-ai-driven-data-science-platform-youve-never-heard-of/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-best-ai-driven-data-science-platform-youve-never-heard-of/">The best AI-driven data science platform you’ve never heard of</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: </p>



<p>When O’Reilly’s Strata Data Conference opens at the Javits Center in New York City later this week, some of the software vendors who helped usher in the big data era won’t be there. Take Alpine Data Labs, ClearStory Data, Hortonworks, MapR, Platfora and others, all of which have been acquired by larger vendors.</p>



<p>But there’s one thriving data science and analytics platform whose origin looks very different from those — OpenText Magellan was born at enterprise information management software maker OpenText in 2017.</p>



<p>At two years of age, OpenText Magellan has already qualified as a “strong performer” in The Forrester Wave: Notebook-Based Predictive Analytics And Machine Learning Solutions, Q3 2018, alongside Cloudera, Databricks and H2O.ai, and ahead of Anaconda and Google. (Oracle, with its DataScience.com buy, and Domino Data Lab are leaders.)</p>



<p>Forrester analysts Kjell Carlsson, Ph.D., Mike Gualtieri and others described Magellan this way:</p>



<p>“It tackles the gnarliest, most underleveraged valuable data…” and “(OpenText Magellan’s use of Jupyter) is the workbench interface many data science teams expect, and OpenText adds value by also giving them access to unstructured data with sophisticated text analytics,” they wrote.</p>



<p>And that’s an important differentiator, according to Zachary Scott Jarvinen, the product marketing lead for AI at OpenText. After all, there’s a wealth of information stored in documents like company reports, contracts, email, marketing material, whitepapers, resumes, forms filled out during employee recruitment and onboarding, text messages, messaging on social networks, surveys, and so on. Most enterprises aren’t leveraging it fully to gain critical insights.</p>



<p>That’s partly because the sheer volume of unstructured content is massive, and partly because it is being created at enormous velocity — think about how much textual information you create every day, from everything you write down to every conversation that is heard, recorded or archived by a machine, plus video. OpenText Magellan helps enterprises leverage it all through artificial intelligence and machine learning algorithms running on big data platforms. It transforms big data and big content into self-service data visualizations for users across organizations to increase automation and operational efficiency and to maximize revenue. And it does all of this in a way that is governed and compliant.</p>
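


<p>To make the idea concrete, here is a rough sketch (our illustration, not OpenText’s code) of the kind of text analytics described above: turn unstructured snippets into TF-IDF vectors with scikit-learn and group similar documents so analysts can browse themes. The documents and cluster count are invented for the example.</p>



<pre><code>from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical snippets standing in for contracts, emails, survey answers, etc.
docs = [
    "customer reports delayed shipment and requests refund",
    "invoice attached for services rendered in august",
    "survey response: the onboarding process was confusing",
    "refund issued after the shipment arrived damaged",
]

# Vectorize the unstructured text, then group similar documents into themes.
tfidf = TfidfVectorizer(stop_words="english")
features = tfidf.fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for label, doc in sorted(zip(labels, docs)):
    print(label, doc)
</code></pre>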



<p>“We’re pretty good at this, we’ve been working with content for quite a while,” says Jarvinen, alluding to the fact that OpenText has been helping organizations manage content since 1991. Not only that, but OpenText took on text mining in 2006, which was noted as a bit of a breakthrough by the likes of O’Reilly Media’s Tim O’Reilly.</p>



<p>And that was at a time when some people still thought of AI as something out of science fiction, like E.T., and big data crunching technologies were too expensive and too hard to use for anyone who wasn’t a resource-rich computer scientist.</p>



<p>Fast forward to the present day, and the use of open source Apache Spark — a unified analytics engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing — and its machine learning library MLlib is commonplace. OpenText has combined those with technologies it obtained through the acquisitions of Actuate (which leverages open source Eclipse BIRT) and Nstein.</p>
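


<p>As a minimal sketch of what the Spark plus MLlib combination looks like in practice (a generic illustration, not OpenText Magellan code), the PySpark pipeline below tokenizes short text snippets, builds TF-IDF features and fits a classifier; the dataset is made up for the example.</p>



<pre><code>from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-text-sketch").getOrCreate()

# Tiny, made-up labelled dataset standing in for real enterprise content.
df = spark.createDataFrame([
    ("contract renewal terms for key supplier", 1.0),
    ("office picnic scheduled for next friday", 0.0),
    ("supplier liability clause requires review", 1.0),
    ("team lunch menu and parking details", 0.0),
], ["text", "label"])

# Tokenize, hash to term frequencies, weight with IDF, then classify.
pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),
    HashingTF(inputCol="words", outputCol="tf"),
    IDF(inputCol="tf", outputCol="features"),
    LogisticRegression(maxIter=10),
])

model = pipeline.fit(df)
model.transform(df).select("text", "prediction").show(truncate=False)
</code></pre>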



<p>Magellan brings something new(er) to the table for data scientists, business analysts and decision makers: AI and analytics capabilities on unstructured data.</p>



<p>The advantage that OpenText Magellan provides to its customers is the ability to operationalize models in business-friendly interfaces in a vast portfolio of enterprise information applications and services, including customer experience management, business network, digital process automation, and content management, according to the Forrester analysts.</p>



<p>If OpenText Magellan has a big problem, it’s that not enough data workers know it exists, a fact that’s illustrated in the almost non-existent circle in the Forrester Wave graphic above. Jarvinen aims to change that. He will be hosting a 10-minute pop-up talk, <em>How to get value from the 80% of data you’re not using and make it easier to get started with Enterprise AI </em>on the Expo floor at Strata on Wednesday. It might be worth checking out.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-best-ai-driven-data-science-platform-youve-never-heard-of/">The best AI-driven data science platform you’ve never heard of</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-best-ai-driven-data-science-platform-youve-never-heard-of/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Mphasis announces Deep Learning algorithms on AWS Marketplace for Machine Learning</title>
		<link>https://www.aiuniverse.xyz/mphasiss-announces-deep-learning-algorithms-on-aws-marketplace-for-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/mphasiss-announces-deep-learning-algorithms-on-aws-marketplace-for-machine-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 01 Aug 2019 06:12:20 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Information Technology]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Mphasis]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4190</guid>

					<description><![CDATA[<p>Source: crn.in Mphasis, an Information Technology (IT) solutions provider specializing in cloud and cognitive services announced the availability of new Deep Learning algorithms on Amazon Web Services (AWS) Marketplace for Machine Learning. The on-demand solutions target practical enterprise use cases such as influence analytics, insurance claims analysis, payment card fraud, and image analytics for supply <a class="read-more-link" href="https://www.aiuniverse.xyz/mphasiss-announces-deep-learning-algorithms-on-aws-marketplace-for-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/mphasiss-announces-deep-learning-algorithms-on-aws-marketplace-for-machine-learning/">Mphasis announces Deep Learning algorithms on AWS Marketplace for Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: crn.in</p>



<p>Mphasis, an Information Technology (IT) solutions provider specializing in cloud and cognitive services, announced the availability of new Deep Learning algorithms on Amazon Web Services (AWS) Marketplace for Machine Learning. The on-demand solutions target practical enterprise use cases such as influence analytics, insurance claims analysis, payment card fraud, and image analytics for supply chain and logistics. They are available for download and free trial on the&nbsp;AWS Marketplace for Machine Learning website.</p>
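


<p>For readers wondering how an algorithm listed on AWS Marketplace for Machine Learning is typically consumed, the sketch below assumes the SageMaker Python SDK; the algorithm ARN, IAM role, S3 paths and instance types are placeholders, and the channel names required by the actual Mphasis listings may differ.</p>



<pre><code>import sagemaker
from sagemaker.algorithm import AlgorithmEstimator

# Placeholder ARN for an algorithm subscribed to via AWS Marketplace.
algorithm_arn = "arn:aws:sagemaker:us-east-1:111122223333:algorithm/example-listing"

estimator = AlgorithmEstimator(
    algorithm_arn=algorithm_arn,
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=sagemaker.Session(),
)

# Train against data staged in S3, then host the resulting model for inference.
estimator.fit({"training": "s3://example-bucket/claims/train/"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
</code></pre>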



<p>Solutions on AWS Marketplace combined with AWS Machine Learning services help users simplify data experimentation, formulate deeper insights from disparate sources across their data estate, and foster new levels of productivity and efficiency for a wide variety of use cases.</p>



<p>“AWS Marketplace is helping Mphasis put the power of machine learning into the hands of developers virtually everywhere,” said Dr. Jai Ganesh, Senior Vice President &amp; Head, Mphasis NEXT Labs. “Our solutions target practical, high-value use cases that can deliver immediate impact and ROI in critical enterprise business processes and operations. And users can deploy them with the speed and security provided by AWS.”</p>



<p>Mphasis is an Advanced Consulting Partner in the AWS Partner Network (APN) and leverages AWS with customers across its business. The company applies machine learning and artificial intelligence to allow companies to gain hidden value from their enterprise data. By linking datasets with cognitive computing, Mphasis helps companies accelerate process transformation and gain competitive advantage.</p>
<p>The post <a href="https://www.aiuniverse.xyz/mphasiss-announces-deep-learning-algorithms-on-aws-marketplace-for-machine-learning/">Mphasis announces Deep Learning algorithms on AWS Marketplace for Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/mphasiss-announces-deep-learning-algorithms-on-aws-marketplace-for-machine-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why &#8220;Artificial Intelligence&#8221; Needs A Smarter Name</title>
		<link>https://www.aiuniverse.xyz/why-artificial-intelligence-needs-a-smarter-name/</link>
					<comments>https://www.aiuniverse.xyz/why-artificial-intelligence-needs-a-smarter-name/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 28 Feb 2019 06:10:13 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[Information Technology]]></category>
		<category><![CDATA[Smart]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3367</guid>

					<description><![CDATA[<p>Source- worldcrunch.com In the Harry Potter series, evoking even the name of the villain — Voldemort — spreads terror. In real life, Voldemort doesn&#8217;t exist. But simple words can still be enough to provoke mental instability, or even a panicked fear. Such is the case today for the term Artificial Intelligence, AI for short. The phrase covers a range <a class="read-more-link" href="https://www.aiuniverse.xyz/why-artificial-intelligence-needs-a-smarter-name/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-artificial-intelligence-needs-a-smarter-name/">Why &#8220;Artificial Intelligence&#8221; Needs A Smarter Name</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source- <a href="https://www.worldcrunch.com/tech-science/why-artificial-intelligence-needs-a-smarter-name" target="_blank" rel="noopener">worldcrunch.com</a></p>
<p>In the <em>Harry Potter </em>series, evoking even the name of the villain — Voldemort — spreads terror. In real life, Voldemort doesn&#8217;t exist. But simple words can still be enough to provoke mental instability, or even a panicked fear. Such is the case today for the term Artificial Intelligence, AI for short.</p>
<p>The phrase covers a range of incredibly effective tools, but also evokes such strong emotions of excitement and fear that people forget what it is in the first place — at the risk of causing errors, blockages and frenzies. It is therefore essential that we stop talking about AI, assuming it isn’t too late.</p>
<p>When researchers invented the tools of Information Technology, they also created the vocabulary we use to refer to them. According to the <em>Robert</em> dictionary, the word &#8220;computer&#8221; made its first appearance in text in 1955. The first &#8220;internet&#8221; reference came 40 years later, in 1995. But &#8220;Artificial Intelligence&#8221; comes from a combination of two old terms, both of which have a strong significance.</p>
<p>Initially, it was a marketing move. In 1955, four U.S. researchers sent out invitations for a research seminar, held the following year at Dartmouth College, to conduct a &#8220;study on Artificial Intelligence,&#8221; assuming that &#8220;every aspect of learning or any other characteristic of intelligence can, in principle, be described so precisely that it is possible for a machine to simulate it.&#8221; The goal was to attract researchers and funding, but the name stuck.</p>
<p>Recently, experts have put forward other suggestions. Joel de Rosnay, a specialist in future trends, proposes &#8220;auxiliary intelligence.&#8221; The researcher Luc Julia, director of Samsung&#8217;s Laboratory of Artificial Intelligence, prefers &#8220;augmented intelligence.&#8221; And consultant Pierre Blanc likes &#8220;algorithmic computing.&#8221;</p>
<p>Blanc is right to want to replace the word &#8220;intelligence,&#8221; which is what poses the main problem.</p>
<p>Intelligence has long been considered a distinctive trait of humanity. In the 17th century (again according to the <em>Robert</em> dictionary) the word was employed to designate a &#8220;human being as a thinking being, capable of reflection.&#8221; With artificial intelligence, a machine is supposed to acquire this human capacity. It could distinguish, discuss, and even decide, like HAL 9000, the famous computer from Stanley Kubrick&#8217;s <em>2001: A Space Odyssey</em>, released in 1968.</p>
<p>Machines could, therefore, supplant man not just in physical capability (as has been the case for centuries) but also intellectually. According to an Ipsos survey for BCG Gamma, 50% of French and German people fear the effects of AI on their jobs, as do 47% in the United States, 45% in Britain, and 38% in Spain.</p>
<p><strong>A powerful tool</strong></p>
<p>For the moment, AI remains a myth. The concept presented in that Dartmouth seminar has yet to materialize. Machines &#8220;know,&#8221; of course, how to beat the world&#8217;s most capable humans at StarCraft II, Jeopardy!, or the Chinese game of Go. But these are extremely narrow competencies, which devour infinitely greater amounts of energy than is needed by the human brain. The most powerful machines in the world are like mathematical geniuses incapable of stopping someone in the road to ask directions.</p>
<p>And often, hidden behind artificial intelligence, is human stupidity — as Microsoft demonstrated in 2016 with its chat software Tay, which was disconnected from Twitter due to horrific sexist and racist content less than one day after being put in service. Or by Amazon in 2018, with its fully automated recruitment system that systematically screened out women&#8217;s résumés.</p>
<p>So what is really behind what we conveniently call Artificial Intelligence? The truth is simple: it&#8217;s a combination of the internet and the computer! The computer, with an information processing capacity that has grown for half a century at the exponential rate of Moore&#8217;s Law (the density of transistors on a chip doubles every two years). And the internet, with its colossal capacity to gather and transmit data. As spelled out by Michel Volle, co-president of the Institute of Economic and Statistical Training: &#8220;Artificial Intelligence = Statistics + Computing.&#8221;</p>
<p>Short and sweet, this equation needs one further point to be complete: the calculating power and the mountains of data are exploited by forms of automated learning (&#8220;machine learning&#8221; and then &#8220;deep learning&#8221;). This is how researchers were able to make great strides over the past decade in visual and vocal recognition. They will surely make more spectacular progress in the years to come.</p>
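<p>As a back-of-the-envelope illustration of the Moore&#8217;s Law figure above (our own arithmetic, assuming the two-year doubling period the author cites), half a century of doublings compounds as follows:</p>
<pre><code># Rough arithmetic: one doubling every two years, sustained for 50 years.
years = 50
doublings = years // 2
growth = 2 ** doublings
print(f"~{growth:,}x increase in transistor density")  # ~33,554,432x
</code></pre>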
<p>And yet, what we call Artificial Intelligence is still nothing but a tool. A tool of fantastic power that will transform how businesses are organized, but a tool nonetheless. It&#8217;s a &#8220;technological platform,&#8221; explain economists Daron Acemoglu and Pascual Restrepo, that &#8220;could be deployed not just to automate, but also to reorganize production to create new heights of human productivity.&#8221; But here too, Artificial Intelligence will only do that which human intelligence decides.</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-artificial-intelligence-needs-a-smarter-name/">Why &#8220;Artificial Intelligence&#8221; Needs A Smarter Name</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-artificial-intelligence-needs-a-smarter-name/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence: Most adopters worry about security</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-most-adopters-worry-about-security/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-most-adopters-worry-about-security/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 14 Sep 2017 07:27:02 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI technology]]></category>
		<category><![CDATA[cognitive technology]]></category>
		<category><![CDATA[global survey]]></category>
		<category><![CDATA[Information Technology]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1118</guid>

					<description><![CDATA[<p>Source &#8211; economictimes.indiatimes.com Mumbai: Even as a debate rages over possible consequences of artificial intelligence (AI), a global survey has found managing security risk arising out of its adoption as the biggest concern for companies. &#8220;Effectively managing the security risk of AI systems is of paramount importance for the majority of industries,&#8221; the global survey undertaken by <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-most-adopters-worry-about-security/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-most-adopters-worry-about-security/">Artificial Intelligence: Most adopters worry about security</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>economictimes.indiatimes.com</strong></p>
<p>Mumbai: Even as a debate rages over possible consequences of artificial intelligence (AI), a global survey has found that managing the security risk arising out of its adoption is the biggest concern for companies.</p>
<p>&#8220;Effectively managing the security risk of AI systems is of paramount importance for the majority of industries,&#8221; said the global survey undertaken by TCS, the country&#8217;s largest software exporter.</p>
<p>The survey added that companies in the automotive, banking and financial services, consumer goods, technology, industrial manufacturing, and telecoms said managing this risk is the biggest determinant of success on AI investments.</p>
<p>Getting managers and employees to trust the advice provided by AI systems, and getting employees to learn about and adopt the new processes and systems that AI requires, are also very important, it said.</p>
<p>However, addressing people&#8217;s fears about losing their jobs is not ranked as a major barrier by the survey of 835 executives across 13 sectors.</p>
<p>The study comes amid growing concerns among a section of the tech world, which feels AI has the potential to have unwelcome consequences and is hence making a pitch for regulation, while others feel that the industry will continue finding the required solutions by itself.</p>
<p>The survey said all the industries, led by insurance, have started to invest in AI systems.</p>
<p>The insurance industry is averaging USD 124 million in AI spending, followed by USD 95 million, while the cross-sectoral average was USD 70 million, the survey said.</p>
<p>As many as 80 per cent of the 835 executives polled said they are investing in AI while the remaining 20 per cent said they have plans to start focusing on this aspect by 2020, it said.</p>
<p>&#8220;All industries see AI technology as a major game-changer on their business competitiveness by 2020,&#8221; TCS chief technology officer K Ananth Krishnan said.</p>
<p>When going by investments in AI as a percentage of revenue, the consumer goods industry led with 0.66 per cent, followed by utilities (0.53 per cent), insurance (0.52 per cent) and telecoms (0.39 per cent).</p>
<p>The survey claimed all respondents spoke about revenue benefits from AI investments, with an average revenue increase of 17 per cent across all the 13 sectors. AI had an impact on costs as well, with a 12 per cent average reduction reported by those polled.</p>
<p>Telecom is the biggest gainer when it comes to return on the AI investments, with 25 per cent revenue improvement and 20 per cent cost reductions reported by those polled.</p>
<p>AI is used the most by the information technology function, with high-tech and utilities companies more frequently using cognitive technology in their IT operations, the survey said.</p>
<p>However, only 29 per cent of the companies are using AI in sales at present even though some segments like consumer goods (52 per cent) and retailers (49 per cent) reported higher tendency to use it.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-most-adopters-worry-about-security/">Artificial Intelligence: Most adopters worry about security</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-most-adopters-worry-about-security/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>Integration: the key to a digital mining future</title>
		<link>https://www.aiuniverse.xyz/integration-the-key-to-a-digital-mining-future/</link>
					<comments>https://www.aiuniverse.xyz/integration-the-key-to-a-digital-mining-future/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 13 Sep 2017 06:34:20 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[digital mining]]></category>
		<category><![CDATA[Information Technology]]></category>
		<category><![CDATA[integration framework]]></category>
		<category><![CDATA[research and development]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1093</guid>

					<description><![CDATA[<p>Source &#8211; australianmining.com.au Improving how technology solutions are integrated will be critical for mining to effectively transform into a digital industry, according to RPMGlobal chief executive officer Richard Mathews. RPM has grown into a leading Australian developer of mining software despite the commodities downturn by continuing to drive investment towards the research and development (R&#38;D) of new <a class="read-more-link" href="https://www.aiuniverse.xyz/integration-the-key-to-a-digital-mining-future/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/integration-the-key-to-a-digital-mining-future/">Integration: the key to a digital mining future</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>australianmining.com.au</strong></p>
<p>Improving how technology solutions are integrated will be critical for mining to effectively transform into a digital industry, according to RPMGlobal chief executive officer Richard Mathews.</p>
<p>RPM has grown into a leading Australian developer of mining software despite the commodities downturn by continuing to drive investment towards the research and development (R&amp;D) of new and integrated solutions.</p>
<p>While much of the industry focused on lowering operational costs, RPM increased its R&amp;D investment to $13 million in the 2017 financial year. Since 2012, it has spent close to $67 million on product development and acquisitions.</p>
<p>This commitment has paid off, with RPM delivering three new software products – Open Cut Coal, Stratigraphic Metals and Operations Manager – during fiscal 2017.</p>
<p>RPM Global also launched its Underground Metals Solution (UGMS) product – a scheduling solution tailored for underground metals mines – in July.</p>
<p>The company’s growing product portfolio has helped it double sales of software licenses over the past year as market conditions have improved.</p>
<p>While RPM has been rewarded for its ambitious strategy, for Mathews the primary goal is to bring enterprise integration to the mining industry.</p>
<p>“For our customers, who are also using other vendors’ products, we want them to be able to integrate our software with those products as well to ensure that the information they use to make operational decisions is accurate and truly reflects what is happening on their operational sites,” Mathews, who will present on collaboration at the International Mining and Resources Conference (IMARC) in Melbourne next month, said.</p>
<p>“That’s why we have been working with industry partners, like BHP, Komatsu, Caterpillar and Schneider, to introduce standard messaging formats across the mining industry using the ISA-95 standard.</p>
<p>“Once all of the software vendors, including ourselves, use the same messaging formats, then data will be able to flow through all of the applications right across the mining value chain.”</p>
<p>Mathews, who joined RPM in 2012, considers collaboration to be crucial for the mining industry. He believes that successful integration of technological solutions will be the catalyst to collaboration and this will only occur if mining companies demand their suppliers use open information technology standards.</p>
<p>“When you look at the industry five years ago – before we got involved – the software space was very fragmented, particularly in the operations space. All the solutions were desktop solutions that relied heavily on Excel and manual data entry; data was certainly not shared across the mining value chain, it was kept in silos,” Mathews said.</p>
<p>“Collaboration has to occur in this industry. If we look at manufacturing, their success and ability to deliver the operational excellence they have today is due to messaging standards, collaboration and an enterprise approach across the entire supply chain.</p>
<p>“Just as large ERP vendors, such as SAP, have standardised financials and HR across the corporate entities, RPMGlobal is working to deliver collaboration ‘below the line’ in the operational space. The only way for the miners to benefit from the digital landscape is to collaborate.”</p>
<p>The company’s approach to growing its business through heavy investment in software R&amp;D has bucked the industry trend. Most suppliers to mining have instead focused on reducing costs.</p>
<p>Unlike most companies in the post-boom environment, RPM increased its investment to be ready for when a turnaround arrived.</p>
<p>With conditions improving since late 2016, RPM is now enjoying the benefits of its investment strategy, as its financial results have revealed.</p>
<p>“Our belief was that when the market came right miners wouldn’t want to keep doing what they were doing five years ago – they would want to have the technical capability right across the organisation,” Mathews said.</p>
<p>“We have also purchased two companies and acquired the rights to four software code bases over the last four years as well. We haven’t just increased our own investment but we have also grown inorganically through M&amp;A.”</p>
<p>RPM’s R&amp;D team has, therefore, expanded significantly too, growing from around 30 employees when Mathews started five years ago to more than 100 personnel (and counting) now.</p>
<p>Just don’t expect this growth to get in the way of RPM’s technological target for the mining industry – enterprise-wide integration.</p>
<p>“Everyone has their own definition of the digital mine and what the benefits are for their business. Whilst business outcomes may differ, critical to the digital mine will be integration and collaboration,” Mathews said.</p>
<p>“When we started out, we said the key thing is going to be having an integration platform for mining.</p>
<p>“It is not about having a relationship between just us, or having integration between us – it is about putting all the data on an integration framework for anyone to use, which we have successfully built with our Enterprise Integration Platform (EPF) that is deployed globally at a number of our key customers delivering them true operational excellence, shareholder value and above all, visibility and improved safety.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/integration-the-key-to-a-digital-mining-future/">Integration: the key to a digital mining future</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/integration-the-key-to-a-digital-mining-future/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Need Data Science CyberInfrastructure? Check with RENCI’s xDCI Concierge</title>
		<link>https://www.aiuniverse.xyz/need-data-science-cyberinfrastructure-check-with-rencis-xdci-concierge/</link>
					<comments>https://www.aiuniverse.xyz/need-data-science-cyberinfrastructure-check-with-rencis-xdci-concierge/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 07 Sep 2017 07:10:54 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[data management]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[hydrologic data]]></category>
		<category><![CDATA[Information Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=990</guid>

					<description><![CDATA[<p>Source &#8211; hpcwire.com For about a year the Renaissance Computing Institute (RENCI) has been assembling best practices and open source components around data-driven scientific research to create a rapidly deployable “cross-disciplinary Data CyberInfrastructure” dubbed xDCI. Funded in part by NSF, and quite far along in its first two test cases, xDCI could become a powerful enabler <a class="read-more-link" href="https://www.aiuniverse.xyz/need-data-science-cyberinfrastructure-check-with-rencis-xdci-concierge/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/need-data-science-cyberinfrastructure-check-with-rencis-xdci-concierge/">Need Data Science CyberInfrastructure? Check with RENCI’s xDCI Concierge</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>hpcwire.com</strong></p>
<p>For about a year the Renaissance Computing Institute (RENCI) has been assembling best practices and open source components around data-driven scientific research to create a rapidly deployable “cross-disciplinary Data CyberInfrastructure” dubbed xDCI. Funded in part by NSF, and quite far along in its first two test cases, xDCI could become a powerful enabler of research in the age of growing datasets and the proliferation of data sources that often turn even small research projects into big data management and analysis challenges.</p>
<p>“xDCI was really driven by our experience working with different groups of scientists and different disciplines. We saw there were many common components needed and there were also sort of unique capabilities needed depending upon the domain. So RENCI has been putting together these domain-specific cyberinfrastructures for many years,” said xDCI project lead Ashok Krishnamurthy, also deputy director, RENCI.</p>
<p>“The idea of xDCI was to have a collection of components that we have experience with and that we know interoperate well. Then, depending upon the particular need of the group that wants to stand up a community data sharing and collaborative analysis facility, we would put these components together into a solution that is particular to that community. We would also work with them in terms of how do you actually take it out into the community and bring other researchers to start using those services.”</p>
<p>Such thinking is in line with RENCI’s mission to be a “leader in data science and an essential catalyst for data-driven discoveries leading to better health, a safer environment, and improved economic and business successes.” Certainly expansive goals.</p>
<p>RENCI describes xDCI as a “technology framework that enables their research communities to rapidly deploy robust cyberinfrastructure that can easily ingest, move, share, analyze and archive scientific data in all its varieties.” Besides open source and specific RENCI-developed technologies, tools such as Docker and HTCondor are also leveraged by the xDCI team. The specific xDCI stack elements at this writing are bulleted here, and the deployment process sketched out below:</p>
<ul>
<li><strong>iRODS</strong> – Open source data management, providing automated data virtualization, data discovery and metadata templates via an integrated rules engine.</li>
<li><strong>CyVerse DE</strong> – Workflows, apps, and analysis environment.</li>
<li><strong>xDCIshare</strong> – Data sharing among team members and with the larger community.</li>
<li><strong>Jupyter Notebook</strong> – Creation and sharing of live code, equations, visualizations and text.</li>
<li><strong>Query Arrow</strong> – Semantically unified SQL and NoSQL query and update system.</li>
<li><strong>DataBridge</strong> – Discovery of relevant datasets and “dark” data.</li>
<li><strong>SciDAS</strong> – Fluid and flexible infrastructure for working with and analyzing large-scale data.</li>
</ul>
<p><img fetchpriority="high" decoding="async" class="size-large wp-image-39559 aligncenter" src="https://6lli539m39y3hpkelqsm3c2fg-wpengine.netdna-ssl.com/wp-content/uploads/2017/09/XDCI.RENCI_-1024x605.png" sizes="(max-width: 555px) 100vw, 555px" srcset="https://6lli539m39y3hpkelqsm3c2fg-wpengine.netdna-ssl.com/wp-content/uploads/2017/09/XDCI.RENCI_-1024x605.png 1024w, https://6lli539m39y3hpkelqsm3c2fg-wpengine.netdna-ssl.com/wp-content/uploads/2017/09/XDCI.RENCI_-150x89.png 150w, https://6lli539m39y3hpkelqsm3c2fg-wpengine.netdna-ssl.com/wp-content/uploads/2017/09/XDCI.RENCI_-300x177.png 300w, https://6lli539m39y3hpkelqsm3c2fg-wpengine.netdna-ssl.com/wp-content/uploads/2017/09/XDCI.RENCI_-768x454.png 768w" alt="" width="555" height="328" /></p>
<p>It’s important to note that xDCI offers considerably more than just a set of tools. RENCI is formalizing the effort to help researchers stand up xDCI environments with what it calls a set of “concierge” services. Here is a snapshot of concierge services taken from the xDCI web page – bear in mind the process is still taking shape:</p>
<ul>
<li><strong>Technology Concierge</strong> – The xDCI Information Technology (IT) Concierge staff will work with your team to move your project from concept to a productive and efficient combination of select xDCI technologies, in effect creating a unique xDCI-based architecture for your project.</li>
<li><strong>Software Concierge</strong> – The xDCI Software Concierge staff will serve as technical project managers and work with your team to deploy your project on the appropriate cloud infrastructure. Once deployed, they will work with your team to instill sustainable software best practices to ensure efficient and persistent continuation of your project over time.</li>
<li><strong>Data Science Concierge</strong> – The xDCI Data Science Concierge staff will offer data science expertise in how to extract maximum scientific and community benefit from your deployment of xDCI. These staff are experts in data science analytics and visualization using the latest technologies and methods that will accelerate your community to new levels of achievement.</li>
<li><strong>Sustainability Concierge</strong> – The xDCI Sustainability Concierge staff will assist in the identification of requirements necessary to support future scenarios including migration to new cloud infrastructure, business plans for identifying future funding models, and identification of technology resulting from use of xDCI which may feed back into xDCI itself.</li>
</ul>
<p>The BRAIN-I project is the most advanced xDCI pilot; it is essentially a cyberinfrastructure for dealing with large image files and supports work by Jason Stein, an assistant professor in the department of genetics at UNC-Chapel Hill and a researcher at UNC’s Neuroscience Center.</p>
<p>The research itself is fascinating. A few years back, a method for removing lipid (fatty) tissue from the brain was developed. In its place, a more transparent gel is introduced, creating a kind of “see-through brain”. The work is with mice. Using light sheet microscopy, researchers can scan thin layers, or sections, of the brain. Using different stains highlights different cell types. Obtaining images this way makes it much easier to align sections and trace neuron paths correctly, and to create composite 3D images of the brain. If you do this for a set of mice bred with a particular disease, say Alzheimer’s disease, you can compare the images to those from a set of normal (wild type) mice.</p>
<p>It also turns out, not surprisingly, that handling and processing mouse brain image data is a huge challenge.</p>
<p>“Each of the 3-D images for a mouse brain turns out to be 2-to-4 terabytes and there are multiple images for each experiment. In a typical experiment, they may have bred mice, let’s say 24 bred mice, that have genetic characteristics that they could image. They would have another 24 mice for control wild type mice that they would image. Suddenly for a single experiment you would have several hundred of these images, each of which is 2-4 TB in size. That causes significant problems in terms of data management, in terms of how you do computation on it, how do you analyze on it,” said Krishnamurthy.</p>
<p>“The images we are analyzing are 1 micron thick, and a cell is about 10 microns, so we have many images with very detailed resolution,” said Stein. “All that data has to go somewhere, but it can’t fit onto individual hard drives. If you want to share an image with a colleague the process can take days or weeks.”</p>
<p>Enter RENCI and xDCI.</p>
<p>The BRAIN-I system takes the 3D microscopy images and replicates that data onto a server at RENCI that runs the integrated Rule Oriented Data System (iRODS). Once ingested into an iRODS data grid, the data is validated, metadata tags are assigned to it and relevant inputs and processes are documented to provide an historical record of the data and its origins. Using iRODS, each image can be linked to its biological sample, tracked from its creation in the lab through final analysis, and made discoverable and reproducible for future research.</p>
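<p>A minimal sketch of that ingest-and-tag step, assuming the open source python-irodsclient package (the hostname, zone, file name and metadata fields are hypothetical, not BRAIN-I’s actual configuration):</p>
<pre><code>from irods.session import iRODSSession

# Hypothetical connection details for an iRODS zone.
with iRODSSession(host="irods.example.org", port=1247, user="brain_i",
                  password="***", zone="exampleZone") as session:
    local_file = "mouse_042_section_0001.tif"
    irods_path = "/exampleZone/home/brain_i/" + local_file

    # Ingest the image into the data grid, then attach descriptive metadata
    # so the section stays linked to its biological sample.
    session.data_objects.put(local_file, irods_path)
    obj = session.data_objects.get(irods_path)
    obj.metadata.add("specimen", "mouse_042")
    obj.metadata.add("stain", "DAPI")
    obj.metadata.add("section_thickness_um", "1")
</code></pre>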
<p>Analysis and visualization tools can be used in the BRAIN-I system thanks to a collaboration with CyVerse, a cyberinfrastructure initiative based at the University of Arizona that offers an easy to use, web-based interface for handling computing and data analysis. Using the CyVerse Discovery Environment (DE), BRAIN-I users can launch analysis codes that are packaged as Docker images. Docker is a platform that packages software into a lightweight container that includes everything needed to run the software and is guaranteed to work the same regardless of who runs it, their location or the type of computer they are using.[i]</p>
<p>xDCI also helped BRAIN-I researchers develop a custom microscopy data ingest workflow. Image analysis takes place on RENCI HPC resources and leverages GPUs there, but in principle an xDCI community could be stood up anywhere and work with its preferred compute resources. RENCI also has access to a duplicate system in the Information Technology Services at UNC that has more GPUs. “We have the two locations but technically we could reach into any place that computing resources are available,” said Krishnamurthy.</p>
<p>RENCI director Stan Ahalt and Krishnamurthy emphasize that the long-term goals for xDCI are broad. Think of xDCI as a software stack for data-driven science that lets users select what suits their community needs. It’s even possible for experienced users to simply download xDCI components. “That has happened. As an example, the National Cancer Institute is using some of the components to manage their data internally and use them,” said Ahalt.</p>
<p>The use cases vary. The BRAIN-I infrastructure includes automated data ingest, image analysis and analytics, automated data management, data discovery, data publication, and cross-institutional collaboration. For another xDCI pilot – My Health Peace of Mind (MyHPOM), an online system to support sharing advance health care directives – the primary use cases include document sharing, document versioning, individual and group access controls, comments and ratings on documents, secure storage and archiving, and cross-institutional collaboration.</p>
<p>While these efforts are life sciences centric, the intent is to accommodate any domain. In fact, and along those lines, RENCI borrowed work from another NSF-funded project, HydroShare, a physical sciences effort, for use in xDCI. HydroShare[ii] is part of the NSF’s Software Infrastructure for Sustained Innovation (SI2) program; its goal is “to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop, or perform analyses in a distributed computing environment that may include grid, cloud, or high performance computing.”</p>
<p>Ray Idaszak, HydroShare project lead and co-leader of xDCI, said, “The HydroShare software has about a half million lines of code, required quite a bit of NSF funding, and had ten teams contribute to it. We’re taking that code base and we are generalizing it and we are making it part of the xDCI technology stack. So for example the advanced healthcare project will be based on HydroShare code that has been repurposed so it can serve a completely different orthogonal community.”</p>
<p>NSF funding for the BRAIN-I pilot is scheduled to run for another year and RENCI is seeking additional funding for xDCI. One question is how large xDCI’s ambitions should be. Much of the user base may end up being in the NC and SC area frequently served by RENCI but, as mentioned by Ahalt, there’s no reason it couldn’t be used widely and perhaps even by commercial entities. RENCI’s core mission[iii] is more academic and includes developing and deploying advanced computing to support research as well as conducting research itself.</p>
<p>The post <a href="https://www.aiuniverse.xyz/need-data-science-cyberinfrastructure-check-with-rencis-xdci-concierge/">Need Data Science CyberInfrastructure? Check with RENCI’s xDCI Concierge</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/need-data-science-cyberinfrastructure-check-with-rencis-xdci-concierge/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>The importance of adaptability: defining a data leader</title>
		<link>https://www.aiuniverse.xyz/the-importance-of-adaptability-defining-a-data-leader/</link>
					<comments>https://www.aiuniverse.xyz/the-importance-of-adaptability-defining-a-data-leader/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 06 Sep 2017 09:14:04 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Agile development]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[data leader]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Information Technology]]></category>
		<category><![CDATA[IT]]></category>
		<category><![CDATA[New technologies]]></category>
		<category><![CDATA[software development]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=972</guid>

					<description><![CDATA[<p>Source &#8211; information-age.com There are many qualities that we search for in great leaders: wisdom; loyalty; constancy; courage; the ability to communicate persuasively, balanced with a willingness to hear uncomfortable truths and to change direction. But when it comes to a great data leader, a crucial role in an increasingly digitalised era, the defining characteristics vary <a class="read-more-link" href="https://www.aiuniverse.xyz/the-importance-of-adaptability-defining-a-data-leader/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-importance-of-adaptability-defining-a-data-leader/">The importance of adaptability: defining a data leader</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>information-age.com</strong></p>
<p>There are many qualities that we search for in great leaders: wisdom; loyalty; constancy; courage; the ability to communicate persuasively, balanced with a willingness to hear uncomfortable truths and to change direction. But when it comes to a great data leader, a crucial role in an increasingly digitalised era, the defining characteristics vary slightly. Let’s explore what they are:</p>
<h3>Ability to embrace change</h3>
<p>To work in Information Technology in the early 21st century is to live in an era of unprecedented – and accelerating – change. New technologies are proliferating at such speed that even domain experts are struggling to keep up with new logos that appear in presentations summarising recent arrivals.</p>
<p>Furthermore, more data than ever before is available to support analysis, and in greater variety. Established methods for functions such as software development, information management and governance and the design and development of architecture and infrastructure are cracking under the twin pressures of the “new” and the requirement to do more, more quickly, with the same or fewer resources. It’s clear that these are challenging, but also exciting times for data leaders.</p>
<p>Little over two decades ago, the industry regarded business processes as ever-changing, but data structures as largely constant and stable. Businesses believed that if they modelled their data correctly and exhaustively, they would be largely insulated from changes in the world around them.</p>
<p>But in an era when the big web properties can make thousands of changes to their web-sites every month, the idea that we should map each-and-every new attribute to a well-defined and fixed domain in half-a-dozen downstream target systems now appears quaint and other-worldly.</p>
<p>“Big Data” is sometimes represented as a veritable Tsunami: the reality is that the challenge for today’s data leaders is not in dealing with a single, giant wave of data – but rather in working out how to manage and exploit scores of rivers of data, each of variable structure, quality, provenance, reliability and value.</p>
<h3>Adaptability is key</h3>
<p>In his seminal essay on software development, Fred Brooks observed many moons ago that it is the termites that technology managers and leaders should worry about, not the tornadoes.</p>
<p>For today’s data leader, the volume of data – the tornado – is far less of an issue than the variety, and the complexity that comes with managing that variety.</p>
<p>Without an adaptable leader at the helm, it’s impossible for organisations to succeed merely in managing that change, never mind exploiting it to drive business value. DevOps, Agile development methodologies, schema-less information management strategies, user-centred models of data governance, cloud and as-a-service deployment options, Deep Learning – all of these are merely tools that make more-or-less sense in different circumstances and for different use-cases.</p>
<p>Recognising and celebrating the plethora of new technologies, tools and frameworks now available to us – and adapting to circumstance by making appropriate choices in different scenarios – is the hallmark of success for today’s data leaders.</p>
<p>Einstein might have been thinking of the 21st century data-driven business when he observed that “everything should be made as simple as possible, but no simpler”.</p>
<p>Today’s adaptable data leader can make smart choices about when and where to avoid over-simplification to avoid leaving business value on the table – and where to enforce simplification, to avoid the unnecessary complexity that slows business to a crawl. Because to adapt is first to make intelligent choices about what is merely important – and what is vital.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-importance-of-adaptability-defining-a-data-leader/">The importance of adaptability: defining a data leader</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-importance-of-adaptability-defining-a-data-leader/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
	</channel>
</rss>
