<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>data center Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/data-center/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/data-center/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 16 Jul 2021 06:36:57 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>ABB to Deliver Artificial Intelligence Modelling for Data Center Energy Optimization in Singapore</title>
		<link>https://www.aiuniverse.xyz/abb-to-deliver-artificial-intelligence-modelling-for-data-center-energy-optimization-in-singapore/</link>
					<comments>https://www.aiuniverse.xyz/abb-to-deliver-artificial-intelligence-modelling-for-data-center-energy-optimization-in-singapore/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 16 Jul 2021 06:36:55 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[ABB]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[optimization]]></category>
		<category><![CDATA[Singapore]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=15043</guid>

					<description><![CDATA[<p>Source &#8211; https://www.automation.com/ ABB has signed up to a pilot study with ST Telemedia Global Data Centres (STT GDC) to explore how artificial intelligence (AI), machine learning (ML) <a class="read-more-link" href="https://www.aiuniverse.xyz/abb-to-deliver-artificial-intelligence-modelling-for-data-center-energy-optimization-in-singapore/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/abb-to-deliver-artificial-intelligence-modelling-for-data-center-energy-optimization-in-singapore/">ABB to Deliver Artificial Intelligence Modelling for Data Center Energy Optimization in Singapore</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.automation.com/</p>



<p>ABB has signed up to a pilot study with ST Telemedia Global Data Centres (STT GDC) to explore how artificial intelligence (AI), machine learning (ML) and advanced analytics can optimize energy use and reduce a facility’s carbon footprint.</p>



<p>Singapore-headquartered STT GDC, one of the fastest-growing global data center operators, is leveraging the digital transformation expertise of technology leader ABB as it bids to become net carbon-neutral by 2030.</p>



<p>ABB is conducting the pilot in two phases, beginning with initial data exploration, modelling and validation, studying historical data to establish how digital solutions would affect existing operations and energy use. Once proven, this will be followed by AI control-logic testing in a live data center environment. STT GDC aims to achieve at least 10 percent energy savings from its cooling systems, which are the largest consumers of electrical power in a data center after the IT equipment itself.</p>



<p>“Our group’s AI roadmap will take a big leap forward with this pilot program. The vast amounts of data that can be captured and harnessed in a live data center environment makes for a strong base for AI applications, which can also be applied to other business processes including capacity planning, risk mitigation and predictive maintenance,” said Daniel Pointon, group chief technology officer, ST Telemedia Global Data Centres. “This, and other initiatives around alternative energy sources, water efficiency, construction technology and innovative cooling solutions, are being carried out by our research and development team based in Singapore.”</p>

<p>The ABB team is currently developing AI-based optimization models for the entire data center cooling plant, including the upstream chiller and distribution systems. The AI project is also unlocking new opportunities for efficiency improvement at a granular level within the data center. STT GDC will be able to use AI-generated insights, leveraging cutting-edge ABB Ability™ Genix for industrial analytics and AI, to track and analyze data generated by monitoring systems in the data center, and better facilitate dynamic cooling optimization.</p>

<p>“We look forward to supporting the STT GDC team in their efforts to drive digitalization and energy efficiencies,” said Madhav Kalia, global head of Data Center Automation at ABB. “At ABB, we have a strong track record of supporting data center operators with our best-in-class technology solutions. We are committed to exploring the synergies between our offerings with STT GDC as it embarks on an ambitious plan.”</p>

<p>STT GDC is one of the fastest-growing data center providers, with a global platform of data centers in the world’s major business markets. It has more than 130 facilities across Singapore, the UK, India, China, Thailand, South Korea and Indonesia.</p>



<h3 class="wp-block-heading">ABB</h3>



<p>ABB (ABBN: SIX Swiss Ex) is a leading global technology company that energizes the transformation of society and industry to achieve a more productive, sustainable future. By connecting software to its electrification, robotics, automation and motion portfolio, ABB pushes the boundaries of technology to drive performance to new levels. With a history of excellence stretching back more than 130 years, ABB’s success is driven by about 105,000 talented employees in over 100 countries.</p>
<p>The post <a href="https://www.aiuniverse.xyz/abb-to-deliver-artificial-intelligence-modelling-for-data-center-energy-optimization-in-singapore/">ABB to Deliver Artificial Intelligence Modelling for Data Center Energy Optimization in Singapore</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/abb-to-deliver-artificial-intelligence-modelling-for-data-center-energy-optimization-in-singapore/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Big Data Exchange (BDx) Partnering with Cogent in Singapore (SIN1) Facility</title>
		<link>https://www.aiuniverse.xyz/big-data-exchange-bdx-partnering-with-cogent-in-singapore-sin1-facility/</link>
					<comments>https://www.aiuniverse.xyz/big-data-exchange-bdx-partnering-with-cogent-in-singapore-sin1-facility/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 18 Dec 2020 05:40:13 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[Big Data Exchange]]></category>
		<category><![CDATA[clustercarrier-neutral]]></category>
		<category><![CDATA[Cogent]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<category><![CDATA[provider]]></category>
		<category><![CDATA[Singapore]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12455</guid>

					<description><![CDATA[<p>Source: enterprisetalk.com Big Data Exchange (BDx), a pan-Asian carrier-neutral data center cluster, today announced a new partnership with Cogent Communications in the company’s Singapore (SIN1) data center facility. Cogent Communications is among <a class="read-more-link" href="https://www.aiuniverse.xyz/big-data-exchange-bdx-partnering-with-cogent-in-singapore-sin1-facility/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/big-data-exchange-bdx-partnering-with-cogent-in-singapore-sin1-facility/">Big Data Exchange (BDx) Partnering with Cogent in Singapore (SIN1) Facility</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: enterprisetalk.com</p>



<p><u>Big Data Exchange (BDx)</u>, a pan-Asian carrier-neutral data center cluster, today announced a new partnership with Cogent Communications in the company’s Singapore (SIN1) data center facility. Cogent Communications is among the top three globally ranked carriers specializing in providing IP transit, dedicated internet access, ethernet transport, SD-WAN and colocation services.</p>



<p>Singapore boasts the highest data center capacity per capita globally, and its strategic location at the center of Asia gives enterprises easy access to Asia-Pacific’s most rapidly emerging markets. At the same time, the high demand for colocation space makes data center availability hard to come by for new customers. Acquired by BDx earlier this year, SIN1 is one of the facilities still available to service new customers in capacity-constrained Singapore.</p>



<p>SIN1 houses 1,500 racks with a 6 MW power capacity within 14,400 square meters. It has been awarded an Uptime Tier III design certification and also holds SS564 Green Mark Gold Plus, TVRA, ISO 27001 and PCI DSS certifications. Construction to add four floors and an additional 8 MW of capacity has also started; upon completion, SIN1 will offer a total of 14 MW of capacity, making BDx one of the few operators still able to service new customers in Singapore. The purpose-built servers within the SIN1 data center enable businesses to gain the robust insights needed to power their digital transformation, and the BDx 360 portal provides customers with a holistic view of their infrastructure from any device or location.</p>



<p>The addition of Cogent as a new carrier in the BDx SIN1 facility will allow BDx customers to benefit from Cogent’s extensive footprint, network capabilities and range of services. As a leading telecoms carrier, Cogent is available as a connectivity option to all BDx customers regardless of the type of colocation services they require, along with offering high standards of operations and support.</p>



<p>“This new agreement with Cogent marks the beginning of a strategic partnership which we look forward to developing,” says Sona Singh, Business Development Director at BDx. “We are proud that Cogent has chosen BDx to partner within the important Singapore market. Together, we offer an even wider range of services for our customers. We are pleased to welcome Cogent into our SIN1 facility and look forward to expanding our partnership in the near future.”</p>



<p>Cogent chose to partner with BDx because BDx provides the widest range of flexible, secure, on-demand connectivity options at each of its facilities. The SIN1 data center provides critical connectivity for customers looking to expand into the Southeast Asian market. Cogent, one of the largest Tier 1 IP providers globally, also selected BDx for the company’s state-of-the-art operations and customer support.</p>
<p>The post <a href="https://www.aiuniverse.xyz/big-data-exchange-bdx-partnering-with-cogent-in-singapore-sin1-facility/">Big Data Exchange (BDx) Partnering with Cogent in Singapore (SIN1) Facility</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/big-data-exchange-bdx-partnering-with-cogent-in-singapore-sin1-facility/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What is a service mesh and what it means to data center networking</title>
		<link>https://www.aiuniverse.xyz/what-is-a-service-mesh-what-it-means-to-data-center-networking/</link>
					<comments>https://www.aiuniverse.xyz/what-is-a-service-mesh-what-it-means-to-data-center-networking/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 07 Oct 2020 06:58:40 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[networking]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12013</guid>

					<description><![CDATA[<p>Source: networkworld.com Microservices-style applications rely on fast, dependable network infrastructure in order to respond quickly and reliably, and the service mesh can be a powerful enabler. At <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-a-service-mesh-what-it-means-to-data-center-networking/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-a-service-mesh-what-it-means-to-data-center-networking/">What is a service mesh and what it means to data center networking</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: networkworld.com</p>



<p>Microservices-style applications rely on fast, dependable network infrastructure in order to respond quickly and reliably, and the service mesh can be a powerful enabler.</p>



<p>At the same time, service-mesh infrastructure can be difficult to deploy and manage at scale and may be too complex for smaller applications, so enterprises need to carefully consider its potential upsides and downsides in relation to their particular circumstances.</p>



<h3 class="wp-block-heading">What is a service mesh?</h3>



<p>A service mesh is infrastructure software that provides fast and reliable communications between the microservices that applications may need. Its networking features include application identification, load balancing, authentication, and encryption.</p>



<p>Network requests are routed between microservices via proxies that run alongside the service. These proxies form a mesh network to connect the individual microservices. A central controller provides for access control, as well as network and performance management.</p>
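<p>As a minimal sketch of that routing idea (the class and names below are illustrative assumptions, not any particular mesh's API; real meshes such as Istio run the proxy out-of-process as a sidecar), each service's outbound calls pass through a proxy that picks a replica from the controller's registry:</p>

```python
import itertools

# Illustrative sketch only: names and the round-robin policy are assumptions.

class SidecarProxy:
    """Routes a service's outbound calls via simple round-robin balancing."""
    def __init__(self, registry):
        self.registry = registry   # the central controller's service registry
        self._cursors = {}         # per-service round-robin state

    def route(self, service, request):
        replicas = self.registry[service]
        cursor = self._cursors.setdefault(service, itertools.cycle(replicas))
        return next(cursor)(request)   # forward to the chosen replica

# Two replicas of a "billing" microservice, modeled as plain callables
registry = {"billing": [lambda r: f"billing-a handled {r}",
                        lambda r: f"billing-b handled {r}"]}
proxy = SidecarProxy(registry)
print(proxy.route("billing", "invoice-1"))   # billing-a handled invoice-1
print(proxy.route("billing", "invoice-2"))   # billing-b handled invoice-2
```

<p>Because the application only ever talks to its proxy, load balancing, authentication and encryption can be changed centrally without touching service code.</p>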



<p>A service mesh provides logical isolation of microservices applications from the complexity of network routing and security requirements. The abstraction provided by a service mesh enables rapid and flexible deployment of microservices without constantly requiring the data-center networking team to intervene.</p>



<h3 class="wp-block-heading">Why do microservices-style apps need service mesh?</h3>



<p>Applications based on microservices have a different architecture from hypervisor-based applications. They comprise numerous services running in individual containers on different servers or cores, and the frequency of transactions between these microservices within a single application may require low latency and significant bandwidth. In addition, more than one application may need to access the same microservices.</p>



<p>Container-based microservices can move from server to server, yet they provide only limited data about where they have moved and that their status has changed. This makes it difficult for IT professionals to “find” them when resolving application-performance issues.</p>



<p>Meanwhile, DevOps teams require logical isolation from network complexity. They want to develop and change applications rapidly, yet doing so requires networking teams to make networking and security adjustments, such as provisioning VLANs, on their behalf.</p>



<p>Service mesh enables significant networking and security benefits for microservices applications. It abstracts the networking infrastructure, thus enabling microservices applications to maintain networking and security policies without requiring the intervention of the data-center networking team for each change.</p>



<p>Key requirements for networking microservices include:</p>



<ul class="wp-block-list"><li>Network performance at scale</li><li>Ease of provisioning networking, compute, and storage resources for new applications</li><li>Ability to rapidly scale bandwidth by application</li><li>Workload migration between internal data centers and public cloud</li><li>Application isolation to enhance security and support multi-tenancy</li></ul>



<p>To meet these requirements, IT organizations will need to integrate service-mesh automation and management information into a more comprehensive data-center network-management system, especially as container deployments become more numerous, complex and strategic.</p>



<p>For applications that are well suited to service-mesh deployments, IT organizations will need to plan integration of the technology into their overall management and automation platforms. To prepare, IT teams must evaluate the range of service-mesh options (cloud, open source, vendor-supplied) as the technology continues to mature.</p>



<p>Service-mesh technology options can be vendor-supported or open source. Istio, driven by Google, is a leading open-source option; other open-source projects include Linkerd, HAProxy and Envoy. Leading IaaS suppliers have their own service-mesh offerings, as do major network and IT suppliers and start-ups.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-a-service-mesh-what-it-means-to-data-center-networking/">What is a service mesh and what it means to data center networking</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-a-service-mesh-what-it-means-to-data-center-networking/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>QEBR Announces Strong Progress On Filecoin Data Center Mining Efforts</title>
		<link>https://www.aiuniverse.xyz/qebr-announces-strong-progress-on-filecoin-data-center-mining-efforts/</link>
					<comments>https://www.aiuniverse.xyz/qebr-announces-strong-progress-on-filecoin-data-center-mining-efforts/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 25 Jun 2020 06:32:02 +0000</pubDate>
				<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[QEBR]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9769</guid>

					<description><![CDATA[<p>Source: aithority.com QEBR detailed that its technology team has progressed well in setting up a secure Filecoin environment and proven its system as a valid node with CPU, GPU, <a class="read-more-link" href="https://www.aiuniverse.xyz/qebr-announces-strong-progress-on-filecoin-data-center-mining-efforts/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/qebr-announces-strong-progress-on-filecoin-data-center-mining-efforts/">QEBR Announces Strong Progress On Filecoin Data Center Mining Efforts</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: aithority.com</p>



<p>QEBR detailed that its technology team has progressed well in setting up a secure Filecoin environment and proven its system as a valid node with CPU, GPU, bandwidth, and storage compatibility that meets all IPFS guidelines. The QEBR test system has connected with the Filecoin main blockchain and already successfully test-mined Filecoin.</p>



<p>Filecoin, a decentralized cloud platform for data storage built on its own cryptocurrency, expects a global launch in the second half of 2020. Users of its file system pay miners in FIL coins in exchange for storage space. The Filecoin ICO (“Initial Coin Offering”) was conducted in 2017 and raised over US$257 million from investors such as Sequoia Capital, Andreessen Horowitz, Y Combinator, Naval Ravikant, and Winklevoss Capital.</p>



<p>Over the last two months, the global Filecoin testnet has processed many petabytes (PB) of storage, growing at an impressive average of 50 TB per day. The expectation is that the Filecoin mainnet launch will have hundreds of PB of storage capacity due to the very high interest of FIL miners around the world.</p>



<p>Jun Liang, Chief Technology Officer of QEBR, stated, “We are very satisfied with the progress made by our technology team in creating a Filecoin environment that is so efficient in mining FIL. We are confident that QEBR will be able to rapidly expand to multiple data centers around the world once the Filecoin mainnet becomes active. QEBR’s first Filecoin mining installation will potentially be in Bangkok since we anticipate strong demand from Malaysia, South Korea, the Philippines, Indonesia, and Japan.”</p>



<p>QEBR previously announced that it is FAST/DWAC eligible as of October 14, 2019. QEBR’s CUSIP number is: 92828H109. The Company also previously announced the acquisition of Idaho Country Mining Co. LLC (“ICMC”) and Shenzhen DZD Digital Technology Ltd (“DZD”) in an exchange of shares. The acquisitions complement QEBR’s existing subsidiaries by adding ICMC’s services in data acquisition, data mining, encrypted data bookkeeping, and encrypted data acquisition; and, DZD, an engineering partner with ICMC in Shenzhen, China. DZD specializes in providing services for data processing, data mining, encrypted data bookkeeping, and researching of data technology.</p>
<p>The post <a href="https://www.aiuniverse.xyz/qebr-announces-strong-progress-on-filecoin-data-center-mining-efforts/">QEBR Announces Strong Progress On Filecoin Data Center Mining Efforts</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/qebr-announces-strong-progress-on-filecoin-data-center-mining-efforts/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Intel drops work on one of its AI-chip lines in favor of another</title>
		<link>https://www.aiuniverse.xyz/intel-drops-work-on-one-of-its-ai-chip-lines-in-favor-of-an-other/</link>
					<comments>https://www.aiuniverse.xyz/intel-drops-work-on-one-of-its-ai-chip-lines-in-favor-of-an-other/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 07 Feb 2020 06:43:32 +0000</pubDate>
				<category><![CDATA[AI-ONE]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Platform]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6613</guid>

					<description><![CDATA[<p>Source: networkworld.com Intel is ending work on its Nervana neural network processors (NNP) in favor of an artificial intelligence line it gained in the recent $2 billion <a class="read-more-link" href="https://www.aiuniverse.xyz/intel-drops-work-on-one-of-its-ai-chip-lines-in-favor-of-an-other/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-drops-work-on-one-of-its-ai-chip-lines-in-favor-of-an-other/">Intel drops work on one of its AI-chip lines in favor of another</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: networkworld.com</p>



<p>Intel is ending work on its Nervana neural network processors (NNP) in favor of an artificial intelligence line it gained in the recent $2 billion acquisition of Habana Labs. </p>



<p>Intel acquired Nervana in 2016 and issued its first NNP chip one year later. After the $408 million acquisition, Nervana co-founder Naveen Rao was placed in charge of the AI platforms group, which is part of Intel&#8217;s data platforms group. The Nervana chips were meant to compete with Nvidia GPUs in the AI training and inference space, and Facebook worked with Intel “in close collaboration, sharing its technical insights,” according to former Intel CEO Brian Krzanich.</p>



<p>For now, Intel has ended development of its Nervana NNP-T training chips and will deliver on current customer commitments for its Nervana NNP-I inference chips; Intel will move forward with Habana Labs&#8217; Gaudi and Goya processors in their place.</p>



<p>There are two parts to neural networks: training, where the computer learns a process, such as image recognition; and inference, where the system puts what it was trained to do to work. Training is far more compute-intensive than inference, and it’s where Nvidia has excelled.</p>



<p>Intel said the decision was made after input from customers, and that this decision is part of strategic updates to its data-center AI acceleration roadmap. &#8220;We will leverage our combined AI talent and technology to build leadership AI products,&#8221; the company said in a statement to me.</p>



<p>“The Habana product line offers the strong, strategic advantage of a unified, highly-programmable architecture for both inference and training. By moving to a single hardware architecture and software stack for data-center AI acceleration, our engineering teams can join forces and focus on delivering more innovation, faster to our customers,” Intel said.</p>



<p>This outcome from the Habana acquisition wasn&#8217;t entirely unexpected. &#8220;We had thought that they might keep one for training and one for inference. However, Habana&#8217;s execution has been much better and the architecture scales better. And, Intel still gained the IP and expertise of both companies,” said Jim McGregor, president of Tirias Research.</p>



<p>The good news is that whatever developers created for Nervana won’t have to be thrown out. “The frameworks work on either architecture,” McGregor said. &#8220;While there will be some loss going from one architecture to another, there is still value in the learning, and I&#8217;m sure Intel will work with customers to help them with the migration.”</p>



<p>This is the second AI/machine learning effort Intel has shut down, the first being Xeon Phi. Xeon Phi itself was a bit of a problem child, dating back to Intel’s failed Larrabee experiment to build a GPU based on x86 instructions. Larrabee never made it out of the gate, while Xeon Phi lasted a few generations as a co-processor but was ultimately axed in August 2018.</p>



<p>Intel still has a lot of products targeting various AI workloads: Mobileye, Movidius, Agilex FPGAs, and its upcoming Xe architecture. Habana Labs has been shipping its Goya Inference Processor since late 2018, and samples of its Gaudi AI Training Processor were sent to select customers in the second half of 2019.</p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-drops-work-on-one-of-its-ai-chip-lines-in-favor-of-an-other/">Intel drops work on one of its AI-chip lines in favor of another</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/intel-drops-work-on-one-of-its-ai-chip-lines-in-favor-of-an-other/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>THE FUTURE OF SMART DATA CENTERS: ROBOTIC PROCESS AUTOMATION</title>
		<link>https://www.aiuniverse.xyz/the-future-of-smart-data-centers-robotic-process-automation-2/</link>
					<comments>https://www.aiuniverse.xyz/the-future-of-smart-data-centers-robotic-process-automation-2/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 27 Dec 2019 07:32:41 +0000</pubDate>
				<category><![CDATA[Data Robot]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[Future]]></category>
		<category><![CDATA[robotic]]></category>
		<category><![CDATA[smart data]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5833</guid>

					<description><![CDATA[<p>Source: A huge change is happening in the back offices of companies over the globe. Many name it as the ascent of the robots, however, an increasingly <a class="read-more-link" href="https://www.aiuniverse.xyz/the-future-of-smart-data-centers-robotic-process-automation-2/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-future-of-smart-data-centers-robotic-process-automation-2/">THE FUTURE OF SMART DATA CENTERS: ROBOTIC PROCESS AUTOMATION</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: </p>



<p>A huge change is happening in the back offices of companies across the globe. Many call it the rise of the robots, but a more fitting term is robotic process automation (RPA). While the connection between robotics and RPA may be somewhat loosely defined, the simple fact is that RPA is just a fancy name for a software robot, or “bot” in IT vernacular. RPA combines scripting with intelligence and execution, a mix of automated prowess that has been a genuine boon for back-office tasks.</p>



<p>However, for all its ability to drive profitability and reduce manual work, RPA has been met with dread. That dread of job losses and staff reductions has a very human component, and it leads to unwarranted assumptions about robots. Nowhere is the fear greater than in the data center, where highly paid experts feel threatened by the rise of the bots, suspecting that RPA will diminish their importance to overall operations and put data center administrators out of a job.</p>



<p>Enterprises form the customer base of most data centers, whether that is a small regional data center, a bustling colocation facility or the globally distributed network of huge data centers that underpins the public cloud providers.</p>



<p>As organizations wake up to the power and effectiveness of the cloud, they are setting up DevOps teams and microservices that demand real-time processing, elastic scalability, big-data storage capacity and 99.99% or better reliability. To satisfy the new demands of these business models while keeping costs competitive, data centers need to reduce overheads while improving reliability and performance.</p>



<p>As infrastructure becomes increasingly complex and distributed, there is a further argument for robotic assistance. People simply cannot monitor and process the many streams of data coming into a data center without making mistakes or sacrificing speed and performance. Network downtime is serious enough, but with data breaches now attracting record fines, mistakes can threaten the very existence of a data center.</p>



<p>As we enter this new revolution in how organizations work, it is important that every piece of data is handled and used appropriately to maximize its value. Without cost-effective storage and increasingly powerful hardware, digital transformation and the new business models associated with it would not be possible.</p>



<p>Experts have long predicted that the automation technologies applied in processing plants worldwide would eventually be applied to data centers. In reality, we are rapidly approaching that point through the use of robotic process automation (RPA) and machine learning in the data center environment.</p>



<p>Human error is by far the most significant cause of network downtime, followed by hardware failures and breakdowns. With little or no visibility into how hardware is performing, action can only be taken after downtime has already occurred. The cost impact is then much higher, as attention is diverted from other work to deal with the root cause, on top of the impact of the downtime itself. Reliability, cost and management must all be addressed to deliver a more efficient data center, and automation can help achieve this.</p>



<p>With the fear of bots set aside, many CIOs and data center managers are considering how best to adopt RPA and where to apply the technology. Data center operations today are about doing more with less, and they are on the leading edge of turning manual work into processes that bring extra value to business operations. Viewed this way, it becomes clear that RPA, particularly as intelligent automation, can bring remarkable efficiencies to operations. A case in point is data center management, where bots can be built to perform backups, spin up virtual machines on demand, move workloads from near-line to online systems, resolve issues, and so on. It all comes down to the imagination applied and the ability to identify tasks that lend themselves well to automation.</p>



<p>However, RPA is much more than macros or scripts. RPA introduces a level of intelligence that enables bots to make decisions, letting them act as intelligent automation agents. For instance, bots can be deployed to monitor network traffic and trained to take action when a threshold is reached. In addition, bots can use pattern recognition alongside analytics to define those thresholds in real time, enabling them to respond far faster than any human can.</p>
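<p>As a rough illustration of that idea (not drawn from any particular RPA product; all names and numbers here are invented for the example), a bot can derive its alert threshold from recent traffic statistics instead of a hard-coded limit:</p>

```python
from collections import deque
import statistics

class TrafficBot:
    """Sketch of a monitoring bot whose alert threshold is derived
    from recent traffic history rather than fixed in advance."""

    def __init__(self, window=60, sigmas=3.0):
        self.history = deque(maxlen=window)  # recent samples, e.g. Mbps
        self.sigmas = sigmas                 # how far above normal counts as anomalous

    def observe(self, mbps):
        """Record a sample; return True if it breaches the current threshold."""
        breached = False
        if len(self.history) >= 10:          # wait for some history before judging
            mean = statistics.mean(self.history)
            spread = statistics.pstdev(self.history)
            breached = mbps > mean + self.sigmas * spread
        self.history.append(mbps)
        return breached

bot = TrafficBot()
samples = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 100, 500]
alerts = [bot.observe(s) for s in samples]
print(alerts[-1])  # True: the 500 Mbps spike is far outside the learned band
```

<p>Because the threshold tracks the observed baseline, the same bot can watch links with very different normal traffic levels without per-link configuration.</p>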



<p>Such capabilities bode well for data centers that must remain flexible and are under constant security threat. Bots can be built to recognize usage patterns, normal traffic levels, CPU cycles and so on as a basis for scaling up or down. Activities that once required a technician can now be automated. Intelligent automation has also proven to be a good line of defense against malware, ransomware, and data leakage. With bots monitoring activity, normal usage patterns can be established and the expected behaviors of applications, users, and other components can be measured. When activity falls outside those norms, bots can take action using either predetermined rules or more creative responses driven by AI.</p>
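<p>A minimal sketch of the rule-driven side of this, assuming an invented utilization metric, band, and action names purely for illustration, might map an observed reading to one of a handful of predetermined responses:</p>

```python
def choose_action(cpu_util, normal_band=(0.30, 0.70), hard_limit=0.90):
    """Map a CPU utilization reading (0.0-1.0) to a predetermined action.
    The band and limit are illustrative, not taken from any real product."""
    low, high = normal_band
    if cpu_util >= hard_limit:
        return "scale_up_now"   # far outside the norm: act immediately
    if cpu_util > high:
        return "scale_up"       # above the normal band
    if cpu_util < low:
        return "scale_down"     # capacity sitting idle
    return "no_action"          # within learned norms

print(choose_action(0.95))  # scale_up_now
print(choose_action(0.50))  # no_action
```

<p>In practice the band itself would be learned from history, as described above, rather than fixed.</p>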



<p>Another RPA advantage is the standardization of processes and procedures. By removing variable human actions from a process, data centers can expect a higher level of standardization with far more predictable results. That in itself is a boon for companies driven by compliance regulations, where repeatable procedures are a critical part of meeting compliance requirements.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-future-of-smart-data-centers-robotic-process-automation-2/">THE FUTURE OF SMART DATA CENTERS: ROBOTIC PROCESS AUTOMATION</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-future-of-smart-data-centers-robotic-process-automation-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>VMware’s Project Magna applies machine learning to automate the data centre</title>
		<link>https://www.aiuniverse.xyz/vmwares-project-magna-applies-machine-learning-to-automate-the-data-centre/</link>
					<comments>https://www.aiuniverse.xyz/vmwares-project-magna-applies-machine-learning-to-automate-the-data-centre/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 03 Sep 2019 10:44:34 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[Learning]]></category>
		<category><![CDATA[machine]]></category>
		<category><![CDATA[service]]></category>
		<category><![CDATA[VMware’s]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4432</guid>

					<description><![CDATA[<p>Source:-blocksandfiles.com VMware is developing a cloud service to monitor software in customer deployments and tune it automatically to improve performance. This is Project Magna and its first <a class="read-more-link" href="https://www.aiuniverse.xyz/vmwares-project-magna-applies-machine-learning-to-automate-the-data-centre/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/vmwares-project-magna-applies-machine-learning-to-automate-the-data-centre/">VMware’s Project Magna applies machine learning to automate the data centre</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; blocksandfiles.com</p>



<p>VMware is developing a cloud service to monitor software in customer deployments and tune it automatically to improve performance. This is Project Magna and its first target is vSAN in hyperconverged infrastructure.</p>



<p>It will work like this: customers select their key performance indicator – read or write optimisation or both. Magna examines their vSAN environment and compares it to the KPI average for stored and monitored deployments. If the site is below average, Magna changes it to bring it closer to the average.</p>
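<p>As a rough sketch of that comparison step (the function name, the 50 per cent step size, and the "higher is better" KPI convention are assumptions for illustration, not VMware's actual algorithm):</p>

```python
def nudge_toward_fleet_average(site_kpi, fleet_kpis, step=0.5):
    """If a site's KPI is below the average of monitored deployments,
    move it a fraction of the way toward that average; otherwise leave it."""
    fleet_avg = sum(fleet_kpis) / len(fleet_kpis)
    if site_kpi < fleet_avg:
        return site_kpi + step * (fleet_avg - site_kpi)
    return site_kpi  # already at or above average: no change

# A site at 60 against a fleet averaging 90 is nudged halfway up:
print(nudge_toward_fleet_average(60.0, [80.0, 90.0, 100.0]))  # 75.0
```
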



<p>When switched on via vRealize Operations (vROps), Magna Cloud Services records data from the deployed vSAN system and uploads it to a VMware data store, where it is analysed. A machine learning engine inside Magna identifies and implements performance tweaks.</p>



<p>vROps displays the before and after state graphically so customers can see if performance has improved. VMware’s Project Magna people have yet to decide the intervals for system monitoring.</p>



<h2 class="wp-block-heading">Reinforcement learning</h2>



<p>Magna incorporates a reinforcement learning system that seeks so-called rewards. Magna looks at its own performance actions and strengthens those that boost customer vSAN performance.</p>
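<p>The reward-seeking loop can be caricatured as a small epsilon-greedy search over candidate settings. The knob names, reward values, and update rule below are invented for illustration; they are not Magna's actual mechanics.</p>

```python
import random

settings = ["cache_small", "cache_medium", "cache_large"]
value = {s: 0.0 for s in settings}   # running reward estimate per setting
count = {s: 0 for s in settings}

def measure_throughput(setting):
    # Stand-in for a real benchmark; in Magna the "reward" would be
    # observed vSAN throughput/latency after applying the setting.
    return {"cache_small": 1.0, "cache_medium": 2.0, "cache_large": 1.5}[setting]

random.seed(0)
for _ in range(300):
    if random.random() < 0.2:                    # explore occasionally
        choice = random.choice(settings)
    else:                                        # otherwise exploit the best so far
        choice = max(settings, key=value.get)
    reward = measure_throughput(choice)
    count[choice] += 1
    value[choice] += (reward - value[choice]) / count[choice]  # incremental mean

best = max(settings, key=value.get)
print(best)
```

<p>With a fixed seed the loop settles on the highest-reward setting; the trial-and-error structure, not the specific numbers, is the point.</p>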



<p>A VMware blog says: “Reinforcement Learning combs through your data and runs thousands of scenarios that searches for the best reward output based on trial and error on the Magna SaaS analytics engine. And this is automatically and continuously done across your vSAN clusters to ensure it’s always using the best settings to maximize throughput and minimize latency of your … hyperconverged infrastructure.”</p>



<p>Magna is also designed to do no harm to the systems it monitors, the blog states: “There are guard rails within the ML algorithms that will not decrease performance by any means.”</p>



<p>Project Magna is intended for all VMware’s software-defined data centre components covering compute, storage, network and security. These are vCenter, ESXi/vSphere, vSAN, VVols, NSX and Velo Cloud.</p>



<p>Magna is in tech preview and VMware has not committed to introducing it to a specific version of vSphere.</p>
<p>The post <a href="https://www.aiuniverse.xyz/vmwares-project-magna-applies-machine-learning-to-automate-the-data-centre/">VMware’s Project Magna applies machine learning to automate the data centre</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/vmwares-project-magna-applies-machine-learning-to-automate-the-data-centre/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Is Your Data Center Ready for Machine Learning Hardware?</title>
		<link>https://www.aiuniverse.xyz/is-your-data-center-ready-for-machine-learning-hardware/</link>
					<comments>https://www.aiuniverse.xyz/is-your-data-center-ready-for-machine-learning-hardware/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 01 Feb 2019 09:52:11 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Nvidia]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3301</guid>

					<description><![CDATA[<p>Source- datacenterknowledge.com So, you want to scale your computing muscle to train bigger deep learning models. Can your data center handle it? According to Nvidia, which sells more <a class="read-more-link" href="https://www.aiuniverse.xyz/is-your-data-center-ready-for-machine-learning-hardware/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/is-your-data-center-ready-for-machine-learning-hardware/">Is Your Data Center Ready for Machine Learning Hardware?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source- <a href="https://www.datacenterknowledge.com/machine-learning/your-data-center-ready-machine-learning-hardware" target="_blank" rel="noopener">datacenterknowledge.com</a></p>
<p>So, you want to scale your computing muscle to train bigger deep learning models. Can your data center handle it?</p>
<p>According to Nvidia, which sells more of the specialized chips used in machine learning than any other company, it most likely cannot. These systems often consume so much power that a conventional data center doesn’t have the capacity to remove the heat they generate.</p>
<p>It’s easy to see why customers without an infrastructure that can support a piece of Nvidia hardware are a business problem for Nvidia. To widen this bottleneck for at least one of its product lines, the company now has a list of pre-approved colocation providers it will send you to if you need a place that will keep your supercomputers cool and happy.</p>
<p>As more companies’ machine learning initiatives graduate from initial experimentation phases – during which their data scientists may have found cloud GPUs rented from the likes of Google or Microsoft sufficient – they start thinking about larger-scale models and investing in their own hardware their teams can share to train those models.</p>
<p>Among the go-to hardware choices for these purposes have been Nvidia’s DGX-1 and DGX-2 supercomputers, which the company designed specifically with machine learning in mind. When a customer considers buying several of these systems for their data scientists, they often find that their facilities cannot support that level of power density and look to outsource the facilities part.</p>
<p>“This program takes that challenge off their plate,” Tony Paikeday, who’s in charge of marketing for the DGX line at Nvidia, told Data Center Knowledge in an interview about the chipmaker’s new colocation referral program. “There’s definitely a lot of organizations that are starting to think about shared infrastructure” for machine learning. Deploying and managing this infrastructure falls to their IT leadership, he explained, and many of the IT leaders “are trying to proactively get ahead of their companies’ AI agendas.”</p>
<h2>Cool Homes for Hot AI Hardware</h2>
<p>DGX isn’t the only system companies use to train deep learning models. There are numerous choices out there, including servers by all the major hardware vendors, powered by Nvidia’s or AMD’s GPUs. But because they all pack lots of GPUs in a single box – an HPE Apollo server has eight GPUs, for example, as does DGX-1, while DGX-2 has 16 GPUs – high power density is a constant across this category of hardware. This means that <a href="https://www.datacenterknowledge.com/archives/2017/03/27/deep-learning-driving-up-data-center-power-density">along with the rise of machine learning comes growing demand for high-density data centers</a>.</p>
<p>The trend benefits specialist colocation providers like Colovore, Core Scientific, and ScaleMatrix, who designed their facilities for high density from the get-go. But other, more generalist data center providers are also capable of building areas within their facilities that can handle high density. Colovore, Core Scientific, and ScaleMatrix are on the list of colocation partners Nvidia will refer DGX customers to, but so are Aligned Energy, CyrusOne, Digital Realty Trust, EdgeConneX, Flexential, and Switch.</p>
<p>Partially owned by Digital Realty, Colovore built its facility in Santa Clara in 2014 <a href="https://www.datacenterknowledge.com/archives/2017/03/01/this-company-owns-the-high-density-data-center-niche-in-silicon-valley">specifically to take care of Silicon Valley’s high-density data center needs</a>. Today, it supports close to 1,000 DGX-1 and DGX-2 systems, Ben Coughlin, the company’s CFO and co-founder, told us. He wouldn’t say who owned the hardware, saying only that it belonged to fewer than 10 customers who were “mostly tech” companies. (Considering that the facility is only a five-minute drive from Nvidia headquarters, it’s likely that the chipmaker itself is responsible for a big portion of that DGX footprint, but we haven’t been able to confirm this.)</p>
<p>Colovore has already added one new customer because of Nvidia’s referral program. A Bay Area healthcare startup using artificial intelligence is “deploying a number of DGX-1 systems to get up and running,” Coughlin said.</p>
<p>A single DGX-1 draws 3kW in the space of three rack units, while a DGX-2 needs 10kW and takes up 10 rack units – that’s 1kW per rack unit regardless of the model. Customers usually put between nine and 11 DGX-1s in a single rack, or up to three DGX-2s, Coughlin said. Pumping chilled water to the rear-door heat exchangers mounted on the cabinets, Colovore’s passive cooling system (no fans on the doors) can cool up to 40kW, according to him.</p>
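<p>The arithmetic behind those configurations is worth a quick sanity check. Using the figures quoted above (3 kW in 3U for a DGX-1, 10 kW in 10U for a DGX-2, a 40 kW cooling ceiling per cabinet) and assuming a common 42U rack, cooling rather than space is the binding constraint, and real deployments sit below even that ceiling to leave headroom for workload spikes:</p>

```python
# Power-density sanity check using the figures quoted in the article.
# The 42U rack height is an assumption, not stated in the article.
RACK_U = 42
COOLING_KW = 40

def max_per_rack(kw_each, units_each):
    """Systems per rack, limited by space or cooling, whichever runs out first."""
    by_space = RACK_U // units_each
    by_cooling = COOLING_KW // kw_each
    return min(by_space, by_cooling)

print(max_per_rack(3, 3))    # 13 DGX-1s by cooling (14 would fit by space alone)
print(max_per_rack(10, 10))  # 4 DGX-2s; customers' "up to three" leaves headroom
```

<p>The headroom matters because, as Coughlin notes below, cabinets that idle at 12-15 kW can spike to 25-30 kW under load.</p>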
<p>In a “steady state,” many of the cabinets draw 12kW to 15kW, “but when they go into some sort of workload state, when they’re doing some processing, they’ll spike 25 to 30 kilowatts,” he said. “You can see swings on our UPSs of 400 to 500 kilowatts at that time across our infrastructure. It’s pretty wild.”</p>
<p>Echoing Nvidia’s Paikeday, Chris Orlando, CEO and co-founder of ScaleMatrix, said typical customers that turn to his company’s high-density colocation services in San Diego and Houston are well into their machine learning programs and looking at expanding and scaling the infrastructure that supports those programs.</p>
<p>A <a href="https://www.datacenterknowledge.com/archives/2017/02/06/this-data-center-is-designed-for-deep-learning">high-density specialist</a>, ScaleMatrix’s proprietary cooling design also brings chilled water directly to the IT cabinets. The company has “more than a handful of customers that have DGX boxes colocated today,” Orlando told us.</p>
<h2>High Density Air-Cooled</h2>
<p>Flexential, which is part of Nvidia’s referral program but doesn’t have high-density colocation as its sole focus, uses traditional raised-floor air cooling for high density, adding doors at the ends of the cold aisles to isolate them from the rest of the building and “create a bathtub of cold air for the server intakes,” Jason Carolan, the company’s chief innovation officer, explained in an email.</p>
<p>According to him, this approach works fine for a 35kW rack of DGX systems. “We have next-generation cooling technologies that will take us beyond air, but to date, we haven’t had a sizeable enough customer application that has required … it on a large scale,” he said. Five of Flexential’s 41 data centers can cool high-density cabinets today.</p>
<p>As more and more companies use machine learning, it is becoming an important workload for data center providers to be able to support. Adoption of these computing techniques is only in its early phases, and they are likely to become an important growth driver for colocation companies going forward. Not many enterprises are set up to host supercomputers on-premises, and few are going to spend the money to build this infrastructure, so turning to colocation facilities that are already designed to efficiently cool tens of kilowatts per rack is their logical next step.</p>
<p>The post <a href="https://www.aiuniverse.xyz/is-your-data-center-ready-for-machine-learning-hardware/">Is Your Data Center Ready for Machine Learning Hardware?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/is-your-data-center-ready-for-machine-learning-hardware/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Designing the Future of Deep Learning</title>
		<link>https://www.aiuniverse.xyz/designing-the-future-of-deep-learning/</link>
					<comments>https://www.aiuniverse.xyz/designing-the-future-of-deep-learning/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 24 Oct 2017 07:27:43 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Designing]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1547</guid>

					<description><![CDATA[<p>Source &#8211; enterprisetech.com Artificial Intelligence and Deep Learning are being used to solve some of the world&#8217;s biggest problems and is finding application in autonomous driving, marketing and <a class="read-more-link" href="https://www.aiuniverse.xyz/designing-the-future-of-deep-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/designing-the-future-of-deep-learning/">Designing the Future of Deep Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>enterprisetech.com</strong></p>
<p>Artificial Intelligence and Deep Learning are being used to solve some of the world&#8217;s biggest problems and are finding application in autonomous driving, marketing and advertising, health and medicine, manufacturing, multimedia and entertainment, financial services, and so much more.  This is made possible by incredible advances in a wide range of technologies, from computation to interconnect to storage, and innovations in software libraries, frameworks, and resource management tools.  While there are many critical challenges, an open technology approach provides significant advantages.</p>
<h2>The Scaling Challenge</h2>
<p>The full deep learning story, though, must be an end-to-end technology discussion and encompass production at scale.  As we scale out deep learning workloads to the massive compute clusters required to tackle these big issues, we begin to run into the same challenges that hamper scaling of traditional high-performance computing (HPC) workloads.</p>
<p>Ensuring optimal use of compute resources can be challenging, particularly in heterogeneous architectures that may include multiple central processing unit (CPU) architectures, such as x86, ARM64, and Power, as well as accelerators, such as graphics processing units (GPUs), field programmable gate arrays (FPGAs), tensor processing units (TPUs), etc. Architecting an optimal deep learning solution for training or inferencing, with potentially varied data types, can result in the application of one or more of these architectures and technologies. The flexibility of open technologies allows one to deploy the optimal platform at server, rack, and data center scales.</p>
<p>One of the most important uses of deep learning is in gaining value from large data sets. The need to effectively manage large amounts of data, which may have varying ingest, processing, and persistent storage and data warehouse needs, is at the center of a modern deep learning solution. The performance requirements throughout the data workflow and processing stages can vary greatly, and, at production scale, it can simultaneously involve data collection, training, and inference.  The balance of cost effectiveness and high performance is key to providing a properly-scaled deployment. The flexibility of open technologies allows one to take a software-defined data center approach to the deep learning environment.</p>
<p>Workload orchestration is another familiar challenge in the HPC realm.  A variety of tools and libraries have been developed over the years, including resource managers and job schedulers, parallel programming libraries, and other software frameworks.  As software applications have grown in complexity, with rapidly evolving dependencies, a new approach has been needed.  One such approach is containerization.  Containers allow applications to be bundled with their dependencies and deployed on a variety of compute hosts.  However, challenges have remained for providing access to compute, storage, and other resources.  Moreover, managing the deployment, monitoring, and clean-up of containerized applications presents its own set of challenges.</p>
<h2>The Open Technology Approach</h2>
<p>Penguin Computing applies its decades of expertise in high-performance and scale-out computing to deliver deep learning solutions that support customer workload requirements, whether at development or production scales.  Penguin Computing solutions feature open technologies, enabling design choices that focus on meeting the customer&#8217;s needs.</p>
<p>In the Penguin Computing AI/DL whitepaper, you will learn more about our approach to:</p>
<ul>
<li>Open Architectures for Artificial Intelligence and Deep Learning, combining flexible compute architectures, rack scale platforms, and software-defined networking and storage, to provide a scalable software-defined AI/DL environment.</li>
<li>AI/DL strategies, providing insight into everything from specialty compute for training vs. inference, to Data Lakes and high performance storage for data workflows, to orchestration and workflow management tools.</li>
<li>Deploying the AI/DL environments from development to production scale and from on-premise to hybrid to public cloud.</li>
</ul>
<p>The post <a href="https://www.aiuniverse.xyz/designing-the-future-of-deep-learning/">Designing the Future of Deep Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/designing-the-future-of-deep-learning/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
	</channel>
</rss>
