<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>GPU Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/gpu/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/gpu/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 29 Jun 2021 11:00:26 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Inspur releases liquid cooled AI server with NVIDIA A100 GPUs at ISC High Performance Digital 2021</title>
		<link>https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/</link>
					<comments>https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 29 Jun 2021 11:00:25 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[cooled]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Inspur]]></category>
		<category><![CDATA[liquid]]></category>
		<category><![CDATA[NVIDIA A100]]></category>
		<category><![CDATA[releases]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14651</guid>

					<description><![CDATA[<p>Source &#8211; https://www.hpcwire.com/ NF5488LA5, boasting high-efficiency liquid-cooling, ranks No.1 in 11 of the 16 tests in the closed data center division of the 2021 MLPerf™ Inference v1.0 Benchmark. SAN JOSE, Calif. – June 28, 2021 – Today at ISC High Performance 2021 Digital, the event for high performance computing, machine learning, and data analytics, Inspur Information, <a class="read-more-link" href="https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/">Inspur releases liquid cooled AI server with NVIDIA A100 GPUs at ISC High Performance Digital 2021</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.hpcwire.com/</p>



<p><em>NF5488LA5, boasting high-efficiency liquid-cooling, ranks No.1 in 11 of the 16 tests in the closed data center division of the 2021 MLPerf™ Inference v1.0 Benchmark.</em></p>



<p>SAN JOSE, Calif. – June 28, 2021 – Today at ISC High Performance 2021 Digital, the event for high performance computing, machine learning, and data analytics, Inspur Information, a leading IT infrastructure solution provider, announces its new liquid-cooled AI server, NF5488LA5. Designed with a liquid cold plate and capable of supporting up to eight high-speed, interconnected NVIDIA<sup>®</sup> A100 Tensor Core GPUs via NVSwitch, this new offering is ideal for customers who need a high-performance, energy-efficient AI server.</p>



<p>Designed to meet the energy-saving needs required by High-Performance Computing (HPC) and Artificial Intelligence (AI), the new NF5488LA5 is an update on Inspur’s leading AI server NF5488A5, but now boasts liquid-cooling technology and supports the latest NVIDIA A100 Tensor Core GPU.</p>



<p>NF5488LA5 is equipped with two AMD EPYC 7003 series processors and eight NVIDIA A100 Tensor Core GPUs in a 4U chassis, fully connected via NVSwitch. GPU-to-GPU communication bandwidth reaches 600GB/s, enabling lower communication latency. The system topology adopts an ultra-low-latency design to maximize communication performance between the processors and the AI accelerators. With immensely improved cooling efficiency enabled by industry-leading warm-water cooling technology, the new server meets the extreme computing needs of science, simulation, and AI.</p>



<p>The liquid cold plate on the NF5488LA5 covers the CPUs, GPUs, and NVSwitches. Liquid cooling handles 80% of the system’s total heat dissipation, effectively reducing Power Usage Effectiveness (PUE) to 1.1. The GPU cold plate is meticulously designed with four water loops connected in parallel, which lets the liquid flow consecutively across the surfaces of the GPUs and NVSwitches for high-efficiency cooling of the components that generate the most heat. High-efficiency liquid cooling is among the major reasons that NF5488LA5 ranks No.1 in 11 of the 16 tests in the closed data center division of the 2021 MLPerf™ Inference v1.0 Benchmark. It is also the only submitted GPU server that ran the NVIDIA A100 GPU at 500W TDP via liquid cooling.</p>



<p>Deployment-wise, the Inspur NF5488LA5 can be connected to a mobile Coolant Distribution Unit (CDU). After connecting it to the RACKCDU-F008 mobile liquid-cooling CDU with quick-release connectors, customers can place the units directly in a general air-cooling cabinet, without setting up primary-side cooling units or rearranging the entire cooling system in the server room. Liquid-cooled capacity can then be scaled up by stacking such units inside the cabinet. This innovation solves the long-standing deployment and scalability problems of liquid-cooled servers.</p>
<p>The post <a href="https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/">Inspur releases liquid cooled AI server with NVIDIA A100 GPUs at ISC High Performance Digital 2021</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Cirrascale Cloud Services Broadens Deep Learning Cloud Offerings With World’s Most Powerful GPU For AI Supercomputing</title>
		<link>https://www.aiuniverse.xyz/cirrascale-cloud-services-broadens-deep-learning-cloud-offerings-with-worlds-most-powerful-gpu-for-ai-supercomputing/</link>
					<comments>https://www.aiuniverse.xyz/cirrascale-cloud-services-broadens-deep-learning-cloud-offerings-with-worlds-most-powerful-gpu-for-ai-supercomputing/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 09 Jun 2021 06:30:09 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Broadens]]></category>
		<category><![CDATA[Cirrascale]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Services]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14134</guid>

					<description><![CDATA[<p>Source &#8211; https://aithority.com/ Cirrascale Cloud Services, a premier cloud services provider of deep learning infrastructure solutions for autonomous vehicles, natural language processing, and computer vision workflows, announced its dedicated, multi-GPU deep learning cloud servers support the NVIDIA A100 80GB and A30 Tensor Core GPUs. With record-setting performance across every category on the latest release of MLPerf, <a class="read-more-link" href="https://www.aiuniverse.xyz/cirrascale-cloud-services-broadens-deep-learning-cloud-offerings-with-worlds-most-powerful-gpu-for-ai-supercomputing/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/cirrascale-cloud-services-broadens-deep-learning-cloud-offerings-with-worlds-most-powerful-gpu-for-ai-supercomputing/">Cirrascale Cloud Services Broadens Deep Learning Cloud Offerings With World’s Most Powerful GPU For AI Supercomputing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://aithority.com/</p>



<p>Cirrascale Cloud Services, a premier cloud services provider of deep learning infrastructure solutions for autonomous vehicles, natural language processing, and computer vision workflows, announced its dedicated, multi-GPU deep learning cloud servers support the NVIDIA A100 80GB and A30 Tensor Core GPUs. With record-setting performance across every category on the latest release of MLPerf, these latest offerings provide enterprise customers with mainstream options for a broad range of AI inference, training, graphics, and traditional enterprise compute workloads.</p>



<p>“Model sizes and datasets in general are growing fast and our customers are searching for the best solutions to increase overall performance and memory bandwidth to tackle their workloads in record time,” said Mike LaPan, vice president, Cirrascale Cloud Services. “The NVIDIA A100 80GB Tensor Core GPU delivers this and more. Along with the new A30 Tensor Core GPU with 24GB HBM2 memory, these GPUs enable today’s elastic data center and deliver maximum value for enterprises.”</p>



<p>The NVIDIA A100 80GB Tensor Core GPU introduces groundbreaking features to optimize inference workloads. It accelerates a full range of precisions, from FP32 down to INT4. Multi-Instance GPU (MIG) technology enables up to seven instances, each with up to 10GB of memory, to operate simultaneously on a single A100 for optimal utilization of compute resources. Structural sparsity support delivers up to 2X more performance on top of the A100 GPU’s other inference performance gains. The A100 provides up to 20x higher performance than the prior NVIDIA Volta® generation, and on modern conversational AI models like BERT Large it accelerates inference throughput by 100x over CPUs.</p>



<p>Also available through Cirrascale Cloud Services is the NVIDIA A30 Tensor Core GPU, which delivers versatile performance supporting a broad range of AI inference and mainstream enterprise compute workloads, such as recommender systems, conversational AI and computer vision. The A30 also supports MIG technology, delivering superior price/performance with up to 4 instances containing 6GB of memory, perfectly suited to handle entry-level applications. Cirrascale’s accelerated cloud server solutions with NVIDIA A30 GPUs provide the needed compute power — along with large HBM2 memory, 933GB/sec of memory bandwidth, and scalability with NVIDIA NVLink® interconnect technology — to tackle massive datasets and turn them into valuable insights.</p>



<p>“Customers deploying the world’s most powerful GPUs within Cirrascale Cloud Services can accelerate their compute-intensive machine learning and AI workflows better than ever,” said Paresh Kharya, senior director of Product Management, Data Center Computing at NVIDIA.</p>
<p>The post <a href="https://www.aiuniverse.xyz/cirrascale-cloud-services-broadens-deep-learning-cloud-offerings-with-worlds-most-powerful-gpu-for-ai-supercomputing/">Cirrascale Cloud Services Broadens Deep Learning Cloud Offerings With World’s Most Powerful GPU For AI Supercomputing</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/cirrascale-cloud-services-broadens-deep-learning-cloud-offerings-with-worlds-most-powerful-gpu-for-ai-supercomputing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deno 1.8 preps for GPU-accelerated machine learning</title>
		<link>https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning-2/</link>
					<comments>https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning-2/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 05 Mar 2021 07:25:34 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[accelerated]]></category>
		<category><![CDATA[Deno 1.8]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[preps]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13270</guid>

					<description><![CDATA[<p>Source &#8211; https://www.arnnet.com.au/ WebGPU API for GPU rendering and computation is supported in the latest upgrade to the JavaScript/TypeScript runtime. Deno 1.8, released on March 2, offers preliminary support for an API to bring enhanced machine learning to the secure JavaScript/TypeScript runtime. Experimental backing for the WebGPU API, for performing operations such as rendering and computation on <a class="read-more-link" href="https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning-2/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning-2/">Deno 1.8 preps for GPU-accelerated machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.arnnet.com.au/</p>



<p>WebGPU API for GPU rendering and computation is supported in the latest upgrade to the JavaScript/TypeScript runtime.</p>



<p>Deno 1.8, released on March 2, offers preliminary support for an API to bring enhanced machine learning to the secure JavaScript/TypeScript runtime.</p>



<p>Experimental backing for the WebGPU API, for performing operations such as rendering and computation on a GPU, provides a path toward out-of-the-box GPU accelerated machine learning in Deno, release notes said.</p>



<p>The WebGPU API gives developers a low-level, high-performance cross-architecture mechanism to program GPU hardware from JavaScript. It serves as the effective successor to WebGL on the web.</p>
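<p>As a minimal sketch (not from the release notes) of what programming the GPU from JavaScript/TypeScript looks like, the following probes for a WebGPU adapter; <code>probeWebGPU</code> is a hypothetical helper name, and in Deno 1.8 the API is experimental and sits behind the <code>--unstable</code> flag.</p>

```typescript
// A minimal illustrative sketch, not from the release notes: probing for a
// WebGPU adapter. navigator.gpu is simply absent in runtimes without WebGPU
// support, so the optional chain lets this degrade gracefully everywhere.
async function probeWebGPU(): Promise<string> {
  const gpu = (globalThis as any).navigator?.gpu;
  if (!gpu) return "WebGPU not available";
  // requestAdapter() resolves to null when no suitable GPU is found.
  const adapter = await gpu.requestAdapter();
  return adapter ? "GPU adapter acquired" : "no suitable GPU adapter";
}

probeWebGPU().then((status) => console.log(status));
```

<p>On a machine without GPU support (or without <code>--unstable</code>), this reports the missing API instead of throwing.</p>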



<p>The WebGPU spec has not been finalised, but support is being added to browsers such as Chromium, Firefox, and Safari, the Deno release notes state. GPU usage in machine learning has enabled more complex neural networks, or deep learning.</p>



<p>Deno’s developers contend that while most neural networks are defined in Python, JavaScript could be used as an ideal language for expressing mathematical ideas if proper infrastructure existed. Providing WebGPU support out-of-the-box in Deno is cited as a step in this direction. The goal is to run <a rel="noreferrer noopener" href="https://www.infoworld.com/article/3305340/tensorflowjs-puts-machine-learning-in-the-browser.html" target="_blank">TensorFlow.js</a> on Deno, with GPU acceleration.</p>



<p>Installation instructions for Deno 1.8 can be found at deno.land. Those with Deno already installed can access Deno 1.8 by running <code>deno upgrade</code>.</p>



<p>Other capabilities in Deno 1.8 include built-in internationalisation APIs, which are now enabled: <code>JS Intl</code> APIs work out of the box. Import maps, for controlling the behaviour of JavaScript imports, are now stabilised.</p>
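<p>As an illustrative sketch (not from the article) of the built-in internationalisation support, standard <code>Intl</code> calls work with no polyfill or flag:</p>

```typescript
// Illustrative sketch: the built-in Intl APIs work out of the box in Deno 1.8.
const grouped = new Intl.NumberFormat("en-US").format(1234567.5);
console.log(grouped); // "1,234,567.5"

// Deno 1.8 shipped on March 2, 2021 -- a Tuesday.
const weekday = new Intl.DateTimeFormat("en-US", {
  weekday: "long",
  timeZone: "UTC",
}).format(new Date(Date.UTC(2021, 2, 2)));
console.log(weekday); // "Tuesday"
```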



<p>In addition, support for fetching private modules is now stabilised: developers can fetch remote modules from a private server using auth tokens. Furthermore, coverage infrastructure has been expanded, with coverage handling split into coverage collection and coverage reporting.</p>



<p>Deno 1.8 follows the January 19 release of Deno 1.7. The platform arose as an attempt to provide a more secure alternative to Node.js, with a better module system. Deno’s development was spearheaded by Node.js creator Ryan Dahl.</p>



<p>The post <a href="https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning-2/">Deno 1.8 preps for GPU-accelerated machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deno 1.8 preps for GPU-accelerated machine learning</title>
		<link>https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 04 Mar 2021 10:43:39 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[1.8]]></category>
		<category><![CDATA[accelerated]]></category>
		<category><![CDATA[Deno]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[preps]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13235</guid>

					<description><![CDATA[<p>Source &#8211; https://www.infoworld.com/ WebGPU API for GPU rendering and computation is supported in the latest upgrade to the JavaScript/TypeScript runtime. Deno 1.8, released on March 2, offers preliminary support for an API to bring enhanced machine learning to the secure JavaScript/TypeScript runtime. Experimental backing for the WebGPU API, for performing operations such as rendering and computation on <a class="read-more-link" href="https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning/">Deno 1.8 preps for GPU-accelerated machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.infoworld.com/</p>



<p>WebGPU API for GPU rendering and computation is supported in the latest upgrade to the JavaScript/TypeScript runtime.</p>



<p>Deno 1.8, released on March 2, offers preliminary support for an API to bring enhanced machine learning to the secure JavaScript/TypeScript runtime.</p>



<p>Experimental backing for the WebGPU API, for performing operations such as rendering and computation on a GPU, provides a path toward out-of-the-box GPU accelerated machine learning in Deno, release notes said. The WebGPU API gives developers a low-level, high-performance cross-architecture mechanism to program GPU hardware from JavaScript. It serves as the effective successor to WebGL on the web.</p>



<p>The WebGPU spec has not been finalized, but support is being added to browsers such as Chromium, Firefox, and Safari, the Deno release notes state. GPU usage in machine learning has enabled more complex neural networks, or deep learning. Deno’s developers contend that while most neural networks are defined in Python, JavaScript could be used as an ideal language for expressing mathematical ideas if proper infrastructure existed. Providing WebGPU support out-of-the-box in Deno is cited as a step in this direction. The goal is to run TensorFlow.js on Deno, with GPU acceleration.</p>



<p>Installation instructions for Deno 1.8 can be found at deno.land. Those with Deno already installed can access Deno 1.8 by running <code>deno upgrade</code>.</p>



<p>Other capabilities in Deno 1.8 include:</p>



<ul class="wp-block-list"><li>Built-in internationalization APIs have been enabled. <code>JS Intl</code> APIs work out of the box.</li><li>Import maps, for controlling the behavior of JavaScript imports, are now stabilized.</li><li>Support for fetching private modules is now stabilized. Developers can fetch remote modules from a private server using auth tokens.</li><li>Coverage infrastructure has been expanded, with coverage handling split into coverage collection and coverage reporting.</li></ul>
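<p>To illustrate the now-stabilised import maps (the file name and mapping below are hypothetical, not from the release notes), a map lets bare specifiers resolve to full URLs:</p>

```json
{
  "imports": {
    "lodash": "https://cdn.skypack.dev/lodash"
  }
}
```

<p>Saved as, say, <code>import_map.json</code>, it is applied with <code>deno run --import-map=import_map.json mod.ts</code>, after which an import of <code>"lodash"</code> resolves to the mapped URL.</p>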



<p>Deno 1.8 follows the January 19 release of Deno 1.7. The platform arose as an attempt to provide a more secure alternative to Node.js, with a better module system. Deno’s development was spearheaded by Node.js creator Ryan Dahl.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning/">Deno 1.8 preps for GPU-accelerated machine learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deno-1-8-preps-for-gpu-accelerated-machine-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Supermicro announces integrated A100 GPU-powered systems</title>
		<link>https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/</link>
					<comments>https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 18 May 2020 06:06:51 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[HPC]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[supercomputer]]></category>
		<category><![CDATA[Supermicro]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8831</guid>

					<description><![CDATA[<p>Source: datacentrenews.eu Super Micro Computer has announced two new systems designed for artificial intelligence (AI) deep learning applications that leverage the third-generation NVIDIA HGX technology with the new NVIDIA A100 Tensor Core GPUs as well as full support for the new NVIDIA A100 GPUs across the company’s broad portfolio of 1U, 2U, 4U and 10U <a class="read-more-link" href="https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/">Supermicro announces integrated A100 GPU-powered systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: datacentrenews.eu</p>



<p>Super Micro Computer has announced two new systems designed for artificial intelligence (AI) deep learning applications that leverage the third-generation NVIDIA HGX technology with the new NVIDIA A100 Tensor Core GPUs as well as full support for the new NVIDIA A100 GPUs across the company’s broad portfolio of 1U, 2U, 4U and 10U GPU servers.&nbsp;</p>



<p>NVIDIA A100 is the first elastic, multi-instance GPU that unifies training, inference, HPC, and analytics.</p>



<p>“Expanding upon our portfolio of GPU systems and NVIDIA HGX-2 system technology, Supermicro is introducing a new 2U system implementing the new NVIDIA HGX A100 4 GPU board (formerly codenamed Redstone) and a new 4U system based on the new NVIDIA HGX A100 8 GPU board (formerly codenamed Delta) delivering 5 PetaFLOPS of AI performance,” says Supermicro CEO and president Charles Liang.&nbsp;</p>



<p>“As GPU accelerated computing evolves and continues to transform data centers, Supermicro will provide customers the very latest system advancements to help them achieve maximum acceleration at every scale while optimising GPU utilisation. These new systems will significantly boost performance on all accelerated workloads for HPC, data analytics, deep learning training and deep learning inference.”</p>



<p>As a balanced data centre platform for HPC and AI applications, Supermicro’s new 2U system leverages the NVIDIA HGX A100 4 GPU board with four direct-attached NVIDIA A100 Tensor Core GPUs using PCI-E 4.0 for maximum performance and NVIDIA NVLink for high-speed GPU-to-GPU interconnects.&nbsp;</p>



<p>This GPU system accelerates compute, networking and storage performance with support for one PCI-E 4.0 x8 and up to four PCI-E 4.0 x16 expansion slots for GPUDirect RDMA high-speed network cards and storage such as InfiniBand HDR, which supports up to 200Gb per second bandwidth.&nbsp;</p>



<p>“AI models are exploding in complexity as they take on next-level challenges such as accurate conversational AI, deep recommender systems and personalised medicine,” says NVIDIA accelerated computing general manager and vice president Ian Buck.</p>



<p>“By implementing the NVIDIA HGX A100 platform into their new servers, Supermicro provides customers the powerful performance and massive scalability that enable researchers to train the most complex AI networks at unprecedented speed.”</p>



<p>Optimised for AI and machine learning, Supermicro’s new 4U system supports eight A100 Tensor Core GPUs.&nbsp;</p>



<p>The 4U form factor with eight GPUs is ideal for customers that want to scale their deployment as their processing requirements expand.&nbsp;</p>



<p>The new 4U system will have one NVIDIA HGX A100 8 GPU board with eight A100 GPUs all-to-all connected with NVIDIA NVSwitch for up to 600GB per second GPU-to-GPU bandwidth and eight expansion slots for GPUDirect RDMA high-speed network cards.&nbsp;</p>



<p>Ideal for deep learning training, this scale-up platform lets data centres create next-gen AI and maximise data scientists’ productivity, with support for ten x16 expansion slots.</p>



<p>Customers can expect a significant performance boost across Supermicro’s extensive portfolio of 1U, 2U, 4U and 10U multi-GPU servers when they are equipped with the new NVIDIA A100 GPUs.&nbsp;&nbsp;</p>



<p>For maximum acceleration, Supermicro’s new A+ GPU system supports up to eight full-height double-wide (or single-wide) GPUs via direct-attach PCI-E 4.0 x16 CPU-to-GPU lanes without any PCI-E switch for the lowest latency and highest bandwidth.&nbsp;</p>



<p>The system also supports up to three additional high-performance PCI-E 4.0 expansion slots for a variety of uses, including high-performance networking connectivity up to 100G. An additional AIOM slot supports a Supermicro AIOM card or an OCP 3.0 mezzanine card.</p>



<p>With 1U, 2U, 4U, and 10U rackmount GPU systems; Ultra, BigTwin, and embedded systems supporting GPUs; as well as GPU blade modules for our 8U SuperBlade, Supermicro offers the industry’s widest and deepest selection of GPU systems to power applications from Edge to Cloud.</p>



<p>To deliver enhanced security and unprecedented performance at the edge, Supermicro plans to add the new NVIDIA EGX A100 configuration to its edge server portfolio.</p>



<p>The EGX A100 converged accelerator combines a Mellanox SmartNIC with GPUs powered by the new NVIDIA Ampere architecture, so enterprises can run AI at the edge more securely.</p>
<p>The post <a href="https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/">Supermicro announces integrated A100 GPU-powered systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>NEWS Virtualized GPUs Target Deep Learning Workloads on Kubernetes</title>
		<link>https://www.aiuniverse.xyz/news-virtualized-gpus-target-deep-learning-workloads-on-kubernetes/</link>
					<comments>https://www.aiuniverse.xyz/news-virtualized-gpus-target-deep-learning-workloads-on-kubernetes/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 08 May 2020 12:11:24 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Virtualized]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8677</guid>

					<description><![CDATA[<p>Source: virtualizationreview.com Israel-based Run:AI, specializing in virtualizing artificial intelligence (AI) infrastructure, claimed an industry first in announcing a fractional GPU sharing system for deep learning workloads on Kubernetes. The company offers a namesake Run:AI platform built on top of Kubernetes to virtualize AI infrastructure in order to improve on the typical bare-metal approach that statically provisions AI <a class="read-more-link" href="https://www.aiuniverse.xyz/news-virtualized-gpus-target-deep-learning-workloads-on-kubernetes/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/news-virtualized-gpus-target-deep-learning-workloads-on-kubernetes/">NEWS Virtualized GPUs Target Deep Learning Workloads on Kubernetes</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: virtualizationreview.com</p>



<p>Israel-based Run:AI, specializing in virtualizing artificial intelligence (AI) infrastructure, claimed an industry first in announcing a fractional GPU sharing system for deep learning workloads on Kubernetes.</p>



<p>The company offers a namesake Run:AI platform built on top of Kubernetes to virtualize AI infrastructure in order to improve on the typical bare-metal approach that statically provisions AI workloads to data scientists. The firm says that approach comes with limits on experiment size and speed, low GPU utilization, and lack of IT controls.</p>



<p>Creating a virtual pool of GPU (graphics processing unit) resources, the company says, abstracts data science workloads from the underlying infrastructure and simplifies workflows.</p>



<p>In an announcement today (May 6), Run:AI said its fractional GPU system lets data science and AI engineering teams run multiple workloads simultaneously on a single GPU, helping organizations run more workloads such as computer vision, voice recognition and natural language processing on the same hardware, lowering costs.</p>



<p>To overcome some limitations in how Kubernetes handles GPUs, the company resorted to some tricky math, effectively marking them as floats that can be fractionalized for use in containers, rather than integers that either exist or don&#8217;t.</p>



<p>&#8220;Today’s de facto standard for deep learning workloads is to run them in containers orchestrated by Kubernetes,&#8221; the company said. &#8220;However, Kubernetes is only able to allocate whole physical GPUs to containers, lacking the isolation and virtualization capabilities needed to allow GPU resources to be shared without memory overflows or processing clashes.&#8221;</p>
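<p>For context on that limitation (standard Kubernetes behaviour, not Run:AI internals): with the stock NVIDIA device plugin, a pod requests GPUs through the <code>nvidia.com/gpu</code> extended resource, which only accepts whole integers. A minimal illustrative manifest (the pod name and image are hypothetical):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: training-job                  # hypothetical name
spec:
  containers:
    - name: worker
      image: example.com/dl-worker:latest   # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1           # whole GPUs only; "0.5" would be rejected
```

<p>Run:AI’s fractional scheme layers on top of this; the article does not give its exact syntax, so none is shown here.</p>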



<p>The result of the company&#8217;s work to overcome that limitation is a set of virtualized logical GPUs, each with its own memory and compute space, which appear to containers as self-contained processors.</p>



<p>Fractional sharing is especially useful for lightweight workloads, including inference: eight or more container-run jobs can share the same physical chip, whereas typical use cases allow only two to four jobs per GPU.</p>
<p>The post <a href="https://www.aiuniverse.xyz/news-virtualized-gpus-target-deep-learning-workloads-on-kubernetes/">NEWS Virtualized GPUs Target Deep Learning Workloads on Kubernetes</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/news-virtualized-gpus-target-deep-learning-workloads-on-kubernetes/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
