<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>NVIDIA A100 Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/nvidia-a100/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/nvidia-a100/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 29 Jun 2021 11:00:26 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Inspur releases liquid cooled AI server with NVIDIA A100 GPUs at ISC High Performance Digital 2021</title>
		<link>https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/</link>
					<comments>https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 29 Jun 2021 11:00:25 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[cooled]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Inspur]]></category>
		<category><![CDATA[liquid]]></category>
		<category><![CDATA[NVIDIA A100]]></category>
		<category><![CDATA[releases]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14651</guid>

					<description><![CDATA[<p>Source &#8211; https://www.hpcwire.com/ NF5488LA5, boasting high-efficiency liquid-cooling, ranks No.1 in 11 of the 16 tests in the closed data center division of the 2021 MLPerf™ Inference v1.0 <a class="read-more-link" href="https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/">Inspur releases liquid cooled AI server with NVIDIA A100 GPUs at ISC High Performance Digital 2021</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.hpcwire.com/</p>



<p><em>NF5488LA5, boasting high-efficiency liquid-cooling, ranks No.1 in 11 of the 16 tests in the closed data center division of the 2021 MLPerf™ Inference v1.0 Benchmark.</em></p>



<p>SAN JOSE, Calif. – June 28, 2021 – Today at ISC High Performance 2021 Digital, the event for high performance computing, machine learning, and data analytics, Inspur Information, a leading IT infrastructure solution provider, announces its new liquid-cooled AI server, NF5488LA5. Designed with a liquid cold plate and support for up to eight high-speed, interconnected NVIDIA<sup>®</sup> A100 Tensor Core GPUs via NVSwitch, the new offering is ideal for customers who need a high-performance, energy-efficient AI server.</p>



<p>Designed to meet the energy-saving needs required by High-Performance Computing (HPC) and Artificial Intelligence (AI), the new NF5488LA5 is an update on Inspur’s leading AI server NF5488A5, but now boasts liquid-cooling technology and supports the latest NVIDIA A100 Tensor Core GPU.</p>



<p>The NF5488LA5 is equipped with two AMD EPYC 7003 series processors and eight NVIDIA A100 Tensor Core GPUs in a 4U chassis, fully connected by NVSwitch. GPU-to-GPU communication bandwidth reaches 600 GB/s, enabling lower latency. The system topology adopts an ultra-low-latency design to maximize communication performance between the processors and the AI accelerators. With greatly improved cooling efficiency enabled by industry-leading warm-water cooling technology, the new server meets extreme computing needs in science, simulation, and AI.</p>



<p>The liquid cold plate on the NF5488LA5 covers the CPUs, GPUs, and NVSwitches. Liquid cooling accounts for 80% of the system&#8217;s total heat dissipation, effectively reducing Power Usage Effectiveness (PUE) to 1.1. The GPU cold plate is meticulously designed with four water loops connected in parallel, which lets the liquid flow consecutively over the surfaces of the GPUs and NVSwitches for high-efficiency cooling of the components that generate the most heat. High-efficiency liquid cooling is among the major reasons that the NF5488LA5 ranks No. 1 in 11 of the 16 tests in the closed data center division of the 2021 MLPerf™ Inference v1.0 Benchmark. It is also the only submitted GPU server that ran the NVIDIA A100 GPU at a 500W TDP via liquid cooling.</p>
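As a rough illustration of what a PUE of 1.1 means, the sketch below applies the standard definition of Power Usage Effectiveness to a hypothetical rack load (the 100 kW figure is an assumption for illustration, not from the article):

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of 1.1 means only ~10% of facility power goes to cooling and overhead.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# Hypothetical rack drawing 100 kW of IT load inside a 110 kW facility budget:
print(round(pue(110.0, 100.0), 2))  # 1.1
```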



<p>Deployment-wise, the Inspur NF5488LA5 can be connected to a mobile Coolant Distribution Unit (CDU). After connecting it to the RACKCDU-F008 mobile liquid-cooling CDU with quick-release connectors, customers can place the units directly in a general air-cooling cabinet, without having to set up primary-side cooling units or rearrange the entire cooling system in the server room. Liquid-cooled servers can be scaled up by stacking such units inside the cabinet. This innovation solves the long-standing deployment and scalability problems faced by liquid-cooled servers.</p>
<p>The post <a href="https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/">Inspur releases liquid cooled AI server with NVIDIA A100 GPUs at ISC High Performance Digital 2021</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/inspur-releases-liquid-cooled-ai-server-with-nvidia-a100-gpus-at-isc-high-performance-digital-2021/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>NVIDIA A100, A40 and NVIDIA RTX A6000 Ampere Architecture-Based Professional GPUs Transform Data Science and Big Data Analytics</title>
		<link>https://www.aiuniverse.xyz/nvidia-a100-a40-and-nvidia-rtx-a6000-ampere-architecture-based-professional-gpus-transform-data-science-and-big-data-analytics/</link>
					<comments>https://www.aiuniverse.xyz/nvidia-a100-a40-and-nvidia-rtx-a6000-ampere-architecture-based-professional-gpus-transform-data-science-and-big-data-analytics/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 09 Mar 2021 05:02:02 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[A40]]></category>
		<category><![CDATA[Ampere]]></category>
		<category><![CDATA[Architecture]]></category>
		<category><![CDATA[based]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[GPUs]]></category>
		<category><![CDATA[NVIDIA A100]]></category>
		<category><![CDATA[NVIDIA RTX A6000]]></category>
		<category><![CDATA[Professional]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13326</guid>

					<description><![CDATA[<p>Source &#8211; https://insidebigdata.com/ Scientists, researchers, and engineers are solving the world’s most important scientific, industrial, and big data challenges with AI and high-performance computing (HPC). Businesses, even <a class="read-more-link" href="https://www.aiuniverse.xyz/nvidia-a100-a40-and-nvidia-rtx-a6000-ampere-architecture-based-professional-gpus-transform-data-science-and-big-data-analytics/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/nvidia-a100-a40-and-nvidia-rtx-a6000-ampere-architecture-based-professional-gpus-transform-data-science-and-big-data-analytics/">NVIDIA A100, A40 and NVIDIA RTX A6000 Ampere Architecture-Based Professional GPUs Transform Data Science and Big Data Analytics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://insidebigdata.com/</p>



<p>Scientists, researchers, and engineers are solving the world’s most important scientific, industrial, and big data challenges with AI and high-performance computing (HPC). Businesses, even entire industries, harness the power of AI to extract new insights from massive data sets, both on-premises and in the cloud. NVIDIA Ampere architecture-based products, like the NVIDIA A100 or the NVIDIA RTX A6000, designed for the age of elastic computing, deliver the next giant leap by providing unmatched acceleration at every scale, enabling innovators to push the boundaries of human knowledge and creativity forward.</p>



<p>NVIDIA Ampere architecture-based products implement ground-breaking innovations. Third-generation Tensor Cores deliver dramatic speedups to AI, reducing training times from weeks to hours and providing massive inference acceleration. Two new precisions, Tensor Float 32 (TF32) and Floating Point 64 (FP64, NVIDIA A100 only), accelerate AI adoption and extend the power of Tensor Cores to HPC.</p>



<p>TF32 works just like FP32 while delivering speedups of up to 10x for AI, with no code changes required, when utilizing sparsity. Automatic mixed precision and FP16 can be invoked for further performance optimization by adding just a couple of lines of code. With support for bfloat16, INT8, and INT4, NVIDIA&#8217;s third-generation Tensor Cores are an incredibly versatile accelerator for AI training and inference. By bringing the power of Tensor Cores to HPC, the NVIDIA A100 enables matrix operations at up to full, IEEE-compliant FP64 precision.</p>
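The mechanism that makes automatic mixed precision safe is loss scaling: FP16 cannot represent very small gradient values, so they are multiplied by a scale factor before being cast down and divided out again in FP32. A minimal NumPy sketch of the idea (conceptual only, not the actual framework API; the 2^14 scale factor is a typical illustrative choice):

```python
import numpy as np

# A gradient value too small for FP16: it underflows to zero when cast.
grad_fp32 = np.float32(1e-8)
underflowed = np.float16(grad_fp32)           # becomes 0.0, information lost

# Loss scaling: multiply before casting to FP16, divide back in FP32.
scale = np.float32(2 ** 14)                   # typical power-of-two scale
scaled_fp16 = np.float16(grad_fp32 * scale)   # now representable in FP16
recovered = np.float32(scaled_fp16) / scale   # unscale in full precision

print(underflowed, recovered)                 # 0.0, then a value close to 1e-8
```

Frameworks automate exactly this bookkeeping, which is why enabling it takes only a couple of lines.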



<p>Every AI, data science, and HPC application can benefit from acceleration, but not every application needs the performance of a full Ampere architecture-based GPU. With Multi-Instance GPU (MIG), supported by the A100, the GPU can be partitioned into up to seven GPU instances, fully isolated and secured at the hardware level with their own high-bandwidth memory, cache, and compute cores. This brings breakthrough acceleration to all applications, big and small, and delivers guaranteed quality of service. IT administrators can offer right-sized GPU acceleration for optimal utilization and expand access to every user and application across bare-metal and virtualized environments.</p>



<p>The A100 SXM4 configuration with 40 GB of GPU memory brings massive amounts of compute performance to data centers. To keep these compute engines fully utilized, the A100 delivers a class-leading 1.6 terabytes per second (TB/s) of memory bandwidth, a 67 percent increase over the previous generation. The A100 also has significantly more on-chip memory, including a 40 megabyte (MB) level 2 cache, 7x larger than the previous generation&#8217;s, to maximize compute performance. The PCIe board version retains the 40 GB of HBM2 GPU memory, with a memory bus width of 5120 bits and a peak memory bandwidth of up to 1555 GB/s, easily taking the performance crown from the prior-generation Tesla V100.</p>
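The 1555 GB/s figure can be sanity-checked from the 5120-bit bus width; the ~2.43 Gbit/s per-pin HBM2 data rate used below is inferred from the article's numbers rather than stated in it:

```python
bus_width_bits = 5120   # HBM2 memory bus width, from the article
data_rate_gtps = 2.43   # per-pin data rate in Gbit/s (inferred, an assumption)

# bandwidth (GB/s) = (bus width in bytes) * (per-pin data rate in GT/s)
bandwidth_gb_s = bus_width_bits / 8 * data_rate_gtps
print(round(bandwidth_gb_s, 1))  # 1555.2
```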



<p>Scaling applications across multiple GPUs requires extremely fast movement of data. Third-generation NVIDIA NVLink in the A100 SXM4 doubles the GPU-to-GPU direct bandwidth to 600 gigabytes per second (GB/s), almost 10x higher than PCIe Gen 4. The PCIe 4.0 A100 implementation also features a total maximum NVLink bandwidth of 600 GB/s. NVIDIA DGX A100 servers can take advantage of NVLink and NVSwitch technology via NVIDIA HGX A100 baseboards to deliver greater scalability for HPC and AI workloads. For those who prefer to deploy on PCIe motherboards, the NVIDIA A100 PCIe option fully supports NVLink.</p>
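The "almost 10x" claim checks out against a PCIe Gen 4 x16 link, which tops out around 64 GB/s bidirectional; the PCIe figure below is the spec's nominal rate, not taken from the article:

```python
nvlink_gb_s = 600        # third-gen NVLink GPU-to-GPU bandwidth, from the article
pcie4_x16_gb_s = 64      # PCIe 4.0 x16, ~32 GB/s per direction (spec nominal)

speedup = nvlink_gb_s / pcie4_x16_gb_s
print(round(speedup, 1))  # 9.4
```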



<p>Contemporary AI networks are big and getting bigger, with millions, and in some cases billions, of parameters. Not all of these are necessary for accurate predictions and inference, and some can be converted to zeros to make models &#8220;sparse&#8221; without compromising accuracy. Ampere architecture-based Tensor Cores in the NVIDIA A100 and RTX A6000 provide up to 10x higher performance for sparse models. While the sparsity feature most readily benefits AI inference, it can also be used to improve the performance of model training.</p>



<p>NVIDIA Ampere architecture-based second-generation RT Cores in the NVIDIA RTX A6000 and NVIDIA A40 GPUs deliver massive speedups for big data analytics, data science, AI, and HPC use cases where seeing (visualizing) the problem is essential to solving the problem. RT Cores enable real-time ray tracing for photorealistic results and work synergistically with Tensor Cores to deliver AI denoising and other productivity enhancing features.</p>
<p>The post <a href="https://www.aiuniverse.xyz/nvidia-a100-a40-and-nvidia-rtx-a6000-ampere-architecture-based-professional-gpus-transform-data-science-and-big-data-analytics/">NVIDIA A100, A40 and NVIDIA RTX A6000 Ampere Architecture-Based Professional GPUs Transform Data Science and Big Data Analytics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/nvidia-a100-a40-and-nvidia-rtx-a6000-ampere-architecture-based-professional-gpus-transform-data-science-and-big-data-analytics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
