<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Nvidia Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/nvidia/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/nvidia/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 20 Feb 2021 05:37:34 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Nvidia Opens the Door to Deep Learning Workshops</title>
		<link>https://www.aiuniverse.xyz/nvidia-opens-the-door-to-deep-learning-workshops/</link>
					<comments>https://www.aiuniverse.xyz/nvidia-opens-the-door-to-deep-learning-workshops/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 20 Feb 2021 05:37:32 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Door]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Workshops]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12948</guid>

					<description><![CDATA[<p>Source &#8211; https://www.datanami.com/ Good news for folks looking to learn about the latest AI development techniques: Nvidia is now allowing the general public to access the online <a class="read-more-link" href="https://www.aiuniverse.xyz/nvidia-opens-the-door-to-deep-learning-workshops/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/nvidia-opens-the-door-to-deep-learning-workshops/">Nvidia Opens the Door to Deep Learning Workshops</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.datanami.com/</p>



<p>Good news for folks looking to learn about the latest AI development techniques: Nvidia is now allowing the general public to access the online workshops it provides through its Deep Learning Institute (DLI).</p>



<p>The GPU giant announced today that selected workshops in the DLI catalog will be open to everybody. These workshops were previously available only to companies that wanted specialized training for their in-house developers, or to folks who had attended the company’s GPU Technology Conferences.</p>



<p>Two of the open courses will take place next month, including “Fundamentals of Accelerated Computing with CUDA Python,” which explores developing parallel workloads with CUDA and NumPy and costs $500. There is also “Applications of AI for Predictive Maintenance,” which explores technologies like XGBoost, LSTM, Keras, and TensorFlow, and costs $700. Certificates are available for those who complete the workshops.</p>



<p>The course fees cover access to GPU-accelerated development servers in the cloud, as well as learning materials and instructors. Thirteen more courses will be held throughout April and May 2021; the full schedule is available on Nvidia’s website.</p>



<p>Nvidia’s DLI includes about 40 courses across four topic areas: deep learning, accelerated computing, data science, and infrastructure. Some of the courses are free, while others are available for a fee.</p>



<p>Demand for deep learning and AI training has increased in recent years. Nvidia says that it trained 75,000 developers through the DLI in 2020, a 36% increase over 2019. Opening the workshops up to the general public will increase participation, according to Will Ramey, global head of Developer Programs at Nvidia.</p>



<p>“Our public workshops provide a great opportunity for individual developers and smaller organizations to get industry-leading training in deep learning, accelerated computing and data science,” Ramey says in a blog. “Now the same expert instructors and world-class learning materials that help accelerate innovation at leading companies are available to everyone.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/nvidia-opens-the-door-to-deep-learning-workshops/">Nvidia Opens the Door to Deep Learning Workshops</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/nvidia-opens-the-door-to-deep-learning-workshops/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>NVIDIA NeMo: An Open-Source Toolkit For Developing State-Of-The-Art Conversational AI Models In Three Lines Of Code</title>
		<link>https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/</link>
					<comments>https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 10 Oct 2020 06:09:52 +0000</pubDate>
				<category><![CDATA[PyTorch]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Developing]]></category>
		<category><![CDATA[Neural modules]]></category>
		<category><![CDATA[Nvidia]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12093</guid>

					<description><![CDATA[<p>Source: marktechpost.com NVIDIA’s open-source toolkit, NVIDIA NeMo (Neural Modules), is a revolutionary step towards the advancement of Conversational AI. Based on PyTorch, it allows one to build <a class="read-more-link" href="https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/">NVIDIA NeMo: An Open-Source Toolkit For Developing State-Of-The-Art Conversational AI Models In Three Lines Of Code</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: marktechpost.com</p>



<p>NVIDIA’s open-source toolkit, NVIDIA NeMo (Neural Modules), is a significant step forward for Conversational AI. Based on PyTorch, it allows one to quickly build, train, and fine-tune conversational AI models.</p>



<p>As the world becomes more digital, Conversational AI is a way to enable communication between humans and computers. It is the set of technologies behind applications like automated messaging, speech recognition, voice chatbots, and text-to-speech. It broadly comprises three areas of AI research: automatic speech recognition (ASR), natural language processing (NLP), and speech synthesis (or text-to-speech, TTS). </p>



<p>Conversational AI has shaped the path of human-computer interaction, making it more accessible and exciting. The latest advancements in Conversational AI like NVIDIA NeMo help bridge the gap between machines and humans.</p>



<p>NVIDIA NeMo consists of two subparts: NeMo Core and NeMo Collections. NeMo Core provides common functionality for all models, whereas NeMo Collections provides domain-specific models and building blocks. In NeMo’s speech collection (nemo_asr), you’ll find models and various building blocks for speech recognition, command recognition, speaker identification, speaker verification, and voice activity detection. NeMo’s NLP collection (nemo_nlp) contains models for tasks such as question answering, punctuation, named entity recognition, and many others. Finally, in NeMo’s speech synthesis collection (nemo_tts), you’ll find several spectrogram generators and vocoders, which let you generate synthetic speech.</p>
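<p>The “three lines of code” in the title refer to this workflow: import a collection, instantiate a pretrained model, and use it. A minimal sketch, assuming the nemo_toolkit package is installed; “QuartzNet15x5Base-En” is a pretrained checkpoint name from NVIDIA’s model catalog, and “audio.wav” is a placeholder for a recording of your own:</p>

```python
# Minimal NeMo ASR sketch: load a pretrained speech recognition model
# and transcribe an audio file. Assumes nemo_toolkit[asr] is installed;
# downloading the checkpoint requires network access.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
print(asr_model.transcribe(paths2audio_files=["audio.wav"]))
```

<p>The same pattern applies to the NLP and TTS collections: pick a model class from the collection, load a pretrained checkpoint, and call its inference method.</p>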



<p>There are three main concepts in NeMo: model, neural module, and neural type.&nbsp;</p>



<ul class="wp-block-list"><li><strong>Models</strong>&nbsp;contain all the necessary information for training and fine-tuning: the neural network implementation, tokenization, data augmentation, infrastructure details (such as the number of GPU nodes), the optimization algorithm, and so on.</li><li><strong>Neural modules</strong>&nbsp;are conceptual building blocks, each responsible for a different task, often arranged in an encoder-decoder architecture. They represent the logical parts of a neural network and form the basis for describing a model and its training process. Collections contain many neural modules that can be reused whenever required.</li><li>Inputs and outputs to neural modules are typed with&nbsp;<strong>Neural Types</strong>. A Neural Type is a pair that describes a tensor’s axis layout and the semantics of its elements. Every neural module has input_types and output_types properties that describe what kinds of inputs the module accepts and what types of outputs it returns.</li></ul>
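<p>As a toy illustration only (these are hypothetical classes, not NeMo’s real implementation), the typed-interface idea behind Neural Types can be sketched like this:</p>

```python
# Toy sketch of the neural-type idea: a type pairs a tensor's axis
# layout with the semantics of its elements, and a module exposes
# typed inputs and outputs. Hypothetical classes, not NeMo's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class NeuralType:
    axes: tuple      # tensor axis layout, e.g. ("batch", "time")
    elements: str    # what the tensor's elements mean

class ToyASRModule:
    @property
    def input_types(self):
        return {"audio": NeuralType(("batch", "time"), "audio_signal")}

    @property
    def output_types(self):
        return {"logprobs": NeuralType(("batch", "time", "vocab"), "log_probabilities")}

mod = ToyASRModule()
print(mod.input_types["audio"].axes)          # ('batch', 'time')
print(mod.output_types["logprobs"].elements)  # log_probabilities
```

<p>Declared types like these let a framework check, before training starts, that one module’s outputs are compatible with the next module’s inputs.</p>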



<p>Because NeMo is based on PyTorch, it works well with related projects like PyTorch Lightning and Hydra. Integration with Lightning makes it easier to train models with mixed precision using Tensor Cores, and training can scale to multiple GPUs and compute nodes. Lightning also provides features like logging, checkpointing, and overfitting checks. Hydra allows scripts to be parameterized in a well-organized way, making it easier to streamline everyday tasks.</p>
<p>The post <a href="https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/">NVIDIA NeMo: An Open-Source Toolkit For Developing State-Of-The-Art Conversational AI Models In Three Lines Of Code</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence Market To Witness Huge Growth By 2025 &#124; Alphabet, Hanson Robotics, IBM, Amazon, Xilinx, Blue Frog Robotics</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-market-to-witness-huge-growth-by-2025-alphabet-hanson-robotics-ibm-amazon-xilinx-blue-frog-robotics/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-market-to-witness-huge-growth-by-2025-alphabet-hanson-robotics-ibm-amazon-xilinx-blue-frog-robotics/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 24 Aug 2020 10:02:30 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[AMR]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[BFSI]]></category>
		<category><![CDATA[coronavirus]]></category>
		<category><![CDATA[customizations]]></category>
		<category><![CDATA[Harman]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Promobot]]></category>
		<category><![CDATA[Softbank]]></category>
		<category><![CDATA[Xilinx]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11143</guid>

					<description><![CDATA[<p>Source: scientect Ample Market Research (AMR) has published a new market study, titled, Artificial Intelligence (AI) Market. The market study not only presents a comprehensive analysis of market overview <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-market-to-witness-huge-growth-by-2025-alphabet-hanson-robotics-ibm-amazon-xilinx-blue-frog-robotics/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-market-to-witness-huge-growth-by-2025-alphabet-hanson-robotics-ibm-amazon-xilinx-blue-frog-robotics/">Artificial Intelligence Market To Witness Huge Growth By 2025 | Alphabet, Hanson Robotics, IBM, Amazon, Xilinx, Blue Frog Robotics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: scientect</p>



<p><strong>Ample Market Research (AMR) has published a new market study, titled</strong> Artificial Intelligence (AI) Market. The market study not only presents a comprehensive analysis of the market overview and dynamics for the historical period, 2014-2019, but also provides global and regional forecasts of market value, production volume, and consumption for the future period, 2019-2026.</p>



<p>A number of insights are included and analyzed in this market study that are helpful in devising future strategies and taking the necessary steps. A new-project investment feasibility analysis and a SWOT analysis are offered, along with insights on industry barriers.</p>



<p>The market study also profiles the key market players, especially the wholesalers, distributors, and businesspersons, along with the industrial chain structure. The development of market trends is considered along with the competitive landscape in various regions, countries, and provinces, helping established and emerging market players discover lucrative investment pockets.</p>



<p>The Coronavirus Pandemic (COVID-19) has affected every aspect of life worldwide. This has led to several changes in market conditions. The report covers the rapidly changing market scenario and the initial and future impact assessments.</p>



<p>The market study starts with a brief introduction and market overview, in which the Artificial Intelligence (AI) industry is first defined before its market scope and size are estimated.</p>



<p>This is followed by an overview of the market segmentation such as type, application, and region. The drivers, limitations, and opportunities are listed for the Artificial Intelligence (AI) industry, followed by industry news and policies.</p>



<p>The market study presents an industry chain examination, concentrating on upstream raw material suppliers and major downstream buyers. The information is presented in tables and figures, which also cover the production cost structure and market channel analysis.</p>



<p><strong>Major companies or players involved in the Artificial Intelligence (AI) industry are also outlined, along with their market share and product types.</strong></p>



<p><strong>With the help of tables and figures, valuable insights on production, value, price, and gross margin of each player are offered.</strong></p>



<p>The major market players operating in the industry are Alphabet, Hanson Robotics, IBM, Amazon, Xilinx, Blue Frog Robotics, Promobot, Intel, Kuka, Fanuc, Softbank, ABB, Microsoft, Harman International Industries, and Nvidia.</p>



<p>Each player’s market share by region is outlined for 2019. Insights on each player’s future growth help in understanding the evolution of the competitive scenario and assist emerging players in gaining a competitive edge.</p>



<p>The market study segments the global Artificial Intelligence (AI) market by type, application, and region. For the historical period, extensive insights on value, market share, production, growth rate, and price analysis for each sub-segment are offered by the report.</p>



<p>For the future period, forecasts of market value and volume are offered for each type (Hardware, Software, Services) and application (Healthcare, BFSI, Law, Retail, Advertising &amp; Media, Automotive &amp; Transportation, Agriculture, Manufacturing, and Others).</p>



<p><strong>In the same period, the report also provides a detailed analysis of market value and consumption for each region.</strong></p>



<p>Additionally, the report examines regional production, consumption, exports, and imports for the historical period. The regions analyzed include North America (covered in Chapters 7 and 14): United States, Canada, Mexico; and Europe (covered in Chapters 8 and 14): Germany, UK, France, Italy, Spain, Russia.</p>



<p>Finally, the current market status and SWOT analysis for each region are elaborated, which would help market players to achieve a competitive edge by determining the predominant segments.</p>



<p><strong>Market research findings, conclusions, and more are provided at the end of the Artificial Intelligence (AI) market study.</strong></p>



<p>With the presented market data, AMR offers <strong>customizations</strong> according to particular needs in local, regional, and global markets.</p>



<p><strong>About Ample Market Research</strong></p>



<p>Ample Market Research provides comprehensive market research services and solutions across various industry verticals and helps businesses perform exceptionally well. Attention to detail, consistency, and quality are the elements we focus on, but our mainstay remains the knowledge, expertise, and resources that make us an industry player.</p>



<p>Our end goal is to provide quality market research and consulting services to customers and add maximum value to businesses worldwide. We strive to deliver reports with the perfect concoction of useful data.</p>



<p><strong>Our mission is to capture every aspect of the market and offer businesses a document that makes solid grounds for crucial decision making.</strong></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-market-to-witness-huge-growth-by-2025-alphabet-hanson-robotics-ibm-amazon-xilinx-blue-frog-robotics/">Artificial Intelligence Market To Witness Huge Growth By 2025 | Alphabet, Hanson Robotics, IBM, Amazon, Xilinx, Blue Frog Robotics</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-market-to-witness-huge-growth-by-2025-alphabet-hanson-robotics-ibm-amazon-xilinx-blue-frog-robotics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>WWT Named Partner Of The Year For Deep Learning AI By NVIDIA</title>
		<link>https://www.aiuniverse.xyz/wwt-named-partner-of-the-year-for-deep-learning-ai-by-nvidia/</link>
					<comments>https://www.aiuniverse.xyz/wwt-named-partner-of-the-year-for-deep-learning-ai-by-nvidia/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 21 Jul 2020 07:19:04 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[WWT]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10355</guid>

					<description><![CDATA[<p>Source: aithority.com World Wide Technology (WWT) announced that it has been selected by the NVIDIA Partner Network (NPN) as the 2019 Deep Learning AI Partner of the Year for the Americas. <a class="read-more-link" href="https://www.aiuniverse.xyz/wwt-named-partner-of-the-year-for-deep-learning-ai-by-nvidia/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/wwt-named-partner-of-the-year-for-deep-learning-ai-by-nvidia/">WWT Named Partner Of The Year For Deep Learning AI By NVIDIA</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: aithority.com</p>



<p>World Wide Technology (WWT) announced that it has been selected by the NVIDIA Partner Network (NPN) as the 2019 Deep Learning AI Partner of the Year for the Americas. This is the third year that WWT has been honored in this category.</p>



<p>The NPN selected WWT for its ongoing AI research and development program. To help customers develop AI leadership, World Wide Technology published six white papers about leveraging the compute power of NVIDIA DGX systems to develop Machine Learning and Deep Learning models for real-time edge video analytics and network optimization, along with performance comparisons of multiple reference architectures for ML model development. The WWT research into ML and Deep Learning is tied to real-world business outcomes, such as improvements in mining safety, utility grid optimization, and resource management for manufacturing.</p>



<p>In addition, the WWT Advanced Technology Center (ATC) offers lab-as-a-service environments for AI development, MLOps, Deep Learning, and testing of storage and networking with GPU-accelerated compute. WWT also engineered and deployed some of the largest clusters of DGX-2 servers in North America and China for production Natural Language Processing applications at massive scale.</p>



<p>“It’s due to the strength of our engineering and data science partnership with NVIDIA that WWT’s customers are today realizing strategic value from Deep Learning and ML solutions that WWT has deployed,” said Tim Brooks, Managing Director of AI Solutions for World Wide Technology. “Our customers are leveraging Natural Language Processing, computer vision, robotics, and geospatial analysis for intelligent agents, autonomous vehicles, retail loss prevention, mining safety, and manufacturing QA.”</p>



<p>“NVIDIA has long worked with WWT to deliver AI solutions for data center and cloud-hosted environments across numerous industries,” said Craig Weinstein, Vice President of the Americas Partner Organization at NVIDIA. “Together with NVIDIA and our OEM partners, WWT provides customers with AI solutions that leverage the power of NVIDIA GPUs and 30 years of engineering and global deployment reliability of WWT.”</p>



<p>The NPN honors its top North American partners that have grown their GPU business through leadership and investment throughout the year.</p>
<p>The post <a href="https://www.aiuniverse.xyz/wwt-named-partner-of-the-year-for-deep-learning-ai-by-nvidia/">WWT Named Partner Of The Year For Deep Learning AI By NVIDIA</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/wwt-named-partner-of-the-year-for-deep-learning-ai-by-nvidia/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>3 Top Artificial Intelligence Stocks to Buy in July</title>
		<link>https://www.aiuniverse.xyz/3-top-artificial-intelligence-stocks-to-buy-in-july/</link>
					<comments>https://www.aiuniverse.xyz/3-top-artificial-intelligence-stocks-to-buy-in-july/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 06 Jul 2020 07:20:04 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Future]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10024</guid>

					<description><![CDATA[<p>Source: fool.com Over a decade ago, a nebulous idea called &#8220;the cloud&#8221; started to gain momentum. Using the internet to deliver a service to a remotely located <a class="read-more-link" href="https://www.aiuniverse.xyz/3-top-artificial-intelligence-stocks-to-buy-in-july/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/3-top-artificial-intelligence-stocks-to-buy-in-july/">3 Top Artificial Intelligence Stocks to Buy in July</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: fool.com</p>



<p>Over a decade ago, a nebulous idea called &#8220;the cloud&#8221; started to gain momentum. Using the internet to deliver a service to a remotely located user was a novel concept, but today, it&#8217;s an essential piece of the economy.&nbsp;</p>



<p>Artificial intelligence (AI) is likewise an important but oft-misunderstood technology. It&#8217;s still developing, but it promises to create a new segment of the economy based on the automation of simple tasks and raw data crunching.&nbsp;</p>



<p>Researcher IDC estimates that some $37.5 billion was spent globally on AI systems in 2019. That&#8217;s not a particularly large sum, but IDC thinks that figure could roughly triple by 2023. Just as the cloud is now responsible for delivering all sorts of tools and services, AI systems are expected to have a wide range of uses in a very short period of time. &nbsp;</p>



<p>Last month, I talked up <strong>Alphabet</strong>, <strong>salesforce.com</strong>, and <strong>NVIDIA </strong>(NASDAQ:NVDA). For July, I&#8217;m revisiting NVIDIA and going with <strong>Micron Technology </strong>(NASDAQ:MU) and <strong>Appian </strong>(NASDAQ:APPN) as my AI picks of the moment.</p>



<h3 class="wp-block-heading">Before there was software, there was hardware</h3>



<p>As a final product, AI is software: an algorithm that dictates the function of a system or device. But hardware must be built to train, deploy, and operate that software. As AI is still in its infancy, the hardware used to support it is where I will gravitate for the time being.</p>



<p>When talking about AI hardware, it&#8217;s easy to default to NVIDIA. The folks at NVIDIA see a world where AI is ubiquitous, assisting us with tasks and making recommendations. That future is nigh. The company&#8217;s wares are already a largely unseen part of everyday life. Whether it&#8217;s an advanced driver assist feature in a new car, a recommendation for a movie or song, high-end graphics on video games, or the packages delivered to your home, there&#8217;s a good chance NVIDIA was involved. </p>



<p>The bear argument these days is that NVIDIA is too expensive. Looking back over the last year of results, it most certainly is. Shares trade for 20 times revenue and 54 times free cash flow (revenue less cash operating and capital expenses). Yikes. &nbsp;</p>



<p>But I&#8217;ll reiterate what I&#8217;ve said in previous articles on NVIDIA: The past is less important than the future for a high-growth company. Between its internal development and its recent acquisition of Mellanox (which I believe NVIDIA got for a song), revenue for the second quarter was forecast to be some 42% higher than a year ago. With a whole year left to lap its pre-Mellanox results and the current state of world affairs creating insatiable demand for new semiconductors and devices, double-digit percentage growth could continue for a while longer.  </p>



<p>Here&#8217;s my full disclosure: I continue to add shares of NVIDIA not because I think it&#8217;s a fair value now, but because I see at least a decade of rapid AI industry development ahead, with the company delivering some of the primary components necessary to make it all possible. That kind of time horizon may not gel for many, but if you think your money will still be invested in 10 years, I don&#8217;t see why this shouldn&#8217;t be a core set-it-and-forget-it holding in any portfolio.</p>



<h3 class="wp-block-heading">Remember one of intelligence&#8217;s key ingredients</h3>



<p>Memory is crucial to human and artificial intelligence. A machine&#8217;s ability to make predictions and perform automated work isn&#8217;t simply dictated by how quickly it can crunch information. It also needs stored data from which it can generate such predictions. That&#8217;s where memory semiconductors come in.</p>



<p>Digital memory chips are necessary for all sorts of electronic systems. Many types are highly commoditized and sensitive to changes in supply and demand. That can wreak havoc on pricing and lead to wild swings from sky-high profitability to heavy losses. Micron has historically been at the mercy of this cycle.</p>



<p>This memory chip leader will probably continue to be highly cyclical. However, the company changed its approach a few years ago. It invested in new chip technology and architecture to differentiate its portfolio from the rest of the pack. It&#8217;s also increasingly focused on higher-order computing needs and has walked away from deals that don&#8217;t meet its investment return criteria. Even during the lows of a year-plus semiconductor slump, Micron has thus remained profitable.</p>



<p>Surging orders amid the COVID-19 pandemic have pulled Micron out of its trough and pushed it back into growth mode. Revenue was up 14% in the last quarter. AI systems, data centers, and connected devices operating at the &#8220;network edge&#8221; have needed upgrades during the lockdown.</p>



<p>Lower sales of consumer-facing devices like smartphones and cars partially offset results. But upgrade cycles for new video game consoles, PCs, and advanced driver assistance systems are expected in the years ahead. Micron&#8217;s advanced memory chips play an integral role in these smart devices. With a new upcycle possibly beginning, I think the stock is a buy now. </p>



<h3 class="wp-block-heading">Training bots to handle soul-crushing work</h3>



<p>People worry that AI will compete with humans for jobs. It&#8217;s not an ungrounded concern. Tech&#8217;s increased productivity and cost-savings benefits are very real. Many workers may need to future-proof their careers &#8212; or change careers altogether &#8212; because of the disruptive nature of tech. But right or wrong, it&#8217;s happening. Humans have always had to compete with the technology they create.</p>



<p>AI and related technologies like low-code software development are proving useful to organizations trying to adjust to shelter-in-place orders and the &#8220;new normal&#8221; of the pandemic. Low-code isn&#8217;t AI, per se. It&#8217;s a visual toolkit that builds applications much faster than prior technologies.</p>



<p>There are a number of low-code providers out there, but Appian made an interesting recent move. Early in 2020, the company made its first-ever acquisition by buying robotic process automation (RPA) firm Novayre Systems. What is RPA? Think of it as a virtual robot that can be programmed to do tasks within software, like populating form fields. </p>



<p>Both low-code software and RPA can help companies resume operations. But won&#8217;t that steal jobs? In the short term, it might. But if it&#8217;s going to happen, investors might as well prepare, and I see owning Appian as one way to do so.</p>



<p>Granted, Appian expects its recurring software revenue to slow to a 25% to 26% year-over-year pace in the second quarter (down from 34% in 2019 and 46% in the first quarter of 2020) as many customers are putting new projects on temporary hold. Appian, a small company, still operates at a loss on top of that.</p>



<p>However, the company had no debt and $149 million in cash and equivalents at the end of March 2020. This doesn&#8217;t include its recently announced sale of 1.93 million new shares of its common stock, which would raise about $100 million in fresh cash at current share prices. That news has shares down over 15% from all-time highs. Appian trades for 13 times forward revenue expectations, so it isn&#8217;t particularly cheap. But I think this is an early AI and automation vendor worth taking seriously.</p>



<p>The post <a href="https://www.aiuniverse.xyz/3-top-artificial-intelligence-stocks-to-buy-in-july/">3 Top Artificial Intelligence Stocks to Buy in July</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/3-top-artificial-intelligence-stocks-to-buy-in-july/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Supermicro announces integrated A100 GPU-powered systems</title>
		<link>https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/</link>
					<comments>https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 18 May 2020 06:06:51 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[HPC]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[supercomputer]]></category>
		<category><![CDATA[Supermicro]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8831</guid>

					<description><![CDATA[<p>Source: datacentrenews.eu Super Micro Computer has announced two new systems designed for artificial intelligence (AI) deep learning applications that leverage the third-generation NVIDIA HGX technology with the <a class="read-more-link" href="https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/">Supermicro announces integrated A100 GPU-powered systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: datacentrenews.eu</p>



<p>Super Micro Computer has announced two new systems designed for artificial intelligence (AI) deep learning applications that leverage the third-generation NVIDIA HGX technology with the new NVIDIA A100 Tensor Core GPUs. The company also announced full support for the new NVIDIA A100 GPUs across its broad portfolio of 1U, 2U, 4U and 10U GPU servers.&nbsp;</p>



<p>NVIDIA A100 is the first elastic, multi-instance GPU that unifies training, inference, HPC, and analytics.</p>



<p>“Expanding upon our portfolio of GPU systems and NVIDIA HGX-2 system technology, Supermicro is introducing a new 2U system implementing the new NVIDIA HGX A100 4 GPU board (formerly codenamed Redstone) and a new 4U system based on the new NVIDIA HGX A100 8 GPU board (formerly codenamed Delta) delivering 5 PetaFLOPS of AI performance,” says Supermicro CEO and president Charles Liang.&nbsp;</p>



<p>“As GPU accelerated computing evolves and continues to transform data centers, Supermicro will provide customers the very latest system advancements to help them achieve maximum acceleration at every scale while optimising GPU utilisation. These new systems will significantly boost performance on all accelerated workloads for HPC, data analytics, deep learning training and deep learning inference.”</p>



<p>As a balanced data centre platform for HPC and AI applications, Supermicro’s new 2U system leverages the NVIDIA HGX A100 4 GPU board with four direct-attached NVIDIA A100 Tensor Core GPUs using PCI-E 4.0 for maximum performance and NVIDIA NVLink for high-speed GPU-to-GPU interconnects.&nbsp;</p>



<p>This GPU system accelerates compute, networking and storage performance with support for one PCI-E 4.0 x8 and up to four PCI-E 4.0 x16 expansion slots for GPUDirect RDMA high-speed network cards and storage such as InfiniBand HDR, which supports up to 200Gb per second bandwidth.&nbsp;</p>



<p>&nbsp;“AI models are exploding in complexity as they take on next-level challenges such as accurate conversational AI, deep recommender systems and personalised medicine,” says NVIDIA accelerated computing general manager and vice president Ian Buck.</p>



<p>“By implementing the NVIDIA HGX A100 platform into their new servers, Supermicro provides customers the powerful performance and massive scalability that enable researchers to train the most complex AI networks at unprecedented speed.”</p>



<p>Optimised for AI and machine learning, Supermicro’s new 4U system supports eight A100 Tensor Core GPUs.&nbsp;</p>



<p>The 4U form factor with eight GPUs is ideal for customers that want to scale their deployment as their processing requirements expand.&nbsp;</p>



<p>The new 4U system will have one NVIDIA HGX A100 8 GPU board with eight A100 GPUs all-to-all connected with NVIDIA NVSwitch for up to 600GB per second GPU-to-GPU bandwidth and eight expansion slots for GPUDirect RDMA high-speed network cards.&nbsp;</p>



<p>Ideal for deep learning training, data centres can use this scale-up platform to create next-gen AI and maximise data scientists’ productivity with support for ten x16 expansion slots.</p>



<p>Customers can expect a significant performance boost across Supermicro’s extensive portfolio of 1U, 2U, 4U and 10U multi-GPU servers when they are equipped with the new NVIDIA A100 GPUs.&nbsp;&nbsp;</p>



<p>For maximum acceleration, Supermicro’s new A+ GPU system supports up to eight full-height double-wide (or single-wide) GPUs via direct-attach PCI-E 4.0 x16 CPU-to-GPU lanes without any PCI-E switch for the lowest latency and highest bandwidth.&nbsp;</p>



<p>The system also supports up to three additional high-performance PCI-E 4.0 expansion slots for a variety of uses, including high-performance networking connectivity up to 100G. An additional AIOM slot supports a Supermicro AIOM card or an OCP 3.0 mezzanine card.</p>



<p>With 1U, 2U, 4U, and 10U rackmount GPU systems; Ultra, BigTwin, and embedded systems supporting GPUs; as well as GPU blade modules for our 8U SuperBlade, Supermicro offers the industry’s widest and deepest selection of GPU systems to power applications from Edge to Cloud.</p>



<p>To deliver enhanced security and unprecedented performance at the edge, Supermicro plans to add the new NVIDIA EGX A100 configuration to its edge server portfolio.&nbsp;</p>



<p>The EGX A100 converged accelerator combines a Mellanox SmartNIC with GPUs powered by the new NVIDIA Ampere architecture, so enterprises can run AI at the edge more securely.</p>
<p>The post <a href="https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/">Supermicro announces integrated A100 GPU-powered systems</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/supermicro-announces-integrated-a100-gpu-powered-systems/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Seeing AI to AI: NVIDIA Deepens Ties with Top Research Center</title>
		<link>https://www.aiuniverse.xyz/seeing-ai-to-ai-nvidia-deepens-ties-with-top-research-center/</link>
					<comments>https://www.aiuniverse.xyz/seeing-ai-to-ai-nvidia-deepens-ties-with-top-research-center/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 09 Apr 2020 08:21:29 +0000</pubDate>
				<category><![CDATA[AI-ONE]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Andreas Dengel]]></category>
		<category><![CDATA[DFKI]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Research Center]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8067</guid>

					<description><![CDATA[<p>Source: martechseries.com Andreas Dengel wants to get AI into more people’s hands while he continues to advance the technology. Sharing that mission and a history of close <a class="read-more-link" href="https://www.aiuniverse.xyz/seeing-ai-to-ai-nvidia-deepens-ties-with-top-research-center/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/seeing-ai-to-ai-nvidia-deepens-ties-with-top-research-center/">Seeing AI to AI: NVIDIA Deepens Ties with Top Research Center</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: martechseries.com</p>



<p>Andreas Dengel wants to get AI into more people’s hands while he continues to advance the technology.</p>



<p>Sharing that mission and a history of close ties, NVIDIA just joined him and his roughly 1,000 colleagues as a shareholder in the German Research Center for Artificial Intelligence (DFKI).</p>



<p>“A study last week said many companies are collecting data, but they don’t know what to do with it. We can help them join an increasingly data-driven economy,” said Dengel. He serves as head of DFKI’s site in Kaiserslautern in southwest Germany, a member of DFKI’s management board and the scientific director of its smart data and knowledge services group.</p>



<p>One company is already testing a prototype AI sandbox and catalog DFKI built to let users try deep learning. The catalog includes 35 top neural networks for imaging with audio and video models on the way, targeting commercial use next year.</p>



<p>“Removing the boundaries for companies who want to use AI is very important,” said Dengel.</p>



<h4 class="wp-block-heading"><strong>Advancing AI in Germany and Beyond</strong></h4>



<p>While DFKI spreads AI’s use, it also aims to advance the technology. The research institute is part of a group making a proposal for a German national supercomputing center focused on AI. A decision is expected later this year on the center, expected to have a budget of up to $16 million a year.</p>



<p>The effort comes amid a pan-European drive to beef up AI research.&nbsp;In mid-March, the European Commission granted&nbsp;a new AI research alliance&nbsp;about $54 million. It’s seen as a down payment on the region’s future investments in AI.</p>



<p>In this environment, DFKI has no shortage of ambitions. One of its research teams is wrestling with the grand challenge of explainable AI, understanding how deep learning gets its amazing results.</p>



<p>A member of the team presented a paper in June 2018 that&nbsp;won an award&nbsp;from NVIDIA’s founder and CEO Jensen Huang. The paper described a way one neural network can monitor another to understand and optimize its processes.</p>



<p>The work shed some light on how deep learning gets its impressive results. But there’s much more to be done as the types of neural networks and datasets proliferate.</p>



<p>“Experts who depend on AI systems should be able to visualize or explain their processes. That’s especially critical for applications in finance and healthcare,” Dengel said.</p>



<p>It’s one of some 250 projects across 20 departments at DFKI, one of the world’s largest AI research centers.</p>



<p>Among other projects, one team is helping the German federal bank apply AI. Another conducts 600-hour tests of car engines, making predictions with AI based on the results. Yet another uses GPUs to analyze high-resolution satellite images, helping coordinate disaster relief efforts.</p>



<h4 class="wp-block-heading"><strong>17 Petaflops of GPU Compute and Growing</strong></h4>



<p>DFKI computer rooms pack an estimated 17 petaflops of GPU computing power with multiple&nbsp;NVIDIA DGX systems. They include what was&nbsp;the first DGX-2 in Europe, all linked on Mellanox InfiniBand switches.</p>



<p>It’s a lot of horsepower, but not enough to meet rising demands. The group spun up a climate modeling application two months ago, satellite imaging applications are “growing exponentially,” and DFKI has a new collaboration with the European Space Agency that will spawn multiple projects.</p>



<p>“We are at the limit of our systems’ use. Our goal is to expand in a big way. We want to provide infrastructure that’s a platform for both the German and the broader European industry,” said Dengel.</p>



<p>“Our experience has shown that by putting apps on NVIDIA GPU clusters companies better understand what GPU acceleration can do for them,” he added.</p>



<p>In the wake of the shareholder agreement, DFKI and NVIDIA are discussing plans for collaborating on software projects. It’s another step in deepening ties at many levels.</p>



<p>The two groups also sometimes share talented people. A former DFKI professor is now an NVIDIA architect, and a handful of DFKI grad students are just back from a sabbatical working with NVIDIA.</p>
<p>The post <a href="https://www.aiuniverse.xyz/seeing-ai-to-ai-nvidia-deepens-ties-with-top-research-center/">Seeing AI to AI: NVIDIA Deepens Ties with Top Research Center</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/seeing-ai-to-ai-nvidia-deepens-ties-with-top-research-center/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>NVIDIA’s Deep Learning Super Sampling (DLSS) 2.0 Technology Is The Real Deal</title>
		<link>https://www.aiuniverse.xyz/nvidias-deep-learning-super-sampling-dlss-2-0-technology-is-the-real-deal/</link>
					<comments>https://www.aiuniverse.xyz/nvidias-deep-learning-super-sampling-dlss-2-0-technology-is-the-real-deal/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 30 Mar 2020 10:29:38 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[DLSS]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7838</guid>

					<description><![CDATA[<p>Source: forbes.com This past week, NVIDIA announced an update to its Deep Learning Super Sampling technology, aptly dubbed DLSS 2.0. If you recall, DLSS originally a launched alongside <a class="read-more-link" href="https://www.aiuniverse.xyz/nvidias-deep-learning-super-sampling-dlss-2-0-technology-is-the-real-deal/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/nvidias-deep-learning-super-sampling-dlss-2-0-technology-is-the-real-deal/">NVIDIA’s Deep Learning Super Sampling (DLSS) 2.0 Technology Is The Real Deal</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: forbes.com</p>



<p>This past week, NVIDIA announced an update to its Deep Learning Super Sampling technology, aptly dubbed DLSS 2.0. If you recall, DLSS originally launched alongside the company’s Turing-based GeForce RTX series graphics cards in the fall of 2018, and this latest iteration improves the technology in a number of meaningful ways that benefit not only gamers, but game developers as well.</p>



<p>DLSS essentially takes lower-resolution imagery and intelligently upscales it to look like native, higher-resolution output. DLSS 1.0 required game developers to provide game assets to NVIDIA, which would then feed the data to a neural network running on a DGX-1 to evaluate the game’s visuals, frame by frame, comparing them to a “ground truth” golden sample of image quality. The ground truth reference was a full-resolution image with 64x super-sampling (64xSS) applied, providing extreme detail and ultra-high-quality anti-aliasing. The neural network was then tasked with producing an image output, measuring the difference between it and the 64xSS ground truth image quality target, and adjusting its weights accordingly in an effort to perfect the image on the next iteration; the process continued until the model was built for that particular game.</p>
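The iterative loop described above (render low, upscale, compare to supersampled ground truth, adjust weights, repeat) can be sketched in a few lines of code. This is a toy illustration, not NVIDIA&#8217;s actual pipeline: a single learned scalar &#8220;gain&#8221; stands in for the entire neural network, the &#8220;low-resolution render&#8221; is simple block averaging, and all function names are invented for this sketch.

```python
# Toy sketch of a DLSS-1.0-style training loop: upscale a low-resolution
# image, measure the difference against a high-quality ground truth, and
# adjust the model's weights to shrink that difference. One scalar
# "gain" stands in for the neural network's weights.

def downscale(img, factor):
    """Average each factor x factor block (the low-resolution render)."""
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(len(img[0]) // factor)]
            for y in range(len(img) // factor)]

def upscale(low_res, factor, gain):
    """Nearest-neighbor upscale, scaled by the learned gain."""
    return [[gain * low_res[y // factor][x // factor]
             for x in range(len(low_res[0]) * factor)]
            for y in range(len(low_res) * factor)]

def mse(a, b):
    """Mean squared error between two same-sized images."""
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / (len(a) * len(a[0]))

def train(ground_truth, factor=2, steps=200, lr=0.3):
    """Nudge the gain each iteration to better match the ground truth."""
    low_res = downscale(ground_truth, factor)
    gain = 0.1  # deliberately poor starting weight
    n = len(ground_truth) * len(ground_truth[0])
    for _ in range(steps):
        out = upscale(low_res, factor, gain)
        # Gradient of the MSE with respect to the gain.
        grad = sum(2 * (out[y][x] - ground_truth[y][x])
                   * low_res[y // factor][x // factor]
                   for y in range(len(out)) for x in range(len(out[0]))) / n
        gain -= lr * grad
    return gain, mse(upscale(low_res, factor, gain), ground_truth)
```

The real system replaces the scalar gain with millions of network weights and the 64xSS image as ground truth, but the feedback loop (output, measure error, adjust, repeat) is the same shape.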



<p>In practice, DLSS 1.0’s effect on game visuals and performance was a mixed bag that varied from title to title. Performance was universally improved to varying degrees, since the game was being internally rendered at a lower resolution, but image quality wasn’t always optimal and sometimes resulted in unwanted artifacts.</p>



<p>DLSS 2.0 improves upon the original release in just about every way. Game developers are still required to implement the technology into their games (it’s not a tick-box feature that can be enabled universally via drivers), but they no longer have to provide game-specific assets to NVIDIA. DLSS 2.0 is trained using generalized game content on a neural network that works across virtually all games. NVIDIA has also improved DLSS 2.0’s performance by leveraging the Tensor cores available in its GeForce RTX-series GPUs, and image quality is drastically improved as well.</p>



<p>Users have three quality options to choose from with DLSS 2.0: Quality, Balanced, and Performance. And the feature can now be enabled at any resolution. The lower-resolution internal rendering target will vary, but generally speaking, DLSS 2.0 should offer significant performance and image quality gains versus DLSS 1.0.</p>



<p>Although the feature is brand new, a couple of games already support DLSS 2.0. In an evaluation at HotHardware that tested DLSS 2.0 with MechWarrior 5 and Control, both a mainstream GeForce RTX 2060 and a high-end GeForce RTX 2080 Super showed massive performance gains at all DLSS 2.0 quality levels. In-game image quality was clearly improved as well.</p>



<p>Ben Funk states, “NVIDIA deserves a lot of credit for what it&#8217;s doing with DLSS 2.0. Not only does it overall improve image quality and performance, it does so at no cost to GeForce RTX owners. It&#8217;s not very often that mid-generation software updates bring along this kind of improvement.”</p>



<p>To date, only a few games that support DLSS 2.0 have been announced and existing titles that already support DLSS 1.0 will need to be patched to support the new technology. Considering the barriers NVIDIA has removed to more easily implement DLSS 2.0 though, and the performance and image quality benefits, additional game developers are likely to hop on board.</p>
<p>The post <a href="https://www.aiuniverse.xyz/nvidias-deep-learning-super-sampling-dlss-2-0-technology-is-the-real-deal/">NVIDIA’s Deep Learning Super Sampling (DLSS) 2.0 Technology Is The Real Deal</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/nvidias-deep-learning-super-sampling-dlss-2-0-technology-is-the-real-deal/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Microsoft Expands NVIDIA GPU Preview Integration with Azure</title>
		<link>https://www.aiuniverse.xyz/microsoft-expands-nvidia-gpu-preview-integration-with-azure/</link>
					<comments>https://www.aiuniverse.xyz/microsoft-expands-nvidia-gpu-preview-integration-with-azure/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 27 Mar 2020 07:04:21 +0000</pubDate>
				<category><![CDATA[Microsoft Azure Machine Learning]]></category>
		<category><![CDATA[Azure Stack Edge]]></category>
		<category><![CDATA[EGX]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[Nvidia]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7764</guid>

					<description><![CDATA[<p>Source: winbuzzer.com Since the COVID-19 outbreak started in in January, we have seen many tech events cancelled or converted to virtual conferences. Form Mobile World Conference (MWC) <a class="read-more-link" href="https://www.aiuniverse.xyz/microsoft-expands-nvidia-gpu-preview-integration-with-azure/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-expands-nvidia-gpu-preview-integration-with-azure/">Microsoft Expands NVIDIA GPU Preview Integration with Azure</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: winbuzzer.com</p>



<p>Since the COVID-19 outbreak started in January, we have seen many tech events cancelled or converted to virtual conferences. From Mobile World Congress (MWC) being cancelled to Microsoft’s Build 2020 heading online, tech events have taken a beating. Among them was NVIDIA’s GPU Technology Conference (GTC), which went virtual.</p>



<p>The conference is now happening online and will run until March 26. One of the announcements made at GTC was Microsoft confirming it has expanded the Azure Stack Edge NVIDIA GPU preview.</p>



<p>The NVIDIA GPU preview first arrived at MWC 2019 and is a collaboration between Microsoft Azure and NVIDIA’s EGX platform. More specifically, Azure Stack Edge uses the NVIDIA T4 Tensor Core GPU to boost the capabilities of machine learning.</p>



<p>In its blog post, Microsoft says the preview has been adopted by industries that want to leverage Azure for machine learning.</p>



<p>Essentially, NVIDIA T4 Tensor Core GPU is leveraged by the Azure Stack Edge in order to bring a hardware boost to machine learning (ML) workloads. Microsoft states that since its introduction, it has observed a largely positive reception to the idea, with a variety of industries looking to take advantage of the improved Azure ML capabilities.</p>



<h2 class="wp-block-heading">Benefits of Azure Stack Edge</h2>



<ul class="wp-block-list"><li><strong>“Azure Machine Learning</strong>: Build and train your model in the cloud, then deploy it to the edge for FPGA or GPU-accelerated inferencing.</li><li><strong>Edge Compute</strong>: Run IoT, AI, and business applications in containers at your location. Use these to interact with your local systems, or to pre-process your data before it transfers to Azure.</li><li> <strong>Cloud Storage Gateway</strong>: Automatically transfer data between the local appliance and your Azure Storage account.  Azure Stack Edge caches the hottest data locally and speaks file and object protocols to your on-prem applications.</li><li><strong>Azure-managed appliance</strong>: Easily order and manage Azure Stack Edge from the Azure Portal.  No initial capex fees; pay as you go, just like any other Azure service.” </li></ul>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-expands-nvidia-gpu-preview-integration-with-azure/">Microsoft Expands NVIDIA GPU Preview Integration with Azure</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/microsoft-expands-nvidia-gpu-preview-integration-with-azure/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Instinct, The Deep Learning Cybersecurity Innovator, Raises $43 Million Series C Financing To Accelerate Growth</title>
		<link>https://www.aiuniverse.xyz/deep-instinct-the-deep-learning-cybersecurity-innovator-raises-43-million-series-c-financing-to-accelerate-growth/</link>
					<comments>https://www.aiuniverse.xyz/deep-instinct-the-deep-learning-cybersecurity-innovator-raises-43-million-series-c-financing-to-accelerate-growth/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 14 Feb 2020 06:30:33 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deep Instinct]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[EliteBook]]></category>
		<category><![CDATA[Funding]]></category>
		<category><![CDATA[HP Sure Sense]]></category>
		<category><![CDATA[Nvidia]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6753</guid>

					<description><![CDATA[<p>Source: aithority.com Deep Instinct, the first and only cybersecurity company to successfully apply end-to-end deep learning to predict, identify, and prevent cyberattacks, announced its $43 million Series <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-instinct-the-deep-learning-cybersecurity-innovator-raises-43-million-series-c-financing-to-accelerate-growth/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-instinct-the-deep-learning-cybersecurity-innovator-raises-43-million-series-c-financing-to-accelerate-growth/">Deep Instinct, The Deep Learning Cybersecurity Innovator, Raises $43 Million Series C Financing To Accelerate Growth</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: aithority.com</p>



<p>Deep Instinct, the first and only cybersecurity company to successfully apply end-to-end deep learning to predict, identify, and prevent cyberattacks, announced its $43 million Series C funding. The round was led by Millennium New Horizons, with participation from Unbound, the London-based investment firm founded by Shravin Mittal, along with LG, and existing investor NVIDIA. Deep Instinct now counts four of the world’s largest technology companies amongst its investors, with HP Inc. and Samsung having participated in previous financing rounds. The investment, which brings the company’s total funding to $100 million, will be used to accelerate sales and marketing, as well as to support the expansion of business operations globally.</p>



<p>“Traditional cybersecurity is broken,” said Guy Caspi, co-founder and CEO of Deep Instinct. “Current solutions based on ‘assume breach’ are simply insufficient for the highly sophisticated attack landscape we all face. Deep Instinct takes an entirely new approach, preventing attacks before they are executed.”</p>



<p>Unlike traditional security solutions that primarily guard against known threats in the Windows operating system and help identify a cyberattack once it has already breached a system, Deep Instinct uses a patented deep learning platform trained to identify and prevent first-seen, sophisticated and advanced cyber threats. Threats are prevented anywhere within the enterprise from any type of file-based or file-less cyber attacks in zero-time, with unmatched accuracy and speed.</p>



<p>Deep Instinct’s deep learning protection has the lowest level of false positives of any cybersecurity provider. It is inclusive of physical and virtual networks, endpoints, and mobile, across multiple operating systems (Windows, iOS, Android, Chrome OS, and macOS).</p>



<p>“This significant round of new funding highlights the importance of prevention for every enterprise. The economic impact of repairing a breach is too high to ignore the need to prevent threats before they occur. The message to the market is that to fight today’s cyber threats, true prevention will become more critical than detection and response,” said Lane Bess, Deep Instinct’s Chairman.</p>



<p>“There is no shortage of cybersecurity software providers, yet no company aside from Deep Instinct has figured out how to apply deep learning to automate malware analysis,” said Ray Cheng, Partner at Millennium New Horizons. “What excites us most about Deep Instinct is its proven ability to use its proprietary neural network to effectively detect viruses and malware no other software can catch. That genuine protection in an age of escalating threats, without the need of exorbitantly expensive or complicated systems, is a paradigm change.”</p>



<p>Deep Instinct recently announced an OEM partnership with HP Inc. to launch HP Sure Sense on HP’s latest EliteBook and ZBook devices. By leveraging Deep Instinct’s deep learning threat prevention engine, HP Sure Sense provides zero-time detection and prevention against the most advanced cyber threats.</p>



<p>“Artificial intelligence is now sweeping across industries, bringing benefits to a wide range of vertical markets,” said Jeff Herbst, Vice President of Business Development at NVIDIA. “Deep Instinct’s unique approach in applying true deep learning to cybersecurity is yielding revolutionary breakthroughs that are being embraced by a growing market.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-instinct-the-deep-learning-cybersecurity-innovator-raises-43-million-series-c-financing-to-accelerate-growth/">Deep Instinct, The Deep Learning Cybersecurity Innovator, Raises $43 Million Series C Financing To Accelerate Growth</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-instinct-the-deep-learning-cybersecurity-innovator-raises-43-million-series-c-financing-to-accelerate-growth/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
