<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>PyTorch Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/category/pytorch/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/category/pytorch/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 10 Oct 2020 06:09:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>NVIDIA NeMo: An Open-Source Toolkit For Developing State-Of-The-Art Conversational AI Models In Three Lines Of Code</title>
		<link>https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/</link>
					<comments>https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 10 Oct 2020 06:09:52 +0000</pubDate>
				<category><![CDATA[PyTorch]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[Developing]]></category>
		<category><![CDATA[Neural modules]]></category>
		<category><![CDATA[Nvidia]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12093</guid>

					<description><![CDATA[<p>Source: marktechpost.com NVIDIA’s open-source toolkit, NVIDIA NeMo (Neural Modules), is a revolutionary step towards the advancement of Conversational AI. Based on PyTorch, it allows one to quickly build, train, and fine-tune conversational AI models. As the world becomes more digital, Conversational AI is a way to enable communication between humans and computers. The set <a class="read-more-link" href="https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/">NVIDIA NeMo: An Open-Source Toolkit For Developing State-Of-The-Art Conversational AI Models In Three Lines Of Code</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: marktechpost.com</p>



<p>NVIDIA’s open-source toolkit, NVIDIA NeMo (Neural Modules), is a revolutionary step towards the advancement of Conversational AI. Based on PyTorch, it allows one to quickly build, train, and fine-tune conversational AI models.</p>



<p>As the world becomes more digital, Conversational AI is a way to enable communication between humans and computers. It is the set of technologies behind applications such as automated messaging, speech recognition, voice chatbots, and text to speech. It broadly comprises three areas of AI research: automatic speech recognition (ASR), natural language processing (NLP), and speech synthesis (or text-to-speech, TTS).</p>



<p>Conversational AI has shaped the path of human-computer interaction, making it more accessible and exciting. The latest advancements in Conversational AI like NVIDIA NeMo help bridge the gap between machines and humans.</p>



<p>NVIDIA NeMo consists of two subparts: NeMo Core and NeMo Collections. NeMo Core provides functionality common to all models, whereas NeMo Collections groups models by domain. In NeMo’s speech collection (nemo_asr), you’ll find models and various building blocks for speech recognition, command recognition, speaker identification, speaker verification, and voice activity detection. NeMo’s NLP collection (nemo_nlp) contains models for tasks such as question answering, punctuation, named entity recognition, and many others. Finally, in NeMo’s speech synthesis collection (nemo_tts), you’ll find several spectrogram generators and vocoders, which will let you generate synthetic speech.</p>



<p>There are three main concepts in NeMo: model, neural module, and neural type.&nbsp;</p>



<ul class="wp-block-list"><li><strong>Models</strong>&nbsp;contain all the information needed for training and fine-tuning: the neural network implementation, tokenization, data augmentation, the optimization algorithm, and infrastructure details such as the number of GPU nodes.</li><li><strong>Neural modules</strong>&nbsp;are conceptual building blocks, such as encoders and decoders, each responsible for a different task. They represent the logical parts of a neural network and form the basis for describing a model and its training process. Collections contain many neural modules that can be reused whenever required.</li><li>Inputs and outputs of neural modules are typed with&nbsp;<strong>Neural Types</strong>. A neural type is a pair containing information about a tensor’s axis layout and the semantics of its elements. Every neural module has input_types and output_types properties that describe which inputs the module accepts and which outputs it returns.</li></ul>
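<p>The typed-port idea above can be illustrated with a small, purely conceptual sketch. This is not NeMo’s actual implementation; the class and port names below are hypothetical, but the idea matches the description: modules declare typed inputs and outputs, and connections between them can be checked for compatibility.</p>

```python
# Conceptual sketch (not NeMo's real API): modules declare typed
# inputs/outputs, and connections are checked for compatibility.
class NeuralType:
    def __init__(self, axes, elements):
        self.axes = axes          # tensor axis layout, e.g. ("batch", "time")
        self.elements = elements  # semantic label, e.g. "audio_signal"

    def compatible_with(self, other):
        return self.axes == other.axes and self.elements == other.elements


class NeuralModule:
    input_types = {}
    output_types = {}

    @classmethod
    def check_connection(cls, upstream):
        """Verify every input we need is produced upstream with a matching type."""
        for name, needed in cls.input_types.items():
            produced = upstream.output_types.get(name)
            if produced is None or not needed.compatible_with(produced):
                raise TypeError(f"incompatible port {name!r}")


class Encoder(NeuralModule):
    output_types = {"encoded": NeuralType(("batch", "time", "dim"), "hidden")}


class Decoder(NeuralModule):
    input_types = {"encoded": NeuralType(("batch", "time", "dim"), "hidden")}


Decoder.check_connection(Encoder)  # passes: the port types line up
```

A mismatched axis layout or element label would raise a TypeError at wiring time rather than producing a silent shape bug during training.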



<p>Even though NeMo is based on PyTorch, it also works well with projects like PyTorch Lightning and Hydra. Integration with Lightning makes it easier to train models with mixed precision using Tensor Cores and to scale training to multiple GPUs and compute nodes, and it adds features such as logging, checkpointing, and overfit checking. Hydra allows scripts to be parametrized through configuration files, keeping experiments well organized and streamlining everyday tasks.</p>
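<p>As a rough illustration of the kind of script parametrization Hydra provides, here is a minimal pure-Python sketch of dotted key=value overrides applied to a nested configuration. This is not Hydra’s API, only the general pattern; the keys and defaults are made up:</p>

```python
# Minimal sketch of Hydra-style "dotted key=value" overrides applied to a
# nested config dict (illustrative only; real Hydra does far more).
def apply_overrides(config, overrides):
    for item in overrides:
        key, value = item.split("=", 1)
        node = config
        *path, leaf = key.split(".")
        for part in path:
            node = node.setdefault(part, {})  # create sections on demand
        node[leaf] = value
    return config


defaults = {"trainer": {"gpus": "0", "max_epochs": "10"}}
cfg = apply_overrides(defaults, ["trainer.gpus=2", "optim.lr=1e-3"])
# cfg["trainer"]["gpus"] is now "2", and a new "optim" section was created
```

The appeal of this pattern is that every experiment variation becomes a command-line flag instead of an edited script.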
<p>The post <a href="https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/">NVIDIA NeMo: An Open-Source Toolkit For Developing State-Of-The-Art Conversational AI Models In Three Lines Of Code</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/nvidia-nemo-an-open-source-toolkit-for-developing-state-of-the-art-conversational-ai-models-in-three-lines-of-code/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deep Learning Restores Time-Ravaged Photos</title>
		<link>https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 05 Oct 2020 09:29:52 +0000</pubDate>
				<category><![CDATA[PyTorch]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[Photos]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[Restores]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11933</guid>

					<description><![CDATA[<p>Source: i-programmer.info Researchers have devised a novel deep learning approach to repairing the damage suffered by old photographic prints. The project is open source and a PyTorch implementation is downloadable from GitHub. There&#8217;s also a Colab where you can try it out. We&#8217;ve encountered neural networks that can colorize old black and white shots, can <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/">Deep Learning Restores Time-Ravaged Photos</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: i-programmer.info</p>



<p>Researchers have devised a novel deep learning approach to repairing the damage suffered by old photographic prints. The project is open source and a PyTorch implementation is downloadable from GitHub. There&#8217;s also a Colab where you can try it out.</p>



<p>We&#8217;ve encountered neural networks that can colorize old black and white shots, can improve on photographs of landscapes and even paint portraits in the style of an old master. Here the goal is more modest &#8211; to apply a deep learning approach to restoring old photos that have suffered severe degradation.</p>



<p>The researchers, from Microsoft Research Asia in Beijing, the University of Science and Technology of China, and now the City University of Hong Kong, start from the premise that:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>Photos are taken to freeze the happy moments that otherwise gone. Even though time goes by, one can still evoke memories of the past by viewing them. Nonetheless, old photo prints deteriorate when kept in poor environmental condition, which causes the valuable photo content permanently damaged.</p></blockquote>



<p>As manual retouching of prints is laborious and time-consuming, they set out to design automatic algorithms that can instantly repair old photos for those who wish to bring them back to life.</p>



<p>The researchers presented their work as an oral presentation at CVPR 2020, held virtually in June, and their paper, &#8220;Bringing Old Photos Back to Life&#8221;, which is part of the conference proceedings, is already available.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/">Deep Learning Restores Time-Ravaged Photos</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-restores-time-ravaged-photos/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>THIS LATEST MODEL SERVING LIBRARY HELPS DEPLOY PYTORCH MODELS AT SCALE</title>
		<link>https://www.aiuniverse.xyz/this-latest-model-serving-library-helps-deploy-pytorch-models-at-scale/</link>
					<comments>https://www.aiuniverse.xyz/this-latest-model-serving-library-helps-deploy-pytorch-models-at-scale/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 04 May 2020 06:57:20 +0000</pubDate>
				<category><![CDATA[PyTorch]]></category>
		<category><![CDATA[automatically]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[developed]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8542</guid>

					<description><![CDATA[<p>Source: analyticsindiamag.com PyTorch has become popular within organisations for developing superior deep learning products. But the lack of a PyTorch model server made building, scaling, securing, and managing models in production difficult, keeping companies from going all in. A robust model server allows loading one or more models and automatically generating a prediction API, backed by a <a class="read-more-link" href="https://www.aiuniverse.xyz/this-latest-model-serving-library-helps-deploy-pytorch-models-at-scale/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/this-latest-model-serving-library-helps-deploy-pytorch-models-at-scale/">THIS LATEST MODEL SERVING LIBRARY HELPS DEPLOY PYTORCH MODELS AT SCALE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsindiamag.com</p>



<p>PyTorch has become popular within organisations for developing superior deep learning products. But the lack of a PyTorch model server made building, scaling, securing, and managing models in production difficult, keeping companies from going all in. A robust model server allows loading one or more models and automatically generates a prediction API, backed by a scalable web server. It also offers production-critical features like logging, monitoring, and security.</p>



<p>Until now, TensorFlow Serving and Multi-Model Server catered to the needs of developers in production, but the lack of a model server that could effectively manage PyTorch workflows hindered users. Consequently, to simplify the deployment process, Facebook and Amazon collaborated to build TorchServe, a PyTorch model serving library that helps deploy trained PyTorch models at scale without having to write custom code.</p>



<h4 class="wp-block-heading">TorchServe &amp; TorchElastic</h4>



<p>Motivated by a request from Alex Wong on GitHub, Facebook and AWS released the much-needed service for PyTorch enthusiasts. TorchServe will be available as part of the PyTorch open-source project. Users can not only bring their models to production more quickly with a low-latency prediction API, but also rely on default handlers for the most common applications, such as object detection and text classification.</p>
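<p>TorchServe handlers follow a preprocess, inference, postprocess contract. Below is a pure-Python sketch of that pattern; it does not use TorchServe’s actual <code>ts</code> package, and the class, field, and model names are illustrative stand-ins:</p>

```python
# Sketch of the handler pattern TorchServe handlers follow
# (preprocess -> inference -> postprocess); pure Python, no `ts` package.
class Handler:
    def __init__(self, model):
        self.model = model  # any callable standing in for a loaded model

    def preprocess(self, requests):
        # e.g. decode request bodies into model inputs
        return [r["data"] for r in requests]

    def inference(self, inputs):
        return [self.model(x) for x in inputs]

    def postprocess(self, outputs):
        # one response per request, as a serving API expects
        return [{"prediction": o} for o in outputs]

    def handle(self, requests):
        return self.postprocess(self.inference(self.preprocess(requests)))


# Usage with a toy stand-in "model":
classify = Handler(model=lambda x: "positive" if "good" in x else "negative")
classify.handle([{"data": "good movie"}])
# -> [{"prediction": "positive"}]
```

Separating the three stages is what lets a serving framework supply default handlers for common tasks while letting users override any single stage.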



<p>TorchServe also includes multi-model serving, model versioning for A/B testing, monitoring metrics, and RESTful endpoints for application integration. Developers can leverage the model server in various machine learning environments, including Amazon SageMaker, container services, and Amazon Elastic Compute Cloud (EC2).</p>
<p>The post <a href="https://www.aiuniverse.xyz/this-latest-model-serving-library-helps-deploy-pytorch-models-at-scale/">THIS LATEST MODEL SERVING LIBRARY HELPS DEPLOY PYTORCH MODELS AT SCALE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/this-latest-model-serving-library-helps-deploy-pytorch-models-at-scale/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AWS Announces Support for PyTorch with Amazon Elastic Inference</title>
		<link>https://www.aiuniverse.xyz/aws-announces-support-for-pytorch-with-amazon-elastic-inference/</link>
					<comments>https://www.aiuniverse.xyz/aws-announces-support-for-pytorch-with-amazon-elastic-inference/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 24 Mar 2020 06:31:01 +0000</pubDate>
				<category><![CDATA[PyTorch]]></category>
		<category><![CDATA[Amazon]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Elastic Inference]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7663</guid>

					<description><![CDATA[<p>Source: datanami.com AWS has announced that Amazon Elastic Inference is now compatible with PyTorch models. PyTorch, which AWS describes as a “popular deep learning framework that uses dynamic computational graphs,” is a piece of free, open-source software developed largely by Facebook’s AI Research Lab (FAIR) that allows developers to more easily apply Python code <a class="read-more-link" href="https://www.aiuniverse.xyz/aws-announces-support-for-pytorch-with-amazon-elastic-inference/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/aws-announces-support-for-pytorch-with-amazon-elastic-inference/">AWS Announces Support for PyTorch with Amazon Elastic Inference</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: datanami.com</p>



<p>AWS has announced that Amazon Elastic Inference is now compatible with PyTorch models. PyTorch, which AWS describes as a “popular deep learning framework that uses dynamic computational graphs,” is a piece of free, open-source software developed largely by Facebook’s AI Research Lab (FAIR) that allows developers to more easily apply Python code for deep learning. With Amazon’s announcement, PyTorch can now work with Amazon’s SageMaker and EC2 cloud services. PyTorch is the third major deep learning framework to be supported by Amazon Elastic Inference, following in the footsteps of TensorFlow and Apache MXNet.</p>



<p>Inference – making actual predictions with a trained model – is a computing power-intensive process, accounting for up to 90% of PyTorch models’ total compute costs according to AWS. Instance selection is, therefore, important for optimization. “Optimizing for one of these resources on a standalone GPU instance usually leads to under-utilization of other resources,” wrote David Fan (a software engineer with AWS AI) and Srinivas Hanabe (a principal product manager with AWS AI for Elastic Inference) in the AWS announcement blog. “Therefore, you might pay for unused resources.”</p>



<p>The duo argue that Amazon Elastic Inference solves this problem for PyTorch by allowing users to select the most appropriate CPU instance in AWS and separately select the appropriate amount of GPU-based inference acceleration.</p>



<p>In order to use PyTorch with Elastic Inference, developers must convert their models to TorchScript. “PyTorch’s use of dynamic computational graphs greatly simplifies the model development process,” Fan and Hanabe wrote. “However, this paradigm presents unique challenges for production model deployment. In a production context, it is beneficial to have a static graph representation of the model.”&nbsp;</p>
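<p>The conversion step described above can be sketched as follows. This assumes a local PyTorch installation and uses a toy model; it is illustrative, not the blog’s exact code:</p>

```python
# Minimal sketch of converting an eager PyTorch model to TorchScript,
# the static-graph form that Elastic Inference requires.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
# torch.jit.script compiles the module into a graph representation;
# torch.jit.trace(model, example_input) is the tracing alternative.
scripted = torch.jit.script(model)
scripted.save("tinynet_ts.pt")  # the artifact you would deploy
out = scripted(torch.randn(1, 4))  # the scripted module runs like the original
```

Scripting preserves control flow in the model, while tracing records only the operations executed on the example input; which to use depends on the model.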



<p>To that end, they said, TorchScript bridges the gap by allowing users to compile and export their models into a graph-based form. In the blog, the authors provide step-by-step guides for using PyTorch with Amazon Elastic Inference, including conversion to TorchScript, instance selection, and more. They also discuss cost and latency among cloud deep learning platforms, highlighting how Elastic Inference’s hybrid approach offers “the best of both worlds” by combining the advantages of CPUs and GPUs without the drawbacks of standalone instances, and present a bar chart comparing cost-per-inference and latency across Elastic Inference models (gray), models run on standalone GPU instances (green), and models run on standalone CPU instances (blue).</p>



<p>“Amazon Elastic Inference is a low-cost and flexible solution for PyTorch inference workloads on Amazon SageMaker,” they concluded. “You can get GPU-like inference acceleration and remain more cost-effective than both standalone Amazon SageMaker GPU and CPU instances, by attaching Elastic Inference accelerators to an Amazon SageMaker instance.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/aws-announces-support-for-pytorch-with-amazon-elastic-inference/">AWS Announces Support for PyTorch with Amazon Elastic Inference</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/aws-announces-support-for-pytorch-with-amazon-elastic-inference/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PyTorch 1.4 Release Introduces Java Bindings, Distributed Training</title>
		<link>https://www.aiuniverse.xyz/pytorch-1-4-release-introduces-java-bindings-distributed-training/</link>
					<comments>https://www.aiuniverse.xyz/pytorch-1-4-release-introduces-java-bindings-distributed-training/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 26 Feb 2020 05:27:41 +0000</pubDate>
				<category><![CDATA[PyTorch]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Facebook]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7035</guid>

					<description><![CDATA[<p>Source: infoq.com PyTorch, Facebook&#8217;s open-source deep-learning framework, announced the release of version 1.4. This release, which will be the last version to support Python 2, includes improvements to distributed training and mobile inference and introduces support for Java. This release follows the recent announcements and presentations at the 2019 Conference on Neural Information Processing Systems (NeurIPS) in December. For training large models, <a class="read-more-link" href="https://www.aiuniverse.xyz/pytorch-1-4-release-introduces-java-bindings-distributed-training/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/pytorch-1-4-release-introduces-java-bindings-distributed-training/">PyTorch 1.4 Release Introduces Java Bindings, Distributed Training</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: infoq.com</p>



<p>PyTorch, Facebook&#8217;s open-source deep-learning framework, announced the release of version 1.4. This release, which will be the last version to support Python 2, includes improvements to distributed training and mobile inference and introduces support for Java.</p>



<p>This release follows the recent announcements and presentations at the 2019 Conference on Neural Information Processing Systems (NeurIPS) in December. For training large models, the release includes a distributed framework to support model-parallel training across multiple GPUs. Improvements to PyTorch Mobile allow developers to customize their build scripts, which can greatly reduce the storage required by models. Building on the Android interface for PyTorch Mobile, the release includes experimental Java bindings for using TorchScript models to perform inference. PyTorch also supports Python and C++; this release will be the last that supports Python 2 and C++11. According to the release notes:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>The release contains over 1,500 commits and a significant amount of effort in areas spanning existing areas like JIT, ONNX, Distributed, Performance and Eager Frontend Improvements and improvements to experimental areas like mobile and quantization.</p></blockquote>



<p>Recent trends in deep-learning research, particularly in natural-language processing (NLP), have produced larger and more complex models such as RoBERTa, with hundreds of millions of parameters. These models are too large to fit within the memory of a single GPU, but a technique called <em>model-parallel</em> training allows different subsets of the parameters of the model to be handled by different GPUs. Previous versions of PyTorch have supported <em>single-machine</em> model parallel, which requires that all the GPUs used for training be hosted in the same machine. By contrast, PyTorch 1.4 introduces a distributed remote procedure call (RPC) system which supports model-parallel training across many machines.</p>
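<p>The single-machine variant of model parallelism mentioned above, which the new RPC framework generalizes across machines, can be sketched in a few lines. The devices default to CPU here so the sketch runs anywhere; on real hardware you would pass "cuda:0" and "cuda:1", and the layer sizes are illustrative:</p>

```python
# Sketch of single-machine model parallelism: the two halves of a network
# hold different subsets of the parameters, and activations move between
# them during the forward pass.
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    def __init__(self, dev0="cpu", dev1="cpu"):
        super().__init__()
        self.dev0, self.dev1 = dev0, dev1
        self.stage1 = nn.Linear(8, 16).to(dev0)  # first half of the parameters
        self.stage2 = nn.Linear(16, 4).to(dev1)  # second half

    def forward(self, x):
        h = torch.relu(self.stage1(x.to(self.dev0)))
        return self.stage2(h.to(self.dev1))      # hop to the second device

y = SplitModel()(torch.randn(2, 8))
```

The RPC system in 1.4 extends this idea by letting the two stages live in different processes on different machines, with the framework moving the activations and gradients between them.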



<p>After a model is trained, it must be deployed and used for inference or prediction. Because many applications are deployed on mobile devices with limited compute, memory, and storage resources, the large models often cannot be deployed as-is. PyTorch 1.3 introduced PyTorch Mobile and TorchScript, which aimed to shorten end-to-end development cycle time by supporting the same APIs across different platforms, eliminating the need to export models to a mobile framework such as Caffe2. The 1.4 release allows developers to customize their build packages to only include the PyTorch operators needed by their models. The PyTorch team reports that customized packages can be &#8220;40% to 50% smaller than the prebuilt PyTorch mobile library.&#8221; With the new Java bindings, developers can invoke TorchScript models directly from Java code; previous versions only supported Python and C++. The Java bindings are only available on Linux.</p>



<p>Although rival deep-learning framework TensorFlow ranks as the leading choice for commercial applications, PyTorch has the lead in the research community. At the 2019 NeurIPS conference in December, PyTorch was used in 70% of the presented papers that cited a framework. Recently, both Preferred Networks, Inc. (PFN) and research consortium OpenAI announced moves to PyTorch. OpenAI claimed that &#8220;switching to PyTorch decreased our iteration time on research ideas in generative modeling from weeks to days.&#8221; In a discussion thread about the announcement, a user on Hacker News noted:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>At work, we switched over from TensorFlow to PyTorch when 1.0 was released, both for R&amp;D and production&#8230; and our productivity and happiness with PyTorch noticeably, significantly improved.</p></blockquote>



<p>The PyTorch source code and release notes for version 1.4 are available on GitHub.</p>
<p>The post <a href="https://www.aiuniverse.xyz/pytorch-1-4-release-introduces-java-bindings-distributed-training/">PyTorch 1.4 Release Introduces Java Bindings, Distributed Training</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/pytorch-1-4-release-introduces-java-bindings-distributed-training/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PyTorch and TensorFlow: Which ML Framework is More Popular in Academia and Industry</title>
		<link>https://www.aiuniverse.xyz/pytorch-and-tensorflow-which-ml-framework-is-more-popular-in-academia-and-industry/</link>
					<comments>https://www.aiuniverse.xyz/pytorch-and-tensorflow-which-ml-framework-is-more-popular-in-academia-and-industry/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 04 Nov 2019 07:53:07 +0000</pubDate>
				<category><![CDATA[PyTorch]]></category>
		<category><![CDATA[developed]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[systems]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4987</guid>

					<description><![CDATA[<p>Source: infoq.com Horace He recently published an article summarising The State of Machine Learning Frameworks in 2019. The article uses several metrics to argue that PyTorch is quickly becoming the dominant framework for research, whereas TensorFlow is the dominant framework for applications deployed within a commercial/industrial context. He, a research student at Cornell University, counted the number of papers discussing either PyTorch or TensorFlow <a class="read-more-link" href="https://www.aiuniverse.xyz/pytorch-and-tensorflow-which-ml-framework-is-more-popular-in-academia-and-industry/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/pytorch-and-tensorflow-which-ml-framework-is-more-popular-in-academia-and-industry/">PyTorch and TensorFlow: Which ML Framework is More Popular in Academia and Industry</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: infoq.com</p>



<p>Horace He recently published an article summarising The State of Machine Learning Frameworks in 2019. The article uses several metrics to argue that PyTorch is quickly becoming the dominant framework for research, whereas TensorFlow is the dominant framework for applications deployed within a commercial/industrial context.</p>



<p>He, a research student at Cornell University, counted the number of papers discussing either PyTorch or TensorFlow that were presented at a series of well-known machine learning conferences, namely ECCV, NIPS, ACL, NAACL, ICML, CVPR, ICLR, ICCV and EMNLP. In summary, the majority of papers were implemented in PyTorch at every major conference in 2019. PyTorch outnumbered TensorFlow 2:1 at vision-related conferences and 3:1 at language-related conferences. PyTorch also has more references in papers published at more general machine learning conferences like ICLR and ICML.</p>



<p>He argued that PyTorch is gaining ground because of its simplicity, its intuitive and easy-to-use API, and its (at least) acceptable performance compared to TensorFlow.</p>



<p>On the other hand, the author&#8217;s metrics for measuring industry adoption show that TensorFlow is still the leader. The metrics used were job listings, GitHub popularity, the number of Medium articles, and so on. He posited that the disparity between academia and industry has three causes. First, the overhead of a Python runtime is something that many companies will try to avoid where possible. Second, PyTorch has offered no support for mobile &#8220;edge&#8221; ML. Coincidentally, mobile support was added to PyTorch by Facebook in version 1.3, which was released earlier this month. Third, PyTorch lacks features around serving, which means that PyTorch systems are harder to productionize than equivalent systems developed using TensorFlow.</p>



<p>In the past year, PyTorch and TensorFlow have been converging in several ways. PyTorch introduced TorchScript and a JIT compiler, whereas TensorFlow announced that it would move to an &#8220;eager mode&#8221; of execution starting from version 2.0. TorchScript is essentially a graph representation of PyTorch; getting a graph from the code means the model can be deployed in C++ and optimized. TensorFlow&#8217;s eager mode provides an imperative programming environment that evaluates operations immediately, without building graphs. This is similar to PyTorch&#8217;s eager mode in both advantages and shortcomings: it helps with debugging, but models then cannot be exported outside of Python, optimized, or run on mobile.</p>



<p>In the future, both frameworks will be closer than they are today. New contenders may challenge them in areas like code generation or higher-order differentiation. He identified JAX as a potential contender. It is built by the same people who worked on the popular Autograd project and features both forward- and reverse-mode auto-differentiation, which allows computation of higher-order derivatives &#8220;orders of magnitude faster than what PyTorch/TensorFlow can offer&#8221;.</p>



<p>Horace He, the author of the article, can be contacted via Twitter; he has published both the code used to generate the datasets and the interactive charts from the article.</p>
<p>The post <a href="https://www.aiuniverse.xyz/pytorch-and-tensorflow-which-ml-framework-is-more-popular-in-academia-and-industry/">PyTorch and TensorFlow: Which ML Framework is More Popular in Academia and Industry</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/pytorch-and-tensorflow-which-ml-framework-is-more-popular-in-academia-and-industry/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Uber unveils a conversational AI platform called Plato</title>
		<link>https://www.aiuniverse.xyz/uber-unveils-a-conversational-ai-platform-called-plato/</link>
					<comments>https://www.aiuniverse.xyz/uber-unveils-a-conversational-ai-platform-called-plato/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Jul 2019 12:14:41 +0000</pubDate>
				<category><![CDATA[PyTorch]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Conversational]]></category>
		<category><![CDATA[platform]]></category>
		<category><![CDATA[Plato]]></category>
		<category><![CDATA[Uber Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4055</guid>

					<description><![CDATA[<p>Source: siliconangle.com Uber Technology Inc. has open-sourced a conversational artificial intelligence engine called the Plato Research Dialog System that’s set to compete with similar offerings such as Google LLC’s Dialogflow, Microsoft Corp.’s Bot Framework, and Amazon.com Inc.’s Lex. In a blog post today, Uber’s AI research team explained that Plato is designed for building, training and deploying prototype and demonstration systems. It <a class="read-more-link" href="https://www.aiuniverse.xyz/uber-unveils-a-conversational-ai-platform-called-plato/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/uber-unveils-a-conversational-ai-platform-called-plato/">Uber unveils a conversational AI platform called Plato</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: siliconangle.com</p>



<p>Uber Technology Inc. has open-sourced a conversational artificial intelligence engine called the Plato Research Dialog System that’s set to compete with similar offerings such as Google LLC’s Dialogflow, Microsoft Corp.’s Bot Framework, and Amazon.com Inc.’s Lex.</p>



<p>In a blog post today, Uber’s AI research team explained that Plato is designed for building, training and deploying prototype and demonstration systems. It can also facilitate conversational data collection.</p>



<p>Plato comes with a “clean and understandable” design that makes it ideal for users with a limited background in conversational AI, the company said. And it can integrate with existing deep learning models, thereby reducing the need to write any code.</p>



<p>Plato version 0.1 can support interactions with humans, data and other conversational AI agents through speech, text and “structured information,” Uber said. It also supports multiple agents and can incorporate pre-trained AI models for each component of those agents. Those models can be trained using either datasets or via interactions, using popular open-source machine learning frameworks such as Google’s TensorFlow, Facebook Inc.’s PyTorch and Uber’s very own Ludwig.</p>



<p>Another aspect of Plato is its “modular design,” which breaks up data processing into seven parts. Those include speech recognition, language understanding, state tracking, API calls, dialogue policies, language generation and speech synthesis.</p>
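<p>The seven-stage modular design can be sketched as a simple function pipeline. This is a hypothetical illustration of the idea, not Plato&#8217;s actual API; every function name below is invented:</p>

```python
from typing import Any, Callable, List

def run_pipeline(utterance: Any, stages: List[Callable[[Any], Any]]) -> Any:
    """Pass data through each processing stage in order."""
    data = utterance
    for stage in stages:
        data = stage(data)
    return data

# Toy stand-ins for the seven components described above
recognize   = lambda audio: audio                             # speech recognition
understand  = lambda text: {"intent": "greet", "text": text}  # language understanding
track_state = lambda frame: {**frame, "turn": 1}              # state tracking
call_api    = lambda state: state                             # API calls
policy      = lambda state: "greet_back"                      # dialogue policy
generate    = lambda action: "Hello! How can I help?"         # language generation
synthesize  = lambda text: text                               # speech synthesis

reply = run_pipeline("hi there", [recognize, understand, track_state,
                                  call_api, policy, generate, synthesize])
print(reply)
```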



<p>Plato also handles data logging by keeping track of events with its Dialogue Episode Recorder. The recorder saves information about previous dialogue states, what actions were taken and also current dialogue states.</p>
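<p>The recorder&#8217;s bookkeeping can be sketched as a small data structure. This is a hypothetical illustration of the idea only; the class and field names are invented, not the Dialogue Episode Recorder&#8217;s actual API:</p>

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Turn:
    prev_state: dict   # dialogue state before the action
    action: str        # action taken by the agent
    curr_state: dict   # dialogue state after the action

@dataclass
class EpisodeRecorder:
    turns: List[Turn] = field(default_factory=list)

    def record(self, prev_state: dict, action: str, curr_state: dict) -> None:
        self.turns.append(Turn(prev_state, action, curr_state))

rec = EpisodeRecorder()
rec.record({"area": None}, "request_area", {"area": "asked"})
```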



<p>“We believe that Plato has the capability to more seamlessly train conversational agents across deep learning frameworks, from Ludwig and TensorFlow to PyTorch, Keras and other open-source projects, leading to improved conversational AI technologies across academic and industry applications,” Uber’s AI researchers wrote in a blog post. “[We’ve] leveraged Plato to easily train a conversational agent how to ask for restaurant information and another agent how to provide such information; over time, their conversations become more and more natural.”</p>



<p>Analyst Holger Mueller of Constellation Research Inc. said one of the most interesting aspects of the Plato system was its ability to&nbsp;support multiple agents, which is necessary for Uber as it needs to facilitate multi-party chat between its customers, drivers and its own support and customer service agents.</p>



<p>“This means there need to be intelligent conversation sharing, and so it makes sense for Uber to throw its hat in the ring with Plato, despite the already crowded nature of the chatbot framework space,” Mueller said. “As with all new open-source projects we need to check&nbsp;on adoption in a few quarters, as the act of open-sourcing code assets does not guarantee developer or enterprise adoption.”</p>



<p>The release of Plato follows the debut of the aforementioned Ludwig, which is a set of open-source tools built on top of Google’s TensorFlow framework that allows users to train and test AI models without having to write code.</p>
<p>The post <a href="https://www.aiuniverse.xyz/uber-unveils-a-conversational-ai-platform-called-plato/">Uber unveils a conversational AI platform called Plato</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/uber-unveils-a-conversational-ai-platform-called-plato/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility</title>
		<link>https://www.aiuniverse.xyz/pytorch-announces-the-availability-of-pytorch-hub-for-improving-machine-learning-research-reproducibility/</link>
					<comments>https://www.aiuniverse.xyz/pytorch-announces-the-availability-of-pytorch-hub-for-improving-machine-learning-research-reproducibility/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 13 Jun 2019 10:47:37 +0000</pubDate>
				<category><![CDATA[PyTorch]]></category>
		<category><![CDATA[announces]]></category>
		<category><![CDATA[availability]]></category>
		<category><![CDATA[Hub]]></category>
		<category><![CDATA[improving]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3781</guid>

					<description><![CDATA[<p>Source:- hub.packtpub.com Yesterday, the team at PyTorch announced the availability of PyTorch Hub which is a simple API and workflow that offers the basic building blocks to improve machine learning research reproducibility. Reproducibility plays an important role in research as it is an essential requirement for a lot of fields related to research including the ones based on machine <a class="read-more-link" href="https://www.aiuniverse.xyz/pytorch-announces-the-availability-of-pytorch-hub-for-improving-machine-learning-research-reproducibility/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/pytorch-announces-the-availability-of-pytorch-hub-for-improving-machine-learning-research-reproducibility/">PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- hub.packtpub.com</p>
<p>Yesterday, the team at PyTorch announced the availability of PyTorch Hub, a simple API and workflow that offers the basic building blocks to improve machine learning research reproducibility.</p>
<p>Reproducibility plays an important role in research, as it is an essential requirement in many fields, including those based on machine learning techniques. But most machine learning research publications are either not reproducible or are too difficult to reproduce.</p>
<p>With the growing number of research publications, and tens of thousands of papers being hosted on arXiv and submitted to conferences, research reproducibility has become even more important. Though many publications are accompanied by code and trained models, which is useful, users still have to figure out most of the steps themselves.</p>
<p>PyTorch Hub consists of a pre-trained model repository that is designed to facilitate research reproducibility and also to enable new research. It provides built-in support for Colab, integration with Papers With Code and also contains a set of models including classification and segmentation, transformers, generative, etc. By adding a simple hubconf.py file, it supports the publication of pre-trained models to a GitHub repository, which provides a list of models that are to be supported and a list of dependencies that are required for running the models.</p>
<p>For example, one can check out the torchvision, huggingface-bert and gan-model-zoo repositories. Considering the case of the <i>torchvision</i> <i>hubconf.py</i>: in the torchvision repository, each model file can function and be executed independently. These model files require no package other than PyTorch and do not need separate entry-points.</p>
<p>A <i>hubconf.py</i> helps users send a pull request based on the template mentioned on the GitHub page.</p>
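<p>As a hedged sketch, the contents of such a <i>hubconf.py</i> might look like the following; the entrypoint name and model are invented for illustration, with only the <code>dependencies</code> list and top-level entrypoint functions being the parts PyTorch Hub expects:</p>

```python
# hubconf.py -- minimal illustrative example
dependencies = ['torch']  # packages required to run the models

import torch

def tiny_mlp(hidden=8, **kwargs):
    """Entrypoint: returns a small untrained MLP (illustrative only)."""
    return torch.nn.Sequential(
        torch.nn.Linear(4, hidden),
        torch.nn.ReLU(),
        torch.nn.Linear(hidden, 2),
    )
```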
<p>The official blog post reads<i>, “Our goal is to curate high-quality, easily-reproducible, maximally-beneficial models for research reproducibility. Hence, we may work with you to refine your pull request and in some cases reject some low-quality models to be published. Once we accept your pull request, your model will soon appear on </i><i>Pytorch hub webpage</i><i> for all users to explore.”</i></p>
<p>PyTorch Hub allows users to explore available models, load a model as well as understand the kind of methods available for any given model. Below mentioned are few of the examples:</p>
<p><b>Explore available entrypoints:</b></p>
<p>With the help of the torch.hub.list() API, users can now list all available entrypoints in a repo. Apart from pretrained models, PyTorch Hub also allows auxiliary entrypoints, such as <i>bertTokenizer</i> for preprocessing in the BERT models, which makes the user workflow smoother.</p>
<p><b>Load a model:</b></p>
<p>With the help of torch.hub.load() API, users can load a model entrypoint. This API can also provide useful information about instantiating the model.</p>
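<p>A self-contained sketch of loading an entrypoint is shown below; it uses <code>source='local'</code> against a temporary directory to avoid a network fetch, and the repository contents and entrypoint name are invented for illustration:</p>

```python
import os
import tempfile

import torch

# Write a minimal hubconf.py into a temporary "repository" directory
repo = tempfile.mkdtemp()
hubconf = (
    "dependencies = ['torch']\n"
    "import torch\n"
    "\n"
    "def tiny_linear(out_features=2, **kwargs):\n"
    "    # Entrypoint: a tiny untrained linear model (illustrative only)\n"
    "    return torch.nn.Linear(4, out_features)\n"
)
with open(os.path.join(repo, "hubconf.py"), "w") as f:
    f.write(hubconf)

# Load the entrypoint; extra keyword arguments are forwarded to it
model = torch.hub.load(repo, "tiny_linear", out_features=3, source="local")
print(model)
```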
<p>Most of the users are happy about this news as they think it will be useful for them. A user commented on HackerNews, <i>“I love that the tooling for ML experimentation is becoming more mature. Keeping track of hyperparameters, training/validation/test experiment test set manifests, code state, etc is both extremely crucial and extremely necessary.”</i></p>
<p>The post <a href="https://www.aiuniverse.xyz/pytorch-announces-the-availability-of-pytorch-hub-for-improving-machine-learning-research-reproducibility/">PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/pytorch-announces-the-availability-of-pytorch-hub-for-improving-machine-learning-research-reproducibility/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
