<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>TensorFlow Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/category/tensorflow/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/category/tensorflow/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Wed, 14 Oct 2020 06:27:12 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Android Studio improves machine learning support</title>
		<link>https://www.aiuniverse.xyz/android-studio-improves-machine-learning-support/</link>
					<comments>https://www.aiuniverse.xyz/android-studio-improves-machine-learning-support/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 14 Oct 2020 06:26:23 +0000</pubDate>
				<category><![CDATA[TensorFlow]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12204</guid>

					<description><![CDATA[<p>Source: channelasia.tech Google’s Android Studio IDE team has released the stable version of Android Studio 4.1, featuring machine learning improvements and a database inspector. With the 4.1 release, Android Studio improves on-device machine learning support via backing for TensorFlow Lite models in Android projects. Android Studio generates classes so models can be run with better type safety and less <a class="read-more-link" href="https://www.aiuniverse.xyz/android-studio-improves-machine-learning-support/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/android-studio-improves-machine-learning-support/">Android Studio improves machine learning support</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: channelasia.tech</p>



<p>Google’s Android Studio IDE team has released the stable version of Android Studio 4.1, featuring machine learning improvements and a database inspector.</p>



<p>With the 4.1 release, Android Studio improves on-device machine learning support by adding support for TensorFlow Lite models in Android projects. Android Studio generates classes so models can be run with better type safety and less code.</p>



<p>The database inspector, meanwhile, enables querying of an app&#8217;s database, whether the app uses the Jetpack Room library or the Android platform version of SQLite directly. Values can be modified using the database inspector, with the changes visible in the running app.</p>
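<p>The inspector&#8217;s query-and-edit loop is conceptually the same as working with SQLite directly. A minimal Python sketch of the equivalent operations, using a throwaway in-memory database (the table and values here are illustrative, not any real app schema):</p>

```python
import sqlite3

# In-memory stand-in for an app's SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))

# Query the database, as the inspector's query tab does.
rows = conn.execute("SELECT id, name FROM users").fetchall()
print(rows)  # [(1, 'Ada')]

# Modify a value in place; a running app would see the change on its
# next read, just as with the inspector's editable cells.
conn.execute("UPDATE users SET name = ? WHERE id = ?", ("Grace", 1))
print(conn.execute("SELECT name FROM users WHERE id = 1").fetchone()[0])
```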



<p>Introduced October 12 and accessible from developer.android.com, Android Studio 4.1 also makes it easier to navigate Dagger-related dependency injection code by providing a new gutter action and extending support in the Find Usages window. For example, clicking the gutter action next to a method that consumes a given type navigates to where that type is used as a dependency.</p>



<p>In another change, templates in Android Studio 4.1&#8217;s New Project dialog now use Material Design Components and conform to updated guidance for themes and styles by default. These changes make it easier to follow recommended Material styling patterns and to support UI features such as dark themes.</p>



<p>The Android Emulator can now also run directly in Android Studio. This conserves screen real estate and enables quick navigation between the emulator and the editor window using hotkeys. The emulator also now supports foldables, with developers able to configure foldable devices in a variety of designs and configurations.</p>



<p>In addition, symbolication for native crash reports is available; updates to Apply Changes allow for faster builds; and the Android Studio Memory Profiler now includes a Native Memory Profiler for apps deployed to physical devices running Android 10 or later.</p>



<p>The Native Memory Profiler tracks allocations and deallocations of objects in native code for a specific time period and offers information about total allocations and remaining heap size.</p>
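<p>That &#8220;track over a window, then report allocations and what remains&#8221; workflow has a rough analogue in Python&#8217;s standard library. This sketch uses tracemalloc purely as an illustration of the concept, not the Android tooling:</p>

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Workload under observation: some allocations survive, some don't.
kept = [bytes(1000) for _ in range(100)]   # remains on the "heap"
temp = [bytes(1000) for _ in range(100)]   # freed below
del temp

after = tracemalloc.take_snapshot()
stats = after.compare_to(before, "lineno")

# Net growth reflects only the surviving allocations, mirroring the
# "remaining heap size" view of a native memory profiler.
net = sum(s.size_diff for s in stats)
print(f"net allocated bytes during window: {net}")
tracemalloc.stop()
```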



<p>Rounding off the changes, C/C++ dependencies can be exported from AAR (Android Archive) files; the Android Studio Profilers can be accessed in a separate window from the primary Android Studio window, which is useful for game developers; System Trace UI improvements are on offer; and 2,370 bugs were fixed and 275 public issues were closed.</p>
<p>The post <a href="https://www.aiuniverse.xyz/android-studio-improves-machine-learning-support/">Android Studio improves machine learning support</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/android-studio-improves-machine-learning-support/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google to replace TensorFlow’s runtime with TFRT</title>
		<link>https://www.aiuniverse.xyz/google-to-replace-tensorflows-runtime-with-tfrt/</link>
					<comments>https://www.aiuniverse.xyz/google-to-replace-tensorflows-runtime-with-tfrt/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 02 May 2020 09:22:40 +0000</pubDate>
				<category><![CDATA[TensorFlow]]></category>
		<category><![CDATA[deployment]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[TFRT]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8509</guid>

					<description><![CDATA[<p>Source: sdtimes.com Google has announced a new TensorFlow runtime designed to make it easier to build and deploy machine learning models across many different devices.&#160; The company explained that ML ecosystems are vastly different than they were 4 or 5 years ago. Today, innovation in ML has led to more complex models and deployment scenarios <a class="read-more-link" href="https://www.aiuniverse.xyz/google-to-replace-tensorflows-runtime-with-tfrt/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-to-replace-tensorflows-runtime-with-tfrt/">Google to replace TensorFlow’s runtime with TFRT</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: sdtimes.com</p>



<p>Google has announced a new TensorFlow runtime designed to make it easier to build and deploy machine learning models across many different devices.&nbsp;</p>



<p>The company explained that ML ecosystems are vastly different than they were four or five years ago. Today, innovation in ML has led to more complex models and deployment scenarios that demand increasing amounts of compute.</p>



<p>To address these new needs, Google decided to replace the current TensorFlow stack, which is optimized for graph execution and incurs non-trivial overhead when dispatching a single op, with a new high-performance, low-level runtime.</p>



<p>The new TFRT provides efficient use of multithreaded host CPUs, supports fully asynchronous programming models, and focuses on low-level efficiency. It is aimed at a broad range of users, such as:</p>



<ul class="wp-block-list"><li>researchers looking for faster iteration time and better error reporting,</li><li>application developers looking for improved performance,</li><li>and hardware makers looking to integrate edge and datacenter devices into TensorFlow in a modular way.&nbsp;</li></ul>



<p>TFRT is also responsible for the efficient execution of kernels &#8211; low-level, device-specific primitives &#8211; on targeted hardware, and it plays a critical part in both eager and graph execution.</p>



<p>&#8220;Whereas the existing TensorFlow runtime was initially built for graph execution and training workloads, the new runtime will make eager execution and inference first-class citizens, while putting special emphasis on architecture extensibility and modularity,&#8221; Eric Johnson, TFRT product manager, and Mingsheng Hong, TFRT tech lead, wrote in a post.</p>



<p>To achieve higher performance, TFRT has a lock-free graph executor that supports concurrent op execution with low synchronization overhead and has decoupled device runtimes from the host runtime, the core TFRT component that drives host CPU and I/O work.</p>
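<p>As a loose illustration of the idea (not TFRT&#8217;s actual C++ internals), a dataflow executor can dispatch each op the moment its inputs are ready, letting independent ops run concurrently. A minimal Python sketch with concurrent.futures:</p>

```python
from concurrent.futures import ThreadPoolExecutor

# Tiny dataflow graph: ops "a" and "b" are independent and can be
# dispatched concurrently; op "c" runs once both producers finish.
def run_graph():
    with ThreadPoolExecutor() as pool:
        fa = pool.submit(lambda: 2)          # op "a"
        fb = pool.submit(lambda: 3)          # op "b"
        # op "c" consumes its producers' results as they become ready
        fc = pool.submit(lambda: fa.result() + fb.result())
        return fc.result()

print(run_graph())  # 5
```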



<p>The runtime is also tightly integrated with MLIR&#8217;s compiler infrastructure to generate an optimized, target-specific representation of the computational graph that the runtime executes.&nbsp;</p>



<p>&#8220;Together, TFRT and MLIR will improve TensorFlow&#8217;s unification, flexibility, and extensibility,&#8221; Johnson and Hong wrote.</p>



<p>TFRT will be integrated into TensorFlow, and will be enabled initially through an opt-in flag, giving the team time to fix any bugs and fine-tune performance. Eventually, it will become TensorFlow’s default runtime.&nbsp;</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-to-replace-tensorflows-runtime-with-tfrt/">Google to replace TensorFlow’s runtime with TFRT</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-to-replace-tensorflows-runtime-with-tfrt/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>GOOGLE LAUNCHES TENSORFLOW RUNTIME FOR ITS TENSORFLOW ML FRAMEWORK</title>
		<link>https://www.aiuniverse.xyz/google-launches-tensorflow-runtime-for-its-tensorflow-ml-framework/</link>
					<comments>https://www.aiuniverse.xyz/google-launches-tensorflow-runtime-for-its-tensorflow-ml-framework/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 01 May 2020 07:20:41 +0000</pubDate>
				<category><![CDATA[TensorFlow]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[framework]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8480</guid>

					<description><![CDATA[<p>Source: analyticsindiamag.com Google has launched TensorFlow RunTime (TFRT), which is a new runtime for its TensorFlow machine learning framework.&#160; According to a recent blog post by Eric Johnson, TFRT Product Manager and Mingsheng Hong, TFRT Tech Lead/Manager, “TensorFlow RunTime aims to provide a unified, extensible infrastructure layer with best-in-class performance across a wide variety of domain-specific hardware. <a class="read-more-link" href="https://www.aiuniverse.xyz/google-launches-tensorflow-runtime-for-its-tensorflow-ml-framework/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-launches-tensorflow-runtime-for-its-tensorflow-ml-framework/">GOOGLE LAUNCHES TENSORFLOW RUNTIME FOR ITS TENSORFLOW ML FRAMEWORK</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsindiamag.com</p>



<p>Google has launched TensorFlow RunTime (TFRT), which is a new runtime for its TensorFlow machine learning framework.&nbsp;</p>



<p>According to a recent blog post by Eric Johnson, TFRT Product Manager and Mingsheng Hong, TFRT Tech Lead/Manager, “TensorFlow RunTime aims to provide a unified, extensible infrastructure layer with best-in-class performance across a wide variety of domain-specific hardware. It provides efficient use of multithreaded host CPUs, supports fully asynchronous programming models, and focuses on low-level efficiency.”</p>



<p>The company has made TFRT available on GitHub. In a benchmarking study for TensorFlow Dev Summit 2020 that compared the performance of GPU inference over TFRT to the current runtime, the company reported a 28% improvement in average inference time, calling these early results strong validation that TFRT can provide a significant boost to performance.</p>



<p>The blog further stated how TFRT could benefit a broad range of users — including the researchers who are looking for faster iteration time and better error reporting when developing complex new models in eager mode; application developers who are looking for improved performance when training and serving models in production; and hardware makers looking to integrate edge and datacenter devices into TensorFlow in a modular way.</p>



<p>Explaining further, Johnson stated that TFRT is responsible for the efficient execution of kernels &#8211; low-level, device-specific primitives &#8211; on targeted hardware. It also plays a critical part in both eager and graph execution.</p>



<p><em>Figure: the TensorFlow training stack</em></p>



<p>“In eager execution, TensorFlow APIs call directly into the new runtime. In graph execution, your program’s computational graph is lowered to an optimised target-specific program and dispatched to TFRT. In both execution paths, the new runtime invokes a set of kernels that call into the underlying hardware devices to complete the model execution, as shown by the black arrows,” wrote Johnson.</p>



<p>Compared with the existing TensorFlow runtime, which was initially built for graph execution and training workloads, TFRT will make eager execution and inference first-class citizens, while putting special emphasis on architecture extensibility and modularity. TFRT also has the following selected design highlights:</p>



<ul class="wp-block-list"><li>TFRT has a lock-free graph executor that supports concurrent op execution with low synchronisation overhead, and a thin, eager op dispatch stack so that eager API calls will be asynchronous and more efficient. This will help in achieving higher performance.</li><li>The company decoupled device runtimes from the host runtime, the core TFRT component that drives host CPU and I/O work, in order to make extending the TF stack easier.</li><li>To get consistent behaviour, TFRT leverages common abstractions, such as shape functions and kernels, across both eager and graph.</li></ul>
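<p>The last point &#8211; common abstractions shared by eager and graph execution &#8211; can be illustrated with a toy kernel registry (hypothetical names, nothing from TFRT&#8217;s actual codebase): both execution paths resolve the same kernel table, so behaviour stays consistent by construction:</p>

```python
# Toy kernel registry shared by an "eager" path and a "graph" path.
KERNELS = {
    "add": lambda x, y: x + y,
    "mul": lambda x, y: x * y,
}

def run_eager(op, *args):
    # Eager: dispatch a single op immediately.
    return KERNELS[op](*args)

def run_graph(program, feeds):
    # Graph: execute a list of (output_name, op, input_names) steps.
    env = dict(feeds)
    for out, op, ins in program:
        env[out] = KERNELS[op](*(env[i] for i in ins))
    return env

# Both paths hit the same kernels, so their results agree.
print(run_eager("add", 2, 3))  # 5
g = run_graph([("s", "add", ("x", "y")), ("p", "mul", ("s", "x"))],
              {"x": 2, "y": 3})
print(g["p"])  # 10
```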



<p>According to the blog, “A high-performance low-level runtime is a key to enable the trends of today and empower the innovations of tomorrow.”</p>



<p>The company has limited contributions to begin with, but is encouraging participation in the form of requirements and design discussions.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-launches-tensorflow-runtime-for-its-tensorflow-ml-framework/">GOOGLE LAUNCHES TENSORFLOW RUNTIME FOR ITS TENSORFLOW ML FRAMEWORK</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-launches-tensorflow-runtime-for-its-tensorflow-ml-framework/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google unveils TensorFlow tool for making mobile-ready models</title>
		<link>https://www.aiuniverse.xyz/google-unveils-tensorflow-tool-for-making-mobile-ready-models/</link>
					<comments>https://www.aiuniverse.xyz/google-unveils-tensorflow-tool-for-making-mobile-ready-models/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 20 Apr 2020 07:25:07 +0000</pubDate>
				<category><![CDATA[TensorFlow]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Metadata]]></category>
		<category><![CDATA[mobile devices]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8303</guid>

					<description><![CDATA[<p>Source: sg.channelasia.tech Google has announced TensorFlow Lite Model Maker, a tool for converting an existing TensorFlow model to the TensorFlow Lite format used to serve predictions on lightweight hardware such as mobile devices. TensorFlow models can be quite large, and serving predictions remotely from beefy hardware capable of handling them isn’t always possible. Google created the TensorFlow Lite <a class="read-more-link" href="https://www.aiuniverse.xyz/google-unveils-tensorflow-tool-for-making-mobile-ready-models/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-unveils-tensorflow-tool-for-making-mobile-ready-models/">Google unveils TensorFlow tool for making mobile-ready models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: sg.channelasia.tech</p>



<p>Google has announced TensorFlow Lite Model Maker, a tool for converting an existing TensorFlow model to the TensorFlow Lite format used to serve predictions on lightweight hardware such as mobile devices.</p>



<p>TensorFlow models can be quite large, and serving predictions remotely from beefy hardware capable of handling them isn’t always possible.</p>



<p>Google created the TensorFlow Lite model format to make it more efficient to serve predictions locally, but creating a TensorFlow Lite version of a model previously required some work.</p>



<p>In a blog post, Google described how TensorFlow Lite Model Maker adapts existing TensorFlow models to the Lite format with only a few lines of code.</p>



<p>The adaptation process uses one of a small number of task types to evaluate the model and generate a Lite version. The downside is that only a couple of task types are available right now &#8211; image classification and text classification &#8211; so models for other tasks aren&#8217;t yet supported.</p>
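<p>Conceptually, a task-type-driven tool like this maps each supported task to its own conversion pipeline and fails fast on anything else. A hypothetical sketch of that dispatch shape (illustrative only &#8211; this is not Model Maker&#8217;s real API, and the names are invented):</p>

```python
# Hypothetical task registry: each supported task type knows how to
# adapt a model; unsupported tasks raise an error immediately.
SUPPORTED_TASKS = {
    "image_classification": lambda model: f"lite({model}, task=image)",
    "text_classification": lambda model: f"lite({model}, task=text)",
}

def make_lite(model, task):
    if task not in SUPPORTED_TASKS:
        raise ValueError(f"task {task!r} not supported yet")
    return SUPPORTED_TASKS[task](model)

print(make_lite("my_model", "image_classification"))
```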



<p>Other TensorFlow Lite tools announced in the same post include a tool to automatically generate platform-specific wrapper code to work with a given model.</p>



<p>Because hand-coding wrappers for models can be error-prone, the tool automatically generates the wrapper from metadata that Model Maker embeds in the model.&nbsp;The tool is currently available as a pre-release beta and supports only Android right now, with plans to eventually integrate it into Android Studio.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-unveils-tensorflow-tool-for-making-mobile-ready-models/">Google unveils TensorFlow tool for making mobile-ready models</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-unveils-tensorflow-tool-for-making-mobile-ready-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
