<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>MediaPipe Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/mediapipe/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/mediapipe/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 28 Mar 2020 09:03:12 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Google&#8217;s MediaPipe Machine Learning Framework Web-Enabled with WebAssembly</title>
		<link>https://www.aiuniverse.xyz/googles-mediapipe-machine-learning-framework-web-enabled-with-webassembly/</link>
					<comments>https://www.aiuniverse.xyz/googles-mediapipe-machine-learning-framework-web-enabled-with-webassembly/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 28 Mar 2020 09:02:44 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[MediaPipe]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7791</guid>

					<description><![CDATA[<p>Source: infoq.com Google recently presented MediaPipe graphs for browsers, enabled by WebAssembly and accelerated by the XNNPack ML Inference Library. As previously demonstrated on mobile (Android, iOS), MediaPipe graphs allow developers <a class="read-more-link" href="https://www.aiuniverse.xyz/googles-mediapipe-machine-learning-framework-web-enabled-with-webassembly/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-mediapipe-machine-learning-framework-web-enabled-with-webassembly/">Google&#8217;s MediaPipe Machine Learning Framework Web-Enabled with WebAssembly</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: infoq.com</p>



<p>Google recently presented MediaPipe graphs for browsers, enabled by WebAssembly and accelerated by the XNNPack ML Inference Library. As previously demonstrated on mobile (Android and iOS), MediaPipe graphs let developers build and run machine-learning (ML) pipelines that accomplish complex tasks.</p>



<p>MediaPipe graphs are visualized in the browser with MediaPipe Visualizer, a dedicated web application in which developers build a graph whose nodes represent machine-learning or other processing tasks. The following figure shows the MediaPipe face detection example running in the visualizer.</p>



<p>As is apparent from the graph, the face detection application transforms input frames (<code>input_frames_gpu</code>) into output frames (<code>output_frames_gpu</code>) through a series of transformations: converting incoming frames into image tensors (<code>TfLiteTensor</code>), running them through a TFLite face detection model, and overlaying annotations on the output video.</p>



<p>The visualized graph matches the accompanying text, which describes each node and the processing it is expected to perform. The MediaPipe Visualizer reacts in real time to changes made within the editor, maintaining the correspondence between text and graph. For instance, the node that converts the transformed input image into a <code>TfLiteTensor</code> is configured as follows:</p>



<pre><code># Converts the transformed input image on GPU into an image tensor stored as a
# TfLiteTensor.
node {
  calculator: "TfLiteConverterCalculator"
  input_stream: "IMAGE:transformed_input_video_cpu"
  output_stream: "TENSORS:image_tensor"
}</code></pre>



<p>Google created four demos (Edge Detection, Face Detection, Hair Segmentation, Hand Tracking) to be run in the browser.</p>



<p>The browser-enabled version of MediaPipe graphs is implemented by compiling the C++ source code to WebAssembly using Emscripten and creating an API that handles all necessary communication between JavaScript and C++. Required demo assets (ML models and auxiliary text/data files) are packaged as individual binary data packages, to be loaded at runtime.</p>
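<p>To make that flow concrete, the following is a minimal TypeScript sketch of how a page might bring up such an Emscripten-compiled module and fetch one of the binary asset packages at runtime. The file names and the <code>createMediaPipeModule</code> and <code>loadAssetPackage</code> names are hypothetical stand-ins for illustration, not part of MediaPipe’s published API:</p>

<pre><code>// Hypothetical Emscripten "MODULARIZE" factory produced when the C++
// MediaPipe sources are compiled to WebAssembly (file name assumed).
import createMediaPipeModule from "./mediapipe_demo.js";

async function initDemo() {
  // Fetch the packaged demo assets (ML models, auxiliary text/data files)
  // as one binary blob, as described above.
  const response = await fetch("./mediapipe_demo_assets.data");
  const assetPackage = new Uint8Array(await response.arrayBuffer());

  // Instantiate the WASM module; Emscripten resolves the promise once the
  // .wasm binary has been compiled and the runtime is ready.
  const module = await createMediaPipeModule();

  // Hand the asset package to the C++ side through the JS/C++ bridge
  // (binding name assumed for this sketch).
  module.loadAssetPackage(assetPackage);
}

void initDemo();
</code></pre>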



<p>To optimize for performance, MediaPipe’s browser version leverages the GPU for image operations whenever possible and resorts to the lightest (yet still accurate) ML models available. The XNNPack ML Inference Library is additionally used in connection with the TensorFlow Lite inference calculator (<a href="https://github.com/google/mediapipe/blob/master/mediapipe/calculators/tflite/tflite_inference_calculator.cc"><code>TfLiteInferenceCalculator</code></a>), resulting in an estimated 2-3x speed gain in most applications.</p>



<p>Google plans to improve MediaPipe’s browser version and give developers more control over the template graphs and the assets the MediaPipe demos use. Developers are invited to follow the Google Developers Twitter account.</p>



<p>MediaPipe is a cross-platform framework for mobile devices, workstations, and servers, and supports GPU acceleration. MediaPipe is available under the Apache 2.0 open-source license. Contributions and feedback are welcome and may be provided via the GitHub project.</p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-mediapipe-machine-learning-framework-web-enabled-with-webassembly/">Google&#8217;s MediaPipe Machine Learning Framework Web-Enabled with WebAssembly</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/googles-mediapipe-machine-learning-framework-web-enabled-with-webassembly/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google brings cross-platform AI pipeline framework MediaPipe to the web</title>
		<link>https://www.aiuniverse.xyz/google-brings-cross-platform-ai-pipeline-framework-mediapipe-to-the-web/</link>
					<comments>https://www.aiuniverse.xyz/google-brings-cross-platform-ai-pipeline-framework-mediapipe-to-the-web/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 29 Jan 2020 07:50:02 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI pipeline]]></category>
		<category><![CDATA[framework]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[MediaPipe]]></category>
		<category><![CDATA[platform]]></category>
		<category><![CDATA[WEB]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6433</guid>

					<description><![CDATA[<p>Source: venturebeat.com Roughly a year ago, Google open-sourced MediaPipe, a framework for building cross-platform AI pipelines consisting of fast inference and media processing (like video decoding). Basically, it’s <a class="read-more-link" href="https://www.aiuniverse.xyz/google-brings-cross-platform-ai-pipeline-framework-mediapipe-to-the-web/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-brings-cross-platform-ai-pipeline-framework-mediapipe-to-the-web/">Google brings cross-platform AI pipeline framework MediaPipe to the web</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>Roughly a year ago, Google open-sourced MediaPipe, a framework for building cross-platform AI pipelines consisting of fast inference and media processing (like video decoding). Basically, it’s a quick and dirty way to perform object detection, face detection, hand tracking, multi-hand tracking, hair segmentation, and other such tasks in a modular fashion, with popular machine learning frameworks like Google’s own TensorFlow and TensorFlow Lite.</p>



<p>MediaPipe could previously be deployed to desktop, mobile devices running Android and iOS, and edge devices like Google’s Coral hardware family, but it’s increasingly making its way to the web courtesy of WebAssembly, a portable binary code format for executable programs, and the XNNPack ML Inference Library, an optimized collection of floating-point AI inference operators. On the graphics and rendering side, MediaPipe now automatically taps directly into WebGL, a JavaScript API for rendering interactive 2D and 3D graphics within any compatible web browser, while the browser’s WebAssembly runtime acts as a virtual machine that executes the compiled instructions very quickly.</p>
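<p>In an Emscripten-compiled build, “tapping into WebGL” typically means handing the module a canvas whose GL context the compiled C++ code renders into. Below is a minimal TypeScript sketch of that handoff, assuming Emscripten’s conventional <code>Module.canvas</code> slot; everything beyond the standard browser APIs here is illustrative:</p>

<pre><code>// Grab the page's output canvas and confirm WebGL is available before
// handing the canvas to the WASM runtime.
const canvas = document.querySelector("canvas")!;
const gl = canvas.getContext("webgl2") ?? canvas.getContext("webgl");
if (!gl) {
  throw new Error("WebGL is required to run the MediaPipe demos");
}

// Emscripten's Module object conventionally exposes a "canvas" property
// that compiled GL calls draw into; further setup is assumed.
const Module = { canvas };
</code></pre>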



<p>An API facilitates communications between JavaScript and C++, allowing users to change and interact with MediaPipe graphs directly using JavaScript. And all the requisite demo assets, including AI models and auxiliary text and data files, are packaged as individual binary data packages to be loaded at runtime.</p>
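<p>As a rough illustration of what that bridge looks like from the JavaScript side, the sketch below pushes an edited graph definition (MediaPipe graphs are written in protobuf text format) into a loaded WASM module. The <code>setGraph</code> and <code>processFrame</code> bindings are assumed names for the kind of Emscripten-exposed functions described here, not a documented API:</p>

<pre><code>// A trimmed graph node definition of the kind the demo graphs use.
const graphPbtxt = `
node {
  calculator: "TfLiteConverterCalculator"
  input_stream: "IMAGE:transformed_input_video_cpu"
  output_stream: "TENSORS:image_tensor"
}
`;

// The loaded Emscripten module (loading not shown); both bindings are
// hypothetical names for this sketch.
declare const module: {
  setGraph(pbtxt: string): void;
  processFrame(video: HTMLVideoElement): void;
};

// Hot-swap the graph, then feed it a frame from a video element.
module.setGraph(graphPbtxt);
module.processFrame(document.querySelector("video")!);
</code></pre>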



<p>“Since everything runs directly in the browser, video never leaves the user’s computer and each iteration can be immediately tested on a live webcam stream (and soon, arbitrary video),” explained MediaPipe team members Michael Hays and Tyler Mullen in a blog post.</p>
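<p>Wiring up that live webcam stream needs nothing beyond standard browser APIs; a minimal, MediaPipe-independent sketch looks like this:</p>

<pre><code>// Request the user's webcam; the stream stays on the local machine and is
// attached to a video element for the in-browser pipeline to consume.
async function startWebcam(video: HTMLVideoElement) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();
}

void startWebcam(document.querySelector("video")!);
</code></pre>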



<p>Google leveraged the above-listed components to integrate preview functionality into a web-based visualizer, a sort of workspace for iterating over MediaPipe graph designs. The visualizer, which is hosted at viz.mediapipe.dev, enables developers to inspect MediaPipe graphs (the dataflow descriptions of machine-learning pipelines) by pasting graph code into the editor tab or uploading a file to the visualizer. Users can pan around and zoom into the graphical representation using a mouse and scroll wheel, and the visualization reacts to changes made within the editor in real time.</p>



<p>Hays and Mullen note that currently, web-based MediaPipe support is limited to the demo graphs supplied by Google. Developers must edit one of the template graphs; they can’t provide their own from scratch or add or alter assets. GPU-accelerated TensorFlow Lite inference isn’t supported (inference runs on the CPU via XNNPack), and the graph’s computations must be run on a single processor thread.</p>



<p>A lack of compute shaders (routines compiled for high-throughput accelerators) on the web is to blame for this last limitation, which Hays, Mullen, and their team attempted to work around by using graphics cards for image operations where possible and by choosing the lightest-weight possible versions of all AI models. They plan to “continue to build upon this new platform” and to provide developers with “much more control” over time.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-brings-cross-platform-ai-pipeline-framework-mediapipe-to-the-web/">Google brings cross-platform AI pipeline framework MediaPipe to the web</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-brings-cross-platform-ai-pipeline-framework-mediapipe-to-the-web/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
