<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>TinyML Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/tinyml/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/tinyml/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 12 Jun 2021 05:43:29 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Machine Learning at the Edge: TinyML Is Getting Big</title>
		<link>https://www.aiuniverse.xyz/machine-learning-at-the-edge-tinyml-is-getting-big/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-at-the-edge-tinyml-is-getting-big/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 12 Jun 2021 05:43:27 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Big]]></category>
		<category><![CDATA[Edge]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[TinyML]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14245</guid>

					<description><![CDATA[<p>Source &#8211; https://jpt.spe.org/ Being able to deploy machine learning applications at the edge is the key to unlocking a multibillion-dollar market. TinyML is the art and science <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-at-the-edge-tinyml-is-getting-big/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-at-the-edge-tinyml-is-getting-big/">Machine Learning at the Edge: TinyML Is Getting Big</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://jpt.spe.org/</p>



<p>Being able to deploy machine learning applications at the edge is the key to unlocking a multibillion-dollar market. TinyML is the art and science of producing machine-learning models frugal enough to work at the edge, and it&#8217;s seeing rapid growth.</p>



<p>Is it $61 billion and a 38.4% compound annual growth rate (CAGR) by 2028, or $43 billion and a 37.4% CAGR by 2027? It depends on which report on the growth of edge computing you go by, but in the end the figures are not that different.</p>



<p>What matters is that edge computing is booming. There is growing interest from vendors, and ample coverage, for good reason. Although the definition of what constitutes edge computing is a bit fuzzy, the idea is simple: take compute out of the data center and bring it as close to where the action is as possible.</p>



<p>Whether it&#8217;s stand-alone Internet-of-Things sensors, devices of all kinds, drones, or autonomous vehicles, there&#8217;s one thing in common: increasingly, data generated at the edge are used to feed applications powered by machine-learning models. There&#8217;s just one problem: machine-learning models were never designed to be deployed at the edge. Until now, at least. Enter TinyML.</p>



<p>Tiny machine learning (TinyML) is broadly defined as a fast-growing field of machine-learning technologies and applications, spanning hardware, algorithms, and software, that performs on-device sensor-data analytics at extremely low power, typically in the mW range and below. This enables a variety of always-on use cases on battery-operated devices.</p>
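<p>That milliwatt budget can be made concrete with a back-of-the-envelope battery calculation. The figures below (a CR2032 coin cell holding roughly 225 mAh at a nominal 3 V) are illustrative assumptions, not numbers from the article; real cells and duty-cycled workloads will differ.</p>

```python
# Illustrative battery-life estimate for an always-on TinyML device.
# Assumed figures: a CR2032 coin cell holds ~225 mAh at a nominal 3 V.
CAPACITY_MAH = 225.0
VOLTAGE_V = 3.0
ENERGY_WH = CAPACITY_MAH / 1000.0 * VOLTAGE_V  # ~0.675 Wh of stored energy

def runtime_days(avg_power_mw: float) -> float:
    """Hours of operation at a constant average draw, converted to days."""
    hours = ENERGY_WH / (avg_power_mw / 1000.0)
    return hours / 24.0

# At a 1 mW average draw the cell lasts roughly a month;
# at 100 uW (0.1 mW), closer to nine months.
print(round(runtime_days(1.0)))   # ~28 days
print(round(runtime_days(0.1)))   # ~281 days
```

<p>The point of the sketch is the order of magnitude: dropping average draw from milliwatts to microwatts is what turns a weeks-long deployment into a year-scale one, which is why the field targets the mW range and below.</p>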



<p>This week, the inaugural TinyML EMEA Technical Forum is taking place, and it was a good opportunity to speak with some key people in this domain. <em>ZDNet</em> caught up with Evgeni Gousev from Qualcomm, Blair Newman from Neuton, and Pete Warden from Google.</p>



<p><strong>Hey Google</strong><br>Pete Warden wrote the world&#8217;s only mustache-detection image-processing algorithm. He was also the founder and chief technology officer of the startup Jetpac. He raised a Series A from Khosla Ventures, built a technical team, and created a unique data product that analyzed the pixel data of more than 140 million Instagram photos and turned them into in-depth guides for more than 5,000 cities around the world.</p>



<p>Jetpac was acquired by Google in 2014, and Warden has been a Google Staff Research Engineer since. Back then, Warden was feeling pretty good about himself for being able to fit machine-learning models into 2 megabytes.</p>



<p>That was until he found that some of his new Google colleagues had a 13-kilobyte model they were using to recognize wake words, running on the always-on digital signal processor in Android devices. That way the main CPU wasn&#8217;t burning battery listening out for &#8220;that&#8221; wake word: Hey Google.</p>



<p>&#8220;That really blew my mind, the fact that you could do something actually really useful in that smaller model. And it really got me thinking about all of the other applications that might be possible if we can run especially all these new machine-learning, deep-learning approaches,&#8221; Warden said.</p>



<p>Although Warden is often credited by his peers with having kickstarted the TinyML subdomain of machine learning, he is quite modest about it. Much of what he did, he acknowledges, was based on things others were already working on: &#8220;A lot of my contribution has been helping publicize and document a bunch of these engineering practices that have emerged,&#8221; he said.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-at-the-edge-tinyml-is-getting-big/">Machine Learning at the Edge: TinyML Is Getting Big</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-at-the-edge-tinyml-is-getting-big/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google, Harvard, and EdX Team Up to Offer TinyML Training</title>
		<link>https://www.aiuniverse.xyz/google-harvard-and-edx-team-up-to-offer-tinyml-training/</link>
					<comments>https://www.aiuniverse.xyz/google-harvard-and-edx-team-up-to-offer-tinyml-training/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 14 Aug 2020 07:27:36 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Harvard]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[TinyML]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10891</guid>

					<description><![CDATA[<p>Source: informationweek.com Online learning platform EdX; Google’s open-source machine learning platform, TensorFlow; and HarvardX have put together a certification program to train tech professionals to work with tiny machine <a class="read-more-link" href="https://www.aiuniverse.xyz/google-harvard-and-edx-team-up-to-offer-tinyml-training/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-harvard-and-edx-team-up-to-offer-tinyml-training/">Google, Harvard, and EdX Team Up to Offer TinyML Training</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: informationweek.com</p>



<p>Online learning platform EdX; Google’s open-source machine learning platform, TensorFlow; and HarvardX have put together a certification program to train tech professionals to work with tiny machine learning (TinyML). The program is meant to support this specialized segment of development that can include edge computing with smart devices, wildlife tracking, and other sensors. The program comprises a series of courses that can be completed at home.</p>



<p>The idea is to scale machine learning down to function on small-form-factor edge devices that use far less power than desktop computers and have limited storage and processing capacity, says Anant Agarwal, CEO of EdX, which was founded by MIT and Harvard. That can include devices that operate on batteries, such as remote sensors, microphones, and cameras set up in the wilderness.</p>



<p>Agarwal says machine learning is transforming the world with such developments as speech recognition, but the early stages of making the technology work posed a challenge. “It was a hog,” he says. “It was a memory hog; it was a computation hog. It was very expensive to run machine learning, but machine learning could do amazing things.”</p>



<p>The capabilities of machine learning can be limited, though, by the availability of robust networks with supporting resources. Devices might not always have such connections, Agarwal says. Smartphones and tablets can leverage machine learning because they connect with computers running in the cloud. That type of access might not be feasible in every environment, he says. “This is where TinyML comes in.”</p>



<p>Google got involved to support the certificate program, in part because it may lead to more developers using its TensorFlow machine learning platform, says Josh Gordon, developer advocate on TensorFlow. “One of the goals, in addition to an open source framework, is we care a lot about the developer community,” he says. “We’re hoping that as more people learn how to use the software they will contribute to new examples and applications in the space.” Gordon describes TinyML as greenfield territory that is waiting to be explored. “We’re interested in seeing what types of projects the students come up with,” he says.</p>



<p>TinyML is meant to run machine learning when the footprint of the hardware is literally tiny, Agarwal says, potentially opening the door for new IT ecosystems and more edge computing. “When the device is small, it has to consume very low power and doesn’t have a huge link to the cloud,” he says. For instance, a motion sensor tied to a camera in the wilderness could be triggered to record leopards passing by. “There’s no way you can have a big computer server there with huge batteries to run it,” Agarwal says. “You don’t have a huge internet connection to transmit the data to the cloud where it can be processed. All your computation has to happen right there.”</p>



<p>More support for the development of TinyML could lead to more embedded devices that operate on little power and bandwidth, he says. “This is the Internet of Things in its most compelling form.”</p>



<p>There is already momentum for such innovation, he says, as more sensors in buildings, infrastructure, vehicles, and personal devices record and compute. The data streams those devices produce must still be turned into actionable intelligence, which can be performed through TinyML, Agarwal says.</p>



<p>He sees ways for TinyML to support multiple industries, such as energy companies with sensors that monitor pipelines, aircraft makers that have sensors on actuators on planes, and the technology behind self-driving cars.</p>



<p>The certification course is taught by Google engineers from the TensorFlow group and Harvard professors, Agarwal says, and can be completed within a few months. The pervasive nature of machine learning and AI could make this program useful to many types of engineers, he says, whether they operate in IT, software, hardware, devices, or sensors. “They might find it useful in terms of learning about applications of TinyML,” Agarwal says. “Others may find it useful in terms of how to develop for these applications.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-harvard-and-edx-team-up-to-offer-tinyml-training/">Google, Harvard, and EdX Team Up to Offer TinyML Training</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-harvard-and-edx-team-up-to-offer-tinyml-training/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>TinyML: When Small IoT Devices Call for Compressed Machine Learning</title>
		<link>https://www.aiuniverse.xyz/tinyml-when-small-iot-devices-call-for-compressed-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/tinyml-when-small-iot-devices-call-for-compressed-machine-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 29 May 2020 07:00:04 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[AutoML]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[TinyML]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9113</guid>

					<description><![CDATA[<p>Source: allaboutcircuits.com Many of us are familiar with the concept of machine learning as it pertains to neural networks. But what about TinyML? Surging Interest&#160;in TinyML TinyML refers <a class="read-more-link" href="https://www.aiuniverse.xyz/tinyml-when-small-iot-devices-call-for-compressed-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/tinyml-when-small-iot-devices-call-for-compressed-machine-learning/">TinyML: When Small IoT Devices Call for Compressed Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: allaboutcircuits.com</p>



<p>Many of us are familiar with the concept of machine learning as it pertains to neural networks. But what about TinyML?</p>



<h3 class="wp-block-heading">Surging Interest&nbsp;in TinyML</h3>



<p>TinyML refers to machine-learning technologies that run on the tiniest of microprocessors using the least amount of power (usually in the mW range and lower) while aiming for the best possible results.</p>



<p>With the proliferation of IoT devices, big names like Renesas and Arm have taken a vested interest in TinyML—for instance, with Arm&#8217;s recent expansion of its AI portfolio with new machine-learning and neural-processing IP and Renesas&#8217; release of its TinyML platform, Qeexo AutoML, which requires neither code nor expertise in ML.</p>



<p>Other companies have zeroed in on partnerships that will help them extend the utility of TinyML. Eta Compute and Edge Impulse recently announced a partnership that combines the strengths of Eta Compute&#8217;s neural sensor processor, the ECM3532, with Edge Impulse&#8217;s tinyML platform. With an eye on battery capacity (a difficult constraint to work around in TinyML), the partnership hopes to accelerate the time-to-market of machine learning in billions of low-powered IoT products.</p>



<p>Another way we can assess the progress of TinyML is to reflect on the tinyML Summit, which took place earlier this year. Several of the presentations at the conference illustrate the key concepts of machine learning at the smallest level. </p>



<h3 class="wp-block-heading">Reflections on the tinyML Summit</h3>



<p>In February, AAC contributor Luke James forecasted the high aims for the 2020 tinyML Summit, which would, as in years past, spotlight developments in TinyML. The summit published presentations online and explored a number of categories pertaining to TinyML: hardware (dedicated integrated circuits), systems, algorithms and software, and applications.</p>



<p>Here are a few noteworthy presentations as they relate to design engineers.</p>



<h4 class="wp-block-heading">Model Compression</h4>



<p>Two of the presenters at the conference brought the realities of TinyML into focus by discussing a device we all have: mobile phones. In a discussion of &#8220;model compression,&#8221; MIT researcher Yujun Lin explained that typical machine-learning devices, such as cell phones, have approximately 8 GB of RAM, while microcontrollers have approximately 100 KB to 1 MB. Because microcontrollers impose tight constraints on weights and activations, they necessitate model compression.</p>



<p>The concept is to shrink large pre-trained models into smaller ones without losing accuracy, through processes like pruning and deep compression. Pruning removes synapses and neurons, resulting in roughly ten times fewer connections. Deep compression takes pruning a step further with quantization (fewer bits per weight) and a technique known as Huffman coding. </p>
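<p>The two steps named above can be sketched in a few lines of NumPy. This is a minimal illustration, not any presenter&#8217;s actual method: the weight matrix is random, the 90% pruning ratio and the single-scale linear int8 quantization scheme are arbitrary choices made for the example.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64)).astype(np.float32)  # stand-in layer weights

# Magnitude pruning: zero out the 90% of weights closest to zero,
# leaving roughly ten times fewer nonzero connections.
threshold = np.quantile(np.abs(weights), 0.9)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0).astype(np.float32)

# Linear 8-bit quantization: map the surviving float weights onto
# int8 codes plus one shared scale factor (fewer bits per weight).
scale = np.abs(pruned).max() / 127.0
q = np.round(pruned / scale).astype(np.int8)

# Dequantized weights differ from the pruned originals by at most scale/2.
dequant = q.astype(np.float32) * scale
print(np.count_nonzero(pruned) / weights.size)  # ~0.1
```

<p>In a real pipeline the model is fine-tuned after each step to recover accuracy, and the sparse int8 tensor is then entropy-coded (e.g., Huffman coding) for storage.</p>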



<p>The researchers suggested that by combining a concept known as <em>neural-hardware architecture search</em> with usability for non-experts, we can improve AI-geared hardware. The VP and lab director of Samsung&#8217;s Advanced Institute of Technology, Changkyu Choi, went into further detail on deep model compression, but his focus was on acceleration toward on-sensor AI. </p>



<h4 class="wp-block-heading">Deep Reinforcement Learning</h4>



<p>Another expert, Hoi-Jun Yoo, the ICT endowed chair professor at the engineering school at KAIST (Korea Advanced Institute of Science and Technology), spoke about the importance of deep reinforcement learning (DRL) accelerators within deep neural networks (DNNs).</p>



<p>In his discussion, he points out that &#8220;software and hardware co-optimization for DNN training is necessary for low-power and high-speed accelerators&nbsp;in the same way it brought a dramatic increase in the performance of DNN inference accelerators.&#8221;</p>



<p>Yoo also explains that DRL is an essential factor in TinyML because it enables continuous decision-making in a low-power,&nbsp;&#8220;unknown environment,&#8221; or an environment in which labeled data is difficult to capture.&nbsp;</p>
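<p>The continuous decision-making Yoo describes rests on the same update rule as tabular Q-learning; deep RL simply replaces the table with a neural network. The sketch below is a toy illustration under assumptions of our own making: the two-state environment, the learning rate, and the reward structure are invented for the example and come from none of the talks.</p>

```python
import random

# Toy 2-state, 2-action environment: taking action 1 in state 0 yields
# reward 1 and moves to state 1; anything else yields 0 and resets to state 0.
def step(state, action):
    if state == 0 and action == 1:
        return 1, 1.0
    return 0, 0.0

ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate
Q = [[0.0, 0.0], [0.0, 0.0]]        # Q[state][action], all estimates start at 0

random.seed(0)
state = 0
for _ in range(500):
    # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
    if random.random() < EPS:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: nudge toward reward + discounted best future value.
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q[0][1] > Q[0][0])  # True: the agent learned the rewarding action
```

<p>No labeled data is involved: the agent improves purely from the reward signal it observes while acting, which is exactly why RL suits the &#8220;unknown environments&#8221; Yoo mentions.</p>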



<h4 class="wp-block-heading">DNNs for Always-on AI for Battery-Powered Devices</h4>



<p>Another company, Syntiant, showcased one of its devices, the NDP100 neural decision processor (NDP), to discuss a broader concept: the value of deep learning over algorithmic genius. Dr. Stephen Bailey, CTO of Syntiant, explained that the magic of the company&#8217;s NDP, an always-on and &#8220;listening&#8221; device, is its deep neural networks (DNNs), continuing Yoo&#8217;s discussion of DNNs. </p>



<p>The Syntiant NDP feeds acoustic features to a large DNN (no need for cascading or energy gating) and trains&nbsp;the DNN with large data sets and wide-ranging augmentation. Beyond its noise immunity, the&nbsp;NDP100&nbsp;is extremely small in size (1.4 mm x 1.8 mm) and consumes less than 140 μW.&nbsp;</p>



<p>Since the summit, Syntiant has also released the NDP101, which is said to couple computation power and memory to exploit &#8220;the vast inherent parallelism of deep learning and computing at only required numerical precision.&#8221; Syntiant says that these features improve efficiency by 100 times compared to the stored program architectures you&#8217;d see in CPUs and DSPs. </p>



<h3 class="wp-block-heading">Smaller Devices Call for Compressed&nbsp;Machine Learning</h3>



<p>The hardware requirements for machine learning in larger systems are similar to those for TinyML in small IoT devices, but the stakes for accuracy, latency, and power consumption can be higher because of the device&#8217;s small size. As smaller IoT devices hit the market, engineers may increasingly dabble in TinyML, familiarizing themselves with concepts like deep neural networks, model compression, and deep reinforcement learning.</p>
<p>The post <a href="https://www.aiuniverse.xyz/tinyml-when-small-iot-devices-call-for-compressed-machine-learning/">TinyML: When Small IoT Devices Call for Compressed Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/tinyml-when-small-iot-devices-call-for-compressed-machine-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
