Machine Learning at the Edge: TinyML Is Getting Big

Source – https://jpt.spe.org/

Being able to deploy machine learning applications at the edge is the key to unlocking a multibillion-dollar market. TinyML is the art and science of producing machine-learning models frugal enough to work at the edge, and it’s seeing rapid growth.

Is it $61 billion at a 38.4% compound annual growth rate (CAGR) by 2028, or $43 billion at a 37.4% CAGR by 2027? That depends on which edge-computing market report you go by, but in the end the difference is not that significant: roughly one more year of ~37% growth on $43 billion lands close to $60 billion anyway.

What matters is that edge computing is booming. There is growing interest from vendors, and ample coverage, for good reason. Although the definition of what constitutes edge computing is a bit fuzzy, the idea is simple: take compute out of the data center and bring it as close to where the action is as possible.

Whether it’s stand-alone Internet of Things (IoT) sensors, devices of all kinds, drones, or autonomous vehicles, they have one thing in common: increasingly, data generated at the edge are used to feed applications powered by machine-learning models. There’s just one problem: machine-learning models were never designed to be deployed at the edge. Not until now, at least. Enter TinyML.

Tiny machine learning (TinyML) is broadly defined as a fast-growing field of machine-learning technologies and applications, including hardware, algorithms, and software, capable of performing on-device sensor data analytics at extremely low power (typically in the mW range and below), thereby enabling a variety of always-on use cases on battery-operated devices.

This week, the inaugural TinyML EMEA Technical Forum is taking place, and it was a good opportunity to speak with some of the key people in this domain. ZDNet caught up with Evgeni Gousev from Qualcomm, Blair Newman from Neuton, and Pete Warden from Google.

Hey Google
Pete Warden wrote the world’s only mustache-detection image-processing algorithm. He was also the founder and chief technology officer of the startup Jetpac, where he raised a Series A from Khosla Ventures, built a technical team, and created a unique data product that analyzed the pixel data of more than 140 million Instagram photos and turned them into in-depth guides for more than 5,000 cities around the world.

Jetpac was acquired by Google in 2014, and Warden has been a staff research engineer at Google ever since. Back then, Warden was feeling pretty good about himself for being able to fit machine-learning models into 2 megabytes.

That was until he found that some of his new Google colleagues had a 13-kilobyte model they were using to recognize wake words, running on the always-on digital signal processors (DSPs) of Android devices. That way, the main CPU wasn’t burning battery listening out for “that” wake word: “Hey Google.”
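
To make the idea concrete, here is a minimal sketch of what running such a tiny wake-word model on-device can look like, using TensorFlow Lite for Microcontrollers. The article does not say which stack Google’s 13-kilobyte model used, so treat this as an illustration rather than a description of that system; the model array (g_wake_word_model), the arena size, and the operator list are placeholder assumptions, and exact headers and constructor signatures vary between library versions.

```cpp
// Illustrative sketch only: a keyword-spotting inference call with
// TensorFlow Lite for Microcontrollers. Names marked as placeholders
// are assumptions, not details from the article.
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Placeholder: flatbuffer produced by converting and quantizing a
// small keyword-spotting model offline.
extern const unsigned char g_wake_word_model[];

// All intermediate tensors live in this statically allocated arena,
// so memory use is fixed and known at build time.
constexpr int kArenaSize = 10 * 1024;
static uint8_t tensor_arena[kArenaSize];

float DetectWakeWord(const int8_t* spectrogram, int length) {
  const tflite::Model* model = tflite::GetModel(g_wake_word_model);

  // Register only the operators this (assumed) model uses, to keep code size small.
  tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();

  tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kArenaSize);
  interpreter.AllocateTensors();

  // Copy the quantized audio features into the input tensor.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < length; ++i) {
    input->data.int8[i] = spectrogram[i];
  }

  interpreter.Invoke();

  // Crudely map the int8 score to [0, 1]; a real application would use
  // the output tensor's quantization parameters.
  TfLiteTensor* output = interpreter.output(0);
  return (output->data.int8[0] + 128) / 255.0f;
}
```

The important part is the arithmetic: a quantized model of a few kilobytes plus a fixed tensor arena of a few more is what lets the whole thing run on an always-on DSP instead of waking the main CPU.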

“That really blew my mind, the fact that you could do something actually really useful in that smaller model. And it really got me thinking about all of the other applications that might be possible if we can run especially all these new machine-learning, deep-learning approaches,” Warden said.

Although Warden is often credited by his peers with having kickstarted the TinyML subdomain of machine learning, he is quite modest about it. Much of what he did, he acknowledges, was based on things others were already working on: “A lot of my contribution has been helping publicize and document a bunch of these engineering practices that have emerged,” he said.
