Google releases source code of new on-device machine learning solutions

Source: zdnet.com
MobileNetV3 and MobileNetEdgeTPU have been released to the open source community.

Google has opened up the source code of two on-device machine learning (ML) models, MobileNetV3 and MobileNetEdgeTPU, to the open source community.

In a blog post, software and silicon engineers Andrew Howard and Suyog Gupta from Google Research said on Wednesday that both the source code and checkpoints for MobileNetV3, as well as the Pixel 4 Edge TPU-optimized counterpart MobileNetEdgeTPU, are now available. 

On-device ML applications deliver responsive intelligence and are designed with power-limited devices in mind, including smartphones, tablets, and Internet of Things (IoT) electronics.

See also: Google updates CallJoy phone agent with customizable AI features

Google says the demand for mobile intelligence has prompted research into algorithmically efficient neural network models and hardware "capable of performing billions of math operations per second while consuming only a few milliwatts of power," such as the Google Pixel 4's Pixel Neural Core.

The latest MobileNet offerings include improvements to architectural design, speed, and accuracy, Google says. On mobile CPUs, users can expect MobileNetV3 to run at double the speed of MobileNetV2, a gain achieved through AutoML and NetAdapt, the latter of which prunes away under-utilized activation channels.

CNET: Huawei ban: Full timeline as Trump’s tech chief slams countries working with Chinese company

A new activation function called hard-swish (h-swish) has also been implemented; it is cheaper to compute on mobile hardware and reduces the risk of performance bottlenecks. Overall latency has been decreased by 15 percent and object detection latency has been reduced by 25 percent in comparison to MobileNetV2.
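For reference, the MobileNetV3 paper defines h-swish as x · ReLU6(x + 3) / 6, a piecewise-linear stand-in for the sigmoid used in regular swish. The sketch below shows that definition in TensorFlow; the function name and dummy tensor shape are illustrative and not taken from Google's release.

```python
import tensorflow as tf

def hard_swish(x):
    """Hard-swish: x * ReLU6(x + 3) / 6.

    Swaps the sigmoid inside swish for a piecewise-linear approximation,
    so only cheap ops (add, multiply, ReLU6) are needed on mobile hardware.
    """
    return x * tf.nn.relu6(x + 3.0) / 6.0

# Apply the activation to a dummy feature map.
features = tf.random.normal([1, 56, 56, 24])
activated = hard_swish(features)
```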

The MobileNetEdgeTPU model — similar to the Edge TPU in Coral products but tweaked for the camera features in Pixel 4 — now also has increased accuracy in comparison to earlier versions, while reducing both runtime and power requirements. 

Google did not set out to reduce the power demands of this model, but when compared to the basic MobileNetV3, MobileNetEdgeTPU consumes 50 percent less juice.

TechRepublic: IBM social engineer easily hacked two journalists’ information

MobileNetV3 and MobileNetEdgeTPU code can now be accessed from the MobileNet GitHub repository. 

Developers can also pick up the open source implementation of MobileNetV3 and MobileNetEdgeTPU object detection from the TensorFlow Object Detection API page, while DeepLab hosts the open source implementation of MobileNetV3 semantic segmentation.
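For a quick experiment with the classification model, a minimal sketch along these lines should work, assuming a TensorFlow release recent enough to bundle MobileNetV3 under tf.keras.applications; the checkpoints in the MobileNet GitHub repository can be used instead, and the random input below is only a placeholder.

```python
import numpy as np
import tensorflow as tf

# Load an ImageNet-pretrained MobileNetV3 classifier (assumes a TensorFlow
# version that ships MobileNetV3 in tf.keras.applications).
model = tf.keras.applications.MobileNetV3Large(weights="imagenet")

# Classify a dummy 224x224 RGB image; replace with a real photo in practice.
image = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype(np.float32)
preds = model.predict(tf.keras.applications.mobilenet_v3.preprocess_input(image))
print(tf.keras.applications.mobilenet_v3.decode_predictions(preds, top=3))
```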
