Google’s TensorFlow Lite Model Maker adapts state-of-the-art models for on-device AI

15 Apr - by aiuniverse - In Google AI


Google today announced TensorFlow Lite Model Maker, a tool that adapts state-of-the-art machine learning models to custom data sets using a technique known as transfer learning. It wraps machine learning concepts with an API that enables developers to train models in Google’s TensorFlow AI framework with only a few lines of code, and to deploy those models for on-device AI applications.

Tools like Model Maker could help companies incorporate AI into their workflows faster than before. According to a study conducted by Algorithmia, 50% of organizations spend between 8 and 90 days deploying a single machine learning model, with most blaming the duration on a failure to scale.

Model Maker, which currently supports only image and text classification use cases, works with many of the models in TensorFlow Hub, Google’s library of reusable machine learning modules. (“Modules” in this context are self-contained algorithms, along with their assets, that can be reused across different AI tasks.) Essentially, Model Maker adapts models trained on one task to a related task, with accuracy that varies according to parameters specified at the outset.
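Conceptually, the transfer learning the article describes amounts to keeping a pretrained feature extractor frozen and training only a small task-specific head on the new data set. Below is a minimal, self-contained sketch of that idea; the toy "backbone," head, and data are invented for illustration and are not Model Maker internals:

```python
import math

def frozen_features(x):
    # Stand-in for a pretrained backbone: a fixed mapping from raw input
    # to a feature vector. It is never updated during head training.
    return [x, x * x, math.sin(x)]

def train_head(samples, labels, lr=0.1, epochs=500):
    """Train only the head (logistic regression) on frozen features."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = frozen_features(x)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
            g = p - y                            # gradient of log loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = frozen_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0

# Toy downstream task: classify whether x > 1.
xs = [0.1, 0.5, 0.9, 1.1, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_head(xs, ys)
accuracy = sum(predict(w, b, x) == y for x, y in zip(xs, ys)) / len(xs)
```

Only the head's handful of weights are learned, which is why adapting a large pretrained model to a small custom data set can be fast and cheap.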

With Model Maker, model accuracy can be improved by swapping in a different model architecture, a change that requires editing just one line of code. Once the input data for an on-device application is loaded, Model Maker trains the model, evaluates it, and exports it as a TensorFlow Lite model. (TensorFlow Lite is a version of TensorFlow that’s optimized for mobile, embedded, and internet of things devices.)

Models created by TensorFlow Lite Model Maker have metadata attached to them, including machine-readable parameters like mean, standard deviation, category label files, and human-readable parameters such as model descriptions and licenses. Google notes that fields like licenses can be critical in deciding whether a model can be used, while other systems can use the machine-readable parameters to generate wrapper code.
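To make the metadata idea concrete, here is a hedged sketch of how wrapper code might consume such a record. The dictionary layout and helper names below are invented for illustration; only the field categories (mean, standard deviation, labels, description, license) come from the article:

```python
# Hypothetical metadata record mirroring the fields the article lists.
metadata = {
    "description": "Flower classifier (illustrative)",   # human-readable
    "license": "Apache-2.0",                              # human-readable
    "mean": 127.5,                                        # machine-readable
    "std": 127.5,
    "labels": ["daisy", "rose", "tulip"],
}

def normalize(pixels, meta):
    """Apply the normalization the metadata prescribes: (x - mean) / std."""
    return [(p - meta["mean"]) / meta["std"] for p in pixels]

def label_of(scores, meta):
    """Map the highest-scoring output index to a human-readable label."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return meta["labels"][best]

print(normalize([0, 127.5, 255], metadata))   # [-1.0, 0.0, 1.0]
print(label_of([0.1, 0.7, 0.2], metadata))    # rose
```

Because the normalization constants and label file travel with the model, generated wrapper code can prepare inputs and interpret outputs without any hand-written glue.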

In the coming months, Google intends to enhance Model Maker to support more tasks, including object detection and several natural language processing tasks. Specifically, it says it’ll add support for BERT, a pretraining technique for natural language processing, for applications such as question answering.

The launch of Model Maker follows on the heels of the Quantization Aware Training (QAT) API, which trains smaller, faster TensorFlow models that gain the performance benefits of quantization (the process of mapping input values from a large set to output values in a smaller set) while retaining close to their original accuracy. Earlier in the year, Google unveiled TensorFlow Quantum, a machine learning framework for training quantum models, at the TensorFlow Dev Summit.
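A quantization mapping in the sense defined above can be sketched with an affine scale/zero-point scheme, the style TensorFlow Lite uses for int8 tensors. This is an illustrative sketch of the mapping itself, not the QAT API:

```python
def quant_params(rmin, rmax, qmin=-128, qmax=127):
    """Compute a scale/zero-point mapping [rmin, rmax] onto int8."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)   # range must include 0.0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = round(qmin - rmin / scale)
    return scale, zero_point

def quantize(x, scale, zp, qmin=-128, qmax=127):
    q = round(x / scale) + zp
    return max(qmin, min(qmax, q))   # clamp to the int8 range

def dequantize(q, scale, zp):
    return scale * (q - zp)

scale, zp = quant_params(-1.0, 1.0)
for x in [-1.0, -0.5, 0.0, 0.7, 1.0]:
    q = quantize(x, scale, zp)
    # Round-trip error is bounded by half a quantization step.
    assert abs(dequantize(q, scale, zp) - x) <= scale / 2 + 1e-9
```

Shrinking each value from a 32-bit float to one of 256 int8 levels is what delivers the size and speed benefits, and the bounded round-trip error is why accuracy can stay close to the original.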
