
Google Cloud AI Platform Gets Enhanced Training And Inference Capabilities

Source: forbes.com

Google announced updates to its Cloud AI Platform that enhance training and prediction capabilities of machine learning and deep learning models.

Google Cloud AI Platform is an end-to-end machine learning platform as a service (ML PaaS) targeting data scientists, ML developers, and AI engineers. It offers services that span the entire lifecycle of machine learning models: from data preparation to training to model serving, the platform provides the essential building blocks to develop and deploy sophisticated machine learning models.

The most recent updates make training and deploying ML models on Google Cloud Platform flexible and powerful. 

Model Development

Support for running custom containers to train models on Cloud AI Platform is now generally available. This feature allows users to bring their own Docker container images, with any pre-installed ML framework or algorithm, to run on the AI Platform.

Custom container support removes the constraints involved in training models at scale in the cloud. Customers can now package a custom container image with specific versions of the language, framework, and tools used in their training programs. This eliminates the need to conform to the specific tool versions the platform expects for training. With custom containers, data scientists and ML developers can bring their own frameworks and libraries to the AI Platform even if they are not natively supported by the platform. Developers can build and test the container images locally before deploying them to the cloud, and DevOps teams can integrate the AI Platform with existing CI/CD pipelines to automate the deployment process.
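As a sketch of what this looks like in practice (the base image, framework versions, and file names below are illustrative, not taken from Google's documentation), a custom training container is simply a Docker image whose entrypoint runs the training code:

```dockerfile
# Hypothetical custom training container for AI Platform.
# Base image, pinned versions, and script paths are placeholders.
FROM python:3.7-slim

# Pin the exact framework versions the training code expects.
RUN pip install --no-cache-dir torch==1.4.0 pandas==1.0.1

# Copy the training package into the image.
COPY trainer/ /trainer/

# AI Platform runs this entrypoint when the training job starts.
ENTRYPOINT ["python", "/trainer/task.py"]
```

After building and pushing the image to a container registry, the job would be submitted with a command along the lines of `gcloud ai-platform jobs submit training my_job --region=us-central1 --master-image-uri=gcr.io/my-project/trainer:latest` (job, project, and image names hypothetical).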

To simplify the process of choosing the right hardware configuration for training ML models, Google has introduced scale tiers – a set of predefined cluster specifications based on classes of Google Compute Engine (GCE) VMs. Each scale tier is defined in terms of its suitability for certain types of jobs.

Customers can also choose a custom tier, in which they specify the machine configuration for the master, workers, and parameter servers. These servers within a cluster facilitate distributed training to speed up training on large datasets.
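A custom tier is typically described in a YAML configuration file passed to the training job. A minimal sketch (machine types and counts are illustrative choices, not recommendations) might look like:

```yaml
# config.yaml – hypothetical custom-tier cluster for distributed training.
trainingInput:
  scaleTier: CUSTOM
  masterType: n1-highmem-8            # coordinates the training job
  workerType: n1-highmem-8            # workers run the training steps
  parameterServerType: n1-standard-4  # parameter servers hold shared model state
  workerCount: 4
  parameterServerCount: 2
```

Such a file would then be supplied when submitting the job, e.g. via `gcloud ai-platform jobs submit training ... --config=config.yaml`.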

Both features – custom containers and custom machine types for training – are now generally available.

Model Deployment and Inference

Inference is the process of serving a fully trained model so that it responds to new data with predictions.

Customers can host trained machine learning models on Google Cloud AI Platform and use the AI Platform Prediction service to infer target values for new data. The service manages the computing resources needed to run ML models in the cloud. Developers consuming ML models request predictions from the deployed models and receive the predicted target values in response.
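Conceptually, an online prediction request is a JSON document with an `instances` list, one entry per input example. The sketch below just assembles that body in Python; the feature names and values are made up, and an actual call would go through an authenticated client rather than a plain `print`:

```python
import json

# Hypothetical input examples; each dict is one instance the model scores.
instances = [
    {"sepal_length": 5.1, "sepal_width": 3.5},
    {"sepal_length": 6.2, "sepal_width": 2.9},
]

# The prediction service expects a JSON body of this shape:
# {"instances": [...]}; the response carries a "predictions" list
# aligned one-to-one with the submitted instances.
request_body = {"instances": instances}

payload = json.dumps(request_body)
print(payload)
```

The same shape applies regardless of model type: the service passes each element of `instances` to the model and returns one prediction per element.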

The Cloud AI Platform Prediction service now lets customers choose from a set of Google Compute Engine machine types to run an ML model. Customers can also attach accelerators, such as NVIDIA T4 GPUs or TPUs, to speed up inference. As a managed platform, the service handles provisioning, scaling, and serving without manual intervention. Previously, the Online Prediction service supported only one- or four-vCPU machine types.
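The machine type and accelerator are chosen when a model version is deployed. A hedged sketch of the CLI invocation (the model name, bucket path, and chosen machine type are placeholders, and the exact flag set may differ by release channel):

```shell
# Hypothetical deployment of a model version on a dedicated machine type
# with one NVIDIA T4 GPU attached; all names and paths are placeholders.
gcloud beta ai-platform versions create v1 \
  --model=my_model \
  --origin=gs://my-bucket/model/ \
  --runtime-version=2.1 \
  --framework=tensorflow \
  --machine-type=n1-standard-4 \
  --accelerator=type=nvidia-tesla-t4,count=1
```

Once the version is created, prediction requests against it are served from the specified machine type, with the service scaling replicas as traffic changes.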

GCP customers using AI Platform can now log prediction requests and responses directly to BigQuery to analyze and detect skew and outliers, or to decide if retraining is required to increase the accuracy of the model.
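Once request-response logging is enabled, the logged rows can be analyzed with ordinary SQL in BigQuery. A sketch of such an analysis (the project, dataset, and table names are placeholders, and the `time` column name is an assumption about the log schema):

```sql
-- Hypothetical drift check: traffic volume per day, as a first step
-- before comparing logged inputs against the training distribution.
SELECT
  DATE(time) AS day,
  COUNT(*) AS request_count
FROM `my-project.prediction_logs.my_model_logs`
GROUP BY day
ORDER BY day DESC;
```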

Cloud AI Platform Prediction is powered by Google Kubernetes Engine which delivers the required scale. 

After a major rebranding of its ML PaaS to AI Platform at the Cloud NEXT event, Google has been steadily enhancing the service. The general availability of features such as custom containers and the GKE-based prediction service makes the platform flexible and scalable for training and deploying machine learning models in the cloud.
