
Google’s new SEED RL framework reduces AI model training costs by 80%

Source: siliconangle.com

Researchers at Google have open-sourced a new framework that can scale up artificial intelligence model training across thousands of machines.

It’s a promising development because it should enable AI algorithm training to be performed at millions of frames per second while reducing the costs of doing so by as much as 80%, Google noted in a research paper.

That kind of reduction could help to level the playing field a bit for startups that previously haven’t been able to compete with major players such as Google in AI. Indeed, the cost of training sophisticated machine learning models in the cloud is surprisingly expensive.

One recent report by Synced found that the University of Washington racked up $25,000 in costs to train its Grover model, which is used to detect and generate fake news. Meanwhile, OpenAI paid $256 per hour to train its GPT-2 language model, while Google itself spent around $6,912 to train its BERT model for natural language processing tasks.

SEED RL is built atop the TensorFlow 2.0 framework and works by leveraging a combination of graphics processing units and tensor processing units to centralize model inference. Inference is performed centrally by a learner component that also trains the model.

The target model’s variables and state information are kept local, and observations on them are sent to the learner at every step of the process. SEED RL also uses a network library based on gRPC, the open-source remote procedure call framework, to minimize latency.

Google’s researchers said the learner component of SEED RL can be scaled across thousands of cores, while the actors, which iterate between taking steps in the environment and running inference on the model to predict the next action, can scale across thousands of machines.
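The architecture described above can be sketched in miniature: actors own the environment loop but hold no model weights, shipping each observation to a central learner that chooses the action and returns it. The sketch below is illustrative only, assuming a toy policy and Python threads with queues in place of SEED RL's real gRPC transport and TPU-batched inference; none of these names come from Google's API.

```python
import queue
import threading

class Learner:
    """Central component: serves inference for all actors and, in the
    real system, also trains the model on the resulting trajectories."""
    def __init__(self):
        self.requests = queue.Queue()

    def policy(self, observation):
        # Stand-in for a batched neural-network forward pass on TPU/GPU.
        return observation % 4  # pretend there are 4 discrete actions

    def serve(self, num_requests):
        # Answer a fixed number of inference requests, then stop.
        for _ in range(num_requests):
            observation, reply = self.requests.get()
            reply.put(self.policy(observation))

def actor(learner, observations, results):
    # Actor loop: step the environment, send the observation to the
    # learner (gRPC in the real system), wait for the chosen action.
    reply = queue.Queue()
    for obs in observations:
        learner.requests.put((obs, reply))
        results.append((obs, reply.get()))

learner = Learner()
results = []
server = threading.Thread(target=learner.serve, args=(6,))
server.start()

# Three actors, each stepping through two toy "observations".
workers = [
    threading.Thread(target=actor, args=(learner, [i, i + 10], results))
    for i in range(3)
]
for w in workers:
    w.start()
for w in workers:
    w.join()
server.join()

print(sorted(results))
```

Because inference lives entirely on the learner, adding actors requires no model copies, which is the property that lets the real system scale actors across thousands of machines.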

Google evaluated SEED RL’s efficiency by benchmarking it on the popular Arcade Learning Environment, the Google Research Football environment and several DeepMind Lab environments. The results show they managed to solve a Google Research Football task while training the model at 2.4 million frames per second using 64 Cloud Tensor Processing Unit chips. That’s around 80 times faster than previous frameworks, Google said.

“This results in a significant speed-up in wall-clock time and, because accelerators are orders of magnitude cheaper per operation than CPUs, the cost of experiments is reduced drastically,” Lasse Espeholt, a research engineer at Google Research in Amsterdam, wrote in the company’s AI blog Monday. “We believe SEED RL, and the results presented, demonstrate that reinforcement learning has once again caught up with the rest of the deep learning field in terms of taking advantage of accelerators.”

Constellation Research Inc. analyst Holger Mueller told SiliconANGLE that SEED RL looks to be another example of reinforcement learning, which he said is emerging as one of the most promising AI techniques for advancing next-generation applications.

“When you tweak software to work well with hardware, you usually see major advances and that is what Google is showing here – the combination of its SEED RL library with its TPU architecture,” Mueller said. “Not surprisingly it provides substantial performance gains over conventional solutions. This makes reinforcement learning available to the masses, although users would be locked into the Google Cloud Platform. But AI is served best in the cloud, and GCP is a very good choice for AI apps.”

Google said the code for SEED RL has been open-sourced and made available on GitHub, together with examples that show how to run it on Google Cloud with graphics processing units.

Related Posts

DeepMind open-sources Lab2D to support creation of 2D environments for AI and machine learning

Source: computing.co.uk Alphabet subsidiary DeepMind announced on Monday that it has open-sourced Lab2D, a scalable environment simulator for artificial intelligence (AI) research that facilitates researcher-led experimentation with environment Read More


A VR Film/Game with AI Characters Can Be Different Every Time You Watch or Play

Source: technologyreview.com The square-faced, three-legged alien shoves and jostles to get at the enormous plant taking over its tiny planet. But each bite just makes the forbidden Read More


Researchers detail LaND, AI that learns from autonomous vehicle disengagements

Source: venturebeat.com UC Berkeley AI researchers say they’ve created AI for autonomous vehicles driving in unseen, real-world landscapes that outperforms leading methods for delivery robots driving on Read More


Google Teases Large Scale Reinforcement Learning Infrastructure

Source: analyticsindiamag.com The current state-of-the-art reinforcement learning techniques require many iterations over many samples from the environment to learn a target task. For instance, the game Dota Read More


Plan2Explore: Active Model-Building for Self-Supervised Visual Reinforcement Learning

Source: bair.berkeley.edu To operate successfully in unstructured open-world environments, autonomous intelligent agents need to solve many different tasks and learn new tasks quickly. Reinforcement learning has enabled Read More


Is AI an Existential Threat?

Source: unite.ai When discussing Artificial Intelligence (AI), a common debate is whether AI is an existential threat. The answer requires understanding the technology behind Machine Learning (ML), and recognizing Read More
