Deep Learning Helps Robots Control Their Limbs
In a study published in IEEE Robotics and Automation Letters, researchers have shown that walking robots spontaneously develop coordinated limb control when trained with deep learning.
Human motor control executes complex movements naturally, efficiently, and with little conscious thought. This is thanks to motor synergy in the central nervous system (CNS), which lets the CNS use a small set of variables to control a large group of muscles, thereby simplifying the control of coordinated, complex movements.
Now, researchers at Tohoku University, Japan, have observed a similar concept in robotic agents using deep reinforcement learning (DRL) algorithms.
DRL allows robotic agents to learn the best possible action in their virtual environment. Complex robotic tasks can be solved with minimal manual intervention while still achieving peak performance. Classical algorithms, by contrast, require manual intervention to find a specific solution for each new task.
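The learning setup described here can be pictured as a simple agent-environment loop, which DRL algorithms such as TD3 and SAC build on by training a neural-network policy. The sketch below is purely illustrative: the toy environment, the random stand-in policy, and all names are assumptions, and the learning update itself is omitted.

```python
import random

# Illustrative agent-environment loop underlying deep RL.
# The environment and policy are toy stand-ins, not the study's code.

class ToyEnv:
    """One-dimensional 'run forward' task: reward equals forward progress."""
    def __init__(self):
        self.position = 0.0

    def step(self, action):
        self.position += action   # action = forward velocity this step
        reward = action           # reward forward progress
        return self.position, reward

def policy(observation):
    """Stand-in for a learned neural-network policy (here: random)."""
    return random.uniform(0.0, 1.0)

env = ToyEnv()
total_reward = 0.0
obs = env.position
for _ in range(1000):             # the study's agents ran three million steps
    action = policy(obs)
    obs, reward = env.step(action)
    total_reward += reward
print(f"total reward: {total_reward:.1f}")
```

In a real DRL algorithm, the policy's parameters would be updated after each batch of steps so that actions with higher long-term reward become more likely.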
However, applying motor synergy from the human world to the robotic world was no small task. In the current study, the researchers used two DRL algorithms on walking robotic agents known as HalfCheetah and FullCheetah.
The two algorithms were TD3, a classical DRL algorithm, and SAC, a high-performing one. The two agents were tasked with running forward as far as possible within a given time, completing three million training steps in total. Synergy information was not used to train the algorithms, yet motor synergy nonetheless emerged in the agents' movements.
“We first confirmed in a quantitative way that motor synergy can emerge even in deep learning as humans do,” said study co-author Professor Mitsuhiro Hayashibe. “After employing deep learning, the robotic agents improved their motor performances while limiting energy consumption by employing motor synergy.”
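One common way to quantify motor synergy, in robots as in humans, is to check how much of the variance across many joint signals is captured by just a few principal components. The sketch below illustrates the idea on simulated data; the joint counts, noise level, and all variable names are assumptions for illustration, not the study's actual analysis.

```python
import numpy as np

# Illustrative synergy check: if 6 joint-torque signals are driven by only
# 2 underlying "synergy" signals, PCA should find that ~2 components
# explain nearly all the variance.
rng = np.random.default_rng(0)

n_steps, n_joints, n_synergies = 1000, 6, 2
synergies = rng.standard_normal((n_steps, n_synergies))   # latent signals
mixing = rng.standard_normal((n_synergies, n_joints))     # synergy -> joints
torques = synergies @ mixing + 0.05 * rng.standard_normal((n_steps, n_joints))

# PCA via SVD: fraction of variance explained by each component.
centered = torques - torques.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)
top2 = var_ratio[:2].sum()
print(f"variance explained by 2 components: {top2:.3f}")
```

A value near 1.0 means a small set of variables effectively controls all the joints, which is the hallmark of synergy described above.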
Going forward, the researchers aim to explore more tasks with different body models to further confirm their findings.