AI Enhanced Robotics and The Future of Manufacturing

Source: metrology.news

In today’s manufacturing, robots deployed across various industries mostly perform repetitive tasks. Their overall task performance hinges on how accurately their controllers track predefined motions. The ability of robots to handle unstructured, complex environments, such as the flexible grasping of previously unknown objects or the assembly of new components, is very limited. Endowing machines with greater levels of intelligence, so that they can acquire skills autonomously and generalize to unseen situations, would be a game-changer for many industry sectors.

The main challenge in robot evolution is twofold: designing control algorithms that are adaptable yet robust enough to address all possible system behaviors, and achieving ‘behavior generalization,’ i.e., the ability to react to unforeseen situations. Two forms of artificial intelligence, Deep Learning and Reinforcement Learning (which can itself use Deep Learning), hold notable promise for solving such challenges because they enable robots in manufacturing systems to deal with uncertainties, to learn behaviors through interaction with their surrounding environments, and, ideally, to generalize to new situations. Let’s take a look at how Deep Learning and Reinforcement Learning play a key role in two such use cases: flexible grasping and the assembly of new components.

Flexible grasping made possible through Deep Learning

Humans are equipped with a universal picking skill. Even if they encounter an object that they have never seen or grasped before, they immediately know where to grasp it in order to lift it successfully. Robots in today’s manufacturing must be explicitly programmed to approach a predefined grasp pose and execute the grasp. This requires the objects to be grasped to always be in the same position and orientation (think of an assembly line). The challenge for programmers is finding a way to get robots to grasp an unknown object at any orientation. This is where Deep Learning comes in.

Deep Learning operates through artificial neural networks: large, non-linear function approximators that are loosely inspired by the human brain. State-of-the-art neural networks have millions of parameters. Using a dataset of input-output relationships, these parameters can be tuned so that the network predicts the correct output for a given input.
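As a minimal illustration of this idea (a toy sketch, not any production Siemens system; the layer sizes and data below are invented), a small feed-forward network can be fit to a dataset of input-output pairs using PyTorch:

```python
import torch
import torch.nn as nn

# A small feed-forward network: a non-linear function approximator.
# (State-of-the-art networks have millions of parameters; this one has far fewer.)
model = nn.Sequential(
    nn.Linear(16, 64),   # 16 invented input features
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 4),    # 4 invented output values
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy dataset of input-output pairs (random stand-ins for real data).
inputs = torch.randn(1024, 16)
targets = torch.randn(1024, 4)

for epoch in range(100):
    optimizer.zero_grad()
    predictions = model(inputs)           # forward pass
    loss = loss_fn(predictions, targets)  # how far off the outputs are
    loss.backward()                       # gradients w.r.t. every parameter
    optimizer.step()                      # adjust parameters to reduce the loss
```

The training loop adjusts the network’s parameters so that its predictions move ever closer to the target outputs in the dataset.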

This is how Deep Learning can be applied to grasping. Instead of programming the robot on how to grasp, programmers provide the robot, via a neural network, with examples of grasping. The training data consists of images or models of various objects as well as how to grasp them. Given a database of millions of such examples, the neural network learns how to compute grasps for any given image of an object. These examples can be conveniently created in simulation, so the robot masters the skill of grasping without executing a single grasp in the real world.
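To make this concrete, here is a hedged sketch of what such training could look like: a small convolutional network that maps a depth image of an object to a grasp pose. The architecture, image size, and pose encoding are illustrative assumptions, not the network Siemens deployed, and the random tensors stand in for a real simulated dataset:

```python
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    """Hypothetical grasp predictor: depth image in, grasp pose out.
    Output: (x, y, gripper angle) in image coordinates -- a simplification."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * 13 * 13, 3)  # sized for 64x64 inputs

    def forward(self, depth_image):
        return self.head(self.features(depth_image))

model = GraspNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for simulated training data: depth images and known-good grasps.
images = torch.randn(256, 1, 64, 64)
grasps = torch.randn(256, 3)

for step in range(1000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), grasps)
    loss.backward()
    optimizer.step()
```

In practice, the (image, grasp) pairs would come from millions of simulated grasp attempts rather than random tensors.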

While there are many examples of Deep Learning-based approaches to grasping, the 2018 SPS show was the first venue where these algorithms were demonstrated by Siemens on real industrial hardware: a grasp-capable neural network deployed on an industrial platform called the SIMATIC TM NPU, the first industrial controller for AI applications. At the Hannover Fair 2019, an upgraded version of the algorithm was combined with an object-recognition neural network for a bin-picking demonstration. The result was the first Deep Learning-based bin picking fully implemented at the controller level with a PLC and an NPU.

Solving industrial assembly tasks with reinforcement learning

Another intelligent-robot approach to industrial tasks is based on reinforcement learning (RL). RL is a framework of principles that allows robots to “learn” behaviors through interactions with the environment; i.e., the training data comes from the robot’s actual surroundings. Unlike traditional feedback robot-control methods, the core idea of RL is to give robot controllers high-level specifications of what to do instead of how to do it. As the robot interacts with the environment and collects observations and rewards, the RL algorithm reinforces those behaviors that yield high rewards. Recent progress in RL research has introduced deep neural networks for modelling the robot’s behavioral policy and its dynamics.
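Schematically, the RL interaction cycle can be sketched as below. The toy environment and policy here are invented stand-ins meant only to show the observation-action-reward loop, not any specific RL algorithm:

```python
import random

class ToyEnv:
    """Invented stand-in environment: the state is a scalar position and
    the implicit goal is to drive it to zero."""
    def reset(self):
        self.state = random.uniform(-1.0, 1.0)
        return self.state

    def step(self, action):
        self.state += action
        reward = -abs(self.state)            # the reward encodes WHAT to achieve
        done = abs(self.state) < 0.05        # close enough to the goal
        return self.state, reward, done

class ToyPolicy:
    """Invented stand-in policy: a single gain adjusted from reward feedback
    (a crude proxy for updating a neural network's parameters)."""
    def __init__(self):
        self.gain = 0.1

    def select_action(self, observation):
        return -self.gain * observation      # HOW to act is learned, not coded

    def update(self, reward, previous_reward):
        if reward > previous_reward:         # reinforce what raised the reward
            self.gain = min(self.gain * 1.05, 1.0)

env, policy = ToyEnv(), ToyPolicy()
observation, previous_reward = env.reset(), float("-inf")
for step in range(500):
    action = policy.select_action(observation)
    observation, reward, done = env.step(action)
    policy.update(reward, previous_reward)
    previous_reward = reward
    if done:                                 # episode solved; start a new one
        observation, previous_reward = env.reset(), float("-inf")
```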

While the idea of RL is very promising for creating autonomous systems that learn, its adoption has been limited so far because robots need large amounts of data to learn successful control policies. Executing all of this training on real robot hardware is problematic: it takes a long time and causes wear and tear on the equipment. Recent RL research therefore aims to reduce the amount of training required on real robots.

In fact, a method called Residual RL has been applied to various real-world assembly tasks in which a robot learned successful assembly procedures. That has been possible because Residual RL requires only a fraction of the learning samples in the real world compared to pure RL. The approach is a form of combined control in which part of the problem facing the robot is solved with conventional feedback control (e.g., position control) and the rest is handled by RL. Siemens Corporate Technology researchers, in collaboration with UC Berkeley, developed this data-driven approach, in which the outputs of the conventional and RL controllers are superimposed to form the complete command for the robot’s actions.

This means that if a robot-control problem can be partially handled with conventional feedback control, e.g. position control, it can be broken down into two parts: the first is solved with conventional hand-engineered control techniques, and the second is solved with Residual RL. The combined approach also prevents unsafe “exploratory behavior,” meaning the robot does not damage itself or the environment during learning, which is an important prerequisite for manufacturing applications.
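A minimal sketch of the superposition idea follows. The controller gains, the bounded residual, and all names are illustrative assumptions, not the Siemens/UC Berkeley implementation; in a real system the residual would come from a trained policy network:

```python
import numpy as np

def position_controller(current_pos, target_pos, gain=2.0):
    """Conventional feedback part: a simple proportional position controller."""
    return gain * (target_pos - current_pos)

def residual_policy(observation):
    """Learned part: in a real system this would be a trained neural network.
    Here a bounded random vector stands in; bounding the residual is one way
    to limit unsafe exploratory behavior during learning."""
    raw = np.tanh(np.random.randn(3))        # stand-in for a network output
    return 0.05 * raw                        # small, bounded correction

def combined_command(current_pos, target_pos, observation):
    # Residual RL superposition: the conventional command and the learned
    # residual together form the complete command sent to the robot.
    return position_controller(current_pos, target_pos) + residual_policy(observation)

# Illustrative end-effector position and insertion target (units: meters).
current = np.array([0.10, 0.00, 0.30])
target = np.array([0.12, 0.05, 0.25])
print(combined_command(current, target, observation=current))
```

The design choice is that the conventional controller handles the bulk of the motion, while the learned residual contributes only the small corrections that are hard to hand-engineer, such as compensating for contact forces during insertion.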

Robots in the real world

Ultimately, for flexible grasping and object assembly, what researchers want to create is a robot that can solve tasks in a flexible way by making its own decisions and using its own skills, while the operator specifies only high-level commands. For example, instead of programming the trajectories for a successful grasp, we simply ask the robot to grasp a component and let it decide on the execution.

What does this all mean for the future of the manufacturing industry? AI-enhanced robotics is considered a prerequisite for flexible manufacturing and lot-size-one production. Once every single robot motion no longer has to be preprogrammed, robots will become economically viable for rapidly changing product configurations.

This article was posted on the Siemens Blog. Authors: Juan Aparicio Ojea, Head of Research Group Advanced Manufacturing Automation, and Eugen Solowjow, Staff Scientist.
