Deep reinforcement learning applied to a robotic pick-and-place application

Bibliographic Details
Main Author: Gomes, Natanael Magno (author)
Other Authors: Martins, Felipe N. (author), Lima, José (author), Wörtche, Heinrich (author)
Format: conferenceObject
Language: eng
Published: 2022
Subjects:
Online Access: http://hdl.handle.net/10198/25357
Country: Portugal
OAI: oai:bibliotecadigital.ipb.pt:10198/25357
Description
Summary: Industrial robot manipulators are widely used for repetitive applications that require high precision, like pick-and-place. In many cases, the movements of industrial robot manipulators are hard-coded or manually defined, and need to be adjusted if the objects being manipulated change position. To increase flexibility, an industrial robot should be able to adjust its configuration in order to grasp objects in variable/unknown positions. This can be achieved by off-the-shelf vision-based solutions, but most require prior knowledge about each object to be manipulated. To address this issue, this work presents a ROS-based deep reinforcement learning solution to robotic grasping for a Collaborative Robot (Cobot) using a depth camera. The solution uses deep Q-learning to process the color and depth images and generate an ε-greedy policy used to define the robot action. The Q-values are estimated using a Convolutional Neural Network (CNN) based on pre-trained models for feature extraction. Experiments were carried out in a simulated environment to compare the performance of four different pre-trained CNN models (ResNeXt, MobileNet, MNASNet and DenseNet). Results show that the best performance in our application was reached by MobileNet, with an average of 84% accuracy after training in the simulated environment.
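
For illustration only (this is not code from the paper): a minimal sketch of how Q-values might be estimated with a pretrained MobileNet backbone and turned into an ε-greedy grasping policy, assuming a PyTorch/torchvision implementation. The class and function names (GraspQNetwork, select_action), the use of MobileNetV2 specifically, and the discrete action space are assumptions for the sketch, not details taken from the publication.

import random

import torch
import torch.nn as nn
from torchvision import models


class GraspQNetwork(nn.Module):
    """Q-value estimator built on a pretrained MobileNetV2 feature extractor (illustrative)."""

    def __init__(self, num_actions: int):
        super().__init__()
        # Pretrained backbone used only for feature extraction, as in the abstract.
        backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        self.features = backbone.features
        # Small head mapping the extracted features to one Q-value per discrete action.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(backbone.last_channel, num_actions),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W) color images; fusing the depth image (e.g. as an extra
        # input channel or a parallel branch) is omitted from this sketch.
        return self.head(self.features(rgb))


def select_action(q_net: GraspQNetwork, obs: torch.Tensor, epsilon: float) -> int:
    # Epsilon-greedy policy: explore with probability epsilon, otherwise
    # take the action with the highest estimated Q-value.
    num_actions = q_net.head[-1].out_features
    if random.random() < epsilon:
        return random.randrange(num_actions)
    with torch.no_grad():
        q_values = q_net(obs.unsqueeze(0))
    return int(q_values.argmax(dim=1).item())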