Summary: | With the joint evolution of Industry and Robotics, manufacturing systems are becoming more complex, resilient, and safe. At the same time, Industry 4.0 addresses the requirement to adapt to societal demands as quickly, seamlessly, and flexibly as possible. Despite bringing intelligent methodologies to the factory floor, contemporary robotic engineering techniques are neither flexible nor resilient when faced with new product configurations or different production parameters. Moreover, currently used methodologies rarely allow skills to be transferred from other tasks. Combined, these factors make developing and updating robotic systems a cumbersome task that requires extensive resources. To address these limitations, this thesis explores how Learning from Demonstration can help improve robotic engineering. The first goal was to create a manufacturing simulation scenario that could transfer easily to real situations while remaining responsive to Reinforcement Learning techniques. The second was to study how to discretise a complex task. The third was to assess the impact of reusing pre-trained models on different tasks. The methodology used the robo-gym framework, which connects OpenAI Gym with the Gazebo physics engine, to create a modified pick-and-place task in which an object had to be fitted into a goal pose. Training incorporated expert demonstrations, in line with the Learning from Demonstration paradigm. The algorithm employed was Generative Adversarial Imitation Learning (GAIL), which combines characteristics of Reinforcement Learning and Inverse Reinforcement Learning. The first key finding was that task discretisation can be achieved through reward function modelling: a default smooth gradient error term is combined with positive rewards for completing each sub-task, where both the rewards and the increments between them increase along the sequence (a minimal sketch of such a staged reward appears below). This discretisation approach reduces the complexity associated with the tasks and boosts performance compared with the sequential modelling approach. Secondly, we showed that retraining models can sometimes be advantageous even when new skills are not required, or when the trade-off between adaptation and exploration is favourable; in such cases, the learning curve is more stable. This proposal gathers guidelines to make the engineering associated with a reward-function-based manufacturing task more flexible and simpler. The work developed in this thesis resulted in a paper already submitted to ICAR and a second paper in preparation.
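
To illustrate the staged reward design summarised above, here is a minimal Python sketch. The function name, bonus values, and sub-task bookkeeping are illustrative assumptions, not code from the thesis; it only shows the stated principle of a smooth gradient error term plus sub-task bonuses whose values, and the increments between them, grow along the sequence.

```python
import numpy as np

# Illustrative bonuses: values are strictly increasing (1, 3, 7)
# AND so are the increments between them (+2, then +4), as the
# discretisation guideline above requires.
SUBTASK_BONUSES = [1.0, 3.0, 7.0]

def staged_reward(ee_pose: np.ndarray,
                  goal_pose: np.ndarray,
                  completed_subtasks: int,
                  newly_completed: bool) -> float:
    """Smooth distance gradient plus staged sub-task bonuses.

    Hypothetical helper for illustration only: `ee_pose` and
    `goal_pose` are end-effector and goal poses as flat arrays,
    `completed_subtasks` counts finished sub-tasks (1-based after
    the first completion), and `newly_completed` flags whether a
    sub-task was just finished on this step.
    """
    # Dense, smooth error term: negative Euclidean distance to the goal.
    reward = -np.linalg.norm(ee_pose - goal_pose)
    # Sparse positive bonus when the next sub-task in the sequence is done.
    if newly_completed and 0 < completed_subtasks <= len(SUBTASK_BONUSES):
        reward += SUBTASK_BONUSES[completed_subtasks - 1]
    return reward
```

Under this shaping, the dense gradient presumably guides exploration between stages, while the growing bonuses keep later sub-tasks worth more than any earlier ones, so the agent is not incentivised to linger on early rewards.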