Q-learning for autonomous mobile robot obstacle avoidance


Bibliographic details
Main author: Ribeiro, Tiago (author)
Other authors: Gonçalves, Fernando (author), Garcia, Inês (author), Lopes, Gil (author), Ribeiro, A. Fernando (author)
Format: conferencePaper
Language: English
Published in: 2019
Subjects:
Full text: http://hdl.handle.net/1822/70282
Country: Portugal
OAI: oai:repositorium.sdum.uminho.pt:1822/70282
Description
Abstract: An approach to the problem of autonomous mobile robot obstacle avoidance using Reinforcement Learning, more precisely Q-Learning, is presented in this paper. Reinforcement Learning in robotics has been a challenging topic in recent years: the prospect of equipping a robot with a tool powerful enough to allow autonomous discovery of an optimal behavior through trial-and-error interaction with its environment has motivated numerous in-depth research projects. In this paper, two different Q-Learning approaches are presented, together with an extensive hyperparameter study. These algorithms were developed for a simplified simulation of the Bot'n Roll ONE A (Fig. 1). The simulated robot communicates with the control script via ROS. The robot must traverse three mazes of increasing complexity, similar to those presented at the RoboParty [1] educational event challenge. For both algorithms, an extensive hyperparameter search was performed by running hundreds of simulations with different parameter settings. Both Q-Learning solutions develop different strategies to solve the three labyrinths, improving their learning ability, discovering different approaches to specific situations, and completing the task in complex environments.
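The abstract describes tabular Q-Learning applied to maze navigation with obstacle avoidance. The paper's exact state/action encoding, reward design, hyperparameters, and ROS interface are not reproduced here; the following is only a minimal illustrative sketch of the Q-Learning update rule on a toy grid maze, with all names, grid layout, rewards, and parameter values being assumptions for the example.

```python
import random

# Illustrative toy maze (NOT the paper's environment): 'S' start, 'G' goal,
# '#' obstacle cells the agent must learn to avoid.
GRID = [
    "S..#",
    ".#..",
    "..#.",
    "...G",
]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # assumed hyperparameters, not the paper's

def step(state, action):
    """Deterministic transition: blocked moves keep the robot in place with a penalty."""
    r, c = state
    nr, nc = r + action[0], c + action[1]
    if not (0 <= nr < 4 and 0 <= nc < 4) or GRID[nr][nc] == "#":
        return state, -5.0, False   # hitting a wall/obstacle is punished
    if GRID[nr][nc] == "G":
        return (nr, nc), 10.0, True  # reaching the goal ends the episode
    return (nr, nc), -1.0, False     # small step cost encourages short paths

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    Q = {(r, c): [0.0] * len(ACTIONS) for r in range(4) for c in range(4)}
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(50):  # cap episode length
            # epsilon-greedy action selection (trial-and-error exploration)
            if rng.random() < EPSILON:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
            nxt, reward, done = step(state, ACTIONS[a])
            # Q-Learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
            state = nxt
            if done:
                break
    return Q

def greedy_path(Q, max_steps=20):
    """Follow the learned greedy policy from the start cell."""
    state, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        state, _, done = step(state, ACTIONS[a])
        path.append(state)
        if done:
            break
    return path
```

After training, the greedy policy reaches the goal without entering any obstacle cell; in the paper, the same update rule operates on states derived from the simulated robot's sensors, exchanged over ROS, rather than on grid coordinates.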