Abstract: | Currently, there is growing interest in the development of autonomous navigation technologies for applications in domestic, urban and industrial environments. Machine Learning tools such as neural networks, reinforcement learning and deep learning have been the main choice for solving many problems associated with autonomous mobile robot navigation. This dissertation focuses mainly on solving the problem of mobile robot navigation in maze-like environments with multiple goals. The central idea is to apply a hierarchical structure of reinforcement learning algorithms (Q-Learning and R-Learning) to a robot in a continuous environment so that it can navigate a maze. Both the state space and the action space are obtained by discretizing the data collected by the robot, in order to keep them from becoming too large. The implementation follows a hierarchical approach, a structure that splits the complexity of the problem into several simpler sub-problems, resulting in a set of lower-level tasks followed by a higher-level one. The robot's performance is evaluated in two maze-like environments, showing that the hierarchical approach is a feasible way to reduce the complexity of the problem. Beyond that, two further scenarios are presented: a multi-goal situation, where the robot navigates across multiple goals relying on the topological representation of the environment and the experience memorized during learning; and a dynamic-behaviour situation, where the robot must adapt its policies according to changes in the environment (such as blocked paths). Both scenarios were successfully accomplished, and it is concluded that a hierarchical approach has many advantages compared to a classic reinforcement learning approach.
|
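For context on the tabular setting mentioned in the abstract, the sketch below illustrates a generic one-step Q-Learning update over a discretized state and action space. It is a minimal, illustrative example only; the state/action sizes, hyperparameters and function names are hypothetical and are not taken from the dissertation, which additionally uses R-Learning and a hierarchical decomposition not shown here.

```python
import numpy as np

# Illustrative sketch: tabular Q-Learning with epsilon-greedy exploration.
# All sizes and hyperparameters below are assumed values for demonstration.
n_states, n_actions = 64, 4          # assumed discretization of state/action spaces
alpha, gamma, epsilon = 0.1, 0.95, 0.1

Q = np.zeros((n_states, n_actions))  # Q-table over the discrete spaces

def choose_action(state: int) -> int:
    """Epsilon-greedy selection over the discrete action set."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_update(state: int, action: int, reward: float, next_state: int) -> None:
    """Standard one-step Q-Learning temporal-difference update."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```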