Virtual reality system (Oculus Rift) for remote control of a robotic arm

Bibliographic details
Main author: Daniel António Teixeira Varum (author)
Format: masterThesis
Language: Portuguese
Published: 2015
Subjects:
Full text: https://repositorio-aberto.up.pt/handle/10216/88268
Country: Portugal
OAI: oai:repositorio-aberto.up.pt:10216/88268
Description
Abstract: Recently, immersive interfaces have entered the scene, notably coupled with natural interaction. Head-Mounted Displays allow virtual worlds to be rendered and viewed in 3D, and motion sensors track the user's body and recognize certain gestures. One of the many potential uses of these technologies is the teleoperation of robots without the need for technical expertise.

The idea is to build a proof of concept that combines these areas of study. To that end, a case study was chosen: the physical maintenance of computer network devices. Currently this is done by experts who must reach each network rack to handle the devices. This dissertation aims to change the way these tasks are performed by trying to eliminate the need to walk to each rack.

The goal is to develop an immersive system with a natural interface to remotely control a robotic arm through gestural commands of the user's arms, hands and fingers, so that the mentioned network device maintenance tasks can potentially be executed from a distance. The studied references describe systems that solve similar problems using technologies such as inertial sensors and computer vision algorithms, but they do not consider the issues of robot movement precision or visualization of the physical space where the robot operates.

In the proposed solution, an Oculus Rift and a real-time 3D video stream are used to solve the problem of visualizing the area where the network devices are located. The recognition of several gestures and poses performed by the user is done with the aid of a Leap Motion. A set of gestures and a set of robot actions are defined, and a translation is made from the former to the latter. The system is also expected to balance the trade-off between the gesture recognition rate and the delay between commands and robot actions.
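To illustrate the kind of gesture-to-action translation layer the abstract describes, the sketch below shows one minimal way it could be structured. It is not taken from the dissertation: the gesture names, robot actions, confidence threshold and the GestureEvent/translate helpers are hypothetical placeholders standing in for whatever the thesis's Leap Motion recognizer and robot controller actually expose.

```python
# Illustrative sketch only (assumed names, not the thesis's implementation):
# a static table translating recognized gestures into robot-arm actions,
# with a confidence threshold as one knob in the recognition-rate vs.
# spurious-command trade-off mentioned in the abstract.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Gesture(Enum):
    OPEN_HAND = auto()      # all fingers extended
    CLOSED_FIST = auto()    # grab pose
    POINT_LEFT = auto()
    POINT_RIGHT = auto()


class RobotAction(Enum):
    RELEASE_GRIPPER = auto()
    CLOSE_GRIPPER = auto()
    MOVE_LEFT = auto()
    MOVE_RIGHT = auto()


# Fixed mapping from the gesture set to the robot action set.
GESTURE_TO_ACTION = {
    Gesture.OPEN_HAND: RobotAction.RELEASE_GRIPPER,
    Gesture.CLOSED_FIST: RobotAction.CLOSE_GRIPPER,
    Gesture.POINT_LEFT: RobotAction.MOVE_LEFT,
    Gesture.POINT_RIGHT: RobotAction.MOVE_RIGHT,
}


@dataclass
class GestureEvent:
    gesture: Gesture
    confidence: float  # recognizer confidence in [0, 1]


def translate(event: GestureEvent, min_confidence: float = 0.8) -> Optional[RobotAction]:
    """Map a recognized gesture to a robot action.

    Detections below the confidence threshold are ignored rather than sent
    to the arm, trading a lower effective recognition rate for fewer
    unintended movements.
    """
    if event.confidence < min_confidence:
        return None
    return GESTURE_TO_ACTION.get(event.gesture)


if __name__ == "__main__":
    # Example: a confident closed-fist detection closes the gripper.
    print(translate(GestureEvent(Gesture.CLOSED_FIST, confidence=0.93)))
```

In a real pipeline of this kind, the translated action would then be forwarded to the robot-arm controller, and the threshold tuned against the command-to-motion delay discussed in the abstract.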