Self-supervised learning of depth-based navigation affordances from haptic cues


Bibliographic details
Main author: Baleia, J. (author)
Other authors: Santana, P. (author), Barata, J. (author)
Format: conferenceObject
Language: eng
Published: 2022
Subjects:
Full text: http://hdl.handle.net/10071/25845
Country: Portugal
OAI: oai:repositorio.iscte-iul.pt:10071/25845
Description
Abstract: This paper presents a ground vehicle capable of exploiting haptic cues to learn navigation affordances from depth cues. A simple pan-tilt telescopic antenna and a Kinect sensor, both fitted to the robot's body frame, provide the required haptic and depth sensory feedback, respectively. With the antenna, the robot determines whether an object is traversable. The interaction outcome is then associated with the object's depth-based descriptor. Later on, the robot uses this acquired knowledge to predict whether a newly observed object is traversable just by inspecting its depth-based appearance. A set of field trials shows the robot's ability to progressively learn which elements of the environment are traversable.
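
The self-supervised loop described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the class name, descriptor format, and the use of a 1-nearest-neighbour classifier are all assumptions made for clarity. Haptic probes with the antenna label depth-based object descriptors as traversable or not, and that memory is later queried to predict traversability from depth appearance alone.

```python
import numpy as np

class TraversabilityMemory:
    """Hypothetical sketch of the paper's self-supervised scheme:
    haptic outcomes label depth descriptors, which are then used
    to predict traversability of new objects by appearance only."""

    def __init__(self):
        self.descriptors = []  # depth-based appearance descriptors
        self.labels = []       # haptic outcomes (True = traversable)

    def add_probe(self, descriptor, traversable):
        # Store the outcome of one antenna interaction with an object.
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.labels.append(bool(traversable))

    def predict(self, descriptor):
        # Predict traversability of a newly observed object from its
        # depth-based descriptor via the nearest stored example (1-NN).
        d = np.asarray(descriptor, dtype=float)
        dists = [np.linalg.norm(d - m) for m in self.descriptors]
        return self.labels[int(np.argmin(dists))]

mem = TraversabilityMemory()
mem.add_probe([0.1, 0.2], traversable=True)    # e.g. tall grass: antenna passes
mem.add_probe([0.9, 0.8], traversable=False)   # e.g. rock: antenna blocked
print(mem.predict([0.15, 0.25]))  # → True
```

The choice of a nearest-neighbour rule here is only one plausible instantiation; any classifier mapping depth descriptors to haptic outcomes would fit the same self-supervised pipeline.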