Summary: | Human communication is not limited to speech. Several other channels can complement or replace audible speech, such as gestures, gaze, and silent speech. The diversity of interactive systems and of their contexts of use demands more natural (and intuitive) forms of interaction. In this context, multimodality was introduced, providing different methods of interaction and helping to bridge the gap between human and computer. However, developing interactive systems that integrate multiple interaction modalities is a complex task, particularly due to the wide range of technologies that developers need to master. Additionally, support for a particular interaction technology is often developed for a specific application, which hinders its reuse in other contexts. In this research work, a model is proposed and developed for the integration of silent interaction modalities in interactive systems. A set of generic modalities is designed and developed to support silent interaction aligned with the proposed vision. This research work contributed three generic modalities, namely the gestures modality, the gaze modality, and the silent speech modality. The proposed generic modalities already include basic modules so that they can be used directly. The capabilities of the modalities were tested, their use was demonstrated in a proof-of-concept application controlling VLC, a multimedia content player, and an evaluation was carried out by developers to assess the ease of use of the gestures modality.
|