Physics-based Concatenative Sound Synthesis of Photogrammetric models for Aural and Haptic Feedback in Virtual Environments


Bibliographic details
Main author: Eduardo Magalhães (author)
Other authors: João Jacob (author), Niels Nilsson (author), Rolf Nordahl (author), Gilberto Bernardes (author)
Format: book
Language: eng
Published: 2020
Full text: https://hdl.handle.net/10216/129043
Country: Portugal
OAI: oai:repositorio-aberto.up.pt:10216/129043
Description
Abstract: We present a novel physics-based concatenative sound synthesis (CSS) methodology for congruent interactions across physical, graphical, aural and haptic modalities in Virtual Environments. Navigation in aural and haptic corpora of annotated audio units is driven by user interactions with highly realistic photogrammetry-based models in a game engine, where automated and interactive positional, physics and graphics data are supported. From a technical perspective, the current contribution expands existing CSS frameworks by avoiding the mapping or mining of annotation data to real-time performance attributes, while guaranteeing degrees of novelty and variation for the same gesture.
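The core idea of the abstract (selecting annotated audio units from a corpus using runtime physics data, while keeping variation across repeated gestures) can be sketched in a toy form. This is a hypothetical illustration, not the authors' implementation: the feature names (`velocity`, `roughness`), the Euclidean distance, and the top-k random selection are all assumptions introduced here for clarity.

```python
import random

# Toy corpus of annotated audio units. In a real system each unit would
# reference an audio segment; here only its descriptor annotations matter.
corpus = [
    {"id": 0, "velocity": 0.10, "roughness": 0.20},
    {"id": 1, "velocity": 0.50, "roughness": 0.30},
    {"id": 2, "velocity": 0.55, "roughness": 0.35},
    {"id": 3, "velocity": 0.90, "roughness": 0.80},
]

def distance(unit, event):
    # Euclidean distance between a unit's annotations and the physics event
    # produced by the game engine (e.g. impact velocity, surface roughness).
    return ((unit["velocity"] - event["velocity"]) ** 2
            + (unit["roughness"] - event["roughness"]) ** 2) ** 0.5

def select_unit(event, k=2, rng=random):
    """Pick one of the k corpus units closest to the physics event.

    Sampling among the k nearest (rather than always the nearest)
    introduces variation when the same gesture is repeated.
    """
    ranked = sorted(corpus, key=lambda u: distance(u, event))
    return rng.choice(ranked[:k])

# A moderate-velocity contact selects among the mid-range units (ids 1 and 2).
event = {"velocity": 0.52, "roughness": 0.30}
unit = select_unit(event)
```

The top-k sampling is one simple way to obtain the "novelty and variation for the same gesture" the abstract mentions; the paper's actual mechanism may differ.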