Summary: | The human ability to hear, attend to, and understand incoming sounds is affected by, at least, environmental, morphological, and cognitive factors. In contrast, current implementations of audition in virtual characters often consider only the distance between the sound emitter and the sound receiver as a constraint on the auditory process. To address this limitation, this dissertation presents a novel framework, directed at game developers interested in implementing non-player characters with noisy, character-specific, and context-dependent auditory perception. The framework is designed to integrate with games developed in the Unity game engine, providing custom-made Unity scripts that simplify its integration into Unity games. A First Person Shooter game named Fortress was developed and integrated with the framework in order to test the mechanisms for sound transmission and perception that the framework provides. The framework allows developers to give virtual characters the ability to perceive sound from their surroundings, while keeping that ability closely bound to the context the virtual character is in. The developer may then customize each virtual character's physical and psychological traits, which affect the way the character perceives sound from its surroundings. Consequently, the level of immersion for players is increased, as the characters in the virtual world react to sound from their surroundings in a more human way.
|