Abstract
Algorithms for the improvement of speech intelligibility in hearing prostheses can degrade the spatial quality of the sound signal. To investigate the influence of such algorithms on distance perception and localization, a system to virtually render arbitrary static acoustical scenes has been developed. In this master's thesis, the existing virtual acoustics system has been extended to present more realistic dynamic scenes. The system can also compensate for the head movements of the test subject.
Subjective listening tests were conducted to evaluate the extended system.
Static sources remain stable even during fast head movements, the externalization of sound sources is improved compared to the existing system, and simulated sound sources are nearly indistinguishable from real ones.