Permanent URL to this publication: http://dx.doi.org/10.5167/uzh-46002
Grämer, T. Efficient modeling of head movements and dynamic scenes in virtual acoustics. 2010, University of Zurich/ETH, Faculty of Medicine.
License: Creative Commons Public Domain Dedication
Algorithms for improving speech intelligibility in hearing prostheses can degrade the spatial quality of the sound signal. To investigate the influence of such algorithms on distance perception and localization, a system for virtually rendering arbitrary static acoustical scenes had previously been developed. In this master thesis, that virtual acoustics system is extended to present more realistic dynamic scenes; the extended system can also compensate for head movements of the test subject.
Subjective listening tests were conducted to evaluate the extended system. Static sources remain stable even during fast head movements, the externalization of sound sources is improved compared with the existing system, and simulated sound sources are nearly indistinguishable from real ones.
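The head-movement compensation summarized above typically works by counter-rotating the virtual source directions with the tracked head orientation, so that a world-fixed source keeps a stable position while the listener turns. The thesis itself does not specify its implementation here; the following sketch illustrates only the yaw-only case, and the function name and angle convention are hypothetical:

```python
def compensate_source_azimuth(source_azimuth_deg: float,
                              head_yaw_deg: float) -> float:
    """Return the head-relative azimuth of a world-fixed source.

    Illustrative sketch: a source fixed in world coordinates must be
    rendered at (source azimuth - head yaw) relative to the head, so
    the appropriate HRTF can be selected for each tracker update.
    The result is wrapped to (-180, 180] degrees.
    """
    relative = source_azimuth_deg - head_yaw_deg
    return (relative + 180.0) % 360.0 - 180.0


# Example: a source at 30 deg stays put while the head turns toward it.
print(compensate_source_azimuth(30.0, 30.0))    # source now dead ahead
print(compensate_source_azimuth(-170.0, 20.0))  # wraps around the rear
```

In a real binaural renderer this head-relative angle would drive the selection (and interpolation) of the HRTF pair on every tracker update, which is what keeps static sources perceptually stable during fast head movements.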
Communities & Collections: 04 Faculty of Medicine > University Hospital Zurich > Clinic for Otorhinolaryngology
DDC: 610 Medicine & health
Deposited On: 17 Feb 2011 09:25
Last Modified: 19 Oct 2012 05:30
Number of Pages: 83