Abstract
A method to predict the amount of noise reduction achievable with a two-microphone adaptive beamforming noise reduction system for hearing aids [J. Acoust. Soc. Am. 109, 1123 (2001)] is verified experimentally. Thirty-four experiments are performed in real environments and 58 in simulated environments, and the results are compared to the predictions. In all experiments, one noise source and one target signal source are present. Starting from a reference setting in a moderately reverberant room (reverberation time 0.42 s, volume 34 m³, distance between listener and either sound source 1 m, length of the adaptive filter 25 ms), eight parameters of the acoustical environment and three design parameters of the adaptive beamformer are varied systematically. For those experiments in which the direct-to-reverberant ratio of the noise signal is +3 dB or less, the difference between the predicted and the measured improvement in signal-to-noise ratio (SNR) is -0.21 ± 0.59 dB for real environments and -0.25 ± 0.51 dB for simulated environments (mean ± standard deviation). At higher direct-to-reverberant ratios, the SNR improvement is systematically underestimated, by up to 5.34 dB. The parameters with the greatest influence on the performance of the adaptive beamformer are found to be the direct-to-reverberant ratio of the noise source, the reverberation time of the acoustic environment, and the length of the adaptive filter.