Bayes Risk minimization using metric loss functions


Schlüter, Ralf; Scharrenbach, Thomas; Steinbiss, Volker; Ney, Hermann (2005). Bayes Risk minimization using metric loss functions. In: The 9th European Conference on Speech Communication and Technology (Interspeech), Lisbon, Portugal, 4-8 September 2005, pp. 1449-1452.

Abstract

In this work, fundamental properties of Bayes decision rule using general loss functions are derived analytically and are verified experimentally for automatic speech recognition. It is shown that, for maximum posterior probabilities larger than 1/2, Bayes decision rule with a metric loss function always decides on the posterior maximizing class, independent of the specific choice of (metric) loss function. Also for maximum posterior probabilities less than 1/2, a condition is derived under which the Bayes risk using a general metric loss function is still minimized by the posterior maximizing class. For a speech recognition task with low initial word error rate, it is shown that nearly 2/3 of the test utterances fulfil these conditions and need not be considered for Bayes risk minimization with Levenshtein loss, which reduces the computational complexity of Bayes risk minimization. In addition, bounds for the difference between the Bayes risk for the posterior maximizing class and minimum Bayes risk are derived, which can serve as cost estimates for Bayes risk minimization approaches.
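The central claim can be illustrated with a small numerical sketch. Below is a minimal, hypothetical example (the posterior values and candidate transcriptions are invented for illustration, not taken from the paper): with Levenshtein distance as the metric loss, when the maximum posterior probability exceeds 1/2, the minimum-Bayes-risk decision coincides with the posterior-maximizing (MAP) decision.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings; a metric loss on word sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Hypothetical posterior distribution over candidate transcriptions.
# The maximum posterior (0.6) exceeds 1/2, so by the paper's result the
# MAP class must also minimize the Bayes risk under any metric loss.
posteriors = {"hello world": 0.6, "hello word": 0.3, "yellow world": 0.1}

def bayes_risk(c: str) -> float:
    """Expected loss of deciding on class c: sum over c' of L(c, c') * P(c')."""
    return sum(levenshtein(c, c2) * p for c2, p in posteriors.items())

map_class = max(posteriors, key=posteriors.get)  # MAP decision
mbr_class = min(posteriors, key=bayes_risk)      # minimum-Bayes-risk decision
print(map_class, mbr_class)  # both print "hello world"
```

In this toy setting the MAP hypothesis carries more than half the probability mass, so every competing hypothesis incurs at least as much expected metric loss, and the expensive search over all candidates for the risk-minimizing class could be skipped, which is the source of the computational savings described in the abstract.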


Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Language: English
Event End Date: 8 September 2005
Deposited On: 27 Jun 2013 09:22
Last Modified: 14 Aug 2017 12:52
Publisher: Curran Associates, Inc.
Other Identification Number: merlin-id:8207

Download

Download PDF 'Bayes Risk minimization using metric loss functions' (PDF, 133kB).