Publication: Evaluation of HTR models without Ground Truth Material
Citations
Ströbel, P. B., Clematide, S., Volk, M., Schwitter, R., Hodel, T., & Schoch, D. (2022, June). Evaluation of HTR models without Ground Truth Material. LREC 2022, Marseille. http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.467.pdf
Abstract
The evaluation of Handwritten Text Recognition (HTR) models during their development is straightforward: because HTR is a supervised problem, the usual data split into training, validation, and test data sets allows the evaluation of models in terms of accuracy or error rates. However, the evaluation process becomes tricky as soon as we switch from development to application. A compilation of a new (and forcibly smaller) ground truth (GT) from a sample of the data that we want to apply the model on and the subsequent evaluation of mod […]
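The error rates mentioned in the abstract are conventionally character error rates (CER): the Levenshtein edit distance between the model's transcription and the ground truth, normalized by the reference length. As a minimal illustration of this standard metric (a sketch, not the authors' exact evaluation code), a self-contained CER computation looks like this:

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Edit distance (insertions, deletions, substitutions) between two strings."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit distance divided by reference length."""
    return levenshtein(ref, hyp) / max(len(ref), 1)
```

Computing this requires exactly the ground truth that, as the paper points out, is unavailable (or expensive to produce) at application time, which is the problem the authors address.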
Additional indexing
Creators (Authors): Ströbel, P. B.; Clematide, S.; Volk, M.; Schwitter, R.; Hodel, T.; Schoch, D.
Event Title: LREC 2022
Event Location: Marseille
Event Date: June 2022
Item Type: Conference paper
Language: English
Official URL: http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.467.pdf