Estimating the Information Gap between Textual and Visual Representations


Henning, Christian Andreas; Ewerth, Ralph (2017). Estimating the Information Gap between Textual and Visual Representations. In: ACM International Conference on Multimedia Retrieval (ICMR '17), Bucharest, 6-9 June 2017, pp. 14-22.

Abstract

Photos, drawings, figures, etc. supplement textual information in various kinds of media, for example, in web news or scientific publications. The intended effect of an image can differ considerably, e.g., providing additional information, focusing on certain details of the surrounding text, or simply illustrating the topic in general. As a consequence, the semantic correlation between the information conveyed by the different modalities can vary noticeably as well. Moreover, cross-modal interrelations are often hard to describe precisely. The variety of possible interrelations between textual and graphical information, and the question of how they can be described and automatically estimated, have not been addressed by previous work. In this paper, we present several contributions to close this gap. First, we introduce two measures to describe cross-modal interrelations: cross-modal mutual information (CMI) and semantic correlation (SC). Second, a novel deep learning approach is proposed to estimate the CMI and SC of textual and visual information. Third, three diverse datasets are leveraged to learn an appropriate deep neural network model for this demanding task. The system has been evaluated on a challenging test set, and the experimental results demonstrate the feasibility of the approach.
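
The paper's formal definitions of CMI and SC are not reproduced in this record. As a point of reference only, and assuming CMI builds on the classical Shannon mutual information (the paper's exact definition may differ), the standard quantity for two discrete random variables X and Y is:

% Classical Shannon mutual information. How the paper instantiates the
% joint distribution p(x, y) for textual and visual representations is an
% assumption not specified in this record.
\[
I(X;Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}
\]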

Statistics

Downloads

8 downloads since deposited on 23 Feb 2018
8 downloads in the past 12 months

Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Event End Date: 9 June 2017
Deposited On: 23 Feb 2018 10:23
Last Modified: 20 Sep 2018 04:30
Publisher: ACM
Series Name: Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval
Number of Pages: 9
OA Status: Hybrid
Free access at: Official URL (an embargo period may apply)
Publisher DOI: https://doi.org/10.1145/3078971.3078991
Official URL: http://dl.acm.org/citation.cfm?doid=3078971.3078991
Related URLs: https://www.zora.uzh.ch/id/eprint/149363/

Download

Download PDF: 'Estimating the Information Gap between Textual and Visual Representations'
Content: Published Version
Filetype: PDF
Size: 5 MB