Multimodal news analytics using measures of cross-modal entity and context consistency

Abstract

The World Wide Web has become a popular source of information and news. Multimodal information, e.g., text supplemented with photographs, is typically used to convey news more effectively or to attract attention. Photographs can be decorative or depict additional details, but they might also contain misleading information. Quantifying the cross-modal consistency of entity representations can assist human assessors in evaluating the overall multimodal message; in some cases, such measures might even give hints for detecting fake news, an increasingly important topic in today’s society. In this paper, we present a multimodal approach to quantify the entity coherence between image and text in real-world news. Named entity linking is applied to extract persons, locations, and events from news texts. Several measures are suggested to calculate the cross-modal similarity between the entities in text and photograph by exploiting state-of-the-art computer vision approaches. In contrast to previous work, our system automatically acquires example data from the Web and is applicable to real-world news. Moreover, an approach that quantifies contextual image-text relations is introduced. The feasibility is demonstrated on two datasets that cover different languages, topics, and domains.
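The core idea of cross-modal entity consistency can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes that each textual entity (e.g., a person found via named entity linking) comes with a set of reference embeddings gathered from web example images, and that candidate regions in the news photograph (e.g., detected faces) have been embedded in the same space. The function names, the max-over-pairs matching, and the mean aggregation are assumptions for this sketch.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def entity_consistency(entity_refs: list, image_regions: list) -> float:
    """Consistency score for one textual entity: the best match between
    any of its reference embeddings (from automatically crawled web
    example images) and any candidate region embedding from the news
    photograph. Both the pairing and the max operator are assumptions."""
    return max(cosine_sim(r, g) for r in entity_refs for g in image_regions)

def document_consistency(per_entity_scores: list) -> float:
    """Aggregate per-entity scores into a document-level score;
    the mean is an assumption here, the paper may aggregate differently."""
    return float(np.mean(per_entity_scores))
```

A high score indicates that the entities mentioned in the text are plausibly depicted in the photograph, which can support a human assessor's judgment of the overall image-text message.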

Description
Keywords
Cross-modal consistency, News analytics, Image-text relations, Image repurposing detection
Citation
Müller-Budack, E., Theiner, J., Diering, S., Idahl, M., Hakimov, S., & Ewerth, R. (2021). Multimodal news analytics using measures of cross-modal entity and context consistency. 10. https://doi.org/10.1007/s13735-021-00207-4
License
CC BY 4.0