Search Results

  • Item
    OEKG: The Open Event Knowledge Graph
    (Aachen, Germany : RWTH Aachen, 2021) Gottschalk, Simon; Kacupaj, Endri; Abdollahi, Sara; Alves, Diego; Amaral, Gabriel; Koutsiana, Elisavet; Kuculo, Tin; Major, Daniela; Mello, Caio; Cheema, Gullal S.; Sittar, Abdul; Swati; Tahmasebzadeh, Golsa; Thakkar, Gaurish
    Accessing and understanding contemporary and historical events of global impact such as the US elections and the Olympic Games is a major prerequisite for cross-lingual event analytics that investigate event causes, perceptions, and consequences across country borders. In this paper, we present the Open Event Knowledge Graph (OEKG), a multilingual, event-centric, temporal knowledge graph composed of seven different data sets from multiple application domains, including question answering, entity recommendation and named entity recognition. These data sets are all integrated through an easy-to-use and robust pipeline and by linking to the event-centric knowledge graph EventKG. We describe their common schema and demonstrate the use of the OEKG with three use cases: type-specific image retrieval, hybrid question answering over knowledge graphs and news articles, and language-specific event recommendation. The OEKG and its query endpoint are publicly available.
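
    The abstract notes that the OEKG exposes a publicly available query endpoint. Below is a minimal sketch of querying such an event-centric graph over SPARQL from Python. The endpoint URL is a placeholder and the sem:/rdfs: terms assume a schema along the lines of the Simple Event Model used by EventKG; consult the OEKG documentation for the actual endpoint and vocabulary.

    # Sketch: retrieve a few English-labelled events with their start dates.
    # ENDPOINT and the schema terms below are assumptions, not the
    # documented OEKG interface.
    from SPARQLWrapper import SPARQLWrapper, JSON

    ENDPOINT = "https://example.org/oekg/sparql"  # placeholder endpoint URL

    query = """
    PREFIX sem:  <http://semanticweb.cs.vu.nl/2009/11/sem/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?event ?label ?begin WHERE {
      ?event a sem:Event ;
             rdfs:label ?label ;
             sem:hasBeginTimeStamp ?begin .
      FILTER (lang(?label) = "en")
    }
    LIMIT 10
    """

    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)

    # Each binding maps variable names to {"value": ..., "type": ...} dicts.
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["label"]["value"], row["begin"]["value"])
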
  • Item
    A Feature Analysis for Multimodal News Retrieval
    (Aachen : RWTH, 2020) Tahmasebzadeh, Golsa; Hakimov, Sherzod; Müller-Budack, Eric; Ewerth, Ralph
    Content-based information retrieval relies on the information contained in documents rather than on metadata such as keywords. Most information retrieval methods are based on either text or images. In this paper, we investigate the usefulness of multimodal features for cross-lingual news search in various domains: politics, health, environment, sport, and finance. To this end, we consider five feature types for image and text and compare the performance of the retrieval system using different combinations. Experimental results show that retrieval performance improves when both visual and textual information are considered. In addition, entity overlap outperforms word embeddings among the textual features, while geolocation embeddings achieve the best performance among the visual features in the retrieval task.
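
    A minimal late-fusion sketch of the idea described above: each candidate article is scored against a query with one textual feature (entity overlap, implemented here as Jaccard overlap) and one visual feature (cosine similarity of image embeddings), and the two scores are combined. The equal 0.5/0.5 weighting and the toy data are illustrative assumptions, not the paper's experimental setup.

    import numpy as np

    def entity_overlap(ents_a: set[str], ents_b: set[str]) -> float:
        """Jaccard overlap between the named-entity sets of two articles."""
        if not ents_a and not ents_b:
            return 0.0
        return len(ents_a & ents_b) / len(ents_a | ents_b)

    def cosine(u: np.ndarray, v: np.ndarray) -> float:
        """Cosine similarity between two visual embeddings."""
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def multimodal_score(q_ents, q_img, c_ents, c_img, w_text=0.5, w_img=0.5):
        """Weighted late fusion of a textual and a visual similarity."""
        return w_text * entity_overlap(q_ents, c_ents) + w_img * cosine(q_img, c_img)

    # Rank candidate articles by the fused score (toy data).
    query = ({"Olympics", "Tokyo"}, np.random.rand(512))
    candidates = [({"Olympics", "IOC"}, np.random.rand(512)),
                  ({"Bundesliga"}, np.random.rand(512))]
    ranked = sorted(candidates,
                    key=lambda c: multimodal_score(*query, *c),
                    reverse=True)
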