Search Results

  • Item
    OEKG: The Open Event Knowledge Graph
    (Aachen, Germany : RWTH Aachen, 2021) Gottschalk, Simon; Kacupaj, Endri; Abdollahi, Sara; Alves, Diego; Amaral, Gabriel; Koutsiana, Elisavet; Kuculo, Tin; Major, Daniela; Mello, Caio; Cheema, Gullal S.; Sittar, Abdul; Swati; Tahmasebzadeh, Golsa; Thakkar, Gaurish
    Accessing and understanding contemporary and historical events of global impact, such as the US elections and the Olympic Games, is a major prerequisite for cross-lingual event analytics that investigate event causes, perception and consequences across country borders. In this paper, we present the Open Event Knowledge Graph (OEKG), a multilingual, event-centric, temporal knowledge graph composed of seven different data sets from multiple application domains, including question answering, entity recommendation and named entity recognition. These data sets are all integrated through an easy-to-use and robust pipeline and by linking to the event-centric knowledge graph EventKG. We describe their common schema and demonstrate the use of the OEKG through three use cases: type-specific image retrieval, hybrid question answering over knowledge graphs and news articles, and language-specific event recommendation. The OEKG and its query endpoint are publicly available.
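    The abstract notes that the OEKG exposes a publicly available query endpoint. As a minimal sketch of how such an endpoint could be queried from Python (the endpoint URL below is a placeholder, and the sem: vocabulary is an assumption based on EventKG's use of the Simple Event Model, not something stated in the abstract):

```python
# Hedged sketch: querying an event-centric knowledge graph endpoint.
# The URL is a placeholder, NOT the real OEKG endpoint; the sem:
# vocabulary is assumed from EventKG's Simple Event Model schema.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://oekg.example.org/sparql")  # hypothetical URL
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX sem:  <http://semanticweb.cs.vu.nl/2009/11/sem/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?event ?label WHERE {
      ?event a sem:Event ;
             rdfs:label ?label .
      FILTER (lang(?label) = "en")
    }
    LIMIT 10
""")

results = sparql.query().convert()
for b in results["results"]["bindings"]:
    print(b["event"]["value"], "-", b["label"]["value"])
```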
  • Item
    A Multimodal Approach for Semantic Patent Image Retrieval
    (Aachen, Germany : RWTH Aachen, 2021) Pustu-Iren, Kader; Bruns, Gerrit; Ewerth, Ralph
    Patent images such as technical drawings contain valuable information and are frequently used by experts to compare patents. However, current approaches to patent information retrieval largely focus on textual information. We therefore review previous work on patent retrieval with a focus on the illustrations in figures. In this paper, we report on work in progress towards a novel approach for patent image retrieval that uses deep multimodal features. Scene text spotting and optical character recognition are employed to extract numerals from an image and subsequently identify references to the corresponding sentences in the patent document. Furthermore, we use a state-of-the-art neural model, CLIP, to extract structural features from the illustrations, and we derive textual features from the related patent text using a sentence transformer model. To fuse the multimodal features for similarity search, we apply re-ranking according to averaged or maximum scores. In our experiments, we compare the impact of the different modalities on the task of similarity search for patent images. The experimental results suggest that patent image retrieval can be successfully performed using the proposed feature sets, with the best results achieved when combining the features of both modalities.
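    As a hedged illustration of the fusion strategy the abstract describes (per-modality similarity scores combined by averaging or taking the maximum, then used for re-ranking), the sketch below uses the sentence-transformers library; the model names, file paths, and sentences are illustrative placeholders, not the authors' actual setup.

```python
# Hedged sketch of multimodal re-ranking for patent image retrieval.
# Model names, paths, and texts are placeholders, not the paper's setup.
import torch
from PIL import Image
from sentence_transformers import SentenceTransformer, util

clip = SentenceTransformer("clip-ViT-B-32")          # structural image features
text_model = SentenceTransformer("all-MiniLM-L6-v2") # textual features

# Query drawing and a related patent sentence (stand-ins for the sentences
# the paper locates via scene text spotting and OCR of figure numerals).
query_image = Image.open("query_drawing.png")
query_text = "A gear assembly transmits torque to the output shaft."

# Candidate patents: one drawing and one related sentence each.
cand_images = [Image.open(p) for p in ["cand1.png", "cand2.png"]]
cand_texts = ["A planetary gearbox with a ring gear.",
              "A heat exchanger with parallel plates."]

# Per-modality cosine similarities between the query and all candidates.
img_sims = util.cos_sim(clip.encode([query_image]), clip.encode(cand_images))[0]
txt_sims = util.cos_sim(text_model.encode([query_text]),
                        text_model.encode(cand_texts))[0]

# Fuse scores by averaging (or by taking the maximum) and re-rank.
fused_avg = (img_sims + txt_sims) / 2
fused_max = torch.maximum(img_sims, txt_sims)
ranking = fused_avg.argsort(descending=True)
print("Re-ranked candidate indices:", ranking.tolist())
```

    Averaging rewards candidates that match in both modalities, while taking the maximum lets a strong match in either modality dominate the ranking.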