Search Results

Now showing 1 - 2 of 2
  • Item
    Temporal Evolution of the Migration-related Topics on Social Media
    (Aachen, Germany : RWTH Aachen, 2021) Chen, Yiyi; Gesese, Genet Asefa; Sack, Harald; Alam, Mehwish; Seneviratne, Oshani; Pesquita, Catia; Sequeda, Juan; Etcheverry, Lorena
    This poster focuses on capturing the temporal evolution of migration-related topics in relevant tweets. It uses the Dynamic Embedded Topic Model (DETM) as the learning algorithm to perform a quantitative and qualitative analysis of these emerging topics. TweetsKB is extended with the extracted Twitter dataset along with the DETM results, which take temporality into account. These results are then further analyzed and visualized, revealing that the trajectories of the migration-related topics are in agreement with historical events (an illustrative sketch of such topic trajectories follows after the results list). The source code is available online: https://bit.ly/3dN9ICB.
  • Item
    Challenges of Applying Knowledge Graphs and their Embeddings to a Real-world Use-case
    (Aachen, Germany : RWTH Aachen, 2021) Petzold, Rick; Gesese, Genet Asefa; Bogdanova, Viktoria; Zylowski, Thorsten; Sack, Harald; Alam, Mehwish; Buscaldi, Davide; Cochez, Michael; Osborne, Francesco; Reforgiato Recupero, Diego; Sack, Harald
    Different Knowledge Graph Embedding (KGE) models have been proposed so far, trained on specific KG completion tasks such as link prediction and evaluated on datasets created mainly for that purpose. Mostly, the embeddings learnt on link prediction tasks are not applied to downstream tasks in real-world use-cases, such as the data available in different companies and organizations. This paper presents the challenges of enriching a KG generated from a real-world relational database (RDB) about companies with information from external sources such as Wikidata, and of learning representations for this KG. Moreover, a comparative analysis between the KGEs and various text embeddings on downstream clustering tasks is presented (a sketch of such a comparison follows after the results list). The experimental results indicate that in use-cases like the one considered here, where the KG is highly skewed, it is beneficial to use text embeddings or language models instead of KGEs.
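
The first item's approach can be pictured, very roughly, as topic modelling over timestamped tweets followed by tracking topic prevalence per time slice. The sketch below is not the authors' DETM pipeline (their code is at the bit.ly link above); it substitutes a plain LDA plus per-year averaging, and the example tweets, years, and topic count are made-up placeholders.

    # Minimal sketch (not the authors' code): tracking topic prevalence over time
    # on a toy set of timestamped tweets. The poster uses DETM; here a plain LDA
    # plus per-year averaging stands in to illustrate "topic trajectories".
    from collections import defaultdict

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    # Hypothetical (year, tweet text) pairs.
    tweets = [
        (2015, "refugees crossing the mediterranean sea"),
        (2015, "asylum applications rise across europe"),
        (2016, "border controls and migration policy debate"),
        (2016, "integration programmes for newly arrived migrants"),
    ]

    texts = [text for _, text in tweets]
    years = [year for year, _ in tweets]

    # Bag-of-words representation and a small LDA model.
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(X)          # shape: (n_tweets, n_topics)

    # Average topic proportions per year -> one trajectory per topic.
    by_year = defaultdict(list)
    for year, dist in zip(years, doc_topics):
        by_year[year].append(dist)

    for year in sorted(by_year):
        print(year, np.mean(by_year[year], axis=0).round(3))

On a real corpus, plotting these per-year averages gives the kind of trajectories that can be compared against historical events, as the poster does with the DETM output.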
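
For the second item, the comparison of KG embeddings against text embeddings on a downstream clustering task can be sketched as follows. This is not the paper's pipeline: the embedding matrices below are random placeholders standing in for pre-computed KGE vectors (e.g. from a model such as TransE) and text embeddings of the same company entities, and the cluster count is arbitrary.

    # Minimal sketch (not the paper's pipeline): comparing two embedding spaces
    # for the same set of company entities on a clustering task, using the
    # label-free silhouette score as the quality measure.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    n_companies = 200
    kge_vectors = rng.normal(size=(n_companies, 100))   # placeholder for KGE vectors
    text_vectors = rng.normal(size=(n_companies, 384))  # placeholder for text embeddings

    def cluster_quality(vectors, n_clusters=5):
        """Cluster the embeddings and return the silhouette score."""
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)
        return silhouette_score(vectors, labels)

    print("KGE  silhouette:", round(cluster_quality(kge_vectors), 3))
    print("Text silhouette:", round(cluster_quality(text_vectors), 3))

With real vectors, consistently better clustering quality for the text embeddings would mirror the paper's finding for its highly skewed KG; the paper itself reports the comparison across several downstream clustering tasks.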