Search Results

Now showing 1 - 10 of 72
  • Item
    Comparative Verification of the Digital Library of Mathematical Functions and Computer Algebra Systems
    (Berlin ; Heidelberg : Springer, 2022) Greiner-Petter, André; Cohl, Howard S.; Youssef, Abdou; Schubotz, Moritz; Trost, Avi; Dey, Rajen; Aizawa, Akiko; Gipp, Bela; Fisman, Dana; Rosu, Grigore
    Digital mathematical libraries assemble the knowledge of years of mathematical research. Numerous disciplines (e.g., physics, engineering, pure and applied mathematics) rely heavily on such compendia of gathered findings. Likewise, modern research applications rely more and more on computational solutions, which are often calculated and verified by computer algebra systems. Hence, the correctness, accuracy, and reliability of both digital mathematical libraries and computer algebra systems are crucial attributes for modern research. In this paper, we present a novel approach to verify a digital mathematical library and two computer algebra systems against one another by converting mathematical expressions from one system to the other. We use our previously developed conversion tool (referred to as LaCASt) to translate formulae from the NIST Digital Library of Mathematical Functions to the computer algebra systems Maple and Mathematica. The contributions of our presented work are as follows: (1) we present the most comprehensive verification of computer algebra systems and digital mathematical libraries against one another; (2) we significantly enhance the performance of the underlying translator in terms of coverage and accuracy; and (3) we provide open access to translations for Maple and Mathematica of the formulae in the NIST Digital Library of Mathematical Functions.
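    The core verification idea can be sketched in a few lines: translate a library formula into a CAS, then check that both sides agree symbolically or on sample points. Below, SymPy stands in for the paper's actual targets Maple and Mathematica, and the identity and test points are illustrative, not taken from the paper:

      # Check a DLMF-style identity symbolically, then numerically as a fallback.
      import sympy as sp

      z = sp.symbols('z')
      lhs = sp.sin(2 * z)              # left-hand side of the identity
      rhs = 2 * sp.sin(z) * sp.cos(z)  # right-hand side

      if sp.simplify(lhs - rhs) == 0:
          print("verified symbolically")
      else:
          samples = [0.1, 0.5, 1.3, 2.7]  # arbitrary test points
          ok = all(abs(complex((lhs - rhs).evalf(subs={z: p}))) < 1e-10
                   for p in samples)
          print("verified numerically" if ok else "discrepancy found")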
  • Item
    EVENTSKG: A 5-Star Dataset of Top-Ranked Events in Eight Computer Science Communities
    (Berlin ; Heidelberg : Springer, 2019) Fathalla, Said; Lange, Christoph; Auer, Sören; Hitzler, Pascal; Fernández, Miriam; Janowicz, Krzysztof; Zaveri, Amrapali; Gray, Alasdair J.G.; Lopez, Vanessa; Haller, Armin; Hammar, Karl
    Metadata of scientific events has become increasingly available on the Web, albeit often as raw data in various formats, disregarding its semantics and interlinking relations. This restricts the usability of the data for, e.g., subsequent analyses and reasoning. Therefore, there is a pressing need to represent this data semantically, i.e., as Linked Data. We present the new release of the EVENTSKG dataset, comprising comprehensive semantic descriptions of scientific events of eight computer science communities. Currently, EVENTSKG is a 5-star dataset containing metadata of 73 top-ranked event series (almost 2,000 events) established over the last five decades. The new release is a Linked Open Dataset adhering to an updated version of the Scientific Events Ontology, a reference ontology for representing event metadata, leading to richer and cleaner data. To facilitate the maintenance of EVENTSKG and to ensure its sustainability, EVENTSKG is coupled with a Java API that enables users to add or update event metadata without going into the details of the dataset's representation. We shed light on event characteristics by analyzing EVENTSKG data, which provides a flexible means of better understanding renowned CS events.
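    A 5-star Linked Open Dataset of this kind is typically consumed via SPARQL. The sketch below shows the general pattern; the endpoint URL and the seo: vocabulary terms are illustrative assumptions, not confirmed identifiers from EVENTSKG:

      # Query an RDF dataset of event series for acronyms (hypothetical endpoint).
      from SPARQLWrapper import SPARQLWrapper, JSON

      ENDPOINT = "https://example.org/eventskg/sparql"  # placeholder URL

      QUERY = """
      PREFIX seo: <http://example.org/seo#>  # placeholder namespace
      SELECT ?series ?acronym WHERE {
        ?series a seo:EventSeries ;
                seo:acronym ?acronym .
      } LIMIT 10
      """

      client = SPARQLWrapper(ENDPOINT)
      client.setQuery(QUERY)
      client.setReturnFormat(JSON)
      for row in client.query().convert()["results"]["bindings"]:
          print(row["series"]["value"], row["acronym"]["value"])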
  • Item
    Collaborative annotation and semantic enrichment of 3D media
    (New York, NY, United States : Association for Computing Machinery, 2022) Rossenova, Lozana; Schubert, Zoe; Vock, Richard; Sohmen, Lucia; Günther, Lukas; Duchesne, Paul; Blümel, Ina; Aizawa, Akiko
    A new FOSS (free and open source software) toolchain and associated workflow are being developed in the context of NFDI4Culture, a German consortium of research and cultural heritage institutions working towards a shared infrastructure for research data that meets the needs of 21st-century data creators, maintainers and end users across the broad spectrum of the digital libraries and archives field and the digital humanities. This short paper and demo present how the integrated toolchain connects: 1) OpenRefine - for data reconciliation and batch upload; 2) Wikibase - for linked open data (LOD) storage; and 3) Kompakkt - for rendering and annotating 3D models. The presentation is aimed at librarians, digital curators and data managers interested in learning how to manage research datasets containing 3D media, and how to make them available within an open data environment with 3D-rendering and collaborative annotation features.
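    Records stored in the Wikibase layer of such a toolchain can be fetched over the standard MediaWiki API. A minimal sketch, assuming a hypothetical instance URL and item ID:

      # Fetch one entity from a Wikibase instance (URL and Q-id are placeholders).
      import requests

      API = "https://example.org/w/api.php"

      resp = requests.get(API, params={
          "action": "wbgetentities",  # standard Wikibase API action
          "ids": "Q42",               # placeholder item, e.g. a 3D model record
          "format": "json",
      }, timeout=30)
      entity = resp.json()["entities"]["Q42"]
      print(entity["labels"].get("en", {}).get("value"))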
  • Item
    Temporal Role Annotation for Named Entities
    (Amsterdam [u.a.] : Elsevier, 2018) Koutraki, Maria; Bakhshandegan-Moghaddam, Farshad; Sack, Harald; Fensel, Anna; de Boer, Victor; Pellegrini, Tassilo; Kiesling, Elmar; Haslhofer, Bernhard; Hollink, Laura; Schindler, Alexander
    Natural language understanding tasks are key to extracting structured and semantic information from text. One of the most challenging problems in natural language is ambiguity, and resolving it requires context, including temporal information. This paper focuses on the task of extracting temporal roles from text, e.g., CEO of an organization or head of state. A temporal role has a domain which may resolve to different entities depending on the context and especially on temporal information, e.g., CEO of Microsoft in 2000. We focus on temporal role extraction as a precursor for temporal role disambiguation. We propose a structured prediction approach based on Conditional Random Fields (CRF) to annotate temporal roles in text, relying on a rich feature set that extracts syntactic and semantic information from text. We perform an extensive evaluation of our approach on two datasets. For the first dataset, we extract nearly 400k instances from Wikipedia through distant supervision, whereas the second dataset is a manually curated ground truth of 200 instances extracted from a sample of The New York Times (NYT) articles. Finally, the proposed approach is compared against baselines, showing significant improvements on both datasets.
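    The annotation step can be reproduced in miniature with an off-the-shelf CRF library. The sketch below uses sklearn-crfsuite with a toy sentence and a deliberately small feature set; the paper's actual features are much richer:

      # Tag temporal-role tokens with a linear-chain CRF (toy data).
      import sklearn_crfsuite

      def word2features(sent, i):
          word, pos = sent[i]
          return {
              "lower": word.lower(),
              "pos": pos,                  # syntactic feature
              "is_title": word.istitle(),
              "prev_lower": sent[i - 1][0].lower() if i > 0 else "<s>",
          }

      train_sents = [[("CEO", "NN"), ("of", "IN"), ("Microsoft", "NNP")]]
      X = [[word2features(s, i) for i in range(len(s))] for s in train_sents]
      y = [["B-ROLE", "I-ROLE", "I-ROLE"]]  # role span: "CEO of Microsoft"

      crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
      crf.fit(X, y)
      print(crf.predict(X))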
  • Item
    The Research Core Dataset (KDSF) in the Linked Data context
    (Amsterdam [u.a.] : Elsevier, 2019) Walther, Tatiana; Hauschke, Christian; Kasprzik, Anna; Sicilia, Miguel-Angel; Simons, Ed; Clements, Anna; de Castro, Pablo; Bergström, Johan
    This paper describes our efforts to implement the Research Core Dataset (“Kerndatensatz Forschung”; KDSF) as an ontology in VIVO. KDSF is used in VIVO to record the required metadata on incoming data and to produce reports as an output. While both processes need an elaborate adaptation of the KDSF specification, this paper focuses on the adaptation of the KDSF basic data model for recording data in VIVO. In this context, the VIVO and KDSF ontologies were compared with respect to domain, syntax, structure, and granularity in order to identify correspondences and mismatches. To produce an alignment, different matching approaches were applied. Furthermore, we made the necessary modifications and extensions to KDSF classes and properties.
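    One of the simplest matching approaches in such alignment work is pairing classes whose rdfs:label strings coincide. A minimal sketch with rdflib, using placeholder file names and not implying this was the paper's exact procedure:

      # Pair OWL classes from two ontologies by identical (lower-cased) labels.
      from rdflib import Graph, RDF, RDFS
      from rdflib.namespace import OWL

      def class_labels(path):
          g = Graph().parse(path)  # rdflib infers the serialization
          return {
              str(label).lower(): cls
              for cls in g.subjects(RDF.type, OWL.Class)
              for label in g.objects(cls, RDFS.label)
          }

      kdsf = class_labels("kdsf.ttl")  # placeholder file names
      vivo = class_labels("vivo.owl")
      for label in kdsf.keys() & vivo.keys():
          print(f"{label}: {kdsf[label]} <-> {vivo[label]}")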
  • Item
    Contextual Language Models for Knowledge Graph Completion
    (Aachen, Germany : RWTH Aachen, 2021) Biswas, Russa; Sofronova, Radina; Alam, Mehwish; Sack, Harald; Ali, Mehdi; Groth, Paul; Hitzler, Pascal; Lehmann, Jens; Paulheim, Heiko; Rettinger, Achim; Sadeghi, Afshin; Tresp, Volker
    Knowledge Graphs (KGs) have become the backbone of various machine learning based applications over the past decade. However, KGs are often incomplete and inconsistent. Several representation learning based approaches have been introduced to complete the missing information in KGs. Besides, Neural Language Models (NLMs) have gained huge momentum in NLP applications. However, exploiting contextual NLMs to tackle the Knowledge Graph Completion (KGC) task is still an open research problem. In this paper, a GPT-2 based KGC model is proposed and evaluated on two benchmark datasets. The initial results obtained from fine-tuning the GPT-2 model for triple classification underline the importance of using NLMs for KGC. The impact of contextual language models on KGC is also discussed.
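    The triple-classification setup can be approximated with the Hugging Face transformers library: verbalize a (head, relation, tail) triple and score it with a GPT-2 sequence-classification head. This sketch mirrors the idea only; the head is untrained here and the label convention is an assumption:

      # Score a verbalized triple with GPT-2 plus a classification head.
      import torch
      from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

      tok = GPT2Tokenizer.from_pretrained("gpt2")
      tok.pad_token = tok.eos_token  # GPT-2 ships without a pad token
      model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
      model.config.pad_token_id = tok.pad_token_id

      text = "Berlin capital of Germany"  # verbalized (head, relation, tail)
      batch = tok(text, return_tensors="pt")
      with torch.no_grad():
          logits = model(**batch).logits
      print("plausible" if logits.argmax(-1).item() == 1 else "implausible")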
  • Item
    Data Protection Impact Assessments in Practice: Experiences from Case Studies
    (Berlin ; Heidelberg : Springer, 2022) Friedewald, Michael; Schiering, Ina; Martin, Nicholas; Hallinan, Dara; Katsikas, Sokratis; Lambrinoudakis, Costas; Cuppens, Nora; Mylopoulos, John; Kalloniatis, Christos; Meng, Weizhi; Furnell, Steven; Pallas, Frank; Pohle, Jörg; Sasse, M. Angela; Abie, Habtamu; Ranise, Silvio; Verderame, Luca; Cambiaso, Enrico; Vidal, Jorge Maestre; Monge, Marco Antonio Sotelo
    In the context of the project “A Data Protection Impact Assessment (DPIA) Tool for Practical Use in Companies and Public Administration”, an operationalization for Data Protection Impact Assessments was developed based on the approach of Forum Privatheit. This operationalization was tested and refined during twelve tests with startups, small- and medium-sized enterprises, corporations and public bodies. This paper presents the operationalization and summarizes the experiences from these tests.
  • Item
    Check_square at CheckThat! 2020: Claim Detection in Social Media via Fusion of Transformer and Syntactic Features
    (Aachen, Germany : RWTH Aachen, 2020) Cheema, Gullasl S.; Hakimov, Sherzod; Ewerth, Ralph; Cappellato, Linda; Eickhoff, Carsten; Ferro, Nicola; Névéol, Aurélie
    In this digital age of news consumption, a news reader has the ability to react, express and share opinions with others in a highly interactive and fast manner. As a consequence, fake news has made its way into our daily lives because of the very limited capacity of large companies as well as individuals to verify news on the Internet. In this paper, we focus on solving two problems which are part of the fact-checking ecosystem and can help to automate fact-checking of claims in an ever-increasing stream of content on social media. For the first problem, claim check-worthiness prediction, we explore the fusion of syntactic features and deep transformer Bidirectional Encoder Representations from Transformers (BERT) embeddings to classify the check-worthiness of a tweet, i.e. whether it includes a claim or not. We conduct a detailed feature analysis and present our best-performing models for English and Arabic tweets. For the second problem, claim retrieval, we explore the pre-trained embeddings from a Siamese network transformer model (sentence-transformers) specifically trained for semantic textual similarity, and perform KD-search to retrieve verified claims with respect to a query tweet.
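    The retrieval step follows a common pattern: embed both the query tweet and the verified claims with a sentence-transformers model, then look up nearest neighbours in a KD-tree. A minimal sketch with an illustrative model name and toy claims:

      # Retrieve the verified claim nearest to a query tweet.
      from scipy.spatial import cKDTree
      from sentence_transformers import SentenceTransformer

      model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in STS model

      claims = [
          "The Eiffel Tower is in Paris.",
          "Water boils at 100 degrees Celsius at sea level.",
      ]
      tree = cKDTree(model.encode(claims))  # KD-tree over claim embeddings

      query = "At what temperature does water boil?"
      dist, idx = tree.query(model.encode([query]), k=1)
      print(claims[idx[0]], f"(distance {dist[0]:.3f})")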
  • Item
    Combining Textual Features for the Detection of Hateful and Offensive Language
    (Aachen, Germany : RWTH Aachen, 2021) Hakimov, Sherzod; Ewerth, Ralph; Mehta, Parth; Mandl, Thomas; Majumder, Prasenjit; Mitra, Mandar
    The detection of offensive, hateful and profane language has become a critical challenge, since many users in social networks are exposed to cyberbullying activities on a daily basis. In this paper, we present an analysis of combining different textual features for the detection of hateful or offensive posts on Twitter. We provide a detailed experimental evaluation to understand the impact of each building block in a neural network architecture. The proposed architecture is evaluated on English Subtask 1A (Identifying Hate, Offensive and Profane Content from Posts) of the HASOC-2021 dataset, under the team name TIB-VA. We compare different variants of contextual word embeddings combined with character-level embeddings and an encoding of collected hate terms.
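    Combining such feature blocks usually comes down to concatenating the different embeddings before a classification layer. A minimal PyTorch sketch; the dimensions and pooling choice are illustrative, not the paper's configuration:

      # Fuse a contextual word vector with a pooled character-level encoding.
      import torch
      import torch.nn as nn

      class FusionClassifier(nn.Module):
          def __init__(self, word_dim=768, char_vocab=100, char_dim=32, n_classes=2):
              super().__init__()
              self.char_emb = nn.Embedding(char_vocab, char_dim)
              self.classifier = nn.Linear(word_dim + char_dim, n_classes)

          def forward(self, word_vec, char_ids):
              char_vec = self.char_emb(char_ids).mean(dim=1)  # pool over chars
              return self.classifier(torch.cat([word_vec, char_vec], dim=-1))

      model = FusionClassifier()
      word_vec = torch.randn(1, 768)             # e.g. a BERT [CLS] vector
      char_ids = torch.randint(0, 100, (1, 20))  # 20 character ids
      print(model(word_vec, char_ids).shape)     # torch.Size([1, 2])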
  • Item
    Modelling Archival Hierarchies in Practice: Key Aspects and Lessons Learned
    (Aachen, Germany : RWTH Aachen, 2021) Vafaie, Mahsa; Bruns, Oleksandra; Pilz, Nastasja; Dessì, Danilo; Sack, Harald; Sumikawa, Yasunobu; Ikejiri, Ryohei; Doucet, Antoine; Pfanzelter, Eva; Hasanuzzaman, Mohammed; Dias, Gaël; Milligan, Ian; Jatowt, Adam
    An increasing number of archival institutions aim to provide public access to historical documents. Ontologies have been designed, developed and utilised to model the archival description of historical documents and to enable interoperability between different information sources. However, due to the heterogeneous nature of archives and archival systems, current ontologies for the representation of archival content do not always cover all existing forms of structural organisation equally well. After briefly contextualising the heterogeneity in the hierarchical structure of German archives, this paper describes and evaluates differences between two archival ontologies, ArDO and RiC-O, and their approaches to modelling hierarchy levels and archive dynamics.
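    To make the modelling question concrete, here is a small RDF sketch of a fonds/file/record hierarchy with rdflib; the rico: class and property names are our reading of the published RiC-O ontology and should be checked against the ontology version in use:

      # Express a three-level archival hierarchy as RDF triples.
      from rdflib import Graph, Namespace, RDF

      RICO = Namespace("https://www.ica.org/standards/RiC/ontology#")
      EX = Namespace("https://example.org/archive/")  # placeholder IRIs

      g = Graph()
      g.bind("rico", RICO)

      fonds, file_, record = EX.fonds1, EX.file7, EX.record42
      g.add((fonds, RDF.type, RICO.RecordSet))
      g.add((file_, RDF.type, RICO.RecordSet))
      g.add((record, RDF.type, RICO.Record))
      g.add((file_, RICO.isOrWasIncludedIn, fonds))   # file within the fonds
      g.add((record, RICO.isOrWasIncludedIn, file_))  # record within the file

      print(g.serialize(format="turtle"))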