  • Toward a Comparison Framework for Interactive Ontology Enrichment Methodologies
    (Aachen, Germany : RWTH Aachen, 2022) Vrolijk, Jarno; Reklos, Ioannis; Vafaie, Mahsa; Massari, Arcangelo; Mohammadi, Maryam; Rudolph, Sebastian; Fu, Bo; Lambrix, Patrick; Pesquita, Catia
    The growing demand for well-modeled ontologies in diverse application areas increases the need for intuitive interaction techniques that support human domain experts in ontology modeling and enrichment tasks, such that quality expectations are met. Beyond the correctness of the specified information, the quality of an ontology depends on its (relative) completeness, i.e., whether the ontology contains all the necessary information to draw expected inferences. On an abstract level, the Ontology Enrichment problem consists of identifying and filling the gap between information that can be logically inferred from the ontology and the information expected to be inferable by the user. To this end, numerous approaches have been described in the literature, providing methodologies from the fields of Formal Semantics and Automated Reasoning targeted at eliciting knowledge from human domain experts. These approaches vary greatly in many aspects and their applicability typically depends on the specifics of the concrete modeling scenario at hand. Toward a better understanding of the landscape of methodological possibilities, this position paper proposes a framework consisting of multiple performance dimensions along which existing and future approaches to interactive ontology enrichment can be characterized. We apply our categorization scheme to a selection of methodologies from the literature. In light of this comparison, we address the limitations of the methods and propose directions for future work.
  • Identifying and correcting invalid citations due to DOI errors in Crossref data
    (Dordrecht [u.a.] : Springer Science + Business Media B.V., 2022) Cioffi, Alessia; Coppini, Sara; Massari, Arcangelo; Moretti, Arianna; Peroni, Silvio; Santini, Cristian; Shahidzadeh Asadi, Nooshin
    This work aims to identify classes of DOI mistakes by analysing the open bibliographic metadata available in Crossref, highlighting which publishers were responsible for such mistakes and how many of these incorrect DOIs could be corrected through automatic processes. Using a list of invalid cited DOIs gathered by OpenCitations while processing the OpenCitations Index of Crossref open DOI-to-DOI citations (COCI) over the past two years, we retrieved the citations to such invalid DOIs in the January 2021 Crossref dump. We processed these citations by keeping track of their validity and of the publishers responsible for uploading the related citation data to Crossref. Finally, we identified patterns of factual errors in the invalid DOIs and the regular expressions needed to catch and correct them. The outcomes of this research show that only a few publishers were responsible for and/or affected by the majority of invalid citations. We extended the taxonomy of DOI name errors proposed in past studies and defined more elaborate regular expressions that can clean a greater number of mistakes in invalid DOIs than prior approaches. The data gathered in our study enable investigating possible reasons for DOI mistakes from a qualitative point of view, helping publishers identify the problems underlying their production of invalid citation data. In addition, the DOI cleaning mechanism we present could be integrated into existing processes (e.g. in COCI) to add citations by automatically correcting wrong DOIs. This study was run strictly following Open Science principles, and, as such, our research outcomes are fully reproducible.