Search Results

  • Persistent Identification for Conferences
    (Paris : CODATA, 2022) Franken, Julian; Birukou, Aliaksandr; Eckert, Kai; Fahl, Wolfgang; Hauschke, Christian; Lange, Christoph
    Persistent identification of entities plays a major role in the ongoing digitization of many fields. In the scholarly publishing realm there are already persistent identifiers (PIDs) for papers (DOI), people (ORCID), organisations (GRID, ROR), and books (ISBN), but there is not yet a generally accepted PID system for scholarly events such as conferences and workshops. This article describes the relevant use cases that motivate the introduction of persistent identifiers for conferences. The use cases were derived mainly from interviews, discussions with experts, and their previous work. Researchers, conference organizers, and data consumers were identified as the primary stakeholders involved in the typical conference event life cycle. The resulting list of use cases illustrates how PIDs for conference events will improve the current situation for these stakeholders and help address the problems they face today. (A minimal DOI-resolution sketch appears after this result list.)
  • A comprehensive quality assessment framework for scientific events
    (Dordrecht [etc.] : Springer Science + Business Media B.V., 2020) Vahdati, Sahar; Fathalla, Said; Lange, Christoph; Behrend, Andreas; Say, Aysegul; Say, Zeynep; Auer, Sören
    Systematic assessment of scientific events has become increasingly important for research communities. A range of metrics (e.g., citations, h-index; see the sketch after this list) has been developed by different research communities to make such assessments effective. However, most of the metrics for assessing the quality of less formal publication venues and events have not yet been deeply investigated. It is also rather challenging to develop such metrics because each research community has its own formal and informal rules of communication and quality standards. In this article, we develop a comprehensive framework of assessment metrics for evaluating scientific events and the stakeholders involved. The resulting quality metrics are determined with respect to three general categories: events, persons, and bibliometrics. Our assessment methodology is empirically applied to several series of computer science events, such as conferences and workshops, using publicly available data to determine quality metrics. We show that the metrics' values coincide with the community's intuitive agreement on its "top conferences". Our results demonstrate that highly ranked events share similar profiles, including the provision of outstanding reviews, visiting diverse locations, having reputed people involved, and renowned sponsors.
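
PID resolution sketch (for "Persistent Identification for Conferences"): a minimal illustration of how an established PID infrastructure can be queried programmatically, here using DOI content negotiation against the public doi.org resolver. The DOI in the snippet is a hypothetical placeholder, not the DOI of the article above; the Accept header requests Citation Style Language JSON, a metadata format the resolver's content-negotiation service supports.

    import json
    import urllib.request

    # Hypothetical placeholder DOI; substitute a real one to run this.
    doi = "10.1234/example-conference-paper"

    # doi.org supports content negotiation: requesting CSL JSON returns
    # bibliographic metadata instead of redirecting to the landing page.
    req = urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
    )
    with urllib.request.urlopen(req) as resp:
        metadata = json.load(resp)

    print(metadata.get("title"))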
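
h-index sketch (for "A comprehensive quality assessment framework for scientific events"): the abstract cites the h-index as one bibliometric input, so here is a minimal sketch of its standard definition, the largest h such that h items have at least h citations each. The citation counts are invented for illustration.

    def h_index(citations):
        """Largest h such that h items have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # the top `rank` items all have >= rank citations
            else:
                break
        return h

    # Invented counts: four items have at least four citations each.
    print(h_index([10, 8, 5, 4, 3]))  # -> 4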