Search Results

  • Item
    TinyGenius: Intertwining natural language processing with microtask crowdsourcing for scholarly knowledge graph creation
    (New York, NY, United States : Association for Computing Machinery, 2022) Oelen, Allard; Stocker, Markus; Auer, Sören; Aizawa, Akiko
    As the number of published scholarly articles grows steadily each year, new methods are needed to organize scholarly knowledge so that it can be discovered and used more efficiently. Natural Language Processing (NLP) techniques can autonomously process scholarly articles at scale and create machine-readable representations of article content. However, autonomous NLP methods are far from accurate enough to create a high-quality knowledge graph, and quality is crucial for the graph to be useful in practice. We present TinyGenius, a methodology to validate NLP-extracted scholarly knowledge statements using microtasks performed with crowdsourcing. The scholarly context in which the crowd workers operate poses multiple challenges. The explainability of the employed NLP methods is crucial for providing the context crowd workers need to support their decision process. We employed TinyGenius to populate a paper-centric knowledge graph, using five distinct NLP methods. In the end, the resulting knowledge graph serves as a digital library for scholarly articles.
  • Item
    An Approach to Evaluate User Interfaces in a Scholarly Knowledge Communication Domain
    (Cham : Springer, 2023) Obrezkov, Denis; Oelen, Allard; Auer, Sören; Abdelnour-Nocera, José L.; Lárusdóttir, Marta; Petrie, Helen; Piccinno, Antonio; Winckler, Marco
    The amount of research articles produced every day is overwhelming: scholarly knowledge is becoming harder to communicate and easier to lose. A possible solution is to represent the information in knowledge graphs: structures that represent knowledge as networks of entities, their semantic types, and the relationships between them. But this solution has its own drawback: given its very specific task, it requires new methods for designing and evaluating user interfaces. In this paper, we propose an approach for user interface evaluation in the knowledge communication domain. We base our methodology on the well-established Cognitive Walkthrough approach but employ a different set of questions, tailoring the method towards domain-specific needs. We demonstrate our approach on a scholarly knowledge graph implementation called the Open Research Knowledge Graph (ORKG).
  • Item
    Crowdsourcing Scholarly Discourse Annotations
    (New York, NY : ACM, 2021) Oelen, Allard; Stocker, Markus; Auer, Sören
    The number of scholarly publications grows steadily every year, and it is becoming harder to find, assess, and compare scholarly knowledge effectively. Scholarly knowledge graphs have the potential to address these challenges. However, creating such graphs remains a complex task. We propose a method to crowdsource structured scholarly knowledge from paper authors with a web-based user interface supported by artificial intelligence. The interface enables authors to select key sentences for annotation. It integrates multiple machine learning algorithms to assist authors during the annotation, including class recommendation and key sentence highlighting. We envision that the interface is integrated in paper submission processes, for which we define three main task requirements. We evaluated the interface with a user study in which participants were assigned the task of annotating one of their own articles. With the resulting data, we determined whether the participants were able to perform the task successfully. Furthermore, we evaluated the interface's usability and the participants' attitude towards the interface with a survey. The results suggest that sentence annotation is a feasible task for researchers and that they do not object to annotating their articles during the submission process.
  • Item
    Quality evaluation of open educational resources
    (Cham : Springer, 2020) Elias, Mirette; Oelen, Allard; Tavakoli, Mohammadreza; Kismihok, Gábor; Auer, Sören; Alario-Hoyos, Carlos; Rodríguez-Triana, María Jesús; Scheffel, Maren; Arnedillo-Sánchez, Inmaculada; Dennerlein, Sebastian Maximilian
    Open Educational Resources (OER) are free, openly licensed educational materials widely used for learning. OER quality assessment has become essential to support learners and teachers in finding high-quality OERs and to enable online learning repositories to improve their OERs. In this work, we establish a set of evaluation metrics that assess OER quality in OER authoring tools. These metrics guide OER content authors in creating high-quality content. The metrics were implemented and evaluated within SlideWiki, a collaborative OpenCourseWare platform that provides educational materials in the form of presentation slides. To evaluate the relevance of the metrics, a questionnaire was conducted among OER expert users. The evaluation results indicate that the metrics address relevant quality aspects and can be used to determine overall OER quality.