Browsing by Author "Oelen, Allard"
Now showing 1 - 7 of 7
- An Approach to Evaluate User Interfaces in a Scholarly Knowledge Communication Domain (Cham : Springer, 2023). Obrezkov, Denis; Oelen, Allard; Auer, Sören; Abdelnour-Nocera, José L.; Lárusdóttir, Marta; Petrie, Helen; Piccinno, Antonio; Winckler, Marco.
  The amount of research articles produced every day is overwhelming: scholarly knowledge is becoming harder to communicate and easier to lose. A possible solution is to represent the information in knowledge graphs: structures that represent knowledge as networks of entities, their semantic types, and the relationships between them. But this solution has its own drawback: given its very specific task, it requires new methods for designing and evaluating user interfaces. In this paper, we propose an approach for user interface evaluation in the knowledge communication domain. We base our methodology on the well-established Cognitive Walkthrough approach but employ a different set of questions, tailoring the method towards domain-specific needs. We demonstrate our approach on a scholarly knowledge graph implementation called the Open Research Knowledge Graph (ORKG).
- Creating a Scholarly Knowledge Graph from Survey Article Tables (Cham : Springer, 2020). Oelen, Allard; Stocker, Markus; Auer, Sören; Ishita, Emi; Pang, Natalie Lee San; Zhou, Lihong.
  Due to the lack of structure, scholarly knowledge remains hardly accessible for machines. Scholarly knowledge graphs have been proposed as a solution. Creating such a knowledge graph requires manual effort and domain experts, and is therefore time-consuming and cumbersome. In this work, we present a human-in-the-loop methodology used to build a scholarly knowledge graph leveraging literature survey articles. Survey articles often contain manually curated, high-quality tabular information that summarizes findings published in the scientific literature. Consequently, survey articles are an excellent resource for generating a scholarly knowledge graph. The presented methodology consists of five steps, in which tables and references are extracted from PDF articles, and the tables are formatted and finally ingested into the knowledge graph. To evaluate the methodology, 92 survey articles, containing 160 survey tables, have been imported into the graph. In total, 2626 papers have been added to the knowledge graph using the presented methodology. The results demonstrate the feasibility of our approach, but also indicate that manual effort is required and thus underscore the important role of human experts.
- Crowdsourcing Scholarly Discourse Annotations (New York, NY : ACM, 2021). Oelen, Allard; Stocker, Markus; Auer, Sören.
  The number of scholarly publications grows steadily every year, and it becomes harder to find, assess, and compare scholarly knowledge effectively. Scholarly knowledge graphs have the potential to address these challenges. However, creating such graphs remains a complex task. We propose a method to crowdsource structured scholarly knowledge from paper authors with a web-based user interface supported by artificial intelligence. The interface enables authors to select key sentences for annotation. It integrates multiple machine learning algorithms to assist authors during the annotation, including class recommendation and key sentence highlighting. We envision that the interface is integrated in paper submission processes, for which we define three main task requirements. We evaluated the interface with a user study in which participants were assigned the task of annotating one of their own articles. With the resulting data, we determined whether the participants were able to perform the task successfully. Furthermore, we evaluated the interface’s usability and the participants’ attitude towards the interface with a survey. The results suggest that sentence annotation is a feasible task for researchers and that they do not object to annotating their articles during the submission process.
- Generate FAIR Literature Surveys with Scholarly Knowledge Graphs (New York City, NY : Association for Computing Machinery, 2020). Oelen, Allard; Jaradeh, Mohamad Yaser; Stocker, Markus; Auer, Sören.
  Reviewing scientific literature is a cumbersome, time-consuming, but crucial activity in research. Leveraging a scholarly knowledge graph, we present a methodology and a system for comparing scholarly literature, in particular research contributions describing the addressed problem, utilized materials, employed methods, and yielded results. The system can be used by researchers to quickly get familiar with existing work in a specific research domain (e.g., a concrete research question or hypothesis). Additionally, it can be used to publish literature surveys following the FAIR Data Principles. The methodology to create a research contribution comparison consists of multiple tasks, specifically: (a) finding similar contributions, (b) aligning contribution descriptions, (c) visualizing, and finally (d) publishing the comparison. The methodology is implemented within the Open Research Knowledge Graph (ORKG), a scholarly infrastructure that enables researchers to collaboratively describe, find, and compare research contributions. We evaluate the implementation using data extracted from published review articles. The evaluation also addresses the FAIRness of comparisons published with the ORKG.
- Open Research Knowledge Graph (Goettingen : Cuvillier Verlag, 2024-05-07). Auer, Sören; Ilangovan, Vinodh; Stocker, Markus; Tiwari, Sanju; Vogt, Lars; Bernard-Verdier, Maud; D'Souza, Jennifer; Fadel, Kamel; Farfar, Kheir Eddine; Göpfert, Jan; Haris, Muhammad; Heger, Tina; Hussein, Hassan; Jaradeh, Yaser; Jeschke, Jonathan M.; Jiomekong, Azanzi; Kabongo, Salomon; Karras, Oliver; Kuckertz, Patrick; Kullamann, Felix; Martin, Emily A.; Oelen, Allard; Perez-Alvarez, Ricardo; Prinz, Manuel; Snyder, Lauren D.; Stolten, Detlef; Weinand, Jann M.
  As we mark the fifth anniversary of the alpha release of the Open Research Knowledge Graph (ORKG), it is both timely and exhilarating to celebrate the significant strides made in this pioneering project. We designed this book as a tribute to the evolution and achievements of the ORKG and as a practical guide encapsulating its essence in a form that resonates with both the general reader and the specialist. The ORKG has opened a new era in the way scholarly knowledge is curated, managed, and disseminated. By transforming vast arrays of unstructured narrative text into structured, machine-processable knowledge, the ORKG has emerged as an essential service with sophisticated functionalities. Over the past five years, our team has developed the ORKG into a vibrant platform that enhances the accessibility and visibility of scientific research. This book serves as a non-technical guide and a comprehensive reference for new and existing users that outlines the ORKG’s approach, technologies, and its role in revolutionizing scholarly communication. By elucidating how the ORKG facilitates the collection, enhancement, and sharing of knowledge, we invite readers to appreciate the value and potential of this groundbreaking digital tool presented in a tangible form. Looking ahead, we are thrilled to announce the upcoming unveiling of promising new features and tools at the fifth-year celebration of the ORKG’s alpha release. These innovations are set to redefine the boundaries of machine assistance enabled by research knowledge graphs. Among these enhancements, you can expect more intuitive interfaces that simplify the user experience and enhanced machine learning models that improve the automation and accuracy of data curation. We also included a glossary that clarifies key terms and concepts associated with the ORKG, to ensure that all readers, regardless of their technical background, can fully engage with and understand the content presented. This book transcends the boundaries of a typical technical report. We crafted it as an inspiration for future applications, a testament to the ongoing evolution in scholarly communication that invites further collaboration and innovation. Let this book serve as both your guide and your invitation to explore the ORKG as it continues to grow and shape the landscape of scientific inquiry and communication.
- Quality evaluation of open educational resources (Cham : Springer, 2020). Elias, Mirette; Oelen, Allard; Tavakoli, Mohammadreza; Kismihok, Gábor; Auer, Sören; Alario-Hoyos, Carlos; Rodríguez-Triana, María Jesús; Scheffel, Maren; Arnedillo-Sánchez, Inmaculada; Dennerlein, Sebastian Maximilian.
  Open Educational Resources (OER) are free, openly licensed educational materials widely used for learning. OER quality assessment has become essential to support learners and teachers in finding high-quality OERs, and to enable online learning repositories to improve their OERs. In this work, we establish a set of evaluation metrics that assess OER quality in OER authoring tools. These metrics provide guidance to OER content authors to create high-quality content. The metrics were implemented and evaluated within SlideWiki, a collaborative OpenCourseWare platform that provides educational materials in presentation-slide format. To evaluate the relevance of the metrics, a questionnaire was conducted among OER expert users. The evaluation results indicate that the metrics address relevant quality aspects and can be used to determine the overall OER quality.
- TinyGenius: Intertwining natural language processing with microtask crowdsourcing for scholarly knowledge graph creation (New York, NY, United States : Association for Computing Machinery, 2022). Oelen, Allard; Stocker, Markus; Auer, Sören; Aizawa, Akiko.
  As the number of published scholarly articles grows steadily each year, new methods are needed to organize scholarly knowledge so that it can be more efficiently discovered and used. Natural Language Processing (NLP) techniques are able to autonomously process scholarly articles at scale and to create machine-readable representations of the article content. However, autonomous NLP methods are by far not sufficiently accurate to create a high-quality knowledge graph, yet quality is crucial for the graph to be useful in practice. We present TinyGenius, a methodology to validate NLP-extracted scholarly knowledge statements using microtasks performed with crowdsourcing. The scholarly context in which the crowd workers operate poses multiple challenges. The explainability of the employed NLP methods is crucial to provide context that supports the decision process of crowd workers. We employed TinyGenius to populate a paper-centric knowledge graph using five distinct NLP methods. In the end, the resulting knowledge graph serves as a digital library for scholarly articles.