Search Results

Now showing 1 - 7 of 7

Toward Representing Research Contributions in Scholarly Knowledge Graphs Using Knowledge Graph Cells

2020, Vogt, Lars, D'Souza, Jennifer, Stocker, Markus, Auer, Sören

There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. Toward this end, in this work, we propose a novel semantic data model for modeling the contribution of scientific investigations. Our model, i.e. the Research Contribution Model (RCM), includes a schema of pertinent concepts highlighting six core information units, viz. Objective, Method, Activity, Agent, Material, and Result, on which the contribution hinges. It comprises bottom-up design considerations made from three scientific domains, viz. Medicine, Computer Science, and Agriculture, which we highlight as case studies. For its implementation in a knowledge graph application, we introduce the idea of building blocks called Knowledge Graph Cells (KGC), which provide the following characteristics: (1) they limit the expressibility of ontologies to what is relevant in a knowledge graph regarding specific concepts on the theme of research contributions; (2) they are expressible via ABox and TBox expressions; (3) they enforce a certain level of data consistency by ensuring that a uniform modeling scheme is followed through rules and input controls; (4) they organize the knowledge graph into named graphs; (5) they provide information for the front end for displaying the knowledge graph in a human-readable form such as HTML pages; and (6) they can be seamlessly integrated into any existing publishing process that supports form-based input, abstracting its semantic technicalities, including RDF semantification, from the user. Thus the RCM joins the trend of existing work toward enhanced digitalization of scholarly publications, enabled by RDF semantification as a knowledge graph, fostering the evolution of scholarly publications beyond written text.
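
As a rough illustration of how a single contribution could be recorded along the six core information units described above, the following sketch serializes one hypothetical contribution as RDF with rdflib; the namespace, property names, and example values are assumptions of this sketch, not the published RCM vocabulary or KGC machinery.

```python
# Minimal sketch: one research contribution described with the six information
# units (Objective, Method, Activity, Agent, Material, Result) as RDF triples.
# The namespace and property names are illustrative, not the RCM vocabulary.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/rcm/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

contribution = EX["contribution-1"]
g.add((contribution, EX.objective, Literal("Evaluate drug X against disease Y")))
g.add((contribution, EX.method,    Literal("Randomized controlled trial")))
g.add((contribution, EX.activity,  Literal("Patient recruitment and dosing")))
g.add((contribution, EX.agent,     Literal("Clinical research team")))
g.add((contribution, EX.material,  Literal("Drug X, placebo")))
g.add((contribution, EX.result,    Literal("Significant symptom reduction")))

print(g.serialize(format="turtle"))
```

In a full KGC setup, such a graph would additionally be organized into named graphs and constrained by rules and input controls, which this sketch omits.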


Representing Semantified Biological Assays in the Open Research Knowledge Graph

2020, Anteghini, Marco, D'Souza, Jennifer, Martins dos Santos, Vitor A.P., Auer, Sören, Ishita, Emi, Pang, Natalie Lee San, Zhou, Lihong

In the biotechnology and biomedical domains, recent text mining efforts advocate for machine-interpretable and, preferably, semantified documentation formats of laboratory processes. This includes wet-lab protocols, (in)organic materials synthesis reactions, genetic manipulations, and procedures for faster computer-mediated analysis and predictions. Herein, we present our work on the representation of semantified bioassays in the Open Research Knowledge Graph (ORKG). In particular, we describe a work-in-progress semantification system to generate, automatically and quickly, the critical mass of semantified bioassay data needed to foster a consistent user audience that adopts the ORKG for recording their bioassays, and to facilitate the organisation of research according to FAIR principles.


NLPContributions: An Annotation Scheme for Machine Reading of Scholarly Contributions in Natural Language Processing Literature

2020, D'Souza, Jennifer, Auer, Sören

We describe an annotation initiative to capture the scholarly contributions in natural language processing (NLP) articles, particularly for articles that discuss machine learning (ML) approaches for various information extraction tasks. We develop the annotation task based on a pilot annotation exercise on 50 NLP-ML scholarly articles presenting contributions to five information extraction tasks: 1. machine translation, 2. named entity recognition, 3. question answering, 4. relation classification, and 5. text classification. In this article, we describe the outcomes of this pilot annotation phase. Through the exercise we have obtained an annotation methodology and found ten core information units that reflect the contribution of the NLP-ML scholarly investigations. The resulting annotation scheme we developed based on these information units is called NLPContributions. The overarching goal of our endeavor is four-fold: 1) to find a systematic set of patterns of subject-predicate-object statements for the semantic structuring of scholarly contributions that are more or less generically applicable for NLP-ML research articles; 2) to apply the discovered patterns in the creation of a larger annotated dataset for training machine readers [18] of research contributions; 3) to ingest the dataset into the Open Research Knowledge Graph (ORKG) infrastructure as a showcase for creating user-friendly state-of-the-art overviews; 4) to integrate the machine readers into the ORKG to assist users in the manual curation of their respective article contributions. We envision that the NLPContributions methodology engenders a wider discussion on the topic toward its further refinement and development. Our pilot annotated dataset of 50 NLP-ML scholarly articles according to the NLPContributions scheme is openly available to the research community at https://doi.org/10.25835/0019761.
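
To make the idea of subject-predicate-object statements concrete, the sketch below decomposes one hypothetical NLP-ML contribution into such statements; the unit and property names echo the abstract's terminology, but the exact scheme, like the values, is illustrative only and not the published NLPContributions scheme.

```python
# Hypothetical structuring of one NLP-ML contribution into
# subject-predicate-object statements; names and values are made up.
contribution = [
    ("Contribution", "hasResearchProblem", "named entity recognition"),
    ("Contribution", "hasModel",           "BiLSTM-CRF tagger"),
    ("Model",        "usesDataset",        "CoNLL-2003"),
    ("Model",        "hasResult",          "F1 score"),
    ("F1 score",     "hasValue",           "91.2"),
]

# Print each statement as subject --predicate--> object.
for s, p, o in contribution:
    print(f"{s} --{p}--> {o}")
```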


Eigenfactor

2021, Fraumann, Grischa, D'Souza, Jennifer, Holmberg, Kim

The Eigenfactor™ is a journal metric that was developed by Bergstrom and his colleagues at the University of Washington. They invented the Eigenfactor as a response to the criticism against the use of simple citation counts. The Eigenfactor makes use of the network structure of citations, i.e. citations between journals, and establishes the importance, influence, or impact of a journal based on its location in a network of journals. The importance is defined based on the number of citations between journals. As such, the Eigenfactor algorithm is based on eigenvector centrality. While journal-based metrics have been criticized, the Eigenfactor has also been suggested as an alternative in the widely used San Francisco Declaration on Research Assessment (DORA).
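
As a numerical illustration of the underlying idea, the sketch below computes plain eigenvector centrality on a tiny, made-up journal-to-journal citation matrix via power iteration; the actual Eigenfactor computation differs in its details (e.g. how self-citations and citation windows are handled), so this is not the published algorithm.

```python
import numpy as np

# Toy journal-to-journal citation counts: C[i, j] = citations from journal j
# to journal i (columns are the citing journals). Values are made up.
C = np.array([
    [0, 4, 1],
    [2, 0, 3],
    [1, 2, 0],
], dtype=float)

# Column-normalize so each citing journal distributes its influence as weights.
W = C / C.sum(axis=0)

# Power iteration: repeatedly propagate importance through the citation network
# until the importance vector stabilizes.
v = np.ones(W.shape[0]) / W.shape[0]
for _ in range(100):
    v = W @ v
    v = v / v.sum()

print("Relative journal importance:", np.round(v, 3))
```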


Domain-Independent Extraction of Scientific Concepts from Research Articles

2020, Brack, Arthur, D'Souza, Jennifer, Hoppe, Anett, Auer, Sören, Ewerth, Ralph, Jose, Joemon M., Yilmaz, Emine, Magalhães, João, Castells, Pablo, Ferro, Nicola, Silva, Mário J., Martins, Flávio

We examine the novel task of domain-independent scientific concept extraction from abstracts of scholarly articles and present two contributions. First, we suggest a set of generic scientific concepts that have been identified in a systematic annotation process. This set of concepts is utilised to annotate a corpus of scientific abstracts from 10 domains of Science, Technology and Medicine at the phrasal level in a joint effort with domain experts. The resulting dataset is used in a set of benchmark experiments to (a) provide baseline performance for this task and (b) examine the transferability of concepts between domains. Second, we present a state-of-the-art deep learning baseline. Further, we propose an active learning strategy for an optimal selection of instances from among the various domains in our data. The experimental results show that (1) a substantial agreement is achievable by non-experts after consultation with domain experts, (2) the baseline system achieves a fairly high F1 score, and (3) active learning enables us to nearly halve the amount of required training data.
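
The kind of pool-based uncertainty sampling that an active learning setup like the one mentioned above typically relies on can be sketched as follows; the classifier, synthetic data, and selection budget are assumptions of this sketch, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for annotated instances (e.g. candidate concept phrases).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=50, replace=False))  # small seed set
pool = [i for i in range(len(X)) if i not in set(labeled)]

clf = LogisticRegression(max_iter=1000)
for _ in range(10):                       # 10 acquisition rounds
    clf.fit(X[labeled], y[labeled])
    # Uncertainty sampling: pick the pool instances whose top-class
    # probability is lowest, i.e. where the current model is least confident.
    confidence = clf.predict_proba(X[pool]).max(axis=1)
    query = [pool[i] for i in np.argsort(confidence)[:20]]
    labeled.extend(query)                 # simulate asking annotators for labels
    pool = [i for i in pool if i not in set(query)]

print(f"Labeled {len(labeled)} of {len(X)} instances")
```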


The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources

2020, D'Souza, Jennifer, Hoppe, Anett, Brack, Arthur, Jaradeh, Mohamad Yaser, Auer, Sören, Ewerth, Ralph

We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of the encyclopedic links and lexicographic senses returned by Babelfy for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts, as well as their semantic disambiguation, in a wide-ranging setting such as STEM is reasonable.
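
For a sense of what automatic extraction with BERT-based neural models looks like mechanically, the sketch below runs a generic pretrained token-classification model with the Hugging Face transformers library; the model name is a publicly available stand-in, not a model trained on STEM-ECR, whose own scientific concept classes differ from general-domain NER labels.

```python
from transformers import pipeline

# Generic pretrained NER model as a stand-in to show the mechanics; a model
# fine-tuned on STEM-ECR would predict its scientific concept classes instead.
ner = pipeline("token-classification",
               model="dslim/bert-base-NER",
               aggregation_strategy="simple")

abstract = ("We introduce a convolutional neural network for the segmentation "
            "of cardiac MRI scans, evaluated at Stanford University.")

# Print each detected span with its predicted class and confidence.
for ent in ner(abstract):
    print(ent["word"], ent["entity_group"], round(float(ent["score"]), 3))
```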


SciBERT-based Semantification of Bioassays in the Open Research Knowledge Graph

2020, Anteghini, Marco, D'Souza, Jennifer, Martins dos Santos, Vitor A.P., Auer, Sören

As a novel contribution to the problem of semantifying biological assays, in this paper, we propose a neural-network-based approach to automatically semantify, and thereby structure, unstructured bioassay text descriptions. Experimental evaluations, to this end, show promise, as the neural-based semantification significantly outperforms a naive frequency-based baseline approach. Specifically, the neural method attains 72% F1 versus 47% F1 from the frequency-based method. The work in this paper aligns with the current scholarly knowledge digitalization impetus, which aims to convert the long-standing document-based format of scholarly content into knowledge graphs (KGs). To this end, our selected data domain of bioassays is a prime candidate for structuring into KGs.
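
The sketch below only encodes a bioassay description with SciBERT from the Hugging Face hub; the classification head and training that produce semantified statements (and the reported 72% F1) are not shown, and the assay text is invented.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# SciBERT encoder; mapping the encoded description to semantified assay
# statements would require a task-specific head trained on labelled data,
# which is only hinted at here.
tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
enc = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

description = ("Inhibition of human acetylcholinesterase measured by "
               "spectrophotometric assay after 30 min incubation.")
inputs = tok(description, return_tensors="pt", truncation=True)
with torch.no_grad():
    hidden = enc(**inputs).last_hidden_state   # shape: (1, tokens, 768)

# Mean-pooled sentence vector that a downstream classifier could map to
# assay properties (e.g. target, format, endpoint).
vector = hidden.mean(dim=1)
print(vector.shape)
```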