Search Results

Now showing 1 - 10 of 127

EVENTSKG: A 5-Star Dataset of Top-Ranked Events in Eight Computer Science Communities

2019, Fathalla, Said, Lange, Christoph, Auer, Sören, Hitzler, Pascal, Fernández, Miriam, Janowicz, Krzysztof, Zaveri, Amrapali, Gray, Alasdair J.G., Lopez, Vanessa, Haller, Armin, Hammar, Karl

Metadata of scientific events has become increasingly available on the Web, albeit often as raw data in various formats that disregard its semantics and interlinking relations, which restricts the usability of this data for, e.g., subsequent analyses and reasoning. There is therefore a pressing need to represent this data semantically, i.e., as Linked Data. We present the new release of the EVENTSKG dataset, comprising comprehensive semantic descriptions of scientific events of eight computer science communities. Currently, EVENTSKG is a 5-star dataset containing metadata of 73 top-ranked event series (almost 2,000 events) established over the last five decades. The new release is a Linked Open Dataset adhering to an updated version of the Scientific Events Ontology, a reference ontology for event metadata representation, leading to richer and cleaner data. To facilitate the maintenance of EVENTSKG and to ensure its sustainability, it is coupled with a Java API that enables users to add and update event metadata without going into the details of the dataset's representation. We shed light on event characteristics by analyzing EVENTSKG data, which offers a flexible means of customization for better understanding the characteristics of renowned CS events.
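
As a flavor of how such a Linked Open Dataset can be consumed, the sketch below loads a local EVENTSKG dump with rdflib and runs a small SPARQL query. The file path, the SEO namespace IRI, and the class/property names are illustrative assumptions; the actual terms are defined by the Scientific Events Ontology.

```python
# A minimal sketch, assuming a local dump and illustrative IRIs: the
# SEO namespace, class, and property names below are placeholders, not
# necessarily the ontology's actual terms.
from rdflib import Graph

g = Graph()
g.parse("eventskg.ttl", format="turtle")  # path to a dataset dump (assumed)

# List event series and their acronyms (IRIs assumed for illustration).
query = """
PREFIX seo: <http://sda.tech/SEOontology/>
SELECT ?series ?acronym WHERE {
  ?series a seo:EventSeries ;
          seo:acronym ?acronym .
}
LIMIT 10
"""
for row in g.query(query):
    print(row.series, row.acronym)
```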


Temporal Role Annotation for Named Entities

2018, Koutraki, Maria, Bakhshandegan-Moghaddam, Farshad, Sack, Harald, Fensel, Anna, de Boer, Victor, Pellegrini, Tassilo, Kiesling, Elmar, Haslhofer, Bernhard, Hollink, Laura, Schindler, Alexander

Natural language understanding tasks are key to extracting structured and semantic information from text. One of the most challenging problems in natural language is ambiguity, and resolving it requires context, including temporal information. This paper focuses on the task of extracting temporal roles from text, e.g., CEO of an organization or head of a state. A temporal role has a domain, which may resolve to different entities depending on the context and especially on temporal information, e.g., CEO of Microsoft in 2000. We focus on temporal role extraction as a precursor for temporal role disambiguation. We propose a structured prediction approach based on Conditional Random Fields (CRF) to annotate temporal roles in text, relying on a rich feature set that extracts syntactic and semantic information from text. We perform an extensive evaluation of our approach on two datasets. For the first dataset, we extract nearly 400k instances from Wikipedia through distant supervision, whereas the second dataset is a manually curated ground truth of 200 instances extracted from a sample of The New York Times (NYT) articles. Finally, the proposed approach is compared against baselines, showing significant improvements on both datasets.
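
A minimal sketch of this kind of CRF-based sequence annotation, using sklearn-crfsuite. The BIO label scheme and the handful of token features are illustrative assumptions, not the paper's actual feature set.

```python
# A minimal sketch with sklearn-crfsuite; the BIO labels and token
# features are illustrative assumptions, not the paper's feature set.
import sklearn_crfsuite

def token_features(sent, i):
    word = sent[i]
    return {
        "lower": word.lower(),
        "is_title": word.istitle(),   # surfaces names like "Microsoft"
        "is_digit": word.isdigit(),   # surfaces years like "2000"
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# Toy training data: one sentence tagged with BIO role/time labels.
sents = [["Bill", "Gates", "was", "CEO", "of", "Microsoft", "in", "2000"]]
labels = [["O", "O", "O", "B-ROLE", "I-ROLE", "I-ROLE", "O", "B-TIME"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))  # tags unseen text once trained on real data
```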


Unveiling Relations in the Industry 4.0 Standards Landscape Based on Knowledge Graph Embeddings

2020, Rivas, Ariam, Grangel-González, Irlán, Collarana, Diego, Lehmann, Jens, Vidal, Maria-Esther, Hartmann, Sven, Küng, Josef, Kotsis, Gabriele, Tjoa, A Min, Khalil, Ismail

Industry 4.0 (I4.0) standards and standardization frameworks have been proposed with the goal of empowering interoperability in smart factories. These standards enable the description and interaction of the main components, systems, and processes inside a smart factory. Due to the growing number of frameworks and standards, there is an increasing need for approaches that automatically analyze the landscape of I4.0 standards. Standardization frameworks classify standards according to their functions into layers and dimensions. However, similar standards can be classified differently across frameworks, thus producing interoperability conflicts among them. Semantic-based approaches that rely on ontologies and knowledge graphs have been proposed to represent standards, known relations among them, and their classification according to existing frameworks. Albeit informative, this structured modeling of the I4.0 landscape only provides the foundations for detecting interoperability issues; graph-based analytical methods able to exploit the knowledge encoded by these approaches are required to uncover alignments among standards. We study the relatedness among standards and frameworks based on community analysis to discover knowledge that helps to cope with interoperability conflicts between standards. We use knowledge graph embeddings to automatically create these communities, exploiting the meaning of the existing relationships. In particular, we focus on identifying similar standards, i.e., communities of standards, and analyze their properties to detect unknown relations. We empirically evaluate our approach on a knowledge graph of I4.0 standards using the Trans∗ family of embedding models for knowledge graph entities. Our results are promising and suggest that relations among standards can be detected accurately.
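
To make the pipeline concrete, the sketch below shows the two ingredients in miniature: a TransE-style plausibility score and KMeans clustering of entity embeddings into communities. The standard names and their embedding vectors are random stand-ins; in the paper the embeddings come from Trans∗ models trained on the I4.0 knowledge graph.

```python
# A sketch with random stand-in vectors: a TransE-style score plus
# KMeans clustering of entity embeddings into communities.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
standards = ["OPC-UA", "MQTT", "AutomationML", "STEP", "MTConnect"]
entity_emb = {s: rng.normal(size=50) for s in standards}  # stand-ins
rel_emb = rng.normal(size=50)  # stand-in for e.g. a relatedTo relation

def transe_score(h, r, t):
    """TransE plausibility: larger (less negative) means more plausible."""
    return -np.linalg.norm(h + r - t)

print(transe_score(entity_emb["OPC-UA"], rel_emb, entity_emb["MQTT"]))

# Communities of similar standards via KMeans over the embedding space.
X = np.stack([entity_emb[s] for s in standards])
communities = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for s, c in zip(standards, communities):
    print(s, "-> community", c)
```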


Building Scholarly Knowledge Bases with Crowdsourcing and Text Mining

2020, Stocker, Markus, Zhang, Chengzhi, Mayr, Philipp, Lu, Wei, Zhang, Yi

For centuries, scholarly knowledge has been buried in documents. While articles are well suited to conveying the story of scientific work to peers, they make it hard for machines to process scholarly knowledge. The recent proliferation of the scholarly literature and the increasing inability of researchers to digest, reproduce, and reuse its content are constant reminders that we urgently need a transformative digitalization of the scholarly literature. Building on the Open Research Knowledge Graph (http://orkg.org) as a concrete research infrastructure, in this talk we present how, using crowdsourcing and text mining, humans and machines can collaboratively build scholarly knowledge bases, i.e., systems that acquire, curate, and publish the data, information, and knowledge published in the scholarly literature in structured and semantic form. We discuss some key challenges that human and technical infrastructures face, as well as the possibilities scholarly knowledge bases enable.
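
As a toy illustration of the acquisition step, the sketch below turns a sentence from an abstract into a structured statement. The record shape, property name, and regex are illustrative assumptions, not the ORKG data model.

```python
# A toy acquisition sketch; the record shape, property name, and regex
# are illustrative assumptions, not the ORKG data model.
import re
from dataclasses import dataclass

@dataclass
class Statement:
    subject: str     # paper or contribution identifier
    predicate: str   # property, e.g. "evaluated_on"
    obj: str         # value mined from the text

def mine_dataset_mentions(paper_id: str, text: str) -> list[Statement]:
    # Naive pattern for "evaluated on <Name>" phrases.
    return [Statement(paper_id, "evaluated_on", m)
            for m in re.findall(r"evaluated on (?:the )?([A-Z]\w+)", text)]

print(mine_dataset_mentions(
    "paper:123", "The approach is evaluated on DBpedia."))
```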


Collaborative annotation and semantic enrichment of 3D media

2022, Rossenova, Lozana, Schubert, Zoe, Vock, Richard, Sohmen, Lucia, Günther, Lukas, Duchesne, Paul, Blümel, Ina, Aizawa, Akiko

A new FOSS (free and open source software) toolchain and associated workflow is being developed in the context of NFDI4Culture, a German consortium of research and cultural heritage institutions working towards a shared infrastructure for research data that meets the needs of 21st-century data creators, maintainers, and end users across the broad spectrum of the digital libraries and archives field and the digital humanities. This short paper and demo present how the integrated toolchain connects: 1) OpenRefine, for data reconciliation and batch upload; 2) Wikibase, for linked open data (LOD) storage; and 3) Kompakkt, for rendering and annotating 3D models. The presentation is aimed at librarians, digital curators, and data managers interested in learning how to manage research datasets containing 3D media and how to make them available within an open data environment with 3D-rendering and collaborative annotation features.
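
The sketch below illustrates the reconciliation idea that links steps 1) and 2): looking up a label against a Wikibase instance through the standard wbsearchentities action API module before batch upload. The instance URL is a placeholder.

```python
# A minimal reconciliation sketch; the instance URL is a placeholder,
# while wbsearchentities is the standard Wikibase action API module.
import requests

WIKIBASE_API = "https://example-wikibase.org/w/api.php"  # placeholder

def reconcile(label: str, lang: str = "en") -> list[dict]:
    params = {
        "action": "wbsearchentities",
        "search": label,
        "language": lang,
        "format": "json",
    }
    resp = requests.get(WIKIBASE_API, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json().get("search", [])  # hits: id, label, description

for hit in reconcile("bronze statue"):
    print(hit["id"], hit.get("label"), hit.get("description"))
```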


Translating the Concept of Goal Setting into Practice: What ‘else’ Does It Require than a Goal Setting Tool?

2020, Kismihók, Gábor, Zhao, Catherine, Schippers, Michaéla, Mol, Stefan, Harrison, Scott, Shehata, Shady, Lane, H. Chad, Zvacek, Susan, Uhomoibhi, James

This conceptual paper reviews the current status of goal setting in the area of technology-enhanced learning and education. Besides a brief literature review, three current projects on goal setting are discussed. The paper shows that the main barriers to goal-setting applications in education are not related to the technology, the available data, or the analytical methods, but rather to the human factor. The most important bottlenecks are the lack of students’ goal-setting skills and abilities, and the current curriculum design, which, especially in the observed higher education institutions, provides little support for goal-setting interventions.


Context-Based Entity Matching for Big Data

2020, Tasnim, Mayesha, Collarana, Diego, Graux, Damien, Vidal, Maria-Esther, Janev, Valentina, Graux, Damien, Jabeen, Hajira, Sallinger, Emanuel

In the Big Data era, where variety is the most dominant dimension, the RDF data model enables the creation and integration of actionable knowledge from heterogeneous data sources. However, the RDF data model allows for describing entities under various contexts; for example, people can be described in their demographic context as well as in their professional context. Context-aware description poses challenges during entity matching of RDF datasets: a match might not be valid in every context. To perform contextually relevant entity matching, the specific context under which a data-driven task, e.g., data integration, is performed must be taken into account. However, existing approaches only consider schema- and property-level mappings between data sources and prevent users from selecting contexts and conditions during a data integration process. We devise COMET, an entity matching technique that relies on both the knowledge stated in RDF vocabularies and a context-based similarity metric to map contextually equivalent RDF graphs. COMET follows a two-fold approach to solve the problem of entity matching in RDF graphs in a context-aware manner. In the first step, COMET computes similarity measures across RDF entities and resorts to the Formal Concept Analysis algorithm to map contextually equivalent RDF entities. Finally, COMET combines the results of the first step and executes a 1-1 perfect matching algorithm for matching RDF entities based on the combined scores. We empirically evaluate the performance of COMET on a testbed from DBpedia. The experimental results suggest that COMET accurately matches equivalent RDF graphs in a context-dependent manner.
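
The final 1-1 perfect matching step can be illustrated with the Hungarian algorithm, as in the sketch below. The entities and similarity scores are made-up placeholders; the paper combines context-based and semantic similarities to fill such a score matrix.

```python
# A sketch of the 1-1 matching step; entities and scores are made up.
# scipy's Hungarian solver minimizes cost, so the scores are negated.
import numpy as np
from scipy.optimize import linear_sum_assignment

left = ["dbr:Person_A", "dbr:Person_B", "dbr:Person_C"]
right = ["ex:p1", "ex:p2", "ex:p3"]

# Combined context + semantic similarity scores (illustrative values).
sim = np.array([[0.9, 0.2, 0.1],
                [0.3, 0.8, 0.4],
                [0.1, 0.3, 0.7]])

rows, cols = linear_sum_assignment(-sim)  # maximize total similarity
for i, j in zip(rows, cols):
    print(left[i], "<->", right[j], f"(score={sim[i, j]:.1f})")
```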


Labour Market Information Driven, Personalized, OER Recommendation System for Lifelong Learners

2020, Tavakoli, Mohammadreza, Mol, Stefan, Kismihók, Gábor, Lane, H. Chad, Zvacek, Susan, Uhomoibhi, James

In this paper, we propose a novel method to help lifelong learners access relevant OER-based learning content to master skills demanded on the labour market. Our software prototype 1) applies text classification and text mining methods to vacancy announcements to decompose jobs into meaningful skill components, which lifelong learners should target; and 2) creates a hybrid OER recommender system to suggest personalized learning content for learners to progress towards their skill targets. For a first evaluation of this prototype we focused on two job areas: Data Scientist and Mechanical Engineer. We applied our skill extractor approach and provided OER recommendations for learners targeting these jobs. We conducted in-depth, semi-structured interviews with 12 subject matter experts to learn how our prototype performs in terms of its objectives, logic, and contribution to learning. More than 150 recommendations were generated, and 76.9% of these recommendations were rated as useful by the interviewees. The interviews revealed that a personalized OER recommender system based on skills demanded by the labour market has the potential to improve the learning experience of lifelong learners.
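
A minimal sketch of the content-based core of such a recommender: matching a learner's skill target against OER descriptions via TF-IDF cosine similarity. The OER snippets and the skill string are made-up placeholders; the actual prototype is a hybrid system.

```python
# A sketch of the content-based matching core; OER snippets and the
# skill target are made-up placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

oers = [
    "Introduction to Python for data analysis and visualization",
    "Finite element methods for mechanical engineers",
    "Statistical learning and model evaluation in practice",
]
skill_target = ["python data analysis"]  # e.g. mined from vacancy texts

vec = TfidfVectorizer()
oer_matrix = vec.fit_transform(oers)
scores = cosine_similarity(vec.transform(skill_target), oer_matrix)[0]

# Rank OERs by similarity to the learner's skill target.
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(rank, f"{scores[idx]:.2f}", oers[idx])
```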


The Research Core Dataset (KDSF) in the Linked Data context

2019, Walther, Tatiana, Hauschke, Christian, Kasprzik, Anna, Sicilia, Miguel-Angel, Simons, Ed, Clements, Anna, de Castro, Pablo, Bergström, Johan

This paper describes our efforts to implement the Research Core Dataset (“Kerndatensatz Forschung”; KDSF) as an ontology in VIVO. KDSF is used in VIVO to record the required metadata on incoming data and to produce reports as an output. While both processes need an elaborate adaptation of the KDSF specification, this paper focuses on the adaptation of the KDSF basic data model for recording data in VIVO. In this context, the VIVO and KDSF ontologies were compared with respect to domain, syntax, structure, and granularity in order to identify correspondences and mismatches. To produce an alignment, different matching approaches were applied. Furthermore, we made the necessary modifications and extensions to KDSF classes and properties.
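
The label-based part of such an ontology comparison can be sketched with rdflib, as below: collect class labels from both ontologies and flag exact matches as alignment candidates. The file paths are placeholders, and the real alignment also considered domain, syntax, structure, and granularity.

```python
# A label-matching sketch with rdflib; file paths are placeholders.
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

def class_labels(path: str) -> dict[str, str]:
    g = Graph()
    g.parse(path)  # format inferred from the file extension
    return {str(lbl).lower(): str(cls)
            for cls in g.subjects(RDF.type, OWL.Class)
            for lbl in g.objects(cls, RDFS.label)}

kdsf = class_labels("kdsf.owl")  # placeholder path
vivo = class_labels("vivo.owl")  # placeholder path

# Exact label matches are only candidates for the alignment.
for label in sorted(set(kdsf) & set(vivo)):
    print(f"candidate match: {kdsf[label]} <-> {vivo[label]} ({label})")
```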


Contextual Language Models for Knowledge Graph Completion

2021, Biswas, Russa, Sofronova, Radina, Alam, Mehwish, Sack, Harald, Ali, Mehdi, Groth, Paul, Hitzler, Pascal, Lehmann, Jens, Paulheim, Heiko, Rettinger, Achim, Sadeghi, Afshin, Tresp, Volker

Knowledge Graphs (KGs) have become the backbone of various machine learning based applications over the past decade. However, KGs are often incomplete and inconsistent. Several representation learning based approaches have been introduced to complete the missing information in KGs. Besides, Neural Language Models (NLMs) have gained huge momentum in NLP applications. However, exploiting contextual NLMs to tackle the Knowledge Graph Completion (KGC) task is still an open research problem. In this paper, a GPT-2 based KGC model is proposed and evaluated on two benchmark datasets. The initial results obtained from fine-tuning the GPT-2 model for triple classification underline the importance of using NLMs for KGC. The impact of contextual language models on KGC is also discussed.
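
A minimal sketch of GPT-2-based triple classification with Hugging Face transformers: verbalize a triple and score it with a sequence classification head. The verbalization is naive and the head is untrained here, so the output is meaningless until fine-tuned on a KGC benchmark.

```python
# A sketch of GPT-2 triple classification with Hugging Face
# transformers; the verbalization is naive and the classification head
# is untrained, so the output is meaningless until fine-tuned.
import torch
from transformers import GPT2ForSequenceClassification, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # GPT-2 has no pad token by default

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tok.pad_token_id

triple = ("Berlin", "capital of", "Germany")
text = " ".join(triple)  # naive triple verbalization

inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2): [invalid, valid]
print("P(valid) before fine-tuning:",
      torch.softmax(logits, dim=-1)[0, 1].item())
```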