Search Results

Now showing 1 - 10 of 438

Handreichung Technik und Infrastrukturen

2023, Eichler, Frederik, Eppelin, Anita, Kampkaspar, Dario, Schrader, Antonia C., Söllner, Konstanze, Vierkant, Paul, Withanage, Dulip, Wrzesinski, Marcel

In this guide, we present a range of technical resources that can support editorial work. We recommend using software and systems that foster the shift towards an open, low-barrier, and sustainable research culture; above all, this means using open-source software. Our recommendations have a limited shelf life: service providers, software, and projects may no longer be available at a later date. Moreover, infrastructure institutions in particular are embedded in the federal German science system, which exposes them to certain uncertainties.


Collaborative annotation and semantic enrichment of 3D media

2022, Rossenova, Lozana, Schubert, Zoe, Vock, Richard, Sohmen, Lucia, Günther, Lukas, Duchesne, Paul, Blümel, Ina, Aizawa, Akiko

A new FOSS (free and open source software) toolchain and associated workflow are being developed in the context of NFDI4Culture, a German consortium of research and cultural heritage institutions working towards a shared infrastructure for research data that meets the needs of 21st-century data creators, maintainers, and end users across the broad spectrum of the digital libraries and archives field and the digital humanities. This short paper and demo present how the integrated toolchain connects: 1) OpenRefine - for data reconciliation and batch upload; 2) Wikibase - for linked open data (LOD) storage; and 3) Kompakkt - for rendering and annotating 3D models. The presentation is aimed at librarians, digital curators, and data managers interested in learning how to manage research datasets containing 3D media, and how to make them available within an open data environment with 3D rendering and collaborative annotation features.
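As a rough illustration of the Wikibase step in such a toolchain, a reconciled record could be expressed as simple (subject, property, value) statements before upload. This sketch is purely illustrative: the property names, item IDs, and the Kompakkt entity URL are invented placeholders, not the toolchain's actual vocabulary.

```python
# Hypothetical sketch: turning a reconciled cultural-heritage record into
# Wikibase-style statements, including a link to a 3D model.
# All property IDs and the URL pattern below are illustrative, not real.

def to_statements(record: dict) -> list[tuple[str, str, str]]:
    """Map a flat record to (subject, property, value) statements."""
    qid = record["qid"]  # item ID assigned after OpenRefine reconciliation
    return [
        (qid, "label", record["title"]),
        (qid, "P_CREATOR", record["creator"]),        # hypothetical property
        (qid, "P_3D_MODEL", record["kompakkt_url"]),  # hypothetical property
    ]

record = {
    "qid": "Q42",
    "title": "Scan of a bronze statuette",
    "creator": "Q7186",
    "kompakkt_url": "https://kompakkt.de/entity/abc123",  # invented ID
}
stmts = to_statements(record)
```

In a real workflow these statements would be pushed through a batch-upload tool rather than constructed by hand.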


The Research Core Dataset (KDSF) in the Linked Data context

2019, Walther, Tatiana, Hauschke, Christian, Kasprzik, Anna, Sicilia, Miguel-Angel, Simons, Ed, Clements, Anna, de Castro, Pablo, Bergström, Johan

This paper describes our efforts to implement the Research Core Dataset (“Kerndatensatz Forschung”; KDSF) as an ontology in VIVO. KDSF is used in VIVO to record the required metadata on incoming data and to produce reports as an output. While both processes need an elaborate adaptation of the KDSF specification, this paper focuses on the adaptation of the KDSF basic data model for recording data in VIVO. In this context, the VIVO and KDSF ontologies were compared with respect to domain, syntax, structure, and granularity in order to identify correspondences and mismatches. To produce an alignment, different matching approaches were applied. Furthermore, we made the necessary modifications and extensions to KDSF classes and properties.
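One of the simpler matching approaches for such an alignment can be sketched as label-based string similarity between class names. The class labels below are invented examples, not the actual KDSF or VIVO vocabularies, and real alignments also weigh structure and semantics, not labels alone.

```python
# Illustrative sketch of label-based ontology matching, assuming two flat
# lists of class labels. Labels are invented examples.
from difflib import SequenceMatcher

def align(labels_a, labels_b, threshold=0.6):
    """Return candidate (a, b, score) pairs whose labels are similar enough."""
    pairs = []
    for a in labels_a:
        for b in labels_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

kdsf_like = ["Publication", "ResearchProject", "Patent"]   # hypothetical
vivo_like = ["AcademicArticle", "Project", "Patent"]       # hypothetical
matches = align(kdsf_like, vivo_like)
```

Pairs that clear the threshold ("Patent"/"Patent", "ResearchProject"/"Project") become candidate correspondences for manual review; genuinely different labels such as "Publication"/"AcademicArticle" are the mismatches that need structural or semantic matching instead.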


Building Scholarly Knowledge Bases with Crowdsourcing and Text Mining

2020, Stocker, Markus, Zhang, Chengzhi, Mayr, Philipp, Lu, Wei, Zhang, Yi

For centuries, scholarly knowledge has been buried in documents. While articles are well suited to conveying the story of scientific work to peers, they make it hard for machines to process scholarly knowledge. The recent proliferation of the scholarly literature and the increasing inability of researchers to digest, reproduce, and reuse its content are constant reminders that we urgently need a transformative digitalization of the scholarly literature. Building on the Open Research Knowledge Graph (http://orkg.org) as a concrete research infrastructure, in this talk we present how humans and machines can use crowdsourcing and text mining to collaboratively build scholarly knowledge bases, i.e. systems that acquire, curate, and publish the data, information, and knowledge published in the scholarly literature in structured, semantic form. We discuss some key challenges that human and technical infrastructures face, as well as the possibilities that scholarly knowledge bases enable.


EVENTSKG: A 5-Star Dataset of Top-Ranked Events in Eight Computer Science Communities

2019, Fathalla, Said, Lange, Christoph, Auer, Sören, Hitzler, Pascal, Fernández, Miriam, Janowicz, Krzysztof, Zaveri, Amrapali, Gray, Alasdair J.G., Lopez, Vanessa, Haller, Armin, Hammar, Karl

Metadata of scientific events has become increasingly available on the Web, albeit often as raw data in various formats that disregard its semantics and interlinking relations. This restricts the usability of the data for, e.g., subsequent analyses and reasoning. Therefore, there is a pressing need to represent this data semantically, i.e., as Linked Data. We present the new release of the EVENTSKG dataset, comprising comprehensive semantic descriptions of scientific events in eight computer science communities. Currently, EVENTSKG is a 5-star dataset containing metadata of 73 top-ranked event series (almost 2,000 events) established over the last five decades. The new release is a Linked Open Dataset adhering to an updated version of the Scientific Events Ontology, a reference ontology for representing event metadata, leading to richer and cleaner data. To facilitate the maintenance of EVENTSKG and to ensure its sustainability, it is coupled with a Java API that enables users to add or update event metadata without going into the details of the dataset's representation. Finally, we shed light on the characteristics of renowned CS events by analyzing EVENTSKG data.
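To give a feel for what such semantic event descriptions look like, here is a minimal sketch of one event edition as Linked Data-style triples. The URIs and property names are invented placeholders, not actual Scientific Events Ontology terms.

```python
# Minimal sketch: one event edition as (subject, predicate, object) triples.
# The namespace and property names are invented for illustration.
EX = "http://example.org/events/"

def event_triples(acronym: str, year: int, series: str):
    """Describe a single edition of an event series as triples."""
    s = f"{EX}{acronym.lower()}{year}"
    return [
        (s, "rdf:type", "ex:EventEdition"),
        (s, "ex:acronym", acronym),
        (s, "ex:year", str(year)),
        (s, "ex:inSeries", f"{EX}series/{series.lower()}"),
    ]

triples = event_triples("ISWC", 2018, "ISWC")
```

Linking each edition to its series URI is what lets queries aggregate per-series metadata across decades of events.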


Labour Market Information Driven, Personalized, OER Recommendation System for Lifelong Learners

2020, Tavakoli, Mohammadreza, Mol, Stefan, Kismihók, Gábor, Lane, H. Chad, Zvacek, Susan, Uhomoibhi, James

In this paper, we propose a novel method to help lifelong learners access relevant OER-based learning content to master skills demanded on the labour market. Our software prototype 1) applies text classification and text mining methods to vacancy announcements to decompose jobs into meaningful skill components which lifelong learners should target; and 2) creates a hybrid OER recommender system to suggest personalized learning content for learners to progress towards their skill targets. For the first evaluation of this prototype we focused on two job areas: Data Scientist and Mechanical Engineer. We applied our skill extractor approach and provided OER recommendations for learners targeting these jobs. We conducted in-depth, semi-structured interviews with 12 subject matter experts to learn how our prototype performs in terms of its objectives, logic, and contribution to learning. More than 150 recommendations were generated, and 76.9% of them were rated as useful by the interviewees. The interviews revealed that a personalized OER recommender system based on skills demanded by the labour market has the potential to improve the learning experience of lifelong learners.
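The two-step pipeline can be sketched in miniature: a fixed skill vocabulary stands in for the text classification step, and skill overlap stands in for the content-based part of the hybrid recommender. The skill list and OER catalogue below are invented; this is not the authors' prototype.

```python
# Toy sketch of the two-step idea: extract skill keywords from a vacancy,
# then rank OERs by how many target skills they cover.
# Skill vocabulary and OER catalogue are invented for illustration.

SKILLS = {"python", "statistics", "machine learning", "cad", "thermodynamics"}

def extract_skills(vacancy_text: str) -> set[str]:
    """Keyword stand-in for the text classification / mining step."""
    text = vacancy_text.lower()
    return {s for s in SKILLS if s in text}

def recommend(target_skills: set[str], oers: dict[str, set[str]], top_n=2):
    """Rank OERs by skill coverage (content-based part of a hybrid
    recommender, heavily simplified)."""
    ranked = sorted(oers.items(),
                    key=lambda kv: len(kv[1] & target_skills),
                    reverse=True)
    return [title for title, covered in ranked[:top_n]
            if covered & target_skills]

vacancy = "Data Scientist needed: Python, statistics, machine learning."
skills = extract_skills(vacancy)
recs = recommend(skills, {
    "Intro to Python": {"python"},
    "Statistical Inference": {"statistics"},
    "Gear Design": {"cad"},
})
```

A real hybrid recommender would also fold in collaborative signals and OER quality features rather than pure skill overlap.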


Renegotiating Open-Access-Licences for Scientific Films

2016, Brehm, Elke

Scientific publishing is no longer limited to text; it increasingly extends to digital audio-visual media as well. Services are therefore needed for publishing these media in portals designed for scientific content, oriented towards the demands of scientists, and compliant with the requirements of open-access licences. Among others, it is the goal of the Competence Centre for Non-Textual Materials at TIB to collect, archive, and provide access to scientific audio-visual media in the TIB AV-Portal under the best possible (open) conditions. This applies both to older films, such as the film collection of the former IWF Knowledge and Media gGmbH i. L. (IWF), and to new films. Even though the acquisition of the necessary rights for audio-visual media is complex, the renegotiation of open-access licences for older films has been very successful. This paper focuses on the role of Open Access in TIB's licensing strategy for scientific films, TIB's experience in this respect, and the presentation in the AV-Portal, but also touches upon prerequisites and procedures for the use of orphan works.


Die Rolle der ORCID iD in der Wissenschaftskommunikation: Der Beitrag des ORCID-Deutschland-Konsortiums und das ORCID-DE-Projekt

2019, Dreyer, Britta, Hagemann-Wilholt, Stephanie, Vierkant, Paul, Strecker, Dorothea, Glagla-Dietz, Stephanie, Summann, Friedrich, Pampel, Heinz, Burger, Marleen

ORCID's services, such as the unambiguous linking of researchers and their research output, form the basis of modern scholarly communication. The ORCID Germany Consortium offers a reduced ORCID premium membership fee and supports its members during ORCID integration. Services include a dialogue platform that provides German-language information and additional support. Another major success factor is an all-encompassing communication strategy: institutions implementing ORCID can draw on established organizational communication channels. Together, and with the support of the ORCID DE project, they contribute significantly to the successful adoption of ORCID in Germany.


Translating the Concept of Goal Setting into Practice: What ‘else’ Does It Require than a Goal Setting Tool?

2020, Kismihók, Gábor, Zhao, Catherine, Schippers, Michaéla, Mol, Stefan, Harrison, Scott, Shehata, Shady, Lane, H. Chad, Zvacek, Susan, Uhomoibhi, James

This conceptual paper reviews the current status of goal setting in technology-enhanced learning and education. Alongside a brief literature review, three current projects on goal setting are discussed. The paper shows that the main barriers to goal-setting applications in education are not related to the technology, the available data, or the analytical methods, but rather to the human factor. The most important bottlenecks are students' lack of goal-setting skills and abilities, and the current curriculum design, which, especially in the observed higher education institutions, provides little support for goal-setting interventions.


Context-Based Entity Matching for Big Data

2020, Tasnim, Mayesha, Collarana, Diego, Graux, Damien, Vidal, Maria-Esther, Janev, Valentina, Graux, Damien, Jabeen, Hajira, Sallinger, Emanuel

In the Big Data era, where variety is the most dominant dimension, the RDF data model enables the creation and integration of actionable knowledge from heterogeneous data sources. However, the RDF data model allows entities to be described under various contexts; e.g., people can be described from a demographic context, but also from their professional contexts. Context-aware description poses challenges during entity matching of RDF datasets: a match might not be valid in every context. To perform contextually relevant entity matching, the specific context under which a data-driven task, e.g., data integration, is performed must be taken into account. However, existing approaches only consider inter-schema and property mappings of different data sources and prevent users from selecting contexts and conditions during a data integration process. We devise COMET, an entity matching technique that relies on both the knowledge stated in RDF vocabularies and a context-based similarity metric to map contextually equivalent RDF graphs. COMET follows a two-fold approach to solve the problem of entity matching in RDF graphs in a context-aware manner. In the first step, COMET computes similarity measures across RDF entities and resorts to the Formal Concept Analysis algorithm to map contextually equivalent RDF entities. Finally, COMET combines the results of the first step and executes a 1-1 perfect matching algorithm to match RDF entities based on the combined scores. We empirically evaluate the performance of COMET on a testbed from DBpedia. The experimental results suggest that COMET accurately matches equivalent RDF graphs in a context-dependent manner.
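The two-fold idea can be sketched in a toy form: score entity pairs by similarity over only the properties relevant to a chosen context, then pick the 1-1 assignment that maximizes the total score. This is not COMET's actual implementation (which uses Formal Concept Analysis and a combined similarity metric); the entities and the context below are invented examples.

```python
# Toy sketch of context-based 1-1 entity matching. Entities are flat
# property->value dicts; a "context" is the set of properties that matter.
from itertools import permutations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def context_view(entity: dict, context: set) -> set:
    """Keep only the (property, value) pairs relevant to the context."""
    return {(p, v) for p, v in entity.items() if p in context}

def best_matching(src: dict, tgt: dict, context: set):
    """Brute-force 1-1 matching maximizing summed context similarity
    (fine for toy sizes; real systems use proper assignment algorithms)."""
    names_s, best, best_score = list(src), None, -1.0
    for perm in permutations(tgt):
        score = sum(jaccard(context_view(src[s], context),
                            context_view(tgt[t], context))
                    for s, t in zip(names_s, perm))
        if score > best_score:
            best, best_score = list(zip(names_s, perm)), score
    return best

src = {"e1": {"name": "Ada", "employer": "UCL"},
       "e2": {"name": "Alan", "employer": "NPL"}}
tgt = {"f1": {"name": "Alan", "employer": "NPL"},
       "f2": {"name": "Ada", "employer": "Cambridge"}}
professional = {"employer"}  # the chosen context
pairs = best_matching(src, tgt, professional)
```

Under the professional context, e2 pairs with f1 (same employer) even though e1 and f2 share a name, illustrating how the chosen context changes which match is considered valid.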