Search Results

Now showing 1 - 3 of 3
  • Item
    A short guide to increase FAIRness of atmospheric model data
    (Stuttgart : E. Schweizerbart Science Publishers, 2020) Ganske, Anette; Heydebreck, Daniel; Höck, Daniel; Kraft, Angelina; Quaas, Johannes; Kaiser, Amandine
    The generation, processing and analysis of atmospheric model data are expensive, as atmospheric model runs are often computationally intensive and the costs of ‘fast’ disk space are rising. Moreover, atmospheric models are mostly developed by groups of scientists over many years, and therefore only a few appropriate models exist for specific analyses, e.g. for urban climate. Hence, atmospheric model data should be made available for reuse by scientists, the public sector, companies and other stakeholders. This leads to an increasing need for the swift, user-friendly adoption of standards. The FAIR data principles (Findable, Accessible, Interoperable, Reusable) were established to foster the reuse of data. Research data become findable and accessible if they are published in public repositories with general metadata and Persistent Identifiers (PIDs), e.g. DataCite DOIs. The use of PIDs should ensure that descriptive metadata remain persistently available. Nevertheless, PIDs and basic metadata do not guarantee that the data are indeed interoperable and reusable without project-specific knowledge. Additionally, the lack of standardised machine-readable metadata reduces the FAIRness of data. Unfortunately, no common standards are available for non-climate models, e.g. mesoscale models. This paper proposes a concept to improve the FAIRness of archived atmospheric model data. The concept was developed within the AtMoDat project (Atmospheric Model Data). The approach consists of several aspects, each of which is easy to implement: requirements for rich metadata with controlled vocabulary, landing pages, file formats (netCDF) and the structure within the files. The landing pages are a core element of this concept: they should be human- and machine-readable, hold discipline-specific metadata, and present metadata at the simulation and variable levels. This guide is meant to help data producers and curators prepare data for publication. Furthermore, it provides guidance on the choice of keywords, which supports data reusers in their search for data with search engines. © 2020 The authors
  • Item
    Access and preservation of digital research content: Linked open data services - A research library perspective
    (München : European Geosciences Union, 2016) Kraft, Angelina; Sens, Irina; Löwe, Peter; Dreyer, Britta
    [no abstract available]
  • Item
    The RADAR Project - A Service for Research Data Archival and Publication
    (Basel : MDPI, 2016) Kraft, Angelina; Razum, Matthias; Potthoff, Jan; Porzel, Andrea; Engel, Thomas; Lange, Frank; van den Broek, Karina; Furtado, Filipe
    The aim of the RADAR (Research Data Repository) project is to set up and establish an infrastructure that facilitates research data management: the infrastructure will allow researchers to store, manage, annotate, cite, curate, search and find scientific data on a digital platform that is available at any time and can be used by multiple (specialized) disciplines. While appropriate and innovative preservation strategies and systems are in place for the big-data communities (e.g. environmental sciences, space and climate), the stewardship of many other disciplines, often called the “long tail research domains”, is uncertain. Funded by the German Research Foundation (DFG), the RADAR collaboration project develops a service-oriented infrastructure for the preservation, publication and traceability of (independent) research data. The key aspect of RADAR is the implementation of a two-stage business model for data preservation and publication: clients may preserve research results for up to 15 years and assign well-graded access rights, or publish data with a DOI assignment for an unlimited period of time. Potential clients include libraries, research institutions, publishers and open platforms that desire an adaptable digital infrastructure to archive and publish data according to their institutional requirements and workflows.
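The first item above argues that rich, machine-readable metadata with controlled vocabulary inside netCDF files is what makes model data interoperable and reusable. A minimal sketch of that idea, assuming CF/ACDD-style global attribute names; the checker itself and its required-attribute set are hypothetical illustrations, not part of the AtMoDat standard:

```python
# Sketch: check a set of netCDF-style global attributes for
# FAIR-relevant completeness. Attribute names follow the CF/ACDD
# conventions; the required set here is an assumption for illustration.

REQUIRED_ATTRS = {"title", "institution", "source", "Conventions", "license"}

def missing_fair_attrs(global_attrs: dict) -> set:
    """Return the required global attributes absent from a dataset."""
    return REQUIRED_ATTRS - global_attrs.keys()

# Hypothetical global attributes of an urban-climate simulation file.
example = {
    "title": "Urban climate simulation (example)",
    "institution": "Example Institute",
    "source": "Mesoscale model v1.0",
    "Conventions": "CF-1.8",
}

print(sorted(missing_fair_attrs(example)))  # prints ['license']
```

Such a check could run before publication so that a curator sees at a glance which discipline-specific metadata a data producer still needs to supply.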