Browsing by Author "Wyborn, Lesley"
- Item: Call to action for global access to and harmonization of quality information of individual earth science datasets (Paris : CODATA, 2021)
  Peng, Ge; Downs, Robert R.; Lacagnina, Carlo; Ramapriyan, Hampapuram; Ivánová, Ivana; Moroni, David; Wei, Yaxing; Larnicol, Gilles; Wyborn, Lesley; Goldberg, Mitch; Schulz, Jörg; Bastrakova, Irina; Ganske, Anette; Bastin, Lucy; Khalsa, Siri Jodha S.; Wu, Mingfang; Shie, Chung-Lin; Ritchey, Nancy; Jones, Dave; Habermann, Ted; Lief, Christina; Maggio, Iolanda; Albani, Mirko; Stall, Shelley; Zhou, Lihang; Drévillon, Marie; Champion, Sarah; Hou, C. Sophie; Doblas-Reyes, Francisco; Lehnert, Kerstin; Robinson, Erin; Bugbee, Kaylin
  Knowledge about the quality of data and metadata is important for making informed decisions on the (re)use of individual datasets and is an essential part of the ecosystem that supports open science. Quality assessments reflect the reliability and usability of data. They need to be consistently curated, fully traceable, and adequately documented, as these qualities are crucial for sound decision- and policy-making efforts that rely on data. Quality assessments also need to be consistently represented and readily integrated across systems and tools, to allow improved sharing of quality information at the dataset level for individual quality attributes or dimensions. Although the need for assessing the quality of data and associated information is well recognized, methodologies for an evaluation framework and for presenting the resulting quality information to end users may not have been comprehensively addressed within and across disciplines. Global interdisciplinary domain experts have come together to systematically explore the needs, challenges, and impacts of consistently curating and representing quality information throughout the entire lifecycle of a dataset.
This paper describes the findings of that effort, argues the importance of sharing dataset quality information, calls for community action to develop practical guidelines, and outlines community recommendations for developing such guidelines. Practical guidelines will allow for global access to and harmonization of quality information at the level of individual Earth science datasets, which in turn will support open science.
- Item: Global Community Guidelines for Documenting, Sharing, and Reusing Quality Information of Individual Digital Datasets (Paris : CODATA, 2022)
  Peng, Ge; Lacagnina, Carlo; Downs, Robert R.; Ganske, Anette; Ramapriyan, Hampapuram K.; Ivánová, Ivana; Wyborn, Lesley; Jones, Dave; Bastin, Lucy; Shie, Chung-lin; Moroni, David F.
  Open-source science builds on open and free resources that include data, metadata, software, and workflows. Informed decisions on whether and how to (re)use digital datasets depend on an understanding of the quality of the underpinning data and relevant information. However, quality information, being difficult to curate and often context specific, is currently not readily available for sharing within and across disciplines. To help address this challenge and promote the creation and (re)use of freely and openly shared information about the quality of individual datasets, members of several groups around the world, collaborating with international domain experts, have undertaken an effort to develop international community guidelines with practical recommendations for the Earth science community. The guidelines were inspired by the guiding principles of being findable, accessible, interoperable, and reusable (FAIR). Use of the FAIR dataset quality information guidelines is intended to help stakeholders, such as scientific data centers, digital data repositories, and producers, publishers, stewards, and managers of data, to: i) capture, describe, and represent quality information about their datasets in a manner consistent with the FAIR Guiding Principles; ii) allow for the maximum discovery, trust, sharing, and reuse of their datasets; and iii) enable international access to and integration of dataset quality information.
This article describes the processes that developed the guidelines that are aligned with the FAIR principles, presents a generic quality assessment workflow, describes the guidelines for preparing and disseminating dataset quality information, and outlines a path forward to improve their disciplinary diversity.
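Point i) above, representing dataset quality information in a FAIR-consistent way, can be made concrete with a small sketch. One established vocabulary for this purpose is the W3C Data Quality Vocabulary (DQV); whether it is the representation the guidelines themselves recommend is an assumption here, and the metric URI and score below are hypothetical placeholders.

```python
import json

# Illustrative sketch: attach a quality measurement to a dataset record
# using DCAT plus the W3C Data Quality Vocabulary (DQV) in JSON-LD.
# "ex:completenessMetric" and the value 0.97 are hypothetical placeholders.
record = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dqv": "http://www.w3.org/ns/dqv#",
        "dct": "http://purl.org/dc/terms/",
        "ex": "https://example.org/metrics/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "Example Earth science dataset",
    "dqv:hasQualityMeasurement": {
        "@type": "dqv:QualityMeasurement",
        # Which metric this measurement instantiates (hypothetical URI)
        "dqv:isMeasurementOf": {"@id": "ex:completenessMetric"},
        "dqv:value": 0.97,
    },
}

print(json.dumps(record, indent=2))
```

Because the quality measurement is a structured part of the dataset record rather than free text, it can be harvested and compared across repositories by machines.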
- Item: Integrating data and analysis technologies within leading environmental research infrastructures: Challenges and approaches (Amsterdam [u.a.] : Elsevier, 2021)
  Huber, Robert; D'Onofrio, Claudio; Devaraju, Anusuriya; Klump, Jens; Loescher, Henry W.; Kindermann, Stephan; Guru, Siddeswara; Grant, Mark; Morris, Beryl; Wyborn, Lesley; Evans, Ben; Goldfarb, Doron; Genazzio, Melissa A.; Ren, Xiaoli; Magagna, Barbara; Thiemann, Hannes; Stocker, Markus
  When researchers analyze data, significant effort in data preparation is typically required to make the data analysis-ready. This often involves cleaning, pre-processing, harmonizing, or integrating data from one or multiple sources and placing them into a computational environment in a form suitable for analysis. Research infrastructures (RIs) and their data repositories host data and make them available to researchers, but rarely offer a computational environment for data analysis. Published data are often persistently identified, but such identifiers resolve to landing pages that must be (manually) navigated to determine how the data can be accessed. This navigation is typically challenging or impossible for machines. This paper surveys existing approaches for improving environmental data access to facilitate more rapid data analyses in computational environments, and thus contribute to a more seamless integration of data and analysis. By analysing current state-of-the-art approaches and solutions being implemented by world-leading environmental research infrastructures, we highlight existing practices for interfacing data repositories with computational environments and the challenges moving forward. We found that while the level of standardization has improved in recent years, it is still challenging for machines to discover and access data based on persistent identifiers.
This is problematic with regard to the emerging requirements for FAIR (Findable, Accessible, Interoperable, and Reusable) data in general, and for the seamless integration of data and analysis in particular. There are a number of promising approaches that would improve the state of the art. A key approach presented here involves software libraries that streamline reading data and metadata into computational environments. We describe this approach in detail for two research infrastructures. We argue that developing and maintaining specialized libraries for each RI and for the range of programming languages used in data analysis does not scale well. Based on this observation, we propose a set of established standards and web practices that, if implemented by environmental research infrastructures, will enable the development of RI- and programming-language-independent software libraries with much less effort required for library implementation and maintenance, as well as considerably lower learning requirements for users. To catalyse such advancement, we propose a roadmap and key action points for technology harmonization among RIs that, we argue, will build the foundation for efficient and effective integration of data and analysis.
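One established web practice that makes landing pages navigable by machines is typed links in HTTP `Link` headers (RFC 8288), used for example by the Signposting convention to point from a landing page to the data files and machine-readable metadata. Whether this is among the specific standards the paper proposes is an assumption here; the sketch below only illustrates the client side, with hypothetical example URLs.

```python
import re

def parse_link_header(header: str) -> dict:
    """Parse an HTTP Link header (RFC 8288) into {relation: [target URIs]}."""
    links = {}
    # Each link-value looks like: <URI>; rel="relation"; type="media-type"
    for match in re.finditer(r'<([^>]+)>\s*((?:;[^,<]*)*)', header):
        target, params = match.groups()
        rel_match = re.search(r'rel="?([^";]+)"?', params)
        if rel_match:
            # A rel attribute may carry several space-separated relations
            for rel in rel_match.group(1).split():
                links.setdefault(rel, []).append(target)
    return links

# Hypothetical Link header a repository landing page might return:
header = (
    '<https://example.org/data.nc>; rel="item"; type="application/x-netcdf", '
    '<https://example.org/metadata.json>; rel="describedby"; type="application/ld+json"'
)
print(parse_link_header(header))
```

With such typed links in place, a generic client library can go from a persistent identifier to the data file and its metadata without scraping HTML, which is the kind of RI-independent behaviour the roadmap aims for.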