Search Results

  • Item
    Discussion on Existing Standards and Quality Criteria in Nanosafety Research : Summary of the NanoS-QM Expert Workshop
    (Zenodo, 2021) Binder, Kunigunde; Bonatto Minella, Christian; Elberskirchen, Linda; Kraegeloh, Annette; Liebing, Julia; Petzold, Christiane; Razum, Matthias; Riefler, Norbert; Schins, Roel; Sofranko, Adriana; van Thriel, Christoph; Unfried, Klaus
    The partners of the research project NanoS-QM (Quality- and Description Standards for Nanosafety Research Data) identified and invited relevant experts from research institutions, federal agencies, and industry to evaluate the traceability of the results generated with the existing standards and quality criteria. During the discussion it emerged that numerous studies seem to be of insufficient quality for regulatory purposes or exhibit weaknesses with regard to data completeness. Deficiencies in study design could be avoided by more comprehensive use of appropriate standards, many of which already exist. The use of Electronic Laboratory Notebooks (ELNs) that allow for early collection of metadata and enrichment of datasets could be one solution to enable data re-use and simplify quality control. Generally, earlier provision and curation of data and metadata indicating their quality and completeness (e.g. guidelines, standards, standard operating procedures (SOPs) that were used) would improve their findability, accessibility, interoperability, and reusability (FAIR) in the nanosafety research field.
  • Item
    Concept for Setting up an LTA Working Group in the NFDI Section "Common Infrastructures"
    (Zenodo, 2022-04-12) Bach, Felix; Degkwitz, Andreas; Horstmann, Wolfram; Leinen, Peter; Puchta, Michael; Stäcker, Thomas
    NFDI consortia operate a variety of disparate and distributed information infrastructures, many of which are as yet only loosely or poorly connected. A major goal is to create a Research Data Commons (RDC). The RDC concept includes, for example, shared cloud services, an application layer with access to high-performance computing (HPC), collaborative workspaces, terminology services, and a common authentication and authorization infrastructure (AAI). The necessary interoperability of services requires, in particular, agreement on protocols and standards, the specification of workflows and interfaces, and the definition of long-term sustainable responsibilities for overarching services and deliverables. Infrastructure components are often well-tested within NFDI on a domain-specific basis, but are quite heterogeneous and diverse across domains. Long-term archiving (LTA) of digital resources has been a recurring problem for well over 30 years and has not been conclusively solved to date; it is gaining urgency with the exponential growth of research data, whether driven by funders' demands (the DFG requires a ten-year retention period) or by digital artifacts that must be preserved indefinitely as digital cultural heritage. Against this background, the integration of LTA into the RDC of the NFDI is an urgent desideratum in order to guarantee the permanent usability of research data. A distinction must be made between the archiving of digital objects as bitstreams (these can be numeric or textual data or complex objects such as models), which represents a first step towards long-term usability, and the archiving of the semantic and software-technical context of the original digital objects, which entails far more effort.
Beyond the technical embedding of LTA in the system environment of a multi-cloud-based infrastructure, a number of discipline-specific requirements of the NFDI's subject consortia are part of developing a basic LTA service and enabling the re-use of research data. The need for funding for the development of a basic LTA service for the NFDI consortia results primarily from the additional costs associated with the technical and organizational development of a cross-NFDI, decentralized network structure for LTA and the sustainable subsequent use of research data. It is imperative that the technical actors can act within the network as a technology-oriented community and can provide their own services within a federated infrastructure. The working group "Long Term Archiving" (LTA) is to compile the subject consortia's requirements for LTA and, on this basis, develop strategic approaches for implementing a basic LTA service. The working group consists of members of various NFDI consortia covering the humanities, natural sciences, and engineering, as well as experts from a variety of pertinent infrastructures with strong overall connections to the nestor long-term archiving competence network. The close linkage of NFDI consortia with experienced partners in the field of LTA ensures that a) the relevant technical state of the art is present in the group and b) the knowledge of data producers about contexts of origin interacts directly with that of data users. This composition enables the team to take an overarching view that spans the requirements of the disciplines and consortia, takes interdisciplinary needs into account, and at the same time brings in the existing know-how of the infrastructure sector.
  • Item
    NFDI4Chem - Deliverable D3.3.1: Gap analysis report for selected repositories
    (Genève : CERN, 2023) Bach, Felix; Binder, Kunigunde; Bonatto Minella, Christian; Lutz, Benjamin; Razum, Matthias
    Deliverable 3.3.1, “Gap analysis report for selected repositories”, aims both to identify gaps in coverage regarding data types or disciplines and to close them through adjustments or, if necessary, new developments. To this end, the TA3 team performed a gap analysis of the existing relevant repositories by means of individual interviews with the repository leaders. The interviews consisted of a series of questions ranging from general information to metadata standards and ontologies, data content, technical information about the Authorisation and Authentication Infrastructure (AAI), APIs, services and functionality, and the operating environment, as well as software architecture and workflows. The interviews serve to establish the current degree of maturity and operational fitness of the selected repositories and to derive suitable recommendations for fulfilling the requirements not yet met.
  • Item
    Analyzing social media for measuring public attitudes toward controversies and their driving factors: a case study of migration
    (Wien : Springer, 2022) Chen, Yiyi; Sack, Harald; Alam, Mehwish
    Alongside other channels for expressing opinions, such as blogs and forums, social media (such as Twitter) has become one of the most widely used channels for the public to voice their views. With increasing interest in the topic of migration in Europe, it is important to process and analyze these opinions. To this end, this study aims at measuring public attitudes toward migration, in terms of sentiment and hate speech, from a large number of tweets crawled on the controversial topic of migration. The study introduces a knowledge base (KB) of anonymized migration-related annotated tweets, termed MigrationsKB (MGKB). Tweets from 2013 to July 2021 in the European countries that host immigrants are collected, pre-processed, and filtered using advanced topic modeling techniques. BERT-based entity linking and sentiment analysis, complemented by attention-based hate speech detection, are performed to annotate the curated tweets. Moreover, external databases are used to identify the potential social and economic factors causing negative public attitudes toward migration. The analysis aligns with the hypothesis that countries with more migrants have fewer negative and hateful tweets. To further promote research in the interdisciplinary fields of social sciences and computer science, the outcomes are integrated into MGKB, which significantly extends the existing ontology to cover public attitudes toward migration and economic indicators. The study further discusses the use cases and exploitation of MGKB. Finally, MGKB is made publicly available, fully supporting the FAIR principles.
  • Item
    Further with Knowledge Graphs. Proceedings of the 17th International Conference on Semantic Systems
    (Berlin : AKA ; Amsterdam : IOS Press, 2021) Alam, Mehwish; Groth, Paul; de Boer, Victor; Pellegrini, Tassilo; Pandit, Harshvardhan J.; Montiel, Elena; Rodríguez-Doncel, Victor; McGillivray, Barbara; Meroño-Peñuela, Albert
    The field of semantic computing is highly diverse, linking areas such as artificial intelligence, data science, knowledge discovery and management, big data analytics, e-commerce, enterprise search, technical documentation, document management, business intelligence, and enterprise vocabulary management. As such it forms an essential part of the computing technology that underpins all our lives today. This volume presents the proceedings of SEMANTiCS 2021, the 17th International Conference on Semantic Systems. As a result of the continuing Coronavirus restrictions, SEMANTiCS 2021 was held in a hybrid form in Amsterdam, the Netherlands, from 6 to 9 September 2021. The annual SEMANTiCS conference provides an important platform for semantic computing professionals and researchers, and attracts information managers, IT architects, software engineers, and researchers from a wide range of organizations, such as research facilities, NPOs, public administrations and the largest companies in the world. The subtitle of the 2021 conference was “In the Era of Knowledge Graphs”, and 66 submissions were received, from which the 19 papers included here were selected following a rigorous single-blind reviewing process; an acceptance rate of 29%. Topics covered include data science, machine learning, logic programming, content engineering, social computing, and the Semantic Web, as well as the additional sub-topics of digital humanities and cultural heritage, legal tech, and distributed and decentralized knowledge graphs. Providing an overview of current research and development, the book will be of interest to all those working in the field of semantic systems.
  • Item
    Editorial of the Special issue on Cultural heritage and semantic web
    (Amsterdam : IOS Press, 2022) Alam, Mehwish; de Boer, Victor; Daga, Enrico; van Erp, Marieke; Hyvönen, Eero; Meroño-Peñuela, Albert
    [no abstract available]
  • Item
    Thesenpapier Nationale Forschungsdateninfrastruktur für die Chemie (NFDI4Chem)
    (Zenodo, 2018) Koepler, Oliver; Jung, Nicole; Kraft, Angelina; Neumann, Janna; Auer, Sören; Bach, Felix; Bähr, Thomas; Engel, Thomas; Kettner, Carsten; Kowol-Santen, Johanna; Liermann, Johannes; Lipp, Anne; Porzel, Andrea; Razum, Matthias; Schlörer, Niels; Solle, Dörte; Winkler, Torsten
    “The stepwise establishment of a National Research Data Infrastructure in network form aims to create a reliable and sustainable portfolio of services covering the generic and discipline-specific needs of research data management in Germany.” For the field of chemistry, such a national research data infrastructure makes it possible to collect publicly funded research data efficiently, describe it in a standardized way, store it permanently, and make it uniquely referenceable and findable through persistent identifiers (PIDs). In line with the recommendations of the RfII, it supports the reproducibility and re-use of data for the purpose of sustained knowledge generation. By enabling the reproducibility of research results, such a research data infrastructure supports the peer review process, fostering scientific self-regulation, and increases data quality, especially in scientific publications. NFDI4Chem is a joint initiative of scientists from chemistry, the learned society Gesellschaft Deutscher Chemiker and its divisions, research funding bodies, and infrastructure institutions (Technische Informationsbibliothek). A group of representatives of these stakeholders met at the end of April 2018 for a kick-off meeting, the “Fachgespräch NFDI4Chem”, in Hannover. This paper summarizes the findings of that meeting. Further stakeholders, such as publishers or database providers, are welcome to join the ensuing discourse.
  • Item
    Gesamtkonzept für die Informationsinfrastruktur in Deutschland
    (Kommission Zukunft der Informationsinfrastruktur, 2011) Kommission Zukunft der Informationsinfrastruktur
    What do digitized microscope slides from cancer research, magnetic tape recordings of the first crewed Moon flight, and the animal sound archive of Berlin's Humboldt-Universität have in common? In all cases they contain valuable scientific information. Their availability, however, is not always guaranteed: a few clicks on a computer are enough to hear the edible frog (Rana esculenta) croaking over the Internet. But anyone looking for original recordings of the first Moon mission is out of luck: for years, staff of the US space agency NASA have searched their archives in vain for the reels. It is becoming ever more certain that the three-centimetre-wide magnetic tapes were at some point simply erased and overwritten with other data. NASA's search did have one positive outcome: it unearthed other old data tapes in Australia on which information about Moon dust is stored. But this was immediately followed by the next problem: the data could not be read. Fortunately, a historical recorder was found with which the information could be deciphered. The device, the size of a refrigerator, came from a museum. These examples illustrate the increasingly important question of how researchers will have to handle scientific information and data in the future in order to secure them and make them accessible for further research. This set of issues was addressed by the “Kommission Zukunft der Informationsinfrastruktur” (Commission on the Future of the Information Infrastructure). Under the lead of the Leibniz Association, this high-ranking group of experts produced the present overall concept. The mandate came from the Joint Science Conference of the Federal Government and the Länder (GWK). In the remarkably short time of only 15 months, the experts (nearly 135 people from 54 institutions) succeeded in producing a comprehensive account of the situation as well as detailed recommendations.
The composition of the commission is a novelty. It represents the key actors of the information infrastructure in Germany: the service providers themselves as well as the funding organizations and the scientific users. All members of the commission deserve great thanks for their successful work. My very special thanks go to the dedication of the Leibniz Association's executive board representative for information infrastructure, Sabine Brünger-Weilandt, who chaired the commission. She is the managing director of FIZ Karlsruhe, the Leibniz Institute for Information Infrastructure, which she guided through its regular evaluation while simultaneously leading the commission. The present concept shows the enormous potential for Germany as a research location that lies in the strategic further development of the information infrastructure. And it points the way to the future of the information infrastructure. Now it is time to press ahead with implementation.