Search Results

Now showing 1 - 10 of 245
  • Item
    Unveiling Relations in the Industry 4.0 Standards Landscape Based on Knowledge Graph Embeddings
    (Cham : Springer, 2020) Rivas, Ariam; Grangel-González, Irlán; Collarana, Diego; Lehmann, Jens; Vidal, Maria-Esther; Hartmann, Sven; Küng, Josef; Kotsis, Gabriele; Tjoa, A Min; Khalil, Ismail
    Industry 4.0 (I4.0) standards and standardization frameworks have been proposed with the goal of empowering interoperability in smart factories. These standards enable the description and interaction of the main components, systems, and processes inside a smart factory. Due to the growing number of frameworks and standards, there is an increasing need for approaches that automatically analyze the landscape of I4.0 standards. Standardization frameworks classify standards according to their functions into layers and dimensions. However, similar standards can be classified differently across frameworks, thus producing interoperability conflicts among them. Semantic-based approaches that rely on ontologies and knowledge graphs have been proposed to represent standards, known relations among them, and their classification according to existing frameworks. Albeit informative, the structured modeling of the I4.0 landscape only provides the foundations for detecting interoperability issues. Thus, graph-based analytical methods able to exploit the knowledge encoded by these approaches are required to uncover alignments among standards. We study the relatedness among standards and frameworks based on community analysis to discover knowledge that helps to cope with interoperability conflicts between standards. We use knowledge graph embeddings to automatically create these communities, exploiting the meaning of the existing relationships. In particular, we focus on the identification of similar standards, i.e., communities of standards, and analyze their properties to detect unknown relations. We empirically evaluate our approach on a knowledge graph of I4.0 standards using the Trans* family of embedding models for knowledge graph entities. Our results are promising and suggest that relations among standards can be detected accurately.
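    The Trans* models mentioned in the abstract score a triple (head, relation, tail) by the distance between the translated head embedding and the tail embedding; similar standards end up close in embedding space, which is what the community analysis exploits. A minimal illustrative sketch in NumPy (all embedding values and entity names are hypothetical, not taken from the paper):

    ```python
    import numpy as np

    def transe_score(h, r, t):
        """TransE-style plausibility: lower ||h + r - t|| means a more plausible triple."""
        return np.linalg.norm(h + r - t, ord=1)

    # Toy 3-dimensional embeddings (illustrative values, not trained).
    std_a = np.array([0.1, 0.2, 0.3])       # standard A
    related_to = np.array([0.2, 0.0, 0.1])  # relation vector "relatedTo"
    std_b = np.array([0.3, 0.2, 0.4])       # standard B, near A + relatedTo
    std_c = np.array([0.9, -0.5, 0.0])      # an unrelated standard C

    # (A, relatedTo, B) should score better (lower) than (A, relatedTo, C).
    assert transe_score(std_a, related_to, std_b) < transe_score(std_a, related_to, std_c)
    ```

    Clustering such scores (or the embeddings themselves) is one way communities of related standards could then be formed.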
  • Item
    Advancing Research Data Management in Universities of Science and Technology
    (Meyrin : CERN, 2020-02-13) Björnemalm, Matthias; Cappellutti, Federica; Dunning, Alastair; Gheorghe, Dana; Goraczek, Malgorzata Zofia; Hausen, Daniela; Hermann, Sibylle; Kraft, Angelina; Martinez Lavanchy, Paula; Prisecaru, Tudor; Sànchez, Barbara; Strötgen, Robert
    The white paper ‘Advancing Research Data Management in Universities of Science and Technology’ shares insights on the state of the art in research data management and recommendations for advancement. A core part of the paper is the results of a survey, distributed to our member institutions in 2019, which addressed the following aspects of research data management (RDM): (i) the establishment of an RDM policy at the university; (ii) the provision of suitable RDM infrastructure and tools; and (iii) the establishment of RDM support services and training tailored to the requirements of science and technology disciplines. The paper reveals that while substantial progress has been made, there is still a long way to go when it comes to establishing “advanced-degree programmes at our major universities for the emerging field of data scientist”, as recommended in the seminal 2010 report ‘Riding the Wave’, and our white paper offers concrete recommendations and best practices for university leaders, researchers, operational staff, and policy makers. The topic of RDM has become a focal point in many scientific disciplines, in Europe and globally. The management and full utilisation of research data are now also at the top of the European agenda, as exemplified by Ursula von der Leyen’s address at this year’s World Economic Forum. However, the implementation of RDM remains divergent across Europe. The white paper was written by a diverse team of RDM specialists, including data scientists and data stewards, with the work led by the RDM subgroup of our Task Force Open Science. The writing team included Angelina Kraft (Head of Lab Research Data Services at TIB, Leibniz University Hannover), who said: “The launch of RDM courses and teaching materials at universities of science and technology is a first important step to motivate people to manage their data. Furthermore, professors and PIs of all disciplines should actively support data management and motivate PhD students to publish their data in recognised digital repositories.” Also on the writing team were Barbara Sanchez (Head of Centre for Research Data Management, TU Wien) and Malgorzata Goraczek (International Research Support / Data Management Support, TU Wien), who added: “A reliable research data infrastructure is a central component of any RDM service. In addition to the infrastructure, proper RDM is all about communication and cooperation. This includes bringing tools, infrastructures, staff and units together.” Alastair Dunning (Head of 4TU.ResearchData, Delft University of Technology), also one of the writers, added: “There is a popular misconception that better research data management only means faster and more efficient computers. In this white paper, we emphasise the role that training and a culture of good research data management must play.”
  • Item
    NFDI4Chem - A Research Data Network for International Chemistry
    (Berlin : De Gruyter, 2023) Steinbeck, Christoph; Koepler, Oliver; Herres-Pawlis, Sonja; Bach, Felix; Jung, Nicole; Razum, Matthias; Liermann, Johannes C.; Neumann, Steffen
    Research data provide evidence for the validation of scientific hypotheses in most areas of science. Open access to them is the basis for true peer review of scientific results and publications. Hence, research data are at the heart of the scientific method as a whole. The value of openly sharing research data has by now been recognized by scientists, funders and politicians. Today, new research results are increasingly obtained by drawing on existing data. Many organisations such as the Research Data Alliance (RDA), the GO FAIR initiative, and not least IUPAC are supporting and promoting the collection and curation of research data. One of the remaining challenges is to find matching data sets, to understand them, and to reuse them for one's own purposes. As a consequence, we urgently need better research data management.
  • Item
    Digitale Langzeitarchivierung - Was ist das? Was hat das mit DOIs zu tun? Und was macht die TIB in der LZA?
    (Hannover : Technische Informationsbibliothek, PID Competence Center, 2023-05-02) Lindlar, Micky
    Slides on digital long-term preservation for the virtual workshop "Frühlings TIB DOI Konsortium Workshop - Retrodigitalisierung und Langzeitarchivierung".
  • Item
    Survey on Big Data Applications
    (Cham : Springer, 2020) Janev, Valentina; Pujić, Dea; Jelić, Marko; Vidal, Maria-Esther; Janev, Valentina; Graux, Damien; Jabeen, Hajira; Sallinger, Emanuel
    The goal of this chapter is to shed light on the different types of big data applications needed in various industries, including healthcare, transportation, energy, banking and insurance, digital media and e-commerce, environment, safety and security, telecommunications, and manufacturing. In response to the problems of analyzing large-scale data, different tools, techniques, and technologies have been developed and are available for experimentation. In our analysis, we focused on literature (review articles) accessible via the Elsevier ScienceDirect and Springer Link services, mainly from the last two decades. For the selected industries, this chapter also discusses challenges that can be addressed and overcome using the semantic processing and knowledge reasoning approaches discussed in this book.
  • Item
    A Data-Driven Approach for Analyzing Healthcare Services Extracted from Clinical Records
    (Piscataway, NJ : IEEE, 2020) Scurti, Manuel; Menasalvas-Ruiz, Ernestina; Vidal, Maria-Esther; Torrente, Maria; Vogiatzis, Dimitrios; Paliouras, George; Provencio, Mariano; Rodríguez-González, Alejandro; Seco de Herrera, Alba García; Rodríguez González, Alejandro; Santosh, K.C.; Temesgen, Zelalem; Soda, Paolo
    Cancer remains one of the major public health challenges worldwide. After cardiovascular diseases, cancer is one of the leading causes of death and morbidity in Europe, with more than 4 million new cases and 1.9 million deaths per year. The suboptimal management of cancer patients during treatment and subsequent follow-up is a major obstacle to achieving better patient outcomes, especially regarding cost and quality of life. In this paper, we present an initial data-driven approach to analyze the resources and services that are used most frequently by lung-cancer patients, with the aim of identifying where the care process can be improved: by paying special attention to services before diagnosis, so as to identify possible lung-cancer patients before they are diagnosed, and by reducing the length of stay in the hospital. Our approach was built by analyzing the clinical notes of these oncological patients to extract this information and its relationships with other patient variables. Although the approach shown in this manuscript is preliminary, it indicates that interesting outcomes can be derived from further analysis. © 2020 IEEE.
  • Item
    In wenigen Schritten zur Zweitveröffentlichung: Ein Leitfaden für Mitarbeiter:innen in Publikationsservices
    (Zenodo, 2024-02-08) Dellmann, Sarah; Deuter, Franziska; Hulin, Sylvia; Kuhlmeier, Antje; Matuszkiewicz, Kai; Schneider, Corinna; Schröer, Cäcilia; Weisheit, Silke; Strauß, Helene
    Many academic libraries offer a secondary publication service to members of their own institution. To foster exchange on this topic, the "Digitale Fokusgruppe Zweitveröffentlichung" was founded in 2021 within the competence and networking platform "open-access.network". This guide arose from the need to support colleagues in setting up and expanding a secondary publication service. It is therefore aimed primarily at colleagues who are (newly) working in the field of secondary publication or who engage with this topic, for example as part of their library training. The guide was produced in a collaborative writing process by members of the Fokusgruppe Zweitveröffentlichung between December 2022 and November 2023.
  • Item
    In wenigen Schritten zur Zweitveröffentlichung : Workflows für Publikationsservices
    (Zenodo, 2022) Dellmann, Sarah; Drescher, Katharina; Hofmann, Andrea; Hulin, Sylvia; Jung, Jakob; Kobusch, Alexander; Kuhlmeier, Antje; Matuszkiewicz, Kai; Pfeifer, Mats; Schneider, Corinna; Slavcheva, Adriana; Steinecke, Mascha; Ziegler, Barbara
    Secondary publications of previously published works aim to make them freely available and follow the "green road" of open access. The secondary publication workflow comprises several steps, whose order is partly flexible. For the exact implementation of each step there are always several options, which are chosen according to the conditions at the respective institution (e.g. technical infrastructure, staff resources). When planning or refining the workflow, the automation potential of individual processes should be exploited where possible. Based on a comparison of the secondary publication workflows of the participating institutions, the Workflows subgroup (Fokusgruppe Zweitveröffentlichungen) of open-access.network recommends the following nine steps: (1) selecting information sources to identify potential secondary publications; (2) legal review of whether a secondary publication is permissible; (3) obtaining publication approval from the authors; (4) checking which manuscript version may be published; (5) editing the publication file; (6) duplicate checking, entering the metadata, and depositing the file; (7) documenting the legal basis for the publication; (8) publicity for the secondary publication; (9) monitoring and documentation, as well as promoting one's own service.
  • Item
    Ontology Design for Pharmaceutical Research Outcomes
    (Cham : Springer, 2020) Say, Zeynep; Fathalla, Said; Vahdati, Sahar; Lehmann, Jens; Auer, Sören; Hall, Mark; Merčun, Tanja; Risse, Thomas; Duchateau, Fabien
    The network of scholarly publishing involves generating and exchanging ideas, certifying research, publishing in order to disseminate findings, and preserving outputs. Despite enormous efforts in providing support for each of these steps in scholarly communication, identifying knowledge fragments is still a big challenge. This is due to the heterogeneous nature of scholarly data and the current paradigm of distribution by publishing (mostly document-based) over journal articles, numerous repositories, and libraries. Therefore, transforming this paradigm to a knowledge-based representation is expected to reform knowledge sharing in the scholarly world. Although many movements have been initiated in recent years, non-technical scientific communities struggle to move from document-based to knowledge-based publishing. In this paper, we present a model (PharmSci) for scholarly publishing in the pharmaceutical research domain with the goal of facilitating knowledge discovery through effective ontology-based data integration. PharmSci provides machine-interpretable information to the knowledge discovery process. The principles and guidelines of ontological engineering have been followed. Reasoning-based techniques are also presented in the design of the ontology to improve the quality of targeted tasks for data integration. The developed ontology is evaluated with a validation process and a quality verification method.
  • Item
    A multi-method psychometric assessment of the affinity for technology interaction (ATI) scale
    (Amsterdam : Elsevier, 2020) Lezhnina, Olga; Kismihók, Gábor
    In order to develop valid and reliable instruments, psychometric validation should be conducted as an iterative process that “requires a multi-method assessment” (Schimmack, 2019, p. 4). In this study, a multi-method psychometric approach was applied to a recently developed and validated scale, the Affinity for Technology Interaction (ATI) scale (Franke, Attig, & Wessel, 2018). The dataset (N = 240) shared by the authors of the scale (Franke et al., 2018) was used. Construct validity of the ATI was explored by means of hierarchical clustering on variables, and its psychometric properties were analysed in accordance with an extended psychometric protocol (Dima, 2018) by methods of Classical Test Theory (CTT) and Item Response Theory (IRT). The results showed that the ATI is a unidimensional scale (homogeneity H = 0.55) with excellent reliability (ω = 0.90 [0.88-0.92]) and construct validity. Suggestions for further improvement of the ATI scale and the psychometric protocol were made.
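    The reported ω is McDonald's omega, a reliability coefficient computed from a factor model as (Σλ)² / ((Σλ)² + Σθ), where λ are the item loadings and θ the error variances. A minimal sketch with hypothetical standardized loadings for a nine-item scale (the values are illustrative, not estimates from the ATI dataset):

    ```python
    import numpy as np

    def mcdonalds_omega(loadings, error_variances):
        """McDonald's omega: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
        s = np.sum(loadings)
        return s**2 / (s**2 + np.sum(error_variances))

    # Hypothetical standardized loadings for a 9-item unidimensional scale.
    lam = np.full(9, 0.7)
    theta = 1 - lam**2  # error variances implied by standardized loadings

    omega = mcdonalds_omega(lam, theta)  # ≈ 0.90 for these illustrative values
    ```

    With loadings of this size, omega lands near the 0.90 reported for the ATI, which is why such values are considered excellent reliability.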