Search Results

Now showing 1 - 10 of 14
  • Item
    A Case for Integrated Data Processing in Large-Scale Cyber-Physical Systems
    (Maui, Hawaii : HICSS, 2019) Glebke, René; Henze, Martin; Wehrle, Klaus; Niemietz, Philipp; Trauth, Daniel; Mattfeld, Patrick; Bergs, Thomas; Bui, Tung X.
    Large-scale cyber-physical systems such as manufacturing lines generate vast amounts of data to guarantee precise control of their machinery. Visions such as the Industrial Internet of Things aim to make this data available to computation systems outside the lines as well, in order to increase productivity and product quality. However, the rising amount and complexity of data and control decisions push existing infrastructure for data transmission, storage, and processing to its limits. In this paper, we study a fine blanking line, which can produce up to 6.2 Gbit/s of data, as an example of the extreme requirements found in modern manufacturing. We consequently propose integrated data processing, which keeps inherently local and small-scale tasks close to the processes while centralizing tasks that rely on more complex decision procedures and remote data sources. Our approach thus allows for both maintaining control of field-level processes and leveraging the benefits of “big data” applications.
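
    A minimal Python sketch of this integrated-processing split, with invented sensor names and thresholds: latency-critical checks stay next to the machinery, while only compact summaries are forwarded to centralized “big data” analytics.

    ```python
    # Hypothetical sketch of integrated data processing: keep
    # latency-critical, local tasks next to the machinery and forward
    # only aggregated features to centralized analytics.
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class SensorBatch:
        machine_id: str
        samples: list[float]  # e.g. punch-force readings from a blanking line

    def local_control(batch: SensorBatch) -> bool:
        """Inherently local, small-scale task: a threshold check that must
        run close to the process to keep control-loop latency low."""
        return max(batch.samples) < 9000.0  # hypothetical force limit

    def summarize_for_cloud(batch: SensorBatch) -> dict:
        """Reduce raw data (potentially Gbit/s) to compact features before
        shipping it to remote, more complex decision procedures."""
        return {"machine": batch.machine_id,
                "mean": mean(batch.samples),
                "peak": max(batch.samples)}

    batch = SensorBatch("fine_blanking_1", [5200.0, 6100.0, 5900.0])
    if not local_control(batch):
        print("local emergency stop")   # field-level control stays local
    print(summarize_for_cloud(batch))   # only summaries leave the line
    ```
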
  • Item
    Formalizing Gremlin pattern matching traversals in an integrated graph algebra
    (Aachen, Germany : RWTH Aachen, 2019) Thakkar, Harsh; Auer, Sören; Vidal, Maria-Esther; Samavi, Reza; Consens, Mariano P.; Khatchadourian, Shahan; Nguyen, Vinh; Sheth, Amit; Giménez-García, José M.; Thakkar, Harsh
    Graph data management (also called NoSQL) has revealed beneficial characteristics in terms of flexibility and scalability by differently balancing query expressivity and schema flexibility. This peculiar advantage has resulted in an unforeseen race to develop new task-specific graph systems, query languages, and data models, such as property graphs, key-value stores, wide-column stores, and the Resource Description Framework (RDF). Present-day graph query languages focus on flexible graph pattern matching (also known as subgraph matching), whereas graph computing frameworks aim to provide fast parallel (distributed) execution of instructions. This rapid growth in the variety of graph-based data management systems has resulted in a lack of standardization. Gremlin, a graph traversal language and machine, provides a common platform for supporting any graph computing system (such as an OLTP graph database or an OLAP graph processor). In this extended report, we present a formalization of graph pattern matching for Gremlin queries. We also study, discuss, and consolidate various existing graph algebra operators into an integrated graph algebra.
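
    The following toy sketch (not the paper's formal algebra) illustrates what a pattern-matching operator and a join over variable bindings look like on a tiny property graph; the edge data and `?variable` syntax are invented for illustration.

    ```python
    # Toy graph pattern matching: a "match" operator over (s, p, o)
    # edges plus a join over bindings, in the spirit of the algebra
    # operators the Gremlin formalization consolidates.
    edges = [
        ("alice", "knows", "bob"),
        ("bob", "knows", "carol"),
        ("alice", "created", "lop"),
    ]

    def match(pattern, binding=None):
        """Yield all bindings for one (s, p, o) pattern; strings
        starting with '?' are variables."""
        binding = binding or {}
        for edge in edges:
            b = dict(binding)
            for term, value in zip(pattern, edge):
                if term.startswith("?"):
                    if b.get(term, value) != value:
                        break
                    b[term] = value
                elif term != value:
                    break
            else:
                yield b

    def join(bindings, pattern):
        """Natural join of a binding set with a further pattern."""
        for b in bindings:
            yield from match(pattern, b)

    # Who do the people alice knows know in turn?
    for b in join(match(("alice", "knows", "?x")), ("?x", "knows", "?y")):
        print(b)  # {'?x': 'bob', '?y': 'carol'}
    ```
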
  • Item
    DoMoRe – A recommender system for domain modeling
    (Setúbal : SciTePress, 2018) Agt-Rickauer, Henning; Kutsche, Ralf-Detlef; Sack, Harald; Hammoudi, Slimane; Ferreira Pires, Luis; Selic, Bran
    Domain modeling is an important activity in the early phases of software projects to achieve a shared understanding of the problem field among project participants. Domain models describe the concepts and relations of the respective application field using a modeling language and domain-specific terms. Creating these models requires software engineers to have detailed knowledge of the domain as well as expertise in model-driven development. This paper describes DoMoRe, a system that provides automated modeling recommendations to support the domain modeling process. We describe an approach in which modeling benefits from formalized knowledge sources and information extraction from text. The system combines a large network of semantically related terms, built from natural language data sets, with mediator-based knowledge base querying in a single recommender system that provides context-sensitive suggestions of model elements.
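
    A hedged sketch of the recommendation step, with an invented relatedness network standing in for DoMoRe's knowledge sources: candidate concepts are ranked by their aggregate relatedness to the classes already present in the model.

    ```python
    # Context-sensitive term recommendation (toy data, not DoMoRe's
    # real semantic network): score candidates by summed relatedness
    # to the classes the modeler has already placed in the diagram.
    RELATED = {  # hypothetical semantic-relatedness weights
        "Customer": {"Order": 0.9, "Invoice": 0.7, "Address": 0.6},
        "Order":    {"Product": 0.9, "Invoice": 0.8, "Shipment": 0.7},
    }

    def recommend(model_classes, top_k=3):
        scores = {}
        for cls in model_classes:
            for candidate, weight in RELATED.get(cls, {}).items():
                if candidate not in model_classes:
                    scores[candidate] = scores.get(candidate, 0.0) + weight
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    print(recommend({"Customer", "Order"}))
    # ['Invoice', 'Product', 'Shipment'] - suggestions depend on context
    ```
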
  • Item
    A Computational Pipeline for Sepsis Patients’ Stratification and Diagnosis
    ([Setúbal, Portugal] : SCITEPRESS - Science and Technology Publications, Lda., 2018) Campos, David; Pinho, Renato; Neugebauer, Ute; Popp, Juergen; Oliveira, José Luis; Zwiggelaar, Reyer; Gamboa, Hugo; Fred, Ana; Bermúdez i Badia, Sergi
    Sepsis is still a little-acknowledged public health issue, despite its increasing incidence and growing mortality rate. In addition, a clear diagnosis can be lengthy and complicated due to highly variable symptoms and non-specific criteria, causing the disease to be diagnosed and treated too late. This paper presents the HemoSpec platform, a decision support system that collects and automatically processes data from several acquisition devices to help in the early diagnosis of sepsis.
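
    For illustration only (this is not HemoSpec's actual decision logic): a decision support system of this kind fuses readings from several acquisition devices into one early-warning signal. The sketch below uses the classic SIRS criteria as a stand-in scoring rule.

    ```python
    # Toy early-warning score using the standard SIRS criteria as a
    # placeholder for the platform's real, device-driven analysis.
    def sirs_score(temp_c, heart_rate, resp_rate, wbc_per_ul):
        """Count how many SIRS criteria are met; two or more suggest a
        possible septic response in the presence of infection."""
        return sum([
            temp_c > 38.0 or temp_c < 36.0,
            heart_rate > 90,
            resp_rate > 20,
            wbc_per_ul > 12_000 or wbc_per_ul < 4_000,
        ])

    reading = {"temp_c": 38.6, "heart_rate": 104,
               "resp_rate": 24, "wbc_per_ul": 13_500}
    if sirs_score(**reading) >= 2:
        print("flag patient for early sepsis work-up")
    ```
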
  • Item
    Why reinvent the wheel: Let's build question answering systems together
    (New York City : Association for Computing Machinery, 2018) Singh, K.; Radhakrishna, A.S.; Both, A.; Shekarpour, S.; Lytra, I.; Usbeck, R.; Vyas, A.; Khikmatullaev, A.; Punjani, D.; Lange, C.; Vidal, Maria-Esther; Lehmann, J.; Auer, Sören
    Modern question answering (QA) systems need to flexibly integrate a number of components specialised to fulfil specific tasks in a QA pipeline. Key QA tasks include Named Entity Recognition and Disambiguation, Relation Extraction, and Query Building. Since a number of different software components exist that implement different strategies for each of these tasks, it is a major challenge to select and combine the most suitable components into a QA system, given the characteristics of a question. We study this optimisation problem and train classifiers, which take features of a question as input and optimise the selection of QA components based on those features. We then devise a greedy algorithm to identify the pipelines that include the most suitable components and can effectively answer the given question. We implement this model within Frankenstein, a QA framework able to select QA components and compose QA pipelines. We evaluate the effectiveness of the pipelines generated by Frankenstein using the QALD and LC-QuAD benchmarks. The results suggest not only that Frankenstein precisely solves the QA optimisation problem but also that it enables the automatic composition of optimised QA pipelines, which outperform the static baseline QA pipeline. Thanks to this flexible and fully automated pipeline generation process, new QA components can easily be included in Frankenstein, further improving the performance of the generated pipelines.
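
    A hedged sketch of the selection idea: a per-component score predicted from question features (a toy stand-in for Frankenstein's trained classifiers), followed by a greedy pick of the best component per task. The component names are examples, not the framework's full registry.

    ```python
    # Greedy QA pipeline composition: choose, per task, the component
    # whose (here: fake) predicted score for this question is highest.
    COMPONENTS = {
        "NED": ["DBpediaSpotlight", "TagMe"],
        "RE":  ["RelMatch", "ReMatch"],
        "QB":  ["SINA", "NLIWOD-QB"],
    }

    def predict_score(component, features):
        # Stand-in for a trained classifier: deterministic pseudo-score.
        return (hash((component, tuple(sorted(features.items())))) % 100) / 100

    def greedy_pipeline(features):
        return {task: max(pool, key=lambda c: predict_score(c, features))
                for task, pool in COMPONENTS.items()}

    features = {"length": 7, "has_superlative": False, "wh_word": "who"}
    print(greedy_pipeline(features))  # one component chosen per QA task
    ```
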
  • Item
    When humans and machines collaborate: Cross-lingual Label Editing in Wikidata
    (New York City : Association for Computing Machinery, 2019) Kaffee, L.-A.; Endris, K.M.; Simperl, E.
    The quality and maintainability of a knowledge graph are determined by the process in which it is created. There are different approaches to such processes: extraction or conversion of data available on the web (automated extraction of knowledge, such as DBpedia from Wikipedia), community-created knowledge graphs, often built by a group of experts, and hybrid approaches where humans maintain the knowledge graph alongside bots. In this work, we focus on the hybrid approach of human-edited knowledge graphs supported by automated tools. In particular, we analyse the editing of natural language data, i.e. labels. Labels are the entry point for humans to understand the information and therefore need to be carefully maintained. We take a step toward understanding the collaborative editing by humans and automated tools across languages in a knowledge graph. We use Wikidata, as it has a large and active community of humans and bots working together, covering over 300 languages. We analyse the different editor groups and how they interact with the different language data to understand the provenance of the current label data.
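
    A small illustrative analysis with made-up edit records (the study itself mines Wikidata's edit history): grouping label edits by editor type and language is the basic step for comparing human and bot coverage.

    ```python
    # Group label edits by (editor type, language) to compare how
    # humans and bots cover different languages; records are invented.
    from collections import Counter

    edits = [  # (editor, is_bot, language) of a label edit
        ("QuickStatementsBot", True, "en"),
        ("QuickStatementsBot", True, "sv"),
        ("alice", False, "de"),
        ("bob", False, "en"),
    ]

    by_group = Counter(("bot" if is_bot else "human", lang)
                       for _, is_bot, lang in edits)
    for (group, lang), n in sorted(by_group.items()):
        print(f"{group:5s} {lang}: {n} label edit(s)")
    ```
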
  • Item
    The Research Core Dataset (KDSF) in the Linked Data context
    (Amsterdam [u.a.] : Elsevier, 2019) Walther, Tatiana; Hauschke, Christian; Kasprzik, Anna; Sicilia, Miguel-Angel; Simons, Ed; Clements, Anna; de Castro, Pablo; Bergström, Johan
    This paper describes our efforts to implement the Research Core Dataset (“Kerndatensatz Forschung”; KDSF) as an ontology in VIVO. KDSF is used in VIVO to record the required metadata on incoming data and to produce reports as an output. While both processes need an elaborate adaptation of the KDSF specification, this paper focuses on the adaptation of the KDSF basic data model for recording data in VIVO. In this context, the VIVO and KDSF ontologies were compared with respect to domain, syntax, structure, and granularity in order to identify correspondences and mismatches. To produce an alignment, different matching approaches were applied. Furthermore, we made the necessary modifications and extensions to KDSF classes and properties.
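
    One widely used matching approach is string-similarity alignment between class labels; the sketch below uses invented labels rather than the real KDSF or VIVO vocabularies, and flags low-similarity classes for manual inspection.

    ```python
    # Label-based ontology matching: propose correspondences whose
    # string similarity clears a threshold; the rest need human review.
    from difflib import SequenceMatcher

    kdsf_classes = ["Publikation", "Person", "Drittmittelprojekt"]
    vivo_classes = ["Publication", "Person", "Grant"]

    def align(src, tgt, threshold=0.6):
        for s in src:
            best, score = max(
                ((t, SequenceMatcher(None, s.lower(), t.lower()).ratio())
                 for t in tgt),
                key=lambda pair: pair[1])
            yield (s, best if score >= threshold else None, round(score, 2))

    for correspondence in align(kdsf_classes, vivo_classes):
        print(correspondence)  # e.g. ('Publikation', 'Publication', 0.91)
    ```
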
  • Item
    Linked Data Supported Content Analysis for Sociology
    (Berlin ; Heidelberg : Springer, 2019) Tietz, Tabea; Sack, Harald; Acosta, Maribel; Cudré-Mauroux, Philippe; Maleshkova, Maria; Pellegrini, Tassilo; Sack, Harald; Sure-Vetter, York
    Philology and hermeneutics, the analysis and interpretation of natural language text in written historical sources, are the predecessors of modern content analysis and date back to antiquity. In the empirical social sciences, especially in sociology, content analysis provides valuable insights into the social structures and cultural norms of the present and past. With the ever-growing amount of text on the web to analyze, numerous computer-assisted text analysis techniques and tools have been developed in sociological research. However, existing methods often lack sufficient standardization. As a consequence, sociological text analysis is lacking in transparency, reproducibility, and data re-usability. The goal of this paper is to show how Linked Data principles and Entity Linking techniques can be used to structure, publish, and analyze natural language text for sociological research in order to tackle these shortcomings. This is demonstrated on the use case of constitutional text documents of the Netherlands from 1884 to 2016, which represent an important contribution to the European cultural heritage. Finally, the generated data is made available and re-usable as Linked Data, not only for sociologists but also for all other researchers in the digital humanities domain interested in the development of constitutions in the Netherlands.
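
    A toy sketch of the recipe (entity linking plus Linked Data publication): the surface-form lookup table is invented; a real pipeline would call an entity-linking service such as DBpedia Spotlight and emit RDF for the full corpus.

    ```python
    # Link entity mentions in a constitution article and publish the
    # result as N-Triples, making the annotation reusable Linked Data.
    ENTITY_INDEX = {  # surface form -> linked resource (hypothetical)
        "Staten-Generaal":
            "http://dbpedia.org/resource/States_General_of_the_Netherlands",
        "Koning": "http://dbpedia.org/resource/Monarchy_of_the_Netherlands",
    }

    def link_and_publish(doc_uri, text):
        for surface, resource in ENTITY_INDEX.items():
            if surface in text:
                yield (f"<{doc_uri}> "
                       f"<http://purl.org/dc/terms/references> <{resource}> .")

    article = "De Staten-Generaal vertegenwoordigen het gehele Nederlandse volk."
    for triple in link_and_publish("http://example.org/constitution/1884/art50",
                                   article):
        print(triple)
    ```
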
  • Item
    Preface
    (Aachen, Germany : RWTH Aachen, 2019) Kaffee, Lucie-Aimee; Endris, Kemele M.; Vidal, Maria-Esther; Comerio, Marco; Sadeghi, Mersedeh; Chaves-Fraga, David; Colpaert, Pieter; Kaffee, Lucie Aimée; Endris, Kemele M.; Vidal, María-Esther; Comerio, Marco; Sadeghi, Mersedeh; Chaves-Fraga, David; Colpaert, Pieter
    This volume presents the proceedings of the 1st International Workshop on Approaches for Making Data Interoperable (AMAR 2019) and the 1st International Workshop on Semantics for Transport (Sem4Tra), held in Karlsruhe, Germany, on September 9, 2019, co-located with SEMANTiCS 2019. Interoperability of data is an important factor in making transportation data accessible; we therefore present the two topics alongside each other in these proceedings.
  • Item
    Semantic segmentation of non-linear multimodal images for disease grading of inflammatory bowel disease: A SegNet-based application
    ([Setúbal] : SCITEPRESS - Science and Technology Publications Lda., 2019) Pradhan, Pranita; Meyer, Tobias; Vieth, Michael; Stallmach, Andreas; Waldner, Maximilian; Schmitt, Michael; Popp, Juergen; Bocklitz, Thomas; De Marsico, Maria; Sanniti di Baja, Gabriella; Fred, Ana
    Non-linear multimodal imaging, the combination of coherent anti-Stokes Raman scattering (CARS), two-photon excited fluorescence (TPEF) and second harmonic generation (SHG), has shown its potential to assist the diagnosis of different inflammatory bowel diseases (IBDs). This label-free imaging technique can support ‘gold-standard’ techniques such as colonoscopy and histopathology to ensure an IBD diagnosis in a clinical environment. Moreover, non-linear multimodal imaging can measure biomolecular changes in different tissue regions, such as the crypt and mucosa regions, which serve as a predictive marker for IBD severity. To achieve a real-time assessment of IBD severity, an automatic segmentation of the crypt and mucosa regions is needed. In this paper, we semantically segment the crypt and mucosa regions using a deep neural network. We utilized the SegNet architecture (Badrinarayanan et al., 2015) and compared its results with a classical machine learning approach. Our trained SegNet model achieved an overall F1 score of 0.75 and outperformed the classical machine learning approach for the segmentation of the crypt and mucosa regions in our study.
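
    A minimal SegNet-style encoder-decoder in PyTorch, far smaller than the network used in the paper, showing SegNet's defining trait: the decoder upsamples with the max-pooling indices saved by the encoder.

    ```python
    # Tiny SegNet-like model: pooling indices from the encoder are
    # reused by MaxUnpool2d in the decoder (Badrinarayanan et al., 2015).
    import torch
    from torch import nn

    class TinySegNet(nn.Module):
        def __init__(self, in_ch=3, num_classes=2):  # e.g. crypt vs. mucosa
            super().__init__()
            self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1),
                                     nn.BatchNorm2d(16), nn.ReLU())
            self.pool = nn.MaxPool2d(2, return_indices=True)
            self.unpool = nn.MaxUnpool2d(2)
            self.dec = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1),
                                     nn.BatchNorm2d(16), nn.ReLU(),
                                     nn.Conv2d(16, num_classes, 1))

        def forward(self, x):
            x = self.enc(x)
            x, idx = self.pool(x)    # encoder stores pooling indices
            x = self.unpool(x, idx)  # decoder reuses them for upsampling
            return self.dec(x)       # per-pixel class logits

    net = TinySegNet()
    print(net(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
    ```
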