Search Results

Now showing 1 - 5 of 5
  • Item
    Bridging the Gap Between (AI-) Services and Their Application in Research and Clinical Settings Through Interoperability: the OMI-Protocol
    (Hannover : Technische Informationsbibliothek, 2024-02) Sigle, Stefan; Werner, Patrick; Schweizer, Simon; Caldeira, Liliana; Hosch, René; Dyrba, Martin; Fegeler, Christian; Grönke, Ana; Seletkov, Dmitrii; Kotter, Elmar; Nensa, Felix; Wehrle, Julius; Kaufmes, Kevin; Scherer, Lucas; Nolden, Marco; Boeker, Martin; Schmidt, Marvin; Pelka, Obioma; Braren, Rickmer; Stump, Shura-Roman; Graetz, Teresa; Pogarell, Tobias; Susetzky, Tobias; Wieland, Tobias; Parmar, Vicky; Wang, Yuanbin
    Artificial Intelligence (AI) is permanently transforming research and clinical practice in the medical and life sciences. Aspects such as findability, accessibility, interoperability, and reusability are often neglected for AI-based inference services. The Open Medical Inference (OMI) protocol aims to support remote inference by addressing these aspects. A key component of the proposed protocol is an interoperable registry for remote inference services, which addresses the findability of algorithms. It is complemented by information on how to invoke services remotely. Together, these components lay the foundation for implementing distributed inference services beyond organizational borders. The OMI protocol builds on prior work for aspects such as data representation and transmission standards wherever possible. Based on Business Process Modeling of prototypical use cases for the service registry and common inference processes, a generic information model for remote services was derived. From this model, FHIR resources were identified to represent AI-based services. The OMI protocol is first introduced for AI services in radiology but is designed to generalize to other application domains as well. It provides an accessible, open specification as a blueprint for introducing and implementing remote inference services.
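
A minimal sketch of what a registry entry for a remote AI service could look like under the approach described in the abstract above, assuming the standard FHIR R4 Device and Endpoint resources and a plain REST registry. The resource choices, field values, URLs, and the register() helper are illustrative assumptions, not the OMI specification itself.

```python
# Hypothetical sketch: representing a remote AI inference service as FHIR R4
# resources (Device + Endpoint) and registering it via a plain HTTP POST.
# The concrete resource profiles and registry API are defined by the OMI
# specification; everything below is an illustrative assumption.
import json
import urllib.request

ai_service_device = {
    "resourceType": "Device",
    "status": "active",
    "deviceName": [{"name": "Lung nodule detection model", "type": "user-friendly-name"}],
    "version": [{"value": "1.2.0"}],
}

inference_endpoint = {
    "resourceType": "Endpoint",
    "status": "active",
    "connectionType": {"code": "hl7-fhir-rest"},
    "payloadType": [{"text": "DICOM study reference"}],
    "address": "https://inference.example.org/api",  # placeholder address
}

def register(resource, registry_base="https://registry.example.org/fhir"):
    """POST a resource to a (hypothetical) FHIR-based service registry."""
    req = urllib.request.Request(
        f"{registry_base}/{resource['resourceType']}",
        data=json.dumps(resource).encode("utf-8"),
        headers={"Content-Type": "application/fhir+json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Print the registry payloads; actually calling register() requires a real registry.
    print(json.dumps([ai_service_device, inference_endpoint], indent=2))
```
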
  • Item
    Analysis of Knowledge Tracing performance on synthesised student data
    (Hannover : Technische Informationsbibliothek, 2024) Pagonis, Panagiotis; Hartung, Kai; Wu, Di; Georges, Munir; Gröttrup, Sören
    Knowledge Tracing (KT) aims to predict the future performance of students by tracking the development of their knowledge states. Despite the recent progress made in this field, the application of KT models in education systems is still restricted from a data perspective: 1) limited access to real-life data due to data protection concerns, 2) lack of diversity in public datasets, and 3) noise in benchmark datasets, such as duplicate records. To address these problems, we simulated student data with three statistical strategies based on public datasets and tested their performance on two KT baselines. While we observe only minor performance improvements from additional synthetic data, our work shows that training on synthetic data alone can lead to performance similar to that of real data.
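
As a minimal illustration of tracking a knowledge state and predicting the next response, the sketch below implements classic Bayesian Knowledge Tracing. The abstract does not name the two KT baselines used in the paper, and the parameters (p_init, p_transit, p_slip, p_guess) are illustrative defaults, not fitted values.

```python
# Minimal sketch of classic Bayesian Knowledge Tracing (BKT), shown only to
# illustrate how a knowledge state is updated and used to predict the next
# response; it is not one of the paper's baselines.

def bkt_trace(responses, p_init=0.2, p_transit=0.15, p_slip=0.1, p_guess=0.25):
    """Return the predicted probability of a correct answer before each response."""
    p_mastery = p_init
    predictions = []
    for correct in responses:
        # Predict the next observation from the current knowledge state.
        predictions.append(p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess)
        # Posterior over mastery given the observed response.
        if correct:
            num = p_mastery * (1 - p_slip)
            den = num + (1 - p_mastery) * p_guess
        else:
            num = p_mastery * p_slip
            den = num + (1 - p_mastery) * (1 - p_guess)
        posterior = num / den
        # Learning transition to the next time step.
        p_mastery = posterior + (1 - posterior) * p_transit
    return predictions

if __name__ == "__main__":
    print(bkt_trace([0, 1, 1, 0, 1]))
```
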
  • Item
    Untersuchung des Honeypot-Effekts an (halb-)öffentlichen Ambient Displays in Langzeitfeldstudien
    (Hannover : Technische Informationsbibliothek, 2024-11-01) Koch, Michael; Draheim, Susanne; Fietkau, Julian; Schwarzer, Jan; von Luck, Kai
    The project produced a framework for analyzing how large interactive wall displays are used in field deployments, ranging from experience reports on design and installation to open source toolsets. These allow interaction and observation data to be analyzed in parallel and create new ways to study, in a semi-automated fashion, the use of wall displays in long-term operation, filtering and visualizing extensive sensor-based datasets for interesting patterns. Applying this framework to the honeypot and novelty effects shows that developing our own methods and tools for analyzing body-tracking data has made it much easier to identify candidate situations for closer examination. Quantitative estimates are also possible of how often the honeypot and other effects occur and how their frequency changes over months and years.
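
A hypothetical sketch of the kind of filtering step described in the abstract above: scanning body-tracking frames for candidate honeypot situations, i.e., moments where at least one further person lingers near the display while someone else is already interacting. The TrackedPerson format, the zone thresholds, and the candidate_honeypot_frames() rule are assumptions for illustration, not the project's actual toolset.

```python
# Hypothetical filtering sketch over body-tracking data: flag frames where one
# person is in an assumed interaction zone while another is in an assumed
# observation zone. Thresholds and data format are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrackedPerson:
    person_id: int
    distance_m: float  # distance from the display in metres

INTERACTION_ZONE_M = 1.0   # assumed: close enough to touch the display
OBSERVATION_ZONE_M = 3.0   # assumed: close enough to watch the interaction

def candidate_honeypot_frames(frames):
    """Yield indices of frames with one interacting and >= 1 observing person."""
    for i, people in enumerate(frames):
        interacting = [p for p in people if p.distance_m <= INTERACTION_ZONE_M]
        observing = [p for p in people
                     if INTERACTION_ZONE_M < p.distance_m <= OBSERVATION_ZONE_M]
        if interacting and observing:
            yield i

if __name__ == "__main__":
    frames = [
        [TrackedPerson(1, 0.8)],
        [TrackedPerson(1, 0.7), TrackedPerson(2, 2.4)],  # candidate situation
        [TrackedPerson(2, 1.9)],
    ]
    print(list(candidate_honeypot_frames(frames)))  # -> [1]
```
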
  • Item
    DFG Project Report: Space-efficient Algorithms
    (Hannover : Technische Informationsbibliothek, 2025) Kammer, Frank
    Graphs arise in many applications, from social networks to transportation and logistics. Often, these graphs are constructed from massive data sets, commonly referred to as Big Data. As applications face ever-increasing amounts of data, researchers have tackled the problem from various directions. One such direction is the field of so-called space-efficient algorithms, which for a given problem aim to maintain the runtime of standard solutions while significantly reducing the memory requirements, typically to a linear number of bits. This is motivated by the observation that algorithms using a sublinear amount of space tend to have impractical runtimes, together with a lower bound of Barnes et al. [SIAM J. Comput., 1998], who showed that directed s-t-connectivity cannot be solved in polynomial time with o(n/√log n) bits in certain models. Prior to the research project, only a handful of space-efficient algorithms were known, such as those for standard graph traversals like depth-first search and breadth-first search. These algorithms use a linear number of bits while (almost) maintaining an optimal linear runtime. We have extended the toolbox of space-efficient algorithms with powerful frameworks such as algorithms for constructing so-called tree decompositions, a structure commonly used in the design of parameterized algorithms for NP-hard problems. Other results include algorithms for special classes of graphs such as planar graphs and graphs characterized by so-called forbidden minors. One such result is a so-called graph-coarsening framework that allows us to execute various algorithms space-efficiently with a trade-off in solution quality. We have obtained results not only for static graphs, but also in dynamic settings. First, we constructed efficient and succinct data structures that provide so-called minor operations (deleting vertices as well as deleting/contracting edges) in planar graphs; minor operations are useful in numerous applications, and using the aforementioned graph-coarsening framework we are able to construct this data structure space-efficiently. Second, we obtained space-efficient results for so-called exploration and multi-stage problems on temporal graphs, i.e., graphs whose edge set changes over discrete time steps. For practical applications, we applied our knowledge of space-efficient techniques to design a winning solver for the PACE challenge 2024 on upper-bounding the so-called twin-width parameter, in addition to implementing various space-efficient algorithms and data structures in a library.
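
To illustrate the kind of bookkeeping that space-efficient algorithms compress, the sketch below runs a depth-first search whose visited marks are packed into one bit per vertex. This is only a toy illustration under that assumption: the space-efficient DFS referred to in the report additionally replaces the explicit stack, which in this sketch still needs Θ(n log n) bits in the worst case.

```python
# Toy sketch: DFS with visited marks packed into a bytearray (1 bit per vertex)
# instead of a hash set. Illustrates bit-level space accounting only; it is not
# the space-efficient DFS from the literature, whose stack is also compressed.

def dfs_bitpacked(adjacency, start):
    """Return vertices reachable from `start`; `adjacency` is a list of lists."""
    n = len(adjacency)
    visited = bytearray((n + 7) // 8)  # n bits of visited marks

    def is_visited(v):
        return (visited[v >> 3] >> (v & 7)) & 1

    def mark(v):
        visited[v >> 3] |= 1 << (v & 7)

    order, stack = [], [start]
    mark(start)
    while stack:
        v = stack.pop()
        order.append(v)
        for w in adjacency[v]:
            if not is_visited(w):
                mark(w)
                stack.append(w)
    return order

if __name__ == "__main__":
    graph = [[1, 2], [0, 3], [0], [1]]
    print(dfs_bitpacked(graph, 0))  # -> [0, 2, 1, 3]
```
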
  • Item
    Final Report: Automated Termination and Complexity Analysis of Imperative Programs
    (Hannover : Technische Informationsbibliothek, 2024-07-15) Giesl, Jürgen
    [no abstract available]