Search Results

Now showing 1 - 10 of 97
  • Item
    A Review on Recent Advances in Video-based Learning Research: Video Features, Interaction, Tools, and Technologies
    (Aachen, Germany : RWTH Aachen, 2021) Navarrete, Evelyn; Hoppe, Anett; Ewerth, Ralph; Cong, Gao; Ramanath, Maya
Human learning is shifting more than ever towards online settings, and especially towards video platforms. There is an abundance of tutorials and lectures covering diverse topics, from fixing a bike to particle physics. While it is advantageous that learning resources are freely available on the Web, their quality varies greatly. Given the number of available videos, users need algorithmic support in finding helpful and entertaining learning resources. In this paper, we present a review of the recent research literature (2020-2021) on video-based learning. We focus on publications that examine the characteristics of video content, analyze frequently used features and technologies, and, finally, derive conclusions on trends and possible future research directions.
  • Item
    Formalizing Gremlin pattern matching traversals in an integrated graph Algebra
    (Aachen, Germany : RWTH Aachen, 2019) Thakkar, Harsh; Auer, Sören; Vidal, Maria-Esther; Samavi, Reza; Consens, Mariano P.; Khatchadourian, Shahan; Nguyen, Vinh; Sheth, Amit; Giménez-García, José M.; Thakkar, Harsh
Graph data management (also called NoSQL) has revealed beneficial characteristics in terms of flexibility and scalability by differently balancing between query expressivity and schema flexibility. This peculiar advantage has resulted in an unforeseen race of developing new task-specific graph systems, query languages, and data models, such as property graphs, key-value, wide column, resource description framework (RDF), etc. Present-day graph query languages are focused towards flexible graph pattern matching (aka sub-graph matching), whereas graph computing frameworks aim towards providing fast parallel (distributed) execution of instructions. The consequence of this rapid growth in the variety of graph-based data management systems has been a lack of standardization. Gremlin, a graph traversal language and machine, provides a common platform for supporting any graph computing system (such as an OLTP graph database or OLAP graph processors). In this extended report, we present a formalization of graph pattern matching for Gremlin queries. We also study, discuss, and consolidate various existing graph algebra operators into an integrated graph algebra.
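The sub-graph matching that the report formalizes can be illustrated with a minimal sketch. The property-graph encoding and the `match()` routine below are simplifying assumptions for illustration, not Gremlin's actual evaluation semantics:

```python
# Property graph: vertex id -> label, and edges as (src, label, dst) triples.
vertices = {1: "person", 2: "person", 3: "software"}
edges = [(1, "knows", 2), (1, "created", 3), (2, "created", 3)]

def match(triples, labels):
    """Return all bindings of pattern variables to vertex ids.

    `triples` is a list of (var, edge_label, var) pattern edges and
    `labels` maps each variable to its required vertex label.
    """
    results = []

    def extend(binding, remaining):
        if not remaining:
            results.append(binding)
            return
        s_var, lbl, t_var = remaining[0]
        for s, el, t in edges:
            if el != lbl:
                continue  # wrong edge label
            if binding.get(s_var, s) != s or binding.get(t_var, t) != t:
                continue  # conflicts with an existing variable binding
            if vertices[s] != labels[s_var] or vertices[t] != labels[t_var]:
                continue  # vertex label constraint violated
            extend({**binding, s_var: s, t_var: t}, remaining[1:])

    extend({}, triples)
    return results

# Pattern: two persons who know each other created the same piece of software.
pattern = [("a", "knows", "b"), ("a", "created", "c"), ("b", "created", "c")]
labels = {"a": "person", "b": "person", "c": "software"}
print(match(pattern, labels))  # one binding: a=1, b=2, c=3
```

The recursive backtracking over pattern edges mirrors how a traversal machine extends partial solutions step by step, one pattern constraint at a time.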
  • Item
    On the Role of Images for Analyzing Claims in Social Media
    (Aachen, Germany : RWTH Aachen, 2021) Cheema, Gullal S.; Hakimov, Sherzod; Müller-Budack, Eric; Ewerth, Ralph
    Fake news is a severe problem in social media. In this paper, we present an empirical study on visual, textual, and multimodal models for the tasks of claim, claim check-worthiness, and conspiracy detection, all of which are related to fake news detection. Recent work suggests that images are more influential than text and often appear alongside fake text. To this end, several multimodal models have been proposed in recent years that use images along with text to detect fake news on social media sites like Twitter. However, the role of images is not well understood for claim detection, specifically using transformer-based textual and multimodal models. We investigate state-of-the-art models for images, text (Transformer-based), and multimodal information for four different datasets across two languages to understand the role of images in the task of claim and conspiracy detection.
  • Item
    Towards Customizable Chart Visualizations of Tabular Data Using Knowledge Graphs
    (Cham : Springer, 2020) Wiens, Vitalis; Stocker, Markus; Auer, Sören; Ishita, Emi; Pang, Natalie Lee San; Zhou, Lihong
    Scientific articles are typically published as PDF documents, thus rendering the extraction and analysis of results a cumbersome, error-prone, and often manual effort. New initiatives, such as ORKG, focus on transforming the content and results of scientific articles into structured, machine-readable representations using Semantic Web technologies. In this article, we focus on tabular data of scientific articles, which provide an organized and compressed representation of information. However, chart visualizations can additionally facilitate their comprehension. We present an approach that employs a human-in-the-loop paradigm during the data acquisition phase to define additional semantics for tabular data. The additional semantics guide the creation of chart visualizations for meaningful representations of tabular data. Our approach organizes tabular data into different information groups which are analyzed for the selection of suitable visualizations. The set of suitable visualizations serves as a user-driven selection of visual representations. Additionally, customization for visual representations provides the means for facilitating the understanding and sense-making of information.
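The step from tabular data to candidate chart visualizations can be sketched as a mapping from column types to chart families. The grouping heuristics and chart choices below are illustrative assumptions, not the authors' actual selection rules:

```python
def infer_column_type(values):
    """Classify a column of strings as 'numerical' or 'categorical'."""
    try:
        [float(v) for v in values]
        return "numerical"
    except ValueError:
        return "categorical"

def candidate_charts(columns):
    """Suggest chart types from the multiset of column types."""
    types = sorted(infer_column_type(col) for col in columns)
    if types == ["categorical", "numerical"]:
        return ["bar chart", "pie chart"]
    if types == ["numerical", "numerical"]:
        return ["scatter plot", "line chart"]
    if types == ["categorical", "numerical", "numerical"]:
        return ["grouped bar chart", "labeled scatter plot"]
    return ["table view"]  # fallback when no visual mapping is known

# A hypothetical benchmark table: system name column plus an F1-score column.
system = ["BERT", "GPT-2", "T5"]
f1 = ["0.81", "0.84", "0.88"]
print(candidate_charts([system, f1]))  # ['bar chart', 'pie chart']
```

Presenting the whole candidate set, rather than a single chart, matches the paper's idea of a user-driven selection among suitable visual representations.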
  • Item
    Towards the semantic formalization of science
    (New York City, NY : Association for Computing Machinery, 2020) Fathalla, Said; Auer, Sören; Lange, Christoph
    The past decades have witnessed a huge growth in scholarly information published on the Web, mostly in unstructured or semi-structured formats, which hampers scientific literature exploration and scientometric studies. Past studies on ontologies for structuring scholarly information focused on describing scholarly articles' components, such as document structure, metadata and bibliographies, rather than the scientific work itself. Over the past four years, we have been developing the Science Knowledge Graph Ontologies (SKGO), a set of ontologies for modeling the research findings in various fields of modern science resulting in a knowledge graph. Here, we introduce this ontology suite and discuss the design considerations taken into account during its development. We deem that within the next years, a science knowledge graph is likely to become a crucial component for organizing and exploring scientific work.
  • Item
    Survey on Big Data Applications
    (Cham : Springer, 2020) Janev, Valentina; Pujić, Dea; Jelić, Marko; Vidal, Maria-Esther; Janev, Valentina; Graux, Damien; Jabeen, Hajira; Sallinger, Emanuel
The goal of this chapter is to shed light on the different types of big data applications needed in various industries, including healthcare, transportation, energy, banking and insurance, digital media and e-commerce, environment, safety and security, telecommunications, and manufacturing. In response to the problems of analyzing large-scale data, different tools, techniques, and technologies have been developed and are available for experimentation. In our analysis, we focused on literature (review articles) accessible via the Elsevier ScienceDirect service and the Springer Link service, mainly from the last two decades. For the selected industries, this chapter also discusses challenges that can be addressed and overcome using the semantic processing and knowledge reasoning approaches discussed in this book.
  • Item
    3b Open-Access-Publikationsfonds
    (Zenodo, 2017) Pampel, Heinz; Tullney, Marco
An open-access publication fund is a financing and steering instrument that scientific institutions use to cover open-access publication charges. This contribution deals with the establishment and operation of such a fund.
  • Item
    Schlesien versus Sparta. Gerhart Hauptmanns Besinnung auf schlesische Identität im Kontext der Rassenideologie
    (München : Oldenbourg, 2014) Tempel, Bernhard
Between 1906 and 1942, the German writer Gerhart Hauptmann (1862-1946) repeatedly related Silesia and Sparta to each other. In the travel diary of his journey to Greece, the landscape of Sparta reminds him of the Silesian agricultural idyll and of a love affair during his training in Lederose. The published travel account 'Griechischer Frühling' incorporates Sparta's population policy under the Lycurgan laws, which German eugenics (inaugurated as "racial hygiene" by Hauptmann's friend Alfred Ploetz in 1895) regarded as exemplary. In 1922, in a paralipomenon to the novel 'Der neue Christophorus', which remained a fragment, Hauptmann arrives at an opposition between Sparta and Silesia, in whose landscapes he continues to see commonalities: he has his idealized self-image, the Bergpater, declare that the Spartan urge for freedom would never take root in Silesia. Finally, at the end of the 1930s, his view of Sparta turns fully critical: Hauptmann then understands Silesia as a land of mixture and his family as "colonists"; his diary notes suggest that he conceives of Silesia as a counter-model to Sparta, which was (according to Ernst Baltrusch) "the first totalitarian state in world history" and in which art, for Hauptmann the measure of all things, had no place beside a policy aimed one-sidedly at the physical fitness of offspring and the purity of the race. He perceived the analogies between the racial policy of the Third Reich and Sparta (including in contemporary invocations of Sparta) and rejected both.
  • Item
    Toward Representing Research Contributions in Scholarly Knowledge Graphs Using Knowledge Graph Cells
    (New York City, NY : Association for Computing Machinery, 2020) Vogt, Lars; D'Souza, Jennifer; Stocker, Markus; Auer, Sören
There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. Toward this end, in this work, we propose a novel semantic data model for modeling the contribution of scientific investigations. Our model, i.e., the Research Contribution Model (RCM), includes a schema of pertinent concepts highlighting six core information units, viz. Objective, Method, Activity, Agent, Material, and Result, on which the contribution hinges. It comprises bottom-up design considerations made from three scientific domains, viz. Medicine, Computer Science, and Agriculture, which we highlight as case studies. For its implementation in a knowledge graph application, we introduce the idea of building blocks called Knowledge Graph Cells (KGC), which provide the following characteristics: (1) they limit the expressibility of ontologies to what is relevant in a knowledge graph regarding specific concepts on the theme of research contributions; (2) they are expressible via ABox and TBox expressions; (3) they enforce a certain level of data consistency by ensuring that a uniform modeling scheme is followed through rules and input controls; (4) they organize the knowledge graph into named graphs; (5) they provide information for the front end for displaying the knowledge graph in a human-readable form, such as HTML pages; and (6) they can be seamlessly integrated into any existing publishing process that supports form-based input, abstracting its semantic technicalities, including RDF semantification, from the user. Thus, RCM joins the trend of existing work toward enhanced digitalization of scholarly publishing, enabled by RDF semantification as a knowledge graph, fostering the evolution of scholarly publications beyond written text.
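The idea of a cell organizing one contribution's core information units into a named graph can be sketched with plain tuples standing in for RDF triples. The predicate names, the graph identifier, and the example values are hypothetical, not the actual RCM vocabulary:

```python
# The six core information units of the Research Contribution Model.
CORE_UNITS = ["Objective", "Method", "Activity", "Agent", "Material", "Result"]

def contribution_graph(name, units):
    """Build a named graph: one triple per supplied core information unit.

    Rejects units outside the RCM schema, mimicking the input controls
    that Knowledge Graph Cells use to enforce a uniform modeling scheme.
    """
    unknown = set(units) - set(CORE_UNITS)
    if unknown:
        raise ValueError(f"not an RCM core unit: {unknown}")
    triples = [("contribution", f"has{u}", v) for u, v in units.items()]
    return {name: triples}

# Hypothetical contribution of a fake-news detection paper.
graph = contribution_graph(
    "graph:paper-42",  # named-graph identifier (made up for illustration)
    {
        "Objective": "detect fake news claims",
        "Method": "multimodal transformer",
        "Result": "improved claim detection",
    },
)
print(graph)
```

Keeping each contribution in its own named graph, as in characteristic (4) above, makes it straightforward to display or export one contribution independently of the rest of the knowledge graph.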
  • Item
    Falcon 2.0: An Entity and Relation Linking Tool over Wikidata
    (New York City, NY : Association for Computing Machinery, 2020) Sakor, Ahmad; Singh, Kuldeep; Patel, Anery; Vidal, Maria-Esther
The Natural Language Processing (NLP) community has significantly contributed to the solutions for entity and relation recognition from a natural language text, and possibly linking them to proper matches in Knowledge Graphs (KGs). Considering Wikidata as the background KG, there are still limited tools to link knowledge within the text to Wikidata. In this paper, we present Falcon 2.0, the first joint entity and relation linking tool over Wikidata. It receives a short natural language text in the English language and outputs a ranked list of entities and relations annotated with the proper candidates in Wikidata. The candidates are represented by their Internationalized Resource Identifier (IRI) in Wikidata. Falcon 2.0 resorts to the English language model for the recognition task (e.g., N-Gram tiling and N-Gram splitting), and then to an optimization approach for the linking task. We have empirically studied the performance of Falcon 2.0 on Wikidata and concluded that it outperforms all the existing baselines. Falcon 2.0 is open source and can be reused by the community; all the required instructions for Falcon 2.0 are well-documented in our GitHub repository (https://github.com/SDM-TIB/falcon2.0). We also demonstrate an online API, which can be used without any technical expertise. Falcon 2.0 and its background knowledge bases are available as resources at https://labs.tib.eu/falcon/falcon2/.
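Calling the online API from a script might look like the following sketch. The endpoint URL, the `mode` query parameter, and the JSON payload shape are assumptions to be checked against the GitHub documentation; the request is prepared but not sent here:

```python
import json
import urllib.request

# Assumed endpoint of the Falcon 2.0 web API; verify against the repository.
FALCON_URL = "https://labs.tib.eu/falcon/falcon2/api?mode=long"

def build_request(text):
    """Prepare (but do not send) an annotation request for a short text."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        FALCON_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Who painted The Starry Night?")
# Sending it would be: urllib.request.urlopen(req).read(), which should
# return a JSON document listing candidate Wikidata IRIs for entities
# ("The Starry Night") and relations ("painted").
```

Separating request construction from sending keeps the sketch runnable offline and makes the assumed wire format explicit in one place.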