Search Results

Now showing 1 - 10 of 21

An AI-based open recommender system for personalized labor market driven education

2022, Tavakoli, Mohammadreza, Faraji, Abdolali, Vrolijk, Jarno, Molavi, Mohammadreza, Mol, Stefan T., Kismihók, Gábor

Attaining those skills that match labor market demand is getting increasingly complicated, not least in engineering education, as prerequisite knowledge, skills, and abilities are evolving dynamically through an uncontrollable and seemingly unpredictable process. Anticipating and addressing such dynamism is a fundamental challenge to twenty-first century education. The burgeoning availability of data, not only on the demand side but also on the supply side (in the form of open educational resources), coupled with smart technologies, may provide a fertile ground for addressing this challenge. In this paper, we propose a novel, Artificial Intelligence (AI) driven approach to the development of an open, personalized, and labor market oriented learning recommender system, called eDoer. We discuss the complete system development cycle, starting with systematic user requirements gathering, followed by system design, implementation, and validation. Our recommender prototype (1) derives the skill requirements for particular occupations through an analysis of online job vacancy announcements
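Step (1) of the prototype can be illustrated with a minimal sketch: deriving skill demand for an occupation by counting skill mentions across vacancy texts. The skill list and postings below are made up for illustration; eDoer's actual extraction is AI-based, not a keyword count.

```python
from collections import Counter

# Hypothetical skill vocabulary and vacancy texts (not eDoer's data).
skills = ["python", "sql", "statistics", "communication"]
vacancies = [
    "data analyst with python and sql experience",
    "python developer, statistics a plus",
    "analyst role: sql, statistics, communication skills",
]

# Count in how many postings each skill is mentioned.
demand = Counter(
    s for text in vacancies for s in skills if s in text.lower()
)
top = [s for s, _ in demand.most_common()]
```

Ranking skills by frequency of mention gives a crude proxy for labor market demand, which a recommender can then map to learning resources.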


Transforming the study of organisms: Phenomic data models and knowledge bases

2020, Thessen, Anne E., Walls, Ramona L., Vogt, Lars, Singer, Jessica, Warren, Robert, Buttigieg, Pier Luigi, Balhoff, James P., Mungall, Christopher J., McGuinness, Deborah L., Stucky, Brian J., Yoder, Matthew J., Haendel, Melissa A.

The rapidly decreasing cost of gene sequencing has resulted in a deluge of genomic data from across the tree of life; however, outside a few model organism databases, genomic data are limited in their scientific impact because they are not accompanied by computable phenomic data. The majority of phenomic data are contained in countless small, heterogeneous phenotypic data sets that are very difficult or impossible to integrate at scale because of variable formats, lack of digitization, and linguistic problems. One powerful solution is to represent phenotypic data using data models with precise, computable semantics, but adoption of semantic standards for representing phenotypic data has been slow, especially in biodiversity and ecology. Some phenotypic and trait data are available in a semantic language from knowledge bases, but these are often not interoperable. In this review, we will compare and contrast existing ontology and data models, focusing on nonhuman phenotypes and traits. We discuss barriers to integration of phenotypic data and make recommendations for developing an operationally useful, semantically interoperable phenotypic data ecosystem.


Ranking facts for explaining answers to elementary science questions

2023, D’Souza, Jennifer, Mulang, Isaiah Onando, Auer, Sören

In multiple-choice exams, students select one answer from among typically four choices and can explain why they made that particular choice. Students are good at understanding natural language questions and, based on their domain knowledge, can easily infer the question's answer by “connecting the dots” across various pertinent facts. Considering automated reasoning for elementary science question answering, we address the novel task of generating explanations for answers from human-authored facts. For this, we examine the practically scalable framework of feature-rich support vector machines leveraging domain-targeted, hand-crafted features. Explanations are created from a human-annotated set of nearly 5000 candidate facts in the WorldTree corpus. Our aim is to obtain better matches for valid facts of an explanation for the correct answer of a question over the available fact candidates. To this end, our features offer a comprehensive linguistic and semantic unification paradigm. The machine learning problem is the preference ordering of facts, for which we test pointwise regression versus pairwise learning-to-rank. Our contributions, originating from comprehensive evaluations against nine existing systems, are (1) a case study in which two preference ordering approaches are systematically compared, and where the pointwise approach is shown to outperform the pairwise approach, thus adding to the existing survey of observations on this topic; (2) since our system outperforms a highly effective TF-IDF-based IR technique by 3.5 and 4.9 points on the development and test sets, respectively, it demonstrates some of the further task improvement possibilities (e.g., in terms of an efficient learning algorithm, semantic features) on this task; (3) it is a practically competent approach that can outperform some variants of BERT-based reranking models; and (4) the human-engineered features make it an interpretable machine learning model for the task.
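The pointwise-versus-pairwise distinction can be sketched with a toy example. The facts, question, and lexical-overlap score below are hypothetical stand-ins; the paper uses feature-rich SVMs over the WorldTree corpus, not this simplified setup.

```python
# Pointwise: score each candidate fact independently, then sort.
def pointwise_rank(facts, score_fn):
    return sorted(facts, key=score_fn, reverse=True)

# Pairwise: apply a comparator to pairs of facts, then order facts
# by their number of pairwise "wins" (a simple Copeland-style tally).
def pairwise_rank(facts, prefer):
    wins = {f: sum(prefer(f, g) for g in facts if g != f) for f in facts}
    return sorted(facts, key=lambda f: wins[f], reverse=True)

# Hypothetical question and candidate facts.
question = {"why", "does", "ice", "float", "on", "water"}
facts = [
    "ice is less dense than liquid water",
    "water is a liquid at room temperature",
    "less dense objects float on more dense liquids",
]

# Toy scoring function: word overlap with the question.
def overlap(fact):
    return len(set(fact.split()) & question)

ranked_pointwise = pointwise_rank(facts, overlap)
ranked_pairwise = pairwise_rank(facts, lambda f, g: overlap(f) > overlap(g))
```

With a single shared score the two orderings coincide; the approaches diverge once the pairwise comparator is learned from pairs directly rather than derived from one pointwise score.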


Multimodal news analytics using measures of cross-modal entity and context consistency

2021, Müller-Budack, Eric, Theiner, Jonas, Diering, Sebastian, Idahl, Maximilian, Hakimov, Sherzod, Ewerth, Ralph

The World Wide Web has become a popular source for gathering information and news. Multimodal information, e.g., text supplemented with photographs, is typically used to convey the news more effectively or to attract attention. The photographs can be decorative or depict additional details, but might also contain misleading information. The quantification of the cross-modal consistency of entity representations can assist human assessors’ evaluation of the overall multimodal message. In some cases such measures might give hints to detect fake news, which is an increasingly important topic in today’s society. In this paper, we present a multimodal approach to quantify the entity coherence between image and text in real-world news. Named entity linking is applied to extract persons, locations, and events from news texts. Several measures are suggested to calculate the cross-modal similarity of the entities in text and photograph by exploiting state-of-the-art computer vision approaches. In contrast to previous work, our system automatically acquires example data from the Web and is applicable to real-world news. Moreover, an approach that quantifies contextual image-text relations is introduced. The feasibility of the approach is demonstrated on two datasets covering different languages, topics, and domains.
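A minimal sketch of such a cross-modal entity-consistency measure: the fraction of entities mentioned in the news text that can also be verified in the photograph. The entity sets below are hypothetical; in the paper, the text side comes from named entity linking and the image side from computer vision models, and the actual measures are more refined than this ratio.

```python
# Share of text entities that are also verified in the image.
def entity_consistency(text_entities, image_entities):
    if not text_entities:
        return 0.0
    return len(text_entities & image_entities) / len(text_entities)

# Hypothetical extraction results for one news article.
text_entities = {"Angela Merkel", "Berlin", "G20 summit"}
image_entities = {"Angela Merkel", "Berlin"}

score = entity_consistency(text_entities, image_entities)
```

A low score does not prove manipulation, but flags articles whose imagery supports few of the entities the text asserts, which is the kind of hint to human assessors the abstract describes.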


Personalised information spaces for chemical digital libraries

2009, Koepler, O., Balke, W.-T., Köncke, B., Tönnies, S.

[No abstract available]


Deutschsprachige Game Studies 2021 – 2031: eine Vorausschau

2021, Inderst, Rudolf, Heller, Lambert

Rudolf Inderst and Lambert Heller raise the fundamental question of whether text is the right form at all for engaging with digital games academically. They advocate establishing and using the video essay as a format, arguing that its audiovisual materiality is better suited to the subject matter.


Analysing the requirements for an Open Research Knowledge Graph: use cases, quality requirements, and construction strategies

2021, Brack, Arthur, Hoppe, Anett, Stocker, Markus, Auer, Sören, Ewerth, Ralph

Current science communication has a number of drawbacks and bottlenecks which have been the subject of discussion lately: Among others, the rising number of published articles makes it nearly impossible to get a full overview of the state of the art in a certain field, and reproducibility is hampered by fixed-length, document-based publications which normally cannot cover all details of a research work. Recently, several initiatives have proposed knowledge graphs (KG) for organising scientific information as a solution to many of the current issues. The focus of these proposals is, however, usually restricted to very specific use cases. In this paper, we aim to transcend this limited perspective and present a comprehensive analysis of requirements for an Open Research Knowledge Graph (ORKG) by (a) collecting and reviewing daily core tasks of a scientist, (b) establishing their consequential requirements for a KG-based system, and (c) identifying overlaps and specificities, and their coverage in current solutions. As a result, we map necessary and desirable requirements for successful KG-based science communication, derive implications, and outline possible solutions.


Information extraction pipelines for knowledge graphs

2023, Jaradeh, Mohamad Yaser, Singh, Kuldeep, Stocker, Markus, Both, Andreas, Auer, Sören

In the last decade, a large number of knowledge graph (KG) completion approaches were proposed. Albeit effective, these efforts are disjoint, and their collective strengths and weaknesses in effective KG completion have not been studied in the literature. We extend Plumber, a framework that brings together the research community’s disjoint efforts on KG completion. We include more components into the architecture of Plumber to comprise 40 reusable components for various KG completion subtasks, such as coreference resolution, entity linking, and relation extraction. Using these components, Plumber dynamically generates suitable knowledge extraction pipelines and offers a total of 432 distinct pipelines. We study the optimization problem of choosing optimal pipelines based on input sentences. To do so, we train a transformer-based classification model that extracts contextual embeddings from the input and finds an appropriate pipeline. We study the efficacy of Plumber for extracting KG triples using standard datasets over three KGs: DBpedia, Wikidata, and the Open Research Knowledge Graph. Our results demonstrate the effectiveness of Plumber in dynamically generating KG completion pipelines, outperforming all baselines regardless of the underlying KG. Furthermore, we provide an analysis of collective failure cases, study the similarities and synergies among integrated components, and discuss their limitations.
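The idea of per-input pipeline selection can be sketched in miniature. In the paper the selector is a transformer-based classifier over contextual embeddings; the stand-in below is a trivial pronoun heuristic, and the component and pipeline names are illustrative, not Plumber's actual 40 components.

```python
# Illustrative pipelines built from reusable components (names made up).
PIPELINES = {
    "simple": ["entity_linker", "relation_extractor"],
    "complex": ["coreference_resolver", "entity_linker", "relation_extractor"],
}

def select_pipeline(sentence: str) -> list[str]:
    # Stand-in for the learned classifier: sentences containing
    # pronouns likely need coreference resolution before linking.
    pronouns = {"he", "she", "it", "they", "his", "her", "its", "their"}
    tokens = {t.strip(".,").lower() for t in sentence.split()}
    key = "complex" if tokens & pronouns else "simple"
    return PIPELINES[key]

p1 = select_pipeline("Einstein was born in Ulm.")
p2 = select_pipeline("Einstein moved to Bern, where he worked at the patent office.")
```

The point of the learned selector is exactly to replace such brittle hand rules: given 40 components and 432 candidate pipelines, the classifier maps each input sentence to the pipeline expected to extract the best triples.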


Compact representations for efficient storage of semantic sensor data

2021, Karim, Farah, Vidal, Maria-Esther, Auer, Sören

Nowadays, the amount of sensor data generated by a wide variety of sensors and devices is increasing rapidly. Data semantics facilitate information exchange, adaptability, and interoperability among several sensors and devices. Sensor data and their meaning can be described using ontologies, e.g., the Semantic Sensor Network (SSN) Ontology. However, once semantically enriched, sensor data become substantially larger than the raw data. Moreover, some measurement values can be observed by sensors several times, producing a huge number of repeated facts about sensor data. We propose a compact or factorized representation of semantic sensor data, where repeated measurement values are described only once. Furthermore, these compact representations are able to enhance the storage and processing of semantic sensor data. To scale up to large datasets, factorization-based tabular representations are exploited to store and manage factorized semantic sensor data using Big Data technologies. We empirically study the effectiveness of the proposed compact representations of semantic sensor data and their impact on query processing. Additionally, we evaluate the effects of storing the proposed representations on diverse RDF implementations. Results suggest that the proposed compact representations empower the storage and query processing of sensor data over diverse RDF implementations and can reduce query execution time by up to two orders of magnitude.
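The factorization idea can be sketched with triple-like tuples: when a measurement value description (literal, unit, type) recurs across observations, state it once and let observations reference it. Predicate and node names below are illustrative, not the SSN Ontology's actual terms, and real RDF engines store triples differently.

```python
# Four observations; the value 21.5 is measured three times.
observations = [
    ("obs1", "sensorA", 21.5),
    ("obs2", "sensorB", 21.5),
    ("obs3", "sensorA", 21.5),
    ("obs4", "sensorC", 19.0),
]

# Plain representation: the full value description is repeated
# for every observation.
plain = []
for obs, sensor, value in observations:
    plain += [
        (obs, "observedBy", sensor),
        (obs, "hasValue", value),
        (obs, "unit", "degreeCelsius"),
        (obs, "valueType", "Temperature"),
    ]

# Factorized representation: each distinct value description is
# stated once; every observation just references its value node.
distinct = sorted({v for _, _, v in observations})
vid = {v: f"val{i}" for i, v in enumerate(distinct)}
factorized = []
for v in distinct:
    factorized += [
        (vid[v], "hasValue", v),
        (vid[v], "unit", "degreeCelsius"),
        (vid[v], "valueType", "Temperature"),
    ]
for obs, sensor, value in observations:
    factorized += [
        (obs, "observedBy", sensor),
        (obs, "usesValue", vid[value]),
    ]
```

Even in this tiny example the factorized form is smaller (14 vs. 16 tuples); the savings grow with the number of repeated measurement values and with the richness of each value description.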


Survey vs Scraped Data: Comparing Time Series Properties of Web and Survey Vacancy Data

2019, De Pedraza, P., Visintin, S., Tijdens, K., Kismihók, G.

This paper studies the relationship between a vacancy population obtained from web crawling and vacancies in the economy inferred by a National Statistics Office (NSO) using a traditional method. We compare the time series properties of samples obtained between 2007 and 2014 by Statistics Netherlands and by a web scraping company. We find that the web and NSO vacancy data present similar time series properties, suggesting that both time series are generated by the same underlying phenomenon: the real number of new vacancies in the economy. We conclude that, in our case study, web-sourced data are able to capture aggregate economic activity in the labor market.
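The kind of comparison described above can be sketched as checking whether a web-scraped vacancy series and an NSO survey series co-move. The series below are synthetic and the correlation is only one of several time series properties one would examine; the paper works with Dutch vacancy data for 2007 to 2014.

```python
import math

# Pearson correlation between two equal-length series.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic quarterly vacancy counts sharing one underlying cycle;
# the scraped series runs at a different scale plus noise.
nso = [100, 110, 125, 118, 90, 70, 75, 85]
web = [210, 228, 262, 240, 185, 150, 158, 176]

r = pearson(nso, web)
```

A correlation near 1 despite the level difference is consistent with the paper's conclusion: both series are driven by the same underlying phenomenon, the real number of new vacancies in the economy.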