Search Results

Showing results 1-10 of 12
  • Item
    ORCID Germany Consortium - Numbers and Figures
    (Meyrin : CERN, 2018) Vierkant, Paul; Pampel, Heinz; Bertelmann, Roland; Dreyer, Britta
    This poster shows the development and status of the ORCID Germany Consortium.
  • Item
    IK und KI - ein Herz und eine Seele: Ein Streit über künstliche Intelligenz im Kontext von Informationskompetenz [IL and AI, two of a kind: a dispute over artificial intelligence in the context of information literacy]
    (Frankfurt, Main : DGI ; Berlin ; New York, NY : de Gruyter, 2019) Burblies, Christine; Pianos, Tamara
    [no abstract available]
  • Item
    Why reinvent the wheel: Let's build question answering systems together
    (New York City : Association for Computing Machinery, 2018) Singh, K.; Radhakrishna, A.S.; Both, A.; Shekarpour, S.; Lytra, I.; Usbeck, R.; Vyas, A.; Khikmatullaev, A.; Punjani, D.; Lange, C.; Vidal, Maria-Esther; Lehmann, J.; Auer, Sören
    Modern question answering (QA) systems need to flexibly integrate a number of components specialised to fulfil specific tasks in a QA pipeline. Key QA tasks include Named Entity Recognition and Disambiguation, Relation Extraction, and Query Building. Since a number of different software components exist that implement different strategies for each of these tasks, it is a major challenge to select and combine the most suitable components into a QA system, given the characteristics of a question. We study this optimisation problem and train classifiers that take features of a question as input and optimise the selection of QA components based on those features. We then devise a greedy algorithm to identify the pipelines that include the most suitable components and can effectively answer the given question. We implement this model within Frankenstein, a QA framework able to select QA components and compose QA pipelines. We evaluate the effectiveness of the pipelines generated by Frankenstein using the QALD and LC-QuAD benchmarks. The results suggest that Frankenstein not only precisely solves the QA optimisation problem but also enables the automatic composition of optimised QA pipelines, which outperform the static baseline QA pipeline. Thanks to this flexible and fully automated pipeline generation process, new QA components can easily be included in Frankenstein, thus improving the performance of the generated pipelines.
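    The greedy composition step can be pictured with a small sketch. Below is a minimal, hypothetical Python illustration: the per-task classifier scores are placeholder numbers, and the component names and score table are assumptions for the example, not the actual Frankenstein API.

      # Hypothetical sketch of the greedy composition idea; component
      # names and the score table are assumptions, not the Frankenstein API.
      QA_TASKS = ["NED", "RE", "QB"]  # entity disambiguation, relation extraction, query building

      # A per-task classifier would map question features to a predicted
      # performance score per component; placeholder scores stand in here.
      COMPONENT_SCORES = {
          "NED": {"DBpediaSpotlight": 0.71, "TagMe": 0.64},
          "RE":  {"RelMatch": 0.58, "ReMatch": 0.66},
          "QB":  {"SQG": 0.73, "NLIWOD-QB": 0.61},
      }

      def compose_pipeline(scores):
          """Greedily pick the highest-scoring component for each QA task."""
          return {task: max(scores[task], key=scores[task].get) for task in QA_TASKS}

      print(compose_pipeline(COMPONENT_SCORES))
      # {'NED': 'DBpediaSpotlight', 'RE': 'ReMatch', 'QB': 'SQG'}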
  • Item
    “When was this picture taken?” – Image date estimation in the wild
    (Berlin : Springer Verlag, 2017) Müller, E.; Springstein, M.; Ewerth, R.
    The problem of automatically estimating the creation date of photos has rarely been addressed in the past. In this paper, we introduce a novel dataset, Date Estimation in the Wild, for the task of predicting the acquisition year of images captured in the period from 1930 to 1999. In contrast to previous work, the dataset is neither restricted to color photography nor to specific visual concepts. The dataset consists of more than one million images crawled from Flickr and contains a large number of different motifs. In addition, we propose two baseline approaches for regression and classification, respectively, relying on state-of-the-art deep convolutional neural networks. Experimental results demonstrate that these baselines are already superior to the annotations of untrained humans.
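    As a rough illustration of the classification baseline, the sketch below frames year prediction as a 70-class problem (one class per year from 1930 to 1999) on top of a standard CNN. The choice of ResNet-50 and all other details are assumptions for this sketch, not necessarily the architecture the authors used.

      # Minimal sketch, assuming a ResNet-50 backbone (torchvision >= 0.13);
      # the 70 output classes are the acquisition years 1930-1999.
      import torch
      import torch.nn as nn
      from torchvision import models

      NUM_YEARS = 1999 - 1930 + 1  # one class per year

      model = models.resnet50(weights=None)
      model.fc = nn.Linear(model.fc.in_features, NUM_YEARS)

      def predict_year(image_batch):
          """Map class logits back to calendar years."""
          logits = model(image_batch)            # shape: (batch, 70)
          return logits.argmax(dim=1) + 1930     # predicted acquisition year

      years = predict_year(torch.randn(2, 3, 224, 224))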
  • Item
    When humans and machines collaborate: Cross-lingual Label Editing in Wikidata
    (New York City : Association for Computing Machinery, 2019) Kaffee, L.-A.; Endris, K.M.; Simperl, E.
    The quality and maintainability of a knowledge graph are determined by the process in which it is created. There are different approaches to such processes: extraction or conversion of data available on the web (automated extraction of knowledge, such as DBpedia from Wikipedia), community-created knowledge graphs, often built by a group of experts, and hybrid approaches where humans maintain the knowledge graph alongside bots. In this work, we focus on the hybrid approach of human-edited knowledge graphs supported by automated tools. In particular, we analyse the editing of natural language data, i.e. labels. Labels are the entry point for humans to understand the information and therefore need to be carefully maintained. We take a step toward understanding the collaborative editing of humans and automated tools across languages in a knowledge graph. We use Wikidata, as it has a large and active community of humans and bots working together, covering over 300 languages. We analyse the different editor groups and how they interact with the different language data to understand the provenance of the current label data.
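    One building block of such an analysis can be sketched against the public Wikidata API: fetch recent revisions of an item, keep those whose edit summary marks a label edit, and split them by editor group. The bot/human heuristic below (matching "bot" in the user name) is a simplifying assumption; the study itself distinguishes editor groups more carefully.

      # Sketch: label edits on one item, split into bot vs. human editors.
      import requests

      API = "https://www.wikidata.org/w/api.php"

      def label_edit_groups(qid, limit=50):
          params = {
              "action": "query", "format": "json", "prop": "revisions",
              "titles": qid, "rvprop": "user|comment", "rvlimit": limit,
          }
          pages = requests.get(API, params=params).json()["query"]["pages"]
          page = next(iter(pages.values()))
          groups = {"bot": [], "human": []}
          for rev in page.get("revisions", []):
              # Label edits carry a "wbsetlabel" marker in the edit summary.
              if "wbsetlabel" not in rev.get("comment", ""):
                  continue
              # Simplifying assumption: bots are recognisable by their name.
              group = "bot" if "bot" in rev["user"].lower() else "human"
              groups[group].append(rev)
          return groups

      print({g: len(revs) for g, revs in label_edit_groups("Q42").items()})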
  • Item
    EVENTSKG: A 5-Star Dataset of Top-Ranked Events in Eight Computer Science Communities
    (Berlin ; Heidelberg : Springer, 2019) Fathalla, Said; Lange, Christoph; Auer, Sören; Hitzler, Pascal; Fernández, Miriam; Janowicz, Krzysztof; Zaveri, Amrapali; Gray, Alasdair J.G.; Lopez, Vanessa; Haller, Armin; Hammar, Karl
    Metadata of scientific events has become increasingly available on the Web, albeit often as raw data in various formats that disregard its semantics and interlinking relations. This restricts the usability of the data for, e.g., subsequent analyses and reasoning. Therefore, there is a pressing need to represent this data semantically, i.e., as Linked Data. We present the new release of the EVENTSKG dataset, comprising comprehensive semantic descriptions of scientific events of eight computer science communities. Currently, EVENTSKG is a 5-star dataset containing metadata of 73 top-ranked event series (almost 2,000 events) established over the last five decades. The new release is a Linked Open Dataset adhering to an updated version of the Scientific Events Ontology, a reference ontology for representing event metadata, leading to richer and cleaner data. To facilitate the maintenance of EVENTSKG and to ensure its sustainability, it is coupled with a Java API that enables users to add and update event metadata without going into the details of how the dataset is represented. We shed light on the characteristics of renowned CS events by analyzing the EVENTSKG data, which provides a flexible means for customised analyses.
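    To give a feel for how such a Linked Open Dataset can be consumed, here is an illustrative rdflib/SPARQL sketch. The local file name and the property IRIs from the Scientific Events Ontology are assumptions made for the example.

      # Illustrative only: file name and SEO property IRIs are assumptions.
      from rdflib import Graph

      g = Graph()
      g.parse("eventskg.ttl", format="turtle")  # hypothetical local dump

      query = """
      PREFIX seo: <http://w3id.org/seo#>
      SELECT ?series ?acronym WHERE {
          ?series a seo:EventSeries ;
                  seo:acronym ?acronym .
      } LIMIT 10
      """
      for row in g.query(query):
          print(row.series, row.acronym)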
  • Item
    The Research Core Dataset (KDSF) in the Linked Data context
    (Amsterdam [u.a.] : Elsevier, 2019) Walther, Tatiana; Hauschke, Christian; Kasprzik, Anna; Sicilia, Miguel-Angel; Simons, Ed; Clements, Anna; de Castro, Pablo; Bergström, Johan
    This paper describes our efforts to implement the Research Core Dataset ("Kerndatensatz Forschung"; KDSF) as an ontology in VIVO. KDSF is used in VIVO to record the required metadata on incoming data and to produce reports as output. While both processes need an elaborate adaptation of the KDSF specification, this paper focusses on the adaptation of the KDSF basic data model for recording data in VIVO. In this context, the VIVO and KDSF ontologies were compared with respect to domain, syntax, structure, and granularity in order to identify correspondences and mismatches. To produce an alignment, different matching approaches were applied. Furthermore, we made the necessary modifications and extensions to KDSF classes and properties.
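    A minimal sketch of the label-based part of such a matching step is shown below: collect the class labels of both ontologies with rdflib and intersect them by exact string match. The file paths are placeholders, and the actual alignment also weighed domain, syntax, structure, and granularity as described above.

      # Exact-label matching between two ontology files (paths are placeholders).
      from rdflib import Graph
      from rdflib.namespace import OWL, RDF, RDFS

      def class_labels(path):
          g = Graph().parse(path)
          return {str(label).lower(): cls
                  for cls in g.subjects(RDF.type, OWL.Class)
                  for label in g.objects(cls, RDFS.label)}

      kdsf = class_labels("kdsf.owl")  # placeholder paths
      vivo = class_labels("vivo.owl")
      matches = {lbl: (kdsf[lbl], vivo[lbl]) for lbl in kdsf.keys() & vivo.keys()}
      print(f"{len(matches)} exact label correspondences")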
  • Item
    Figures in Scientific Open Access Publications
    (New York, NY : Springer, 2018) Sohmen, Lucia; Charbonnier, Jean; Blümel, Ina; Wartena, Christian; Heller, Lambert; Méndez, E.; Crestani, F.; Ribeiro, C.; David, G.; Lopes, J.
    This paper summarizes the results of a comprehensive statistical analysis of a corpus of open access articles and the figures they contain. It gives insight into quantitative relationships between illustrations or types of illustrations, caption lengths, subjects, publishers, author affiliations, article citations, and other factors.
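    One such quantitative relationship can be sketched in a few lines of pandas, e.g., correlating the number of figures per article with citation counts. The CSV file and its column names below are hypothetical stand-ins for the analysed corpus.

      # Hypothetical corpus export; column names are assumptions.
      import pandas as pd

      df = pd.read_csv("figures_corpus.csv")  # one row per extracted figure
      per_article = df.groupby("article_id").agg(
          n_figures=("figure_id", "count"),
          citations=("citations", "first"),
      )
      print(per_article["n_figures"].corr(per_article["citations"]))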
  • Item
    SemSur: A Core Ontology for the Semantic Representation of Research Findings
    (Amsterdam [u.a.] : Elsevier, 2018) Fathalla, Said; Vahdati, Sahar; Auer, Sören; Lange, Christoph; Fensel, Anna; de Boer, Victor; Pellegrini, Tassilo; Kiesling, Elmar; Haslhofer, Bernhard; Hollink, Laura; Schindler, Alexander
    The way research is communicated through text publications has not changed much over the past decades. We have the vision that ultimately researchers will work on a common structured knowledge base comprising comprehensive semantic and machine-comprehensible descriptions of their research, thus making research contributions more transparent and comparable. We present the SemSur ontology for semantically capturing the information commonly found in survey and review articles. SemSur can represent scientific results, publish them in a comprehensive knowledge graph that provides an efficient overview of a research field, and compare research findings with related work in a structured way, thus saving researchers a significant amount of time and effort. The new release of SemSur covers more domains, defines better alignment with external ontologies, and adds rules for eliciting implicit knowledge. We discuss possible applications and present an evaluation of our approach with the retrospective, exemplary semantification of a survey. We demonstrate the utility of the SemSur ontology to answer queries about the different research contributions covered by the survey. SemSur is currently used and maintained at OpenResearch.org.
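    The kind of query mentioned at the end of the abstract could, for instance, be posed in SPARQL over a SemSur-based graph. The sketch below is illustrative only; the file name and the semsur: terms are assumptions about the ontology's vocabulary.

      # Illustrative competency query; semsur: terms are assumptions.
      from rdflib import Graph

      g = Graph().parse("semsur_survey.ttl")  # hypothetical semantified survey

      q = """
      PREFIX semsur: <http://purl.org/SemSur/>
      SELECT ?approach ?problem WHERE {
          ?approach a semsur:Approach ;
                    semsur:addressesProblem ?problem .
      }
      """
      for approach, problem in g.query(q):
          print(approach, "->", problem)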
  • Item
    Interaction Network Analysis Using Semantic Similarity Based on Translation Embeddings
    (Berlin ; Heidelberg : Springer, 2019) Manzoor Bajwa, Awais; Collarana, Diego; Vidal, Maria-Esther; Acosta, Maribel; Cudré-Mauroux, Philippe; Maleshkova, Maria; Pellegrini, Tassilo; Sack, Harald; Sure-Vetter, York
    Biomedical knowledge graphs such as STITCH, SIDER, and DrugBank provide the basis for the discovery of associations between biomedical entities, e.g., interactions between drugs and targets. Link prediction is a paramount task and represents a building block for supporting knowledge discovery. Although several approaches have been proposed for effectively predicting links, the role of semantics has not been studied in depth. In this work, we tackle the problem of discovering interactions between drugs and targets, and propose SimTransE, a machine-learning-based approach that solves this problem effectively. SimTransE relies on translation embeddings to model drug-target interactions and similarity values across them. Grounded in this vectorial representation of drug-target interactions, SimTransE is able to discover novel drug-target interactions. We empirically study SimTransE using state-of-the-art benchmarks and approaches. Experimental results suggest that SimTransE is competitive with the state of the art, thus representing an effective alternative for knowledge discovery in the biomedical domain.
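    The TransE-style scoring that SimTransE builds on can be shown in a few lines: a triple (drug, interactsWith, target) is considered plausible when the drug vector translated by the relation vector lands close to the target vector. The sketch below uses toy vectors and omits the similarity term that SimTransE adds on top.

      # Toy TransE scoring: lower distance = more plausible link.
      import numpy as np

      rng = np.random.default_rng(0)
      dim = 50
      drug, relation, target = (rng.normal(size=dim) for _ in range(3))

      def transe_score(head, rel, tail):
          """L2 distance of head + rel from tail (TransE plausibility score)."""
          return np.linalg.norm(head + rel - tail)

      print(transe_score(drug, relation, target))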