Search Results

Now showing 1–10 of 131
  • Item
    Semi-supervised identification of rarely appearing persons in video by correcting weak labels
    (New York City : Association for Computing Machinery, 2016) Müller, Eric; Otto, Christian; Ewerth, Ralph
    Some recent approaches for character identification in movies and TV broadcasts are realized in a semi-supervised manner by assigning transcripts and/or subtitles to the speakers. However, the labels obtained in this way reach only 80–90% accuracy, and the number of training examples per actor is unevenly distributed. In this paper, we propose a novel approach for person identification in video that corrects and extends the training data with reliable predictions in order to reduce the number of annotation errors. Furthermore, the intra-class diversity of rarely speaking characters is enhanced. To address the imbalance of training data per person, we suggest two complementary prediction scores. These scores are also used to recognize whether or not a face track belongs to a (supporting) character whose identity does not appear in the transcript or subtitles. Experimental results demonstrate the feasibility of the proposed approach, which outperforms the current state of the art.
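The label-correction idea described above can be sketched as a simple self-training loop. The sketch below uses a nearest-centroid classifier over feature vectors and illustrative names; the paper's actual features, classifier, and its two complementary prediction scores are not reproduced here.

```python
import numpy as np

def correct_weak_labels(features, weak_labels, n_classes, threshold=0.8, iters=3):
    """Iteratively re-estimate class centroids from the current labels and
    overwrite a (possibly wrong) weak label when a confident prediction
    disagrees with it. Assumes every class keeps at least one example."""
    labels = weak_labels.copy()
    for _ in range(iters):
        # class centroids from the current (partially corrected) labels
        centroids = np.stack([features[labels == c].mean(axis=0)
                              for c in range(n_classes)])
        # distance-based "confidence": softmax over negative distances
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        pred = p.argmax(axis=1)
        conf = p.max(axis=1)
        # only flip labels where the prediction is confident and disagrees
        flip = (conf > threshold) & (pred != labels)
        labels[flip] = pred[flip]
    return labels
```

With two well-separated clusters and one mislabeled example, the loop flips the wrong label back to the cluster it actually belongs to.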
  • Item
    Quo vadis, VIVO? Stand und Entwicklung
    (Zenodo, 2017) Hauschke, Christian
    Talk presenting the highlights of the VIVO Conference 2017, an overview of VIVO 1.10, and an outlook on TIB's VIVO activities.
  • Item
    Wissenschaftliche Videos im Semantic Web - das AV Portal der TIB in der Linked Open Data Cloud
    (Reutlingen : Berufsverband Information Bibliothek e. V., 2017) Saurbier, Felix
    The German National Library of Science and Technology (TIB) aims to sustainably promote the use and dissemination of its collections and consistently relies on Semantic Web technologies to do so. By providing "Linked Library Data", libraries and information providers can significantly increase the visibility and findability of their holdings: structured data that are interoperable and machine-readable make reuse by third parties considerably easier, and they enable far more differentiated and efficient search queries, supporting library users both in retrieval and in the further processing of the information relevant to them. Against this background, the TIB publishes extensive metadata and subject-indexing data for the scientific films in its AV-Portal in the standardized Resource Description Framework (RDF), thereby offering a new and innovative service for the reuse and linking of its datasets. In our talk we show what added value can be generated from the Linked Open Data technologies employed in the context of audiovisual media, and present the use of Linked Open Data in the TIB AV-Portal. Particular attention is paid, first, to the semantic indexing data generated by automated image, text, and speech recognition; second, to the value-added services built on top of them, such as semantic enrichment with additional relevant information and linking to further resources; and third, to a demonstration of how providing the authoritative as well as time-based, automatically generated metadata as Linked Open Data under a Creative Commons license enables free reuse of the AV-Portal's data by third parties.
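As a rough illustration of what such "Linked Library Data" can look like, the sketch below serializes a film segment's indexing term as RDF N-Triples. The video and segment URIs are invented for the example; only the Dublin Core properties are real vocabulary.

```python
# Minimal N-Triples serialization in pure stdlib Python; the media URIs
# below are hypothetical examples, not actual AV-Portal identifiers.
def ntriple(subj, pred, obj):
    """Serialize one triple; objects starting with 'http' become URI refs,
    everything else becomes a plain string literal."""
    o = f"<{obj}>" if obj.startswith("http") else f'"{obj}"'
    return f"<{subj}> <{pred}> {o} ."

video = "https://av.tib.eu/media/12345"   # hypothetical video URI
segment = video + "#segment-3"            # hypothetical segment URI
triples = [
    ntriple(segment, "http://purl.org/dc/terms/isPartOf", video),
    ntriple(segment, "http://purl.org/dc/terms/subject", "speech recognition"),
]
print("\n".join(triples))
```

Each emitted line is a complete, independently parseable statement, which is what makes this format easy for third parties to reuse.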
  • Item
    Open-Access-Publikationsfonds der Leibniz-Gemeinschaft – Zentraler Fonds für eine dezentrale Forschungsorganisation
    (Zenodo, 2016) Tullney, Marco; Eppelin, Anita
    [no abstract available]
  • Item
    To OER or not to OER? A question for academic libraries
    (Den Haag : IFLA, 2016) Stummeyer, Sabine
    The growing demand for higher education and the ongoing developments in ICT infrastructure have created unique challenges for higher education institutions. Open Educational Resources (OER) were originally created to provide easy access to learning material, in particular to support the education systems of developing countries. Now they can play an important role for higher education institutions by supporting their teaching staff in creating effective teaching and learning environments that encourage greater individual engagement with information. Academic librarians and libraries have a long tradition of providing information to their users. With regard to OER, key roles include creating digital repositories; providing metadata, resource description, and indexing; managing and clearing intellectual property rights; and storing and disseminating OER. New challenges include promoting "openness" and "open resources", and the role that librarians and library professionals play in helping users describe, discover, manage, and disseminate OER, along with the related copyright expertise. As an added value, academic libraries offer infrastructure, trusted relationships, and communities of practice to the OER movement. By integrating collaborative, open cooperation into teaching and research work, the library acts as an "OER knowledge manager" and thereby strengthens its central position in the academic community.
  • Item
    Open Science - Eine Chance für den Fortschritt? ...und etwas #ScholComm-Praxis
    (Hannover : Technische Informationsbibliothek, 2018) Heller, Lambert
    [no abstract available]
  • Item
    Umsetzung des KDSF-Datenmodells in VIVO
    (Zenodo, 2017) Walther, Tatiana; Hauschke, Christian
    As part of the project "Umsetzung Kerndatensatz Forschung in VIVO", the Open Science Lab of the German National Library of Science and Technology (TIB) in Hannover is attempting to integrate the Research Core Dataset (Kerndatensatz Forschung, KDSF) into the research information system VIVO. Draft of the KDSF-VIVO alignment and KDSF-VIVO extension: https://github.com/VIVO-DE/VIVO-KDSF-Integration
  • Item
    “Are machines better than humans in image tagging?” - A user study adds to the puzzle
    (Heidelberg : Springer, 2017) Ewerth, Ralph; Springstein, Matthias; Phan-Vogtmann, Lo An; Schütze, Juliane
    “Do machines perform better than humans in visual recognition tasks?” Not so long ago, this question would have been considered somewhat provoking and the answer would have been clear: “No”. In this paper, we present a comparison of human and machine performance with respect to annotation for multimedia retrieval tasks. Going beyond recent crowdsourcing studies in this respect, we also report the results of two extensive user studies. In total, 23 participants were asked to annotate more than 1,000 images of a benchmark dataset, making this the most comprehensive study in the field so far. Krippendorff’s α is used to measure inter-coder agreement among several coders, and the results are compared with the best machine results. The study is preceded by a summary of previous work comparing human and machine performance in different visual and auditory recognition tasks. We discuss the results and derive a methodology for comparing machine performance in multimedia annotation tasks with human-level performance. This allows us to formally answer the question of whether a recognition problem can be considered solved. Finally, we answer the initial question.
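Krippendorff’s α mentioned in the abstract is defined as 1 − D_o/D_e, observed disagreement over the disagreement expected by chance. The sketch below is a minimal generic implementation for nominal data, not the paper's own evaluation code.

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.
    `units` is a list of per-unit value lists (one value per coder);
    units with fewer than two values are unpairable and are dropped."""
    units = [u for u in units if len(u) >= 2]
    n = sum(len(u) for u in units)
    totals = Counter(v for u in units for v in u)
    # observed disagreement: mismatching ordered pairs within each unit
    d_o = 0.0
    for u in units:
        m = len(u)
        counts = Counter(u)
        matching = sum(c * (c - 1) for c in counts.values())
        d_o += (m * (m - 1) - matching) / (m - 1)
    d_o /= n
    # expected disagreement: mismatching ordered pairs over all values
    d_e = (n * (n - 1) - sum(c * (c - 1) for c in totals.values())) / (n * (n - 1))
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e
```

Perfect agreement across several categories yields α = 1.0, and each within-unit disagreement pulls α down toward (and below) 0.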
  • Item
    TIB AV-Portal: A reliable infrastructure for scientific audiovisual media
    (Prague : National Technical Library, 2016) Plank, Margret
    With the AV-Portal, the German National Library of Science and Technology (TIB), in collaboration with the Hasso Plattner Institute (HPI), has developed a user-oriented platform for scientific films. This portal offers free access to high-quality computer visualisations, simulations, experiments and interviews as well as recordings of lectures and conferences from the fields of science and technology. The automatic video analysis of the TIB AV-Portal includes not only structural analysis (scene recognition), but also text, audio and image analysis. Automatic indexing by the AV-Portal describes videos at the segment level, enabling pinpoint searches within videos. Films are allocated a Digital Object Identifier (DOI), which means they can be referenced unambiguously. Individual film segments are allocated a Media Fragment Identifier (MFID), which enables the video to be referenced down to the second and cited. The creator of the audiovisual media can choose between an Open Access licence and a declaration of consent, deciding how they wish to permit TIB to use the material. TIB recommends the “CC-Namensnennung – Deutschland 3.0” licence (CC BY 3.0 DE), which ensures that the creator is acknowledged and permits the comprehensive use of audiovisual media in research and teaching.
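The second-precise citation mechanism described above rests on W3C Media Fragment URIs, which append a temporal selector such as #t=20,35 (seconds) to the media URL. A minimal helper, with an invented example URL, might look like this:

```python
def media_fragment(url, start=None, end=None):
    """Append a W3C temporal Media Fragment (#t=start,end) to a video URL,
    so a citation can point at a second-precise segment.
    "#t=20,35" selects seconds 20-35; "#t=20" runs to the end;
    "#t=,35" runs from the start."""
    if start is None and end is None:
        return url
    start_s = "" if start is None else f"{start:g}"
    end_s = "" if end is None else f",{end:g}"
    return f"{url}#t={start_s}{end_s}"
```

Combined with a DOI that resolves to the video's landing page, such a fragment makes an individual segment citable rather than just the film as a whole.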
  • Item
    Do researchers need to care about PID systems?
    (Zenodo, 2018) Kraft, Angelina; Dreyer, Britta
    A survey of 1,400 scientists in the natural sciences and engineering across Germany, conducted in 2016, revealed that although more than 70% of the researchers use DOIs for journal publications, fewer than 10% use DOIs for research data. Asked why they do not, more than half (56%) answered that they do not know about the option to use DOIs for other publications (datasets, conference papers, etc.). It is therefore not surprising that the majority (57%) stated that they had no need for DOI counselling services; 40% of the surveyed researchers need more information, and almost 30% cannot see a benefit. Publishers have been using PID systems for articles for years, and DOI registration and citation are a natural part of the standard publication workflow. In the new digital age, the possibilities for publishing digital research objects beyond articles are greater than ever, but the respective infrastructure providers are still struggling to provide integrated PID services. Infrastructure providers need to learn from publishers and offer integrated PID services that complement existing workflows and use researchers' vocabulary to support usability and promotion. Sell the benefit and enable researchers to focus on what they are best at: doing research (and not worrying about the rest)!