Search Results

Now showing 1 - 5 of 5

OpenBudgets.eu: A platform for semantically representing and analyzing open fiscal data

2018, Musyaffa, Fathoni A., Halilaj, Lavdim, Li, Yakun, Orlandi, Fabrizio, Jabeen, Hajira, Auer, Sören, Vidal, Maria-Esther

A paper describing the details of the OpenBudgets.eu platform implementation. Pre-print version of the paper accepted at the International Conference on Web Engineering (ICWE) 2018 in Cáceres, Spain.


Experience: Open fiscal datasets, common issues, and recommendations

2018, Musyaffa, Fathoni A., Engels, Christiane, Vidal, Maria-Esther, Orlandi, Fabrizio, Auer, Sören

A pre-print paper detailing recommendations for publishing fiscal data, including an assessment framework for fiscal datasets. This paper was accepted by the ACM Journal of Data and Information Quality (JDIQ) in 2018.


Why reinvent the wheel: Let's build question answering systems together

2018, Singh, K., Radhakrishna, A.S., Both, A., Shekarpour, S., Lytra, I., Usbeck, R., Vyas, A., Khikmatullaev, A., Punjani, D., Lange, C., Vidal, Maria-Esther, Lehmann, J., Auer, Sören

Modern question answering (QA) systems need to flexibly integrate a number of components specialised to fulfil specific tasks in a QA pipeline. Key QA tasks include Named Entity Recognition and Disambiguation, Relation Extraction, and Query Building. Since a number of different software components exist that implement different strategies for each of these tasks, it is a major challenge to select and combine the most suitable components into a QA system, given the characteristics of a question. We study this optimisation problem and train classifiers, which take features of a question as input and have the goal of optimising the selection of QA components based on those features. We then devise a greedy algorithm to identify the pipelines that include the suitable components and can effectively answer the given question. We implement this model within Frankenstein, a QA framework able to select QA components and compose QA pipelines. We evaluate the effectiveness of the pipelines generated by Frankenstein using the QALD and LC-QuAD benchmarks. These results not only suggest that Frankenstein precisely solves the QA optimisation problem but also enables the automatic composition of optimised QA pipelines, which outperform the static Baseline QA pipeline. Thanks to this flexible and fully automated pipeline generation process, new QA components can be easily included in Frankenstein, thus improving the performance of the generated pipelines.
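The component-selection step described above can be sketched as follows. This is a toy illustration, not the actual Frankenstein implementation: the component names are examples of the kind of QA components the framework integrates, and the fixed scores stand in for the outputs of the trained per-component classifiers.

```python
# Hypothetical sketch of greedy QA pipeline composition: for each task,
# pick the component with the highest predicted performance for the
# given question. Names and scores are illustrative only.

TASKS = ["NER/NED", "Relation Extraction", "Query Building"]

def predict_scores(question_features):
    # In Frankenstein these scores come from classifiers trained on
    # question features; here we return fixed illustrative numbers.
    return {
        "NER/NED": {"DBpediaSpotlight": 0.7, "TagMe": 0.6},
        "Relation Extraction": {"RelMatch": 0.5, "ReMatch": 0.8},
        "Query Building": {"SQG": 0.9, "NLIWOD-QB": 0.4},
    }

def greedy_pipeline(question_features):
    """Greedily select the best-scoring component for each QA task."""
    scores = predict_scores(question_features)
    return [max(scores[task], key=scores[task].get) for task in TASKS]

print(greedy_pipeline({"length": 7, "has_entity": True}))
# ['DBpediaSpotlight', 'ReMatch', 'SQG']
```

Because selection is per task and score-driven, a newly registered component only needs its own classifier scores to participate in future pipelines, which mirrors the extensibility claim in the abstract.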


Formalizing Gremlin pattern matching traversals in an integrated graph algebra

2019, Thakkar, Harsh, Auer, Sören, Vidal, Maria-Esther, Samavi, Reza, Consens, Mariano P., Khatchadourian, Shahan, Nguyen, Vinh, Sheth, Amit, Giménez-García, José M.

Graph data management (also called NoSQL) has revealed beneficial characteristics in terms of flexibility and scalability by differently balancing query expressivity and schema flexibility. This peculiar advantage has resulted in an unforeseen race to develop new task-specific graph systems, query languages and data models, such as property graphs, key-value, wide column, and the Resource Description Framework (RDF). Present-day graph query languages focus on flexible graph pattern matching (a.k.a. sub-graph matching), whereas graph computing frameworks aim at providing fast parallel (distributed) execution of instructions. This rapid growth in the variety of graph-based data management systems has resulted in a lack of standardization. Gremlin, a graph traversal language and machine, provides a common platform for supporting any graph computing system (such as an OLTP graph database or OLAP graph processors). In this extended report, we present a formalization of graph pattern matching for Gremlin queries. We also study, discuss and consolidate various existing graph algebra operators into an integrated graph algebra.
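The kind of pattern-matching traversal the paper formalizes can be illustrated with a minimal in-memory property graph. This is a toy analogue of a Gremlin traversal such as `g.V().has('name','marko').out('knows')`, not the actual Gremlin implementation or API; each step maps a set of vertices to a new set, which is the view the algebraic formalization builds on.

```python
# Toy property graph with set-to-set traversal steps, illustrating
# the composable step semantics behind Gremlin pattern matching.

class TinyGraph:
    def __init__(self):
        self.props = {}   # vertex id -> {property key: value}
        self.edges = []   # (source id, edge label, destination id)

    def add_vertex(self, vid, **props):
        self.props[vid] = props

    def add_edge(self, src, label, dst):
        self.edges.append((src, label, dst))

    def V(self):
        # Start step: all vertices.
        return set(self.props)

    def has(self, vs, key, value):
        # Filter step: keep vertices whose property matches.
        return {v for v in vs if self.props[v].get(key) == value}

    def out(self, vs, label):
        # Adjacency step: follow outgoing edges with the given label.
        return {d for (s, l, d) in self.edges if s in vs and l == label}

g = TinyGraph()
g.add_vertex(1, name="marko")
g.add_vertex(2, name="vadas")
g.add_vertex(3, name="josh")
g.add_edge(1, "knows", 2)
g.add_edge(1, "knows", 3)

# Analogue of g.V().has('name','marko').out('knows'):
result = g.out(g.has(g.V(), "name", "marko"), "knows")
print(sorted(result))  # [2, 3]
```

Composing such steps as operators over vertex sets is what makes an algebraic treatment (and comparison with relational-style graph algebras) possible.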


Towards an Open Research Knowledge Graph

2018, Auer, Sören, Blümel, Ina, Ewerth, Ralph, Garatzogianni, Alexandra, Heller, Lambert, Hoppe, Anett, Kasprzik, Anna, Koepler, Oliver, Nejdl, Wolfgang, Plank, Margret, Sens, Irina, Stocker, Markus, Tullney, Marco, Vidal, Maria-Esther, van Wezenbeek, Wilma

The document-oriented workflows in science have reached (or already exceeded) the limits of adequacy, as highlighted for example by recent discussions on the increasing proliferation of scientific literature and the reproducibility crisis. Despite improved digital access to scientific publications in recent decades, the exchange of scholarly knowledge continues to be primarily document-based: researchers produce essays and articles that are made available in online and offline publication media as coarse-grained text documents. With current developments in areas such as knowledge representation, semantic search, human-machine interaction, natural language processing, and artificial intelligence, it is possible to completely rethink this dominant paradigm of document-centered knowledge exchange and transform it into knowledge-based information flows by representing and expressing knowledge through semantically rich, interlinked knowledge graphs. At the core of establishing knowledge-based information flows is the distributed, decentralized, collaborative creation and evolution of information models, vocabularies, ontologies, and knowledge graphs, which build a common understanding of data and information among the various stakeholders, as well as the integration of these technologies into the infrastructure and processes of search and knowledge exchange in the research library of the future. By integrating these information models into existing and new research infrastructure services, the information structures that are currently still implicit and deeply hidden in documents can be made explicit and directly usable. This revolutionizes scientific work because information and research results can be seamlessly interlinked with each other and better mapped to complex information needs. As a result, scientific work becomes more effective and efficient, since results become directly comparable and easier to reuse.
In order to realize the vision of knowledge-based information flows in scholarly communication, comprehensive long-term technological infrastructure development and accompanying research are required. To secure information sovereignty, it is also of paramount importance to science – and urgency to science policymakers – that scientific infrastructures establish an open counterweight to emerging commercial developments in this area. The aim of this position paper is to facilitate the discussion on requirements, design decisions and a minimum viable product for an Open Research Knowledge Graph infrastructure. TIB aims to start developing this infrastructure in an open collaboration with interested partner organizations and individuals.
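The shift from documents to knowledge graphs described above can be made concrete with a toy example: expressing a paper's key claims as subject-predicate-object triples instead of prose. The identifiers and predicates below are hypothetical illustrations, not the actual ORKG data model.

```python
# Toy triple store: a paper's claims as machine-queryable statements
# rather than free text buried in a PDF.

triples = {
    ("Paper:123", "hasTitle", "Towards an Open Research Knowledge Graph"),
    ("Paper:123", "addressesProblem", "document-centered knowledge exchange"),
    ("Paper:123", "proposes", "Open Research Knowledge Graph"),
    ("Open Research Knowledge Graph", "instanceOf", "knowledge graph"),
}

def objects(subject, predicate):
    """Query: all objects for a given subject and predicate."""
    return sorted(o for (s, p, o) in triples if s == subject and p == predicate)

print(objects("Paper:123", "proposes"))
# ['Open Research Knowledge Graph']
```

Once claims are structured like this, results from different papers become directly comparable and reusable by query, which is the effectiveness gain the abstract argues for.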