Search Results

  • Item
    Soft Inkjet Circuits: Rapid Multi-Material Fabrication of Soft Circuits using a Commodity Inkjet Printer
    (New York City : Association for Computing Machinery, 2019) Khan, Arshad; Roo, Joan Sol; Kraus, Tobias; Steimle, Jürgen
Despite the increasing popularity of soft interactive devices, their fabrication remains complex and time consuming. We contribute a process for rapid do-it-yourself fabrication of soft circuits using a conventional desktop inkjet printer. It supports inkjet printing of circuits that are stretchable, ultrathin, high resolution, and integrated with a wide variety of materials used for prototyping. We introduce multi-ink functional printing on a desktop printer for realizing multi-material devices, including conductive and isolating inks. We further present DIY techniques to enhance the compatibility between inks and substrates and to increase the circuits' elasticity. This enables circuits on a wide set of materials, including temporary tattoo paper, textiles, and thermoplastic. Four application cases demonstrate versatile uses for realizing stretchable devices, e-textiles, and body-based and re-shapeable interfaces.
  • Item
    corr2D: Implementation of Two-Dimensional Correlation Analysis in R
    (Los Angeles : UCLA, 2019) Geitner, Robert; Fritzsch, Robby; Bocklitz, Thomas W.; Popp, Jürgen
In the package corr2D, two-dimensional correlation analysis is implemented in R. This paper describes how two-dimensional correlation analysis is done in the package and how the mathematical equations are translated into R code. The paper features a simple tutorial with executable code for beginners, insight into the calculations done before the correlation analysis, a detailed look at the parallelization of the fast Fourier transformation (FFT)-based correlation analysis, and a speed test of the calculation. The package corr2D offers the possibility to preprocess, correlate and postprocess spectroscopic data using exclusively the R language. Thus, corr2D is a welcome addition to the toolbox of spectroscopists and makes two-dimensional correlation analysis more accessible and transparent.
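The corr2D package itself is written in R; as a rough illustration of the computation it implements (Noda's generalized two-dimensional correlation, yielding synchronous and asynchronous spectra), here is a minimal NumPy sketch. It is not the package's FFT-based, parallelized code, and the input data are synthetic.

```python
# Minimal NumPy sketch of generalized 2D correlation analysis (Noda's
# synchronous/asynchronous spectra). Illustrative only; it does not
# reproduce the corr2D R package's FFT-based, parallelized implementation.
import numpy as np

def twod_corr(spectra):
    """spectra: (m, n) array, m perturbation steps x n spectral channels."""
    m, n = spectra.shape
    # Dynamic spectra: deviation from the mean (reference) spectrum.
    dyn = spectra - spectra.mean(axis=0)
    # Synchronous spectrum: covariance of channel pairs over the perturbation.
    sync = dyn.T @ dyn / (m - 1)
    # Hilbert-Noda transformation matrix for the asynchronous spectrum.
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    with np.errstate(divide="ignore"):
        noda = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    asyn = dyn.T @ (noda @ dyn) / (m - 1)
    return sync, asyn

# Toy example: 9 spectra with 200 channels each.
demo = np.random.default_rng(0).normal(size=(9, 200))
sync, asyn = twod_corr(demo)
print(sync.shape, asyn.shape)  # (200, 200) (200, 200)
```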
  • Item
    Survey vs Scraped Data: Comparing Time Series Properties of Web and Survey Vacancy Data
    (Berlin : Springer Nature, 2019) De Pedraza, P.; Visintin, S.; Tijdens, K.; Kismihók, G.
    This paper studies the relationship between a vacancy population obtained from web crawling and vacancies in the economy inferred by a National Statistics Office (NSO) using a traditional method. We compare the time series properties of samples obtained between 2007 and 2014 by Statistics Netherlands and by a web scraping company. We find that the web and NSO vacancy data present similar time series properties, suggesting that both time series are generated by the same underlying phenomenon: the real number of new vacancies in the economy. We conclude that, in our case study, web-sourced data are able to capture aggregate economic activity in the labor market.
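As a purely hypothetical illustration of comparing the time series properties of two vacancy series, the following sketch computes co-movement in levels, in month-on-month changes, and in seasonal profiles for two synthetic monthly series. It does not reproduce the paper's data or statistical tests.

```python
# Hypothetical sketch: compare basic time series properties of two monthly
# vacancy series (e.g. web-scraped vs. NSO). Data here are synthetic; the
# paper's actual series and statistical tests are not reproduced.
import numpy as np
import pandas as pd

idx = pd.period_range("2007-01", "2014-12", freq="M")
rng = np.random.default_rng(1)
trend = np.linspace(100, 140, len(idx))
nso = pd.Series(trend + rng.normal(0, 5, len(idx)), index=idx)
web = pd.Series(0.8 * trend + rng.normal(0, 7, len(idx)), index=idx)

# Co-movement in levels and in month-on-month changes.
print("corr(levels):     ", round(nso.corr(web), 2))
print("corr(differences):", round(nso.diff().corr(web.diff()), 2))

# Rough comparison of seasonal profiles (mean by calendar month).
season_nso = nso.groupby(nso.index.month).mean()
season_web = web.groupby(web.index.month).mean()
print("corr(seasonality):", round(season_nso.corr(season_web), 2))
```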
  • Item
MAgPIE 4 – a modular open-source framework for modeling global land systems
(Göttingen : Copernicus GmbH, 2019) Dietrich, J.P.; Bodirsky, B.L.; Humpenöder, F.; Weindl, I.; Stevanović, M.; Karstens, K.; Kreidenweis, U.; Wang, X.; Mishra, A.; Klein, D.; Ambrósio, G.; Araujo, E.; Yalew, A.W.; Baumstark, L.; Wirth, S.; Giannousakis, A.; Beier, F.; Chen, D.M.-C.; Lotze-Campen, H.; Popp, A.
The open-source modeling framework MAgPIE (Model of Agricultural Production and its Impact on the Environment) combines economic and biophysical approaches to simulate spatially explicit global scenarios of land use within the 21st century and the respective interactions with the environment. Among various other applications, it was used to simulate marker scenarios of the Shared Socioeconomic Pathways (SSPs) and contributed substantially to multiple IPCC assessments. However, with growing scope and detail, the non-linear model has become increasingly complex, computationally intensive and non-transparent, requiring structured approaches to improve the development and evaluation of the model. Here, we provide an overview of version 4 of MAgPIE and how it addresses these issues of increasing complexity through new technical features: a modular structure with exchangeable module implementations, flexible spatial resolution, in-code documentation, automated code checking, model/output evaluation and open accessibility. Application examples provide insights into model evaluation, modular flexibility and region-specific analysis approaches. While this paper focuses on the general framework, the publication is accompanied by a detailed model documentation describing contents and equations, and by model evaluation documents giving insights into model performance for a broad range of variables. With the open-source release of the MAgPIE 4 framework, we hope to contribute to more transparent, reproducible and collaborative research in the field. Due to its modularity and spatial flexibility, it should provide a basis for a broad range of land-related research with an economic or biophysical, global or regional focus.
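MAgPIE 4 itself is implemented in GAMS with accompanying R tooling; the following hypothetical Python sketch only illustrates the general idea of exchangeable module implementations behind a fixed interface, which the abstract highlights as a key design feature.

```python
# Hypothetical illustration of "exchangeable module implementations": each
# module exposes a fixed interface and concrete realizations can be swapped
# per scenario. This is NOT MAgPIE code (MAgPIE 4 is implemented in GAMS).
from typing import Protocol

class YieldModule(Protocol):
    def crop_yield(self, region: str) -> float: ...

class StaticYields:
    """Simplest realization: fixed yields per region."""
    def __init__(self, table: dict[str, float]):
        self.table = table
    def crop_yield(self, region: str) -> float:
        return self.table[region]

class ClimateScaledYields:
    """Alternative realization: scale a baseline by a climate factor."""
    def __init__(self, baseline: dict[str, float], factor: float):
        self.baseline, self.factor = baseline, factor
    def crop_yield(self, region: str) -> float:
        return self.baseline[region] * self.factor

def run_scenario(yields: YieldModule, regions: list[str]) -> dict[str, float]:
    # The core model only sees the interface, not the concrete realization.
    return {r: yields.crop_yield(r) for r in regions}

baseline = {"EUR": 6.1, "SSA": 1.8}
print(run_scenario(StaticYields(baseline), ["EUR", "SSA"]))
print(run_scenario(ClimateScaledYields(baseline, 0.9), ["EUR", "SSA"]))
```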
  • Item
    Web-based access, aggregation, and visualization of future climate projections with emphasis on agricultural assessments
    (Amsterdam : Elsevier B.V., 2018) Villoria, N.B.; Elliott, J.; Müller, C.; Shin, J.; Zhao, L.; Song, C.
Access to climate and spatial datasets by non-specialists is restricted by technical barriers involving hardware, software and data formats. We discuss an open-source online tool that facilitates downloading the climate data from the global circulation models used by the Inter-Sectoral Impacts Model Intercomparison Project. The tool also offers temporal and spatial aggregation capabilities for incorporating future climate scenarios in applications where spatial aggregation is important. We hope that streamlined access to these data facilitates the analysis of climate-related issues while accounting for the uncertainties arising from future climate projections and temporal aggregation choices.
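As an illustration of the kind of temporal and spatial aggregation such a tool offers, the following sketch aggregates a synthetic daily gridded temperature field to monthly means and to an area-weighted global mean. It is unrelated to the tool's actual implementation.

```python
# Illustrative sketch of temporal and spatial aggregation of gridded climate
# data (synthetic values; not the web tool described in the paper).
import numpy as np

rng = np.random.default_rng(2)
days, nlat, nlon = 365, 36, 72                 # daily global grid, 5-degree cells
temp = 15 + 10 * rng.standard_normal((days, nlat, nlon))
lats = np.linspace(-87.5, 87.5, nlat)

# Temporal aggregation: daily -> approximate monthly means (30-day blocks).
monthly = temp[:360].reshape(12, 30, nlat, nlon).mean(axis=1)

# Spatial aggregation: area-weighted global mean (weights ~ cos(latitude)).
weights = np.cos(np.deg2rad(lats))[:, None] * np.ones((nlat, nlon))
global_monthly_mean = (monthly * weights).sum(axis=(1, 2)) / weights.sum()
print(global_monthly_mean.round(2))
```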
  • Item
    When humans and machines collaborate: Cross-lingual Label Editing in Wikidata
    (New York City : Association for Computing Machinery, 2019) Kaffee, L.-A.; Endris, K.M.; Simperl, E.
The quality and maintainability of a knowledge graph are determined by the process in which it is created. There are different approaches to such processes: extraction or conversion of data available on the web (automated extraction of knowledge, such as DBpedia from Wikipedia), community-created knowledge graphs, often built by a group of experts, and hybrid approaches in which humans maintain the knowledge graph alongside bots. In this work we focus on the hybrid approach of human-edited knowledge graphs supported by automated tools. In particular, we analyse the editing of natural language data, i.e. labels. Labels are the entry point for humans to understand the information, and therefore need to be carefully maintained. We take a step toward understanding the collaborative editing of a knowledge graph by humans and automated tools across languages. We use Wikidata as it has a large and active community of humans and bots working together, covering over 300 languages. We analyse the different editor groups and how they interact with data in different languages in order to understand the provenance of the current label data.
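A hypothetical sketch of the kind of breakdown described, grouping label edits by editor type and language with pandas; the column names and rows are invented and do not come from the paper's Wikidata edit histories.

```python
# Hypothetical sketch of the kind of analysis described: group label edits by
# editor type and language. Column names and rows are invented; the paper's
# Wikidata edit-history processing is not reproduced here.
import pandas as pd

edits = pd.DataFrame({
    "editor_type": ["human", "bot", "bot", "human", "tool-assisted", "bot"],
    "language":    ["en",    "de",  "en",  "nl",    "en",            "de"],
    "item":        ["Q42",   "Q64", "Q42", "Q5",    "Q64",           "Q5"],
})

# How many label edits does each editor group contribute per language?
summary = (edits.groupby(["language", "editor_type"])
                .size()
                .unstack(fill_value=0))
print(summary)
```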
  • Item
    Why reinvent the wheel: Let's build question answering systems together
    (New York City : Association for Computing Machinery, 2018) Singh, K.; Radhakrishna, A.S.; Both, A.; Shekarpour, S.; Lytra, I.; Usbeck, R.; Vyas, A.; Khikmatullaev, A.; Punjani, D.; Lange, C.; Vidal, Maria-Esther; Lehmann, J.; Auer, Sören
Modern question answering (QA) systems need to flexibly integrate a number of components specialised to fulfil specific tasks in a QA pipeline. Key QA tasks include Named Entity Recognition and Disambiguation, Relation Extraction, and Query Building. Since a number of different software components exist that implement different strategies for each of these tasks, it is a major challenge to select and combine the most suitable components into a QA system, given the characteristics of a question. We study this optimisation problem and train classifiers that take features of a question as input and aim to optimise the selection of QA components based on those features. We then devise a greedy algorithm to identify the pipelines that include the suitable components and can effectively answer the given question. We implement this model within Frankenstein, a QA framework able to select QA components and compose QA pipelines. We evaluate the effectiveness of the pipelines generated by Frankenstein using the QALD and LC-QuAD benchmarks. The results suggest that Frankenstein not only precisely solves the QA optimisation problem but also enables the automatic composition of optimised QA pipelines, which outperform the static Baseline QA pipeline. Thanks to this flexible and fully automated pipeline generation process, new QA components can be easily included in Frankenstein, thus improving the performance of the generated pipelines.
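The abstract describes classifiers that score QA components given question features and a greedy step that composes a pipeline from the best-scoring component per task. The following sketch illustrates that idea with stubbed scoring functions; the component names and scores are invented, and this is not Frankenstein's actual API.

```python
# Illustrative sketch of the approach described: per-component classifiers
# (stubbed here as a scoring function) predict how well each component will
# handle a question, and a greedy step picks the top component per QA task.
# Component names and scores are invented; this is not Frankenstein's API.

QUESTION = "Which river flows through Berlin?"

def question_features(q: str) -> dict:
    words = q.rstrip("?").split()
    return {"n_words": len(words), "starts_with_which": words[0].lower() == "which"}

def score(task: str, component: str, feats: dict) -> float:
    # A real classifier would be trained on question features; this lookup
    # table merely stands in for its predictions.
    table = {
        ("NER", "TagMe"): 0.7, ("NER", "DBpediaSpotlight"): 0.6,
        ("RelationExtraction", "RelMatch"): 0.5, ("RelationExtraction", "ReMatch"): 0.8,
        ("QueryBuilder", "SQG"): 0.9, ("QueryBuilder", "NLIWOD-QB"): 0.4,
    }
    return table[(task, component)] + 0.01 * feats["n_words"]

TASKS = {
    "NER": ["TagMe", "DBpediaSpotlight"],
    "RelationExtraction": ["RelMatch", "ReMatch"],
    "QueryBuilder": ["SQG", "NLIWOD-QB"],
}

def greedy_pipeline(question: str) -> dict:
    # Greedily keep the best-scoring component for each task.
    feats = question_features(question)
    return {task: max(comps, key=lambda c: score(task, c, feats))
            for task, comps in TASKS.items()}

print(greedy_pipeline(QUESTION))
# e.g. {'NER': 'TagMe', 'RelationExtraction': 'ReMatch', 'QueryBuilder': 'SQG'}
```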