Search Results

Now showing 1 - 10 of 59
  • Item
    Lab::Measurement—A portable and extensible framework for controlling lab equipment and conducting measurements
    (Amsterdam : North Holland Publ. Co., 2019) Reinhardt, S.; Butschkow, C.; Geissler, S.; Dirnaichner, A.; Olbrich, F.; Lane, C.E.; Schröer, D.; Hüttel, A.K.
    Lab::Measurement is a framework for test and measurement automation using Perl 5. While primarily developed with applications in mesoscopic physics in mind, it is widely adaptable. Internally, a layer model is implemented. Communication protocols such as IEEE 488 [1], USB Test & Measurement [2], or VXI-11 [3] are addressed by the connection layer. The wide range of supported connection backends enables unique cross-platform portability. At the instrument layer, objects correspond to equipment connected to the measurement PC (e.g., voltage sources, magnet power supplies, or multimeters). The high-level sweep layer automates the creation of measurement loops, with simultaneous plotting and data logging. An extensive unit testing framework is used to verify functionality even without connected equipment. Lab::Measurement is distributed as free and open source software.
    Program summary: Program title: Lab::Measurement 3.660. Program files doi: http://dx.doi.org/10.17632/d8rgrdc7tz.1. Program homepage: https://www.labmeasurement.de. Licensing provisions: GNU GPL v2 or later. Programming language: Perl 5. Nature of problem: Flexible, lightweight, and operating-system-independent control of laboratory equipment connected by diverse means such as IEEE 488 [1], USB [2], or VXI-11 [3]. This includes running measurements with nested measurement loops where a data plot is continuously updated, as well as background processes for logging and control. Solution method: Object-oriented layer model based on Moose [4], abstracting the hardware access as well as the command sets of the addressed instruments. A high-level interface allows simple creation of measurement loops, live plotting via Gnuplot [5], and data logging into customizable folder structures.
    [1] F. M. Hess, D. Penkler, et al., LinuxGPIB. Support package for GPIB (IEEE 488) hardware, containing kernel driver modules and a C user-space library with language bindings. http://linux-gpib.sourceforge.net/
    [2] USB Implementers Forum, Inc., Universal Serial Bus Test and Measurement Class Specification (USBTMC), revision 1.0 (2003). http://www.usb.org/developers/docs/devclass_docs/
    [3] VXIbus Consortium, VMEbus Extensions for Instrumentation: VXIbus TCP/IP Instrument Protocol Specification VXI-11 (1995). http://www.vxibus.org/files/VXI_Specs/VXI-11.zip
    [4] Moose: A postmodern object system for Perl 5. http://moose.iinteractive.com
    [5] E. A. Merritt, et al., Gnuplot. An Interactive Plotting Program. http://www.gnuplot.info/
    © 2018 The Author(s)
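The three-layer design described in the abstract (connection, instrument, sweep) can be sketched schematically. The following Python sketch is purely illustrative: all class and method names are invented for the example and do not reflect the actual Perl API of Lab::Measurement.

```python
class Connection:
    """Connection layer: hides the transport (GPIB, USBTMC, VXI-11, ...)."""
    def __init__(self, transport):
        self.transport = transport

    def write(self, command):
        # A real backend would send the command over the wire.
        print(f"[{self.transport}] -> {command}")


class VoltageSource:
    """Instrument layer: maps methods onto the device's command set."""
    def __init__(self, connection):
        self.connection = connection
        self.level = 0.0

    def set_level(self, volts):
        self.connection.write(f"VOLT {volts:.3f}")
        self.level = volts


def sweep(instrument, start, stop, step, measure):
    """Sweep layer: automates a measurement loop and logs the data."""
    data = []
    v = start
    while v <= stop + 1e-9:
        instrument.set_level(v)
        data.append((v, measure(v)))
        v += step
    return data


source = VoltageSource(Connection("VXI-11"))
# Dummy "multimeter" standing in for a real readout:
log = sweep(source, 0.0, 1.0, 0.5, measure=lambda v: 2.0 * v)
print(log)
```

The point of the layering is that the sweep code never touches the transport: swapping GPIB for VXI-11 changes only the `Connection` object.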
  • Item
    Simple, accurate, and efficient implementation of 1-electron atomic time-dependent Schrödinger equation in spherical coordinates
    (Amsterdam : North Holland Publ. Co., 2015) Patchkovskii, Serguei; Müller, Harm Geert
    Modelling atomic processes in intense laser fields often relies on solving the time-dependent Schrödinger equation (TDSE). For processes involving ionisation, such as above-threshold ionisation (ATI) and high-harmonic generation (HHG), this is a formidable task even if only one electron is active. Several powerful ideas for efficient implementation of atomic TDSE were introduced by H.G. Muller some time ago (Muller, 1999), including: separation of Hamiltonian terms into tri-diagonal parts; implicit representation of the spatial derivatives; and use of a rotating reference frame. Here, we extend these techniques to allow for non-uniform radial grids, arbitrary laser field polarisation, and non-Hermitian terms in the Hamiltonian due to the implicit form of the derivatives (previously neglected). We implement the resulting propagator in a parallel Fortran program, adapted for multi-core execution. The cost of TDSE propagation scales linearly with the problem size, enabling full-dimensional calculations of strong-field ATI and HHG spectra for arbitrary field polarisations on a standard desktop PC.
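The tridiagonal-plus-implicit propagation idea summarised above can be illustrated with a minimal Crank-Nicolson step. This is a schematic Python sketch under simplifying assumptions (uniform grid, kinetic term only, dense linear algebra instead of the O(N) tridiagonal solve used in practice); it is not the authors' Fortran code.

```python
import numpy as np

def crank_nicolson_step(psi, dt, dr):
    """One Crank-Nicolson step: solve (1 + i dt/2 H) psi' = (1 - i dt/2 H) psi,
    where H is tridiagonal because the radial second derivative is
    approximated by a three-point stencil (atomic units)."""
    n = psi.size
    main = np.full(n, 1.0 / dr**2)        # diagonal of -(1/2) d^2/dr^2
    off = np.full(n - 1, -0.5 / dr**2)    # sub-/super-diagonals
    # Dense matrices for clarity; production code would use an O(N)
    # tridiagonal (Thomas) solver, which is what makes propagation
    # scale linearly with the problem size.
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    A = np.eye(n) + 0.5j * dt * H
    B = np.eye(n) - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)

r = np.linspace(0.1, 20.0, 200)
psi0 = np.exp(-((r - 5.0) ** 2)).astype(complex)   # Gaussian wave packet
psi1 = crank_nicolson_step(psi0, dt=0.01, dr=r[1] - r[0])
# The Cayley form is unitary for Hermitian H, so the norm is conserved
# to round-off:
print(abs(np.vdot(psi1, psi1).real - np.vdot(psi0, psi0).real))
```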
  • Item
    Interaction Network Analysis Using Semantic Similarity Based on Translation Embeddings
    (Berlin ; Heidelberg : Springer, 2019) Manzoor Bajwa, Awais; Collarana, Diego; Vidal, Maria-Esther; Acosta, Maribel; Cudré-Mauroux, Philippe; Maleshkova, Maria; Pellegrini, Tassilo; Sack, Harald; Sure-Vetter, York
    Biomedical knowledge graphs such as STITCH, SIDER, and Drugbank provide the basis for discovering associations between biomedical entities, e.g., interactions between drugs and targets. Link prediction is a paramount task and represents a building block for supporting knowledge discovery. Although several approaches have been proposed for effectively predicting links, the role of semantics has not been studied in depth. In this work, we tackle the problem of discovering interactions between drugs and targets, and propose SimTransE, a machine-learning-based approach that solves this problem effectively. SimTransE relies on translation embeddings to model drug-target interactions and the similarity values across them. Grounded in this vectorial representation of drug-target interactions, SimTransE is able to discover novel ones. We empirically study SimTransE using state-of-the-art benchmarks and approaches. Experimental results suggest that SimTransE is competitive with the state of the art, thus representing an effective alternative for knowledge discovery in the biomedical domain.
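The translation-embedding idea that SimTransE builds on can be sketched in a few lines (a toy illustration, not the authors' implementation): a plausible triple (head, relation, tail) should satisfy head + relation ≈ tail, so a smaller distance indicates a likelier link.

```python
import numpy as np

def transe_score(head, relation, tail):
    """L2 distance ||h + r - t||: smaller means more plausible."""
    return np.linalg.norm(head + relation - tail)

rng = np.random.default_rng(0)
dim = 16
drug = rng.normal(size=dim)
interacts_with = rng.normal(size=dim)
# A target whose embedding is nearly consistent with the translation:
true_target = drug + interacts_with + 0.01 * rng.normal(size=dim)
# An unrelated entity:
random_entity = rng.normal(size=dim)

good = transe_score(drug, interacts_with, true_target)
bad = transe_score(drug, interacts_with, random_entity)
print(good < bad)  # the consistent triple scores better
```

Ranking candidate targets by this score is what turns the embedding space into a link predictor for unseen drug-target pairs.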
  • Item
    Digital Humanities Handbuch
    (2015-08-12) Hahn, Helene; Kalman, Tibor; Pielström, Steffen; Puhl, Johanna; Kolbmann, Wibke; Kollatz, Thomas; Neuschäfer, Markus; Stiller, Juliane; Tonne, Danah
    To make the handbook as practice-oriented as possible, we decided to begin by presenting individual DH projects, in order to bring the possibilities of the Digital Humanities closer to readers and to show them what has already been realised in practice in this field. In Chapter 2 we thus show how texts are edited with TextGrid and manuscripts are analysed with eCodicology. The following three chapters address the three pillars that support every Digital Humanities project: data, methods and tools, and infrastructure. These chapters offer first introductions to the respective topics and convey practice-oriented knowledge that readers can apply in their own DH projects. The chapters "Daten" (data) and "Alles was Recht ist - Urheberrecht und Lizenzierung von Forschungsdaten" (copyright and licensing of research data) introduce the foundations of scholarly research and offer guidance on handling licences and file formats. The chapter "Methoden und Werkzeuge" (methods and tools) outlines methods of the Digital Humanities and points, by way of example, to digital tools that can be used to answer research questions in the humanities. The chapter "Infrastruktur" (infrastructure) describes digital infrastructures, their components, and their aims in more detail; such infrastructures are indispensable for making digital research sustainable.
  • Item
    Temporal Role Annotation for Named Entities
    (Amsterdam [u.a.] : Elsevier, 2018) Koutraki, Maria; Bakhshandegan-Moghaddam, Farshad; Sack, Harald; Fensel, Anna; de Boer, Victor; Pellegrini, Tassilo; Kiesling, Elmar; Haslhofer, Bernhard; Hollink, Laura; Schindler, Alexander
    Natural language understanding tasks are key to extracting structured and semantic information from text. One of the most challenging problems in natural language is ambiguity, and resolving it requires context, including temporal information. This paper focuses on the task of extracting temporal roles from text, e.g. CEO of an organization or head of a state. A temporal role has a domain which may resolve to different entities depending on the context, and especially on temporal information, e.g. CEO of Microsoft in 2000. We focus on temporal role extraction as a precursor to temporal role disambiguation. We propose a structured prediction approach based on Conditional Random Fields (CRFs) to annotate temporal roles in text, relying on a rich feature set that extracts syntactic and semantic information from text. We perform an extensive evaluation of our approach on two datasets. In the first dataset, we extract nearly 400k instances from Wikipedia through distant supervision, whereas the second dataset is a manually curated ground truth of 200 instances extracted from a sample of The New York Times (NYT) articles. Finally, the proposed approach is compared against baselines, showing significant improvements for both datasets.
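As a rough illustration of the kind of per-token feature extraction a CRF tagger relies on (feature names here are invented for the example, not the authors' actual feature set), a feature function for the running example "CEO of Microsoft in 2000" might look like:

```python
def token_features(tokens, i):
    """Extract simple lexical and context features for token i."""
    word = tokens[i]
    return {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),   # capitalised, e.g. entity names
        "word.isdigit": word.isdigit(),   # candidate year expressions
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

sentence = ["CEO", "of", "Microsoft", "in", "2000"]
features = [token_features(sentence, i) for i in range(len(sentence))]
print(features[4]["word.isdigit"], features[2]["prev.lower"])
```

A CRF then scores label sequences (e.g. role / entity / temporal-expression tags) over these per-token feature vectors, letting neighbouring labels constrain each other.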
  • Item
    Precise Navigation of Small Agricultural Robots in Sensitive Areas with a Smart Plant Camera
    (Basel : MDPI, 2015) Dworak, Volker; Huebner, Michael; Selbeck, Joern
    Most of the relevant technology related to precision agriculture is currently controlled by Global Positioning Systems (GPS) and uploaded map data; however, in sensitive areas with young or expensive plants, small robots are becoming more widely used for this kind of work. These robots must follow the plant lines with centimeter precision to protect plant growth. For cases in which GPS fails, a camera-based solution is often used for navigation because of its low system cost and simplicity. The low-cost plant camera presented here generates images in which plants are contrasted against the soil, thus enabling the use of simple cross-correlation functions to establish high-resolution navigation control in the centimeter range. Because the images provide foresight of the area in front of the vehicle, robust vehicle control can be established without any dead time; as a result, the main robot control is off-loaded and overshooting can be avoided.
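The cross-correlation idea can be illustrated with a one-dimensional toy example (illustrative Python only; the actual camera pipeline works on two-dimensional plant/soil contrast images): the argmax of the cross-correlation of two image rows gives the lateral shift of the plant line in pixels.

```python
import numpy as np

def lateral_shift(reference_row, current_row):
    """Signed lateral offset (in pixels) of current_row vs. reference_row."""
    ref = reference_row - reference_row.mean()
    cur = current_row - current_row.mean()
    corr = np.correlate(cur, ref, mode="full")
    # 'full' mode index (len(ref) - 1) corresponds to zero lag:
    return int(np.argmax(corr)) - (len(ref) - 1)

row = np.zeros(100)
row[40:45] = 1.0              # a bright plant at pixels 40-44
shifted = np.roll(row, 7)     # the vehicle drifted 7 pixels sideways
print(lateral_shift(row, shifted))
```

The recovered pixel offset, scaled by the camera's ground resolution, is exactly the steering error a centimeter-range controller needs.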
  • Item
    A Case for Integrated Data Processing in Large-Scale Cyber-Physical Systems
    (Maui, Hawaii : HICSS, 2019) Glebke, René; Henze, Martin; Wehrle, Klaus; Niemietz, Philipp; Trauth, Daniel; Mattfeld, Patrick; Bergs, Thomas; Bui, Tung X.
    Large-scale cyber-physical systems such as manufacturing lines generate vast amounts of data to guarantee precise control of their machinery. Visions such as the Industrial Internet of Things aim at making this data available also to computation systems outside the lines in order to increase productivity and product quality. However, rising amounts and complexities of data and control decisions push existing infrastructure for data transmission, storage, and processing to its limits. In this paper, we study a fine blanking line, which can produce up to 6.2 Gbit/s worth of data, as an example of the extreme requirements found in modern manufacturing. We consequently propose integrated data processing, which keeps inherently local and small-scale tasks close to the processes while centralizing tasks that rely on more complex decision procedures and remote data sources. Our approach thus allows for both maintaining control of field-level processes and leveraging the benefits of “big data” applications.
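A back-of-the-envelope calculation shows why a sustained 6.2 Gbit/s stream strains transmission and storage infrastructure: around the clock it amounts to roughly 67 TB per day, which motivates pre-processing close to the machinery.

```python
# Daily data volume of a line producing 6.2 Gbit/s continuously
# (decimal units: 1 TB = 1000 GB, 1 byte = 8 bits).
rate_gbit_s = 6.2
seconds_per_day = 24 * 60 * 60
terabytes_per_day = rate_gbit_s * seconds_per_day / 8 / 1000
print(round(terabytes_per_day, 1))  # -> 67.0
```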
  • Item
    Formalizing Gremlin pattern matching traversals in an integrated graph Algebra
    (Aachen, Germany : RWTH Aachen, 2019) Thakkar, Harsh; Auer, Sören; Vidal, Maria-Esther; Samavi, Reza; Consens, Mariano P.; Khatchadourian, Shahan; Nguyen, Vinh; Sheth, Amit; Giménez-García, José M.; Thakkar, Harsh
    Graph data management (also called NoSQL) has revealed beneficial characteristics in terms of flexibility and scalability by differently balancing query expressivity against schema flexibility. This peculiar advantage has resulted in an unforeseen race to develop new task-specific graph systems, query languages, and data models, such as property graphs, key-value stores, wide-column stores, and the Resource Description Framework (RDF). Present-day graph query languages focus on flexible graph pattern matching (aka sub-graph matching), whereas graph computing frameworks aim at providing fast parallel (distributed) execution of instructions. This rapid growth in the variety of graph-based data management systems has resulted in a lack of standardization. Gremlin, a graph traversal language and machine, provides a common platform for supporting any graph computing system (such as an OLTP graph database or an OLAP graph processor). In this extended report, we present a formalization of graph pattern matching for Gremlin queries. We also study, discuss, and consolidate various existing graph algebra operators into an integrated graph algebra.
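The notion of graph pattern matching (sub-graph matching) that the report formalizes can be illustrated with a toy triple-pattern matcher in Python. This is a deliberately simplified model, neither Gremlin syntax nor the algebra of the paper: variables written `?x` bind to graph values, constants must match exactly.

```python
# A tiny edge-labelled graph as (subject, predicate, object) triples:
edges = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "works_at", "acme"),
]

def match(pattern, graph):
    """Yield one variable binding per graph triple matching the pattern."""
    s, p, o = pattern
    for gs, gp, go in graph:
        binding = {}
        ok = True
        for term, value in ((s, gs), (p, gp), (o, go)):
            if term.startswith("?"):
                # A variable must bind consistently within the triple.
                if binding.get(term, value) != value:
                    ok = False
                    break
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            yield binding

results = list(match(("?who", "knows", "?whom"), edges))
print(results)
```

Real pattern matching joins many such patterns over shared variables; that join structure is what a graph algebra makes explicit.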
  • Item
    The Open Quantum Materials Database (OQMD): assessing the accuracy of DFT formation energies
    (London : Nature Publ. Group, 2015) Kirklin, Scott; Saal, James E.; Meredig, Bryce; Thompson, Alex; Doak, Jeff W.; Aykol, Muratahan; Rühl, Stephan; Wolverton, Chris
    The Open Quantum Materials Database (OQMD) is a high-throughput database currently consisting of nearly 300,000 density functional theory (DFT) total energy calculations of compounds from the Inorganic Crystal Structure Database (ICSD) and decorations of commonly occurring crystal structures. To maximise the impact of these data, the entire database is being made available, without restrictions, at www.oqmd.org/download. In this paper, we outline the structure and contents of the database, and then use it to evaluate the accuracy of the calculations therein by comparing DFT predictions with experimental measurements for the stability of all elemental ground-state structures and 1,670 experimental formation energies of compounds. This represents the largest comparison between DFT and experimental formation energies to date. The apparent mean absolute error between experimental measurements and our calculations is 0.096 eV/atom. In order to estimate how much error to attribute to the DFT calculations, we also examine deviation between different experimental measurements themselves where multiple sources are available, and find a surprisingly large mean absolute error of 0.082 eV/atom. Hence, we suggest that a significant fraction of the error between DFT and experimental formation energies may be attributed to experimental uncertainties. Finally, we evaluate the stability of compounds in the OQMD (including compounds obtained from the ICSD as well as hypothetical structures), which allows us to predict the existence of ~3,200 new compounds that have not been experimentally characterised and uncover trends in material discovery, based on historical data available within the ICSD.
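The quantity being benchmarked, the formation energy per atom, is the compound's total energy minus the energies of its constituent elements in their reference states, normalised by the number of atoms. A minimal sketch of this bookkeeping (all numbers invented for the example, not OQMD values):

```python
def formation_energy_per_atom(total_energy, composition, elemental_energies):
    """Formation energy in eV/atom.

    composition: {element: atom count in the formula unit}
    elemental_energies: {element: reference energy per atom, eV}
    """
    n_atoms = sum(composition.values())
    reference = sum(count * elemental_energies[el]
                    for el, count in composition.items())
    return (total_energy - reference) / n_atoms

# Hypothetical binary oxide M2O3 (values invented for illustration):
e_f = formation_energy_per_atom(
    total_energy=-41.0,
    composition={"M": 2, "O": 3},
    elemental_energies={"M": -6.0, "O": -4.5},
)
print(e_f)  # eV/atom; negative means stable against the elements
```

A mean absolute error of ~0.1 eV/atom in this quantity, as reported above, is therefore measured on exactly this per-atom scale.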
  • Item
    A survey on Bluetooth multi-hop networks
    (Amsterdam [u.a.] : Elsevier Science, 2019) Todtenberg, Nicole; Kraemer, Rolf
    Bluetooth was first announced in 1998. Originally designed as a cable replacement connecting devices in a point-to-point fashion, its high market penetration has aroused interest in its ad-hoc networking potential. This ad-hoc networking potential has been advertised for years, but until recently no actual products were available and fewer than a handful of real Bluetooth multi-hop network deployments were reported. The turnaround was triggered by the release of the Bluetooth Low Energy Mesh Profile, which is unquestionably a great achievement but not well suited to all use cases of multi-hop networks. This paper surveys the tremendous work done on Bluetooth multi-hop networks during the last 20 years. All aspects are discussed with the demands of real-world Bluetooth multi-hop operation in mind. Relationships and side effects of different topics for a real-world implementation are explained. This unique focus distinguishes this survey from existing ones. Furthermore, to the best of the authors’ knowledge this is the first survey consolidating the work on Bluetooth multi-hop networks for classic Bluetooth technology as well as for Bluetooth Low Energy. Another distinguishing characteristic of this survey is a synopsis of real-world Bluetooth multi-hop network deployment efforts. In fact, there are only four reports of a successful establishment of a Bluetooth multi-hop network with more than 30 nodes, and only one of them was integrated in a real-world application, namely a photovoltaic power plant. © 2019 The Authors