Search Results

Now showing 1 - 5 of 5

FLIM data analysis based on Laguerre polynomial decomposition and machine-learning

2021, Guo, Shuxia, Silge, Anja, Bae, Hyeonsoo, Tolstik, Tatiana, Meyer, Tobias, Matziolis, Georg, Schmitt, Michael, Popp, Jürgen, Bocklitz, Thomas

Significance: The potential of fluorescence lifetime imaging microscopy (FLIM) has recently been recognized, especially in biological studies. However, FLIM does not measure lifetimes directly; rather, it records fluorescence decay traces, and the lifetimes and/or abundances have to be estimated from these traces during data processing. Estimating these parameters precisely is challenging and requires a well-designed computer program. Conventional curve-fitting methods are computationally expensive and limited in performance, especially for highly noisy FLIM data. Graphical analysis, while fit-free, requires calibration samples for a quantitative analysis. Aim: We propose to extract the lifetimes and abundances directly from the decay traces through machine learning (ML). Approach: The ML-based approach was verified with simulated testing data in which the lifetimes and abundances were known exactly. Thereafter, we compared its performance with the commercial software SPCImage on datasets measured from biological samples on a time-correlated single-photon counting system. We reconstructed the decay traces using the lifetime and abundance values estimated by the ML and SPCImage methods and used the root-mean-squared error (RMSE) as a marker. Results: The RMSE, which represents the difference between the reconstructed and measured decay traces, was lower for ML than for SPCImage. In addition, a three-component analysis demonstrated the high potential and flexibility of the ML method to deal with more than two lifetime components.
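The comparison criterion described in the abstract can be sketched as follows: a decay trace is reconstructed from estimated lifetimes and abundances (here, a bi-exponential model, a common assumption for two-component FLIM) and compared to the measured trace via RMSE. The time axis and parameter values below are illustrative assumptions, not data from the paper.

```python
import math

def biexp_decay(t, a1, tau1, a2, tau2):
    """Bi-exponential fluorescence decay model: a1*exp(-t/tau1) + a2*exp(-t/tau2)."""
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

def rmse(measured, reconstructed):
    """Root-mean-squared error between a measured and a reconstructed decay trace."""
    n = len(measured)
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reconstructed)) / n)

# Hypothetical time axis (ns) and parameter values, for illustration only.
times = [0.1 * i for i in range(100)]
measured = [biexp_decay(t, 0.7, 0.5, 0.3, 2.5) for t in times]      # "true" trace
estimated = [biexp_decay(t, 0.68, 0.52, 0.32, 2.4) for t in times]  # e.g. ML estimate

print(rmse(measured, estimated))  # lower RMSE means a closer reconstruction
```

A lower RMSE for the ML-reconstructed trace than for the fit-based reconstruction is what the abstract reports as its central result.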


Impacts of climate change on agro-climatic suitability of major food crops in Ghana

2020, Chemura, Abel, Schauberger, Bernhard, Gornott, Christoph

Climate change is projected to affect food production stability in many tropical countries through its impacts on crop potential. However, without quantitative assessments of where, by how much, and to what extent crop production is possible now and under future climatic conditions, efforts to design and implement adaptation strategies under Nationally Determined Contributions (NDCs) and National Action Plans (NAP) are unsystematic. In this study, we used extreme gradient boosting, a machine learning approach, to model the current climatic suitability for maize, sorghum, cassava, and groundnut in Ghana using yield data and agronomically important variables. We then used multi-model future climate projections for the 2050s under two greenhouse gas emissions scenarios (RCP 2.6 and RCP 8.5) to predict changes in the suitability range of these crops. We achieved a good model fit in determining suitability classes for all crops (AUC = 0.81–0.87). Precipitation-based factors emerged as most important in determining crop suitability, though their importance is crop-specific. Under projected climatic conditions, areas of optimal suitability will decrease for all crops except groundnut under RCP8.5 (no change: 0%), with the greatest losses for maize (12% under RCP2.6 and 14% under RCP8.5). Under current climatic conditions, 18% of Ghana has optimal suitability for two crops and 2% for three crops, with no area optimally suitable for all four crops. Under projected climatic conditions, areas with optimal suitability for two and three crops will decrease by 12% as areas with moderate and marginal conditions for multiple crops increase. We also found that although the distribution of multiple-crop suitability is spatially distinct, cassava and groundnut will be more simultaneously suitable in the south, while groundnut and sorghum will be more suitable in the northern parts of Ghana under projected climatic conditions.
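The AUC values (0.81–0.87) reported above measure how well the suitability classifier ranks suitable above unsuitable locations. As a minimal sketch, AUC can be computed directly from labels and scores via the rank-sum (Mann–Whitney) formulation; the labels and scores below are toy values, not the study's data.

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: fraction of positive/negative
    score pairs ranked correctly, with ties counting half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: 1 = suitable location, 0 = unsuitable; scores from some model.
print(roc_auc([1, 0, 1, 0], [0.9, 0.8, 0.4, 0.3]))  # → 0.75
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, so the study's 0.81–0.87 indicates a clearly better-than-chance suitability model.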


Computational tissue staining of non-linear multimodal imaging using supervised and unsupervised deep learning

2021, Pradhan, Pranita, Meyer, Tobias, Vieth, Michael, Stallmach, Andreas, Waldner, Maximilian, Schmitt, Michael, Popp, Juergen, Bocklitz, Thomas

Hematoxylin and eosin (H&E) staining is the 'gold-standard' method in histopathology. However, standard H&E staining of high-quality tissue sections requires long sample preparation times, including sample embedding, which restricts its application for 'real-time' disease diagnosis. For this reason, a label-free alternative technique, non-linear multimodal (NLM) imaging, which combines three non-linear optical modalities (coherent anti-Stokes Raman scattering, two-photon excitation fluorescence, and second-harmonic generation), is proposed in this work. To correlate the information in NLM images with H&E images, this work proposes computational staining of NLM images using deep learning models in a supervised and an unsupervised approach. In the supervised and the unsupervised approach, conditional generative adversarial networks (CGANs) and cycle conditional generative adversarial networks (cycle CGANs) are used, respectively. Both CGAN and cycle CGAN models generate pseudo-H&E images, which are quantitatively analyzed based on mean squared error, the structural similarity index, and the color shading similarity index. The means of the three metrics calculated for the computationally generated H&E images indicate strong performance. Thus, utilizing CGAN and cycle CGAN models for computational staining is beneficial for diagnostic applications, as no laboratory-based staining procedure needs to be performed. To the authors' best knowledge, this is the first time that NLM images have been computationally stained to H&E images using GANs in an unsupervised manner.
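The simplest of the three evaluation metrics named above, per-pixel mean squared error between a generated pseudo-H&E image and a reference, can be sketched as follows. The 2×2 grayscale "images" below are toy nested lists for illustration only; the paper works with full-resolution color images and also uses structural and color-shading similarity, which are not reproduced here.

```python
def mse(img_a, img_b):
    """Mean squared error between two same-sized grayscale images
    given as nested lists of pixel intensities."""
    h, w = len(img_a), len(img_a[0])
    total = 0.0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            total += (a - b) ** 2
    return total / (h * w)

# Toy 2x2 images: a perfect match gives 0, larger deviations give larger MSE.
reference = [[0.2, 0.4], [0.6, 0.8]]
generated = [[0.2, 0.5], [0.6, 0.7]]
print(mse(reference, generated))
```

A lower MSE indicates that the pseudo-H&E image is pixel-wise closer to the laboratory-stained reference.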


Learning from urban form to predict building heights

2020, Milojevic-Dupont, Nikola, Hans, Nicolai, Kaack, Lynn H., Zumwald, Marius, Andrieux, François, de Barros Soares, Daniel, Lohrey, Steffen, Pichler, Peter-Paul, Creutzig, Felix

When cities are understood as complex systems, sustainable urban planning depends on reliable high-resolution data, for example on the building stock, to upscale region-wide retrofit policies. For some cities and regions, these data exist in detailed 3D models based on real-world measurements. However, such models are expensive to build and maintain, a significant challenge especially for small and medium-sized cities, which are home to the majority of the European population. New methods are needed to estimate relevant building stock characteristics reliably and cost-effectively. Here, we present a machine-learning-based method for predicting building heights that relies only on open-access geospatial data on urban form, such as building footprints and street networks. The method makes it possible to predict building heights for regions where no dedicated 3D models currently exist. We train our model using building data from four European countries (France, Italy, the Netherlands, and Germany) and find that the morphology of the urban fabric surrounding a given building is highly predictive of that building's height. A test on the German state of Brandenburg shows that our model predicts building heights with an average error well below the typical floor height (about 2.5 m), without having access to training data from Germany. Furthermore, we show that even a small amount of local height data obtained by citizens substantially improves the prediction accuracy. Our results illustrate the possibility of predicting missing data on urban infrastructure; they also underline the value of open government data and volunteered geographic information for scientific applications, such as contextual but scalable strategies to mitigate climate change.
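The headline result above compares the average height-prediction error against a typical floor height of about 2.5 m. That check can be sketched as a mean absolute error computation; the height values below are hypothetical illustrations, not the Brandenburg evaluation data.

```python
def mean_absolute_error(true_heights, predicted_heights):
    """Average absolute deviation between true and predicted building heights (metres)."""
    return sum(abs(t - p) for t, p in zip(true_heights, predicted_heights)) / len(true_heights)

# Hypothetical building heights in metres, for illustration only.
true_h = [6.0, 9.5, 12.0, 4.5]
pred_h = [7.1, 8.9, 13.2, 5.0]

FLOOR_HEIGHT_M = 2.5  # typical floor height cited in the abstract
err = mean_absolute_error(true_h, pred_h)
print(err, err < FLOOR_HEIGHT_M)
```

An average error below one floor height means the model usually gets the number of storeys right, which is the practically relevant threshold for retrofit planning.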


Towards the automatic detection of social biomarkers in autism spectrum disorder: introducing the simulated interaction task (SIT)

2020, Drimalla, Hanna, Scheffer, Tobias, Landwehr, Niels, Baskow, Irina, Roepke, Stefan, Behnia, Behnoush, Dziobek, Isabel

Social interaction deficits are evident in many psychiatric conditions, and specifically in autism spectrum disorder (ASD), but are hard to assess objectively. We present a digital tool to automatically quantify biomarkers of social interaction deficits: the simulated interaction task (SIT), which entails a standardized 7-min simulated dialog via video and the automated analysis of facial expressions, gaze behavior, and voice characteristics. In a study with 37 adults with ASD without intellectual disability and 43 healthy controls, we show the potential of the tool as a diagnostic instrument and for better description of ASD-associated social phenotypes. Using machine-learning tools, we detected individuals with ASD with an accuracy of 73%, a sensitivity of 67%, and a specificity of 79%, based on their facial expressions and vocal characteristics alone. Reduced social smiling and facial mimicry, as well as a higher voice fundamental frequency and harmonics-to-noise ratio, were especially characteristic of individuals with ASD. The time- and cost-effective computer-based analysis outperformed a majority vote and performed on par with clinical expert ratings.
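The three figures reported above (accuracy, sensitivity, specificity) all derive from the binary confusion matrix of the classifier. A minimal sketch, using toy labels rather than the study's data:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary ground-truth and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)   # fraction of ASD cases correctly detected
    specificity = tn / (tn + fp)   # fraction of controls correctly identified
    return accuracy, sensitivity, specificity

# Toy example: 1 = ASD, 0 = control.
print(classification_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1]))
```

Reporting sensitivity and specificity separately, as the study does, matters because a screening tool can trade missed cases against false alarms even at the same overall accuracy.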