Search Results

  • Item
    A Review on Data Fusion of Multidimensional Medical and Biomedical Data
    (Basel : MDPI, 2022) Azam, Kazi Sultana Farhana; Ryabchykov, Oleg; Bocklitz, Thomas
    Data fusion aims to provide a more accurate description of a sample than any single source of data alone. At the same time, combining data from multiple sources reduces the uncertainty of the results. Both effects improve the characterization of samples and may ultimately improve clinical diagnosis and prognosis. In this paper, we present an overview of the advances achieved over the last decades in data fusion approaches in the medical and biomedical fields. We collected approaches for interpreting multiple sources of data in different combinations: image to image, image to biomarker, spectra to image, spectra to spectra, spectra to biomarker, and others. We found that image-to-image fusion is the most prevalent combination and that most data fusion approaches were applied in combination with deep learning or machine learning methods.
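    As a purely illustrative sketch (not drawn from the review itself), the snippet below shows one common form of data fusion, feature-level fusion, in which features from an imaging modality and a biomarker panel are concatenated before a single classifier is trained; the array shapes, synthetic data, and choice of logistic regression are assumptions made here for demonstration.
```python
# Hypothetical illustration of feature-level ("intermediate") data fusion:
# features from an imaging modality and a biomarker panel are concatenated
# before a single classifier is trained on the joint representation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 200

# Synthetic stand-ins for per-sample features from two data sources
image_features = rng.normal(size=(n_samples, 64))   # e.g. image-derived embeddings
biomarker_panel = rng.normal(size=(n_samples, 10))  # e.g. measured biomarker values
labels = rng.integers(0, 2, size=n_samples)         # diagnostic class (synthetic)

# Feature-level fusion: concatenate the two blocks into one feature vector
fused = np.concatenate([image_features, biomarker_panel], axis=1)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy (fused features):", cross_val_score(clf, fused, labels, cv=5).mean())
```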
  • Item
    Evolutionary design of explainable algorithms for biomedical image segmentation
    ([London] : Nature Publishing Group UK, 2023) Cortacero, Kévin; McKenzie, Brienne; Müller, Sabina; Khazen, Roxana; Lafouresse, Fanny; Corsaut, Gaëlle; Van Acker, Nathalie; Frenois, François-Xavier; Lamant, Laurence; Meyer, Nicolas; Vergier, Béatrice; Wilson, Dennis G.; Luga, Hervé; Staufer, Oskar; Dustin, Michael L.; Valitutti, Salvatore; Cussat-Blanc, Sylvain
    An unresolved issue in contemporary biomedicine is the overwhelming number and diversity of complex images that require annotation, analysis and interpretation. Recent advances in Deep Learning have revolutionized the field of computer vision, creating algorithms that compete with human experts in image segmentation tasks. However, these frameworks require large human-annotated datasets for training and the resulting “black box” models are difficult to interpret. In this study, we introduce Kartezio, a modular Cartesian Genetic Programming-based computational strategy that generates fully transparent and easily interpretable image processing pipelines by iteratively assembling and parameterizing computer vision functions. The pipelines thus generated exhibit comparable precision to state-of-the-art Deep Learning approaches on instance segmentation tasks, while requiring drastically smaller training datasets. This few-shot learning capability confers tremendous flexibility, speed, and versatility on the approach. We then deploy Kartezio to solve a series of semantic and instance segmentation problems, and demonstrate its utility across diverse images ranging from multiplexed tissue histopathology images to high-resolution microscopy images. While the flexibility, robustness and practical utility of Kartezio make this fully explainable evolutionary designer a potential game-changer in the field of biomedical image processing, Kartezio remains complementary and potentially auxiliary to mainstream Deep Learning approaches.
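    The following toy sketch conveys the general idea of evolving a classical image-processing pipeline from a genome, as in Cartesian Genetic Programming; the function set, genome encoding, random-search loop and fitness measure are simplified assumptions for illustration and do not reproduce Kartezio's actual implementation.
```python
# Much-simplified sketch of a genome that selects and parameterizes a sequence
# of classical image-processing functions, scored against an annotated example.
# Everything here is an illustrative assumption, not Kartezio's code.
import numpy as np
from scipy import ndimage

FUNCTIONS = [
    lambda img, p: ndimage.gaussian_filter(img, sigma=p),          # smoothing
    lambda img, p: ndimage.median_filter(img, size=max(1, int(p))),
    lambda img, p: ndimage.minimum_filter(img, size=max(1, int(p))),  # erosion-like
    lambda img, p: ndimage.maximum_filter(img, size=max(1, int(p))),  # dilation-like
]

def decode_and_run(genome, image):
    """Apply the encoded pipeline, then threshold to obtain a binary mask."""
    out = image.astype(float)
    for func_idx, param in genome:
        out = FUNCTIONS[func_idx](out, param)
    return out > out.mean()

def iou(pred, target):
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy "few-shot" training example: a noisy disk and its ground-truth mask
yy, xx = np.mgrid[:64, :64]
truth = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
image = truth * 1.0 + np.random.default_rng(0).normal(0, 0.4, truth.shape)

# A naive random search over genomes stands in for the evolutionary loop
rng = np.random.default_rng(1)
best, best_score = None, -1.0
for _ in range(200):
    genome = [(rng.integers(len(FUNCTIONS)), rng.uniform(1, 4)) for _ in range(3)]
    score = iou(decode_and_run(genome, image), truth)
    if score > best_score:
        best, best_score = genome, score
print("best IoU:", round(best_score, 3), "pipeline:", best)
```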
  • Item
    Deep learning a boon for biophotonics
    (Weinheim : Wiley-VCH-Verl., 2020) Pradhan, Pranita; Guo, Shuxia; Ryabchykov, Oleg; Popp, Juergen; Bocklitz, Thomas W.
    This review covers original articles applying deep learning in the biophotonic field that were published in recent years. During this period, deep learning, a subset of machine learning mostly based on artificial neural network architectures, has been applied to a number of biophotonic tasks and has achieved state-of-the-art performance. Deep learning in the biophotonic field is therefore growing rapidly and will be used in the coming years to build real-time biophotonic decision-making systems and to analyze biophotonic data in general. In this contribution, we discuss the possibilities of deep learning in the biophotonic field, including image classification, segmentation, registration, pseudostaining and resolution enhancement. Additionally, we discuss the potential use of deep learning for spectroscopic data, including spectral data preprocessing and spectral classification. We conclude this review by addressing the potential applications and challenges of using deep learning for biophotonic data.
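    As a hypothetical example of one task listed above, spectral classification, the sketch below defines a small 1D convolutional network for labelling spectra; the spectrum length, architecture and synthetic data are assumptions chosen only to show the workflow, not a model from the review.
```python
# Minimal, assumed example: classifying 1D spectra (e.g. Raman spectra) with a
# small convolutional network. Architecture and class count are arbitrary.
import torch
from torch import nn

n_wavenumbers, n_classes = 1000, 3

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4),  # learn local band-shape filters
    nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),                     # global pooling over the spectral axis
    nn.Flatten(),
    nn.Linear(32, n_classes),
)

# One training step on synthetic spectra, just to show the workflow
spectra = torch.randn(8, 1, n_wavenumbers)       # batch of 8 single-channel spectra
labels = torch.randint(0, n_classes, (8,))
loss = nn.CrossEntropyLoss()(model(spectra), labels)
loss.backward()
print("cross-entropy loss:", float(loss))
```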
  • Item
    DeepsmirUD: Prediction of Regulatory Effects on microRNA Expression Mediated by Small Molecules Using Deep Learning
    (Basel : Molecular Diversity Preservation International, 2023) Sun, Jianfeng; Ru, Jinlong; Ramos-Mucci, Lorenzo; Qi, Fei; Chen, Zihao; Chen, Suyuan; Cribbs, Adam P.; Deng, Li; Wang, Xia
    Aberrant miRNA expression has been associated with a large number of human diseases. Therefore, targeting miRNAs to regulate their expression levels has become an important therapy against diseases that stem from the dysfunction of pathways regulated by miRNAs. In recent years, small molecules have demonstrated enormous potential as drugs to regulate miRNA expression (i.e., SM-miR). A clear understanding of the mechanism of action of small molecules on the upregulation and downregulation of miRNA expression allows precise diagnosis and treatment of oncogenic pathways. However, apart from a slow and costly process of experimental determination, computational strategies to assist with this task have yet to be formulated. In this work, we developed, to the best of our knowledge, the first cross-platform prediction tool, DeepsmirUD, to infer small-molecule-mediated regulatory effects on miRNA expression (i.e., upregulation or downregulation). The method is powered by 12 cutting-edge deep-learning frameworks and achieved AUC values of 0.843/0.984 and AUCPR values of 0.866/0.992 on two independent test datasets. With a complementary network inference approach based on similarity, we report a significantly improved accuracy of 0.813 in determining the regulatory effects of nearly 650 associated SM-miR relations, each formed with either a novel small molecule or a novel miRNA. By further integrating miRNA–cancer relationships, we established a database of potential pharmaceutical drugs from 1343 small molecules for 107 cancers to help understand drug mechanisms of action and offer novel insight into drug repositioning. Furthermore, we employed DeepsmirUD to predict the regulatory effects of a large number of high-confidence associated SM-miR relations. Taken together, our method shows promise to accelerate the development of potential miRNA targets and small-molecule drugs.
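    The sketch below is not DeepsmirUD's code; it only illustrates how the reported metrics, AUC and AUCPR, can be computed for binary up-/down-regulation predictions, with simple score averaging standing in for an ensemble of models; all data are synthetic.
```python
# Illustrative computation of AUC and AUCPR for binary up-/down-regulation
# predictions, averaging the scores of several models as a stand-in ensemble.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)            # 1 = upregulation, 0 = downregulation

# Pretend we have predicted probabilities from 12 independent models (synthetic)
model_scores = [np.clip(y_true + rng.normal(0, 0.8, y_true.size), 0, 1) for _ in range(12)]
ensemble_score = np.mean(model_scores, axis=0)   # simple score averaging

print("AUC  :", round(roc_auc_score(y_true, ensemble_score), 3))
print("AUCPR:", round(average_precision_score(y_true, ensemble_score), 3))
```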
  • Item
    Early Detection of Stripe Rust in Winter Wheat Using Deep Residual Neural Networks
    (Lausanne : Frontiers Media, 2021) Schirrmann, Michael; Landwehr, Niels; Giebel, Antje; Garz, Andreas; Dammer, Karl-Heinz
    Stripe rust (Pst) is a major disease of wheat crops that, if left untreated, leads to severe yield losses. The use of fungicides is often essential to control Pst when sudden outbreaks are imminent. Sensors capable of detecting Pst in wheat crops could optimize the use of fungicides and improve disease monitoring in high-throughput field phenotyping. Deep learning now provides new tools for image recognition and may pave the way for new camera-based sensors that can identify symptoms in the early stages of a disease outbreak within the field. The aim of this study was to train an image classifier to detect Pst symptoms in winter wheat canopies based on a deep residual neural network (ResNet). For this purpose, a large annotation database was created from images taken by a standard RGB camera mounted on a platform at a height of 2 m. Images were acquired while the platform was moved over a randomized field experiment with Pst-inoculated and Pst-free plots of winter wheat. The image classifier was trained with 224 × 224 px patches tiled from the original, unprocessed camera images and was tested on different stages of the disease outbreak. At the patch level, the classifier reached a total accuracy of 90%. At the image level, it was evaluated with a sliding window using a large stride of 224 px, which allows for fast inference, and reached a total accuracy of 77%. Even at a stage with very little disease spread (0.5%) at the very beginning of the Pst outbreak, a detection accuracy of 57% was obtained. In the initial phase of the Pst outbreak, with 2 to 4% disease spread, a detection accuracy of 76% was attained. With further optimization, the image classifier could be implemented in embedded systems and deployed on drones, vehicles or scanning systems for fast mapping of Pst outbreaks.
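    The sketch below illustrates the patch-wise evaluation scheme described above: tiling an RGB image into 224 × 224 px patches with a stride of 224 px and classifying each patch with a ResNet; the untrained torchvision resnet18 and the random image are stand-ins for the study's trained model and field data.
```python
# Assumed illustration: tile a camera image into 224 x 224 px patches with a
# 224 px stride and classify each patch with a ResNet (untrained stand-in).
import torch
from torchvision.models import resnet18

PATCH, STRIDE = 224, 224
model = resnet18(num_classes=2).eval()           # 2 classes: Pst symptoms / healthy

image = torch.rand(3, 1120, 2240)                # stand-in for a camera image (C, H, W)
patches = []
for top in range(0, image.shape[1] - PATCH + 1, STRIDE):
    for left in range(0, image.shape[2] - PATCH + 1, STRIDE):
        patches.append(image[:, top:top + PATCH, left:left + PATCH])

with torch.no_grad():
    logits = model(torch.stack(patches))         # classify all patches in one batch
infected = (logits.argmax(dim=1) == 1)

# A simple image-level decision: flag the image if any patch looks infected
print(f"{int(infected.sum())} of {len(patches)} patches flagged; image positive: {bool(infected.any())}")
```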
  • Item
    Computational tissue staining of non-linear multimodal imaging using supervised and unsupervised deep learning
    (Washington, DC : OSA, 2021) Pradhan, Pranita; Meyer, Tobias; Vieth, Michael; Stallmach, Andreas; Waldner, Maximilian; Schmitt, Michael; Popp, Juergen; Bocklitz, Thomas
    Hematoxylin and eosin (H&E) staining is the 'gold-standard' method in histopathology. However, standard H&E staining of high-quality tissue sections requires long sample preparation times, including sample embedding, which restricts its application for 'real-time' disease diagnosis. For this reason, this work proposes a label-free alternative, non-linear multimodal (NLM) imaging, which combines three non-linear optical modalities: coherent anti-Stokes Raman scattering, two-photon excitation fluorescence and second-harmonic generation. To correlate the information in the NLM images with H&E images, this work proposes computational staining of NLM images using deep learning models in a supervised and an unsupervised approach. In the supervised and the unsupervised approach, conditional generative adversarial networks (CGANs) and cycle conditional generative adversarial networks (cycle CGANs) are used, respectively. Both CGAN and cycle CGAN models generate pseudo-H&E images, which are quantitatively analyzed based on mean squared error, the structural similarity index and a color shading similarity index. The means of the three metrics calculated for the computationally generated H&E images indicate good performance. Thus, using CGAN and cycle CGAN models for computational staining is beneficial for diagnostic applications, as no laboratory-based staining procedure is required. To the authors' best knowledge, this is the first time that NLM images have been computationally stained to resemble H&E images using GANs in an unsupervised manner.
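    The sketch below illustrates part of the quantitative comparison mentioned above, computing mean squared error and structural similarity between a generated pseudo-H&E image and a reference image on random stand-in arrays; the paper's color shading similarity index is specific to that work and is not reproduced here.
```python
# Assumed illustration: compare a generated pseudo-H&E image with a reference
# H&E image using MSE and SSIM; the arrays here are random stand-ins.
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

rng = np.random.default_rng(0)
real_he = rng.random((256, 256, 3))                                      # reference H&E image
pseudo_he = np.clip(real_he + rng.normal(0, 0.05, real_he.shape), 0, 1)  # generated image

mse = mean_squared_error(real_he, pseudo_he)
ssim = structural_similarity(real_he, pseudo_he, channel_axis=-1, data_range=1.0)
print(f"MSE: {mse:.4f}  SSIM: {ssim:.4f}")
```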
  • Item
    Correcting systematic errors by hybrid 2D correlation loss functions in nonlinear inverse modelling
    (San Francisco, California, US : PLOS, 2023) Mayerhöfer, Thomas G.; Noda, Isao; Pahlow, Susanne; Heintzmann, Rainer; Popp, Jürgen
    Recently, a new family of loss functions called smart error sums has been suggested. These loss functions account for correlations within experimental data and force modeled data to obey these correlations. As a result, multiplicative systematic errors in experimental data can be revealed and corrected. The smart error sums are based on 2D correlation analysis, a comparatively recent methodology for analyzing spectroscopic data that has found broad application. In this contribution, we mathematically generalize and break down this methodology and the smart error sums to uncover their mathematical roots and simplify them into a general tool beyond spectroscopic modelling. This reduction also allows a simplified discussion of the limits and prospects of this new method, including its potential future use as a sophisticated loss function in deep learning. To support its deployment, the work includes computer code to allow reproduction of the basic results.
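    The sketch below is not the paper's smart error sum; it only shows the standard synchronous 2D correlation spectrum and a toy loss that penalizes differences between the correlation structure of measured and modeled data, which conveys the general idea of forcing a model to respect correlations present in the experiment.
```python
# Toy illustration of a 2D-correlation-based loss: compare the synchronous 2D
# correlation spectra of measured and modeled data sets (not the paper's code).
import numpy as np

def synchronous_2d(spectra):
    """spectra: (m perturbations, n spectral points) -> (n, n) synchronous spectrum."""
    centered = spectra - spectra.mean(axis=0, keepdims=True)
    return centered.T @ centered / (spectra.shape[0] - 1)

def correlation_loss(measured, modeled):
    """Sum of squared differences between the two synchronous spectra."""
    return np.sum((synchronous_2d(measured) - synchronous_2d(modeled)) ** 2)

# Toy example: modeled spectra carry a multiplicative systematic error
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
measured = np.array([np.exp(-((x - 0.5) ** 2) / 0.01) * (1 + 0.1 * k) for k in range(6)])
modeled = measured * 1.05 + rng.normal(0, 1e-3, measured.shape)
print("correlation loss:", correlation_loss(measured, modeled))
```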