Robust Fusion of Time Series and Image Data for Improved Multimodal Clinical Prediction

dc.bibliographicCitation.firstPage: 174107
dc.bibliographicCitation.journalTitle: IEEE Access
dc.bibliographicCitation.lastPage: 174121
dc.bibliographicCitation.volume: 12
dc.contributor.author: Rasekh, Ali
dc.contributor.author: Heidari, Reza
dc.contributor.author: Hosein Haji Mohammad Rezaie, Amir
dc.contributor.author: Sharifi Sedeh, Parsa
dc.contributor.author: Ahmadi, Zahra
dc.contributor.author: Mitra, Prasenjit
dc.contributor.author: Nejdl, Wolfgang
dc.date.accessioned: 2025-02-26T09:28:32Z
dc.date.available: 2025-02-26T09:28:32Z
dc.date.issued: 2024
dc.description.abstract: With the increasing availability of diverse data types, particularly images and time series data from medical experiments, there is a growing demand for techniques designed to combine various modalities of data effectively. Our motivation comes from the important areas of predicting mortality and phenotyping where using different modalities of data could significantly improve our ability to predict. To tackle this challenge, we introduce a new method that uses two separate encoders, one for each type of data, allowing the model to understand complex patterns in both visual and time-based information. Apart from the technical challenges, our goal is to make the predictive model more robust in noisy conditions and perform better than current methods. We also deal with imbalanced datasets and use an uncertainty loss function, yielding improved results while simultaneously providing a principled means of modeling uncertainty. Additionally, we include attention mechanisms to fuse different modalities, allowing the model to focus on what is important for each task. We tested our approach using the comprehensive multimodal MIMIC dataset, combining MIMIC-IV and MIMIC-CXR datasets. Our experiments show that our method is effective in improving multimodal deep learning for clinical applications. The code for this work is publicly available at: https://github.com/AliRasekh/TSImageFusion
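To make the architecture described in the abstract concrete, the following is a minimal sketch of the two-encoder design with attention-based fusion and an uncertainty-weighted loss. It is not the authors' implementation (that lives at https://github.com/AliRasekh/TSImageFusion); the encoder choices, feature dimensions, class count, and the specific uncertainty term are assumptions for illustration only.

```python
# Hypothetical sketch: GRU encoder for clinical time series, small CNN encoder for
# chest X-rays, cross-attention fusion, and a learned-uncertainty loss weighting.
import torch
import torch.nn as nn

class TwoEncoderFusion(nn.Module):
    def __init__(self, ts_features=17, d_model=128, num_classes=25):  # sizes are assumptions
        super().__init__()
        # Time-series encoder over sequential clinical measurements.
        self.ts_encoder = nn.GRU(ts_features, d_model, batch_first=True)
        # Image encoder: small CNN standing in for a pretrained backbone.
        self.img_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Attention-based fusion: the time-series summary attends to image features.
        self.fusion = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * d_model, num_classes)
        # Learnable log-variance used to weight the task loss (homoscedastic-uncertainty style).
        self.log_var = nn.Parameter(torch.zeros(1))

    def forward(self, ts, img):
        _, h = self.ts_encoder(ts)                      # h: (1, B, d_model)
        ts_emb = h[-1].unsqueeze(1)                     # (B, 1, d_model)
        img_emb = self.img_encoder(img).unsqueeze(1)    # (B, 1, d_model)
        fused, _ = self.fusion(ts_emb, img_emb, img_emb)
        z = torch.cat([fused.squeeze(1), ts_emb.squeeze(1)], dim=-1)
        return self.head(z)

    def uncertainty_loss(self, logits, targets):
        # Scale the classification loss by learned uncertainty plus a regularizing term.
        bce = nn.functional.binary_cross_entropy_with_logits(logits, targets)
        return torch.exp(-self.log_var) * bce + self.log_var

# Example forward pass with dummy data: 48 time steps of 17 features, 64x64 grayscale images.
model = TwoEncoderFusion()
logits = model(torch.randn(4, 48, 17), torch.randn(4, 1, 64, 64))
loss = model.uncertainty_loss(logits, torch.randint(0, 2, (4, 25)).float())
```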
dc.description.fonds: TIB_Fonds
dc.description.version: publishedVersion
dc.identifier.uri: https://oa.tib.eu/renate/handle/123456789/18571
dc.identifier.uri: https://doi.org/10.34657/17590
dc.language.iso: eng
dc.publisher: New York, NY : IEEE
dc.relation.doi: https://doi.org/10.1109/access.2024.3497668
dc.relation.essn: 2169-3536
dc.rights.license: CC BY 4.0 Unported
dc.rights.uri: https://creativecommons.org/licenses/by/4.0
dc.subject.ddc: 004
dc.subject.ddc: 621.3
dc.subject.other: attention mechanism
dc.subject.other: Multimodal learning
dc.subject.other: phenotyping
dc.subject.other: robustness
dc.subject.other: time series
dc.title: Robust Fusion of Time Series and Image Data for Improved Multimodal Clinical Prediction
dc.type: Article
dc.type: Text
tib.accessRights: openAccess
wgl.contributor: TIB
Files
Original bundle
Name: Robust_Fusion_of_Time_Series_and_Image_Data_for_Improved_Multimodal_Clinical_Prediction.pdf
Size: 1.98 MB
Format: Adobe Portable Document Format