Principal Investigator: Faisal Mahmood
Over the course of a patient's clinical path, diverse data are acquired, each providing different insights into the state of the disease. Radiology and histology images are key to brain cancer diagnosis. Magnetic resonance imaging (MRI) scans reveal tumor presence in brain tissue and provide the initial insights into the disease, while whole-slide images (WSIs) capture the tumor's microscopic structure. Each of these data types contributes specific information about the glioma and guides clinical decisions. However, it remains unclear why some patients respond better to therapy and what causes the variation in survival rates. Our hypothesis is that incorporating information from diverse macroscopic and microscopic data could improve predictions of patient outcomes. Given the complexity of these data, analyzing them manually is beyond human capability; here, AI methods can help discover new relationships within and across the different data types. We used methods that take raw patient MRI and WSI data as input and are trained to predict survival outcomes, so that the AI alone identifies which information is relevant for prediction. We also show that glioma survival prediction improves after combining information from radiology and histology.
Management of aggressive malignancies, such as glioma, is complicated by a lack of predictive biomarkers that could stratify patients by treatment outcome. AI provides a tool to examine complex features from diverse data and enhance patient outcome prediction. We present a weakly-supervised, multimodal deep learning-based model that fuses histopathology and radiology features for glioma survival prediction. We deploy an attention-based multiple instance learning approach, effectively bypassing the need for manual annotation of predictive regions. The model is trained on data from 205 glioma patients – paired whole-slide images (WSI) and multimodal magnetic resonance imaging (MRI) – from The Cancer Genome Atlas and The Cancer Imaging Archive. Unimodal networks are trained individually on WSIs and MRIs for the survival task. The multimodal model fuses the features from the unimodal models using the Kronecker product to model pairwise feature interactions. Under 10-fold cross-validation, the unimodal algorithms trained on radiology or pathology data obtained an average concordance index (c-index) of 0.704 and 0.712, respectively. The multimodal algorithm achieved higher performance, with an average c-index of 0.733. The presented framework demonstrates the feasibility of weakly-supervised multimodal integration of radiology and histology data for improved survival prediction in glioma patients.
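The two core ingredients of the abstract – attention-based multiple instance pooling of per-instance features, and Kronecker-product fusion of the resulting unimodal embeddings – can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, random features, and attention parameterization (a simple tanh-scored attention in the style of attention-MIL) are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(instances, V, w):
    """Attention-based MIL pooling: score each instance, softmax the
    scores into attention weights, return the weighted sum of features.
    No instance-level labels are needed, only a bag-level outcome."""
    scores = np.tanh(instances @ V) @ w   # one scalar score per instance
    attn = softmax(scores)                # attention weights sum to 1
    return attn @ instances               # pooled bag-level embedding

# Hypothetical toy dimensions (not the paper's actual feature sizes):
# a bag of WSI patch features and a bag of MRI-derived features.
n_patches, d_path = 6, 4
n_slices, d_rad = 5, 3
wsi_patches = rng.standard_normal((n_patches, d_path))
mri_feats = rng.standard_normal((n_slices, d_rad))

h_path = attention_pool(wsi_patches,
                        rng.standard_normal((d_path, d_path)),
                        rng.standard_normal(d_path))
h_rad = attention_pool(mri_feats,
                       rng.standard_normal((d_rad, d_rad)),
                       rng.standard_normal(d_rad))

# Kronecker-product fusion: appending a constant 1 to each unimodal
# vector keeps the original features alongside all pairwise products,
# giving a (4+1)*(3+1) = 20-dimensional fused representation.
fused = np.kron(np.append(h_path, 1.0), np.append(h_rad, 1.0))
print(fused.shape)  # (20,)
```

A downstream survival head (e.g. a Cox-style risk score) would then be trained on `fused`; that head is omitted here since the abstract does not specify its form.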