
Jana Lipkova, PhD

Pronouns

She/Her/Hers

Job Title

Research Fellow

Academic Rank

Research Fellow

Department

Pathology

Authors

Jana Lipkova, Luoting Zhuang, Richard J. Chen, Drew F. K. Williamson, Tiffany Miller and Faisal Mahmood

Principal Investigator

Faisal Mahmood

Research Category: Cancer

Tags

AI-based multimodal integration of radiology, pathology and genomics for outcome prediction

Scientific Abstract

Accurate prognostic determinations and stratification of patients into distinct risk groups can assist with cancer treatment planning, assessment of disease trajectories, and surveillance decisions. Prognostic determinations are often made by experts after considering reports from subjectively assessed radiology scans and pathology slides in combination with genomic alterations, temporally tracked electronic medical records, and familial histories. However, the vast amount of medical data makes it difficult for experts to adequately assess patient prognosis in this multimodal context. Deep learning is an effective tool for integrating heterogeneous medical data, selecting relevant features, and discovering significant associations across multiple diverse modalities. Here, we present a deep learning-based multimodal integration framework that fuses histopathology, radiology, and genomics to improve outcome prediction. The proposed framework is weakly supervised, does not require annotations, tumor segmentation, or hand-crafted features, and can easily scale to larger cohorts and diverse disease models. The feasibility of the model is tested on glioma and non-small cell lung cancer, indicating the benefits of multimodal data integration for better patient risk prediction and stratification. An independent international cohort is used for external validation of the model. The proposed approach demonstrates the potential of integrating orthogonal data modalities to build prognostic models and paves the way for clinical trials to establish the efficacy of AI-driven multimodal prognostic models in cancer.
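
To make the idea of multimodal fusion for outcome prediction concrete, the sketch below shows a generic late-fusion survival model in PyTorch: each modality's feature vector is projected into a shared space, the embeddings are concatenated, and a single risk score is trained with a Cox-style partial-likelihood loss. This is a minimal illustration only, not the authors' published architecture; the feature dimensions, concatenation-based fusion, class and function names, and the choice of Cox loss are assumptions made for brevity.

```python
# Illustrative sketch only: a generic late-fusion survival model, not the
# authors' actual framework. Dimensions, fusion strategy, and loss are assumed.
import torch
import torch.nn as nn


class MultimodalRiskModel(nn.Module):
    """Fuse pathology, radiology, and genomic feature vectors into one risk score."""

    def __init__(self, path_dim=1024, rad_dim=512, gen_dim=200, hidden=128):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.path_net = nn.Sequential(nn.Linear(path_dim, hidden), nn.ReLU())
        self.rad_net = nn.Sequential(nn.Linear(rad_dim, hidden), nn.ReLU())
        self.gen_net = nn.Sequential(nn.Linear(gen_dim, hidden), nn.ReLU())
        # Concatenation-based fusion followed by a single risk head.
        self.fusion = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, path_x, rad_x, gen_x):
        h = torch.cat(
            [self.path_net(path_x), self.rad_net(rad_x), self.gen_net(gen_x)], dim=-1
        )
        return self.fusion(h).squeeze(-1)  # higher value = higher predicted risk


def cox_partial_likelihood_loss(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow approximation for ties)."""
    order = torch.argsort(time, descending=True)  # risk sets become prefixes
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)


if __name__ == "__main__":
    model = MultimodalRiskModel()
    # Random stand-ins for slide-level, scan-level, and genomic features.
    path_x, rad_x, gen_x = torch.randn(8, 1024), torch.randn(8, 512), torch.randn(8, 200)
    time = torch.rand(8) * 60                   # months of follow-up
    event = torch.randint(0, 2, (8,)).float()   # 1 = event observed, 0 = censored
    loss = cox_partial_likelihood_loss(model(path_x, rad_x, gen_x), time, event)
    loss.backward()
    print(f"loss = {loss.item():.3f}")
```

Once trained, the predicted risk scores can be thresholded (e.g., at the cohort median) to stratify patients into low- and high-risk groups, which is the kind of risk stratification evaluated in the abstract.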

Lay Abstract

Precision medicine offers a tremendous opportunity to shape the future of cancer care by providing treatment tailored to patient-specific conditions. Current precision medicine is heavily based on prognostic biomarkers that forecast the risk of clinical outcomes, such as survival or recurrence. Despite the central role of these biomarkers, it is often not clear why patients with similar profiles respond differently to treatment or why their survival rates vary. This can be partially attributed to the unimodal nature of current biomarkers, which may not always capture the larger clinical context described by diverse medical data, ranging from radiology, histology, and genomics to electronic health records and wearable devices. To address this challenge, we propose an AI-based model that integrates information from diverse medical data (radiology, histology, genomics) to provide more accurate patient outcome predictions. The feasibility of the model is tested on glioma and non-small cell lung cancer, indicating the benefits of multimodal data integration for better patient risk prediction and stratification. Interpretability methods are used to reveal the prognostic markers identified by the model across different modalities, enabling validation of the model. The improved prognostic determinations can reveal differences in patient outcomes and guide treatment planning and surveillance decisions.

Clinical Implications

AI offers a means to explore complex and diverse medical data and to provide more accurate prognostic determinations for each patient. This larger clinical context has the potential to reveal differences in patient outcomes and to guide personalized treatment planning.