Cardiac rejection is a serious condition occurring in 30-40% of patients within one year post-transplantation. Since most rejections appear without clinical symptoms, histopathological evaluation of cardiac biopsies represents the gold standard for patient monitoring and treatment planning. The transplantation guidelines recognize three treatment-relevant rejection types (cellular, antibody-mediated, combined cellular and antibody) and two grades (low, high). The standard guideline, based on a qualitative description of rejection characteristics visible in microscopy images, is prone to subjective interpretation and thus high inter-rater variability. Mistreatment of rejection due to diagnostic inaccuracy poses significant harm to the patient. Artificial-intelligence (AI) approaches could increase diagnostic objectivity and efficiency by identifying the rejection state and clinically relevant biopsy regions.
We present an AI model for rejection assessment in microscopy images of cardiac biopsies. In the learning phase, the model is provided only with the microscopy images and the patient's diagnosis; it must identify on its own the characteristics and image regions relevant to the diagnosis. This provides a significant benefit over contemporary models that require expensive manual annotation of predictive regions. The model, trained on biopsies collected at BWH between 2010 and 2020, reached high accuracy for all rejection types and grades. This study highlights the potential of AI-based models in assisting cardiac-rejection management.
Histological evaluation of endomyocardial biopsies (EMBs) represents the gold standard for patient monitoring and treatment planning in cardiac allograft rejection (CAR). The ISHLT recognizes three treatment-relevant rejection types (acute cellular, antibody-mediated, and combined cellular and antibody-mediated) and two grades (low, grade 1; high, grades 2-3). The standard histologic grading of rejection is prone to subjective interpretation and thus high inter-rater variability. Deep-learning (DL) models could increase diagnostic objectivity and efficiency by identifying both the rejection state and the clinically relevant biopsy regions.
We present a DL model for the assessment of CAR in H&E-stained whole-slide images (WSIs) collected at BWH between 2010 and 2020. A multi-task, multi-label network is constructed to simultaneously identify healthy tissue and the different rejection types, and a separate classifier is trained to estimate the rejection grade. Multiple-instance learning enables training with the patient's diagnosis as the only label, in contrast to supervised models that rely on expensive pixel-level annotations of predictive regions. The model reached high accuracy for cellular (0.92), antibody-mediated (0.92), and combined cellular-antibody (0.94) rejection, as well as for grade (0.76) prediction. Moreover, the estimated diagnosis-relevant regions of the WSIs coincide with the areas considered by pathologists. This proof-of-concept study highlights the potential of DL-based assistive tools in CAR management.
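The weakly supervised setup described above, where only a slide-level diagnosis is available and the model must localize diagnosis-relevant regions, is commonly realized with attention-based multiple-instance learning. The following is a minimal sketch of that idea, not the study's actual implementation: a WSI is treated as a bag of patch feature vectors, a linear attention scorer weights the patches, and independent sigmoid heads give one probability per rejection type (multi-label, since cellular and antibody rejection can co-occur). All function names, dimensions, and labels here are illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_mil_forward(instances, w_attn, heads):
    """Forward pass of a simplified attention-based MIL classifier.

    instances : list of patch feature vectors (one bag = one WSI)
    w_attn    : weight vector of a linear attention scorer
    heads     : dict mapping label name -> classifier weight vector

    Returns per-label probabilities and the per-patch attention
    weights; the latter play the role of the "diagnosis-relevant
    regions" highlighted for pathologists.
    """
    # Score each patch, then normalize scores into attention weights.
    scores = [dot(w_attn, inst) for inst in instances]
    attn = softmax(scores)
    # Bag embedding = attention-weighted sum of patch features.
    dim = len(instances[0])
    bag = [sum(a * inst[d] for a, inst in zip(attn, instances))
           for d in range(dim)]
    # Multi-label output: one sigmoid per rejection type (not a
    # softmax, because rejection types are not mutually exclusive).
    probs = {label: sigmoid(dot(w, bag)) for label, w in heads.items()}
    return probs, attn
```

In training, the per-label sigmoid outputs would be fit against the slide-level diagnosis with a binary cross-entropy loss per head; the attention weights are learned as a by-product, which is what allows the model to surface predictive regions without pixel-level annotations.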