Brigham Research Institute Poster Session

Bowen Chen



Job Title

Research Assistant


Bowen Chen, Ming Y. Lu, Tiffany Chen, Faisal Mahmood

Principal Investigator

Faisal Mahmood

Research Category: Digital Health, Imaging, and Informatics


Localizing regions of interest in whole slide images via reinforcement learning

Scientific Abstract

With drastic improvements in the performance of neural networks and computer vision algorithms, deep learning-based image analysis models have been applied to a wide variety of fields. Within computational pathology, rapid progress has been made in the past few years in cancer diagnosis, subtyping, and survival prediction using pathology whole slide images (WSIs), but these methods involve processing the entire WSI at the highest resolution. However, many decisions in pathology are based on small regions of interest (ROIs) that make up a tiny proportion of the WSI, such as identifying cancer metastasis or diseased glomeruli. Additionally, although WSIs are generally stored in a multi-resolution image pyramid format, current approaches process the slide at a single fixed resolution. To exploit this pyramid structure, we propose a method trained with reinforcement learning that identifies ROIs by examining the slide at lower resolutions and selectively zooming into higher-resolution patches. We apply our method to the task of localizing glomeruli in kidney biopsies and show that it significantly lowers the proportion of the WSI sampled at high resolution while maintaining high precision and recall in localizing the structures of interest.
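The coarse-to-fine idea above can be illustrated with a minimal sketch. This is not the trained reinforcement-learning agent from the poster: the `policy` function below is a hypothetical stand-in heuristic that decides whether to zoom into a quadrant, and the "slide" is a toy binary grid rather than a real WSI pyramid.

```python
# Hypothetical sketch of coarse-to-fine ROI localization over an image
# pyramid. The "policy" here is a stand-in heuristic, not the learned
# RL policy described in the abstract.

def localize_rois(slide, patch=4):
    """Recursively zoom into quadrants whose coarse view suggests an ROI.

    `slide` is a square 2D grid of 0/1 values (1 = structure of interest);
    returns the top-left corners of the high-resolution patches visited.
    """
    visited = []

    def policy(r0, c0, size):
        # Stand-in for the learned policy: zoom in only if the region,
        # viewed coarsely, contains any signal.
        return any(slide[r][c]
                   for r in range(r0, r0 + size)
                   for c in range(c0, c0 + size))

    def zoom(r0, c0, size):
        if not policy(r0, c0, size):
            return  # region skipped: never sampled at high resolution
        if size <= patch:
            visited.append((r0, c0))  # reached highest-resolution patch
            return
        half = size // 2
        for dr in (0, half):          # recurse into the four quadrants
            for dc in (0, half):
                zoom(r0 + dr, c0 + dc, half)

    zoom(0, 0, len(slide))
    return visited


# Toy 16x16 "slide" with a single 2x2 ROI (e.g. one glomerulus).
slide = [[0] * 16 for _ in range(16)]
slide[5][5] = slide[5][6] = slide[6][5] = slide[6][6] = 1

patches = localize_rois(slide, patch=4)
total = (16 // 4) ** 2
print(f"sampled {len(patches)}/{total} high-res patches")
# → sampled 1/16 high-res patches
```

Even in this toy setting, only the one 4x4 patch containing the ROI is examined at full resolution; the other fifteen are pruned at coarser levels, which is the source of the computational savings the method targets.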

Lay Abstract

Though rapid progress has been made in computational pathology to create artificial intelligence solutions for diagnosing or subtyping cancer from whole slide images (WSIs), these methods process the entire WSI at the highest magnification. This is computationally costly since these images tend to be on the scale of billions of pixels. However, looking at the whole image at the highest resolution is generally not needed, since in most pathologist workflows, the task at hand only involves making a decision based on small regions of interest (ROIs) in the image. For example, to diagnose a slide containing metastatic cancer tissue, the pathologist does not need to examine the entire slide at the highest magnification. To take advantage of this, we propose a deep learning method that looks at the slide at lower magnifications and chooses to zoom into certain higher-resolution regions that are likely to contain ROIs. We test this method on finding glomeruli in kidney biopsies and show that it can accurately localize the structures of interest while looking at a significantly smaller portion of the slide.

Clinical Implications

Our method has the potential to reduce the time and cost of decision-making in computational workflows, and it could be integrated into traditional pathology workflows by preselecting regions of interest from a pathology tissue slide.