Whole Slide Image (WSI) analysis plays a pivotal role in computer-aided diagnosis and disease prognosis in digital pathology. While the emergence of deep learning and self-supervised learning (SSL) techniques helps capture relevant information in WSIs, relying on deep features alone overlooks essential domain-specific information captured by traditional handcrafted features. To address this issue, we propose fusing handcrafted and deep features within the multiple instance learning (MIL) framework for WSI classification. Inspired by advances in transformers, we propose a novel cross-attention fusion mechanism, “CA-Fuse-MIL,” to learn complementary information from handcrafted and deep features. We demonstrate that cross-attention fusion outperforms WSI classification using either handcrafted or deep features alone. On the TCGA Lung Cancer dataset, the proposed fusion technique boosts accuracy by up to 5.21% and 1.56% over two different deep-feature baselines. We also explore a variant of CA-Fuse-MIL that uses multiple cross-attention layers.
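The core idea of cross-attention fusion can be illustrated with a minimal numpy sketch: per-patch deep features act as queries against the handcrafted features of the same bag, so each patch attends to complementary handcrafted evidence. Random projection matrices stand in for learned weights, and all dimensions and function names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(deep, handcrafted, d_k=64, seed=0):
    """One cross-attention layer (sketch): deep-feature patches query the
    handcrafted features of the same bag. Random projections stand in for
    the learned W_q/W_k/W_v of a trained model."""
    rng = np.random.default_rng(seed)
    W_q = rng.standard_normal((deep.shape[1], d_k)) / np.sqrt(deep.shape[1])
    W_k = rng.standard_normal((handcrafted.shape[1], d_k)) / np.sqrt(handcrafted.shape[1])
    W_v = rng.standard_normal((handcrafted.shape[1], d_k)) / np.sqrt(handcrafted.shape[1])
    Q, K, V = deep @ W_q, handcrafted @ W_k, handcrafted @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n_patches, n_patches) attention weights
    return attn @ V                         # fused per-patch representation

# A bag of 8 patches: SSL embeddings (dim 384) and handcrafted stats (dim 100)
deep = np.random.default_rng(1).standard_normal((8, 384))
hand = np.random.default_rng(2).standard_normal((8, 100))
fused = cross_attention_fuse(deep, hand)  # shape (8, 64)
```

A standard MIL pooling head (e.g. attention pooling) would then aggregate the fused per-patch features into a slide-level prediction.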
A proper cancer diagnosis is imperative for determining a patient's medical treatment. It requires accurate staging and classification of the tumor, along with additional factors that predict response to treatment. The mitotic-count-based tumor proliferation grade provides the most reproducible and independent prognostic value. In practice, pathologists examine H&E-stained, gigapixel digital whole-slide images of a tissue specimen to count the mitotic index. Given the enormity of the images, analysis is focused on specific, so-called high-power fields (HPFs) on the periphery of the invasive parts of the tumor. Selection of the HPFs is highly subjective, and tumor heterogeneity affects both the region selection and the quality of the analyzed area. Several efforts have been made to automate tumor proliferation score estimation by counting mitotic figures in certain regions of interest, but the region-selection algorithms are opaque and do not guarantee coverage of the regions most relevant for pathological analysis, making the grading sub-optimal. In this work, we address this problem by visualizing a distance-weighted mitotic distribution over the entire invasive tumor region. Our approach provides a holistic view of mitotic activity and localizes actively proliferating regions in the tumor with tissue-architecture context, enabling pathologists to select HPFs more objectively. We propose a deep-learning-based framework to generate the mitotic activity heat-maps. Within the framework, we also develop several significant tools for digital pathology: a semi-supervised tumor-region delineation tool, a fast nuclei segmentation and detection tool, and a mitotic figure localization tool.
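A distance-weighted mitotic heat-map can be sketched as a kernel density over detected mitosis locations: each detection contributes a Gaussian bump, so nearby mitoses reinforce one another into proliferation hot spots. This is a minimal illustration; the kernel choice, the `sigma` bandwidth (in pixels), and the function names are assumptions, not the paper's exact weighting.

```python
import numpy as np

def mitotic_heatmap(detections, shape, sigma=20.0):
    """Sum a Gaussian kernel at each detected mitotic figure (y, x).
    Clustered detections add up, producing high-activity regions."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    heat = np.zeros(shape, dtype=float)
    for (y, x) in detections:
        d2 = (ys - y) ** 2 + (xs - x) ** 2
        heat += np.exp(-d2 / (2.0 * sigma ** 2))
    return heat

# Two nearby detections form a hot spot; one isolated detection does not.
heat = mitotic_heatmap([(30, 30), (35, 40), (90, 90)], shape=(128, 128))
peak = np.unravel_index(heat.argmax(), heat.shape)  # lands near the close pair
```

Overlaying such a map on the delineated invasive tumor region would let a pathologist pick HPFs from the highest-activity areas rather than by ad-hoc visual scanning.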
Rapid digitization of whole-slide images (WSIs) with slide scanners, together with advances in deep learning, has enabled computerized image-analysis algorithms for automated diagnosis, prognosis, and prediction of various cancers in digital pathology. These analyses can be enhanced and expedited by confining them to the relevant tumor regions of the large, multi-resolution WSIs. Detecting the tumor region of interest (TRoI) on WSIs enables automatic measurement of tumor size and computation of the distance to the resection margin. It also eases the identification of high-power fields (HPFs), which are essential for grading tumor proliferation scores. In practice, pathologists select these regions by visual inspection of WSIs, a cumbersome, time-consuming process subject to inter- and intra-pathologist variability. State-of-the-art supervised deep learning methods perform well on the TRoI detection task, but they require accurate TRoI and non-TRoI annotations for training; acquiring such annotations is tedious and incurs observational variability. In this work, we propose a positive-and-unlabeled learning approach that uses a few examples of HPF regions (positive annotations) to localize invasive TRoIs on breast cancer WSIs. We use unsupervised deep autoencoders with Gaussian Mixture Model-based clustering to identify the TRoI in a patch-wise manner. The algorithm is developed on 90 HPF-annotated WSIs and validated on 30 fully annotated WSIs. It yielded a Dice coefficient of 75.21%, a true positive rate of 78.62%, and a true negative rate of 97.48% in pixel-by-pixel evaluation against the pathologists' annotations. The strong correspondence between the results of the proposed algorithm and a state-of-the-art supervised ConvNet indicates the efficacy of the proposed algorithm.
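The positive-and-unlabeled labeling step can be illustrated with a simplified stand-in: cluster all patch embeddings, then mark as tumor whichever cluster the known HPF-positive patches fall into. Plain two-cluster k-means on synthetic embeddings replaces the paper's deep autoencoder + Gaussian Mixture Model pipeline here, so every name and parameter in this sketch is an assumption.

```python
import numpy as np

def pu_tumor_patches(embeddings, positive_idx, iters=50, seed=0):
    """PU-learning sketch: two-cluster k-means over patch embeddings
    (standing in for autoencoder features + GMM clustering), then label
    as tumor the cluster where the known HPF positives concentrate."""
    rng = np.random.default_rng(seed)
    i0 = rng.integers(len(embeddings))
    i1 = np.linalg.norm(embeddings - embeddings[i0], axis=1).argmax()
    centers = embeddings[[i0, i1]].copy()       # farthest-point init
    for _ in range(iters):
        d = np.linalg.norm(embeddings[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(2):
            if (labels == c).any():
                centers[c] = embeddings[labels == c].mean(axis=0)
    pos_counts = np.bincount(labels[positive_idx], minlength=2)
    return labels == pos_counts.argmax()        # boolean tumor mask per patch

# Synthetic patch embeddings: 50 "tumor" patches, 50 "normal" patches
rng = np.random.default_rng(3)
emb = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(6, 1, (50, 16))])
mask = pu_tumor_patches(emb, positive_idx=np.arange(5))  # first 5 are known HPFs
```

Stitching the per-patch mask back onto the slide grid would yield the TRoI segmentation that the reported Dice and TPR/TNR figures evaluate.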