Histopathology whole-slide images (WSIs) capture detailed structural and morphological features of tumor tissue, offering rich histological and molecular information to support clinical practice. With the development of artificial intelligence, deep learning (DL) methods have emerged to automate the analysis of histopathology WSIs, alleviating the need for tedious, time-consuming, and error-prone inspection by clinicians. Nevertheless, employing DL models for histopathology WSI analysis remains challenging due to the intrinsic complexity of the histological characteristics of tumor tissue, high image resolution, and large image size. In this study, we propose a transformer-based classifier with feature aggregation for cancer subtype classification from histopathology WSIs that addresses these challenges. Our method offers three advantages for improving classification performance. First, an aggregate transformer decoder learns both global and local features from WSIs. Second, the transformer architecture enables the decoder to learn spatial correlations among different regions of a WSI. Third, the self-attention mechanism of the transformer facilitates the generation of saliency maps that highlight regions of interest in WSIs. We evaluated our model on three cancer subtype classification tasks and demonstrated its effectiveness.
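The abstract does not specify how the saliency maps are derived from self-attention; a common generic technique for visualizing transformer attention is attention rollout (Abnar and Zuidema, 2020), sketched below with NumPy. This is an illustration of the general idea, not the authors' implementation; the layer/head shapes and the use of a class token at index 0 are assumptions.

```python
import numpy as np

def attention_saliency(attn_layers):
    """Attention rollout: average each layer's attention over heads, add the
    residual connection, renormalize rows, and multiply across layers to
    trace information flow from the class token to the patch tokens."""
    n = attn_layers[0].shape[-1]
    rollout = np.eye(n)
    for attn in attn_layers:
        a = attn.mean(axis=0)                   # average over heads -> (n, n)
        a = a + np.eye(n)                       # account for residual connection
        a = a / a.sum(axis=-1, keepdims=True)   # renormalize rows
        rollout = a @ rollout
    # saliency of patch tokens as attended to by the class token (index 0)
    return rollout[0, 1:]

# Toy example (assumed shapes): 2 layers, 4 heads, 1 class + 9 patch tokens.
rng = np.random.default_rng(0)
layers = [rng.random((4, 10, 10)) for _ in range(2)]
layers = [a / a.sum(axis=-1, keepdims=True) for a in layers]  # row-stochastic
saliency = attention_saliency(layers)
print(saliency.shape)  # (9,)
```

Reshaping the resulting vector to the patch grid and upsampling it to the slide resolution would yield a heatmap over regions of interest.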
High-resolution histopathological images capture rich characteristics of cancer tissues and cells. Recent studies have shown that digital pathology analysis can aid clinical decision-making by identifying metastases, subtyping and grading tumors, and predicting clinical outcomes. Still, the analysis of digital histology images remains challenging due to imbalanced training data, the intrinsic complexity of the histological characteristics of tumor tissue, and the heavy computational burden of processing extremely high-resolution whole-slide images (WSIs). In this study, we developed a new deep learning-based classification framework that addresses these challenges to support clinical decision-making. The proposed method builds on our recently developed adversarial learning strategy with two major innovations. First, an image pre-processing module processes the high-resolution histology images to reduce the computational burden while preserving informative features, lowering the risk of overfitting when training the network. Second, the recently developed StyleGAN2, with its powerful generative capability, is employed to recognize complex texture patterns and stain information in histology images and to learn classification-relevant representations, further improving the classification and reconstruction performance of our method. Experimental results on three histology image datasets for different classification tasks demonstrated superior classification performance compared to traditional deep learning-based methods, as well as the generality of the proposed method across applications.
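The abstract does not detail the pre-processing module; a standard way to reduce the computational burden of gigapixel WSIs is to tile the slide into patches and discard background-only tiles. The sketch below illustrates that common step with NumPy; the patch size, intensity threshold, and tissue-fraction cutoff are assumed values, not the authors' settings.

```python
import numpy as np

def tile_slide(slide, patch=256, tissue_thresh=0.05):
    """Split a slide array of shape (H, W, 3) into non-overlapping patches
    and keep only those with enough tissue (non-background) pixels."""
    h, w, _ = slide.shape
    kept = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = slide[y:y + patch, x:x + patch]
            # crude background test: near-white pixels count as background
            tissue_frac = (p.mean(axis=-1) < 220).mean()
            if tissue_frac >= tissue_thresh:
                kept.append(((y, x), p))
    return kept

# Toy example: a mostly white 512x512 slide with one dark "tissue" block.
slide = np.full((512, 512, 3), 255, dtype=np.uint8)
slide[0:256, 0:256] = 100
patches = tile_slide(slide)
print(len(patches))  # 1
```

Only the informative tiles are then fed to the network, which both cuts computation and reduces the amount of uninformative background the model can overfit to.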