Subjective interpretation of histology slides forms the basis of cancer diagnosis, prognosis, and therapeutic response prediction. Deep learning models can potentially serve as an efficient, unbiased tool for this task if trained on large amounts of labeled data. However, labeled medical data, such as small regions of interest, are often costly to curate. In this work, we propose a flexible, semi-supervised framework for histopathological classification that first uses Contrastive Predictive Coding (CPC) to learn semantic features in an unsupervised manner and then applies attention-based Multiple Instance Learning (MIL) for classification without requiring patch-level annotations.
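To make the second stage concrete, below is a minimal sketch of attention-based MIL pooling over a bag of patch embeddings, assuming the patch features have already been extracted by a pretrained encoder such as CPC. The class name `AttentionMIL`, the feature dimensions, and the toy bag are illustrative assumptions, not the paper's exact architecture; it only shows how slide-level prediction can be made without patch-level labels.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based MIL pooling over a bag of patch embeddings.

    Assumes patch-level features (e.g., from a pretrained CPC encoder) are
    given; only a slide-level label is needed for supervision.
    """
    def __init__(self, feat_dim=512, attn_dim=128, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):                       # bag: (n_patches, feat_dim)
        scores = self.attention(bag)              # (n_patches, 1) attention logits
        weights = torch.softmax(scores, dim=0)    # normalize over patches
        slide_feat = (weights * bag).sum(dim=0)   # weighted average -> slide feature
        return self.classifier(slide_feat), weights

# Example: classify a slide represented by 1000 hypothetical CPC patch features.
bag = torch.randn(1000, 512)
model = AttentionMIL()
logits, attn_weights = model(bag)
```

The attention weights double as a weak form of interpretability, indicating which patches contributed most to the slide-level prediction.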
Despite holding enormous potential for elucidating the tumor microenvironment and its phenotypic and morphological heterogeneity, whole-slide images are underutilized in the analysis of survival outcomes and biomarker discovery, and very few methods have been developed that integrate transcriptome profiles with histopathology data. In this work, we propose to fuse molecular and histology features using artificial intelligence and train an end-to-end multimodal deep neural network for survival outcome prediction. Our research establishes insight and theory on how to combine multimodal biomedical data, which will be integral to other problems in medicine with heterogeneous data sources.
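As an illustration of the general idea, the sketch below fuses a histology embedding with a transcriptome profile by simple concatenation and trains the joint network with a Cox partial-likelihood loss. The module name, layer sizes, and concatenation fusion are assumptions for demonstration only; the authors' actual fusion strategy and loss may differ.

```python
import torch
import torch.nn as nn

class MultimodalSurvivalNet(nn.Module):
    """Fuses a histology embedding with a molecular profile and predicts a risk score."""
    def __init__(self, histo_dim=512, omic_dim=1000, hidden=128):
        super().__init__()
        self.histo_net = nn.Sequential(nn.Linear(histo_dim, hidden), nn.ReLU())
        self.omic_net = nn.Sequential(nn.Linear(omic_dim, hidden), nn.ReLU())
        self.fusion = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))

    def forward(self, histo, omic):
        h = self.histo_net(histo)                      # histology branch
        g = self.omic_net(omic)                        # transcriptome branch
        return self.fusion(torch.cat([h, g], dim=1))   # (batch, 1) risk scores

def cox_partial_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood for right-censored survival data."""
    order = torch.argsort(time, descending=True)       # risk sets by descending time
    risk, event = risk[order].squeeze(-1), event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)       # log sum of exp(risk) over risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

# Example: one training step on a random mini-batch (synthetic data).
model = MultimodalSurvivalNet()
histo, omic = torch.randn(16, 512), torch.randn(16, 1000)
time, event = torch.rand(16), torch.randint(0, 2, (16,)).float()
loss = cox_partial_likelihood(model(histo, omic), time, event)
loss.backward()
```

Training end to end lets the gradient from the survival loss shape both unimodal branches as well as the fusion layer.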