Automatic nuclear instance segmentation is a crucial task in computational pathology, as this information is required for extracting cell-based features in downstream analysis. However, instance segmentation is challenging because histology images show large variations and irregularities in nuclei appearance and arrangement. Various deep learning-based methods have tried to tackle these challenges; however, most of them fail to segment nuclei instances accurately in crowded scenes, or they are not fast enough for practical use. In this paper, we propose a two-stage neural network for nuclear instance segmentation that leverages the power of an interactive model, NuClick, for accurate instance segmentation by replacing the user input with a nuclei detection module, YOLOv5. We optimized the proposed method to be lightweight and fast, and show that it achieves promising results when tested on the largest publicly available nuclei dataset.
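The abstract only outlines the two-stage design, so the sketch below illustrates one plausible way the detection and segmentation stages could be chained: a YOLOv5 detector proposes nuclei centroids, and those centroids replace the manual clicks that NuClick normally expects. The function names, the `nuclick_model` interface (image patch plus a single-point guidance map), and the patch size are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import torch

def detect_nuclei(image_rgb, detector, conf_thresh=0.25):
    """Run a YOLOv5 hub model and return centroid (x, y) coordinates of detections."""
    results = detector(image_rgb)                   # ultralytics/yolov5 hub model
    boxes = results.xyxy[0].cpu().numpy()           # columns: x1, y1, x2, y2, conf, cls
    boxes = boxes[boxes[:, 4] >= conf_thresh]
    return np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                     (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)

def segment_instances(image_rgb, centroids, nuclick_model, patch_size=128):
    """Crop a patch around each detected centroid, build a point-guidance map in place
    of a user click, and run a NuClick-style network to get one mask per nucleus."""
    h, w, _ = image_rgb.shape
    half = patch_size // 2
    masks = []
    for cx, cy in centroids.astype(int):
        x0, y0 = max(cx - half, 0), max(cy - half, 0)
        x1, y1 = min(x0 + patch_size, w), min(y0 + patch_size, h)
        patch = image_rgb[y0:y1, x0:x1].astype(np.float32) / 255.0
        guide = np.zeros(patch.shape[:2], dtype=np.float32)
        guide[cy - y0, cx - x0] = 1.0               # detection replaces the user click
        inp = np.concatenate([patch, guide[..., None]], axis=-1)
        inp = torch.from_numpy(inp).permute(2, 0, 1).unsqueeze(0)
        with torch.no_grad():
            mask = torch.sigmoid(nuclick_model(inp))[0, 0].numpy() > 0.5
        masks.append(((x0, y0), mask))              # patch origin plus binary mask
    return masks
```

In such a pipeline, the detector only needs to provide a rough point inside each nucleus; the segmentation stage refines the boundary, which is what makes the interactive model usable without a human in the loop.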
There are two main types of lung cancer: small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC), which are grouped according to similarity in behaviour and response to treatment. The main types of NSCLC are lung adenocarcinoma (LUAD), which accounts for about 40% of all lung cancers, and lung squamous cell carcinoma (LUSC), which accounts for about 25-30% of all lung cancers. Due to their differences, automated classification of these two main subtypes of NSCLC is a critical step in developing a computer-aided diagnostic system. We present an automated method for NSCLC classification that consists of two stages. First, we implement a deep learning framework to classify input patches as LUAD, LUSC or non-diagnostic (ND). Next, we extract a collection of statistical and morphological measurements from the labeled whole-slide image (WSI) and use a random forest regression model to classify each WSI as lung adenocarcinoma or lung squamous cell carcinoma. This task is part of the Computational Precision Medicine challenge at the MICCAI 2017 conference, where we achieved the highest classification accuracy, with a score of 0.81.
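To make the second stage concrete, here is a minimal sketch of how patch-level predictions could be aggregated into slide-level features and fed to a random forest regressor, as the abstract describes. The specific statistics, the ND filtering rule, and the 0.5 decision threshold are illustrative assumptions; the paper's actual feature set and training setup are not given here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def slide_features(patch_probs):
    """patch_probs: (n_patches, 3) softmax outputs ordered [LUAD, LUSC, ND]."""
    diagnostic = patch_probs[patch_probs[:, 2] < 0.5]      # drop ND-dominated patches (assumed rule)
    if len(diagnostic) == 0:
        diagnostic = patch_probs
    luad, lusc = diagnostic[:, 0], diagnostic[:, 1]
    return np.array([
        luad.mean(), lusc.mean(),            # mean class probabilities
        luad.std(), lusc.std(),              # spread of patch predictions
        (luad > lusc).mean(),                # fraction of LUAD-leaning patches
        len(diagnostic) / len(patch_probs),  # diagnostic tissue fraction
    ])

def train_slide_classifier(per_slide_patch_probs, slide_labels):
    """slide_labels: 1.0 for LUAD, 0.0 for LUSC; the regressor outputs a continuous score."""
    X = np.stack([slide_features(p) for p in per_slide_patch_probs])
    y = np.asarray(slide_labels, dtype=float)
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X, y)
    return rf

def predict_slide(rf, patch_probs, threshold=0.5):
    score = rf.predict(slide_features(patch_probs)[None, :])[0]
    return "LUAD" if score >= threshold else "LUSC"
```

Using a regressor rather than a hard classifier, as the abstract states, yields a continuous slide-level score that can be thresholded (or calibrated) for the final LUAD versus LUSC decision.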