KEYWORDS: Breast density, Image classification, Education and training, Mammography, Deep learning, Surgery, Medical imaging, Machine learning, Statistical modeling, Breast
Classification of Breast Imaging Reporting and Data System (BI-RADS) breast density categories generally reflects the amount of dense/fibroglandular tissue in the breast. Studies have consistently shown that breasts with higher density carry a higher risk of developing breast cancer than breasts with lower density. In this paper, we propose a novel end-to-end method, namely Medical Knowledge-guided Deep Learning (MKDL), for mammographic breast density classification. The principle behind MKDL lies in the fact that many breast density classification tasks are partly or largely based on certain pre-known image features, such as image contrast and brightness. These pre-known features can be represented computationally and then leveraged as prior knowledge to enable more effective model learning and thus boost classification performance. We designed specific knowledge-based transformations for breast density classification and showed that our model outperformed several state-of-the-art models.
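As a rough illustration of how pre-known image features such as contrast and brightness might be represented computationally, the sketch below computes a small feature vector from a mammogram-like array. These functions and the specific features are hypothetical stand-ins; the abstract does not specify the actual MKDL knowledge-based transformations.

```python
import numpy as np

def knowledge_features(image):
    """Compute simple pre-known image features (brightness, contrast).

    Hypothetical example: the real knowledge-based transformations
    used by MKDL are not detailed in the abstract.
    """
    img = np.asarray(image, dtype=np.float64)
    brightness = img.mean()   # overall intensity level
    contrast = img.std()      # spread of intensities
    # Fraction of bright pixels (a crude proxy for dense tissue area).
    dense_fraction = (img > brightness + contrast).mean()
    return np.array([brightness, contrast, dense_fraction])

# In a pipeline like the one described, such features could be
# concatenated with learned CNN embeddings before the classifier.
mammogram = np.random.default_rng(0).random((256, 256))
feats = knowledge_features(mammogram)
```

The point of such a vector is that it injects domain knowledge the network would otherwise have to rediscover from limited data.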
Classification of staining patterns in Human Epithelial-2 (HEp-2) cell images is widely used to identify autoimmune diseases via the anti-nuclear antibody (ANA) test under the Indirect Immunofluorescence (IIF) protocol. Because manual testing is time-consuming, subjective, and labor-intensive, image-based Computer-Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, most recently proposed methods rely on manually extracted features and achieve low accuracy. Moreover, the available benchmark datasets are small, which makes them poorly suited to deep learning methods; this directly limits classification accuracy even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets based on very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases: image preprocessing, feature extraction, and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results on two benchmark datasets demonstrate that the proposed method achieves superior accuracy compared with existing methods.
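The three-phase structure (preprocessing, feature extraction, classification) can be sketched as a minimal pipeline. This is an illustration only: the histogram features and nearest-centroid classifier below are simple substitutes for the paper's improved VGGNet, whose details are not given in the abstract.

```python
import numpy as np

def preprocess(image, size=64):
    """Phase 1: crop to a fixed size and standardize intensities
    (a toy stand-in for the paper's preprocessing steps)."""
    img = np.asarray(image, dtype=np.float64)[:size, :size]
    return (img - img.mean()) / (img.std() + 1e-8)

def extract_features(image, bins=16):
    """Phase 2: intensity-histogram features (a simple substitute
    for the VGGNet feature extractor)."""
    hist, _ = np.histogram(image, bins=bins, range=(-3, 3), density=True)
    return hist

def classify(features, centroids):
    """Phase 3: nearest-centroid classification over class prototypes."""
    dists = np.linalg.norm(centroids - features, axis=1)
    return int(np.argmin(dists))

# Usage with two synthetic "cell images" acting as class prototypes.
rng = np.random.default_rng(1)
f0 = extract_features(preprocess(rng.random((80, 80))))
f1 = extract_features(preprocess(rng.normal(0.5, 0.3, (80, 80))))
centroids = np.stack([f0, f1])
pred = classify(f0, centroids)
```

Swapping the histogram extractor for a CNN backbone is the step where the paper's improved VGGNet would slot in.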