Dual-energy (DE) chest radiography provides the capability of selectively imaging two clinically relevant materials, namely soft tissue and osseous structures, to better characterize a wide variety of thoracic pathology and potentially improve diagnosis in posteroanterior (PA) chest radiographs. However, DE imaging requires specialized hardware and a higher radiation dose than conventional radiography, and motion artifacts sometimes occur due to involuntary patient motion. In this work, we learn the mapping between conventional radiographs and bone-suppressed radiographs. Specifically, we propose to utilize two variants of generative adversarial networks (GANs) for image-to-image translation between conventional and bone-suppressed radiographs obtained by the DE imaging technique. We compare the effectiveness of training with patient-wise paired and unpaired radiographs. Experiments show that both training strategies yield "radio-realistic" radiographs with suppressed bony structures and few motion artifacts on a hold-out test set. While training with paired images yields slightly better performance than training with unpaired images as measured by two objective image quality metrics, namely the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR), training with unpaired images demonstrates better generalization to unseen anteroposterior (AP) radiographs than paired training.
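The two image-quality metrics named above are standard and can be sketched in a few lines of NumPy. The function names below are ours, and the SSIM here uses global image statistics rather than the usual sliding Gaussian window, so it is an illustrative simplification, not the evaluation code of the paper:

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two images of equal shape."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(x, y, max_val=255.0):
    """Simplified SSIM using global statistics (no sliding window)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Both metrics compare a generated bone-suppressed radiograph against its DE-derived ground truth; PSNR grows without bound as the mean squared error shrinks, while SSIM is bounded at 1 for identical images.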
Lesion segmentation on computed tomography (CT) scans is an important step for precisely monitoring changes in lesion/tumor growth. This task, however, is very challenging since manual segmentation is prohibitively time-consuming, expensive, and requires professional knowledge. Current practices rely on an imprecise substitute called response evaluation criteria in solid tumors (RECIST). Although these markers lack detailed information about the lesion regions, they are commonly found in hospitals' picture archiving and communication systems (PACS). Thus, these markers have the potential to serve as a powerful source of weak supervision for 2D lesion segmentation. To approach this problem, this paper proposes a convolutional neural network (CNN) based weakly-supervised lesion segmentation method, which first generates initial lesion masks from the RECIST measurements and then utilizes co-segmentation to leverage lesion similarities and refine the initial masks. In this work, an attention-based co-segmentation model is adopted due to its ability to learn more discriminative features from a pair of images. Experimental results on the NIH DeepLesion dataset demonstrate that the proposed co-segmentation approach significantly improves lesion segmentation performance, e.g., the Dice score increases by about 4.0% (from 85.8% to 89.8%).
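The Dice score reported at the end of this abstract is the standard overlap metric for binary segmentation masks. A minimal NumPy sketch (the function name and the empty-mask convention are ours):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total
```

A predicted mask is compared voxel-wise (here pixel-wise, for the 2D setting) against the reference; the score ranges from 0 (no overlap) to 1 (identical masks).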
In machine learning, one-class classification tries to classify data of a specific category amongst all data, by learning from a training set containing only the data of that unique category. In the field of medical imaging, one-class learning can be developed to model only normality (similar to semi-supervised classification or anomaly detection), since samples of all possible abnormalities are not always available, as some forms of anomaly are very rare. The one-class learning approach aligns naturally with the way radiologists identify anomalies in medical images: usually they are able to recognize lesions by comparing them with normal images and surroundings. Inspired by the traditional one-class learning approach, we propose an end-to-end deep adversarial one-class learning (DAOL) approach for semi-supervised normal and abnormal chest radiograph (X-ray) classification, trained only on normal X-ray images. The DAOL framework consists of deep convolutional generative adversarial networks (DCGAN) and an encoder at each end of the DCGAN. The DAOL generator is able to reconstruct normal X-ray images but cannot adequately reconstruct the abnormalities in abnormal X-rays in the testing phase, since only normal X-rays were used for training the network, and abnormal images with various abnormalities were unseen during training. We propose three adversarial learning objectives which optimize the training of DAOL. The proposed network achieves an encouraging result (AUC 0.805) in classifying normal and abnormal chest X-rays on the challenging NIH Chest X-ray dataset in a semi-supervised setting.
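The AUC figure quoted here evaluates how well an anomaly score separates normal from abnormal images. In reconstruction-based one-class approaches like this one, a natural score is the per-image reconstruction error; the paper's exact objectives are not reproduced here, so the sketch below only illustrates, under that assumption, how such scores would be turned into an AUC using the rank (Mann-Whitney U) formulation:

```python
import numpy as np

def anomaly_score(image, reconstruction):
    """Mean squared reconstruction error; high for poorly reconstructed
    (i.e., likely abnormal) images. An illustrative choice, not the
    paper's exact scoring function."""
    return np.mean((np.asarray(image, dtype=np.float64)
                    - np.asarray(reconstruction, dtype=np.float64)) ** 2)

def auc_from_scores(scores, labels):
    """AUC = fraction of (abnormal, normal) pairs the score ranks
    correctly, with ties counted as 0.5."""
    scores = np.asarray(scores, dtype=np.float64)
    labels = np.asarray(labels)
    pos = scores[labels == 1]  # abnormal
    neg = scores[labels == 0]  # normal
    diff = pos[:, None] - neg[None, :]
    correct = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return correct / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level ranking, 1.0 to perfect separation of abnormal from normal scans.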
As an important task in medical imaging analysis, automatic lymph node segmentation from computed tomography (CT) scans has been well studied in recent years, but it is still very challenging due to the lack of adequately labeled training data. Manually annotating a large number of lymph node segmentations is expensive and time-consuming. For this reason, data augmentation can be considered as a surrogate for enriching the data. However, most traditional augmentation methods use a combination of affine transformations to manipulate the data, which cannot increase the diversity of the data's contextual information. To mitigate this problem, this paper proposes a data augmentation approach based on a generative adversarial network (GAN) to synthesize a large number of CT-realistic images from customized lymph node masks. In this work, the pix2pix GAN model is used due to its strength in image generation; it can learn the structural and contextual information of lymph nodes and their surrounding tissues from CT scans. With these additional augmented images, a robust U-Net model is learned for lymph node segmentation. Experimental results on the NIH lymph node dataset demonstrate that the proposed data augmentation approach can produce realistic CT images and that lymph node segmentation performance is improved effectively using the additional augmented data, e.g., the Dice score increases by about 2.2% (from 80.3% to 82.5%).
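The "traditional augmentation" baseline this abstract contrasts against can be sketched concretely. The snippet below (our own illustrative code, not the paper's) applies geometry-only transforms, flips and 90-degree rotations, identically to a CT slice and its lymph node mask; such transforms change pose but reuse the same anatomy, which is exactly the limited contextual diversity the GAN-based synthesis aims to overcome:

```python
import numpy as np

def geometric_variants(img, mask):
    """Geometry-only augmentation: 4 rotations x (with/without flip) = 8
    variants, with the same transform applied to image and mask so the
    segmentation labels stay aligned."""
    variants = []
    for k in range(4):  # 0/90/180/270-degree rotations
        ri, rm = np.rot90(img, k), np.rot90(mask, k)
        variants.append((ri, rm))
        variants.append((np.fliplr(ri), np.fliplr(rm)))  # add horizontal flip
    return variants
```

Every variant shows the same lymph node in the same surroundings; by contrast, the pix2pix approach synthesizes new CT appearance around customized masks, adding contextual variation these transforms cannot.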