Primary Angle Closure Disease (PACD) is a leading cause of permanent vision loss worldwide, and early treatment is crucial in preventing vision loss. Anterior Segment Optical Coherence Tomography (AS-OCT) is an imaging modality that produces images of anterior structures such as the Anterior Chamber Angle (ACA) and the scleral spur. However, adoption of AS-OCT has been gradual because AS-OCT analysis is neither standardized nor efficient: medical professionals typically must annotate each image by hand using proprietary software and apply expert knowledge to diagnose PACD based on the annotated key features. Using an imaging-informatics-based approach on a dataset of almost 1200 images, we have developed a DICOM-compatible system to streamline and standardize AS-OCT analysis, utilizing a HIPAA-compliant database requiring a secure login to protect patient privacy. Previously, we developed a streamlined approach for annotating key features in AS-OCT images, which will be used to validate the results produced by SimpleMind, an open-source software framework supporting deep neural networks with machine learning and automatic parameter tuning. SimpleMind is integrated into the system to increase the efficiency of analyzing AS-OCT images and eliminate the need to annotate images for clinical diagnosis. The goal is to develop a comprehensive and robust hybrid system combining traditional and deep learning image processing methods to detect the scleral spur and estimate a measure of the anterior chamber angle's degree of openness from AS-OCT images. This paper presents a hybrid method of determining the ACA boundary region to produce an angle measurement that can help indicate PACD.
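For illustration only: the abstract does not specify how the openness measure is computed, but one simple way to turn a detected scleral spur location and ACA boundary points into an angle is to measure the angle subtended at the spur. The function, point names, and coordinates below are hypothetical, not the paper's method.

```python
import numpy as np

def aca_opening_angle(scleral_spur, corneal_boundary_pt, iris_boundary_pt):
    """Angle (degrees) at the scleral spur between one point on the corneal side
    and one point on the iris side of the detected ACA boundary region.
    Inputs are (x, y) pixel coordinates; this is an illustrative measure only."""
    spur = np.asarray(scleral_spur, dtype=float)
    v1 = np.asarray(corneal_boundary_pt, dtype=float) - spur
    v2 = np.asarray(iris_boundary_pt, dtype=float) - spur
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Hypothetical coordinates: a narrower angle suggests a more closed ACA.
print(round(aca_opening_angle((120, 300), (220, 260), (215, 335)), 1))
```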
Primary angle closure disease (PACD) is a leading cause of permanent vision loss worldwide, so early treatment of patients suffering from symptoms of PACD is crucial to prevent vision loss. Gonioscopy is the current clinical standard for diagnosing PACD; however, it is a qualitative, subjective assessment, so a quantitative method for diagnosing PACD is needed. Anterior Segment Optical Coherence Tomography (AS-OCT) is an imaging modality that produces images of anterior structures such as the anterior chamber angle. Adoption of AS-OCT has been slow because AS-OCT analysis is neither standardized nor efficient. Currently, users must annotate each image by hand using proprietary software and apply expert knowledge to diagnose PACD based on the annotated key features. Using an imaging-informatics-based approach on a dataset of over 900 images, we have developed a system to streamline and standardize AS-OCT analysis. The system will be DICOM compatible to promote standardization of AS-OCT images, and it will be attached to a HIPAA-compliant database and require a secure login to protect patient privacy. We have developed a streamlined approach for annotating key features in AS-OCT images, which will be used to validate the results produced by SimpleMind, an open-source software framework supporting deep neural networks with machine learning and automatic parameter tuning. SimpleMind is integrated into the system to increase the efficiency of analyzing AS-OCT images and eliminate the need to annotate images for clinical diagnosis.
This study investigates the effect of radiation dose reduction of a renal perfusion CT protocol on quantitative imaging features for patients of different sizes. Our findings indicate that the impact of dose reduction is significantly different between patients of different sizes for standard deviation, entropy, and GLCM joint average at all dose levels evaluated, and for mean at the lowest dose level evaluated (p < .001). These results suggest that a size-based scanning protocol may be needed to provide quantitative results that are robust with respect to patient size.
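For illustration only, the sketch below (using scikit-image) shows how first-order statistics and the GLCM joint average named above could be computed from a renal perfusion ROI; the quantization and offset settings are arbitrary assumptions rather than the study's protocol.

```python
import numpy as np
from skimage.feature import graycomatrix

def roi_features(roi, levels=32):
    """First-order mean/standard deviation and GLCM joint average of a 2D ROI.
    Quantization to `levels` gray levels and a single (distance=1, angle=0)
    offset are illustrative choices, not the scanning protocol of the study."""
    edges = np.linspace(roi.min(), roi.max(), levels + 1)[1:-1]
    q = np.digitize(roi, edges).astype(np.uint8)          # values in 0..levels-1
    glcm = graycomatrix(q, distances=[1], angles=[0.0], levels=levels,
                        symmetric=True, normed=True)[:, :, 0, 0]
    i = np.arange(levels).reshape(-1, 1)                  # row gray-level index
    joint_average = float(np.sum(i * glcm))               # sum_i sum_j i * p(i, j)
    return {"mean": float(roi.mean()),
            "std": float(roi.std()),
            "glcm_joint_average": joint_average}
```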
Deep convolutional neural networks (CNNs) are widely used in medical image segmentation, but they tend to ignore smaller foreground classes during training, degrading segmentation accuracy; this is known as the class imbalance problem. Central venous catheter (CVC) segmentation suffers from this problem, leading to low accuracy. The purpose of this study is to address the class imbalance problem in CNN training for segmenting right internal jugular lines (RIJLs), the most common type of CVC, in chest X-ray (CXR) images. We applied an inverse class frequency weight to the standard Dice loss to formulate a class frequency weighted Dice loss (CFDL) function for training the CNNs. A U-Net-based segmentation model was constructed with multichannel pre-processing, including normalization, denoising, and histogram equalization, and with post-processing that thresholds segmentation candidates and interpolates discontinuous line segmentations. The segmentation model was trained on CXR images with the Dice loss and with the CFDL, respectively. A separate test set of images was used to evaluate the CNN outputs with the Dice Similarity Coefficient (DSC). The CFDL-trained CNN generated segmentation results with a mean DSC of 0.581 on the test set, a statistically significant difference (p=0.001) from the Dice loss-trained CNN outputs, which had a mean DSC of 0.562. The inverse class frequency weighted Dice loss function improved U-Net-based RIJL segmentation to state-of-the-art performance.
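The abstract defines the CFDL as the standard Dice loss with an inverse class frequency weight; below is a minimal PyTorch sketch of such a loss, assuming one-hot targets and batch-wise frequency estimation. The normalization of the weights is an illustrative choice, and the paper's exact formulation may differ.

```python
import torch

def class_frequency_weighted_dice_loss(probs, targets, eps=1e-6):
    """Dice loss with per-class weights set to the inverse class frequency,
    so a sparse class such as a thin catheter line is not dominated by background.
    probs and targets have shape (N, C, H, W); targets are one-hot encoded."""
    class_freq = targets.sum(dim=(0, 2, 3)) + eps          # pixels per class
    weights = 1.0 / class_freq
    weights = weights / weights.sum()                       # normalize (assumption)

    intersection = (probs * targets).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + targets.sum(dim=(0, 2, 3))
    dice_per_class = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - (weights * dice_per_class).sum()
```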
Purpose: Convolutional neural networks (CNNs) are frequently used for organ segmentation in medical imaging. Many CNNs, however, struggle for acceptance into clinical practice because of mis-segmentations that are obvious to radiologists or other experts. We propose using a Cognitive AI framework that applies anatomical knowledge and machine reasoning to check and improve CNN segmentations and avoid obvious mis-segmentations. Methods: We used the open-source SimpleMind framework, which allows post-processing and machine reasoning to be applied to CNN segmentation results. Within this framework, a 3D U-Net CNN was trained on 212 contrast-enhanced kidney CT scans. From an anatomical knowledge base, SimpleMind derived the following Cognitive AI post-processing steps: (1) identification of the abdomen, (2) segmentation of the spine by searching for bone in the posterior region of the abdomen, (3) separation of the CNN output into right and left kidneys using the spine as reference, (4) refinement of individual kidney segmentations using thresholds of volume, HU, and voxel count, designed to eliminate disconnected false positives, and (5) morphological opening to reduce connected false positives bordering the kidney segmentations. On a test set of 53 scans, using reference annotations, the Dice Coefficient (DC), Hausdorff Distance (HD), and Average Symmetric Surface Distance (ASSD) were computed and compared for the CNN and post-processed outputs. Results: Post-processing with Cognitive AI reduced the kidney segmentation HD from 46.4±55.8 mm to 30.7±17.6 mm, with the decrease in variance being statistically significant (p = 0.0296). DC and ASSD metrics were also improved. Conclusions: This initial work demonstrates that incorporating anatomical knowledge using Cognitive AI techniques can improve the segmentation accuracy of CNNs. The CNN provides very good overall kidney segmentation performance, and in cases where segmentation errors occur, the post-processing was able to improve performance.
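As a simplified, hedged illustration of refinement steps (4) and (5), the scipy-based sketch below removes disconnected components under a voxel-count threshold and then applies morphological opening; the threshold value and iteration count are placeholders, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def refine_kidney_mask(mask, min_voxels=5000, opening_iterations=1):
    """Drop disconnected false positives below a voxel-count threshold, then
    apply morphological opening to trim connected false positives at the kidney
    border. `mask` is a binary 3D array; thresholds are illustrative only."""
    labeled, n_components = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, index=np.arange(1, n_components + 1))
    keep_labels = np.flatnonzero(sizes >= min_voxels) + 1
    kept = np.isin(labeled, keep_labels)
    return ndimage.binary_opening(kept, iterations=opening_iterations)
```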
Kidneys are most easily segmented by convolutional neural networks (CNNs) on contrast-enhanced CT (CECT) images, but segmentation accuracy may be reduced when only non-contrast CT (NCCT) images are available. The purpose of this work was to investigate the improvement in segmentation accuracy when implementing a generative adversarial network (GAN) to create virtual contrast-enhanced (vCECT) images from non-contrast inputs. A 2D CycleGAN model, incorporating an additional idempotent loss function to restrict the GAN from making unnecessary modifications to data already in the translated domain, was trained to generate virtual contrast-enhanced images from 286 paired non-contrast and contrast-enhanced inputs. A 3D CNN trained on contrast-enhanced images was applied to segment the kidneys in a test set of 20 paired non-contrast and contrast-enhanced images. The non-contrast images were converted to virtual contrast-enhanced images, and the kidneys in both image conditions were then segmented by the CNN. Segmentation results were compared to analyst annotations on the non-contrast images, both visually and by Dice Coefficient (DC). Segmentations on virtual contrast-enhanced images were more complete, with fewer extraneous detections, than those on non-contrast images in 16/20 cases. Mean (±SD) DC was 0.88 (±0.80), 0.90 (±0.03), and 0.95 (±0.05) for non-contrast, virtual contrast-enhanced, and real contrast-enhanced images, respectively. Virtual contrast enhancement visually improved segmentation quality; poorly performing cases improved, reducing the overall variation in DC, and the minimum DC increased from 0.65 to 0.85. This work provides preliminary results demonstrating the potential effectiveness of using a GAN for virtual contrast enhancement to improve CNN-based kidney segmentation on non-contrast images.
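A hedged PyTorch sketch of one way such an idempotent loss can be written, in the spirit of CycleGAN's identity loss: the NCCT-to-CECT generator is penalized for altering an image that is already contrast enhanced. The L1 distance and the weight are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def idempotent_loss(generator_nc_to_ce, real_ce_batch, weight=1.0):
    """Penalize the NCCT->CECT generator for modifying images that are already
    in the contrast-enhanced domain: G(ce) should be (approximately) ce.
    The L1 distance and the weighting factor are illustrative choices."""
    regenerated = generator_nc_to_ce(real_ce_batch)
    return weight * F.l1_loss(regenerated, real_ce_batch)
```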
A novel physics-based data augmentation (PBDA) approach is introduced to provide a representative way of introducing variance during training of a deep-learning model. Compared to traditional geometric-based data augmentation (GBDA), we hypothesize that PBDA provides more realistic variation, representative of potential imaging conditions beyond the initial training data, and thereby trains a more robust model (particularly in the scope of medical imaging). PBDA is tested in the context of false-positive reduction for nodule detection in low-dose lung CT and is shown to exhibit superior performance and robustness across a wide range of imaging conditions.
Translation of radiomics into clinical practice requires confidence in its interpretations, which may be obtained by understanding and overcoming the limitations of current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined several factors that are potential sources of inconsistency in characterizing lung nodules: (1) different choices of parameters and algorithms in feature calculation, (2) two CT image dose levels, and (3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of varying these factors on entropy textural features of lung nodules. Nineteen lung nodules from our lung cancer screening program were identified on CT images by a CAD tool, which also provided contours. Radiomics features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features, in addition to 2 intensity-based features. A robustness index was calculated across different image acquisition parameters to illustrate the reproducibility of features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features for WFBP. Iterative reconstruction improved robustness in fewer cases and caused more variation in entropy feature values and their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. These results indicate the need for harmonization of feature calculations and identification of optimal parameters and algorithms in a radiomics study.
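To make the two feature families concrete, the scikit-image sketch below computes one GLCM-based and one histogram-based entropy variant; the quantization level, offset distance, and angle are exactly the kinds of parameter choices whose variation the study examines, and the specific values used here are arbitrary assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_entropy(roi_uint8, levels=64, distance=1, angle=0.0):
    """Entropy of the normalized gray-level co-occurrence matrix for one
    (distance, angle) offset; one of many possible GLCM entropy variants."""
    q = (roi_uint8.astype(np.uint16) * levels // 256).astype(np.uint8)
    glcm = graycomatrix(q, distances=[distance], angles=[angle],
                        levels=levels, symmetric=True, normed=True)[:, :, 0, 0]
    p = glcm[glcm > 0]
    return float(-np.sum(p * np.log2(p)))

def histogram_entropy(roi_uint8, bins=64):
    """Shannon entropy of the ROI intensity histogram."""
    counts, _ = np.histogram(roi_uint8, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))
```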
Quantitative imaging in lung cancer CT seeks to characterize nodules through quantitative features, usually from a region of interest delineating the nodule. The segmentation, however, can vary depending on segmentation approach and image quality, which can affect the extracted feature values. In this study, we utilize a fully-automated nodule segmentation method – to avoid reader-influenced inconsistencies – to explore the effects of varied dose levels and reconstruction parameters on segmentation.
Raw projection CT images from a low-dose screening patient cohort (N=59) were reconstructed at multiple dose levels (100%, 50%, 25%, 10%), two slice thicknesses (1.0mm, 0.6mm), and a medium kernel. Fully-automated nodule detection and segmentation was then applied, from which 12 nodules were selected. Dice similarity coefficient (DSC) was used to assess the similarity of the segmentation ROIs of the same nodule across different reconstruction and dose conditions.
Nodules at 1.0mm slice thickness and dose levels of 25% and 50% resulted in DSC values greater than 0.85 when compared to 100% dose, with lower dose leading to a lower average and wider spread of DSC values. At 0.6mm, the increased bias and wider spread of DSC values from lowering dose were more pronounced. The effects of dose reduction on DSC for CAD-segmented nodules were similar in magnitude to the effect of reducing the slice thickness from 1.0mm to 0.6mm. In conclusion, variation of dose and slice thickness can result in very different segmentations because of noise and image quality. However, there is some stability in segmentation overlap: even at 1.0mm slice thickness, an image reconstructed at 25% of the low-dose scan's dose still yields segmentations similar to those from the full-dose scan.
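For reference, a minimal sketch of the Dice similarity coefficient used above to compare segmentation ROIs of the same nodule across conditions; the mask inputs are placeholders for, e.g., the 25%-dose and 100%-dose segmentations.

```python
import numpy as np

def dice_similarity_coefficient(mask_a, mask_b):
    """DSC = 2 * |A intersect B| / (|A| + |B|) for two binary segmentation
    masks of the same nodule, e.g. from reduced-dose and full-dose reconstructions."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total
```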