KEYWORDS: Digital breast tomosynthesis, Computer aided diagnosis and therapy, Reconstruction algorithms, 3D image processing, Computer-aided diagnosis, Detection and tracking algorithms, 3D image reconstruction, 3D image enhancement, Breast, Digital mammography, Deep learning, Convolutional neural networks, Mammography, 3D displays, Image enhancement
In a typical 2D mammography workflow, a computer-aided detection (CAD) algorithm is used as a second reader, producing marks for a radiologist to review. In the case of 3D digital breast tomosynthesis (DBT), displaying CAD detections at multiple reconstruction heights would increase image browsing and interpretation time. We propose an alternative approach in which an algorithm automatically identifies suspicious regions of interest from 3D reconstructed DBT slices and merges the findings with the corresponding 2D synthetic projection image, which is then reviewed. The resulting enhanced synthetic 2D image combines the benefits of a familiar 2D breast view with the superior appearance of suspicious locations from 3D slices. Moreover, clicking on a 2D suspicious location brings up the display of the corresponding 3D region in the DBT volume, allowing navigation between 2D and 3D images. We explored the use of these enhanced synthetic images in a concurrent-read paradigm by conducting a study with 5 readers and 30 breast exams. We observed that the introduction of the enhanced synthetic view reduced radiologists' average interpretation time by 5.4%, increased sensitivity by 6.7%, and increased specificity by 15.6%.
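The merging step described above can be sketched in a few lines of NumPy. This is a minimal illustration only: the ROI format (slice index plus bounding box) and the simple patch-replacement blending are assumptions, since the abstract does not specify the exact merging rule.

```python
import numpy as np

def enhance_synthetic_view(synthetic_2d, dbt_volume, rois):
    """Blend suspicious 3D regions into a 2D synthetic image.

    rois: list of (slice_index, row0, row1, col0, col1) tuples,
    e.g. produced by a 3D CAD detector (hypothetical format).
    """
    enhanced = synthetic_2d.copy()
    for z, r0, r1, c0, c1 in rois:
        # Replace the 2D patch with the patch from the reconstruction
        # height where the suspicious finding was detected, i.e. where
        # it appears sharpest.
        enhanced[r0:r1, c0:c1] = dbt_volume[z, r0:r1, c0:c1]
    return enhanced
```

Keeping the (slice, bounding box) pairs alongside the enhanced image is also what enables the click-through navigation from a 2D location back to its 3D region.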
KEYWORDS: Digital breast tomosynthesis, Reconstruction algorithms, Computer aided diagnosis and therapy, Tissues, Computer-aided diagnosis, Neural networks, Mammography, Digital mammography, Detection and tracking algorithms, Evolutionary algorithms, Deep learning, Convolutional neural networks, Breast, Image segmentation, Medical imaging
Computer-aided detection (CAD) has been used in screening mammography for many years and is likely to be utilized for digital breast tomosynthesis (DBT). Higher detection performance is desirable, as it may affect radiologists' decisions and clinical outcomes. Recently, algorithms based on deep convolutional architectures have been shown to achieve state-of-the-art performance in object classification and detection. Following this approach, we trained a deep convolutional neural network directly on patches sampled from two-dimensional mammography and reconstructed DBT volumes and compared its performance to a conventional CAD algorithm based on the computation and classification of hand-engineered features. Detection performance was evaluated on an independent test set of 344 DBT reconstructions (GE SenoClaire 3D, iterative reconstruction algorithm) containing 328 suspicious and 115 malignant soft-tissue densities, including masses and architectural distortions. Detection sensitivity was measured on a region-of-interest (ROI) basis at a rate of five detection marks per volume. Moving from the conventional to the deep learning approach increased ROI sensitivity from 0.832 ± 0.040 to 0.893 ± 0.033 for suspicious ROIs, and from 0.852 ± 0.065 to 0.930 ± 0.046 for malignant ROIs. These results indicate the high utility of deep feature learning in the analysis of DBT data and the method's high potential for broader medical image analysis tasks.
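Patch-based training as described above might begin with a sampling step like the following sketch. The lesion-centered positive / random negative scheme, and all names and parameters, are illustrative assumptions; the abstract does not detail the actual sampling strategy.

```python
import numpy as np

def sample_patches(volume, lesion_centers, patch_size, n_negatives, rng):
    """Draw 2D training patches from reconstructed DBT slices:
    positives centered on annotated lesions, negatives at random
    in-bounds locations (a simplified, hypothetical scheme)."""
    half = patch_size // 2
    patches, labels = [], []
    for z, y, x in lesion_centers:
        patches.append(volume[z, y - half:y + half, x - half:x + half])
        labels.append(1)
    depth, height, width = volume.shape
    for _ in range(n_negatives):
        z = rng.integers(depth)
        y = rng.integers(half, height - half)
        x = rng.integers(half, width - half)
        patches.append(volume[z, y - half:y + half, x - half:x + half])
        labels.append(0)
    return np.stack(patches), np.array(labels)
```

The resulting (patch, label) pairs would then feed a standard convolutional classifier; in practice negatives are usually drawn from the candidate generator's false detections rather than uniformly at random.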
One widely accepted anatomical division of the prostate is into the central gland (CG) and the peripheral zone (PZ). In some clinical applications, separating the CG and PZ from the whole prostate is useful. For instance, in prostate cancer detection, radiologists want to know in which zone the cancer occurs. Another application is multiparametric MR tissue characterization. In prostate T2 MR images, the high intensity variation between the CG and PZ makes their automated differentiation difficult. Previously, we developed an automated prostate boundary segmentation system, which was tested on large datasets and showed good performance. Using the results of the pre-segmented prostate boundary, in this paper we propose an automated CG segmentation algorithm based on Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces (LOGISMOS). The designed LOGISMOS model incorporates both shape and topology information during deformation. We generated graph costs by training classifiers and used a coarse-to-fine search. The LOGISMOS framework guarantees a solution that is optimal with respect to the cost function and shape constraints. A five-fold cross-validation approach was applied to a training dataset containing 261 images to optimize system performance and to compare with a voxel-classification-based reference approach. After the best parameter settings were found, the system was tested on a dataset containing another 261 images. A mean DSC of 0.81 on the test set indicates that our approach is promising for automated CG segmentation. Running time for the system is about 15 seconds.
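The mean DSC reported here is the standard Dice similarity coefficient between the automated and reference binary masks, which can be computed as:

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice similarity coefficient (DSC) between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    seg = np.asarray(seg).astype(bool)
    ref = np.asarray(ref).astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(seg, ref).sum() / denom
```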
Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in the objective evaluation of multiparametric MR imagery, provide a prostate contour for MR-ultrasound (or CT) image fusion in computer-assisted image-guided biopsy or therapy planning, facilitate reporting, and enable direct prostate volume calculation. Among the challenges in the automated analysis of MR images of the prostate are variations in overall image intensity across scanners, the presence of a nonuniform multiplicative bias field within scans, and differences in acquisition setup. Furthermore, images acquired with an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail.
The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, which was split into two halves for development and testing. In addition, a second dataset of 29 MR exams from Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 ± 0.33 mm and 3.10 ± 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm placed the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for fully automated prostate segmentation.
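The intensity insensitivity of normalized gradient fields comes from scaling each gradient vector to (near-)unit length, which discards intensity magnitude and keeps only edge orientation. A 2D NumPy sketch of the similarity measure follows; the published method operates in 3D with a carefully chosen noise parameter ε, both of which are simplified here.

```python
import numpy as np

def normalized_gradient_field(image, eps=1e-3):
    """Normalized gradient field: each gradient vector scaled to
    (near-)unit length, so only edge orientation survives."""
    gy, gx = np.gradient(image.astype(float))
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    return gx / norm, gy / norm

def ngf_similarity(template, window):
    """Sum of squared inner products between the NGFs of a template
    and an image window: large when edges align, insensitive to
    monotone intensity changes and multiplicative bias."""
    tx, ty = normalized_gradient_field(template)
    wx, wy = normalized_gradient_field(window)
    return np.sum((tx * wx + ty * wy) ** 2)
```

In a localization setting this score would be evaluated over candidate positions (efficiently, via correlation) and the maximum taken as the detected prostate center.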
Manual delineation of the prostate is a challenging task for a clinician due to its complex and irregular shape. Furthermore, the need for precisely targeting the prostate boundary continues to grow. Planning for radiation therapy, MR-ultrasound fusion for image-guided biopsy, multi-parametric MRI tissue characterization, and context-based organ retrieval are examples where accurate prostate delineation can play a critical role in a successful patient outcome. Therefore, a robust automated full prostate segmentation system is desired. In this paper, we present an automated prostate segmentation system for 3D MR images. In this system, the prostate is segmented in two steps: the prostate displacement and size are first detected, and then the boundary is refined by a shape model. The detection approach is based on normalized gradient fields cross-correlation. This approach is fast, robust to intensity variation, and provides good accuracy for initializing a prostate mean shape model. The refinement is based on a graph-search framework that incorporates both shape and topology information during deformation. We generated the graph cost using trained classifiers and used a coarse-to-fine search with region-specific classifier training. The proposed algorithm was developed using 261 training images and tested on another 290 cases. A mean DSC ranging from 0.89 to 0.91, depending on the evaluation subset, demonstrates state-of-the-art segmentation performance. Running time for the system is about 20 to 40 seconds, depending on image size and resolution.
This work describes a method that can discriminate between a solid pulmonary nodule and a pulmonary vessel bifurcation point at a given candidate location on a CT scan using the method of standard moments. The algorithm starts with the estimation of a spherical window around a nodule candidate center that best captures the local shape properties of the region. Then, given this window, the standard set of moments, invariant to rotation and scale, is computed over the geometric representation of the region. Finally, a feature vector composed of the moment values is classified as either a nodule or a vessel bifurcation point.
The performance of this technique was evaluated on a dataset containing 276 intraparenchymal nodules and 276 selected vessel bifurcation points. The method achieved 99% sensitivity and 80% specificity in identifying nodules, which makes this technique an efficient filter for false-positive reduction. Its efficiency was further evaluated on a dataset of 656 low-dose chest CT scans. Inclusion of this filter in the design of an experimental detection system resulted in up to a 69% decrease in the false-positive rate in the detection of intraparenchymal nodules, with less than 1% loss in sensitivity.
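"Standard moments" are moments made invariant by normalizing for position and scale. As a concrete 2D analogue (the paper computes a 3D moment set over the spherical window, not reproduced here), the first Hu invariant combines the second-order normalized central moments into a quantity unchanged by translation, scaling, and rotation:

```python
import numpy as np

def hu_first_invariant(mask):
    """First Hu moment, eta20 + eta02, of a binary region: invariant
    to translation, scale, and rotation (2D analogue of the 3D
    standard moments used for nodule/vessel discrimination)."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                       # area (zeroth moment)
    yc, xc = ys.mean(), xs.mean()       # centroid (translation removed)
    mu20 = ((xs - xc) ** 2).sum()       # central moments
    mu02 = ((ys - yc) ** 2).sum()
    # Normalized central moments: eta_pq = mu_pq / m00**((p+q)/2 + 1),
    # which removes the dependence on region scale.
    eta20 = mu20 / m00 ** 2
    eta02 = mu02 / m00 ** 2
    return eta20 + eta02
```

For a disk this evaluates to roughly 1/(2π) regardless of radius or position; elongated or branching shapes such as vessel bifurcations yield different moment signatures, which is what the classifier exploits.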
The primary stage of a pulmonary nodule detection system is typically a candidate generator that efficiently provides the centroid location and size estimate of candidate nodules. The scale-normalized Laplacian of Gaussian (LOG) filtering method presented in this paper has been found to provide high sensitivity along with precise locality and size estimation. This approach involves a computationally efficient algorithm designed to identify all solid nodules in a whole-lung anisotropic CT scan.
This nodule candidate generator has been evaluated in conjunction with a set of discriminative features that target both isolated and attached nodules. The entire detection system was evaluated on a size-enriched dataset of 656 whole-lung low-dose CT scans containing 459 solid nodules with diameter greater than 4 mm. Using a soft-margin SVM classifier and setting the false-positive rate to 10 per scan, we obtained a sensitivity of 97% for isolated nodules, 93% for attached nodules, and 89% for both nodule types combined. Furthermore, the LOG filter was shown to have good agreement with the radiologist ground truth for size estimation.
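The key property of scale-normalized LOG filtering is that multiplying the LOG response by σ² makes blob responses comparable across scales, so the σ that maximizes the response at a location also estimates the nodule size. A pure-NumPy 2D sketch follows (the paper's algorithm runs efficiently on anisotropic 3D volumes; this illustration does neither):

```python
import numpy as np

def scale_normalized_log(image, sigma):
    """Scale-normalized LoG response (2D sketch): separable Gaussian
    smoothing, 5-point discrete Laplacian, then multiplication by
    sigma**2; the sign is flipped so bright blobs give positive peaks."""
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    padded = np.pad(image.astype(float), radius + 1, mode="edge")
    rows = np.apply_along_axis(np.convolve, 1, padded, k, mode="valid")
    smooth = np.apply_along_axis(np.convolve, 0, rows, k, mode="valid")
    lap = (smooth[2:, 1:-1] + smooth[:-2, 1:-1]
           + smooth[1:-1, 2:] + smooth[1:-1, :-2]
           - 4 * smooth[1:-1, 1:-1])
    return -sigma ** 2 * lap  # same shape as the input image
```

Candidate generation then amounts to finding local maxima of this response over both space and a discrete set of scales.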
An estimation of the so-called Ground Truth (GT), i.e., the actual lesion region, can minimize readers' subjectivity if multiple readers' markings are combined. Two methods perform this estimate by considering the spatial location of voxels: the Thresholded Probability-Map (TPM) and Simultaneous Truth and Performance Level Estimation (STAPLE). An analysis of these two methods has already been performed. The purpose of this study, however, is to gain new insight into the method outcomes by comparing the estimated regions. A subset of the publicly available Lung Image Database Consortium archive was used, selecting pulmonary nodules documented by all four radiologists. The TPM estimator was computed by assigning to each voxel a value equal to the fraction of readers that included that voxel in their markings and then applying a threshold of 0.5. Our STAPLE implementation is loosely based on a version from ITK, to which we added a graph-cut post-processing step. The pairwise similarities between the estimated ground truths were analyzed by computing the respective Jaccard coefficients. Then, a sign test of the differences between the TPM and STAPLE volumes was performed. A total of 35 nodules documented on 26 scans by all four radiologists were available. The spatial agreement had a one-sided 90% confidence interval of [0.92, 1.00]. The sign test of the differences had a p-value less than 0.001. We found that (a) the differences in the volume estimates are statistically significant, (b) the spatial disagreement between the two estimators is almost completely due to the exclusion of voxels marked by exactly two readers, and (c) STAPLE tends to give more weight, in its GT estimate, to readers marking broader regions.
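The TPM estimator as described reduces to a few lines. This sketch assumes binary reader masks and a strict (greater-than) threshold comparison, which matches finding (b): voxels marked by exactly two of four readers sit at 0.5 and are excluded.

```python
import numpy as np

def tpm_ground_truth(reader_masks, threshold=0.5):
    """Thresholded Probability-Map (TPM) ground-truth estimate:
    each voxel is assigned the fraction of readers that marked it,
    and the resulting probability map is thresholded (0.5 keeps
    voxels marked by a strict majority of readers)."""
    prob_map = np.mean([np.asarray(m, float) for m in reader_masks], axis=0)
    return prob_map > threshold
```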
A wide range of pulmonary diseases, including common ones such as COPD, affect the airways. If airway dimensions can be measured with high confidence, clinicians will be able to better diagnose disease as well as monitor progression and response to treatment. In this paper, we introduce a method to assess airway dimensions from CT scans, including airway segments that are not oriented axially. First, the airway lumen is segmented and skeletonized, and each airway segment is then identified. We represent each airway segment using a segment-centric generalized-cylinder model and assess airway lumen diameter (LD) and wall thickness (WT) for each segment by determining the inner and outer wall boundaries. The method was evaluated on 14 healthy patients from a Weill Cornell database who had two scans within a 2-month interval. The corresponding airway segments were located in both scans and measured using the automated method. The total number of segments identified in both scans was 131. When all 131 segments were considered together, the average absolute change between the two scans was 0.31 mm for LD and 0.12 mm for WT, with 95% limits of agreement of [-0.85, 0.83] mm for LD and [-0.32, 0.26] mm for WT. The results were also analyzed on a per-patient basis; the average absolute change was 0.19 mm for LD and 0.05 mm for WT, and the 95% limits of agreement for per-patient changes were [-0.57, 0.47] mm for LD and [-0.16, 0.10] mm for WT.
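The repeatability statistics reported above are the mean absolute change and Bland-Altman 95% limits of agreement over paired measurements. A sketch of the computation, assuming the conventional mean ± 1.96·SD definition of the limits (the abstract does not spell this out):

```python
import numpy as np

def limits_of_agreement(scan1, scan2):
    """Bland-Altman 95% limits of agreement for paired repeat
    measurements: mean difference +/- 1.96 * sample SD."""
    diff = np.asarray(scan1, float) - np.asarray(scan2, float)
    mean_diff = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

def mean_absolute_change(scan1, scan2):
    """Mean absolute change between the two scans."""
    return np.abs(np.asarray(scan1, float) - np.asarray(scan2, float)).mean()
```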
The performance of automated pulmonary nodule detection systems is typically qualified with respect to some minimum size of nodule to be detected. Correspondingly, an evaluation dataset is typically constructed by expert radiologists, with all nodules larger than the minimum size designated as true positives while all smaller detected "nodules" are considered false positives. In this paper, we consider the negative impact that size-estimation error, either in the establishment of ground truth for the evaluation dataset or in the automated detection method's size estimate for nodule candidates, has on the measured performance of the detection system. Furthermore, we propose a modified evaluation procedure that addresses the size-estimation error issue.
The impact of the size measurement error was estimated for a documented research image database consisting of whole-lung CT scans for 509 cases in which 690 nodules have been documented. We computed FROC curves both with and without size-error compensation and found that, for a minimum size limit of 4 mm, the performance of the system is underestimated by a sensitivity reduction of 5% and a false-positive rate increase of 0.25 per case. Therefore, error in nodule size estimation should be considered in the evaluation of automated detection systems.
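One simple form of such compensation is to treat findings whose size falls within an uncertainty band around the minimum-size limit as neither true nor false positives when scoring. The sketch below, including its per-case bookkeeping of detected, missed, and false-detection sizes, is an illustrative assumption rather than the paper's actual procedure.

```python
def froc_point(cases, min_size, size_margin=0.0):
    """One FROC operating point with optional size-error compensation.

    cases: per-scan tuples (detected_sizes, missed_sizes, fp_sizes),
    a hypothetical bookkeeping. Findings whose size lies inside the
    band [min_size - size_margin, min_size + size_margin] are excluded
    from both the sensitivity denominator and the false-positive count,
    so borderline size estimates are not scored either way.
    """
    lo, hi = min_size - size_margin, min_size + size_margin
    tp = fn = fp = 0
    for detected, missed, fp_sizes in cases:
        tp += sum(s > hi for s in detected)
        fn += sum(s > hi for s in missed)
        fp += sum(not (lo <= s <= hi) for s in fp_sizes)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    return sensitivity, fp / len(cases)
```

Sweeping the detector's confidence threshold and recomputing this point traces out the compensated FROC curve.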
We present an automated method for delineation of coronary arteries from cardiac CT angiography (CTA) images. Coronary arteries are narrow blood vessels and, when imaged using CTA, appear as thin cylindrical structures of varying curvature. This appearance is often affected by heart motion and image reconstruction artifacts. Moreover, when an artery is diseased, it may appear as a non-continuous structure of widely varying width and image intensity. Defining the boundaries of the coronary arteries is an important and necessary step for further analysis and diagnosis of coronary disease. For this purpose, we developed a method using cylindrical structure modeling. For each vessel segment, a best-fitting cylindrical template is found. By applying this technique sequentially along the vessel, its entire volume can be reconstructed. The algorithm is seeded with a manually specified starting point at the most distal discernible portion of an artery and then proceeds iteratively toward the aorta. The algorithm makes the corrections necessary to account for CTA image artifacts and is able to perform in diseased arteries. It stops when it identifies the vessel's junction with the aorta. Five cardiac 3D CT angiography studies were used for algorithm validation. For each study, the four longest visually discernible branches of the major coronary arteries were evaluated. Central axes obtained from our automated method were compared with ground-truth markings made by an experienced radiologist. In 75% of the cases, our algorithm was able to extract the entire length of the artery from a single initialization.
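A cylindrical template fit can be scored, for example, as the intensity contrast between the candidate cylinder's interior and a surrounding shell. The following is a generic matched-template sketch, since the abstract does not give the actual fitting functional; the parameterization by center, axis direction, radius, and length is assumed.

```python
import numpy as np

def cylinder_fit_score(volume, center, direction, radius, length):
    """Score a cylindrical template at a candidate vessel position:
    mean intensity inside the cylinder minus mean intensity in a
    surrounding coaxial shell (a simple contrast criterion)."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    zz, yy, xx = np.indices(volume.shape)
    rel = np.stack([zz, yy, xx], -1) - np.asarray(center, float)
    along = rel @ d                                      # axial coordinate
    radial = np.linalg.norm(rel - along[..., None] * d, axis=-1)
    in_len = np.abs(along) <= length / 2
    inside = in_len & (radial <= radius)                 # cylinder interior
    shell = in_len & (radial > radius) & (radial <= 2 * radius)
    return volume[inside].mean() - volume[shell].mean()
```

Maximizing such a score over position, direction, and radius at each step, then advancing along the fitted axis, yields the sequential tracking behavior described above.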