In this paper, we present a novel method for modeling both layers of the aortic wall in cases of aortic dissection for analysis from Computed Tomography Angiography. It involves a fast initialization of the associated physiological and pathological lumina and further editing on non-linearly reformatted and cross-sectional views. Fast and accurate derivation of 3D models of these inner and outer vessel walls is crucial to analyze the true and false lumen, to accelerate processing times in research studies, and to answer therapy questions. Since the aorta is a relatively large vessel, our system makes use of a point-based surface interpolation with compactly supported radial basis functions, requiring only a few surface constraints. Where possible, we use a semi-automatic approach to segment the vessel walls with an Active Contour Model, which detects the contours in the vessel's cross-sectional planes and provides the constraints for interpolation. After initialization, editing on non-linearly reformatted and cross-sectional views is possible because user input is handled through tangent frame bundles, dismissing contradictory surface samples before the models are updated with the new constraints. Our proposed method was evaluated in a user study measuring processing times and achievable model accuracy with respect to an expert-defined ground truth. The users needed 19 minutes on average to derive one model (both walls) and attained a mean surface distance of about 1.0 mm for the outer vessel wall and about 1.6 mm for the inner wall. Using our method instead of an open-source geometric modeling program saves 26 minutes per dataset.
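The interpolation idea above can be illustrated with a minimal sketch. The abstract does not fix a specific kernel, so the Wendland C2 function is assumed here as one common compactly supported choice; all function names are ours, and the dense linear solve stands in for the sparse solve the compact support enables in practice:

```python
import numpy as np

def wendland_c2(r):
    # Wendland C2 compactly supported RBF: phi(r) = (1-r)^4 (4r+1) for r < 1, else 0
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def fit_rbf(centers, values, support):
    # Solve for weights so the interpolant matches `values` at `centers`;
    # the system matrix is zero beyond the support radius, hence sparse in principle
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(wendland_c2(d / support), values)

def eval_rbf(x, centers, weights, support):
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return wendland_c2(d / support) @ weights

# A few scattered surface constraints (2D for brevity; the method uses 3D points)
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([0.0, 1.0, 1.0, 2.0])
w = fit_rbf(centers, values, support=2.0)
print(np.allclose(eval_rbf(centers, centers, w, 2.0), values))  # True
```

Because the kernel vanishes beyond the support radius, adding or editing a constraint only perturbs the system locally, which is what makes interactive model updates cheap.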
In this paper, we present a fully automated approach to coronary vessel segmentation, which involves calcification and soft plaque delineation in addition to accurate lumen delineation, from 3D Cardiac Computed Tomography Angiography data. Adequately virtualizing the coronary lumen plays a crucial role in simulating blood flow by means of fluid dynamics, while additionally identifying the outer vessel wall in the case of arteriosclerosis is a prerequisite for further plaque compartment analysis. Our method is a hybrid approach complementing Active Contour Model-based segmentation with an external image force that relies on a Random Forest Regression model generated off-line. The regression model provides a strong estimate of the distance to the true vessel surface for every surface candidate point, taking into account 3D wavelet-encoded contextual image features aligned with the current surface hypothesis. The associated external image force is integrated into the objective function of the Active Contour Model, such that the overall segmentation approach benefits both from the advantages associated with snakes and from those associated with machine learning-based regression. This yields an integrated approach achieving competitive results on a publicly available benchmark data collection (Rotterdam segmentation challenge).
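How a per-point distance regressor can act as an external snake force may be sketched as follows. The trained Random Forest and its wavelet features are replaced by a stand-in predictor for a circular toy vessel, and the update rule and weights are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def predict_distance(points):
    # Stand-in for the off-line regression model: estimated signed distance to
    # the true vessel surface for each candidate point. Here we simply assume
    # a circular vessel of radius 5 centered at the origin (purely illustrative).
    return np.linalg.norm(points, axis=1) - 5.0

def snake_step(points, alpha=0.2, beta=0.8):
    # Internal force: pull each point toward the midpoint of its neighbours
    # (smoothness). External force: move it along its radial direction by the
    # regressed distance. Weights alpha + beta = 1 blend the two.
    smooth = 0.5 * (np.roll(points, 1, axis=0) + np.roll(points, -1, axis=0))
    normals = points / np.linalg.norm(points, axis=1, keepdims=True)
    external = points - predict_distance(points)[:, None] * normals
    return alpha * smooth + beta * external

# Start from a rough circular contour of radius 3 and iterate
t = np.linspace(0, 2 * np.pi, 32, endpoint=False)
contour = 3.0 * np.c_[np.cos(t), np.sin(t)]
for _ in range(50):
    contour = snake_step(contour)
print(np.abs(np.linalg.norm(contour, axis=1) - 5.0).max() < 0.1)  # True
```

The point is the coupling: the learned distance estimate enters the contour evolution exactly like a classical image force, so smoothness regularization and data-driven evidence are optimized jointly.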
Spinal bone lesion detection is a challenging and important task in cancer diagnosis and treatment monitoring.
In this paper we present a method for fully-automatic osteolytic spinal bone lesion detection from 3D CT data.
It is a multi-stage approach that sequentially applies multiple discriminative models, i.e., multiple random forests, to an input volume for lesion candidate detection and rejection. For each detection stage, an internal control mechanism ensures that sensitivity on true positive lesion candidates unseen during training is maintained. In this way, a pre-defined target sensitivity score of the overall system can be taken into account at the time of model generation. For each lesion, not only the center is detected but also, during post-processing, its spatial extent along the three axes defined by the surrounding vertebral body's local coordinate system. Our method
achieves a cross-validated sensitivity score of 75% and a mean false positive rate of 3.0 per volume on a data
collection consisting of 34 patients with 105 osteolytic spinal bone lesions. The median sensitivity score is 86%
at 2.0 false positives per volume.
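The per-stage sensitivity control can be sketched as threshold calibration on held-out candidate scores. The threshold rule and all names here are our own simplification of the described mechanism, assuming each stage rejects candidates scoring below a calibrated cut-off:

```python
def stage_threshold(scores, labels, target_sensitivity):
    # Choose the largest rejection threshold that still retains at least
    # `target_sensitivity` of the true positive candidates (label 1).
    pos = sorted(s for s, l in zip(scores, labels) if l == 1)
    cut = int(len(pos) * (1.0 - target_sensitivity))  # lowest positives we may lose
    return pos[cut]

def run_cascade(candidates, stages):
    # A candidate survives only if every (scorer, threshold) stage accepts it
    surviving = candidates
    for scorer, thr in stages:
        surviving = [c for c in surviving if scorer(c) >= thr]
    return surviving

# Toy example: candidates as feature pairs, a single stage scoring the first feature
cands = [(0.9, 0.8), (0.7, 0.2), (0.2, 0.9), (0.1, 0.1)]
labels = [1, 1, 0, 0]
thr1 = stage_threshold([c[0] for c in cands], labels, target_sensitivity=1.0)
kept = run_cascade(cands, [(lambda c: c[0], thr1)])
print(thr1, len(kept))  # 0.7 2
```

Calibrating every stage this way is what lets the overall target sensitivity be fixed at model-generation time: each stage only spends its allotted share of the sensitivity budget.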
Detection and segmentation of abnormal masses within organs in Computed Tomography (CT) images of
patients is of practical importance in computer-aided diagnosis (CAD), treatment planning, and analysis of
normal as well as pathological regions. For intervention planning, e.g., in radiotherapy, the detection of abnormal masses is essential for patient diagnosis, personalized treatment choice, and follow-up. The unpredictable nature
of disease often makes the detection of the presence, appearance, shape, size and number of abnormal masses
a challenging task, which is particularly tedious when performed by hand. Moreover, in cases in which the
imaging protocol specifies the administration of a contrast agent, the contrast agent phases at which the patient
images are acquired have a dramatic influence on the shape and appearance of the diseased masses. In this
paper we propose a method to automatically detect candidate lesions (CLs) in 3D CT scans of the liver. We introduce a novel multilevel candidate generation method that proves clearly advantageous in a comparative study with a state-of-the-art approach. A learning-based selection module and a candidate fusion module are
then introduced to reduce both redundancy and the false positive rate. The proposed workflow is applied to
the detection of both hyperdense and hypodense hepatic lesions in all contrast agent phases, with resulting
sensitivities of 89.7% and 92% and positive predictive values of 82.6% and 87.6% respectively.
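A candidate fusion step of the kind mentioned above can be sketched as greedy merging of nearby detections. The data structure, the greedy rule, and the radius are illustrative assumptions, not the paper's actual module:

```python
import math

def fuse_candidates(candidates, radius):
    # Greedy fusion: visit candidates by descending confidence and absorb any
    # remaining candidate whose center lies within `radius` of a kept one.
    # This reduces redundancy among multilevel detections of the same lesion.
    kept = []
    for cand in sorted(candidates, key=lambda c: -c["score"]):
        if all(math.dist(cand["center"], k["center"]) > radius for k in kept):
            kept.append(cand)
    return kept

# Three detections of the same hepatic lesion plus one distinct lesion
cands = [
    {"center": (10.0, 10.0, 5.0), "score": 0.9},
    {"center": (10.5, 10.2, 5.1), "score": 0.7},
    {"center": (9.8, 9.9, 4.9), "score": 0.6},
    {"center": (40.0, 12.0, 8.0), "score": 0.8},
]
print(len(fuse_candidates(cands, radius=3.0)))  # 2
```

Fusing before counting detections is what keeps redundant multilevel hits from inflating the false positive rate.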
Whole body CT scanning is a common diagnosis technique for discovering early signs of metastasis or for
differential diagnosis. Automatic parsing and segmentation of multiple organs, together with semantic navigation inside the body, can help the clinician obtain an accurate diagnosis efficiently. However, dealing with the large amount
of data of a full body scan is challenging and techniques are needed for the fast detection and segmentation of
organs, e.g., heart, liver, kidneys, bladder, prostate, and spleen, and body landmarks, e.g., bronchial bifurcation,
coccyx tip, sternum, lung tips. Solving the problem becomes even more challenging if partial body scans are
used, where not all organs are present. We propose a new approach to this problem, in which a network of 1D
and 3D landmarks is trained to quickly parse the 3D CT data and estimate which organs and landmarks are
present as well as their most probable locations and boundaries. Using this approach, the segmentation of seven organs and the detection of 19 body landmarks can be obtained in about 20 seconds with state-of-the-art accuracy; the approach has been validated on 80 full or partial body CT scans.
Comprehensive quantitative evaluation of tumor segmentation techniques on large-scale clinical data sets is crucial for the routine clinical use of CT-based tumor volumetry for cancer diagnosis and treatment response evaluation.
In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for
measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200
tumors in 107 patients with liver, lung, lymphoma and other types of cancer. The performance was evaluated
using both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that can
provide complementary information regarding the quality of the segmentation results. The reproducibility was
measured by the variation of the volume measurements from 10 independent segmentations. The effects of disease type, lesion size, and slice thickness of the image data on the accuracy measures were also analyzed. Our
results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all
four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma and other respectively). The
segmentation algorithm can produce relatively reproducible volume measurements on all lesion types (coefficient
of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and slice thickness of the image data (p > 0.90). The validation framework used in this
study has the potential to facilitate the development of new tumor segmentation algorithms and assist large scale
evaluation of segmentation techniques for other clinical applications.
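Two of the reported statistics are straightforward to reproduce. A minimal sketch of the Pearson correlation (accuracy vs. ground truth) and the coefficient of variation (reproducibility over repeated segmentations); the sample volumes are hypothetical:

```python
import math

def pearson_r(x, y):
    # Pearson correlation between algorithm volumes and ground-truth volumes
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coefficient_of_variation(volumes):
    # Reproducibility of repeated segmentations of one lesion: sample standard
    # deviation of the volume measurements relative to their mean, in percent
    n = len(volumes)
    mean = sum(volumes) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in volumes) / (n - 1))
    return 100.0 * sd / mean

# Hypothetical volumes (ml) from 10 independent segmentations of one tumor
repeats = [12.1, 11.8, 12.4, 12.0, 11.5, 12.6, 11.9, 12.2, 12.3, 11.7]
print(round(coefficient_of_variation(repeats), 1))  # 2.8
```

The two metrics are complementary by design: correlation measures agreement with the reference, while the coefficient of variation measures how stable repeated runs are regardless of the reference.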
Colon cancer is one of the most frequent causes of death. CT colonography is a novel method for the detection of polyps
and early cancer. The general principle of CT colonography includes a cathartic bowel preparation. The resulting
discomfort for patients leads to limited patient acceptance and therefore to limited cancer detection rates.
Reduced bowel preparation, techniques for stool tagging, and electronic cleansing, however, improve the acceptance
rates. Here, high-density oral contrast material highlights residual stool so that it can be digitally removed.
Known subtraction methods cause artifacts: additional 3D objects are introduced and small bowel folds are perforated.
We propose a new algorithm based on the second derivative of the image data: the Hessian matrix and a subsequent principal axis transform detect tiny folds that must not be subtracted together with the tagged stool found by a thresholding method. Since the stool is usually not homogeneously tagged with contrast media, a detection algorithm for island-like structures is incorporated. The interfaces between the air-stool level and the colon wall are detected by a three-dimensional difference-of-Gaussians module. A three-dimensional filter smooths the transitions between removed stool and colon tissue.
We evaluated the efficacy of the new algorithm with 10 patient data sets. The results showed no introduced artificial
objects and no perforated folds. The artifacts at the air-stool and colon tissue-stool transitions are considerably reduced
compared to those known from the literature.
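The Hessian/principal-axis idea for fold detection can be sketched in 2D: a thin bright fold has one strongly negative Hessian eigenvalue (across the fold) and one near zero (along it). The finite-difference scheme, thresholds, and test image are our assumptions:

```python
import numpy as np

def hessian_eigen(img):
    # Second derivatives via central differences, then the principal axis
    # transform (eigen-decomposition) of the 2x2 Hessian at every pixel
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    H = np.stack([np.stack([gyy, gyx], -1), np.stack([gxy, gxx], -1)], -2)
    return np.linalg.eigvalsh(H)  # eigenvalues sorted ascending per pixel

# Synthetic slice: a thin bright fold on a dark background
img = np.zeros((32, 32))
img[15, :] = 1.0
lam = hessian_eigen(img)
# Sheet-like response: one strongly negative eigenvalue, one near zero
fold_mask = (lam[..., 0] < -0.4) & (np.abs(lam[..., 1]) < 0.25)
print(bool(fold_mask[15, 16]), bool(fold_mask[5, 16]))  # True False
```

In the 3D case the same test distinguishes plate-like folds (one negative eigenvalue) from blob-like tagged stool, which is what protects the folds from being subtracted.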
In the diagnosis of coronary artery disease, 3D multi-slice computed tomography (MSCT) has recently become increasingly important. In this work, an anatomy-based method for the
segmentation of atherosclerotic coronary arteries in MSCT is
presented. This technique is able to bridge severe stenosis, image
artifacts or even full vessel occlusions. Different anatomical
structures (aorta, blood-pool of the heart chambers, coronary
arteries and their orifices) are detected successively to
incorporate anatomical knowledge into the algorithm. The coronary
arteries are segmented by a simulated wave-propagation method so that anatomical spatial relations can be extracted from the result. In order to bridge segmentation breaks caused by stenoses or image artifacts, the spatial location, its anatomical relations, and vessel curvature propagation are taken into account to span a dynamic search space for vessel bridging and gap closing. This prevents vessel misidentifications and significantly improves segmentation results. The robustness of this method is demonstrated on representative medical data sets.
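The wave-propagation core can be sketched as breadth-first front propagation through a binary vessel mask, where the arrival time of the front orders voxels along the vessel. This 2D toy omits the paper's gap-closing search space and uses names of our choosing:

```python
from collections import deque

def wave_propagation(mask, seed):
    # BFS front propagation: the arrival time of the wave at each voxel is its
    # distance in propagation steps from the seed (e.g., the coronary orifice),
    # from which downstream ordering along the vessel can be read off.
    arrival = {seed: 0}
    front = deque([seed])
    while front:
        y, x = front.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < len(mask) and 0 <= nx < len(mask[0])
                    and mask[ny][nx] and (ny, nx) not in arrival):
                arrival[(ny, nx)] = arrival[(y, x)] + 1
                front.append((ny, nx))
    return arrival

# Toy 2D vessel: an L-shaped segment starting at the orifice (0, 0)
vessel = [
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 1],
]
arrival = wave_propagation(vessel, (0, 0))
print(arrival[(2, 3)])  # 5
```

When the front dies out at a stenosis, the described method would span a dynamic search space from the last front position to resume propagation on the far side; that bridging step is not shown here.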
We present a new framework to estimate and visualize heart motion from echocardiograms. For velocity estimation, we have developed a novel multiresolution optical flow algorithm. In order to account for typical heart motions like contraction/expansion and shear, we use a local affine model for the velocity in space and time. The motion parameters are estimated in the least-squares sense inside a sliding spatio-temporal window.
The estimated velocity field is used to track a region of interest which is represented by spline curves. In each frame, a set of sample points on the curves is displaced according to the estimated motion field. The contour in the subsequent frame is obtained by a least-squares spline fit to the displaced sample points. This ensures robustness of the contour tracking. From the estimated velocity, we compute a radial velocity field with respect to a reference point. Inside the time-varying region of interest, the radial velocity is color-coded and superimposed on the original image sequence in a semi-transparent fashion. In contrast to conventional Tissue Doppler methods, this approach is independent of the incident angle of the ultrasound beam.
The motion analysis and visualization provide an objective and robust method for the detection and quantification of myocardial dysfunction. Promising results are obtained from synthetic and clinical echocardiographic sequences.
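The local affine velocity model fitted in the least-squares sense can be sketched as follows: within a window, the velocity is v(x) = A x + b, and A, b are recovered from sample velocities by linear least squares. The synthetic field and all names are illustrative:

```python
import numpy as np

def fit_local_affine(points, velocities):
    # Least-squares fit of a local affine velocity model v(x) = A @ x + b
    # to sample velocities observed inside a spatial window
    X = np.c_[points, np.ones(len(points))]   # design matrix rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(X, velocities, rcond=None)
    return coef[:2].T, coef[2]                # A (2x2), b (2,)

# Synthetic window: samples of a contraction-plus-shear velocity field
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(40, 2))
A_true = np.array([[-0.1, 0.05], [0.0, -0.1]])  # contraction with shear
b_true = np.array([0.02, -0.01])
vel = pts @ A_true.T + b_true
A_est, b_est = fit_local_affine(pts, vel)
print(np.allclose(A_est, A_true) and np.allclose(b_est, b_true))  # True
```

The affine model is what lets typical cardiac motions like contraction/expansion and shear be captured with only six parameters per window, keeping the sliding-window estimation well conditioned.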
We present a new wavelet-based strategy for autonomous feature extraction and segmentation of cardiac structures in dynamic ultrasound images. Image sequences subjected to a multidimensional (2D plus time) wavelet transform yield a large number of individual subbands, each coding for partial structural and motion information of the ultrasound sequence. We exploited this fact to create a strategy for autonomous analysis of cardiac ultrasound that builds on shape- and motion-specific wavelet subband filters. Subbands were selected automatically based on subband statistics. Such a collection of predefined subbands corresponds to the so-called footprint of the target structure and can be used as a multidimensional multiscale filter to detect and localize the target structure in the original ultrasound sequence. Unequivocal localization is then achieved using a peak-finding algorithm, allowing the findings to be compared with a reference standard. Image segmentation is then possible using standard region-growing operations. To test the feasibility of this multiscale footprint algorithm, we tried to localize, enhance, and segment the mitral valve autonomously in 182 non-selected clinical cardiac ultrasound sequences. Correct autonomous localization by the algorithm was feasible in 165 of 182 reconstructed ultrasound sequences, using the experienced echocardiographer as reference. This corresponds to a 91% accuracy of the proposed method on unselected clinical data. Thus, multidimensional multiscale wavelet footprints allow successful autonomous detection and segmentation of the mitral valve with good accuracy in dynamic cardiac ultrasound sequences, which are otherwise difficult to analyze due to their high noise level.
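Statistics-driven subband selection can be illustrated with a one-level 2D Haar transform, standing in for the multidimensional transform of the abstract; the subband names, the energy criterion, and the test image are our assumptions:

```python
import numpy as np

def haar_subbands(img):
    # One-level 2D Haar transform: average/difference along rows, then columns,
    # yielding an approximation band and three detail subbands
    a = (img[0::2] + img[1::2]) / 2.0
    d = (img[0::2] - img[1::2]) / 2.0
    return {
        "approx":    (a[:, 0::2] + a[:, 1::2]) / 2.0,
        "detail_x":  (a[:, 0::2] - a[:, 1::2]) / 2.0,  # high-pass along x
        "detail_y":  (d[:, 0::2] + d[:, 1::2]) / 2.0,  # high-pass along y
        "detail_xy": (d[:, 0::2] - d[:, 1::2]) / 2.0,
    }

def select_subbands(subbands, frac=0.1):
    # Automatic selection by subband statistics: keep detail subbands whose
    # energy exceeds a fraction of the total detail energy; the selected set
    # forms the "footprint" of the target structure
    energy = {k: float((v ** 2).sum()) for k, v in subbands.items() if k != "approx"}
    total = sum(energy.values())
    return [k for k, e in energy.items() if e > frac * total]

# A vertical edge concentrates its detail energy in the x-detail subband
img = np.zeros((8, 8))
img[:, 3:] = 1.0
print(select_subbands(haar_subbands(img)))  # ['detail_x']
```

In the described method the transform also spans the temporal axis, so the selected footprint encodes characteristic motion as well as shape; the selection principle is the same.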