A new method for the estimation of the intensity distributions of the images prior to normalized mutual information (NMI) based registration is presented. Our method is based on the K-means clustering algorithm as opposed to the generally used equidistant binning method. K-means clustering is a binning method in which the size of each bin is adjusted to achieve a natural clustering of the intensities. Registering clinical MR-CT and MR-PET images with K-means clustering based intensity distribution estimation shows that a significant reduction in computational time is possible, without loss of accuracy, as compared to standard equidistant binning based registration. Further inspection shows a reduction in the NMI variance and a reduction in local maxima for K-means clustering based NMI registration as opposed to equidistant binning based NMI registration.
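A minimal sketch of the binning idea, assuming NumPy; the simple 1-D K-means below, its quantile initialisation, and the bin count are illustrative choices, not the authors' implementation:

```python
import numpy as np

def kmeans_bin_edges(values, k=32, iters=20):
    """1-D K-means on the image intensities; bin edges are placed halfway
    between adjacent cluster centres, yielding variable-width bins."""
    values = np.asarray(values, dtype=float).ravel()
    centers = np.quantile(values, np.linspace(0, 1, k))   # spread initial centres
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    centers.sort()
    return np.concatenate(([values.min()],
                           (centers[:-1] + centers[1:]) / 2,
                           [values.max()]))

def normalized_mutual_information(a, b, edges_a, edges_b):
    """NMI = (H(A) + H(B)) / H(A,B) from a joint histogram with the given bins."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=[edges_a, edges_b])
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return (h(pa) + h(pb)) / h(p)
```

Equidistant binning corresponds to replacing kmeans_bin_edges with np.linspace over the intensity range; the cluster-adapted edges instead place bin boundaries between natural intensity clusters.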
An automated method for the segmentation of thrombus in abdominal aortic aneurysms from CTA data is presented. The method is based on Active Shape Model (ASM) fitting in sequential slices, using the contour obtained in one slice as the initialisation in the adjacent slice. The optimal fit is defined by maximum correlation of grey value profiles around the contour in successive slices, in contrast to the original ASM scheme as proposed by Cootes and Taylor, where the correlation with profiles from training data is maximised. An extension to the proposed approach prevents the inclusion of low-intensity tissue and allows the model to refine to nearby edges. The applied shape models contain either one or two image slices, the latter explicitly restricting the shape change from slice to slice. To evaluate the proposed methods a leave-one-out experiment was performed, using six datasets containing 274 slices to segment. Both adapted ASM schemes yield significantly better results than the original scheme (p<0.0001). The extended slice correlation fit of a one-slice model showed best overall performance. Using one manually delineated image slice as a reference, on average 29 slices could be automatically segmented with an accuracy within the bounds of manual inter-observer variability.
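A minimal sketch of the slice-correlation fit, assuming NumPy; the profile length, search range, and nearest-neighbour sampling are assumptions, and the ASM step that projects the displaced landmarks back onto the shape model is omitted:

```python
import numpy as np

def sample_profile(image, point, normal, half_len=5):
    """Grey value profile along the landmark normal (nearest-neighbour sampling;
    out-of-bounds handling is omitted in this sketch)."""
    offsets = np.arange(-half_len, half_len + 1)
    coords = np.round(point[None, :] + offsets[:, None] * normal[None, :]).astype(int)
    return image[coords[:, 0], coords[:, 1]].astype(float)

def best_shift(prev_profile, curr_image, point, normal, search=6, half_len=5):
    """Move one landmark along its normal so that the profile in the current slice
    correlates maximally with the profile at the same landmark in the previous,
    already segmented slice (instead of with training-data profiles)."""
    best_pos, best_corr = point, -np.inf
    for s in range(-search, search + 1):
        candidate = point + s * normal
        prof = sample_profile(curr_image, candidate, normal, half_len)
        corr = np.corrcoef(prev_profile, prof)[0, 1]
        if corr > best_corr:
            best_pos, best_corr = candidate, corr
    return best_pos
```

Applying best_shift to every landmark of the contour obtained in the previous slice gives suggested positions that the shape model subsequently regularises.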
A much-used measure for registration of three-dimensional medical images is mutual information, which originates from information theory. However, information theory offers many more measures that may be suitable for image registration. Such measures denote the divergence of the joint grey value distribution of two images from the joint distribution for complete independence of the images. This paper compares the performance of mutual information as a registration measure with that of other information measures. The measures are applied to rigid registration of clinical PET/MR and MR/CT images, for 35 and 41 image pairs respectively. An accurate gold standard transformation is available for the images, based on implanted markers. Both registration performance and accuracy of the measures are studied. The results indicate that some information measures perform very poorly for the chosen registration problems, yielding many misregistrations, even when using a good starting estimate. Other measures, however, were shown to produce significantly more accurate results than mutual information.
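Both kinds of measures are functions of the joint grey value histogram; a minimal NumPy sketch computes mutual information alongside one alternative divergence from independence (the χ² measure, chosen here purely as an example; the bin count is an assumption):

```python
import numpy as np

def joint_distribution(a, b, bins=64):
    """Joint grey value distribution of two (trial-transformed) images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    return h / h.sum()

def mutual_information(p):
    """Divergence of p from the product of its marginals (complete independence)."""
    pa, pb = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (pa * pb)[nz]))

def chi_square_divergence(p):
    """An alternative information measure of the same divergence."""
    q = p.sum(axis=1, keepdims=True) * p.sum(axis=0, keepdims=True)
    nz = q > 0
    return np.sum((p[nz] - q[nz]) ** 2 / q[nz])
```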
A semi-automatic method for localisation and segmentation of bifurcated aortic endografts in CTA images is presented. The graft position is established through detection of radiopaque markers sewn on the outside of the graft. The user indicates the first and the last marker, whereupon the rest of the markers are detected automatically by second order scaled derivative analysis combined with prior knowledge of graft shape and marker configuration. The marker centres obtained approximate the graft sides and central axis. The graft boundary is determined, either in the original CT slices or in reformatted slices orthogonal to the local graft axis, by maximizing the local gradient in the radial direction along a deformable contour passing through both sides. The method has been applied to ten CTA images. In all cases, an adequate segmentation is obtained. Compared to manual segmentations an average similarity (i.e. relative volume of overlap) of 0.93 +/- 0.02 for the graft body and 0.84 +/- 0.05 for the limbs is found.
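A hedged sketch of the second order scaled derivative analysis, using a scale-normalised Laplacian-of-Gaussian blob response (SciPy); the marker scale, neighbourhood size, and candidate count are illustrative, and the prior knowledge of graft shape and marker configuration used to prune candidates is not modelled here:

```python
import numpy as np
from scipy import ndimage

def marker_response(volume, scale_mm, voxel_spacing):
    """Scale-normalised second order response: small bright radiopaque markers give
    a strongly negative Laplacian of Gaussian, so its negation is returned."""
    vol = volume.astype(float)
    sigma = np.asarray(scale_mm, dtype=float) / np.asarray(voxel_spacing, dtype=float)
    d2 = [ndimage.gaussian_filter(vol, sigma, order=o)
          for o in ((2, 0, 0), (0, 2, 0), (0, 0, 2))]
    return -(np.mean(sigma) ** 2) * (d2[0] + d2[1] + d2[2])

def detect_markers(volume, voxel_spacing, scale_mm=2.0, n_markers=20):
    """Local maxima of the blob response, strongest first (candidate marker centres)."""
    resp = marker_response(volume, scale_mm, voxel_spacing)
    peaks = (resp == ndimage.maximum_filter(resp, size=5)) & (resp > 0)
    coords = np.argwhere(peaks)
    order = np.argsort(resp[peaks])[::-1]
    return coords[order][:n_markers]
```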
A semi-automatic segmentation method for Tuberous Sclerosis (TS) lesions in the brain has been developed. Both T1 images and Fluid Attenuated Inversion Recovery (FLAIR) images are integrated in the segmentation procedure. The segmentation procedure is mainly based on the notion of fuzzy connectedness. This approach uses the two basic concepts of adjacency and affinity to form a fuzzy relation between voxels in the image. The affinity is defined using two quantities that are both based on characteristics of the intensities in the lesion and surrounding brain tissue (grey and white matter). The semi-automatic method has been compared to results of manual segmentation. Manual segmentation is prone to interobserver and intraobserver variability. This was especially true for this particular study, where large variations were observed, which implies that a gold standard for comparison was not available. The method did perform within the variability of the observers and therefore has the potential to improve reproducibility of quantitative measurements.
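A minimal sketch of the fuzzy connectedness computation (NumPy); the affinity below combines adjacency with a single Gaussian intensity term on one image, whereas the actual method defines the affinity from two intensity-based quantities over both the T1 and FLAIR images:

```python
import heapq
import numpy as np

def affinity(img, c, d, mean, std):
    """Fuzzy affinity of two adjacent voxels: high when their mean intensity is
    close to the expected lesion intensity (illustrative single-term form)."""
    g = 0.5 * (img[c] + img[d])
    return float(np.exp(-0.5 * ((g - mean) / std) ** 2))

def fuzzy_connectedness(img, seeds, mean, std):
    """Connectedness of every voxel to the seeds: the strength of a path is the
    weakest affinity along it, and the strongest path is kept (max-min, Dijkstra-like)."""
    conn = np.zeros(img.shape)
    heap = []
    for s in seeds:
        conn[s] = 1.0
        heapq.heappush(heap, (-1.0, s))
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while heap:
        neg_k, c = heapq.heappop(heap)
        if -neg_k < conn[c]:
            continue                      # stale queue entry
        for off in offsets:
            d = tuple(int(c[i] + off[i]) for i in range(3))
            if all(0 <= d[i] < img.shape[i] for i in range(3)):
                k = min(conn[c], affinity(img, c, d, mean, std))
                if k > conn[d]:
                    conn[d] = k
                    heapq.heappush(heap, (-k, d))
    return conn    # thresholding this map yields the lesion segmentation
```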
Shading is a prominent phenomenon in microscopy, reflecting the inherent imperfections of the image formation process and manifesting itself via spurious intensity variations not present in the original scene. The elimination of shading effects is frequently necessary for subsequent image processing tasks, especially if quantitative analysis is the final goal. In this paper a novel method for retrospective shading correction is proposed. First, the image formation process and the corresponding shading effects are described by a linear image formation model, consisting of an additive and a multiplicative shading component that are modeled by parametric polynomial surfaces. Second, shading correction is performed by inverting the image formation model, whose shading components are estimated retrospectively by minimizing the entropy of the acquired images. The method was qualitatively and quantitatively evaluated using artificial and real microscopical images of muscle fibers. A number of qualitative results confirmed that entropy is an appropriate measure for shading correction. Quantitative results indicate that the method does not introduce additional intensity variations but only reduces them if they exist. In conclusion, the proposed method uses all the information available in the images, it enables the optimization of arbitrarily complex image formation models, and as such may have applications in and beyond the field of microscopical imaging, for example, in MRI.
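A minimal 2-D sketch of the retrospective estimation, assuming NumPy/SciPy; the second-order polynomial surfaces, the Powell optimiser, and the histogram-based entropy estimate are illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

def poly_surface(c, X, Y):
    """Second-order polynomial surface over normalised image coordinates."""
    return c[0]*X + c[1]*Y + c[2]*X*Y + c[3]*X**2 + c[4]*Y**2 + c[5]

def entropy(img, bins=256):
    p, _ = np.histogram(img, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def correct_shading(img):
    """Estimate additive and multiplicative shading surfaces by minimising the
    entropy of the corrected image, then invert the linear image formation model."""
    ny, nx = img.shape
    Y, X = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing='ij')

    def corrected(params):
        add = poly_surface(params[:6], X, Y)              # additive component
        mult = poly_surface(params[6:], X, Y)             # multiplicative component
        return (img - add) / np.maximum(mult, 1e-3)       # guard against division by ~0

    x0 = np.zeros(12)
    x0[11] = 1.0                                          # start from "no shading"
    res = minimize(lambda p: entropy(corrected(p)), x0, method='Powell')
    return corrected(res.x)
```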
Registration algorithms often require the estimation of grey values at image locations that do not coincide with image grid points. Because of the intrinsic uncertainty, the estimation process will invariably be a source of error in the registration process. For measures based on entropy, such as mutual information, an interpolation method that changes the amount of dispersion in the probability distributions of the grey values of the images will influence the registration measure. With two images that have equal grid distances in one or more corresponding dimensions, a large number of grid points can be aligned for certain geometric transformations. As a result, the level of interpolation is dependent on the image transformation and hence, so is the interpolation-induced change in dispersion of the histograms. When an entropy based registration measure is plotted as a function of transformation, it will show sudden changes in value for the grid-aligning transformations. Such patterns of local extrema impede the optimization process. More importantly, they rule out subvoxel accuracy. Interpolation-induced artifacts are shown to occur in registration of clinical images, both for trilinear and partial volume interpolation. Furthermore, the results suggest that improved registration accuracy for scale-corrected MR images may be partly accounted for by the inequality of grid distances that is a result of scale correction.
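The mechanism can be illustrated with a 1-D sketch of partial volume interpolation (NumPy); the signals, bin count, and single-axis translation are assumptions chosen only to show why the dispersion of the joint histogram changes abruptly at grid-aligning transformations:

```python
import numpy as np

def joint_hist_pv(fixed, moving, t, bins=32):
    """Joint histogram of two 1-D signals under translation t using partial volume
    interpolation: the histogram weight, not the grey value, is interpolated.
    Intensities are assumed non-negative in this sketch."""
    h = np.zeros((bins, bins))
    q = lambda s: np.clip((s * bins / (s.max() + 1e-9)).astype(int), 0, bins - 1)
    fq, mq = q(fixed), q(moving)
    i0, w = int(np.floor(t)), t - np.floor(t)   # w == 0 at grid-aligning translations
    for i in range(len(fixed)):
        j = i + i0
        if 0 <= j < len(moving) - 1:
            h[fq[i], mq[j]] += 1.0 - w          # no extra dispersion when w == 0,
            h[fq[i], mq[j + 1]] += w            # so the joint entropy changes abruptly there
    return h / max(h.sum(), 1e-9)
```

Plotting the joint entropy of this histogram as a function of t exhibits abrupt changes at integer (grid-aligning) translations, the artifact pattern described above.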
Methods based on mutual information have shown promising results for matching of multimodal brain images. This paper discusses a multiscale approach to mutual information matching, aiming for an acceleration of the matching process while considering the accuracy and robustness of the method. Scaling of the images is done by equidistant sampling. Rigid matching of 3D magnetic resonance and computed tomography brain images is performed on datasets of varying resolution and quality. The experiments show that a multiscale approach to mutual information matching is an appropriate method for images of high resolution and quality. For such images an acceleration up to a factor of around 3 can be achieved. For images of poorer quality caution is advised with respect to the multiscale method, since the optimization method used (Powell) was shown to be highly sensitive to the local optima occurring in these cases. When incorrect intermediate results are avoided, an acceleration up to a factor of around 2 can be achieved for images of lower resolution.
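A coarse-to-fine sketch, assuming NumPy/SciPy; the strided subsampling stands in for equidistant sampling, the six-parameter vector is illustrative, and transform_and_resample is a hypothetical placeholder for whatever rigid resampling routine is available:

```python
import numpy as np
from scipy.optimize import minimize

def mutual_information(a, b, bins=64):
    p, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = p / p.sum()
    pa, pb = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (pa * pb)[nz]))

def register_multiscale(fixed, moving, transform_and_resample, factors=(4, 2, 1)):
    """Coarse-to-fine rigid matching: equidistant subsampling at each level,
    Powell optimisation, and the coarse result initialising the next finer level."""
    params = np.zeros(6)                       # e.g. 3 translations + 3 rotations
    for f in factors:
        fs = fixed[::f, ::f, ::f]
        # transform_and_resample(moving, params, f) is a placeholder that should
        # return the rigidly transformed moving image sampled on the grid of fs.
        cost = lambda p: -mutual_information(fs, transform_and_resample(moving, p, f))
        params = minimize(cost, params, method='Powell').x
    return params
```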
Recent studies indicate that maximizing the mutual information of the joint histogram of two images is an accurate and robust way to rigidly register two mono- or multimodal images. Using mutual information for registration directly in a local manner is often not admissible owing to the weakened statistical power of the local histogram compared to a global one. We propose to use a global joint histogram based on optimized mutual information combined with a local registration measure to enable local elastic registration.
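One way to make this concrete (a sketch, not necessarily the authors' exact formulation): estimate the joint distribution once from the whole images and evaluate only the pointwise contributions inside a local region:

```python
import numpy as np

def pointwise_mi_weights(a, b, bins=64):
    """Pointwise log p(a,b) / (p(a) p(b)), with all probabilities estimated from the
    global joint histogram so that the local measure keeps global statistical power."""
    p, ea, eb = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = p / p.sum()
    pa, pb = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    ratio = np.where(p > 0, p / (pa * pb + 1e-12), 1.0)
    ia = np.clip(np.digitize(a, ea[1:-1]), 0, bins - 1)
    ib = np.clip(np.digitize(b, eb[1:-1]), 0, bins - 1)
    return np.log(ratio)[ia, ib]               # one contribution per voxel pair

def local_measure(weights, region_mask):
    """Local registration measure for one region: mean pointwise contribution."""
    return float(weights[region_mask].mean())
```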
Surface (e.g., skin or cortex) based methods to register SPECT and MR images are well known and widely used. Such methods have the disadvantage of needing some kind of segmentation to obtain the surface, which is often a high-level task and possibly error-prone. Also, when reducing the gray-valued images to surfaces, i.e., binary structures, valuable information may be lost in the process. We propose to use the same surfaces, but in a fuzzy way, i.e., we determine the 'surfaceness,' thus retaining more information and avoiding binary segmentation. Such surfaceness can be computed using various techniques. We found a simple morphological approach (which is easy to implement and has very low computational complexity) to be quite sufficient. The resultant surfaceness images are subsequently registered by optimizing the cross-correlation value. The optimization is done using a hierarchical approach to cut the number of required computer operations down to an acceptable level.
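A minimal sketch of the surfaceness and the similarity value (SciPy); the morphological gradient below is one simple choice of surfaceness operator and the structuring element size is an assumption:

```python
import numpy as np
from scipy import ndimage

def surfaceness(volume, size=3):
    """Grey value morphological gradient (dilation minus erosion): high near
    boundaries, computed without any binary segmentation of the images."""
    se = (size,) * volume.ndim
    return (ndimage.grey_dilation(volume, size=se)
            - ndimage.grey_erosion(volume, size=se)).astype(float)

def cross_correlation(a, b):
    """Normalised cross-correlation of two (surfaceness) volumes."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.sqrt(np.sum(a * a) * np.sum(b * b)) + 1e-12))
```

Registration then searches, hierarchically, for the rigid transformation of one surfaceness volume that maximises cross_correlation with the other.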
This article concerns the integration of multimodal volumetric brain images. Integration consists of two steps: matching or registration, where the images are brought into spatial agreement, and fusion or simultaneous display where the registered multimodal image information is presented in an integrated fashion. Approaches to register multiple images are divided into extrinsic methods based on artificial markers, and intrinsic matching methods based solely on the patient related image data. The various methods are compared by a number of characteristics, which leads to a clear preference for one class of intrinsic methods, viz. voxel based matching. Furthermore, two- and three-dimensional techniques to display multimodality image information are outlined.
This paper describes a new approach to register images obtained from different modalities. Differential operators in scale space are used to extract geometric features from the images corresponding to similar structures. The resulting feature images may be matched by minimizing some function of the distances between the features in the respective images. Our first application concerns matching of brain images. We discuss a differential operator that produces ridge-like feature images from which the center curve of the cranium is easily extracted in CT and MRI. Results of the performance of these operators in 2-D matching tasks are presented. In addition, the potential of this approach for multimodality matching of 3-D medical images is illustrated by the striking similarity of the ridge images extracted from CT and MR images by the 3-D version of the operator.
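A 2-D sketch of a scale-space ridge operator (SciPy); the Hessian-eigenvalue ridgeness below is a standard choice used for illustration and may differ in detail from the differential operator discussed in the paper:

```python
import numpy as np
from scipy import ndimage

def ridge_measure(image, sigma=3.0):
    """Scale-normalised ridgeness: minus the smaller eigenvalue of the Gaussian
    Hessian, which is large on bright elongated structures such as the cranium."""
    img = image.astype(float)
    Lxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    Lyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    Lxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    half_diff = np.sqrt(((Lxx - Lyy) / 2.0) ** 2 + Lxy ** 2)
    lam_min = (Lxx + Lyy) / 2.0 - half_diff        # smaller Hessian eigenvalue
    return np.maximum(-lam_min, 0.0) * sigma ** 2  # keep bright-ridge response only
```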