Delineation of tumor and organs at risk on each phase of 4D CT images is an essential step in adaptive radiotherapy
planning. Manual contouring of such a large amount of data is time-consuming and impractical. (Semi-)automated methods
typically rely on deformable image registration techniques to automatically map the manual contours drawn in one
image to all the other phases in order to get complete 4D contouring, a procedure known as automatic re-contouring.
Such approaches have two disadvantages: the manual contouring information is not used in the registration process, and
whole-volume registration is computationally inefficient. In this work, we formulate automatic re-contouring in a
deformable surface model framework, which effectively restricts the computation to a lower dimensional space. The
proposed framework was inspired by the morphing active contour model proposed by Bertalmio et al. [1], but we
address some limitations of the original method. First, a surface-based regularization is introduced to improve robustness
with respect to noise. Second, we design a multi-resolution approach to further improve computational efficiency and to
account for large deformations. Third, discrete meshes are used to represent the surface model instead of the implicit
level set framework, for better computational speed and simpler implementation. Experimental results show that the new
morphing active surface model method performs as accurately as a volume-registration-based re-contouring method but
is nearly an order of magnitude faster. The new formulation also allows easy combination of registration and
segmentation techniques for further improvement in accuracy and robustness.
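A toy two-dimensional sketch of the discrete-mesh evolution can make the idea concrete. The snippet below is an illustration, not the authors' implementation: `morph_contour_step` is a hypothetical helper, a discrete Laplacian stands in for the surface-based regularizer, and an analytic speed function replaces the image-driven (intensity-difference) speed so the example is self-contained.

```python
import numpy as np

def morph_contour_step(verts, speed_fn, dt=0.5, smooth=0.25):
    """One explicit step of a discrete morphing active contour (2D sketch).

    verts    : (N, 2) vertices of a closed, counterclockwise polygon.
    speed_fn : maps (N, 2) points -> (N,) normal speeds; in re-contouring
               this would come from the difference between two 4D CT phases.
    smooth   : weight of the Laplacian term standing in for the
               surface-based regularization (assumed form).
    """
    nxt = np.roll(verts, -1, axis=0)
    prv = np.roll(verts, 1, axis=0)
    tang = nxt - prv
    # Outward normals: rotate the central-difference tangent by -90 degrees.
    nrm = np.stack([tang[:, 1], -tang[:, 0]], axis=1)
    nrm /= np.linalg.norm(nrm, axis=1, keepdims=True) + 1e-12
    lap = 0.5 * (nxt + prv) - verts          # discrete curve Laplacian
    return verts + dt * speed_fn(verts)[:, None] * nrm + smooth * lap

# Toy run: a circle of radius 2 morphs toward radius 1 (speed = 1 - r).
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
v = 2.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
for _ in range(200):
    v = morph_contour_step(v, lambda p: 1.0 - np.linalg.norm(p, axis=1))
```

Because every update touches only the N boundary vertices rather than the full voxel grid, the computation lives in the lower-dimensional space the abstract refers to; a multi-resolution version would simply run the same step on coarser, subsampled meshes first.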
Accurate models of human anatomy are essential for modern cancer radiotherapy. Maps of individual patient anatomy are usually drawn manually from CT images. Manual contouring is expensive and time-consuming because of the complexity of the anatomy, the low contrast of soft tissues in CT, and image detail blurred by respiratory motion. We have developed automated contouring methods based on relative entropy and more general divergence measures from information theory and statistics that produce minimum average error inference, like the traditional maximum likelihood (ML) and maximum a posteriori (MAP) classifiers. Unlike ML/MAP classifiers, which are frequently implemented assuming Gaussian models for the data, the information-theoretic divergences require no data model. We have concentrated on the Jensen-Renyi divergence (JRD), with which multiple contours can be obtained simultaneously by optimizing a single objective function. Region segmentation is accomplished by maximizing the divergence of pixel feature distributions inside and outside a flexible, closed, parametric curve. Recently, we have integrated multivariate region segmentation with edge detection, also done by maximizing the JRD over sets of region-interior and region-edge pixels in edge-enhanced versions of the image. Further, region and edge detection are combined with prior shape constraints, in which the combined JRD objective function is penalized if the flexible curve parameters deviate too far from those of a known prior shape. Though the performance of the JRD program is a complex function of the number and kind of pixel features, the edges, and the shape priors, we demonstrate accurate contours computed from image data distributions and estimates of prior shape.
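The Jensen-Renyi divergence at the core of this approach can be written down directly for discrete histograms. The sketch below is illustrative rather than the authors' code; it assumes equal mixture weights by default and an order α in (0, 1), for which the Rényi entropy is concave and the divergence is therefore nonnegative, vanishing only when all distributions coincide.

```python
import numpy as np

def renyi_entropy(p, alpha=0.5):
    """Renyi entropy R_alpha(p) = log(sum p^alpha) / (1 - alpha), alpha != 1."""
    p = p / p.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi(dists, weights=None, alpha=0.5):
    """Jensen-Renyi divergence of several discrete distributions.

    JR = R_alpha(sum_i w_i p_i) - sum_i w_i R_alpha(p_i).
    It is zero when the distributions coincide and grows as they
    separate, which is why maximizing it over the parameters of a
    closed curve drives the inside and outside statistics apart.
    """
    dists = [d / d.sum() for d in np.asarray(dists, dtype=float)]
    n = len(dists)
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
    mix = sum(wi * di for wi, di in zip(w, dists))
    return renyi_entropy(mix, alpha) - sum(
        wi * renyi_entropy(di, alpha) for wi, di in zip(w, dists))

# Identical histograms give zero; disjoint ones give the most.
p = np.array([0.5, 0.5, 0.0, 0.0])
q = np.array([0.0, 0.0, 0.5, 0.5])
```

A segmentation loop would evaluate `jensen_renyi` on the pixel-feature histograms inside and outside the current curve and adjust the curve parameters uphill; because one divergence accepts any number of distributions, n regions yield n contours from a single objective.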
Image segmentations based on maximum likelihood (ML) or maximum a posteriori (MAP) analyses of object textures, edges, and shape often assume stationary Gaussian distributions for these features. For real images, neither Gaussianity nor stationarity may be realistic, so model-free inference methods would have advantages over model-dependent ones. Relative entropy provides model-free inference, and a generalization, the Jensen-Renyi divergence (JRD), computes optimal n-way decisions. We apply these results to contouring patient anatomy in X-ray computed tomography (CT) for radiotherapy treatment planning.
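The model-free character of relative entropy can be seen in a minimal decision rule: classify a patch by the class histogram it diverges from least, with no Gaussian fit anywhere. This is a hedged sketch under assumed toy histograms, not the paper's pipeline; the helper names are hypothetical.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Relative entropy D(p || q) between discrete histograms (model-free)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def classify(patch_hist, class_hists):
    """Minimum-divergence decision: assign the patch to the class whose
    intensity histogram it is closest to in relative entropy. No
    Gaussian (or any other parametric) model of the data is assumed."""
    return int(np.argmin([kl(patch_hist, h) for h in class_hists]))

# Toy histograms: a bimodal "tissue" class that a single Gaussian
# would fit poorly, versus a high-intensity "bone" class.
tissue = np.array([4.0, 1.0, 0.5, 1.0, 4.0])
bone = np.array([0.5, 0.5, 1.0, 3.0, 6.0])
patch = np.array([3.0, 1.5, 0.5, 1.5, 3.5])   # resembles the tissue class
```

Here the bimodal tissue histogram would defeat a single-Gaussian ML classifier, while the divergence rule compares the full distributions directly.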
Image alignment is an absolute requirement for creating three-dimensional reconstructions from serial sections. The rotational and translational components of misalignment can be corrected by an iterative correlation procedure, but for images with significant differences, alignment fails with a likelihood that grows with the extent of those differences. We found that translational correction was determined much more reliably when lowpass filters were applied to the product transforms from which the correlations were calculated. Rotational corrections based on polar analyses of the images' autocorrelations, rather than of the images directly, also contributed to more accurate alignments. These methods were combined to generate 3-D reconstructions of brain capillaries imaged by transmission electron microscopy.
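The translational step can be sketched with NumPy FFTs. This is an illustrative reconstruction, not the authors' code: the Gaussian lowpass and its cutoff are assumptions standing in for the unspecified filter, and a single pass replaces the iterative procedure.

```python
import numpy as np

def translation_estimate(a, b, cutoff=0.15):
    """Estimate the (row, col) shift of image b relative to image a by
    cross-correlation, lowpass-filtering the product transform
    F(b) * conj(F(a)) before the inverse FFT. The Gaussian filter and
    cutoff (cycles/pixel) are illustrative choices."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    prod = Fb * np.conj(Fa)                       # product transform
    fy = np.fft.fftfreq(a.shape[0])[:, None]
    fx = np.fft.fftfreq(a.shape[1])[None, :]
    prod *= np.exp(-(fy ** 2 + fx ** 2) / (2.0 * cutoff ** 2))
    corr = np.fft.ifft2(prod).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular peak indices into signed shifts.
    return tuple(int(p if p <= n // 2 else p - n)
                 for p, n in zip(peak, a.shape))

# Toy sections: a noisy disk and an independently noisy copy
# shifted by 5 rows down and 3 columns left.
rng = np.random.default_rng(0)
y, x = np.mgrid[:64, :64]
disk = ((y - 32) ** 2 + (x - 32) ** 2 < 100).astype(float)
a = disk + 0.1 * rng.standard_normal(disk.shape)
b = np.roll(np.roll(disk, 5, axis=0), -3, axis=1) \
    + 0.1 * rng.standard_normal(disk.shape)
```

Suppressing high frequencies in the product transform damps the noise-dominated components that otherwise produce spurious correlation peaks between dissimilar sections, which is the stabilization the abstract reports.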