Parkinson's Disease (PD) is a movement disorder characterized by the loss of dopamine neurons in the substantia nigra pars compacta (SNpc) and norepinephrine neurons in the locus coeruleus (LC). To further understand the pathophysiology of PD, the input neurons to the SNpc and LC will be transsynaptically traced in mice using a fluorescent recombinant rabies virus (RbV) and imaged using serial two-photon tomography (STP). A mapping between these images and a brain atlas must be found to accurately determine the locations of input neurons in the brain. Therefore, a registration pipeline was developed to align the Allen Reference Atlas (ARA) to these types of images. In the preprocessing step, a brain mask was generated from the transsynaptic tracing images using simple morphological operators. The masks were then registered to the ARA using Large Deformation Diffeomorphic Metric Mapping (LDDMM), an algorithm specialized for calculating anatomically realistic transforms between images. The pipeline was then tested on an STP scan of a mouse brain labeled by an adeno-associated virus (AAV). Based on qualitative evaluation of the registration results, the pipeline was found to be sufficient for use with transsynaptic RbV tracing.
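The mask-generation step lends itself to a short illustration. The sketch below is a minimal, assumed implementation of that preprocessing step using SciPy's morphological operators; the threshold choice, structuring-element size, and function name are illustrative, not the pipeline's actual code.

```python
# Minimal sketch of brain-mask preprocessing (illustrative only; the
# threshold and structuring-element sizes are assumptions, not the
# pipeline's actual parameters).
import numpy as np
from scipy import ndimage


def make_brain_mask(volume, threshold=None, closing_radius=3):
    """Generate a binary brain mask from an STP volume with simple
    morphological operators: threshold, close small gaps, keep the
    largest connected component, and fill interior holes."""
    if threshold is None:
        threshold = volume.mean()  # crude fallback; tune per dataset
    mask = volume > threshold

    # Morphological closing bridges small gaps along the brain surface.
    struct = ndimage.generate_binary_structure(3, 1)
    struct = ndimage.iterate_structure(struct, closing_radius)
    mask = ndimage.binary_closing(mask, structure=struct)

    # Keep only the largest connected component (the brain).
    labels, n = ndimage.label(mask)
    if n > 1:
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)

    # Fill holes left by unlabeled ventricles or vessels.
    return ndimage.binary_fill_holes(mask)
```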
To facilitate rigorous virtual clinical trials using model observers for breast imaging optimization and evaluation, we
demonstrated a method for defining statistical models, based on 177 sets of breast CT patient data, to generate
tens of thousands of unique digital breast phantoms.
To separate anatomical texture from variation in breast shape, each breast CT volume in the training set was
deformed to a consistent compressed atlas geometry. Principal component analysis (PCA) was then performed on the
shape-matched breast CT volumes to capture the variation in patient breast textures. PCA decomposes the training set of
N breast CT volumes into an (N-1)-dimensional space of eigenvectors, which we call eigenbreasts. By summing weighted
combinations of eigenbreasts, a large ensemble of new, distinct breast phantoms can be created.
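As a concrete illustration of this decomposition, the following is a minimal sketch, assuming the shape-matched volumes are flattened into an (N, V) matrix and that new component weights are drawn from a Gaussian fit to the training scores; the function names and sampling distribution are our assumptions, not the authors' code.

```python
# Sketch of eigenbreast synthesis: PCA via SVD on shape-matched volumes,
# then new phantoms as the mean plus weighted sums of eigenbreasts.
import numpy as np


def fit_eigenbreasts(volumes):
    """volumes: (N, V) array of N flattened, shape-matched CT volumes.
    Returns the mean volume, eigenbreasts (N-1, V), and per-component
    standard deviations of the training scores."""
    mean = volumes.mean(axis=0)
    centered = volumes - mean
    # SVD of centered data has at most N-1 nonzero singular values.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigenbreasts = vt[:-1]                      # (N-1, V) texture basis
    stds = s[:-1] / np.sqrt(len(volumes) - 1)   # std dev of each score
    return mean, eigenbreasts, stds


def sample_phantoms(mean, eigenbreasts, stds, n_new, rng=None):
    """Draw n_new phantoms with component weights ~ N(0, std^2)."""
    rng = np.random.default_rng(rng)
    weights = rng.normal(0.0, stds, size=(n_new, len(stds)))
    return mean + weights @ eigenbreasts
```

With a basis fit per sub-population (e.g., by glandularity), sampling 30,000 weight vectors yields 30,000 distinct phantoms from the same model.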
Different training sets can be used in eigenbreast analysis for designing basis models to target sub-populations defined
by breast characteristics, such as size or density. In this work, we plan to generate ensembles of 30,000 new phantoms
based on glandularity for an upcoming virtual trial of lesion detectability in digital breast tomosynthesis.
Our method extends our series of digital and physical breast phantoms based on human subject anatomy, providing the
capability to generate new, unique ensembles of tens of thousands of virtual subjects or more. This work
represents an important step towards conducting future virtual trials for task-based assessment of breast imaging, where
a large ensemble of realistic phantoms is vital for statistical power as well as clinical relevance.
KEYWORDS: Computed tomography, Motion models, 3D modeling, Data modeling, Monte Carlo methods, Image segmentation, Image quality, Detection and tracking algorithms, 3D acquisition, 3D image processing
With the increased use of CT examinations, the associated radiation dose has become a large concern, especially for pediatric patients. Much research has focused on reducing radiation dose through new scanning and reconstruction methods. Computational phantoms provide an effective and efficient means for evaluating image quality, patient-specific dose, and organ-specific dose in CT. We previously developed a set of highly detailed 4D reference pediatric XCAT phantoms at ages of newborn, 1, 5, 10, and 15 years, with organ and tissue masses matched to ICRP Publication 89 values. We now extend this reference set to a series of 64 pediatric phantoms of varying ages and height and weight percentiles, representative of the public at large. High-resolution PET-CT data was reviewed by an experienced practicing radiologist for anatomic regularity and was then segmented with manual and semi-automatic methods to form a target model. A Multi-Channel Large Deformation Diffeomorphic Metric Mapping (MC-LDDMM) algorithm was used to calculate the transform from the best age-matching pediatric reference phantom to the patient target. The transform was used to complete the target, filling in the non-segmented structures and defining models for the cardiac and respiratory motions. The complete phantoms, consisting of thousands of structures, were then manually inspected for anatomical accuracy. 3D CT data was simulated from the phantoms to demonstrate their ability to generate realistic, patient-quality imaging data. The population of pediatric phantoms developed in this work provides a vital tool to investigate dose reduction techniques in 3D and 4D pediatric CT.
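A full MC-LDDMM implementation is beyond a short example, but the step of carrying the reference phantom's structures onto the patient grid can be sketched. Below is an assumed illustration that applies a precomputed displacement field to a label volume with nearest-neighbor interpolation; the field convention and helper name are hypothetical, not the authors' code.

```python
# Illustrative sketch (assumed helper, not the MC-LDDMM implementation):
# apply a precomputed displacement field to carry the reference
# phantom's structure labels onto the patient target grid.
import numpy as np
from scipy.ndimage import map_coordinates


def warp_labels(reference_labels, displacement):
    """reference_labels: (Z, Y, X) integer label volume of the reference
    phantom. displacement: (3, Z, Y, X) field mapping each target voxel
    to a location in reference coordinates. Nearest-neighbor
    interpolation (order=0) preserves integer structure labels."""
    grid = np.indices(reference_labels.shape).astype(np.float64)
    coords = grid + displacement   # target voxel -> reference location
    return map_coordinates(reference_labels, coords, order=0,
                           mode="nearest")
```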
KEYWORDS: 3D modeling, Motion models, Computed tomography, Image segmentation, Medical imaging, Monte Carlo methods, Medical research, Imaging devices, Computer simulations, Data modeling
Computerized phantoms are finding an increasingly important role in medical imaging research. With the ability to
simulate various imaging conditions, they offer a practical means with which to quantitatively evaluate and improve
imaging devices and techniques. This is especially true in CT, given the radiation dose involved. Despite
their utility, only a handful of computational models of varying detail currently exist, owing to the time required to
develop them. Most available phantoms are limited to 3D and cannot model patient motion. We have previously
developed a technique to rapidly create highly detailed 4D extended cardiac-torso (XCAT) phantoms based on patient
CT data [1].
In this study, we utilize this technique to generate 58 new adult XCAT phantoms to be added to our growing library of
virtual patients available for imaging research. These computerized patients provide a valuable tool for investigating
imaging devices and the effects of anatomy and motion in imaging. They also provide an essential tool for investigating
patient-specific dose estimation and optimization for adults undergoing CT procedures.
KEYWORDS: Computed tomography, 3D modeling, Bone, Image segmentation, Data modeling, Mathematical modeling, Natural surfaces, Medical imaging, Chest, Algorithm development
We create a series of detailed computerized phantoms to estimate patient organ and effective dose in pediatric CT and
investigate techniques for efficiently creating patient-specific phantoms based on imaging data. The initial anatomy of
each phantom was previously developed based on manual segmentation of pediatric CT data. Each phantom was
extended to include a more detailed anatomy based on morphing an existing adult phantom in our laboratory to match
the framework (based on segmentation) defined for the target pediatric model. By morphing a template anatomy to
match the patient data within the LDDMM framework, it was possible to create a patient-specific phantom with many
anatomical structures, including some not visible in the CT data. The adult models contain thousands of defined structures,
which were transformed to define the corresponding structures in each pediatric anatomy. The accuracy of this method, under different conditions, was
tested using a known voxelized phantom as the target. Errors were measured in terms of a distance map between the
predicted organ surfaces and the known ones. We also compared calculated dose measurements to see the effect of
different magnitudes of morphing error. Despite some variations in organ geometry, dose measurements from the
morphed predictions agreed with those calculated from the voxelized phantom, demonstrating the
feasibility of our methods.
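The surface-distance error measure can be sketched as follows; this is an assumed formulation using Euclidean distance transforms, not the paper's exact code.

```python
# Sketch of a surface-distance error map: for each voxel on the
# predicted organ surface, the distance to the nearest voxel of the
# known (voxelized-phantom) surface.
import numpy as np
from scipy import ndimage


def surface_distances(pred_mask, true_mask, spacing=(1.0, 1.0, 1.0)):
    """Return distances (in physical units, given voxel spacing) from
    each predicted-surface voxel to the nearest true-surface voxel."""
    def surface(mask):
        eroded = ndimage.binary_erosion(mask)
        return mask & ~eroded      # boundary voxels of the mask

    true_surf = surface(true_mask)
    # Distance transform of the complement gives, at every voxel, the
    # distance to the nearest true-surface voxel.
    dist_to_true = ndimage.distance_transform_edt(~true_surf,
                                                  sampling=spacing)
    return dist_to_true[surface(pred_mask)]
```

Summary statistics of the returned distances (e.g., mean or maximum) then quantify how far each predicted organ surface deviates from the known one.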
We propose a method which allows for the flexibility of a Gaussian mixture model, with model complexity selected adaptively from the data, for each tissue class. Our procedure involves modelling each class as a semiparametric mixture of Gaussians. The major difficulty associated with employing such semiparametric methods is overcome by dynamically solving the model selection problem. The crucial step of determining class-conditional mixture complexities for (unlabeled) test data in the unsupervised case is accomplished by matching models to a predefined database of hand-labelled experimental tissue samples. We model the class-conditional probability density functions via the "alternating kernel and mixture" (AKM) method, which involves (1) semiparametric estimation of subject-specific class-conditional marginal densities for a set of training volumes, (2) nearest-neighbor matching of the test data to the training models, providing semi-automated class-conditional mixture complexities, (3) parameter fitting of the selected training model to the test data, and (4) plug-in Bayes classification of unlabeled voxels. Compared with previous approaches using partial volume mixtures for ten cingulate gyri, the hierarchical mixture model methodology provides superior automatic segmentation results, with a performance improvement that is statistically significant (p=0.03 for a paired one-sided t-test).
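Steps (3) and (4) can be illustrated with per-class Gaussian mixtures. The sketch below uses scikit-learn's GaussianMixture as a stand-in for the AKM machinery and assumes the class-conditional mixture complexities have already been selected by the matching step; the function names are illustrative.

```python
# Minimal sketch of steps (3)-(4): fit a Gaussian mixture per tissue
# class at a given complexity, then plug-in Bayes classification of
# unlabeled voxels (an assumed stand-in for the AKM method).
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_class_mixtures(features, labels, complexities):
    """complexities: dict mapping class label -> mixture order selected
    upstream (by nearest-neighbor matching to training models)."""
    models, priors = {}, {}
    for c, k in complexities.items():
        x = features[labels == c]
        models[c] = GaussianMixture(n_components=k).fit(x)
        priors[c] = len(x) / len(features)
    return models, priors


def plug_in_bayes(models, priors, test_features):
    """Assign each voxel to the class maximizing prior * density."""
    classes = sorted(models)
    log_post = np.stack([models[c].score_samples(test_features)
                         + np.log(priors[c]) for c in classes], axis=1)
    return np.array(classes)[np.argmax(log_post, axis=1)]
```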