Medical imaging plays a key role in guiding treatment of traumatic brain injury (TBI) and in diagnosing intracranial hemorrhage; most commonly, rapid computed tomography (CT) imaging is performed. Outcomes for patients with TBI are variable and difficult to predict upon hospital admission. Quantitative outcome scales (e.g., the Marshall classification) have been proposed to grade TBI severity on CT, but such measures have had relatively low value in staging patients by prognosis. Herein, we examine a cohort of 1,003 subjects admitted for TBI and imaged clinically to identify potential prognostic metrics using a “big data” paradigm. For all patients, a brain scan was segmented with multi-atlas labeling, and intensity/volume/texture features were computed in a localized manner. In a 10-fold cross-validation approach, the explanatory value of the image-derived features is assessed for length of hospital stay (days), discharge disposition (a five-point scale from death to return home), and the Rancho Los Amigos functional outcome score (Rancho Score). Image-derived features increased the predictive R2 to 0.38 (from 0.18) for length of stay, to 0.51 (from 0.4) for discharge disposition, and to 0.31 (from 0.16) for Rancho Score (over models consisting only of non-imaging admission metrics, but including positive/negative radiological CT findings). This study demonstrates that high-volume retrospective analysis of clinical imaging data can reveal imaging signatures with prognostic value. These signatures are suited for follow-up validation and represent targets for future feature-selection efforts. Moreover, the increase in prognostic value would improve staging for intervention assessment and provide more reliable guidance for patients.
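The evaluation above amounts to comparing cross-validated R2 between a baseline model (admission metrics only) and an augmented model (admission plus image-derived features). A minimal sketch of that comparison, assuming scikit-learn, a plain linear model, and synthetic stand-in features (the study's actual features and model are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n = 1003  # cohort size from the abstract

# Hypothetical admission-only covariates and image-derived features
X_admit = rng.normal(size=(n, 3))    # e.g., age, admission score, CT finding
X_image = rng.normal(size=(n, 20))   # e.g., localized intensity/volume/texture
# Synthetic outcome with contributions from both feature sets
y = X_admit @ rng.normal(size=3) + X_image @ (0.3 * rng.normal(size=20)) \
    + rng.normal(size=n)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
r2_admit = cross_val_score(LinearRegression(), X_admit, y, cv=cv,
                           scoring="r2").mean()
r2_full = cross_val_score(LinearRegression(), np.hstack([X_admit, X_image]), y,
                          cv=cv, scoring="r2").mean()
print(f"admission-only R2: {r2_admit:.2f}; with imaging features: {r2_full:.2f}")
```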
The optic nerve (ON) plays a critical role in many devastating pathological conditions. Segmentation of the ON can provide insight into anatomical development and the progression of diseases of the ON. Recently, methods have been proposed to segment the ON, but progress toward full automation has been limited. We optimize registration and fusion methods for a new multi-atlas framework for automated segmentation of the ONs, eye globes, and muscles on clinically acquired computed tomography (CT) data. Briefly, the multi-atlas approach consists of determining a region of interest within each scan using affine registration, followed by nonrigid registration on reduced field-of-view atlases, and performing statistical fusion on the results. We evaluate the robustness of the approach by segmenting the ON structure in 501 clinically acquired CT scan volumes obtained from 183 subjects from a thyroid eye disease patient population. A subset of 30 scan volumes was manually labeled to assess accuracy and guide method choice. Of the 18 compared methods, ANTs Symmetric Normalization registration with nonlocal spatial simultaneous truth and performance level estimation statistical fusion yielded the best overall performance, with a median Dice similarity coefficient of 0.77, which is comparable to inter-rater (human) reproducibility at 0.73.
KEYWORDS: Image segmentation, Optic nerve, Magnetic resonance imaging, Eye, In vivo imaging, Error analysis, Image fusion, Biomedical optics, Magnetorheological finishing, Control systems
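A minimal end-to-end sketch of the pipeline in the preceding abstract (registration of each atlas to the target followed by fusion), assuming the open-source ANTsPy package, placeholder file names, and a simple majority vote in place of the statistical fusion step:

```python
import numpy as np
import ants  # ANTsPy: pip install antspyx

target = ants.image_read("target_ct.nii.gz")
atlas_imgs = [ants.image_read(f"atlas_{i}_ct.nii.gz") for i in range(5)]
atlas_labs = [ants.image_read(f"atlas_{i}_labels.nii.gz") for i in range(5)]

warped_labels = []
for img, lab in zip(atlas_imgs, atlas_labs):
    # "SyN" in ANTsPy runs an affine initialization followed by deformable SyN
    reg = ants.registration(fixed=target, moving=img, type_of_transform="SyN")
    warped = ants.apply_transforms(fixed=target, moving=lab,
                                   transformlist=reg["fwdtransforms"],
                                   interpolator="nearestNeighbor")
    warped_labels.append(warped.numpy().astype(int))

# Majority vote across atlases at each voxel (stand-in for statistical fusion)
stack = np.stack(warped_labels)          # shape: (n_atlases, *volume_shape)
n_labels = int(stack.max()) + 1
votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
for warped in stack:
    for l in range(n_labels):
        votes[l] += (warped == l)
fused = votes.argmax(axis=0)             # fused label volume
```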
Multiatlas methods have been successful for brain segmentation, but their application to smaller anatomies remains relatively unexplored. We evaluate seven statistical and voting-based label fusion algorithms (and six additional variants) to segment the optic nerves, eye globes, and chiasm. For nonlocal simultaneous truth and performance level estimation (STAPLE), we evaluate different intensity similarity measures (including mean square difference, locally normalized cross-correlation, and a hybrid approach). Each algorithm is evaluated in terms of the Dice overlap and symmetric surface distance metrics. Finally, we evaluate refinement of label fusion results using a learning-based correction method for consistent bias correction and Markov random field regularization. The multiatlas labeling pipelines were evaluated on a cohort of 35 subjects including both healthy controls and patients. Across all three structures, nonlocal spatial STAPLE (NLSS) with a mixed weighting type provided the most consistent results; for the optic nerves, NLSS resulted in a median Dice similarity coefficient of 0.81, a mean surface distance of 0.41 mm, and a Hausdorff distance of 2.18 mm. Joint label fusion resulted in slightly superior median performance for the optic nerves (0.82, 0.39 mm, and 2.15 mm), but slightly worse performance on the globes. The fully automated multiatlas labeling approach provides robust segmentations of orbital structures on magnetic resonance imaging even in patients for whom significant atrophy (optic nerve head drusen) or inflammation (multiple sclerosis) is present.
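The three metrics reported above can be computed from binary masks as follows; a minimal sketch assuming NumPy/SciPy and isotropic voxels (anisotropic spacing would be passed through the `sampling` argument of the distance transform):

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def _surface_distances(a, b, spacing=1.0):
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~ndimage.binary_erosion(a)   # boundary voxels of A
    surf_b = b & ~ndimage.binary_erosion(b)   # boundary voxels of B
    dt_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dt_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    return dt_b[surf_a], dt_a[surf_b]         # A->B and B->A distances

def mean_surface_distance(a, b, spacing=1.0):
    d_ab, d_ba = _surface_distances(a, b, spacing)
    return (d_ab.mean() + d_ba.mean()) / 2.0

def hausdorff_distance(a, b, spacing=1.0):
    d_ab, d_ba = _surface_distances(a, b, spacing)
    return max(d_ab.max(), d_ba.max())
```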
The optic nerve is a sensitive central nervous system structure, which plays a critical role in many devastating pathological conditions. Several methods have been proposed in recent years to segment the optic nerve automatically, but progress toward full automation has been limited. Multi-atlas methods have been successful for brain segmentation, but their application to smaller anatomies remains relatively unexplored. Herein, we evaluate a framework for robust and fully automated segmentation of the optic nerves, eye globes, and muscles. We employ a registration procedure that yields accurate registrations despite variable voxel resolution and image field-of-view. We demonstrate the efficacy of an optimal combination of SyN registration and a recently proposed label fusion algorithm (Non-local Spatial STAPLE) that accounts for small-scale errors in registration correspondence. On a dataset containing 30 highly varying computed tomography (CT) images of the human brain, the optimal registration and label fusion pipeline resulted in a median Dice similarity coefficient of 0.77, a symmetric mean surface distance error of 0.55 mm, and a symmetric Hausdorff distance error of 3.33 mm for the optic nerves. Simultaneously, we demonstrate the robustness of the optimal algorithm by segmenting the optic nerve structure in 316 CT scans obtained from 182 subjects from a thyroid eye disease (TED) patient population.
Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy.
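For reference, the flat (non-hierarchical) rater-performance model that this framework generalizes can be written down compactly. A minimal sketch of binary STAPLE's EM iteration, assuming NumPy and a scalar foreground prior (the hierarchical, multi-label machinery above is not reproduced):

```python
import numpy as np

def staple_binary(D, prior=0.5, n_iter=50):
    """D: (n_voxels, n_raters) binary decisions. Returns the soft truth and
    per-rater sensitivity p / specificity q estimates."""
    n_vox, n_raters = D.shape
    p = np.full(n_raters, 0.9)
    q = np.full(n_raters, 0.9)
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is truly foreground
        log_on = np.log(prior) + (D * np.log(p) + (1 - D) * np.log(1 - p)).sum(1)
        log_off = np.log(1 - prior) + ((1 - D) * np.log(q) + D * np.log(1 - q)).sum(1)
        w = 1.0 / (1.0 + np.exp(log_off - log_on))
        # M-step: re-estimate rater performance against the soft truth
        p = np.clip((w[:, None] * D).sum(0) / w.sum(), 1e-6, 1 - 1e-6)
        q = np.clip(((1 - w)[:, None] * (1 - D)).sum(0) / (1 - w).sum(), 1e-6, 1 - 1e-6)
    return w, p, q
```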
Spleen segmentation on clinically acquired CT data is a challenging problem given the complexity and variability of abdominal anatomy. Multi-atlas segmentation is a potential method for robust estimation of spleen segmentations, but can be negatively impacted by registration errors. Although labeled atlases explicitly capture information related to feasible organ shapes, multi-atlas methods have largely used this information implicitly through registration. We propose to integrate a level set shape model into the traditional label fusion framework to create a shape-constrained multi-atlas segmentation framework. Briefly, we (1) adapt two alternative atlas-to-target registrations to obtain loose bounds on the inner and outer boundaries of the spleen shape, (2) project the fusion estimate onto the registered shape models, and (3) convert the projected shape into a shape prior. With the constraint of the shape prior, our proposed method offers a statistically significant improvement in spleen labeling accuracy, increasing the Dice similarity coefficient (DSC) by 0.06 and decreasing the symmetric mean surface distance by 4.01 mm and the symmetric Hausdorff surface distance by 23.21 mm when compared to a locally weighted vote (LWV) method.
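Step (3) above, converting a shape into a voxel-wise prior, is commonly done via a signed distance transform. A minimal sketch assuming SciPy; the logistic mapping and its scale `tau` are illustrative choices, not the paper's exact formulation:

```python
import numpy as np
from scipy import ndimage

def shape_to_prior(shape_mask, tau=2.0):
    """Map a binary shape mask to a soft per-voxel probability prior."""
    mask = shape_mask.astype(bool)
    inside = ndimage.distance_transform_edt(mask)    # depth inside the shape
    outside = ndimage.distance_transform_edt(~mask)  # distance outside it
    signed = inside - outside                        # >0 inside, <0 outside
    return 1.0 / (1.0 + np.exp(-signed / tau))

# The prior can then multiply the label fusion posterior before the final argmax.
```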
Multi-atlas registration-based segmentation is a popular technique in the medical imaging community, used to transfer anatomical and functional information from a set of atlases onto a new patient that lacks this information. The accuracy of the projected information on the target image depends on the quality of the registrations between the atlas images and the target image. Recently, we developed a technique called AQUIRC, which aims to estimate the error of a non-rigid registration at the local level and was shown to correlate with error in a simulated case. Herein, we extend this work by applying AQUIRC to atlas selection at the local level across multiple structures in cases in which non-rigid registration is difficult. AQUIRC is applied to six structures: the brainstem, optic chiasm, left and right optic nerves, and left and right eyes. We compare the results of AQUIRC to those of popular techniques, including Majority Vote, STAPLE, Non-Local STAPLE, and Locally-Weighted Vote. We show that AQUIRC can be used as a method to combine multiple segmentations and increase the accuracy of the projected information on a target image, and that it is comparable to cutting-edge methods in the multi-atlas segmentation field.
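One of the baselines named above, Locally-Weighted Vote, weights each registered atlas voxel-wise by its local intensity similarity to the target. A minimal sketch assuming NumPy/SciPy arrays already resampled into target space; the window size and Gaussian scale `sigma` are illustrative:

```python
import numpy as np
from scipy import ndimage

def locally_weighted_vote(target, atlas_imgs, atlas_labs, win=5, sigma=30.0):
    """target: intensity volume; atlas_imgs/atlas_labs: registered atlases."""
    votes = {}
    for img, lab in zip(atlas_imgs, atlas_labs):
        # local mean squared intensity difference over a win^3 neighborhood
        msd = ndimage.uniform_filter((target - img) ** 2, size=win)
        w = np.exp(-msd / (2.0 * sigma ** 2))       # per-voxel atlas weight
        for l in np.unique(lab):
            votes[l] = votes.get(l, 0) + w * (lab == l)
    labels = np.array(sorted(votes))
    return labels[np.stack([votes[l] for l in labels]).argmax(axis=0)]
```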
Anatomical contexts (spatial labels) are critical for interpretation of medical imaging content. Numerous approaches have been devised for segmentation, query, and retrieval within the Picture Archive and Communication System (PACS) framework. To date, application-based methods for anatomical localization and tissue classification have yielded the most successful results, but these approaches typically rely upon the availability of standardized imaging sequences. With the ever-expanding scope of PACS archives (including multiple imaging modalities, multiple image types within a modality, and multi-site efforts), it is becoming increasingly burdensome to devise a specific method for each data type. To address the challenge of generalizing segmentations from one modality to another, we consider multi-atlas segmentation to transfer label information from labeled T1-weighted MRI data to unlabeled data collected in a diffusion tensor imaging (DTI) experiment. The label transfer approach is fully automated and enables a generalizable cross-modality segmentation method. Herein, we propose a multi-tier multi-atlas segmentation framework for the segmentation of previously unlabeled imaging modalities (e.g., B0 images for DTI analysis). We show that this approach can be used to construct informed structure-wise noise estimates for fractional anisotropy (FA) measurements of DTI. Although this label transfer methodology is demonstrated in the context of quality control of DTI images, the proposed framework is applicable to any application in which segmentation of an unlabeled modality is limited by the current collection of available atlases.
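Given a segmentation transferred onto the DTI space, the structure-wise noise estimates mentioned above reduce to per-label statistics of the FA volume. A minimal sketch assuming NumPy arrays and label 0 as background:

```python
import numpy as np

def structurewise_fa_stats(fa, labels):
    """Return {label: (mean FA, FA std)} for each labeled structure."""
    stats = {}
    for l in np.unique(labels):
        if l == 0:                 # skip background
            continue
        vals = fa[labels == l]
        stats[int(l)] = (vals.mean(), vals.std())
    return stats
```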
Ventral hernias (VHs) are abnormal openings in the anterior abdominal wall that are common complications of surgical intervention. VH repair is the most commonly performed procedure by general surgeons worldwide, but repair outcomes are not particularly encouraging (with recurrence rates up to 43%). A variety of open and laparoscopic techniques are available for hernia repair, and the specific technique used is ultimately driven by surgeon preference and experience. Despite routine acquisition of computed tomography (CT) for VH patients, little quantitative information is available on which to guide selection of a particular approach and/or optimize patient-specific treatment. From anecdotal interviews, the success of VH repair procedures correlates with hernia size, location, and involvement of secondary structures. Herein, we propose an image labeling protocol to segment the anterior abdominal area to provide a geometric basis with which to derive biomarkers and evaluate treatment efficacy. Based on routine clinical CT data, we are able to identify the inner and outer surfaces of the abdominal walls and the herniated volume. This is the first formal presentation of a protocol to quantify these structures on abdominal CT. The intra- and inter-rater reproducibility of this protocol is evaluated on 4 patients with suspected VH (3 patients were ultimately diagnosed with VH while 1 was not). Mean surface distances of less than 2 mm were achieved for all structures.
Labeling or segmentation of structures of interest on medical images plays an essential role in both clinical and scientific understanding of the biological etiology, progression, and recurrence of pathological disorders. Here, we focus on the optic nerve, a structure that plays a critical role in many devastating pathological conditions, including glaucoma, ischemic neuropathy, optic neuritis, and multiple sclerosis. Ideally, existing fully automated procedures would result in accurate and robust segmentation of the optic nerve anatomy. However, current segmentation procedures often require manual intervention due to anatomical and imaging variability. Herein, we propose a framework for robust and fully automated segmentation of the optic nerve anatomy. First, we provide a robust registration procedure that results in consistent registrations despite highly varying data in terms of voxel resolution and image field-of-view. Additionally, we demonstrate the efficacy of a recently proposed non-local label fusion algorithm that accounts for small-scale errors in registration correspondence. On a dataset consisting of 31 highly varying computed tomography (CT) images of the human brain, we demonstrate that the proposed framework consistently results in accurate segmentations. In particular, we show (1) that the proposed registration procedure results in robust registrations of the optic nerve anatomy, and (2) that the non-local statistical fusion algorithm significantly outperforms several state-of-the-art label fusion algorithms.
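The non-local fusion idea referenced above relaxes the assumption that registration gives exact voxel-to-voxel correspondence: each target voxel searches a small neighborhood of the registered atlas for the best-matching intensity patches and weights labels accordingly. A minimal, unoptimized single-atlas sketch assuming NumPy; the patch and search radii and the Gaussian scale are illustrative:

```python
import numpy as np

def nonlocal_vote(target, atlas_img, atlas_lab, search=2, patch=1, sigma=25.0):
    """Accumulate patch-similarity-weighted label votes from one atlas."""
    n_labels = int(atlas_lab.max()) + 1
    out = np.zeros(target.shape + (n_labels,))
    p, s = patch, search
    for x, y, z in np.ndindex(target.shape):
        # skip a border so every patch/search window stays in bounds
        if min(x, y, z) < p + s or any(
                i >= d - p - s for i, d in zip((x, y, z), target.shape)):
            continue
        t_patch = target[x-p:x+p+1, y-p:y+p+1, z-p:z+p+1]
        for dx in range(-s, s + 1):
            for dy in range(-s, s + 1):
                for dz in range(-s, s + 1):
                    u, v, w = x + dx, y + dy, z + dz
                    a_patch = atlas_img[u-p:u+p+1, v-p:v+p+1, w-p:w+p+1]
                    sim = np.exp(-((t_patch - a_patch) ** 2).mean()
                                 / (2.0 * sigma ** 2))
                    out[x, y, z, int(atlas_lab[u, v, w])] += sim
    return out  # summed over atlases and normalized, this drives the fusion
```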
Malignant gliomas are the most common form of primary neoplasm in the central nervous system, and one of the most rapidly fatal of all human malignancies. They are treated by maximal surgical resection followed by radiation and chemotherapy. Herein, we seek to improve the methods available to quantify the extent of tumors using newly presented, collaborative labeling techniques on magnetic resonance imaging. Traditionally, labeling medical images has required expert raters to operate on one image at a time, which is resource-intensive and impractical for very large datasets. Using many minimally trained raters to label images could minimize laboratory requirements, allow a high degree of parallelism, and reduce overall cost. This potentially transformative technology presents a new set of problems, because the labeling challenge must be posed in a manner accessible to people with little or no background in labeling medical images, and raters cannot be expected to read detailed instructions. Hence, a different training method must be employed, one that appeals to all types of learners and presents the same concepts in multiple ways to ensure that all subjects understand the basics of labeling. Our overall objective is to demonstrate the feasibility of studying malignant glioma morphometry through statistical analysis of the collaborative efforts of many minimally trained raters. This study presents preliminary results on optimization of the WebMILL framework for neoplasm labeling and investigates the initial contributions of 78 raters labeling 98 whole-brain datasets.
Image labeling is an essential step for quantitative analysis of medical images. Many image labeling algorithms require seed identification in order to initialize segmentation algorithms such as region growing, graph cuts, and the random walker. Seeds are usually placed manually by human raters, which makes these algorithms semi-automatic and can be prohibitive for very large datasets. In this paper, an automatic algorithm for placing seeds using multi-atlas registration and statistical fusion is proposed. Atlases containing the centers of mass of a collection of neuroanatomical objects are deformably registered to a training set to determine where these centers of mass map after the labels are transformed by registration. The biases of these transformations are determined and incorporated into a continuous form of Simultaneous Truth And Performance Level Estimation (STAPLE) fusion, thereby improving the estimates (on average) over a single-registration strategy that incorporates neither bias nor fusion. We evaluate this technique using real 3D brain MR image atlases and demonstrate its efficacy in correcting bias and reducing fusion error.
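At its simplest, the bias-corrected fusion of mapped seed points reduces to subtracting each atlas's characteristic displacement error (estimated on training registrations) before combining the points. A minimal sketch assuming NumPy; plain averaging stands in for the continuous STAPLE formulation, which additionally weights atlases by their estimated covariance:

```python
import numpy as np

def fuse_seeds(mapped_points, biases):
    """mapped_points: (n_atlases, 3) seed coordinates after registration.
    biases: (n_atlases, 3) mean displacement error from training data."""
    corrected = np.asarray(mapped_points) - np.asarray(biases)
    return corrected.mean(axis=0)   # fused seed estimate in target space
```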
Segmentation plays a critical role in exposing connections between biological structure and function. The process of label fusion collects and combines multiple observations into a single estimate. Statistically driven techniques provide mechanisms to optimally combine segmentations; yet, optimality hinges upon accurate modeling of rater behavior. Traditional approaches, e.g., Majority Vote and Simultaneous Truth and Performance Level Estimation (STAPLE), have been shown to yield excellent performance in some cases, but do not account for spatial dependencies of rater performance (i.e., regional task difficulty). Recently, the COnsensus Level, Labeler Accuracy and Truth Estimation (COLLATE) label fusion technique augmented the seminal STAPLE approach to simultaneously estimate regions of relative consensus versus confusion along with rater performance. Herein, we extend the COLLATE framework to account for multiple consensus levels. Toward this end, we posit a generalized model of rater behavior of which Majority Vote, STAPLE, STAPLE Ignoring Consensus Voxels, and COLLATE are special cases. The new algorithm is evaluated with simulations and shown to yield improved performance in cases with complex regional difficulties. Multi-COLLATE achieves these results by capturing the different consensus levels. The potential impacts and applications of this generative model to label fusion problems are discussed.
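To make the notion of regional task difficulty concrete, consider a toy simulation in which raters are nearly perfect in a consensus region but close to chance in a confusion region; all rates below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=1000)          # binary ground-truth labels
consensus = np.arange(1000) < 700              # first 70% of voxels are "easy"
flip_rate = np.where(consensus, 0.02, 0.45)    # per-voxel error probability

# seven simulated raters, each flipping the truth with the regional rate
raters = np.stack([np.where(rng.random(1000) < flip_rate, 1 - truth, truth)
                   for _ in range(7)])
majority = (raters.mean(axis=0) > 0.5).astype(int)
print("majority-vote accuracy, easy region:", (majority == truth)[consensus].mean())
print("majority-vote accuracy, hard region:", (majority == truth)[~consensus].mean())
```

A fusion model that estimates a single global performance level per rater blurs these two regimes together; modeling consensus levels explicitly lets the hard region be weighted appropriately.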
Labeling or segmentation of structures of interest in medical imaging plays an essential role in both clinical and scientific understanding. Two of the common techniques to obtain these labels are fully automated segmentation and multi-atlas based segmentation with label fusion. Fully automated techniques often result in highly accurate segmentations but lack the robustness to be viable in many cases. On the other hand, label fusion techniques are often extremely robust, but lack the accuracy of automated algorithms for specific classes of problems. Herein, we propose to perform simultaneous automated segmentation and statistical label fusion through the reformulation of a generative model to include a linkage structure that explicitly estimates the complex global relationships between labels and intensities. These relationships are inferred from the atlas labels and intensities and applied to the target using a non-parametric approach. The novelty of this approach lies in uniting previously exclusive techniques, combining the accuracy benefits of automated segmentation with the robustness of a multi-atlas based approach. The accuracy benefits of this simultaneous approach are assessed using a multi-label multi-atlas whole-brain segmentation experiment and the segmentation of the highly variable thyroid on computed tomography images. The results demonstrate that this technique has major benefits for certain types of problems and has the potential to provide a paradigm shift in which the lines between statistical label fusion and automated segmentation are dramatically blurred.
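One simple non-parametric way to realize the label-intensity linkage described above is to estimate per-label intensity histograms from the atlases and use them as likelihoods that re-weight the fusion posterior on the target. A minimal sketch assuming NumPy; the bin count, intensity range, and Laplace smoothing are illustrative choices:

```python
import numpy as np

def intensity_likelihoods(atlas_ints, atlas_labs, n_labels,
                          bins=64, lo=0.0, hi=255.0):
    """Estimate smoothed p(intensity | label) histograms from atlas data."""
    edges = np.linspace(lo, hi, bins + 1)
    hists = np.zeros((n_labels, bins))
    for l in range(n_labels):
        h, _ = np.histogram(atlas_ints[atlas_labs == l], bins=edges)
        hists[l] = (h + 1.0) / (h.sum() + bins)   # Laplace smoothing
    return edges, hists

def fuse_with_intensity(fusion_prior, target_int, edges, hists):
    """fusion_prior: (n_labels, *shape) voxel-wise label probabilities."""
    idx = np.clip(np.digitize(target_int, edges) - 1, 0, hists.shape[1] - 1)
    post = fusion_prior * np.stack([h[idx] for h in hists])  # prior * likelihood
    return post / post.sum(axis=0, keepdims=True)
```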
Labeling or parcellation of structures of interest on magnetic resonance imaging (MRI) is essential in quantifying and characterizing correlation with numerous clinically relevant conditions. The use of statistical methods with automated techniques or complete data sets from several different raters has been proposed to simultaneously estimate both rater reliability and true labels. An extension to these statistically based methodologies was proposed that allowed for missing labels, repeated labels, and training trials. Herein, we present and demonstrate the viability of these statistically based methodologies using real-world data contributed by minimally trained human raters. The consistency of the statistical estimates, the accuracy compared to the individual observations, and the variability of both the estimates and the individual observations with respect to the number of labels are discussed. It is demonstrated that the Gaussian-based statistical approach using the previously presented extensions successfully performs label fusion in a variety of contexts using data from online (Internet-based) collaborations among minimally trained raters. This first successful demonstration of a statistically based approach using "wild-type" data opens numerous possibilities for very large scale efforts in collaboration. Extension and generalization of these technologies to new application spaces will certainly present fascinating areas for continuing research.
Labeling structures on medical images is crucial in determining clinically relevant correlations with morphometric and volumetric features. For the exploration of new structures and new imaging modalities, validated automated methods do not yet exist, and so researchers must rely on manually drawn landmarks. Voxel-by-voxel labeling can be extremely resource-intensive, so large-scale studies are problematic. Recently, statistical approaches and software have been proposed to enable Internet-based collaborative labeling of medical images. While numerous labeling software tools have been created, the use of these packages as high-throughput labeling systems has yet to become entirely viable given training requirements. Herein, we explore two modifications to a typical mouse-based labeling system: (1) a platform-independent overlay for recognition of mouse gestures and (2) an inexpensive touch-screen tracking device for non-mouse input. Through this study we characterize rater reliability in point, line, curve, and region placement. For the mouse input, we find a placement accuracy of 2.48±5.29 pixels (point), 0.630±1.81 pixels (curve), 1.234±6.99 pixels (line), and 0.058±0.027 (1 - Jaccard index, for regions). The gesture software increased labeling speed by 27% overall and accuracy by approximately 30-50% on point and line tracing tasks, but the touch-screen module led to slower and more error-prone labeling on all tasks, likely due to relatively poor sensitivity. In summary, the mouse gesture integration layer runs as a seamless operating system overlay and could potentially benefit any labeling software; yet, the inexpensive touch-screen system requires improved usability optimization and calibration before it can provide an efficient labeling system.