Parkinson's disease (PD) is a common neurodegenerative disorder whose accurate diagnosis remains a challenge. PET imaging with [18F]-fluorodeoxyglucose provides a metabolic pattern that highlights the brain substructures related to PD, thus constituting a valuable diagnostic tool. Moreover, incorporating MRI into the analysis has been reported to enhance the performance of methods that discriminate between healthy subjects and PD patients. In this work, a methodology is proposed that integrates structural and metabolic imaging information at specific substructures of interest, spatially aligns both modalities, normalizes the functional images, and extracts suitable biomarkers. Among the structural parameters, compactness and tortuosity are proposed, while the metabolic biomarkers are extracted from histogram analyses. The random forest algorithm is used for both classification and feature selection. The studied populations consisted of nine patients with a PD diagnosis and 12 healthy controls. Structural biomarkers contributed little to group discrimination, while metabolic biomarkers yielded accuracies of 85% (training) to 100% (final test). The proposed methodology is promising for PD diagnosis and can be extended to other movement disorders.
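The histogram analyses mentioned above can be sketched as follows. The abstract does not list the exact metabolic feature set, so mean, standard deviation, skewness and Shannon entropy are assumed here as representative histogram descriptors:

```python
import numpy as np

def histogram_biomarkers(roi, bins=64):
    """Illustrative metabolic biomarkers from an ROI intensity histogram.

    Assumed feature set (the paper only states 'histogram analyses'):
    mean, standard deviation, skewness and Shannon entropy.
    """
    roi = np.asarray(roi, dtype=float).ravel()
    hist, edges = np.histogram(roi, bins=bins)
    p = hist / hist.sum()                        # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2.0
    mu = np.sum(p * centers)                     # histogram mean
    sigma = np.sqrt(np.sum(p * (centers - mu) ** 2))
    skew = np.sum(p * ((centers - mu) / sigma) ** 3)
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))          # Shannon entropy (bits)
    return {"mean": mu, "std": sigma, "skewness": skew, "entropy": entropy}
```

These per-structure descriptors would then be fed to the random forest for classification and feature ranking.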
An indirect method for measuring tissue consistency is proposed, based on intensity and texture features of conventional ultrasound (US) cervix images. Calibration and validation were carried out on five phantoms simulating different degrees of cervical firmness, as well as on short and long cervices. Several image features derived from the histogram, the co-occurrence matrix and the run-length encoding (RLE) matrix were extracted and analyzed to evaluate their ability to distinguish between the phantoms' degrees of firmness. The indices most indicative of firmness were selected by correlating their values with the phantoms' elasticities, determined through Young's moduli. In addition, a random forest classifier was implemented to identify the features that contribute most to class separation between phantoms. Combining both tests, six features were selected: mean, standard deviation, entropy, skewness and two RLE-matrix features. The model was evaluated with 6-fold cross validation, obtaining an accuracy of 98.9±0.79%. Finally, a preliminary case study was conducted on US images of closed and open cervices, classifying them into both groups with a random forest model at 84.34% accuracy. These tests show that intensity and texture features extracted from conventional US images provide indirect, less invasive information on tissue consistency than other methods, and may therefore be used to measure changes in cervical firmness.
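The co-occurrence-matrix features above can be sketched with a minimal gray-level co-occurrence matrix (GLCM). The quantization level and offset below are assumptions, not the paper's settings; contrast and energy stand in for the larger feature set actually analyzed:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Minimal GLCM sketch: quantize to `levels` gray levels, count
    neighbor pairs at offset (dx, dy), and derive two texture features
    (contrast and energy) from the normalized matrix."""
    img = np.asarray(img, dtype=float)
    q = np.floor(levels * (img - img.min()) / (np.ptp(img) + 1e-12)).astype(int)
    q = np.clip(q, 0, levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()
    i, j = np.indices((levels, levels))
    contrast = np.sum(p * (i - j) ** 2)   # large for coarse texture
    energy = np.sum(p ** 2)               # 1.0 for a uniform region
    return contrast, energy
```

A perfectly homogeneous region yields zero contrast and unit energy, which is why such features separate firm (homogeneous) from soft (speckled) phantom tissue.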
Ultrasound (US) images are necessary in obstetrics because they provide the most important clinical parameters for fetal health assessment during the second and third trimesters: head circumference, biparietal diameter, abdominal circumference and femur length. These fetometric indices are helpful for gestational age and fetal weight estimation; they also help obstetricians diagnose fetal development abnormalities. However, these indices are obtained manually, which causes high intra- and interobserver variability and a lack of repeatability. A fully automatic method to segment the femur and measure its length is presented in this paper. The proposed methodology incorporates texture information and introduces a novel curvature analysis to adequately detect the femur. It consists of pre-processing the US images with an anisotropic diffusion filter, followed by morphological operations and thresholding to isolate femur-candidate regions. A normalized metric composed of intensity, length, centroid position and entropy is assigned to each region in order to select the candidate most likely to be the femur. The selected region is then thinned to a one-pixel line, whose curvature is analyzed with an angle-threshold criterion to accurately locate the femur's extrema. The method was tested on 64 US images (20 from the second and 44 from the third trimester of pregnancy); a correlation coefficient of 0.984 and an error of 1.016±2.764 mm were achieved between expert-obtained manual measurements and the automatically calculated indices. The results are consistent, outperform those previously reported by other authors, and show a high correlation with the experts' measurements; the developed method is therefore suitable for adaptation to clinical use.
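The angle-threshold curvature criterion can be illustrated on the thinned one-pixel centerline. This is a sketch under assumptions (the sampling step and the exact angle definition are not given in the abstract): the turning angle at each point is measured between vectors to points a few samples away, and sharp turns flag the bone's extrema:

```python
import numpy as np

def curvature_angles(points, step=3):
    """Turning angle (degrees) along an ordered one-pixel centerline.

    Hypothetical sketch: for each interior point, measure the angle
    between the incoming and outgoing direction vectors taken `step`
    samples apart; angles above a threshold can mark the femur's ends.
    """
    pts = np.asarray(points, dtype=float)
    angles = []
    for k in range(step, len(pts) - step):
        v1 = pts[k] - pts[k - step]          # incoming direction
        v2 = pts[k + step] - pts[k]          # outgoing direction
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return np.array(angles)
```

A straight bone shaft produces near-zero angles, while the curved ends produce large ones, so a single angle threshold separates the two regimes.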
The thickness of the nuchal fold is one of the main markers for the detection of Down syndrome during the second trimester of pregnancy. This paper reports our preliminary results on the automatic segmentation and measurement of nuchal fold thickness in ultrasound images of the fetal brain. The method is based on a 2D active shape model used to segment the brain structures involved in the measurement of the nuchal fold: the cerebellum, the brain midline, the outer edge of the occipital plate, and the outer skin edge. The algorithm was trained and tested on 10 different ultrasound images using leave-one-out cross validation. We obtained an average difference of 0.23 mm from the expert measurement of the nuchal fold, with a standard deviation of 0.1 mm.
During the first trimester of pregnancy, fetal health assessment is especially important. In clinical practice, the gestational sac (GS) volume is estimated manually using a tedious procedure that is prone to physicians' subjectivity. The method proposed in this paper consists of a semiautomatic delimitation of the GS and a segmentation of its content with minimal expert intervention. It is based on spreading active contours (SAC), following a planimetric strategy to define the GS edges. Additionally, an optimal thresholding method was used to separate solid matter from amniotic fluid. The comparison between manual GS segmentations and those obtained with the proposed SAC method shows Dice similarities of 90% and a mean Hausdorff distance of 5.63 ± 1.94 mm, while the correlation index between SAC and the clinical reference (VOCAL) is 0.997. However, a paired t-test yielded p < 0.05, which suggests a difference between the volumes measured by the compared methods. The proposed SAC method has shown to be reliable, besides being easy to implement.
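The two overlap metrics used in the evaluation above have standard definitions, sketched here in a brute-force form (adequate for small contours, not optimized for full volumes):

```python
import numpy as np

def dice(a, b):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets:
    the largest distance from any point to the other set."""
    A, B = np.asarray(pts_a, float), np.asarray(pts_b, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice rewards global overlap, while the Hausdorff distance penalizes the single worst boundary disagreement, which is why both are reported together.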
Three-dimensional ultrasound imaging has become the main modality for fetal health diagnostics, with extensive use in fetal brain imaging. Depending on the fetal position and the stage of development of the fetal skull, a specific image acquisition plane is required. In most cases, for a single acquisition plane, image quality is limited by the shadows produced by the skull. In this work, a new method for registering multiple views of 3D ultrasound of the fetal brain is reported, which results in improved imaging of the internal brain structures. In the initial stage, texture, intensity and edge features are used with a support vector machine (SVM) to segment the skull in each of the 3D ultrasound views to be registered. Each segmented skull is modelled as a set of points whose centres are determined with a Gaussian mixture model, where each point's probability of membership to a Gaussian is given by the posterior probability assigned by the SVM. Our method shows improved results compared to intensity-based registration, with a 52% reduction in the target registration error (TRE), and a 39% reduction in the TRE compared to feature-based registration. These are encouraging results for the future development of an automatic method for registration and fusion of multiple views of 3D fetal ultrasound.
Ultrasound (US) images of the fetal brain provide experts with valuable indicators of fetal development. However, as the skull thickens it obstructs the transmission of the acoustic waves, occluding the anatomy behind it. A viable option to improve the visibility of the fetal brain, before complete calcification of the skull, is to compute a compounded image from different views of the same anatomical plane. In this work we report a new method for compounding ultrasound images based on the weighted mean of the pixels from the different views that correspond to each position (x, y) in the final compounded image. A support vector machine (SVM) is used to calculate the weight of each pixel from each view, based on intensity, entropy and variance features. We present initial test results of our method on synthetic US images of a head phantom contaminated with speckle noise; we report the signal-to-noise ratio (SNR) and the normalized mutual information (NMI) for different numbers of views (2, 3, and 5), and compare the results against images compounded using the mean, root mean square (RMS), and geometric mean composition methods. With our scheme we were able to recover the occluded information and increase the NMI from 16% to 26%, representing a 58% improvement.
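The four compounding rules compared above can be sketched per pixel as follows. The SVM-derived weights are taken as given here (an assumption; learning them is the paper's contribution):

```python
import numpy as np

def compound(views, weights=None, mode="wmean"):
    """Per-pixel compounding of n co-registered views, shape (n, H, W).

    Modes sketch the compared baselines: mean, RMS and geometric mean.
    For 'wmean', `weights` (same shape as `views`) would come from the
    per-pixel SVM confidence; here they are supplied by the caller.
    """
    v = np.asarray(views, dtype=float)
    if mode == "mean":
        return v.mean(axis=0)
    if mode == "rms":
        return np.sqrt((v ** 2).mean(axis=0))
    if mode == "gmean":
        return np.exp(np.log(v + 1e-12).mean(axis=0))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum(axis=0, keepdims=True)     # normalize weights per pixel
    return (w * v).sum(axis=0)
```

With equal weights the weighted mean reduces to the plain mean; giving full weight to the unshadowed view at each pixel is what recovers the occluded anatomy.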
We present a discrete compactness (DC) index, together with a classification scheme based on size and shape features extracted from brain volumes, to determine different aging stages: healthy controls (HC), mild cognitive impairment (MCI), and Alzheimer's disease (AD). A set of 30 brain magnetic resonance imaging (MRI) volumes per group was segmented, and two indices were measured for several structures: three-dimensional DC and normalized volumes (NVs). The discrimination power of these indices was determined by means of the area under the curve (AUC) of the receiver operating characteristic, where the proposed compactness index showed an average AUC of 0.7 for the HC versus MCI comparison, 0.9 for HC versus AD separation, and 0.75 for MCI versus AD. In all cases, this index outperformed the discrimination capability of the NVs. Using features selected from the set of DC and NV measures, three support vector machines were optimized and validated for the pairwise separation of the three classes. Our analysis shows classification rates of up to 98.3% between HC and AD, 85% between HC and MCI, and 93.3% for MCI and AD separation. These results outperform those reported in the literature and demonstrate the viability of the proposed morphological indices to classify different aging stages.
KEYWORDS: Brain, Magnetic resonance imaging, Image segmentation, Neuroimaging, Alzheimer's disease, 3D modeling, 3D metrology, Feature extraction, Medical imaging, Pathology
Reported studies describing normal and abnormal aging based on anatomical MRI analysis do not consider morphological brain changes, only volumetric measures, to distinguish among these processes. This work presents a classification scheme, based on both size and shape features extracted from brain volumes, to determine different aging stages: healthy control (HC) adults, mild cognitive impairment (MCI), and Alzheimer's disease (AD). Three support vector machines were optimized and validated for the pairwise separation of these three classes, using features selected from a set of 3D discrete compactness measures and the normalized volumes of several global and local anatomical structures. Our analysis shows classification rates of up to 98.3% between HC and AD, 85% between HC and MCI, and 93.3% for MCI and AD separation. These results outperform those reported in the literature and demonstrate the viability of the proposed morphological indices to classify different aging stages.
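The 3D discrete compactness used in the two studies above follows Bribiesca's normalization, sketched here for a binary volume (the counting of shared voxel faces is the key step; the exact segmentation pipeline is beyond this sketch):

```python
import numpy as np

def discrete_compactness(vol):
    """Normalized discrete compactness (Bribiesca) of a 3D binary volume.

    DC = (Ac - Ac_min) / (Ac_max - Ac_min), where Ac counts face-adjacent
    voxel pairs, Ac_min = n - 1 (a voxel chain) and Ac_max = 3(n - n^(2/3))
    (an ideal cube), for n foreground voxels.
    """
    v = np.asarray(vol, dtype=bool)
    n = float(v.sum())
    ac = 0
    for ax in range(3):                       # shared faces along each axis
        a = np.swapaxes(v, 0, ax)
        ac += np.logical_and(a[:-1], a[1:]).sum()
    ac_min = n - 1.0
    ac_max = 3.0 * (n - n ** (2.0 / 3.0))
    return (ac - ac_min) / (ac_max - ac_min)
```

A solid cube scores 1 and a one-voxel-wide chain scores 0, so the index captures how convoluted a structure's surface is independently of its volume, which is the shape information the normalized volumes alone miss.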
In this paper, a nonparametric statistical segmentation procedure based on the computation of the mean shift within the joint space-range feature representation of brain MR images is presented. The mean shift is a simple, nonparametric estimator, which can be implemented in a data-driven approach; neither the number of classes nor other initialization parameters are needed to compute it. The procedure estimates the local modes of the probability density function in order to define the cluster centers in the feature space. Local segmentation quality is improved by including a measure of edge confidence between adjacent segmented regions. This measure drives the iterative application of transitive closure operations on the region adjacency graph until convergence to a stable set of regions. In this manner, edge detection and region segmentation techniques are combined for the extraction of weak but significant edges from brain images. With the proposed methodology, the modes of the class distributions can be robustly estimated and homogeneous regions defined, while fine borders are also preserved. The main contribution of this work is the combined use of mean shift estimation with a robust, edge-oriented region fusion technique to delineate structures in brain MRI.
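The mode-seeking behavior of the mean shift can be sketched in one dimension with a flat kernel (the paper works in the joint space-range domain; a 1-D intensity-only version is assumed here for clarity):

```python
import numpy as np

def mean_shift(x, bandwidth=1.0, n_iter=50, tol=1e-6):
    """1-D mean-shift mode seeking with a flat (uniform) kernel.

    Each point is repeatedly moved to the mean of the original samples
    within `bandwidth` of it; the converged positions accumulate at the
    local modes of the underlying density, with no class count needed.
    """
    pts = np.asarray(x, dtype=float).copy()
    data = pts.copy()                       # fixed sample set
    for _ in range(n_iter):
        shifted = np.empty_like(pts)
        for i, p in enumerate(pts):
            nbr = data[np.abs(data - p) <= bandwidth]
            shifted[i] = nbr.mean()         # mean-shift update
        done = np.max(np.abs(shifted - pts)) < tol
        pts = shifted
        if done:
            break
    return pts
```

Points that converge to the same mode are assigned to the same cluster, which is how the procedure defines cluster centers without specifying the number of classes in advance.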
A segmentation procedure using a radial basis function network (RBFN) coupled with an active contour (AC) model based on a cubic-spline formulation is presented for the detection of the gray-white matter boundary in axial multispectral MRI (MMRI; T1, T2 and PD). An RBFN classifier has previously been introduced for MMRI segmentation, with good generalization at a rate of 10% misclassification over white and gray matter pixels on the validation set. The coupled RBFN-AC system incorporates the posterior probability estimation map into the AC energy term as a restriction force. The RBFN output is also employed to provide an initial contour for the AC. Furthermore, an adaptation strategy for the network weights, guided by feedback from the contour model adjustment at each iteration, is described. To compare the algorithm's performance, segmentations using both the adaptive and the non-adaptive schemes were computed. The major differences are located around deep cortical convolutions, where the result of the adaptive process is superior to that obtained with the non-adaptive scheme, even under moderate noise conditions. In summary, the RBFN provides a good initial contour for the AC, the coupling of both processes keeps the final contour within the desired region, and the adaptive strategy enhances contour localization.
A fully automatic method to deform medical images is proposed. The procedure is based on the application of a set of consecutive local linear transformations at fixed landmarks, generating a global non-linear deformation. Continuity is guaranteed by a smooth change from the landmark point to its neighborhood, which is a homotopy between an affine transformation and the identity map. Landmarks are distributed uniformly throughout both the reference and target images, and their density is increased to reach the desired similarity between the images. A hybrid genetic optimization algorithm searches for the transformation parameters by maximizing the normalized mutual information. It is shown, by transforming a circle into a triangle and vice versa, that the method can generate either sharp or smooth deformations. For magnetic resonance images, the successive application of the local linear transformations is shown to increase the similarity between the geometrically deformed images and the target. The results suggest that the method can be applied to a wide range of non-rigid image registration problems.
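The similarity measure driving the genetic optimizer above is the normalized mutual information, which can be sketched from a joint intensity histogram (the bin count is an assumption; the optimizer itself is omitted):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B) from a joint histogram.

    Sketch of the registration cost: H() is the Shannon entropy of the
    (marginal or joint) intensity distribution. NMI is 2 for identical
    images and approaches 1 for statistically independent ones.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return (H(px) + H(py)) / H(pxy.ravel())
```

At each generation, the genetic algorithm would evaluate this score for candidate transformation parameters and keep those that increase it.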
Spatial quantification of relevant brain structures is usually carried out by analyzing a stack of magnetic resonance (MR) images with some image segmentation approach. In this paper, multispectral MR image segmentation based on a modified radial basis function network is presented. Multispectral MR image sets are constructed by collecting data for the same anatomical structures under T1, T2 and FLAIR excitation sequences. The classification features for the network are extended beyond the normalized intensities in each band to also include the cylindrical coordinates of the image pixels. These coordinates are determined within a reference image space to which all target stacks are registered. The network classifier was designed to differentiate three structures: gray matter, white matter and image background. The classification layer was also modified to accommodate the pixel cylindrical coordinates as inputs. With the designed network, background pixels are correctly classified in all cases, while gray and white matter pixels are misclassified in about 10% of the cases in the validation set. The source of these errors can be traced to smooth transitions in the output nodes for these two classes; thresholding these outputs to include a reject class reduces the misclassification error. The small, simple architecture of the network shows good generalization, and thus good segmentation of unseen stacks.