Doppler echocardiography is valuable for the diagnosis and management of several cardiovascular diseases. Automated analysis of Doppler images can help reduce the known variability of manual measurements and the burden of manual delineation and calculation. We propose a novel, fully automated method to detect and analyze the spectral Doppler waves used to assess diastolic function from mitral inflow (MV; peak E and A wave velocities), the mitral annulus (MA; peak E' and A' wave velocities), and pulmonary pressure (peak tricuspid regurgitation [TR] velocity). We used the Faster R-CNN deep learning-based method to localize the Doppler, ECG, and anatomical ROIs. We then used the ECG to segment the Doppler signal into individual beats and assessed the quality of these beats using a density-based method and the Structural Similarity Index (SSIM). To segment the spectral envelope of each beat, we used a novel combination of the k-means clustering algorithm and the Gradient Vector Flow (GVF) snake algorithm. We used 701 Doppler images, collected from 100 patients at the Clinical Center of the National Institutes of Health, to evaluate the performance of the proposed method against expert manual peak velocity estimation. The experimental results demonstrate the efficiency and robustness of the proposed framework in estimating peak velocity, making it a viable candidate for use in clinical settings.
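To illustrate the envelope-segmentation idea, the sketch below separates Doppler signal from background with a two-cluster 1-D k-means on pixel intensities and then traces the highest-velocity signal row per time column. This is a minimal stand-in for the paper's pipeline: the published method additionally refines the envelope with a GVF snake, and the function names, the row-to-velocity mapping, and the initialization are my assumptions, not the authors' code.

```python
import numpy as np

def kmeans_1d_threshold(values, iters=20):
    """Two-cluster 1-D k-means on pixel intensities; returns the
    midpoint between the converged centroids as a signal threshold."""
    c = np.array([values.min(), values.max()], dtype=float)  # extreme-value init (assumption)
    for _ in range(iters):
        labels = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = values[labels == k].mean()
    return c.mean()

def envelope_from_spectrogram(spec, velocity_axis):
    """For each time column, take the highest-velocity row classified as
    signal (assumes row index increases with velocity)."""
    thr = kmeans_1d_threshold(spec.ravel())
    mask = spec > thr
    env = np.full(spec.shape[1], np.nan)
    for t in range(spec.shape[1]):
        rows = np.nonzero(mask[:, t])[0]
        if rows.size:
            env[t] = velocity_axis[rows.max()]
    return env  # peak velocity of a beat would be np.nanmax over its columns
```

In the full method, this raw envelope would serve as the initial contour for the GVF snake rather than as the final measurement.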
KEYWORDS: Magnetic resonance imaging, Heart, 3D modeling, Process modeling, 3D acquisition, Target detection, Databases, Detection and tracking algorithms, Functional analysis, Detector development
Cardiac perfusion magnetic resonance imaging (MRI) has proven clinical significance in the diagnosis of heart disease.
However, analysis of perfusion data is time-consuming; automatic detection of anatomic landmarks
and key frames from perfusion MR sequences helps anchor structures and support functional analysis of
the heart, leading toward fully automated perfusion analysis. Learning-based object detection methods have
demonstrated their capabilities to handle large variations of the object by exploring a local region, i.e., context.
Conventional 2D approaches take into account spatial context only. Temporal signals in perfusion data present
a strong cue for anchoring. We propose a joint context model to encode both spatial and temporal evidence. In
addition, our spatial context is constructed not only based on the landmark of interest, but also the landmarks
that are correlated in the neighboring anatomies. A discriminative model is learned through a probabilistic
boosting tree. A marginal space learning strategy is applied to efficiently learn and search in a high dimensional
parameter space. A fully automatic system is developed to simultaneously detect anatomic landmarks and key
frames in both RV and LV from perfusion sequences. The proposed approach was evaluated on a database of
373 cardiac perfusion MRI sequences from 77 patients. Experimental results of a 4-fold cross validation show
that the proposed joint spatial-temporal approach achieves superior landmark detection accuracy compared
with the 2D approach based on spatial context only. The key-frame identification results are promising.
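The joint context idea can be sketched as a feature vector that concatenates spatial evidence (a pooled patch around the candidate landmark) with temporal evidence (the intensity-time curve at that location). This is only an illustration of the encoding, assuming a simple mean-pooled grid and peak-enhancement frame selection; the actual system uses a probabilistic boosting tree over richer features with marginal space learning, which is not shown here.

```python
import numpy as np

def joint_context_feature(seq, y, x, half=8):
    """seq: (T, H, W) perfusion sequence; (y, x): candidate landmark.

    Spatial context: 4x4 mean-pooled grid of the patch around (y, x)
    at the peak-enhancement frame (a crude key-frame proxy).
    Temporal context: normalized intensity-time curve at (y, x) --
    the temporal cue the joint model exploits."""
    curve = seq[:, y, x].astype(float)
    key = int(curve.argmax())  # peak enhancement as key-frame stand-in (assumption)
    patch = seq[key, y - half:y + half, x - half:x + half].astype(float)
    grid = patch.reshape(4, half // 2, 4, half // 2).mean(axis=(1, 3)).ravel()
    curve = (curve - curve.mean()) / (curve.std() + 1e-8)
    return np.concatenate([grid, curve])
```

A discriminative classifier would then score such vectors at candidate positions; in the paper that classifier is a probabilistic boosting tree.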
KEYWORDS: Magnetic resonance imaging, Image segmentation, Magnetism, Reconstruction algorithms, In vivo imaging, Image processing, 3D image processing, Tissues, Blood, Gadolinium
Purpose: To validate a computer algorithm for measuring myocardial infarct size on gadolinium-enhanced MR images. The results of computer infarct sizing on phase-sensitive and magnitude imaging are studied against a histopathology reference. Materials and Methods: Validations were performed in 9 canine myocardial infarctions determined by triphenyltetrazolium chloride (TTC). The algorithm analyzed the pixel intensity distribution within manually traced myocardial regions. Pixels darker than an automatically determined threshold were first excluded from further analysis. Selected image features were used to remove false-positive regions. A threshold at 50% between bright and dark regions was then used to minimize partial volume errors. Post-processing steps were applied to identify microvascular obstruction. Both phase-sensitive and magnitude-reconstructed MR images were measured by the computer algorithm in units of % of the left ventricle (LV) and compared to TTC. Results: Correlations of MR and TTC infarct size were 0.96 for both phase-sensitive and magnitude imaging. Bland-Altman analysis showed no consistent bias as a function of infarct size. The average error of computer infarct sizing was less than 2% of the LV for both reconstructions. Fixed-intensity thresholding was less accurate than the computer algorithm. Conclusions: MR can accurately depict myocardial infarction. The proposed computer algorithm accurately measures infarct size on contrast-enhanced MR images against the histopathology reference. It is effective for both phase-sensitive and magnitude imaging.
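The 50% criterion can be sketched as an iterative split of the traced-myocardium intensities into dark (remote) and bright (infarct) groups, with the threshold set halfway between their means. This is a minimal illustration under my own assumptions; the published algorithm additionally removes false-positive regions via image features and handles microvascular obstruction, neither of which is shown.

```python
import numpy as np

def infarct_fraction(image, myo_mask, iters=10):
    """Estimate infarct size as % of the masked myocardium using a
    threshold halfway between the dark (remote) and bright (infarct)
    mean intensities -- a sketch of the 50% criterion only."""
    vals = image[myo_mask].astype(float)
    thr = vals.mean()  # initial split (assumption)
    for _ in range(iters):  # iterate mean-of-means to a stable threshold
        dark = vals[vals <= thr].mean()
        bright = vals[vals > thr].mean()
        thr = 0.5 * (dark + bright)
    return 100.0 * np.mean(vals > thr)
```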
KEYWORDS: Computed tomography, Lung, Image registration, 3D image processing, Image segmentation, Lung cancer, Cancer, 3D vision, Chest, Signal to noise ratio
Several 3-D tools were developed to assist radiologists in the examination of thoracic CT images. The image functions include segmentation of suspected regions, characterization of nodules, localized 3-D views of the nodules, 3-D transparent views, 3-D image matching, and 3-D volume registration. The last two functions are particularly useful for temporal CT examinations, in which the change of suspected regions must be evaluated. The majority of the 3-D functions can be combined to form a clinical workstation. As far as temporal image matching is concerned, the volume registration method is more accurate than slice matching methods. If an image subtraction function is used, fewer artifacts would be associated with the volume-registered CT pair than with the slice-matched CT pair.
We have developed various segmentation and analysis methods for the quantification of lung nodules in thoracic CT. Our methods include the enhancement of lung structures followed by a series of segmentation methods to extract the nodule and to form a 3D configuration at the area of interest. The vascular index, aspect ratio, circularity, irregularity, extent, compactness, and convexity were also computed as shape features for quantifying the nodule boundary. The density distribution of the nodule was modeled based on its internal homogeneity and/or heterogeneity. We also used several density-related features, including entropy and difference entropy, as well as other first- and second-order moments. We have collected 48 cases of lung nodules scanned by thin-slice diagnostic CT. Of these cases, 24 are benign and 24 are malignant. A jackknife experiment was performed using a standard back-propagation neural network as the classifier. The LABROC result showed that the Az of this preliminary study is 0.89.
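Two of the shape features listed above, circularity and extent, can be computed from a binary nodule mask as sketched below. This is an illustrative subset under my own conventions (a simple 4-neighbour boundary-pixel count for perimeter), not the paper's feature implementation.

```python
import numpy as np

def shape_features(mask):
    """Circularity (4*pi*area / perimeter^2) and extent (area / bounding-box
    area) for a 2-D binary nodule mask. Perimeter is approximated by the
    count of foreground pixels with a 4-neighbour background pixel."""
    area = mask.sum()
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    circularity = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0
    ys, xs = np.nonzero(mask)
    extent = area / ((ys.ptp() + 1) * (xs.ptp() + 1))
    return circularity, extent
```

A classifier such as the back-propagation network mentioned above would consume these values alongside the density features.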
In this paper, we present an automated multi-modality registration algorithm based on hierarchical feature extraction. The approach, which has not been used previously, can be divided into two distinct stages: feature extraction (edge detection, surface extraction) and geometric matching. Two kinds of corresponding features -- edge and surface -- are extracted hierarchically from various image modalities. The registration is then performed using least-squares matching of the automatically extracted features. Both the robustness and accuracy of the feature extraction and geometric matching steps are evaluated using simulated and patient images. The preliminary results show the error is, on average, one voxel. We have shown the proposed 3D registration algorithm provides a simple and fast method for automatic registration of MR-to-CT and MR-to-PET image modalities. Our results are comparable to other techniques and require no user interaction.
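For the geometric-matching stage, the classic closed-form least-squares solution for matched 3-D point sets (the Kabsch/Procrustes method) can be written as below. This is a generic sketch of least-squares rigid matching, assuming point correspondences are already established; the paper matches automatically extracted edge and surface features, whose correspondence step is not shown.

```python
import numpy as np

def rigid_least_squares(src, dst):
    """Closed-form least-squares rigid alignment (rotation R, translation t)
    of matched 3-D point sets src -> dst, via SVD of the cross-covariance."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In an iterative scheme, this solve would alternate with re-establishing closest-feature correspondences until the residual stabilizes.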
Image segmentation is considered one of the essential steps in medical image analysis. Tasks such as classification of tissue structures for quantitative analysis, reconstruction of anatomical volumes for visualization, and registration of multi-modality images for complementary study often require segmentation of the brain. In many clinical applications, parts of this task are performed either manually or interactively. Not only is this process often tedious and time-consuming, it introduces additional external factors of inter- and intra-rater variability. In this paper, we present a 3D automated algorithm for segmenting the brain from various MR images. This algorithm consists of a sequence of pre-determined steps: First, an intensity window for initial separation of the brain volume from the background and non-brain structures is selected by fitting probability curves to the intensity histogram. Next, a 3D isotropic volume is interpolated and an optimal threshold value is determined to construct a binary brain mask. Morphological and connectivity processes are then applied to this 3D mask to eliminate non-brain structures. Finally, a surface extraction kernel is applied to extract the 3D brain surface. Preliminary results from the same subjects with different pulse sequences are compared with manual segmentation. The automatically segmented brain volumes are compared with the manual results using the correlation coefficient and percentage overlap. The automatically detected surfaces are then compared with the manual contours in terms of RMS distance. The introduced automatic segmentation algorithm is effective on different sequences of MR data sets without any parameter tuning. It requires no user interaction, so variability introduced by manual tracing or interactive thresholding can be eliminated.
Currently, the introduced segmentation algorithm is applied in the automated inter- and intra-modality image registration. It will furthermore be used in different applications such as quantitative analysis of normal and abnormal brain tissues.
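The masking steps described above (thresholding, then morphological and connectivity processing to eliminate non-brain structures) can be sketched as follows. This is a simplified illustration under my own assumptions: the histogram-based threshold selection and surface extraction are not shown, and the threshold is passed in directly.

```python
import numpy as np
from scipy import ndimage

def brain_mask(volume, threshold):
    """Binary brain mask from a 3-D MR volume: threshold, morphological
    opening to break thin connections to non-brain structures, then keep
    only the largest connected component."""
    mask = volume > threshold
    mask = ndimage.binary_opening(mask, iterations=2)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```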
Image registration is a correlation procedure that allows either the complementary study of images obtained from different modalities, or the analysis of images obtained by the same modality but at different times. Applied to a variety of clinical and investigational problems, image registration can offer a major advance in diagnostic imaging. In this paper, we present an automated multi-modality registration algorithm based on hierarchical feature extraction. Two kinds of shape representations -- edges and surfaces (skin surface, inner skull surface, and outer brain surface) -- are extracted hierarchically from different image modalities. The registration is then performed using the user-specified (but automatically extracted) corresponding features. Both the robustness of the algorithm and the registration accuracy using different registration features are compared in this paper. The preliminary results show that the use of edge and surface features could succeed over a large range of geometric displacements. The results also indicate that neither the edge nor the surface feature is clearly superior in terms of registration accuracy. Using the edge feature could, however, have the advantage of eliminating the surface segmentation step, which adds complexity, variability, and time cost. We have shown the proposed 3D registration algorithm provides a simple and fast method for automatic registration of CT and MR image modalities. Preliminary results using our registration algorithm are comparable to those obtained by other techniques.