This paper presents a learning-based vessel detection and segmentation method for real-patient ultrasound (US) liver images. We aim to detect vessels of multiple shapes robustly and automatically, including vessels with weak and ambiguous boundaries. First, vessel candidate regions are detected by a data-driven approach: multi-channel vessel enhancement maps with complementary performance are generated and aggregated under a Conditional Random Field (CRF) framework, and vessel candidates are obtained by thresholding the resulting saliency map. Second, regional features are extracted and the probability of each region being a vessel is modeled by random forest regression. Finally, a fast level-set method refines the vessel boundaries. Experiments were carried out on a US liver dataset of 98 patients containing both normal and abnormal liver images. Compared with a traditional Hessian-based method, the proposed method improves average precision by 56% for vessel detection and 7.8% for classification. This improvement shows that our method is more robust to noise and therefore outperforms the Hessian-based method in detecting vessels with weak and ambiguous boundaries.
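The candidate-detection step above can be sketched in a few lines. This is only an illustration: the paper fuses the enhancement channels under a CRF, which is replaced here by a simple weighted average, and the channel maps, weights, and threshold are all hypothetical.

```python
import numpy as np

def aggregate_and_threshold(channel_maps, weights, thresh=0.5):
    """Fuse multi-channel vessel enhancement maps into one saliency map
    and threshold it to obtain a binary vessel-candidate mask.

    A weighted average stands in for the paper's CRF aggregation,
    purely for illustration."""
    maps = np.stack(channel_maps, axis=0).astype(float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize channel weights
    saliency = np.tensordot(w, maps, axes=1)
    return saliency, saliency > thresh     # saliency map, candidate mask

# toy example: two 4x4 enhancement maps agreeing on a small region
a = np.zeros((4, 4)); a[1:3, 1:3] = 1.0
b = np.zeros((4, 4)); b[2, 2] = 1.0
sal, cand = aggregate_and_threshold([a, b], weights=[0.7, 0.3])
```

The thresholded mask would then be split into connected components, each scored by the random forest regressor.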
Adaptive thresholding is a useful technique for document analysis. In medical image processing, it is also helpful for segmenting structures such as diaphragms or blood vessels. The technique sets a threshold from local information around a pixel and then binarizes the pixel against that value. Although robust to changes in illumination, it is expensive to compute because every neighboring pixel must be summed. Integral images can alleviate this overhead; however, medical images such as ultrasound often come with image masks, and ordinary algorithms then produce artifacts. The main problem is that the summing area is not rectangular near the boundaries of the image mask: the threshold at the mask boundary is incorrect because masked-out pixels are also counted. Our key idea is to compute an integral image of the mask itself in order to count the number of valid pixels. Our method is implemented on a GPU using CUDA, and experimental results show that our algorithm is 164 times faster than a naïve CPU algorithm for averaging.
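The key idea can be sketched as follows. This is a minimal CPU sketch, not the paper's CUDA implementation: two integral images are built, one over the masked intensities and one over the mask itself, so each local mean divides by the number of valid pixels rather than the full window area. The window size and ratio are illustrative.

```python
import numpy as np

def masked_adaptive_threshold(img, mask, half=1, ratio=0.9):
    """Mean-based adaptive thresholding that respects an image mask."""
    img = img.astype(float) * mask
    # integral images with a zero border, so window sums need no special cases
    ii  = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    iim = np.pad(mask.astype(float), ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue                   # skip masked-out pixels entirely
            y0, y1 = max(y - half, 0), min(y + half + 1, h)
            x0, x1 = max(x - half, 0), min(x + half + 1, w)
            s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            n = iim[y1, x1] - iim[y0, x1] - iim[y1, x0] + iim[y0, x0]
            # n counts only valid pixels, so the local mean s / n is
            # correct even when the window overlaps the mask boundary
            out[y, x] = img[y, x] > ratio * s / n
    return out
```

On a GPU, each output pixel maps naturally to one CUDA thread, since the two integral-image lookups are independent per pixel.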
Tumor tracking is important for treating cancers in moving organs in clinical applications such as radiotherapy and HIFU. Respiratory monitoring systems are widely used to locate such cancers because the respiratory signal is highly correlated with the motion of organs such as the lungs and liver. However, conventional respiratory monitoring systems are not accurate enough to track a tumor's location, and they require additional effort or devices. In this paper, we propose a novel method to track a liver tumor in real time by extracting respiratory signals directly from B-mode images and using a deformed liver model generated from CT images of the patient. Our method has several advantages: 1) it adds no radiation dose and is cost-effective, since it uses an ultrasound device; 2) a high-quality respiratory signal can be extracted directly from 2D images of the diaphragm; 3) using a deformed liver model to track the tumor's 3D position, our method achieves a tracking error of 3.79 mm.
We present a new method for patient-specific liver deformation modeling for tumor tracking. Our method focuses on deforming the two main blood vessels of the liver, the hepatic and portal veins, to use them as features. A novel centerline editing algorithm based on ellipse fitting is introduced for vessel deformation. Centerline-based blood vessel models and various interpolation methods are often used to generate a deformed model at a specific time t. However, interpolation may introduce artifacts when the models it uses are inconsistent; one main cause of this inconsistency is that the locations of bifurcation points differ across images. To solve this problem, our method generates a base model from one of the patient's CT images. Next, we apply a rigid iterative closest point (ICP) method to register the base model to the centerlines of the other images. Because the transformation is rigid, the length of each vessel's centerline is preserved, although parts of the centerline deviate slightly from the centerlines of the other images. We resolve this mismatch with our centerline editing algorithm. Finally, we interpolate the three deformed models of the liver, blood vessels, and tumor using quadratic Bézier curves. We demonstrate the effectiveness of the proposed approach on real patient data.
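The final interpolation step is a standard quadratic Bézier blend. The sketch below assumes the three deformed models share vertex correspondence (the same mesh topology at three respiratory phases); the array shapes are illustrative.

```python
import numpy as np

def bezier_interpolate(p0, p1, p2, t):
    """Quadratic Bezier interpolation between three deformed models.

    p0, p1, p2 are (N, 3) vertex arrays of the same mesh at three
    respiratory phases; t in [0, 1] yields the in-between shape."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2
```

At t = 0 and t = 1 the curve passes exactly through the first and last models, while the middle model acts as a control shape that bends the trajectory.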
KEYWORDS: 3D modeling, 3D image processing, Tumors, Liver, Motion models, Image processing, Computed tomography, Magnetic resonance imaging, Data modeling, Veins
This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is moving due to patient respiration. Tracking is made possible by a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, where we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels, and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we select the candidate that best matches the 3D US images according to the vessel centerlines and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm in the image. The 3D model is then deformed according to the respiratory phase and fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred from the respiratory phase. Testing our method on real patient data, we found that the 3D position error is within 3.79 mm and the processing time is 5.4 ms during tracking.
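The candidate-selection step in the registration process amounts to scoring each of the 133 phase candidates against the observed data and keeping the best one. The sketch below uses a nearest-point RMS distance between centerlines as a stand-in score; the paper's actual matching criterion (which also involves the organ surface) is not reproduced here.

```python
import numpy as np

def select_phase(candidate_centerlines, observed):
    """Pick the respiratory-phase candidate whose vessel centerline
    best matches the centerline observed in the 3D US volume.

    Each centerline is an (N, 3) array of sampled points; the score is
    the RMS distance from each observed point to its nearest candidate
    point (an illustrative stand-in for the paper's matching score)."""
    def score(cand):
        d = np.linalg.norm(observed[:, None, :] - cand[None, :, :], axis=2)
        return np.sqrt((d.min(axis=1) ** 2).mean())
    return min(range(len(candidate_centerlines)),
               key=lambda i: score(candidate_centerlines[i]))
```

The winning candidate fixes both the respiratory phase and, via the known model-to-image transform, the pose of the US probe.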
Automatic segmentation of anatomical structures is crucial for computer-aided diagnosis and image-guided online treatment. In this paper, we present a novel approach for fully automatic segmentation of all anatomical structures of a target liver in a coherent framework. First, all regional anatomical structures, such as vessels, tumors, the diaphragm, and liver parenchyma, are detected simultaneously using random forest classifiers that share the same feature set and classification procedure. Second, an efficient region segmentation algorithm obtains the precise shapes of these regional structures; it is based on a level set with the proposed active-set evolution and multi-feature handling, which achieves a 10x speedup over existing algorithms. Third, the liver boundary curve is extracted via a graph-based model, into which the segmentation results of the regional structures are incorporated as constraints to improve robustness and accuracy. Experiments were carried out on an ultrasound image dataset of 942 images captured under liver motion and deformation from a number of different views. Quantitative results demonstrate the efficiency and effectiveness of the proposed algorithm.
Respiratory motion tracking is an issue for MR/CT imaging and for noninvasive therapies such as HIFU and radiotherapy when these technologies are applied to moving organs such as the liver, kidneys, or pancreas. Currently, bulky and burdensome devices are placed externally on the skin to estimate an organ's respiratory motion; they estimate it indirectly from skin motion rather than from the organ itself. In this paper, we propose a system that directly measures the motion of the organ itself using only ultrasound images. Our system automatically selects a window in the image sequence, called the feature window, that can measure respiratory motion robustly even in noisy ultrasound images. The organ's displacement in each ultrasound image is calculated directly through the feature window. The system is convenient to use since it relies on a conventional ultrasound probe. We show that our method robustly extracts the respiratory motion signal regardless of the reference frame, which makes it superior to other image-based methods such as Mutual Information (MI) or Correlation Coefficient (CC), both of which are sensitive to the choice of reference frame. Furthermore, our method provides clear information about the phase of the respiratory cycle, such as inspiration or expiration, since it computes the organ's actual displacement rather than a similarity measure like MI or CC.
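The displacement measurement can be sketched as template matching of the feature window between frames. The sum-of-squared-differences score and the search range below are illustrative assumptions; the paper's window-selection and matching criteria may differ.

```python
import numpy as np

def window_displacement(prev, curr, win, search=3):
    """Estimate in-plane organ displacement between two frames.

    win = (y0, x0, h, w) is the feature window in the previous frame;
    it is matched against the current frame over a small search range,
    and the best (dy, dx) offset is the per-frame respiratory signal."""
    y0, x0, h, w = win
    ref = prev[y0:y0 + h, x0:x0 + w].astype(float)
    best, best_dxy = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y0 + dy:y0 + dy + h,
                        x0 + dx:x0 + dx + w].astype(float)
            ssd = ((cand - ref) ** 2).sum()   # sum of squared differences
            if ssd < best:
                best, best_dxy = ssd, (dy, dx)
    return best_dxy
```

Because the output is a signed displacement rather than a similarity value, its sign directly distinguishes inspiration from expiration.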
KEYWORDS: 3D modeling, Laser induced plasma spectroscopy, Head, 3D image processing, Motion models, Data modeling, Statistical modeling, Control systems, Visual process modeling, Nose
We propose a novel markerless 3D facial motion capture system that uses only one common camera. The system is simple and makes it easy to transfer a user's facial expressions into a virtual world. It robustly tracks facial feature points under head movements and estimates 3D point locations with high accuracy. We designed novel approaches to the following. First, for precise 3D head-motion tracking, we applied 3D constraints from a 3D face model to a conventional 2D feature-point tracking approach, the Active Appearance Model (AAM). Second, to handle a user's various expressions, we designed generic 2D face models from around 5,000 images together with 3D shape data covering symmetric and asymmetric facial expressions. Last, for accurate facial expression cloning, we devised a manifold space that transfers low-dimensional 2D feature points to high-dimensional 3D points. The manifold space is defined by eleven facial expression bases.
'Fast' and 'robust' are the most beautiful keywords in computer vision. Unfortunately, they are in a trade-off relationship. We present a method that has its cake and eats it too, using adaptive feature selection. Our chief insight is that by comparing reference patterns to query patterns, the method can smartly select the more important and useful features for finding the target. The probability of each pixel in the query belonging to the target is calculated from the importance of the features. Our framework has three distinct advantages: 1) it dramatically reduces computational cost compared with the conventional approach, making it possible to locate an object in real time; 2) it can smartly select robust features of a reference pattern while adapting to the query pattern; 3) it is highly flexible with respect to features: it does not matter which features you use, and many color-space, texture, motion, and other features fit perfectly, provided they meet the histogram criteria.
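The histogram criterion mentioned above is the key to this flexibility: any feature that can be binned into a histogram yields a per-pixel target probability via back-projection. The sketch below shows plain histogram back-projection on a single intensity-like feature; the bin count and value range are illustrative, and the paper's feature-importance weighting is not reproduced.

```python
import numpy as np

def backproject(reference, query, bins=8, vmax=256):
    """Histogram back-projection: each query pixel receives the
    probability that its feature value occurs in the reference
    (target) pattern."""
    hist, _ = np.histogram(reference, bins=bins, range=(0, vmax))
    prob = hist / hist.sum()              # P(bin | target)
    # map each query value to its histogram bin, then look up P
    idx = np.clip((query.astype(float) / vmax * bins).astype(int),
                  0, bins - 1)
    return prob[idx]                      # per-pixel target probability map
```

Several such maps, one per selected feature, could then be combined with feature-importance weights to form the final target probability.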