Visual marker-based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various methods for calibrating the relative poses of such markers have been proposed. However, the calibration accuracy of these methods depends on the order of the image sequence and on a pre-evaluation of pose-estimation errors, restricting them to offline use. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose an online method, based on the Scaled Unscented Transform (SUT), that accurately models the error in the rotation and translation of a camera estimated from a single marker. With this error model, the pose of each marker can be calibrated with high accuracy, independent of the order of the image sequence, in contrast to approaches that do not use this knowledge. This removes the need for multiple markers and for an offline estimation system to compute the camera pose in an AR application.
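The abstract gives no implementation details, but the core of the SUT is easy to sketch. The following is a minimal illustration of propagating pose uncertainty through a nonlinear function; the function name, NumPy realization, and default parameters (alpha, beta, kappa) are illustrative assumptions, not the paper's code.

```python
import numpy as np

def scaled_unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian belief (mean, cov) through a nonlinear
    function f with the Scaled Unscented Transform; returns the
    transformed mean and covariance."""
    n = mean.shape[0]
    lam = alpha ** 2 * (n + kappa) - n
    # Matrix square root of (n + lam) * cov via Cholesky factorization.
    S = np.linalg.cholesky((n + lam) * cov)

    # 2n + 1 sigma points: the mean, plus symmetric offsets along S's columns.
    sigma = np.vstack([mean, mean + S.T, mean - S.T])

    # Weights for the mean (wm) and covariance (wc) differ only at index 0.
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)

    y = np.array([f(s) for s in sigma])   # push each sigma point through f
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov
```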
Motivation: The existing visualization of the Camera-Augmented Mobile C-arm (CamC) system does not provide sufficient depth cues and presents anatomical information to surgeons in a confusing way. Methods: We propose a method that segments anatomical information in the X-ray image and augments it onto the video images. To provide depth cues, the pixels of the video image are classified into skin and object classes, and the X-ray anatomy is overlaid only at pixels with a high probability of belonging to the skin class. Results: We tested our algorithm by displaying the new visualization to two expert surgeons and one medical student during three surgical workflow steps of the interlocking of the intramedullary nail procedure, namely: skin incision, center punching, and drilling. Via a survey questionnaire, they were asked to assess the new visualization against the alpha-blended overlay image currently displayed by CamC. All participants (100%) agreed that occlusion handling and detection of the instrument-tip position were immediately improved with our technique. When asked whether our visualization has the potential to replace the existing alpha-blended overlay during interlocking procedures, all participants recommended its immediate integration for correct navigation and guidance of the procedure. Conclusion: Current alpha-blending visualizations lack proper depth cues and can be a source of confusion for surgeons during surgery. Our visualization concept shows great potential for alleviating occlusion and facilitating clinician understanding during specific workflow steps of the intramedullary nailing procedure.
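As a sketch of the overlay rule described above (the function and its inputs, including a per-pixel skin-probability map p_skin from an unspecified classifier, are our assumptions, not the CamC implementation):

```python
import numpy as np

def skin_aware_overlay(video_rgb, xray_gray, p_skin, alpha=0.5, threshold=0.5):
    """Blend the segmented X-ray anatomy into the video image only at
    pixels likely to be skin, so tools and hands stay fully visible."""
    xray_rgb = np.repeat(xray_gray[..., None], 3, axis=-1)
    blended = (1.0 - alpha) * video_rgb + alpha * xray_rgb
    skin = (p_skin > threshold)[..., None]      # per-pixel skin decision
    return np.where(skin, blended, video_rgb).astype(video_rgb.dtype)
```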
Motivation: In prostate brachytherapy, intra-operative dosimetry would be ideal to allow for rapid evaluation of
the implant quality while the patient is still in the treatment position. Such a mechanism, however, requires 3-D
visualization of the currently deposited seeds relative to the prostate. Thus, accurate, robust, and fully-automatic
seed segmentation is of critical importance in achieving intra-operative dosimetry. Methodology: Implanted
brachytherapy seeds are segmented using a region-based implicit active contour approach. Overlapping
seed clusters are then resolved using a simple yet effective declustering technique. Results: Ground-truth
seed coordinates were obtained via a published segmentation technique. A total of 248 clinical C-arm images
from 16 patients were used to validate the proposed algorithm resulting in a 98.4% automatic detection rate
with a corresponding 2.5% false-positive rate. The overall mean centroid error between the ground-truth and
automatic segmentations was measured to be 0.42 pixels, while the mean centroid error for overlapping seed
clusters alone was measured to be 0.67 pixels. Conclusion: Based on clinical data evaluation and validation,
robust, accurate, and fully-automatic brachytherapy seed segmentation can be achieved through the implicit active contour framework and the subsequent seed-declustering method.
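A hedged sketch of such a pipeline follows, using scikit-image's morphological Chan-Vese contour; the declustering shown here is a generic distance-transform watershed standing in for the paper's "simple yet effective" technique, which the abstract does not spell out.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import morphological_chan_vese, watershed

def segment_seeds(xray, n_iter=100):
    """Segment implanted seeds with a region-based implicit active
    contour, then split overlapping clusters into individual seeds."""
    # Seeds are dark in fluoroscopy; invert so the contour captures them.
    mask = morphological_chan_vese(xray.max() - xray, n_iter).astype(bool)

    # Declustering: one watershed marker per local maximum of the
    # distance transform inside the segmented mask.
    dist = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(dist, min_distance=3, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-dist, markers, mask=mask)

    # Return one centroid per (declustered) seed region.
    return ndi.center_of_mass(mask, labels, list(range(1, labels.max() + 1)))
```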
The Camera-Augmented Mobile C-arm (CamC) augments X-ray images with optical camera images and is used as an
advanced visualization and guidance tool in trauma and orthopedic surgery. However, in its current form its
calibration is suboptimal. We investigated and compared calibration and distortion correction between: (i) the
existing CamC calibration framework, (ii) Zhang's calibration for video images, and (iii) the traditional C-arm
fluoroscopy calibration technique. The accuracy of distortion correction for each of the three methods is
compared by analyzing the error on a synthetic model and by evaluating the linearity and cross-ratio properties. In addition,
the accuracy of the calibrated X-ray projection geometry is evaluated by performing C-arm pose estimation using
a planar pattern of known geometry. The RMS errors on the synthetic model and in pose estimation
show that the traditional C-arm method (μ=0.39 pixels) outperforms both Zhang's method (μ=0.68 pixels) and the original
CamC method (μ=1.07 pixels). The relative pose estimation comparison shows that the translation error of
the traditional method (μ=0.25 mm) is smaller than that of Zhang's method (μ=0.41 mm) and the CamC method (μ=1.13 mm). In
conclusion, we demonstrated that the traditional X-ray calibration procedure outperforms the existing CamC
solution and Zhang's method for the calibration of C-arm X-ray projection geometry.
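For reference, Zhang's method as realized by OpenCV's standard calibration API looks roughly like the sketch below; the board size, square size, and function name are illustrative, and the C-arm-specific calibration procedures compared in the study are not shown.

```python
import numpy as np
import cv2

def zhang_calibrate(images, board=(9, 6), square_mm=10.0):
    """Zhang's method via OpenCV: detect checkerboard corners in several
    views of a planar pattern, then jointly estimate the camera intrinsics
    and lens distortion coefficients."""
    # 3D corner coordinates of the planar pattern (Z = 0 by construction).
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_mm

    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]                      # (width, height)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    # Returns the reprojection RMS in pixels, the intrinsic matrix K,
    # and the distortion coefficients.
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return rms, K, dist
```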
Motivation: In prostate brachytherapy, real-time dosimetry would be ideal to allow for rapid evaluation of the implant
quality intra-operatively. However, such a mechanism requires an imaging system that is both real-time and which
provides, via multiple C-arm fluoroscopy images, clear information describing the three-dimensional position of the
seeds deposited within the prostate. Thus, accurate tracking of the C-arm poses proves to be of critical importance to the
process. Methodology: We compute the pose of the C-arm relative to a stationary radiographic fiducial of known
geometry by employing a hybrid registration framework. First, using an ellipse segmentation algorithm and a
2D/3D feature-based registration, we exploit the known FTRAC geometry to recover an initial estimate of the C-arm pose.
This estimate then initializes an intensity-based registration that recovers a refined and accurate
estimation of the C-arm pose. Results: Ground-truth pose was established for each C-arm image through a published and
clinically tested segmentation-based method. Using 169 clinical C-arm images and a ±10° and ±10 mm random
perturbation of the ground-truth pose, the average rotation and translation errors were 0.68° (std = 0.06°) and 0.64 mm
(std = 0.24 mm), respectively. Conclusion: Fully automated C-arm pose estimation using a 2D/3D hybrid registration scheme was
found to be clinically robust based on human patient data.
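A structural sketch of such a two-stage scheme is shown below; every callable (segment_features, feature_pose, render_projection) is a hypothetical placeholder for components the abstract only names, and Powell's method is just one derivative-free optimizer choice.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_pose_estimation(image, fiducial_model,
                           segment_features, feature_pose, render_projection):
    """Two-stage 2D/3D registration: a feature-based stage yields a coarse
    C-arm pose from the segmented fiducial; an intensity-based stage then
    refines it by matching a simulated projection to the image."""
    # Stage 1: coarse pose (6-vector: 3 rotations, 3 translations) from
    # segmented fiducial features such as the FTRAC ellipses.
    features = segment_features(image)
    pose0 = feature_pose(features, fiducial_model)

    # Stage 2: refine the pose by minimizing a mean-squares intensity
    # metric between the image and a projection rendered at each candidate.
    def cost(pose):
        simulated = render_projection(fiducial_model, pose)
        return np.mean((simulated - image) ** 2)

    return minimize(cost, pose0, method="Powell").x
```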
The CARTO XP is an electroanatomical cardiac mapping system that provides 3D color-coded maps of the
electrical activity of the heart; however, it is expensive, and it can use only a single costly magnetic catheter per
patient intervention. Aim: To develop an affordable fluoroscopic navigation system that could shorten the duration of RF
ablation procedures and increase their efficacy. Methodology: A 4-step filtering technique was implemented to isolate
the tip electrode of an ablation catheter visible in single-view C-arm images and to calculate its width. The apparent
width varies inversely with the catheter's distance from the X-ray source, which allows its depth to be recovered.
Results: In phantom experimentation, when displacing a 7-French catheter at 1 cm intervals away from the X-ray source,
the depth error using a single image was 2.05 ± 1.47 mm, and it improved to 1.55 ± 1.30 mm when using an 8-French
catheter. In clinical experimentation, twenty posterior and left-lateral images of a catheter inside the left ventricle of a
mongrel dog were acquired. The standard error of estimate for the recovered depth of the mapping catheter's tip electrode
was 13.1 mm and 10.1 mm for the posterior and lateral views, respectively. Conclusions: A filtering implementation
using single-view C-arm images showed that depth can be recovered in a phantom study and proved adequate in clinical
experimentation, based on isochronal-map fusion results.
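Under a simple point-source magnification model this relationship is direct to compute; the sketch below recovers depth from a measured width, where the pixel spacing, source-to-detector distance, and function name are assumed values for illustration.

```python
def catheter_depth_mm(measured_width_px, pixel_mm, french_size, sid_mm):
    """Recover catheter depth (source-to-catheter distance) from its
    apparent width in a single C-arm image, using the point-source
    magnification model: w_image = d_true * SID / depth."""
    d_true = french_size / 3.0                 # 1 French = 1/3 mm diameter
    w_image = measured_width_px * pixel_mm     # apparent width on the detector
    return d_true * sid_mm / w_image

# Example: a 7-French catheter imaged 12 px wide at 0.2 mm/px on a
# C-arm with a 1000 mm source-to-detector distance.
print(catheter_depth_mm(12, 0.2, 7, 1000.0))   # ≈ 972 mm from the source
```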
Motivation: In prostate brachytherapy, transrectal ultrasound (TRUS) is used to visualize the anatomy, while implanted
seeds can be seen in C-arm fluoroscopy or CT. Intra-operative dosimetry optimization requires localization of the
implants in TRUS relative to the anatomy. This could be achieved by registration of TRUS images and the implants
reconstructed from fluoroscopy or CT. Methods: TRUS images are filtered, compounded, and registered to the
reconstructed implants using an intensity-based metric within a 3D point-to-volume registration scheme. A
phantom was implanted with 48 seeds and imaged with TRUS and CT/X-ray. Ground-truth registration was established
between the two. Seeds were reconstructed from CT/X-ray. Seven TRUS filtering techniques and two image similarity
metrics were also analyzed. Results: For point-to-volume registration, noise reduction combined with a beam-profile
filter and the mean-squares metric yielded the best result: an average seed localization error of 0.38 ± 0.19 mm relative to
the ground truth. In human patient data, C-arm fluoroscopy images showed 81 radioactive seeds implanted inside the
prostate. A qualitative analysis showed clinically correct agreement between the seeds visible in TRUS and those
reconstructed from intra-operative fluoroscopy imaging. The registration error relative to seed locations manually
selected by the clinician was 2.86 ± 1.26 mm. Conclusion: Fully automated seed localization in TRUS
performed excellently on the ground-truth phantom, performed adequately on clinical data, and was time-efficient, with an average
runtime of 90 seconds.
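A minimal sketch of point-to-volume registration along these lines follows; the rigid parameterization, the mean-intensity score (standing in for the paper's filtered mean-squares metric), and the optimizer choice are all our assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def register_seeds_to_trus(seed_xyz, trus_volume, spacing_mm, x0=None):
    """Rigidly transform the seed cloud reconstructed from fluoroscopy/CT
    and score each candidate pose by the TRUS intensity sampled at the
    transformed seed positions (seeds appear bright after filtering, so a
    good pose maximizes the sampled mean)."""
    def transform(params, pts):
        rx, ry, rz, tx, ty, tz = params
        cx, cy, cz = np.cos([rx, ry, rz])
        sx, sy, sz = np.sin([rx, ry, rz])
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])

    def negative_score(params):
        # Map transformed points (mm) into voxel indices and sample the
        # volume with trilinear interpolation.
        idx = (transform(params, seed_xyz) / spacing_mm).T
        return -map_coordinates(trus_volume, idx, order=1).mean()

    x0 = np.zeros(6) if x0 is None else x0
    return minimize(negative_score, x0, method="Powell").x
```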