1. Introduction

Preservation of the cavernous nerves during prostate cancer surgery is critical to preserving a man's ability to have spontaneous erections following surgery. These microscopic nerves course along the surface of the prostate within a few millimeters of the prostate capsule, and they vary in size and location from one patient to another, making preservation of the nerves difficult during dissection and removal of a cancerous prostate gland. These observations may explain, in part, the wide variability in reported potency rates (9 to 86%) following prostate cancer surgery.1 Any technology capable of providing improved identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery would therefore be of great assistance in improving postoperative sexual function rates.

Optical coherence tomography (OCT) is a noninvasive optical imaging technique used to perform high-resolution, cross-sectional, in vivo and in situ imaging of microstructure in biological tissues.2 OCT imaging of the cavernous nerves in the rat and human prostate has recently been demonstrated.3, 4, 5 However, further improvement in image quality is necessary before OCT can be used in the clinic as an intraoperative diagnostic tool during nerve-sparing prostate cancer surgery.

Three-dimensional (3-D) prostate segmentation, which allows clinicians to design an accurate brachytherapy treatment plan for prostate cancer, has previously been reported using computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound.6, 7 Recently, various segmentation approaches have also been applied to retinal OCT imaging. Ishikawa et al. described an approach to segment retinal layers and extract their thicknesses.8 Their algorithm searches for the borders of retinal layers by applying an adaptive thresholding technique.
Bagci et al. described an algorithm to detect layers within the retinal tissue by enhancing edges along the vertical dimension of the image.9 Methods based on a Markov model and on deformable splines were reported for determination of optic nerve-head geometry and of retinal nerve fiber layer thickness, respectively.10, 11 However, the large irregular voids in prostate OCT images require a segmentation approach different from that used for the more regular structure of the retinal layers.

Our research group recently applied a wavelet shrinkage denoising technique to improve the quality of OCT images of the prostate for identification of the cavernous nerves.12 Building on these earlier results, the segmentation technique reported here has the advantage that it does not depend on the depth of the nerves below the tissue surface; in this regard, it is a more versatile method. In this study, 2-D prostate images are segmented into three regions (background, nerve, and prostate gland) using a nearest-neighbor classifier.

2. Segmentation System

A block diagram of the segmentation system is provided in Fig. 1. The input image is first processed to form three feature images, generated by Gabor filtering, the Daubechies wavelet transform, and a Laws filter mask, respectively. The prostate image is then segmented into nerve, prostate, and background classes using a k-nearest neighbors classifier operating on the three feature images. Last, N-ary morphological postprocessing is used to remove small misclassified voids. The generation of the feature images is described first, followed by descriptions of the classifier and the postprocessing.

2.1. Gabor Filter

The first feature image is generated by a Gabor filter with impulse response h(x, y),13 where

    h(x, y) = g(x, y) exp[j2π(Ux + Vy)],    (1)

    g(x, y) = [1/(2πσx σy)] exp{−(1/2)[(x/σx)² + (y/σy)²]}.    (2)

The Gabor function is a complex sinusoid centered at frequency (U, V) and modulated by a Gaussian envelope g(x, y). The spatial extent of the Gaussian envelope is determined by the parameters σx and σy.
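As a minimal sketch (in Python with NumPy/SciPy rather than the authors' Mathcad implementation; the kernel half-width and the test texture are arbitrary choices), the Gabor feature image can be computed by convolving the image with the complex Gabor kernel and taking the magnitude:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_feature(image, U, V, sigma_x, sigma_y, half=10):
    """Magnitude of the image filtered by the complex Gabor kernel
    h(x,y) = g(x,y) * exp(j*2*pi*(U*x + V*y))."""
    x = np.arange(-half, half + 1)
    X, Y = np.meshgrid(x, x)
    g = np.exp(-0.5 * ((X / sigma_x) ** 2 + (Y / sigma_y) ** 2))
    g /= 2 * np.pi * sigma_x * sigma_y              # Gaussian envelope
    h = g * np.exp(2j * np.pi * (U * X + V * Y))    # complex sinusoid carrier
    # Convolve with real and imaginary parts separately, then take magnitude
    re = convolve(image, h.real, mode='nearest')
    im = convolve(image, h.imag, mode='nearest')
    return np.hypot(re, im)

# Toy usage: a horizontal sinusoidal texture responds strongly when the
# filter center frequency (U, V) matches the texture frequency.
img = np.cos(2 * np.pi * 0.25 * np.arange(64))[None, :] * np.ones((64, 1))
feat = gabor_feature(img, U=0.25, V=0.0, sigma_x=3, sigma_y=6)
```

The bandpass behavior is visible directly: re-running with a mismatched center frequency (e.g., U=0.05) yields a much weaker response at interior pixels.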
The 2-D Fourier transform of h(x, y) is

    H(u, v) = G(u − U, v − V),    (3)

where

    G(u, v) = exp[−2π²(σx²u² + σy²v²)]    (4)

is the Fourier transform of g(x, y). The parameters σx and σy determine G(u, v). Equations (3) and (4) show that the Gabor function is essentially a bandpass filter centered about the frequency (U, V), with bandwidth determined by σx and σy. The Gabor feature center frequency (U, V) is applied with standard deviations of σx = 3 and σy = 6 in the x and y directions, respectively, based on experimental observation of minimum segmentation error.

2.2. Daubechies Wavelet Transform

The second feature image is generated by an 8-tap Daubechies orthonormal wavelet transform. A wavelet transform represents a function by scaled and translated copies of a finite-length or fast-decaying oscillating waveform and can be used to analyze signals at multiple scales. Wavelet coefficients carry both time and frequency information, as the basis functions vary in position and scale. The discrete wavelet transform (DWT) converts a signal to its wavelet representation. In a one-level DWT, the image is split into an approximation part A1 and a detail part D1. In a multilevel DWT, each subsequent approximation Aj is split into an approximation Aj+1 and a detail Dj+1. For 2-D images, each approximation Aj is split into an approximation Aj+1 and three detail channels Dh, Dv, and Dd for horizontally, vertically, and diagonally oriented details, respectively, as illustrated in Fig. 2. The inverse DWT (IDWT) reconstructs each Aj from Aj+1 and the corresponding detail channels. In the present work, the approximation part is chosen as the filtered image for the second feature.

2.3. Laws Filter

The third feature image is generated by the Laws feature extraction method. A set of nine Laws impulse response arrays Hi(x, y) (Ref. 14) is convolved with a texture field F(x, y) to accentuate its microstructure.
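The one-level 2-D DWT split described earlier can be sketched with the PyWavelets package (an assumed dependency, not used in the original study; in PyWavelets naming, the 8-tap Daubechies filter is 'db4', i.e., four vanishing moments):

```python
import numpy as np
import pywt

# One-level 2-D DWT: split the image into the approximation channel A1
# (used as the second feature image in this system) and the three detail
# channels (horizontal, vertical, diagonal).
img = np.random.default_rng(0).random((64, 64))
A1, (Dh, Dv, Dd) = pywt.dwt2(img, 'db4')

# The IDWT reconstructs the image from A1 and the detail channels.
rec = pywt.idwt2((A1, (Dh, Dv, Dd)), 'db4')
```

The round trip dwt2 → idwt2 is exact, reflecting the perfect-reconstruction property of the orthonormal Daubechies filter bank.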
The i'th microstructure image is then defined as the convolution

    Mi(x, y) = F(x, y) ∗ Hi(x, y).    (5)

Then, the energy of these microstructure arrays is measured by forming their moving-window standard deviation according to

    Si(x, y) = {(1/W²) Σ [Mi(m, n) − M̄i(x, y)]²}^(1/2),    (6)

with the sum taken over the W × W window centered at (x, y), where W sets the window size and M̄i(x, y) is the mean value of Mi over the window. For the present system, Laws feature extraction is applied using the Laws 2 mask.14 The standard deviation computation of Eq. (6) is performed after the Laws mask filtering to complete the Laws feature extraction.

2.4. k-Nearest Neighbors Classifier

The k-nearest neighbors algorithm (k-NN) is a method for classifying objects in which classification is based on the k closest training samples in the feature space. It is implemented by the following steps: the distance from each pixel's feature vector to every training sample is computed; the k training samples with the smallest distances are selected; and the pixel is assigned to the class most frequently represented among those k samples.
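These steps can be sketched in NumPy (a toy illustration with made-up 3-D feature vectors and two classes, not the study's trained classifier):

```python
import numpy as np

def knn_classify(pixels, train_feats, train_labels, k=5):
    """Assign each pixel's feature vector the majority class among its
    k nearest training samples (Euclidean distance)."""
    # Distances from every pixel to every training sample: (npix, ntrain)
    d = np.linalg.norm(pixels[:, None, :] - train_feats[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]   # indices of the k closest
    votes = train_labels[nearest]            # their class labels
    # Majority vote per pixel
    return np.array([np.bincount(v).argmax() for v in votes])

# Toy usage with two well-separated classes in a 3-D feature space
train = np.array([[0.0, 0, 0], [0.1, 0, 0], [1.0, 1, 1], [0.9, 1, 1]])
labels = np.array([0, 0, 1, 1])
pix = np.array([[0.05, 0.0, 0.0], [0.95, 1.0, 1.0]])
out = knn_classify(pix, train, labels, k=3)   # → array([0, 1])
```

In the segmentation system, each pixel's feature vector holds the three feature-image values (Gabor, wavelet, Laws), and the three classes are nerve, prostate, and background.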
2.5. N-ary Morphological Postprocessing

The N-ary morphological postprocessing method for eliminating small misclassified regions proceeds in two steps.15 In the first step, pixels whose neighborhood consists entirely of one class in the classified image are left unchanged; otherwise, the pixel value is set to zero to indicate that the pixel is no longer assigned to any class. In the second step, each unassigned pixel is assigned to the most prevalent class within the 8-neighborhood surrounding the pixel.

3. Results

OCT images were acquired in vivo in a rat model using a clinical endoscopic OCT system (Imalux, Cleveland, Ohio) based on an all single-mode fiber (SMF) common-path interferometer-based scanning system (Optiphase, Van Nuys, California). Mathcad 14.0 (Parametric Technology Corporation, Needham, Massachusetts) was used to implement the segmentation algorithm described earlier. Figures 3a, 3c, and 3e show the original OCT images of the cavernous nerves at different orientations (longitudinal, cross-sectional, and oblique) coursing along the surface of the rat prostate. Figures 3b, 3d, and 3f show the same OCT images after segmentation using the system of Fig. 1. The cavernous nerves could be differentiated from the prostate gland using this segmentation algorithm.

The error rate was calculated as Error = (No. of error pixels)/(No. of total pixels), where (No. of error pixels) = (No. of false positives) + (No. of false negatives). The overall error rate for the segmentation was 0.058 with a standard deviation of 0.019, indicating the robustness of our technique. The error rate was computed as the mean of error measurements for three different sample images at different orientations (longitudinal, cross-sectional, and oblique); a different image was used for training. The error rate was determined by comparing manually segmented images to the automatically segmented images.
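As an illustrative sketch, the two-step N-ary cleanup and the error metric can be written in NumPy (toy class labels; a single reassignment pass with image borders left unassigned is a simplification of the operator of Ref. 15):

```python
import numpy as np

def nary_postprocess(seg):
    """Two-step cleanup of a labeled image (classes are positive integers).
    Step 1: a pixel keeps its class only if its 3x3 neighborhood is
    uniformly that class; otherwise it becomes unassigned (0).
    Step 2: each unassigned pixel takes the most prevalent class among
    its 8 neighbors (single pass; borders are left unassigned here)."""
    h, w = seg.shape
    out = np.zeros_like(seg)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if np.all(seg[i - 1:i + 2, j - 1:j + 2] == seg[i, j]):
                out[i, j] = seg[i, j]
    filled = out.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if out[i, j] == 0:
                nb = out[i - 1:i + 2, j - 1:j + 2].ravel()
                nb = nb[nb > 0]            # ignore unassigned neighbors
                if nb.size:
                    filled[i, j] = np.bincount(nb).argmax()
    return filled

def error_rate(auto, manual):
    """Fraction of misclassified pixels; for a binary mask this equals
    (false positives + false negatives) / total pixels."""
    return np.mean(auto != manual)

# A lone misclassified pixel (class 2) inside a uniform class-1 region
# is stripped by step 1 and no longer appears in the cleaned image.
seg = np.ones((9, 9), dtype=int)
seg[4, 4] = 2
clean = nary_postprocess(seg)
```

Comparing `clean` against a manual reference mask with `error_rate` mirrors how the reported 0.058 figure was obtained.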
These manually segmented images of the cavernous nerves were previously created according to histologic correlation with OCT images.12

Overall, the proposed image segmentation of Fig. 1 performed well for identification of the cavernous nerves in the prostate. The main area needing improvement is the classification of the prostate gland, in which a few small scattered regions (shown in white) are erroneously segmented as part of the nerves [e.g., Fig. 3b]. For the present study, it was advantageous to vary the Gabor filter parameters manually so that the filter's efficacy could be observed directly in the filtered images. Based on prior investigations,13, 16 the present results demonstrate the potential of our overall approach, although future work could include automation of Gabor filter parameter selection. Cross-validation, parameter optimization, and evaluation of alternative classifiers could also be performed. Nevertheless, our current results provide a foundation for more comprehensive studies.

Last, it should be noted that the rat model represents an idealized version of the prostate anatomy, because the cavernous nerve lies on the surface of the prostate and is therefore directly visible. In the human anatomy, however, there may be intervening tissue between the OCT probe and the nerves, making identification more difficult. An important advantage of the proposed classifier-based segmentation approach is that the classifier should also be able to locate the cavernous nerve when it lies at various depths beneath the surface.

Acknowledgments

This research was supported by the Department of Defense Prostate Cancer Research Program, Grant No. PC073709. The authors thank Nancy Tresser of Imalux Corporation (Cleveland, Ohio) for lending us the Niris OCT system for these studies.

References

1. A. Burnett, G. Aus, E. Canby-Hagino, M. Cookson, A. D'Amico, R. Domchowski, D. Eton, J. Forman, S. Goldenberg, J. Hernandez, C. Higano, S. Kraus, M. Liebert, J. Moul, C. Tangen, J. Thrasher, and I. Thompson, "Function outcome reporting after clinically localized prostate cancer treatment," J. Urol. 178, 597–601 (2007).
2. D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, C. Puliafito, and J. Fujimoto, "Optical coherence tomography," Science 254, 1178–1181 (1991). https://doi.org/10.1126/science.1957169
3. M. Aron, J. Kaouk, N. Hegarty, J. Colombo, G. Haber, B. Chung, M. Zhou, and I. Gill, "Preliminary experience with the Niris optical coherence tomography system during laparoscopic and robotic prostatectomy," J. Endourol. 21, 814–818 (2007). https://doi.org/10.1089/end.2006.9938
4. N. Fried, S. Rais-Bahrami, G. Lagoda, A. Chuang, A. Burnett, and L. Su, "Imaging the cavernous nerves in rat prostate using optical coherence tomography," Lasers Surg. Med. 39, 36–41 (2007). https://doi.org/10.1002/lsm.20454
5. S. Rais-Bahrami, A. Levinson, N. Fried, G. Lagoda, A. Hristov, A. Chuang, A. Burnett, and L. Su, "Optical coherence tomography of cavernous nerves: a step toward real-time intraoperative imaging during nerve-sparing radical prostatectomy," Urology 72, 198–204 (2008). https://doi.org/10.1016/j.urology.2007.11.084
6. D. Freedman, R. Radke, T. Zhang, Y. Jeong, D. Lovelock, and G. Chen, "Model-based segmentation of medical imagery by matching distributions," IEEE Trans. Med. Imaging 24, 281–292 (2005). https://doi.org/10.1109/TMI.2004.841228
7. Y. Zhan and D. Shen, "Deformable segmentation of 3-D ultrasound prostate images using statistical texture matching method," IEEE Trans. Med. Imaging 25, 256–272 (2006). https://doi.org/10.1109/TMI.2005.862744
8. H. Ishikawa, D. Stein, G. Wollstein, S. Beaton, J. Fujimoto, and J. Schuman, "Macular segmentation with optical coherence tomography," Invest. Ophthalmol. Visual Sci. 46, 2012–2017 (2005). https://doi.org/10.1167/iovs.04-0335
9. A. Bagci, R. Ansari, and M. Shahidi, "A method for detection of retinal layers by optical coherence tomography image segmentation," 144 (2007).
10. K. Boyer, A. Herzog, and C. Roberts, "Automatic recovery of the optic nervehead geometry in optical coherence tomography," IEEE Trans. Med. Imaging 25, 553–570 (2006). https://doi.org/10.1109/TMI.2006.871417
11. M. Mujat, R. Chan, B. Cense, B. Park, C. Joo, T. Akkin, T. Chen, and J. de Boer, "Retinal nerve fiber layer thickness map determined from optical coherence tomography images," Opt. Express 13, 9480–9491 (2005). https://doi.org/10.1364/OPEX.13.009480
12. S. Chitchian, M. Fiddy, and N. Fried, "Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform," J. Biomed. Opt. 14, 014031 (2009). https://doi.org/10.1117/1.3081543
13. T. Weldon, W. Higgins, and D. Dunn, "Efficient Gabor filter design for texture segmentation," Pattern Recogn. 29, 2005–2015 (1996). https://doi.org/10.1016/S0031-3203(96)00047-7
14. W. Pratt, Digital Image Processing, Wiley, Hoboken, NJ (2007).
15. T. Weldon, "Removal of image segmentation boundary errors using an N-ary morphological operator," 509 (2007).
16. T. Weldon and W. Higgins, "Designing multiple Gabor filters for multitexture image segmentation," Opt. Eng. 38, 1478–1489 (1999). https://doi.org/10.1117/1.602196