Regular Articles

Image segmentation on adaptive edge-preserving smoothing

Author Affiliations
Kun He, Dan Wang

Sichuan University, School of Computer Science, No. 24 in Round One Road, Chengdu 610065, China

Xiuqing Zheng

Sichuan Normal University, School of Computer Science, No. 5 Jing An Road JinJiang District, Chengdu 610066, China

J. Electron. Imaging. 25(5), 053022 (Oct 04, 2016). doi:10.1117/1.JEI.25.5.053022
History: Received May 10, 2016; Accepted September 9, 2016

Open Access

Abstract.  Typical active contour models are widely applied in image segmentation, but they perform badly on real images with inhomogeneous subregions. To overcome this drawback, this paper proposes an edge-preserving smoothing image segmentation algorithm. First, this paper analyzes the edge-preserving smoothing conditions for image segmentation and constructs an edge-preserving smoothing model inspired by total variation. The proposed model has the ability to smooth inhomogeneous subregions and preserve edges. Then, a clustering algorithm, which reasonably trades off edge preserving against subregion smoothing according to the local information, is employed to learn the edge-preserving parameter adaptively. Finally, according to the confidence level of the segmented subregions, this paper constructs a smoothing convergence condition to avoid oversmoothing. Experiments indicate that the proposed algorithm has superior performance in precision, recall, and F-measure compared with other segmentation algorithms, and that it is insensitive to noise and inhomogeneous regions.


As a technology used to extract a region of interest automatically or semiautomatically, image segmentation is a key step in image analysis and understanding.1 It is used for object model representation, parameter extraction, object recognition, and for video encoding of objects in MPEG-4.2 To date, many segmentation methods have been developed for particular purposes, such as organ extraction in medical applications3 and object detection in remote sensing systems.4 However, each is tied to its specific purpose, and it is difficult to generalize them to arbitrary image segmentation tasks. Consequently, a uniform segmentation framework is required by researchers and developers.5

Generally, image segmentation is implemented based on similarity and dissimilarity among subregion features,6 such as color, intensity, statistical characteristics, and specific shape. However, real images contain many inhomogeneous subregions and are unavoidably affected by noise. On the one hand, inhomogeneous subregions may form weak edges or deteriorate the similarity within a subregion, i.e., its intensity uniformity. On the other hand, since noise causes pseudo-edges and weakens the significant differences among subregions,7 the nonrobustness of the subregion characteristics is aggravated.

Active contour models are popular algorithms for dividing an image into foreground and background. The basic idea is a deformable curve that conforms to various object shapes. Combining piecewise smoothing with the statistical properties of the noise, Chan and Vese proposed the region-based active contour model (CV model).8 In this model, the object and background regions are each represented by the mean of the subregion. Thus, it is insensitive to noise and improves the computational efficiency of the Mumford–Shah model.9

The results of segmentation using the CV model are unsatisfactory for real images, because inhomogeneity reduces the significant differences between the subregion means. To improve segmentation performance, Tsai and Yezzi proposed a piecewise smooth (PS) model10 that approximates the pixels of each subregion with a smooth function. Compared with the CV model, the PS model is insensitive to inhomogeneous subregions, but it is difficult to apply in practice due to its expensive computational cost. Therefore, Li and Kao proposed a local binary fitting model,11 which employs a Gaussian kernel function to approximate the neighborhood pixels of the active contour, and Peng and Liu proposed an active contour model driven by normalized local image fitting energy.12 Although these models have strong locating capabilities, their segmentation results depend on the assumed approximation function and the initial curve.

To overcome these shortcomings, many studies have been conducted. To strengthen robustness to the initial curve, Jiang and Feng proposed a segmentation model based on an improved level set and region growing, which takes the statistical information of an object as the seed.13 Based on regional similarity and the level set, Kong and Wang proposed a segmentation model that improves on the approximation-function hypothesis.14

Although region-based active contour segmentation models are generally robust to noise, they are valid only for images with homogeneous regions, given the number of subregions. Based on the relationship between contour and edge, Li et al. proposed an edge-based active contour segmentation model.15 Unfortunately, it is sensitive to noise and inhomogeneous subregions. To cope with this, images are often smoothed with a Gaussian filter. However, a Gaussian filter is an isotropic point diffusion that crosses the boundaries of subregions and causes the level set curve to converge near, rather than on, the object contours. Furthermore, a Gaussian filter with a large standard deviation may seriously blur boundaries formed by weak edges, leading to overconvergence of the curves; conversely, with too little smoothing, the curves converge prematurely. It is difficult to adaptively choose the standard deviation for different regions in an image.16 By incorporating prior object-shape information into the initial evolving curve, Yeo and Xie improved the accuracy of segmentation for regions of specific shape.17

Under- and oversegmentation occur when traditional active contour models are applied to real images, due to inhomogeneous subregions and weak edges. To smooth the inhomogeneous subregions and preserve edges, we propose two smoothing conditions for image segmentation: (1) isotropic smoothing inside the subregions and (2) anisotropic smoothing along the edges. Unfortunately, these conditions are incompatible. Inspired by total variation,18 we construct an edge-preserving smoothing model that is a compromise between the two conditions. Further, because edge locations vary, a fixed edge-preserving parameter cannot reasonably trade off edge preserving against subregion smoothing; it causes blurred edges and residual nonuniformity in the smoothing component. To solve this problem, we investigate the two-clustering of the center pixel and its four neighbors to adjust the edge-preserving parameter adaptively. Fixed-point iteration is employed to compute the smoothing component. As the number of iterations grows, the smoothing component converges to the mean of the image, and the difference between the features of the object and those of the surrounding region becomes insignificant. To avoid this, we construct a smoothing convergence condition based on the confidence level of the segmented subregions across successive smoothing components. The experimental results show that this segmentation model is insensitive to noise and inhomogeneous regions.

The outline of the paper is as follows. In the next section, two conditions on image piecewise smoothing are proposed to construct an edge-preserving smoothing model, and the clustering algorithm is employed to learn the edge-preserving parameter. In Sec. 3, a new segmentation model for the edge-preserving smoothing component is proposed. The proposed image segmentation model is implemented in Sec. 4. The experimental results are given in Sec. 5. Finally, the conclusion is given in Sec. 6.

The active contour model for image segmentation is a curve-evolution implementation based on the Mumford–Shah model.9 It is formulated as the following minimization problem:

E(u,C) = \frac{\tau}{2}\int_\Omega [u(x,y)-u_0(x,y)]^2\,dx\,dy + \int_{\Omega\setminus C} |\nabla u(x,y)|^2\,dx\,dy + \gamma|C|,  (1)

where C is the segmentation curve, u_0:\Omega\to[0,1] is a given image, and u is a piecewise smoothing component of the image u_0 that contains homogeneous subregions and significant differences among subregions. The piecewise smoothing component u is a solution of the following problem:

\inf_u\left\{E(u,C)=\frac{\tau}{2}\int_\Omega (u-u_0)^2\,dx\,dy+\int_{\Omega\setminus C} f(|\nabla u|)\,dx\,dy\right\}.  (2)

To analyze the diffusibility of the smoothing function f(|\nabla u|), the function is decomposed along the local image structure, i.e., the tangent and normal directions. The diffusibilities along the tangent and normal directions are denoted by \rho_T and \rho_N, respectively:

\rho_T=\frac{f'(|\nabla u|)}{|\nabla u|},\qquad \rho_N=f''(|\nabla u|).  (3)

Edge-Preserving Smoothing

To smooth subregions and preserve edges in real images, the diffusibility of the function f(|\nabla u|) along the tangent and normal directions should satisfy the following two conditions:

  1. Inside the subregion, where gradients are low, we would like to encourage smoothing along both the tangent and normal directions, which makes the intensities of the subregion equal or nearly equal to a constant; in other words, isotropic diffusion. Assuming the function is regular, this condition may be achieved by imposing:
    \lim_{|\nabla u|\to 0}\rho_T=\lim_{|\nabla u|\to 0}\rho_N=\alpha>0.  (4)
  2. At an edge, where the image presents a strong gradient, we prefer to diffuse along the edge and not across it. To do this, it is sufficient to annihilate the coefficient \rho_N for strong gradients while ensuring that \rho_T does not vanish:
    \lim_{|\nabla u|\to\infty}\rho_T=\beta>0,\qquad \lim_{|\nabla u|\to\infty}\rho_N=0.  (5)

In the Mumford–Shah model,9 the L2-norm of the gradient serves as the smoothing function for segmentation. Its diffusibilities along the tangent and normal directions are the same, i.e., \rho_T=\rho_N=1. The diffusion in the normal direction crosses the edge, so this function cannot satisfy the second condition. To preserve edges, Chan et al.18 proposed total variation, in which the L1-norm of the gradient replaces the L2-norm. Its diffusibility along the normal direction is zero, so it does not satisfy the first condition, which leads to pseudoedges in the smoothed subregions.

Unfortunately, the above two conditions are incompatible. Drawing on the piecewise smoothing functions of the Mumford–Shah model9 and total variation,18 we design an edge-preserving smoothing function for image segmentation:

f(|\nabla u|)=\ln(1+|\nabla u|).  (6)

Its diffusibilities in the tangent and normal directions are as follows:

\rho_T=\frac{1}{(1+|\nabla u|)\,|\nabla u|},\qquad \rho_N=\frac{1}{(1+|\nabla u|)^2}.  (7)

Inside the subregion, where gradients are low, \lim_{|\nabla u|\to 0}\rho_T=\infty and \lim_{|\nabla u|\to 0}\rho_N=1; at the edge, where gradients are strong, \lim_{|\nabla u|\to\infty}\rho_T=\lim_{|\nabla u|\to\infty}\rho_N=0 and \lim_{|\nabla u|\to\infty}\rho_N/\rho_T=1 (see Fig. 1). This function is a compromise between the two edge-preserving smoothing conditions: it preserves edges and smooths the inhomogeneous subregion. Therefore, the edge-preserving smoothing model is given as

\inf_u\left\{E^{(EP)}(u)=\frac{\tau}{2}\int_\Omega (u-u_0)^2\,dx\,dy+\int_\Omega \ln(1+|\nabla u|)\,dx\,dy\right\}.  (8)
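As a quick numeric check of the limiting behavior in Eq. (7), the two diffusibilities can be evaluated directly (a minimal Python sketch; the function names are ours):

```python
import numpy as np

def rho_T(s):
    # Tangential diffusibility for f(s) = ln(1 + s):  rho_T = f'(s)/s = 1/((1+s)s)
    return 1.0 / ((1.0 + s) * s)

def rho_N(s):
    # Normal diffusibility as given in Eq. (7):  rho_N = 1/(1+s)^2
    return 1.0 / (1.0 + s) ** 2

# Low-gradient regime: rho_N -> 1 while rho_T grows without bound (isotropic smoothing).
print(rho_N(1e-8))              # ~1.0
# High-gradient regime: both vanish and rho_N/rho_T = s/(1+s) -> 1 (edge preserved).
print(rho_N(1e8) / rho_T(1e8))  # ~1.0
```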

Fig. 1: The diffusibility of this function. The solid and dash-dot curves denote the diffusibility in the tangent and normal directions, respectively.

This problem admits a unique solution characterized by the Euler–Lagrange equation:

\tau(u-u_0)-\operatorname{div}\!\left[\frac{\nabla u}{(1+|\nabla u|)\,|\nabla u|}\right]=0.  (9)

To compute the smoothing component, we use a semi-implicit finite difference scheme. Let \Lambda be the four-neighbor region of the center pixel (i,j) and let p be a member of \Lambda; the approximation of Eq. (9) can then be written as

u(i,j)=\frac{1}{\tau+\sum_{p\in\Lambda}\omega(p)}\left[\tau u_0(i,j)+\sum_{p\in\Lambda}\omega(p)\,u(p)\right],  (10)

where \tau is the edge-preserving parameter, in other words, the weight coefficient of the center pixel, and \omega(p) is the weight coefficient of the neighbor pixel p (the relationship between \omega(p) and the gradient is shown in Fig. 2):

\omega(p)=\frac{1}{[1+|\nabla u(p)|]\,|\nabla u(p)|}.  (11)
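For concreteness, one sweep of the update in Eqs. (10) and (11) can be sketched in NumPy as follows. This is an illustrative sketch under our own discretization choices (central-difference gradients, wrap-around borders via np.roll for brevity, and a small eps to avoid division by zero in flat regions); neighbor_weights and smooth_step are hypothetical names:

```python
import numpy as np

def neighbor_weights(u, eps=1e-8):
    """Eq. (11): omega(p) = 1 / ((1 + |grad u(p)|) * |grad u(p)|),
    evaluated at every pixel with central-difference gradients."""
    gy, gx = np.gradient(u.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2) + eps  # eps guards flat regions
    return 1.0 / ((1.0 + mag) * mag)

def smooth_step(u, u0, tau):
    """One fixed-point sweep of Eq. (10) over the 4-neighborhood."""
    w = neighbor_weights(u)
    # Shifted copies give the four neighbors (wrap-around borders for brevity).
    up,    wu = np.roll(u, -1, 0), np.roll(w, -1, 0)
    down,  wd = np.roll(u,  1, 0), np.roll(w,  1, 0)
    left,  wl = np.roll(u, -1, 1), np.roll(w, -1, 1)
    right, wr = np.roll(u,  1, 1), np.roll(w,  1, 1)
    num = tau * u0 + wu * up + wd * down + wl * left + wr * right
    den = tau + wu + wd + wl + wr
    return num / den
```

On a constant image the sweep is a fixed point: the weighted mean of identical values returns the same image.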

Fig. 2:

The weight coefficient of the neighbor pixel.

Adaptive Edge-Preserving Smoothing

In traditional edge-preserving smoothing algorithms (e.g., TV18), the edge-preserving parameter \tau in Eq. (10) is usually fixed manually. If \tau\gg\omega(p), then u(i,j)\approx u_0(i,j): the smoothing component retains the redundancy of the inhomogeneous subregion, which causes the level set to converge to local optima. If \tau\ll\omega(p), then u(i,j) approximates the weighted mean of the four neighborhood pixels: the edges of the smoothing component are blurred, and the segmentation curve overconverges. In short, a fixed parameter cannot balance edge preserving against subregion smoothing according to the local information of an image.

To solve the above problem, we analyze the two-clustering of the center pixel and four neighbors based on their possible spatial relationship.

  • If all four neighbor pixels lie in the object region, the center pixel belongs to the object region by subregion connectivity.
  • If the center pixel lies on an object boundary, one of the following three cases applies:
    • One of the four neighbor pixels lies in the background and the others in the object region; there are \binom{4}{1}=4 such configurations.
    • Two of the four neighbor pixels lie in the background and the others in the object region; there are \binom{4}{2}=6 such configurations.
    • By the continuity of the object contour, three or all four neighbor pixels cannot lie in the background.

The two-clusterings of the center pixel and its four neighbors are shown in Fig. 3. As observed from Fig. 3, the edge-preserving parameter \tau is set to the median of the weight coefficients of the center pixel and its four neighbors:

\tau=k\times\operatorname{median}\left\{\frac{1}{[1+|\nabla u_0(i,j)|]\,|\nabla u_0(i,j)|},\;\omega(p)\right\}_{p\in\Lambda_0},  (12)

where k is a constant that normalizes the parameter \tau.
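A pixel-wise version of Eq. (12) can be sketched as follows (a NumPy illustration with the same hypothetical weight computation as above; np.roll again wraps at the borders, and k = 0.05 follows the suggestion given later in the Discussion):

```python
import numpy as np

def adaptive_tau(u0, k=0.05, eps=1e-8):
    """Eq. (12): tau = k * median over {center weight, four neighbor weights},
    computed per pixel from the original image u0."""
    gy, gx = np.gradient(u0.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2) + eps
    w = 1.0 / ((1.0 + mag) * mag)  # same form as Eq. (11), evaluated on u0
    # Stack the center weight with its four shifted (neighbor) copies.
    stack = np.stack([w,
                      np.roll(w, -1, 0), np.roll(w, 1, 0),
                      np.roll(w, -1, 1), np.roll(w, 1, 1)])
    return k * np.median(stack, axis=0)
```

The median of five positive weights stays positive, so the resulting \tau map is strictly positive everywhere.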

Fig. 3: The two-clustering of the center pixel and its four neighbors; the white and black circles denote object and background, respectively. (a) All pixels lie in the object region, (b) one of the four neighbor pixels lies in the background and the others in the object region, and (c) two of the four neighbor pixels lie in the background and the others in the object region.

Based on the analysis in Sec. 2, the segmentation model on the smoothing component is proposed as the following minimization problem:

\inf_C\left\{E(u,C)=\int_\Omega [u(x,y)-u_0(x,y)]^2\,dx\,dy+\int_\Omega \ln(1+|\nabla u(x,y)|)\,dx\,dy+\gamma|C|\right\}.  (13)

During image segmentation, the curve may undergo topological deformation (splitting or merging). To cope with this, active contours based on the level set are applied to image segmentation. The basic idea is to represent the contour as the zero level set of an implicit function \varphi(x,y), i.e., C=\{(x,y)\,|\,\varphi(x,y)=0\}. The inside region is \{(x,y)\,|\,\varphi(x,y)<0\} and the outside region is \{(x,y)\,|\,\varphi(x,y)>0\}. For simplicity, both regions are represented via the Heaviside function H(\varphi), and the curve is represented by the Dirac measure \delta(\varphi), the derivative of H(\varphi), where H(\varphi) and \delta(\varphi) are defined, respectively, as

H(\varphi)=\begin{cases}1, & \varphi\ge 0\\ 0, & \varphi<0,\end{cases}\qquad \delta(\varphi)=\frac{dH(\varphi)}{d\varphi}.  (14)

However, \varphi(x,y) cannot satisfy the regularity condition |\nabla\varphi|=1, so the penalty term is introduced:15

p(\varphi)=\frac{1}{2}\int_\Omega (|\nabla\varphi|-1)^2\,dx\,dy.  (15)

Since the circumference and area of the closed curve should become smaller, the optimal segmentation curve is represented as

\inf_\varphi\left\{\epsilon(\varphi)=\lambda\int_\Omega g\,\delta(\varphi)|\nabla\varphi|\,dx\,dy+\nu\int_\Omega g\,H(\varphi)\,dx\,dy+\frac{\mu}{2}\int_\Omega (|\nabla\varphi|-1)^2\,dx\,dy\right\},  (16)

where \lambda and \nu are the weights of the circumference and area of the curve, respectively, and g is the edge indicator function of the smoothing component:

g=(1+|\nabla u|)^{-1}.  (17)

Where the gradients are low, the edge indicator function is almost the maximum over the entire image; at edges, it is near its minimum, and the level set curve converges to the boundary.

Consequently, we incorporate the edge-preserving smoothing model into the above segmentation model and construct the energy function of the edge-preserving smoothing segmentation model:

\inf_\varphi\Big\{E(u,\varphi)=\int_\Omega [u(x,y)-u_0(x,y)]^2\,dx\,dy+\frac{\tau}{2}\int_\Omega \ln[1+|\nabla u(x,y)|]\,dx\,dy+\lambda\int_\Omega g\,\delta(\varphi)|\nabla\varphi|\,dx\,dy+\nu\int_\Omega g\,H(\varphi)\,dx\,dy+\frac{\mu}{2}\int_\Omega (|\nabla\varphi|-1)^2\,dx\,dy\Big\}.  (18)

To minimize the function E(u,\varphi), we denote the Gateaux derivative19 of E(u,\varphi) by \partial E(u,\varphi)/\partial\varphi. By the calculus of variations, the Gateaux derivative of E(u,\varphi) in Eq. (18) can be written as

\frac{\partial E(u,\varphi)}{\partial\varphi}=-\mu\left[\Delta\varphi-\operatorname{div}\!\left(\frac{\nabla\varphi}{|\nabla\varphi|}\right)\right]-\lambda\,\delta(\varphi)\operatorname{div}\!\left(g\,\frac{\nabla\varphi}{|\nabla\varphi|}\right)-\nu g\,\delta(\varphi),  (19)

where \Delta is the Laplacian operator. Therefore, \varphi satisfies the Euler–Lagrange equation. By introducing an artificial time variable t, we use the steepest descent process to minimize E(u,\varphi), whose gradient flow is

\frac{\partial\varphi}{\partial t}=\mu\left[\Delta\varphi-\operatorname{div}\!\left(\frac{\nabla\varphi}{|\nabla\varphi|}\right)\right]+\lambda\,\delta(\varphi)\operatorname{div}\!\left(g\,\frac{\nabla\varphi}{|\nabla\varphi|}\right)+\nu g\,\delta(\varphi).  (20)

In Eq. (20), the Dirac measure \delta(\varphi) is discontinuous. When calculating the level set, the continuous approximation \delta_a(\varphi) (a=1.5) is used instead of \delta(\varphi):

\delta_a(\varphi)=\begin{cases}0, & |\varphi|>a\\ \frac{1}{2a}\left[1+\cos\!\left(\frac{\pi\varphi}{a}\right)\right], & |\varphi|\le a.\end{cases}  (21)

In this paper, \partial\varphi/\partial t is approximated by the forward difference and \nabla\varphi by the central difference. The approximation of Eq. (20) for the smoothing component u^m can be written as

\frac{\varphi_{i,j}^{k+1}-\varphi_{i,j}^{k}}{\Delta t}=\mu\left[\Delta\varphi_{i,j}^{k}-\operatorname{div}\!\left(\frac{\nabla\varphi_{i,j}^{k}}{|\nabla\varphi_{i,j}^{k}|}\right)\right]+\lambda\,\delta_a(\varphi_{i,j}^{k})\operatorname{div}\!\left(g_{i,j}^{m}\,\frac{\nabla\varphi_{i,j}^{k}}{|\nabla\varphi_{i,j}^{k}|}\right)+\nu g_{i,j}^{m}\,\delta_a(\varphi_{i,j}^{k}),  (22)

where \Delta t is the time step and g_{i,j}^{m} is the edge indicator function of the smoothing component u^m. The component u^m is calculated by the fixed-point iteration algorithm

u_{i,j}^{m}=\frac{1}{\tau^{m}+\sum_{p\in\Lambda_0}\omega^{m}(p)}\left[\sum_{p\in\Lambda_0}\omega^{m}(p)\,u^{m-1}(p)+\tau^{m}u_0(i,j)\right].  (23)

Without constraint conditions, the smoothing component of Eq. (23) converges to the mean of the initial image, which makes the difference between the features of the object and those of the surrounding region insignificant. To avoid this phenomenon, we introduce the confidence level of the segmented subregions over two adjacent iterations of the smoothing component, defined as follows:

\Pr=\frac{\operatorname{card}(A^{m}\cap A^{m-1})}{\max[\operatorname{card}(A^{m}),\operatorname{card}(A^{m-1})]}.  (24)

Here the sets A^{m} and A^{m-1} represent the segmented subregions \{(x,y)\,|\,\varphi(x,y)\le 0\} for the smoothing components u^{m} and u^{m-1}, respectively. When the confidence level satisfies the following condition, the smoothing is terminated:

\Pr\ge T,  (25)
where T is the threshold of the regional confidence level.
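The stopping test of Eqs. (24) and (25) amounts to measuring the overlap of the regions where \varphi \le 0 from two consecutive smoothing components. A minimal sketch (function names are ours; T = 0.98 follows the suggestion given later in the Discussion):

```python
import numpy as np

def region_confidence(phi_m, phi_m1):
    """Eq. (24): overlap ratio of the segmented regions {phi <= 0}
    obtained from two consecutive smoothing components."""
    A_m = phi_m <= 0
    A_m1 = phi_m1 <= 0
    inter = np.count_nonzero(A_m & A_m1)
    denom = max(np.count_nonzero(A_m), np.count_nonzero(A_m1))
    return inter / denom if denom else 1.0

def converged(phi_m, phi_m1, T=0.98):
    """Eq. (25): terminate smoothing once the confidence reaches T."""
    return region_confidence(phi_m, phi_m1) >= T
```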

The steps of image segmentation are as follows:

{Initialize: k, \lambda, \mu, \nu, \Delta t, T, \varphi_0(x,y), and u^0=u_0}

N is the iteration counter of image smoothing

Begin

N:= 0;

Repeat

  N := N + 1;

  Compute the weight coefficient \omega^N(p) of the smoothing component using:

\omega^{N}(p)=\frac{1}{(1+|\nabla u^{N-1}(p)|)\,|\nabla u^{N-1}(p)|}.  (26)

  Compute the edge-preserving parameter \tau^N using:

\tau^{N}=k\times\operatorname{median}\left\{\frac{1}{(1+|\nabla u_0(i,j)|)\,|\nabla u_0(i,j)|},\;\omega^{N}(p)\right\}_{p\in\Lambda_0}.  (27)

  Compute the smoothing component u^N using:

u_{i,j}^{N}=\frac{1}{\tau^{N}+\sum_{p\in\Lambda_0}\omega^{N}(p)}\left[\sum_{p\in\Lambda_0}\omega^{N}(p)\,u^{N-1}(p)+\tau^{N}u_0(i,j)\right].  (28)

  Compute the edge indicator function of the smoothing component u^N using:

g(u^{N})=\frac{1}{1+|\nabla u^{N}|}.  (29)

Segment the smoothing component u^N using Eq. (22).

Until

The convergence condition: Eq. (25)

Output: the result of segmentation.

End
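For concreteness, the level-set part of the loop above, the regularized Dirac of Eq. (21) and one explicit evolution step of Eq. (22), can be sketched in NumPy as follows. This is an illustrative sketch under our own discretization choices (central-difference gradients, wrap-around borders via np.roll) using the parameter values reported in the Implementation Details; all function names are ours:

```python
import numpy as np

def dirac_a(phi, a=1.5):
    """Regularized Dirac measure of Eq. (21)."""
    return np.where(np.abs(phi) <= a,
                    (1.0 / (2.0 * a)) * (1.0 + np.cos(np.pi * phi / a)),
                    0.0)

def div_g_unit_grad(g, phi, eps=1e-8):
    """div( g * grad(phi)/|grad(phi)| ) via central differences."""
    py, px = np.gradient(phi)          # derivatives along rows (y) and columns (x)
    mag = np.sqrt(px ** 2 + py ** 2) + eps
    nx, ny = g * px / mag, g * py / mag
    return np.gradient(ny, axis=0) + np.gradient(nx, axis=1)

def level_set_step(phi, g, dt=5.0, mu=0.04, lam=5.0, nu=3.0):
    """One explicit step of Eq. (22), defaults from the experiments."""
    # 5-point Laplacian with wrap-around borders (for brevity).
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
    reg = lap - div_g_unit_grad(np.ones_like(phi), phi)  # distance regularization
    d = dirac_a(phi)
    return phi + dt * (mu * reg +
                       lam * d * div_g_unit_grad(g, phi) +
                       nu * g * d)
```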

Implementation Details

The experiments are conducted using VC 6.0 on a PC with an Intel Core i5 CPU @ 3.40 GHz and 4 GB of RAM, without any particular code optimization. In the implementation of the proposed model, we use the parameters \lambda=5.0, \mu=0.04, \nu=3.0 and time step \Delta t=5.0 for all experiments. We propose the following function as the initial function \varphi_0(x,y). Let \partial\Omega_0 be the set of all points on the boundary of \Omega_0, a subset of the image domain \Omega. Then the initial function \varphi_0(x,y) is defined as

\varphi_0(x,y)=\begin{cases}-\xi, & (x,y)\in\Omega_0-\partial\Omega_0\\ 0, & (x,y)\in\partial\Omega_0\\ \xi, & (x,y)\in\Omega-\Omega_0,\end{cases}  (30)

where \xi is a constant. We suggest choosing \xi larger than 2a, where a is the width in the definition of the regularized Dirac function \delta_a(\varphi) in Eq. (21).
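For a rectangular \Omega_0, the initialization of Eq. (30) can be sketched as follows (NumPy; the function name and the (top, bottom, left, right) convention are our own, with \xi=4>2a for a=1.5):

```python
import numpy as np

def initial_phi(shape, rect, xi=4.0):
    """Eq. (30): -xi inside Omega_0, 0 on its boundary, +xi outside.

    `rect` = (top, bottom, left, right) delimits a rectangular Omega_0
    with half-open index ranges, as usual in NumPy slicing."""
    t, b, l, r = rect
    phi = xi * np.ones(shape)   # outside Omega_0
    phi[t:b, l:r] = -xi         # inside Omega_0
    phi[t, l:r] = 0.0           # boundary rows
    phi[b - 1, l:r] = 0.0
    phi[t:b, l] = 0.0           # boundary columns
    phi[t:b, r - 1] = 0.0
    return phi
```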

In this paper, we use three standard, widely agreed-upon, and easy-to-understand measures for evaluating a segmentation model: precision, recall, and F-measure. The first two are based on the overlapping area between the ground truth and the segmented regions. Since neither precision nor recall alone can comprehensively evaluate segmentation quality, the F-measure is used as their harmonic mean. For a segmented object region, we convert it to a binary mask M and compute precision and recall by comparing M with the ground truth G:

\text{precision}=\frac{|M\cap G|}{|M|},\qquad \text{recall}=\frac{|M\cap G|}{|G|},\qquad \text{F-measure}=\frac{2\times\text{precision}\times\text{recall}}{\text{precision}+\text{recall}}.  (31)
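Given binary masks, Eq. (31) is straightforward to compute (a minimal sketch; the function name is ours):

```python
import numpy as np

def prf(M, G):
    """Precision, recall, and F-measure of Eq. (31) for binary masks
    M (segmentation result) and G (ground truth)."""
    M, G = M.astype(bool), G.astype(bool)
    inter = np.count_nonzero(M & G)
    precision = inter / np.count_nonzero(M)
    recall = inter / np.count_nonzero(G)
    f = 2.0 * precision * recall / (precision + recall)
    return precision, recall, f
```

On identical masks, prf returns (1.0, 1.0, 1.0); a mask covering only half of the ground truth keeps precision at 1.0 but halves recall.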

Discussion

In the proposed model, the image is smoothed using the edge-preserving smoothing model, so segmentation performance depends on the parameter k in Eq. (12). To analyze the relationship between k and the segmentation scores (precision, recall, and F-measure), a 480×320-pixel image with grass and sand from the Berkeley segmentation database is smoothed with different values of k; the segmentation results are shown in Fig. 4. The initial curve and ground truth are represented by a red rectangle and a yellow curve, respectively, in the top left-hand subimage of Fig. 4.

Fig. 4:

The results of segmentation, edge indicator functions, and smoothing components with different parameter k. (a) The flowerbed, (b) the edge indicator function, and (c) the cartoon component. Row 1: original images and initial curves, rows 2 to 5: the results of segmentation, edge indicator function, and smoothing component using 0.005, 0.05, 0.2, 0.25, and 0.5, respectively.

As illustrated in the second row, with k=0.005 the subregion pixels are close to constant, but the edges are blurred by the smoothing algorithm. The blurred edges lead to overconvergence of the level set, so parts of the object region are mistaken for background. Thus the recall is low, at 0.862; the precision and F-measure are 0.995 and 0.915, respectively.

As illustrated in the last row, when k=0.5 the smoothing component retains remnants of the inhomogeneous subregion, which leads the level set to converge at a local optimum. The precision is low, and the F-measure is 0.848.

Figure 5 shows the CPU times and segmentation scores for Fig. 4 using this model with different values of k. When k is smaller, the precision, recall, and F-measure of the segmentation are lower and the CPU time is shorter.

Fig. 5: The CPU time (in seconds) and scores of segmentation in Fig. 4. (a) The CPU time of the segmentation using different values of the parameter k. (b) The red, green, and blue curves show the F-measure, precision, and recall of the segmentation with different values of the parameter k, respectively.

If k is large, the remnant inhomogeneous subregion leads to the level set's fast convergence at a local optimum. When k\in[0.05,0.18], this model preserves edges and smooths the inhomogeneous subregion; the maximum difference in F-measure is 0.005, e.g., the F-measures of the segmentation with k=0.05 and 0.18 are 0.98 and 0.975, respectively.

In this model, the smoothing component converges to the mean of the image without constraint conditions, which makes the difference between the features of the object and those of the surrounding region insignificant. To avoid this, we use a threshold on the confidence level of the segmented subregions. To validate how the threshold affects segmentation performance, a 480×320-pixel potted-tree image from the Berkeley segmentation database, in which some subregions (the crown of the tree) are inhomogeneous, is segmented with different thresholds.

The segmentation results are shown in Fig. 6. The initial curve and the ground truth, represented by a red rectangle and a blue curve, respectively, are shown in Fig. 6(a). With the threshold T=0.95, the smoothing component retains an inhomogeneous subregion, which leads to parts of the background being mistaken for the object [Fig. 6(b)]; the precision of segmentation is 0.937 and the F-measure is 0.959. When T=0.99, the weak edge is smoothed away [see the circle in Fig. 6(f)] and the computation time is longer; the F-measure, recall, and precision are 0.978, 0.961, and 0.997, respectively. The CPU time and segmentation scores for the different thresholds are shown in Fig. 7. As the threshold increases, the computation time grows, and when the threshold approaches one, the F-measure of the segmentation declines.

Fig. 6:

The results of segmentation with different thresholds. (a) Initial curve and the ground truth and (b)–(f) the results of segmentation using different thresholds 0.95, 0.96, 0.97, 0.98, and 0.99, respectively.

Fig. 7:

The CPU time and score of segmentation in Fig. 6. (a) The CPU time of the segmentation using the different threshold T. (b) The red, green, and blue curves show the F-measure, precision, and recall of segmentation with the different threshold T, respectively.

The parameter k is the constant that normalizes the parameter \tau, and T is the threshold of the regional confidence level. To preserve edges and smooth the inhomogeneous subregion, we suggest choosing k=0.05 and T=0.98.

Segmentation Algorithm Comparisons

To test the segmentation performance of the proposed method on real images with slightly inhomogeneous subregions, experiments are carried out in comparison with Li's model,15 the TB model,20 and the CV model.8 These algorithms are chosen because all four employ the level set. Li's model and the TB model exploit edge features; the image is preprocessed by a Gaussian filter and by classical TV, respectively. In the TB model, TV smoothing and smoothing-component segmentation are separate steps, and the number of iterations is not taken into consideration. The CV model uses the regional characteristics of the subregion, representing the object by the subregion mean. Images of different sizes, from the Internet and the Berkeley segmentation database, are segmented; partial results are shown in Fig. 8. The effects of the four algorithms on the homogeneous image are almost identical, as in Fig. 8(a). For the image with weak edges [Fig. 8(b)], the segmentation results of the proposed method and the TB model are better than those of the other two models. Segmentation performance of the CV model is poor for inhomogeneous subregions, e.g., Fig. 8(c), where the F-measure is 0.812; the reason is that the region is represented only by the intensity mean of the subregion.

Fig. 8: Comparison of the proposed method with Li's model,15 the TB model,20 and the CV model8 on real images. (a) The lotus, (b) the eagles, and (c) the butterfly. Row 1: original images and initial curves, row 2: segmentation results of the CV model, row 3: segmentation results of Li's model, row 4: segmentation results of the TB model, row 5: segmentation results of the proposed model, and row 6: the ground truth.

However, the effect of the proposed method for images with a seriously inhomogeneous region, such as the images in Fig. 9, is better than that of the other three models.

Fig. 9: Comparison of the proposed method with Li's model,15 the TB model,20 and the CV model8 on real images with serious inhomogeneity. (a) The blossom, (b) the viburnum, and (c) the cycas. Row 1: original images and initial curves, row 2: segmentation results of the CV model, row 3: segmentation results of Li's model, row 4: segmentation results of the TB model, row 5: segmentation results of the proposed model, and row 6: the ground truth.

The object region is divided into many subregions by the CV model8 because the object region contains many subregions with different intensities. Li's model15 avoids oversegmentation, but the segmentation curve stays far from the true boundaries where gradients are low. The positional accuracy of the TB model20 is higher than that of Li's model, but oversegmentation occurs, as shown in Fig. 9(c). The segmentation curve of the proposed method cannot locate object boundaries formed by weak edges [e.g., Fig. 9(a)]. For the images in Figs. 8 and 9, the CPU times and segmentation scores are given in Table 1.

Table 1. The comparison of CPU time and scores of segmentation in Figs. 8 and 9.

Compared with Li's model, the TB model, and the CV model on real images with inhomogeneous subregions and weak edges, the proposed method performs better. Nevertheless, its computation time is high, because it uses iterative smoothing to deal with the inhomogeneous subregion, whereas Li's model applies Gaussian smoothing only once and the CV model does not smooth the image at all. For images of the same size, the number of iterations mainly depends on the degree of regional inhomogeneity; e.g., for the images in Figs. 8(b) and 9(a), the CPU times of the proposed method are 8.985 and 11.66 s, respectively.

To test the proposed method's robustness to noise, segmentation experiments on a 320×240 image degraded with additive white noise are conducted and compared with Li's model,15 the TB model,20 and the CV model.8 Partial results are shown in Fig. 10. As the PSNR decreases, the isotropic diffusion in Li's model blurs the object contour, and the fixed variance of the Gaussian kernel cannot remove all kinds of noise, so the level set curve cannot locate accurately. The subregions in the object are separated by the CV model, and its segmentation results worsen as the PSNR decreases. Compared with the ground truth, the positional accuracy of the segmentation curve of the proposed method is higher than that of the TB model: the edge-preserving parameter preserves edges and smooths subregions according to local information. The scores of the different models at different PSNRs are shown in Table 2.

Fig. 10: Comparison of the proposed method with Li's model,15 the TB model,20 and the CV model8 on real images with noise. (a) The original image, (b) noisy image with PSNR = 23.4, and (c) noisy image with PSNR = 18.8. Row 1: noisy images and initial curves, row 2: segmentation results of the CV model, row 3: segmentation results of Li's model, row 4: segmentation results of the TB model, row 5: segmentation results of the proposed model, and row 6: the ground truth.

Table 2. The scores of different algorithms on noisy images (where Pre, Rec, and F-M denote precision, recall, and F-measure, respectively).

From Table 2, as image quality decreases, the precision and F-measure of all four segmentation models fall. The variances of the F-measure for the proposed method, Li's model,15 the TB model,20 and the CV model8 are 0.015, 0.081, 0.047, and 0.043, respectively; the proposed method's variance is the smallest. The means of the F-measure for the proposed method, Li's model, the TB model, and the CV model are 0.892, 0.745, 0.878, and 0.829, respectively; the proposed method's mean is the highest. This shows that the proposed method is insensitive to noise, although its computation time is longer than that of the other three models. The CPU time comparison for segmentation of noisy images is shown in Table 3.

Table 3. The comparison of CPU time (in seconds) on noisy images.

To test the proposed method's robustness to salt-and-pepper noise, segmentation experiments on a 500×375 degraded image are conducted and compared with Li's model,15 the TB model,20 and the CV model.8 Partial results are shown in Fig. 11. Linear (Gaussian) smoothing cannot effectively remove salt-and-pepper noise, so Li's segmentation curve cannot converge to the object contour; the F-measure is 0.849. The CV model converges to the object contour, but oversegmentation exists; the F-measure is 0.909 and the recall is 0.853. Nonlinear smoothing (TV or a median filter) removes salt-and-pepper noise effectively; the precisions of the proposed model and the TB model are 0.994 and 0.986, respectively. In the TB model, TV smoothing and smoothing-component segmentation are separate steps, so it cannot adaptively adjust the number of smoothing iterations against the region-confidence level. The F-measure of the proposed method is 0.98, which is 0.021 higher than that of the TB model.

Fig. 11: Comparison of the proposed method with Li’s model,15 the TB model,20 and the CV model8 on real images with salt-and-pepper noise. (a) Initial curves, (b) the ground truth, (c) the proposed model, (d) segmentation results of the CV model, (e) segmentation results of Li’s model, and (f) segmentation results of the TB model.

To improve the segmentation performance of active contour models on real images, we propose an image segmentation model based on edge-preserving smoothing. Compared with Li’s model, the CV model, and the TB model on real images, the experimental results show that this method is insensitive to noise and can deal with inhomogeneous subregions. However, the proposed edge-preserving smoothing only retains edge information and cannot sharpen weak edges, so the proposed method cannot precisely locate an object contour formed by a weak edge. Furthermore, the computational cost is high. In the future, we plan to design an efficient model to sharpen weak edges.

This work was supported by the Sichuan Province Natural Science Foundation of China (Grant No. 2013SZ0157). Kun He and Dan Wang developed the improved algorithm and the structure of the article. The algorithm implementation and the article writing were collaborative efforts. Partial experimental results in the article were provided by Xiuqing Zheng.

References

1. Y. K. Sen et al., “Image segmentation methods for intracranial aneurysm haemodynamic research,” J. Biomech. 47(5), 1014–1019 (2014).
2. R. C. Zhao and Y. D. Ma, “A novel region segmentation algorithm with neural network for segmented image coding,” Acta Electron. Sin. 42(7), 1277–1283 (2014).
3. M. Caon et al., “Computer-assisted segmentation of CT images by statistical region merging for the production of voxel models of anatomy for CT dosimetry,” Australas. Phys. Eng. Sci. Med. 37(2), 393–403 (2014).
4. X. Yang et al., “Improving level set method for fast auroral oval segmentation,” IEEE Trans. Image Process. 23(7), 2854–2865 (2014).
5. J. B. Shen, Y. F. Du, and X. L. Li, “Interactive segmentation using constrained Laplacian optimization,” IEEE Trans. Circuits Syst. Video Technol. 24(7), 1086–1099 (2014).
6. M. W. Khan, “A survey: image segmentation techniques,” Int. J. Future Comput. Commun. 3(2), 89–93 (2014).
7. L. Wang et al., “Joint segmentation and recognition of categorized objects from noisy web image collection,” IEEE Trans. Image Process. 23(9), 4070–4086 (2014).
8. T. F. Chan and L. Vese, “Active contours without edges,” IEEE Trans. Image Process. 10(2), 266–277 (2001).
9. D. Mumford and J. Shah, “Optimal approximations of piecewise smooth functions and associated variational problems,” Commun. Pure Appl. Math. 42(5), 577–685 (1989).
10. A. Tsai et al., “Curve evolution implementation of the Mumford–Shah functional for image segmentation, denoising, interpolation, and magnification,” IEEE Trans. Image Process. 10(8), 1169–1186 (2001).
11. C. Li et al., “Implicit active contours driven by local binary fitting energy,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1–7 (2007).
12. Y. Peng et al., “Active contours driven by normalized local image fitting energy,” J. Syst. Eng. Electron. 25(2), 307–313 (2014).
13. H. Y. Jiang and R. J. Feng, “Image segmentation method research based on improved variational level set and region growth,” Acta Electron. Sin. 40(8), 1659–1664 (2012).
14. D. Kong and G. Wang, “Region-similarity based active contour model for SAR image segmentation,” J. Comput.-Aided Des. Comput. Graphics 22(9), 1554–1560 (2010).
15. C. Li et al., “Level set evolution without re-initialization: a new variational formulation,” in Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, pp. 430–436 (2005).
16. Q. Wen et al., “Decomposition and active contour method for medical noise image segmentation,” J. Comput.-Aided Des. Comput. Graphics 23(11), 1882–1889 (2011).
17. S. Y. Yeo et al., “Segmentation of biomedical images using active contour model with robust image feature and shape prior,” Int. J. Numer. Methods Biomed. Eng. 30(2), 232–248 (2014).
18. T. F. Chan, S. Osher, and J. H. Shen, “The digital TV filter and nonlinear denoising,” IEEE Trans. Image Process. 10(2), 231–241 (2001).
19. L. Evans, Partial Differential Equations, American Mathematical Society, Providence (1998).
20. K. He, X. Q. Zheng, and Y. L. Zhang, “Image segmentation on texture blurring,” J. Sichuan Univ. 47(4), 111–117 (2015).

Kun He received his PhD in electrical and computer engineering from Sichuan University in 2006. Since 2006, he has been a professor and research fellow in the School of Computer Science, Sichuan University. His research interests include pattern recognition, computer vision, and image processing.

Dan Wang received her bachelor's degree in software engineering from Sichuan University in 2014. She is currently a graduate student in software engineering at the Key National Defense Laboratory of Visual Synthesis Graphic and Image, Sichuan University. Her work focuses on pattern recognition, image processing, and medical image analysis.

Xiuqing Zheng received her PhD in computer science and technology from Sichuan University. She currently serves as the associate dean of the Applied Technology College at Sichuan Normal University. Her research interests include intelligent information processing and image processing. She has undertaken and presided over many scientific and technological projects.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.



Figures

Fig. 1: The diffusibility performance of this function. The solid and dash-dot curves denote the diffusibility in the tangent and normal directions, respectively.

Fig. 2: The weight coefficient of the neighbor pixel.

Fig. 3: The two-clustering of the center pixel and its four neighbors; the white and black circles denote object and background, respectively. (a) All pixels in the object region, (b) one of the four neighbor pixels lies in the background and the others in the object region, and (c) two of the four neighbor pixels lie in the background and the others in the object region.

Fig. 4: The results of segmentation, edge indicator functions, and smoothing components with different parameter k. (a) The flowerbed, (b) the edge indicator function, and (c) the cartoon component. Row 1: original images and initial curves; rows 2 to 5: the results of segmentation, edge indicator function, and smoothing component using k = 0.005, 0.05, 0.2, 0.25, and 0.5, respectively.

Fig. 5: The CPU time (in seconds) and scores of the segmentation in Fig. 4. (a) The CPU time of the segmentation using different values of the parameter k. (b) The red, green, and blue curves show the F-measure, precision, and recall of the segmentation using different values of the parameter k, respectively.

Fig. 6: The results of segmentation with different thresholds. (a) Initial curve and the ground truth and (b)–(f) the results of segmentation using thresholds 0.95, 0.96, 0.97, 0.98, and 0.99, respectively.

Fig. 7: The CPU time and scores of the segmentation in Fig. 6. (a) The CPU time of the segmentation using different values of the threshold T. (b) The red, green, and blue curves show the F-measure, precision, and recall of the segmentation with different values of the threshold T, respectively.

Fig. 8: Comparison of the proposed method with Li’s model,15 the TB model,20 and the CV model8 on real images. (a) The lotus, (b) the eagles, and (c) the butterfly. Row 1: original images and initial curves; row 2: segmentation results of the CV model; row 3: segmentation results of Li’s model; row 4: segmentation results of the TB model; row 5: segmentation results of the proposed model; and row 6: the ground truth.

Fig. 9: Comparison of the proposed method with Li’s model,15 the TB model,20 and the CV model8 on real images with serious inhomogeneity. (a) The blossom, (b) the viburnum, and (c) the cycas. Row 1: original images and initial curves; row 2: segmentation results of the CV model; row 3: segmentation results of Li’s model; row 4: segmentation results of the TB model; row 5: segmentation results of the proposed model; and row 6: the ground truth.

Fig. 10: Comparison of the proposed method with Li’s model,15 the TB model,20 and the CV model8 on real images with noise. (a) The original image, (b) noisy image with PSNR=23.4, and (c) noisy image with PSNR=18.8. Row 1: noisy images and initial curves; row 2: segmentation results of the CV model; row 3: segmentation results of Li’s model; row 4: segmentation results of the TB model; row 5: segmentation results of the proposed model; and row 6: the ground truth.

Fig. 11: Comparison of the proposed method with Li’s model,15 the TB model,20 and the CV model8 on real images with salt-and-pepper noise. (a) Initial curves, (b) the ground truth, (c) the proposed model, (d) segmentation results of the CV model, (e) segmentation results of Li’s model, and (f) segmentation results of the TB model.

Tables

Table 1: The comparison of CPU time and scores of the segmentation in Figs. 8 and 9.

Table 2: The scores of different algorithms on noisy images (where Pre, Rec, and F-M denote precision, recall, and F-measure, respectively).

Table 3: The comparison of CPU time (in seconds) on noisy images.

