3 June 2014
Sequential application of viscous opening and lower leveling for three-dimensional brain extraction on magnetic resonance imaging T1
Jorge Domingo Mendiola-Santibañez, Martín Gallegos-Duarte, Miguel Octavio Arias-Estrada, Israel Marcos Santillán-Méndez, Juvenal Rodríguez-Reséndiz, Iván Ramón Terol-Villalobos
Abstract
A composition of the viscous opening and the lower leveling is introduced to extract the brain in magnetic resonance imaging T1. The new transformation disconnects chained components and provides better control over the reconstruction of the marker inside the original image. However, the sequential operator requires setting several parameters, which makes its application difficult. For this reason, a simplification is carried out to obtain a more practical method. The proposed morphological transformations were tested with the Internet Brain Segmentation Repository (IBSR) database, which is used as a benchmark in the community. The results are compared using the Jaccard and Dice indices with respect to (i) manual segmentations obtained from the IBSR, (ii) mean indices reported in the current literature, and (iii) segmentations obtained from the Brain Extraction Tool, since this is one of the most popular algorithms used for brain segmentation. The average Jaccard and Dice indices indicate that the reduced transformation produces results similar to the other methods reported in the literature, while the sequential operator presents a better performance.

1.

Introduction

The segmentation of the brain is a task commonly performed in neuroimaging laboratories. The difficulty and importance of the skull-stripping problem have led to a wide range of proposals being developed to tackle it. Some techniques reported in the literature to solve this issue are, for example, surfacing models,1 deformable models,13 watershed,4,5 morphology,6 atlas-based methods,7 hybrid techniques,8,9 fuzzy regions of interest,10 histogram analysis,11 active contours,12,13 multiresolution approaches,14 multiatlas propagation and segmentation (MAPS),15,16 topological constraints,17 and others. Some review papers concerning brain segmentation can be found in Refs. 18, 19, 20, and 21.

The problem that arises when many viable techniques are available is to choose the ones with the best performance for a particular visualization task. In Refs. 22, 23, and 24, the authors selected popular skull-stripping algorithms reported in the literature and carried out a comparison among them. These algorithms include the Brain Extraction Tool (BET),3 3dIntracranial,25 the Hybrid Watershed Algorithm,8 the Brain Surface Extractor (BSE),26 and Statistical Parametric Mapping v.2 (SPM2).27 The two common and popular methods mentioned in Refs. 22 and 24 are BET and BSE. According to the results presented in Ref. 23, BET and BSE produce similar brain extractions if adequate parameters are used in those algorithms. Interesting information about BET is reported in Ref. 28, where the authors found that the BET algorithm's performance improves after the removal of the neck slices. Due to the popularity of BET, we compare our results with that algorithm. Two important characteristics of BET are that it is fast and that it generates approximate segmentations.

In this paper, a morphological transformation that disconnects chained components is proposed and applied to segment the brain from magnetic resonance imaging (MRI) T1. The operator is built as a composition of the viscous opening29 and the lower leveling,30 and it is implemented in MATLAB R2010a on a 2.5 GHz Intel Core i5 processor with 2 GB of RAM. To illustrate the performance of our proposals, two brain MRI datasets of 20 and 18 normal subjects,31 obtained from the Internet Brain Segmentation Repository (IBSR) and developed by the Centre for Morphometric Analysis (CMA) at the Massachusetts General Hospital ( http://www.nitrc.org), were processed.

In order to introduce our proposals, Sec. 2 provides a background on some morphological transformations such as the opening and closing by reconstruction,32 the viscous opening, and the lower leveling. Other approaches to viscous transformations can be found in Refs. 33, 34, 35, and 36; however, these transformations work differently because they consider structuring elements that change dynamically, whereas our proposals work with a geodesic approach.29

In Sec. 3, a new transformation is built through the composition of the viscous opening and the lower leveling. Because the composed transformation uses several parameters, a simplification is introduced to facilitate its application. Such a reduction yields approximate segmentations and requires less execution time. It is worth mentioning that both operators employ size parameters deduced from a granulometric analysis.37

The experimental results are presented in Sec. 4. In Sec. 4.1, an explanation is given about the parameters involved in the proposed transformations and the performance of each one is illustrated with several pictures. In Sec. 4.2, the results obtained with the morphological transformations are compared using the mean values of the Jaccard38 and Dice39 indices with respect to those obtained from: (i) the BET algorithm and (ii) the results reported in Refs. 11, 12, and 40, which utilize the same databases. In Sec. 4.3, the advantages and disadvantages of our method for brain MR image extraction are presented. Section 5 contains our conclusions.

2.

Background on Some Morphological Transformations

2.1.

Opening and Closing by Reconstruction

In mathematical morphology (MM), the basic transformations are the erosion $\varepsilon_{\mu B}(f)(x)$ and the dilation $\delta_{\mu B}(f)(x)$, where B represents the three-dimensional (3-D) structuring element with its origin at the center. Figure 1 illustrates the shape of the structuring element used in this paper. $\breve{B}$ denotes the transposed set of B with respect to its origin, $\breve{B}=\{-x : x\in B\}$, μ is a size parameter, $f:\mathbb{Z}^3\rightarrow\mathbb{Z}$ is the input image, and x is a point in the definition domain.

Fig. 1

Three-dimensional (3-D) structuring element used in this paper.


The next equations represent the morphological erosion $\varepsilon_{\mu B}(f)(x)$ and dilation $\delta_{\mu B}(f)(x)$:41

$\varepsilon_{\mu B}(f)(x)=\bigwedge\{f(y): y\in \mu\breve{B}_x\}$
and
$\delta_{\mu B}(f)(x)=\bigvee\{f(y): y\in \mu\breve{B}_x\}$,
where $\bigwedge$ and $\bigvee$ represent the inf and sup operators. The morphological erosion and dilation permit us to build other types of transformations; these include the morphological opening $\gamma_{\mu B}(f)(x)$ and closing $\varphi_{\mu B}(f)(x)$, defined as
$\gamma_{\mu B}(f)(x)=\delta_{\mu \breve{B}}[\varepsilon_{\mu B}(f)](x)$
and
$\varphi_{\mu B}(f)(x)=\varepsilon_{\mu \breve{B}}[\delta_{\mu B}(f)](x)$.
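For readers who wish to experiment, the four flat operators above can be sketched in Python with SciPy (the published implementation was written in MATLAB; this sketch is only illustrative). The 6-connected cross is assumed here to stand in for the structuring element B of Fig. 1, and the size parameter mu scales it.

from scipy import ndimage as ndi

def footprint(mu):
    # Flat structuring element mu*B: the elementary 3-D cross grown to size mu (mu >= 1).
    base = ndi.generate_binary_structure(3, 1)   # assumed stand-in for B in Fig. 1
    return ndi.iterate_structure(base, mu)

def erosion(f, mu):
    # epsilon_{mu B}(f): infimum of f over the translated structuring element.
    return ndi.grey_erosion(f, footprint=footprint(mu))

def dilation(f, mu):
    # delta_{mu B}(f): supremum of f over the translated structuring element.
    return ndi.grey_dilation(f, footprint=footprint(mu))

def opening(f, mu):
    # gamma_{mu B}(f) = delta_{mu B}[epsilon_{mu B}(f)].
    return dilation(erosion(f, mu), mu)

def closing(f, mu):
    # varphi_{mu B}(f) = epsilon_{mu B}[delta_{mu B}(f)].
    return erosion(dilation(f, mu), mu)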

In addition, the opening (closing) by reconstruction has the characteristic of modifying the regional maxima (minima) without affecting the remaining components to a large extent. These operators use the geodesic transformations.32,42 The geodesic dilation $\delta_f^1(g)$ and erosion $\varepsilon_f^1(g)$ are expressed as $\delta_f^1(g)=f\wedge\delta_B(g)$ with $g\leq f$, and $\varepsilon_f^1(g)=f\vee\varepsilon_B(g)$ with $g\geq f$, respectively. When the function g is equal to the morphological erosion or dilation, the opening $\tilde{\gamma}_{\mu B}(f)(x)$ or closing $\tilde{\varphi}_{\mu B}(f)(x)$ by reconstruction is obtained. Formally, the next expressions represent them

Eq. (1)

$\tilde{\gamma}_{\mu B}(f)(x)=\underbrace{\delta_f^1\,\delta_f^1\cdots\delta_f^1}_{\text{until stability}}[\varepsilon_{\mu B}(f)](x)$
and
$\tilde{\varphi}_{\mu B}(f)(x)=\underbrace{\varepsilon_f^1\,\varepsilon_f^1\cdots\varepsilon_f^1}_{\text{until stability}}[\delta_{\mu B}(f)](x)$.
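The iteration until stability in Eq. (1) is morphological reconstruction of the eroded (or dilated) marker under (or above) the original image. A minimal sketch built on the helpers above, using scikit-image's reconstruction routine (an assumption about tooling, not the authors' code), could be:

from skimage.morphology import reconstruction

def opening_by_reconstruction(f, mu):
    # gamma~_{mu B}(f): reconstruct the eroded marker inside f by geodesic dilation until stability.
    # (The routine's default propagation connectivity is used here.)
    return reconstruction(erosion(f, mu), f, method='dilation')

def closing_by_reconstruction(f, mu):
    # varphi~_{mu B}(f): reconstruct the dilated marker above f by geodesic erosion until stability.
    return reconstruction(dilation(f, mu), f, method='erosion')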

2.2.

Viscous Opening $\tilde{\gamma}_{\lambda,\mu}$ and Viscous Difference

The viscous opening $\tilde{\gamma}_{\lambda,\mu}$ and closing $\tilde{\varphi}_{\lambda,\mu}$ defined in Ref. 29 allow one to deal with overlapped or chained components. These transformations are denoted by

Eq. (2)

$\tilde{\gamma}_{\lambda,\mu}(f)=\delta_{\lambda}\,\tilde{\gamma}_{\mu-\lambda}\,\varepsilon_{\lambda}(f)\quad\text{with }\lambda\leq\mu$
and
$\tilde{\varphi}_{\lambda,\mu}(f)=\varepsilon_{\lambda}\,\tilde{\varphi}_{\mu-\lambda}\,\delta_{\lambda}(f)\quad\text{with }\lambda\leq\mu$.

Equation (2) uses three operators: the morphological erosion $\varepsilon_{\lambda}$, the opening by reconstruction $\tilde{\gamma}_{\mu-\lambda}$, and the morphological dilation $\delta_{\lambda}$. The morphological erosion $\varepsilon_{\lambda}(f)$ allows one to discover and disconnect the λ-components (all components where the structuring element can go from one place to another by a continuous path made of squares whose centers move along this path). Then, the opening by reconstruction $\tilde{\gamma}_{\mu-\lambda}\,\varepsilon_{\lambda}(f)$ removes all regions smaller than $\mu-\lambda$ around the λ-components. Finally, because the viscous opening is defined on the lattice of dilations, $\delta_{\lambda}$ must be applied to $\tilde{\gamma}_{\mu-\lambda}\,\varepsilon_{\lambda}(f)$. An example of Eq. (2) is given in Fig. 2. The original image is exhibited in Fig. 2(a). Figure 2(b) shows the morphological erosion $\varepsilon_{\lambda=6}(f)$. Notice that there are several components around the brain. The image in Fig. 2(c) corresponds to the transformation $\tilde{\gamma}_{\mu-\lambda=10}\,\varepsilon_{\lambda=6}(f)$ (i.e., $\lambda=6$, $\mu=16$). In this image, several components have been eliminated by the opening by reconstruction. Figure 2(d) displays the result of the transformation $\delta_{\lambda=6}\,\tilde{\gamma}_{\mu-\lambda=10}\,\varepsilon_{\lambda=6}(f)$.
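The three steps just described translate almost literally into code. The sketch below follows Eq. (2) and reuses the helpers given earlier (illustrative only; the size guard is an implementation convenience). Reading Fig. 2 as λ=6 and μ=16, the call would be viscous_opening(f, 6, 16).

def viscous_opening(f, lam, mu):
    # Eq. (2): delta_lambda gamma~_{mu-lambda} epsilon_lambda(f), with lambda <= mu.
    assert lam <= mu
    e = erosion(f, lam)                              # discover and disconnect the lambda-components
    if mu > lam:
        e = opening_by_reconstruction(e, mu - lam)   # remove regions smaller than mu - lambda
    return dilation(e, lam)                          # return to the lattice of dilations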

Fig. 2

Viscous opening illustration: (a) original volume f, (b) $\varepsilon_{\lambda=6}(f)$, (c) $\tilde{\gamma}_{\mu-\lambda=10}\,\varepsilon_{\lambda=6}(f)$, and (d) $\delta_{\lambda=6}\,\tilde{\gamma}_{\mu-\lambda=10}\,\varepsilon_{\lambda=6}(f)$.


Viscous openings permit the sieving of the image through the viscous difference.29 This is defined in

Eq. (3)

$\tilde{\gamma}_{\lambda,\mu_1}(f)\div\tilde{\gamma}_{\lambda,\mu_2}(f)=\delta_{\lambda}[\tilde{\gamma}_{\mu_1-\lambda}(\varepsilon_{\lambda})-\tilde{\gamma}_{\mu_2-\lambda}(\varepsilon_{\lambda})]\quad\text{with }\lambda\leq\mu_1\leq\mu_2$.

According to the explanation given above, the erosion $\varepsilon_{\lambda}$ discovers the λ-components, and the difference $\tilde{\gamma}_{\mu_1-\lambda}(\varepsilon_{\lambda})-\tilde{\gamma}_{\mu_2-\lambda}(\varepsilon_{\lambda})$ with $\lambda\leq\mu_1\leq\mu_2$ sieves the image, whereas $\delta_{\lambda}$ is necessary to obtain the viscous component. The viscous difference gives the information of all discovered disconnected components of a certain size λ when μ is increased.
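A sketch of the viscous difference of Eq. (3), again built on the previous helpers (the helper _reopen and its size-zero convention are implementation details, not part of the paper's notation):

def _reopen(e, size):
    # Opening by reconstruction of the given size; size 0 leaves the image unchanged.
    return e if size <= 0 else opening_by_reconstruction(e, size)

def viscous_difference(f, lam, mu1, mu2):
    # Eq. (3): components revealed at viscosity lambda that vanish between sizes mu1 and mu2.
    assert lam <= mu1 <= mu2
    e = erosion(f, lam)
    return dilation(_reopen(e, mu1 - lam) - _reopen(e, mu2 - lam), lam)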

2.3.

Lower Leveling

The lower leveling transformation is presented as follows:30

Eq. (4)

$\psi_{\mu,\alpha}^1(f,g)=f\wedge\{g\vee[\delta_{\mu}(g)-\alpha]\}$,
where f is the reference image, g is a marker, $\alpha\in[0,255]$ is a positive scalar called the slope, and μ is the size of the structuring element. Equation (4) is iterated until stability is reached, with the purpose of reconstructing the marker g in the interior of the original mask f, i.e.,

Eq. (5)

$\Psi_{\mu,\alpha}(f,g)=\lim_{n\to\infty}\psi_{\mu,\alpha}^n(f,g)=\underbrace{\psi_{\mu,\alpha}^1\,\psi_{\mu,\alpha}^1\cdots\psi_{\mu,\alpha}^1}_{\text{until stability}}(f,g)$.

On the other hand, the selection of the marker g is very important. In Ref. 30, for example, the following marker was used to segment the brain:

Eq. (6)

$g=\gamma_{\mu B}(f)$.

The parameter α helps to control the reconstruction of the marker g into f. An example to illustrate the performance of Eq. (5) is given in the next section.
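Equations (4) and (5) amount to a slope-limited reconstruction of the marker under the reference image. A direct iterative sketch (illustrative only; the max_iter safeguard is not part of the paper) could be written as:

import numpy as np

def lower_leveling(f, g, mu, alpha, max_iter=1000):
    # Eqs. (4)-(5): iterate psi(f, g) = f ^ ( g v [delta_mu(g) - alpha] ) until stability.
    f = np.asarray(f, dtype=np.float64)
    g = np.minimum(f, np.asarray(g, dtype=np.float64))   # the marker must stay under the reference
    for _ in range(max_iter):
        candidate = np.minimum(f, np.maximum(g, dilation(g, mu) - alpha))
        if np.array_equal(candidate, g):                  # stability reached
            break
        g = candidate
    return g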

3.

Segmentation of Brain in MRI T1

Equations (2), (5), and (6) were applied in Refs. 29 and 30 to separate the skull and the brain on slices of MRI T1 in the two-dimensional (2-D) case. These transformations allow disconnecting overlapped components because they can control the reconstruction process. In 2-D, the structuring element moves over and touches only one slice. For the 3-D case, however, neighbors within the structuring element are taken from three adjacent brain slices. As a result, several regions are connected through the shape of the 3-D structuring element among the different brain sections, producing greater connectivity. Because of this increase in connectivity, Eqs. (2) and (5) do not show the same performance in the 3-D case as in the 2-D case, and the brain component cannot be separated by applying such operators once.

This situation is illustrated in Fig. 3, where Eq. (5) has been applied considering the marker obtained by Eq. (6). The original volume appears in Fig. 3(a). Figure 3(b) displays a portion of the brain obtained from Eq. (6) with μ=25. The set of images in Figs. 3(c)–3(f) illustrates the control of the reconstruction process using different slopes α. However, the extracted brain contains additional components, so this result is inadequate. The next section provides a solution to this problem.

Fig. 3

3-D brain segmentation using Eqs. (5) and (6). (a) Original volume f; (b) marker by opening defined in Eq. (6); (c) result of Eq. (5) using the marker obtained in (b) with α=4; (d) result of Eq. (5) using the marker obtained in (b) with α=3; (e) result of Eq. (5) using the marker obtained in (b) with α=2, and (f) result of Eq. (5) using the marker obtained in (b) with α=1.


3.1.

Composition of Morphological Connected Transformations

As previously stated, the viscous opening and the lower leveling allow separating chained components, and it is natural to combine both operators to obtain one transformation with better control over the reconstruction process. Following this idea, one option is to use the viscous opening as the marker of the lower leveling to eliminate a great portion of the skull; subsequently, the resulting image is processed again with a similar filter to eliminate the remaining regions around the brain. Such a procedure represents a sequential application of the combined transformations with different parameters in order to have increasing control over the reconstruction process. The purpose is to remove the skull gradually in two steps. The following equation permits the disconnection of the chained components and comes from the combination of Eqs. (2) and (5):

Eq. (7)

$\eta_{\mu,\alpha_2,\lambda_2,\mu_2,\alpha_1,\lambda_1,\mu_1}(f)(x)=\Psi_{\mu,\alpha_2}\big(f,\tilde{\gamma}_{\lambda_2,\mu_2}\{\Psi_{\mu,\alpha_1}[f,\tilde{\gamma}_{\lambda_1,\mu_1}(f)]\}\big)(x)$.

Nevertheless, Eq. (7) produces an unsatisfactory performance. Figure 4 shows an example of the transformation $\eta_{\mu=1,\alpha_2=20,\lambda_2=8,\mu_2=10,\alpha_1=10,\lambda_1=10,\mu_1=12}(f)$. Some slices of the original volume can be seen in Fig. 4(a), whereas the segmentation creates holes in the brain, as illustrated in Fig. 4(b). The viscous opening causes this behavior, since all components not supporting the morphological erosion of size λ merge with the background, and the 3-D structuring element produces stronger changes as its size is increased. One way to obtain better segmentations consists of computing the operator in Eq. (8), where f is the input image, $\bar{h}_{\rho}$ represents the mean filter of size ρ, and $T_a$ expresses a threshold on the interval [a,255]. The mean filter $\bar{h}_{\rho}$ partially closes the holes, $T_a$ permits the selection of certain regions of interest, and $\xi_{\rho,a}$ helps to obtain a portion of the original image

Eq. (8)

$\xi_{\rho,a}(f)=f\wedge T_a[\bar{h}_{\rho}(f)]$.
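Reading $T_a$ as a binarization on [a, 255], as the text above suggests, Eq. (8) can be sketched as follows; uniform_filter plays the role of the mean filter $\bar{h}_{\rho}$ (a sketch under that reading, not the authors' code):

import numpy as np
from scipy import ndimage as ndi

def xi(f, rho, a):
    # Eq. (8): keep the original gray values only where the rho-mean-filtered image reaches level a.
    smoothed = ndi.uniform_filter(np.asarray(f, dtype=np.float64), size=rho)   # mean filter h_rho
    return np.where(smoothed >= a, f, 0)                                       # f ^ T_a[h_rho(f)]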

Fig. 4

Segmentation of a volume [taken from the database Internet Brain Segmentation Repository (IBSR) with 20 subjects] using Eq. (7). (a) Slices of the original volume and (b) slices corresponding to $\eta_{\mu=1,\alpha_2=20,\lambda_2=8,\mu_2=10,\alpha_1=10,\lambda_1=10,\mu_1=12}(f)$.


The combination of Eqs. (7) and (8) gives the next operator as a result

Eq. (9)

$\eta^{*}_{\rho_4,a_4,\alpha_2,\rho_3,a_3,\lambda_2,\mu_2,\rho_2,a_2,\mu,\alpha_1,\rho_1,a_1,\lambda_1,\mu_1}(f)(x)=\xi_{\rho_4,a_4}\,\Psi_{\mu,\alpha_2}\big(f,\xi_{\rho_3,a_3}\,\tilde{\gamma}_{\lambda_2,\mu_2}\{\xi_{\rho_2,a_2}\,\Psi_{\mu,\alpha_1}[f,\xi_{\rho_1,a_1}\,\tilde{\gamma}_{\lambda_1,\mu_1}(f)]\}\big)(x)$.
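Putting the pieces together, Eq. (9) is the sequential composition of the helpers sketched so far, read from the innermost operator outward. The dictionary of parameters below is only an illustrative way of passing the subscripts of Eq. (9):

def eta_star(f, p):
    # Eq. (9): two viscous-opening markers, each propagated by a lower leveling and trimmed by xi.
    m1 = xi(viscous_opening(f, p['lam1'], p['mu1']), p['rho1'], p['a1'])        # first marker
    r1 = xi(lower_leveling(f, m1, p['mu'], p['alpha1']), p['rho2'], p['a2'])    # first reconstruction
    m2 = xi(viscous_opening(r1, p['lam2'], p['mu2']), p['rho3'], p['a3'])       # second marker
    r2 = lower_leveling(f, m2, p['mu'], p['alpha2'])                            # second reconstruction
    return xi(r2, p['rho4'], p['a4'])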

To apply Eq. (9), the reasoning below needs to be considered.

3.2.

Parameters α, λ, and μ

3.2.1.

Parameter α

The following analysis corresponds to Eq. (5), since Eq. (9) uses it. Large α values produce (i) a reduction of the time needed to reach the final result and (ii) greater control of the reconstruction process.

The quantification of the time when Eq. (5) is applied on a volume of 60 slices, using $\tilde{\gamma}_{\lambda=10,\mu=12}$ as the marker and the lower leveling $\Psi_{\mu=1,\alpha}$ with α=1, 3, 6, 9, 12, is presented in Fig. 5. This figure also displays several slices belonging to the output volumes obtained for different α values.

Fig. 5

Execution time of Eq. (5) and some output slices corresponding to several processed volumes. (a) Time spent to compute Eq. (5) considering 60 slices; the marker corresponds to the viscous opening with λ=10, μ=12, and the leveling is applied considering α=1, 3, 6, 9, 12. (b) Graph corresponding to the data presented in (a); (c) slice of the original volume; (d) brain section taken from the viscous opening and used as marker; (e)–(i) set of slices taken from the volumes processed with α=1, 3, 6, 9, 12, as illustrated in Fig. 4.


3.2.2.

Parameter λ

The adequate selection of the parameter λ will bring, as a consequence, the disconnection between the skull and the brain. Such a parameter is computed from a granulometric analysis applying Eq. (10):37

Eq. (10)

$\upsilon=\frac{\mathrm{vol}[\gamma_{\lambda}(f)]-\mathrm{vol}[\gamma_{\lambda+1}(f)]}{\mathrm{vol}[f]}$,
where vol stands for the volume, i.e., the sum of all gray levels in the image, and $\gamma_{\lambda}(f)$ represents the morphological opening of size λ. The graph in Fig. 6(a) corresponds to the application of Eq. (10) taking $\lambda\in[1,30]$. This graph has three important intervals. The interval $\lambda\in[1,6]$ shows the elimination of an important part of the skull. Hence, in order to detect the brain, λ will take values greater than 6, i.e., $\lambda>6$.
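A sketch of the granulometric curve of Eq. (10), using the opening helper defined earlier (computationally heavy on a full volume, but it states the definition directly):

def granulometric_curve(f, max_size=30):
    # Eq. (10): normalised volume removed between openings of consecutive sizes, lambda = 1..max_size.
    total = float(f.sum())                                       # vol[f]: sum of all gray levels
    vols = [opening(f, s).sum() for s in range(1, max_size + 2)]
    return [(vols[i] - vols[i + 1]) / total for i in range(max_size)]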

Fig. 6

Granulometric curves computed on a volume taken from the IBSR database with 20 subjects. (a) Granulometry computed from Eq. (10) and (b) viscous granulometry obtained from Eq. (11).


3.2.3.

Parameter μ

The parameter μ will be computed using the viscous granulometry χ, which is defined in terms of the viscous difference given in Eq. (3):29

Eq. (11)

$\chi=\frac{\mathrm{vol}[\tilde{\gamma}_{\lambda,\mu_1}(f)\div\tilde{\gamma}_{\lambda,\mu_2}(f)]}{\mathrm{vol}(f)}$.

To apply Eq. (11), the next parameters will be considered: $\mu_1,\mu_2\in[1,30]$ and λ=7 (from the previous analysis for λ), in order to detect the brain component.

Figure 6(b) displays the graph of Eq. (11). For $\mu\in[7,13]$, the brain component is detected.
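The viscous granulometry of Eq. (11) can be swept in the same spirit; consecutive sizes μ and μ+1 are used below, which is one possible way of scanning μ1 and μ2 (an assumption about the sampling, not stated in the paper):

def viscous_granulometric_curve(f, lam=7, max_mu=30):
    # Eq. (11): volume of the viscous difference between consecutive mu values, normalised by vol(f).
    total = float(f.sum())
    return [viscous_difference(f, lam, mu, mu + 1).sum() / total
            for mu in range(lam, max_mu)]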

3.3.

Simplification of Eq. (9)

Applying two transformations sequentially along with a mask to obtain Eq. (9) brings as a consequence the use of a large number of parameters. This problem is considered, and a simplification is proposed as follows:

Eq. (12)

$\tau_{\mu,\rho_1,a_1,\lambda_1,\mu_1}(f)(x)=\Psi_{\mu,\alpha_1}[f,\xi_{\rho_1,a_1}\,\tilde{\gamma}_{\lambda_1,\mu_1}(f)](x)$.

According to Eq. (12), the marker $\xi_{\rho_1,a_1}\,\tilde{\gamma}_{\lambda_1,\mu_1}(f)$ is obtained from the viscous opening, and it is propagated by the lower leveling transformation $\Psi_{\mu,\alpha_1}$. Equation (12) presents the following benefits when compared with Eq. (9): (1) the use of fewer parameters and (2) a reduction of the execution time. The performances of Eqs. (9) and (12) are illustrated in Sec. 4.
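In code, the simplified operator is a one-line composition of the helpers above (a sketch; the per-volume parameter values are those reported in Tables 2 and 4):

def tau(f, lam1, mu1, rho1, a1, mu, alpha1):
    # Eq. (12): one viscous opening builds the marker, one lower leveling propagates it inside f.
    marker = xi(viscous_opening(f, lam1, mu1), rho1, a1)
    return lower_leveling(f, marker, mu, alpha1)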

4.

Experimental Results

For the purpose of measuring the performance of our proposed method, the following MRI databases taken from the IBSR and developed by the CMA at the Massachusetts General Hospital ( http://www.nitrc.org)31 are utilized: (i) 20 simulated T1W MRI images (denoted as IBSR1) and (ii) 18 real T1W MRI images (denoted as IBSR2), with a slice thickness of 1.5 mm.

4.1.

Parameters Involved in Eqs. (9) and (12)

Tables 1, 2, 3, and 4 contain the parameters used in Eqs. (9) and (12) to segment the volumes belonging to the IBSR1 and IBSR2 datasets. In the volumes of IBSR1, the neck was cropped to obtain images similar to those of IBSR2. Differences among the volumes with respect to intensity, size, and connectivity account for the variation of the parameters. The guidelines for the parameter selection are given below:

  • Analysis for Eq. (9):

    • i. Parameters $\lambda_1$, $\mu_1$, $\lambda_2$, and $\mu_2$ take their values in the interval [8, 14]. Furthermore, those volumes complying with $\lambda_1\geq\lambda_2$ and $\mu_1-\mu_2\geq\lambda_1-\lambda_2$ fulfill the order relation $\tilde{\gamma}_{\lambda_1,\mu_1}\leq\tilde{\gamma}_{\lambda_2,\mu_2}$.

    • ii. The parameter ρ takes its values within the interval [3, 12]. The mean filter will partially close several holes produced by the viscous opening, such as those presented in Fig. 4(b). In MM, the closing transformation fills holes; however, in Fig. 4(b), the background and the holes represent the same region. This means that larger sizes of the structuring element will close the holes; nevertheless, this practice will increase the execution time of Eq. (9).

    • iii. The parameter a represents a threshold. It varies in the interval [1, 120]. The application of a threshold serves two purposes: (a) it eliminates some of the undesirable dark components such as dura mater, skin, and fat, and (b) it obtains an appropriate marker.

    • iv. The lower leveling defined in Eq. (4) and utilized in Eq. (5) uses the parameters μ and α. The size μ=1 of the morphological dilation keeps its value during the processing with the purpose of detecting the different structures of the brain closer to the input image, whereas the slope α varies in the interval [7, 120]. When the parameter α increases, finer control is obtained, i.e., a smoother transition is generated in each iteration. Figure 7 shows an example of Eq. (9) considering the information of Table 1.

  • Analysis for Eq. (12):

    • v. The intervals defined previously are valid for Eq. (12). However, notice that the viscous opening and the lower leveling are applied only once. In this situation, the viscous opening must produce an appropriate marker containing the brain, and the lower leveling will reconstruct this marker inside the original volume. The transition (dura mater) between the skull and the brain prevents the lower leveling from reconstructing the skull completely. Figure 8 displays an example of Eq. (12) considering the information of Table 2. The input volume used to exemplify Eq. (12) is the same as that used in Fig. 7.

Fig. 7

Some brain slices corresponding to the segmentation of the volume IBSR1_100 using Eq. (9) with the parameters defined in Table 1.


Fig. 8

Some brain slices corresponding to the segmentation of the volume IBSR1_100 using Eq. (12) with the parameters defined in Table 2.


Table 1

Parameters corresponding to Eq. (9). The processed dataset was IBSR1.

Volume  λ1  μ1  ρ1  a1  μ  α1  ρ2  a2  λ2  μ2  ρ3  a3  μ  α2  ρ4  a4
IBSR1_001810390185605851001308100
IBSR1_00281039018560585100120880
IBSR1_0048108301781051010601120830
IBSR1_00512133101183501012350145530
IBSR1_00610124501254501213560120560
IBSR1_00710128101108508108501605100
IBSR1_00810128101118908108501405100
IBSR1_011121452011552010125100125840
IBSR1_0121112450130470111251001225120
IBSR1_0131214520115590891210013512120
IBSR1_015101481112818912401201280
IBSR1_016810320112560585601408110
IBSR1_017810320112560685601408110
IBSR1_1001214390185605851001308100
IBSR1_110101239012056058510011008100
IBSR1_1111214390185605851001308100
IBSR1_11210123901205605851001128100
IBSR1_19110123901205605851001208100
IBSR1_2021214390185605851001308100
IBSR1_20510123901205605851001208100

Table 2

Parameters corresponding to Eq. (12). The processed dataset was IBSR1.

Volume  λ1  μ1  ρ1  a1  μ  α1  ρ2  a2
IBSR1_0011012452114554
IBSR1_0021113553115655
IBSR1_004111355018620
IBSR1_00581051017615
IBSR1_0061012410110510
IBSR1_0071012452114554
IBSR1_0081012452114554
IBSR1_0111012452114554
IBSR1_0121012452114554
IBSR1_0131012452114570
IBSR1_0151012411351
IBSR1_0161012452114554
IBSR1_0171012452114554
IBSR1_1001012452114554
IBSR1_1101012452114590
IBSR1_1111012452114554
IBSR1_1121012452114554
IBSR1_1911012452114554
IBSR1_2021012452114554
IBSR1_2051012452114554

Table 3

Parameters corresponding to Eq. (9). The processed dataset was IBSR2.

Volume  λ1  μ1  ρ1  a1  μ  α1  ρ2  a2  λ2  μ2  ρ3  a3  μ  α2  ρ4  a4
IBSR2_001101245211455469681137789
IBSR2_002101245211455469681137789
IBSR2_003141542012052056640140640
IBSR2_004101245211055056620130640
IBSR2_00510124521105501012640180640
IBSR2_006101245211055056620120650
IBSR2_007810520110510810830120420
IBSR2_008111245012560610450112610
IBSR2_00911124501358068680110680
IBSR2_010171842013550101265011650
IBSR2_01112134501858081068015680
IBSR2_0121112450135806868016660
IBSR2_01311124501358068680110680
IBSR2_01411124501358068680110680
IBSR2_015111245013580810680140680
IBSR2_0161112450135801012680125680
IBSR2_0171112450135801012680130680
IBSR2_0181112450135801012640115620

Table 4

Parameters corresponding to Eq. (12). The processed dataset was IBSR2.

Volume  λ1  μ1  ρ1  a1  μ  α1  ρ2  a2
IBSR2_0011012452114554
IBSR2_0021012452114554
IBSR2_003101245218530
IBSR2_0041012452114554
IBSR2_0051416452116580
IBSR2_0061012452114554
IBSR2_007101245214520
IBSR2_00881245014510
IBSR2_009101242015540
IBSR2_0106841012540
IBSR2_01181249014580
IBSR2_012121345014560
IBSR2_0131012452114554
IBSR2_0141012452114554
IBSR2_0151012452117554
IBSR2_0161012452118554
IBSR2_0171012452114554
IBSR2_018101245215530

4.2.

Comparison Results

Figure 9 illustrates the segmentation of the volume IBSR1_016 obtained by applying Eqs. (9) and (12) and by BET (default parameters) implemented in the MRIcro software.43 Figure 9(a) shows the original slices. Figure 9(b) displays the respective manual segmentations. Figure 9(c) presents the application of Eq. (9) with the parameters given in Table 1. Figure 9(d) presents the application of Eq. (12) with the parameters given in Table 2. Figure 9(e) shows the brain extraction using the BET algorithm. The parameters selected for the set of 20 brains are intensity threshold = 0.50 and vertical gradient = 0.0. In order to compare the segmentations, the Jaccard and Dice coefficients are computed. Table 5 contains the indices corresponding to BET, Eq. (12), and Eq. (9) for IBSR1.
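For reference, the two overlap measures used throughout this section are computed from binary masks as follows (standard definitions of the Jaccard38 and Dice39 indices; the function is illustrative and not taken from the paper):

import numpy as np

def jaccard_and_dice(automatic, manual):
    # Jaccard = |A and M| / |A or M|;  Dice = 2|A and M| / (|A| + |M|).
    a = np.asarray(automatic, dtype=bool)
    m = np.asarray(manual, dtype=bool)
    inter = np.logical_and(a, m).sum()
    jaccard = inter / np.logical_or(a, m).sum()
    dice = 2.0 * inter / (a.sum() + m.sum())
    return jaccard, dice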

Fig. 9

Images illustrating the segmentation of volume IBSR1_016 through several methods. (a) Brain sections corresponding to the original volume IBSR1_016; (b) manual segmentations provided by the IBSR website; (c) application of Eq. (9) with the parameters defined in Table 1; (d) application of Eq. (12) with the parameters defined in Table 2; and (e) slices obtained from BET using a fractional intensity=0.5 and vertical gradient=0.0.


Table 5

Jaccard and Dice indices for the IBSR1 dataset segmented with BET, Eq. (12), and Eq. (9), considering the parameters presented in Tables 1 and 2.

Volume       BET                Eq. (12)           Eq. (9)
             Jaccard   Dice     Jaccard   Dice     Jaccard   Dice
IBSR1_001    0.7949    0.8857   0.9031    0.9491   0.9538    0.9763
IBSR1_002    0.9091    0.9524   0.9267    0.9620   0.9431    0.9707
IBSR1_004    0.8539    0.9212   0.8318    0.9082   0.9482    0.9734
IBSR1_005    0.4721    0.6414   0.7281    0.8427   0.8817    0.9372
IBSR1_006    0.5335    0.6958   0.7981    0.8877   0.8948    0.9445
IBSR1_007    0.8790    0.9356   0.9441    0.9713   0.9586    0.9789
IBSR1_008    0.7587    0.8628   0.9359    0.9669   0.9458    0.9722
IBSR1_011    0.8444    0.9157   0.8972    0.9458   0.9233    0.9601
IBSR1_012    0.8130    0.8968   0.8800    0.9362   0.9018    0.9484
IBSR1_013    0.8873    0.9403   0.8822    0.9374   0.8855    0.9392
IBSR1_015    0.3976    0.5690   0.7070    0.8284   0.9252    0.9612
IBSR1_016    0.6575    0.7933   0.9115    0.9537   0.9526    0.9757
IBSR1_017    0.6730    0.8045   0.9182    0.9573   0.9556    0.9773
IBSR1_100    0.9085    0.9520   0.9337    0.9657   0.9617    0.9805
IBSR1_110    0.9085    0.9520   0.9160    0.9562   0.9395    0.9688
IBSR_111     0.8233    0.9031   0.8954    0.9448   0.9354    0.9666
IBSR_112     0.8347    0.9099   0.9151    0.9557   0.9346    0.9662
IBSR_191     0.9243    0.9607   0.9406    0.9694   0.9601    0.9797
IBSR_202     0.9082    0.9519   0.9324    0.9650   0.9593    0.9792
IBSR_205     0.9085    0.9520   0.9347    0.9663   0.9547    0.9768

A similar procedure is applied to the IBSR2 database. Figure 10 presents the segmentations corresponding to the IBSR2_04 volume. Figure 10(a) shows the original slices. Figure 10(b) displays the respective manual segmentations. Figure 10(c) presents the application of Eq. (9) with the parameters given in Table 3. Figure 10(d) presents the application of Eq. (12) with the parameters given in Table 4. Figure 10(e) shows the brain extraction using the BET algorithm with the default parameters. Table 6 contains the Jaccard and Dice indices corresponding to BET, Eq. (12), and Eq. (9) for IBSR2.

Fig. 10

Images illustrating the segmentation of volume IBSR2_04 through several methods. (a) Brain sections corresponding to the original volume IBSR2_04; (b) manual segmentations provided by the IBSR website; (c) application of Eq. (9) with the parameters defined in Table 3; (d) application of Eq. (12) with the parameters defined in Table 4; and (e) slices obtained from BET using a fractional intensity=0.5 and vertical gradient=0.0.


Table 6

Jaccard and Dice indices for the IBSR2 dataset segmented with BET, Eq. (12), and Eq. (9), considering the parameters presented in Tables 3 and 4.

Volume       BET                Eq. (12)           Eq. (9)
             Jaccard   Dice     Jaccard   Dice     Jaccard   Dice
IBSR2_01     0.7802    0.8765   0.9281    0.9627   0.9699    0.9847
IBSR2_02     0.8112    0.8958   0.9516    0.9752   0.9480    0.9733
IBSR2_03     0.8611    0.9254   0.9443    0.9714   0.9603    0.9798
IBSR2_04     0.8336    0.9092   0.9629    0.9811   0.9540    0.9765
IBSR2_05     0.7868    0.8807   0.9252    0.9611   0.9083    0.9519
IBSR2_06     0.7847    0.8794   0.9512    0.9750   0.9233    0.9601
IBSR2_07     0.8113    0.8958   0.8654    0.9278   0.8931    0.9435
IBSR2_08     0.7787    0.8756   0.8402    0.9132   0.8939    0.9440
IBSR2_09     0.8108    0.8955   0.9077    0.9516   0.9164    0.9564
IBSR2_10     0.7099    0.8303   0.8606    0.9251   0.8355    0.9104
IBSR2_11     0.7861    0.8802   0.9191    0.9579   0.9338    0.9658
IBSR2_12     0.7798    0.8763   0.8947    0.9444   0.9441    0.9712
IBSR2_13     0.7912    0.8834   0.9435    0.9709   0.9522    0.9755
IBSR2_14     0.8082    0.8940   0.9634    0.9814   0.9655    0.9824
IBSR2_15     0.8206    0.9014   0.9144    0.9553   0.9511    0.9749
IBSR2_16     0.8385    0.9122   0.9495    0.9741   0.9381    0.9681
IBSR2_17     0.8171    0.8993   0.9268    0.9620   0.9541    0.9765
IBSR2_18     0.8067    0.8930   0.9066    0.9510   0.9066    0.9510

Table 7 contains the mean values of the indices presented in Tables 5 and 6 together with the mean values of the indices reported in Refs. 11, 12, and 40.

Table 7

Jaccard and Dice mean indices corresponding to the IBSR1 and IBSR2 datasets, together with some values reported in the literature.

Method                            Jaccard mean   Dice mean
SMHASS (Ref. 11) IBSR1            0.904          0.950
SMHASS (Ref. 11) IBSR2            0.905          0.950
ACNM One (Ref. 12) IBSR1          0.890          0.940
ACNM One (Ref. 12) IBSR2          0.900          0.950
Equation (9) IBSR1                0.935          0.966
Equation (9) IBSR2                0.924          0.963
Equation (12) IBSR1               0.866          0.938
Equation (12) IBSR2               0.919          0.957
BET IBSR1                         0.784          0.869
BET IBSR2                         0.800          0.899
Method in Ref. 40 using IBSR1     0.923          0.960

4.3.

Discussion

Some commentaries on the segmented volumes are presented as follows:

  • i. The time performance of our operators is slow compared to the BET algorithm. The table in Fig. 11(a) presents several times measured during the execution of Eq. (9) considering an increasing number of slices. Its corresponding graph is shown in Fig. 11(b). A similar behavior is observed using Eq. (12), but within half the time.

    For the processing of 60 brain slices utilizing Eq. (9), our method spent 354.8 s [177.4 s using Eq. (12)], while the BET algorithm required 8 s.

    In Ref. 24, the measured time to separate brain components varies from 40 s (BET algorithm) to 35 min (SPM2) when considering a complete volume. In our case, Eq. (9) takes 16 min and Eq. (12) takes 8 min.

  • ii. BET and the algorithms reported in Refs. 11, 12, and 40 use fewer parameters than Eqs. (9) and (12).

  • iii. Although the time spent to segment a volume is high compared to the BET, the segmentations obtained with our proposal [Eq. (9)] are better according to the Jaccard and Dice mean indices.

  • iv. Equation (12) works well; however, the indices presented in Table 7 indicate that Eq. (9) has a better performance.

Fig. 11

Time performance of Eq. (9). (a) Time consumed by Eq. (9) as the number of slices increases. (b) Graph of the information presented in (a).


5.

Conclusions

Two morphological transformations were proposed to extract the brain in MRI T1. The first operator [Eq. (9)] presents a better performance than the second one [Eq. (12)] according to the computed Jaccard and Dice mean indices.

The idea to segment the brain consisted of smoothly propagating a marker given by the viscous opening into the original volume. For this, adequate parameters must be obtained from a granulometric analysis. The sequential application of such transformations results in a new morphological operator [Eq. (9)] capable of better controlling the reconstruction process. Nevertheless, the new transformation employs several parameters; due to this, a second morphological transformation was obtained from a simplification of the first one [Eq. (12)].

Our proposals were tested using the two brain databases obtained from the IBSR home page. In total, 38 MR volumes of the brain were processed. The segmentations were compared, through two popular indices, with the manual segmentations obtained from the IBSR website, with the segmentations obtained from the BET algorithm, and with the values of the indices reported in the current literature. When the mean values of the Jaccard and Dice indices are compared, our proposal outperforms the other methodologies. This means that our segmentations are closer to the manual segmentations obtained from the IBSR website. However, the time spent to segment a volume with 160 slices, along with the number of parameters utilized in Eq. (9), is higher compared to the time and the parameters utilized by the BET algorithm.

Although Eq. (12) significantly reduces the number of parameters, Eq. (9) produces better segmentations. In this way, Eq. (12) can be used to obtain approximate segmentations of the brain.

The main problem of the BET algorithm is that several regions are not detected; the first slices of Fig. 10(e) clearly illustrate this situation. Due to this, the Jaccard and Dice indices fall considerably.

Finally, as future work, the proposal presented in this paper will be improved through fast algorithms and/or a parallel implementation on graphics processing units using compute unified device architecture (CUDA) technology, so that its performance can approach real-time applications.

Acknowledgments

The author Iván R. Terol-Villalobos would like to thank Diego Rodrigo and Darío T.G. for their great encouragement. This work was funded by the government agency CONACyT México under the Grant 133697.

References

1. A. M. Dale, B. Fischl, and M. I. Sereno, "Cortical surface-based analysis. I. Segmentation and surface reconstruction," Neuroimage 9(2), 179–194 (1999). http://dx.doi.org/10.1006/nimg.1998.0395

2. J. X. Liu, Y. S. Chen, and L. F. Chen, "Accurate and robust extraction of brain regions using a deformable model based on radial basis functions," J. Neurosci. Methods 183(2), 255–266 (2009). http://dx.doi.org/10.1016/j.jneumeth.2009.05.011

3. S. M. Smith, "Fast robust automated brain extraction," Hum. Brain Mapp. 17(3), 143–155 (2002). http://dx.doi.org/10.1002/(ISSN)1097-0193

4. H. Hahn and H.-O. Peitgen, "The skull stripping problem in MRI solved by a single 3D watershed transform," Lect. Notes Comput. Sci. 1935, 134–143 (2000). http://dx.doi.org/10.1007/b12345

5. R. Beare et al., "Brain extraction using the watershed transform from markers," Front. Neuroinf. 7(32), 1–15 (2013). http://dx.doi.org/10.3389/fninf.2013.00032

6. B. Dogdas, D. W. Shattuck, and R. M. Leahy, "Segmentation of skull and scalp in 3-D human MRI using mathematical morphology," Hum. Brain Mapp. 26(4), 273–285 (2005). http://dx.doi.org/10.1002/(ISSN)1097-0193

7. S. Sandor and R. Leahy, "Surface-based labeling of cortical anatomy using a deformable database," IEEE Trans. Med. Imaging 16(1), 41–54 (1997). http://dx.doi.org/10.1109/42.552054

8. F. Ségonne et al., "A hybrid approach to the skull stripping problem in MRI," Neuroimage 22(3), 1060–1075 (2004). http://dx.doi.org/10.1016/j.neuroimage.2004.03.032

9. J. E. Iglesias et al., "Robust brain extraction across datasets and comparison with publicly available methods," IEEE Trans. Med. Imaging 30(9), 1617–1634 (2011). http://dx.doi.org/10.1109/TMI.2011.2138152

10. F. Lotte, A. Lecuyer, and B. Arnaldi, "FuRIA: a novel feature extraction algorithm for brain-computer interfaces using inverse models and fuzzy regions of interest," in 3rd Int. IEEE/EMBS Conf. on Neural Engineering (CNE '07), 175–178 (2007).

11. F. J. Galdames, F. Jaillet, and C. A. Perez, "An accurate skull stripping method based on simplex meshes and histogram analysis for magnetic resonance images," J. Neurosci. Methods 206(2), 103–119 (2012). http://dx.doi.org/10.1016/j.jneumeth.2012.02.017

12. S. Jiang et al., "Brain extraction from cerebral MRI volume using a hybrid level set based active contour neighborhood model," Biomed. Eng. Online 12(1), 31 (2013). http://dx.doi.org/10.1186/1475-925X-12-31

13. A. Huang et al., "Brain extraction using geodesic active contours," Proc. SPIE 6144, 61444J (2006). http://dx.doi.org/10.1117/12.654160

14. S. F. Eskildsen et al., "The Alzheimer's disease neuroimaging initiative, BEaST: brain extraction based on nonlocal segmentation technique," Neuroimage 59(3), 2362–2373 (2012). http://dx.doi.org/10.1016/j.neuroimage.2011.09.012

15. K. K. Leung et al., "Automated cross-sectional and longitudinal hippocampal volume measurement in mild cognitive impairment and Alzheimer's disease," Neuroimage 51(4), 1345–1359 (2010). http://dx.doi.org/10.1016/j.neuroimage.2010.03.018

16. K. K. Leung et al., "Alzheimer's disease neuroimaging initiative, brain MAPS: an automated, accurate and robust brain extraction technique using a template library," Neuroimage 55(3), 1091–1108 (2011). http://dx.doi.org/10.1016/j.neuroimage.2010.12.067

17. P. Dokládal et al., "Topologically controlled segmentation of 3D magnetic resonance images of the head by using morphological operators," Pattern Recognit. 36(10), 2463–2478 (2003). http://dx.doi.org/10.1016/S0031-3203(03)00118-3

18. M. A. Balafar et al., "Review of brain MRI image segmentation methods," Artif. Intell. Rev. 33(3), 261–274 (2010). http://dx.doi.org/10.1007/s10462-010-9155-0

19. J. C. Bezdek, L. O. Hall, and L. P. Clarke, "Review of MR image segmentation techniques using pattern recognition," Med. Phys. 20(4), 1033–1048 (1993). http://dx.doi.org/10.1118/1.597000

20. A. P. Zijdenbos and B. M. Dawant, "Brain segmentation and white matter lesion detection in MR images," Crit. Rev. Biomed. Eng. 22(5–6), 401–465 (1994).

21. L. P. Clarke et al., "MRI segmentation: methods and applications," J. Magn. Reson. Imaging 13(3), 343–368 (1995). http://dx.doi.org/10.1016/0730-725X(94)00124-L

22. C. Fennema-Notestine et al., "Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location," Hum. Brain Mapp. 27(2), 99–113 (2006). http://dx.doi.org/10.1002/(ISSN)1097-0193

23. D. W. Shattuck et al., "Online resource for validation of brain segmentation methods," Neuroimage 45(2), 431–439 (2009). http://dx.doi.org/10.1016/j.neuroimage.2008.10.066

24. K. Boesen et al., "Quantitative comparison of four brain extraction algorithms," Neuroimage 22(3), 1255–1261 (2004). http://dx.doi.org/10.1016/j.neuroimage.2004.03.010

25. B. D. Ward, Intracranial Segmentation, http://afni.nimh.nih.gov/pub/dist/doc/program_help/ (accessed May 2014).

26. D. W. Shattuck et al., "Magnetic resonance image tissue classification using a partial volume model," Neuroimage 13(5), 856–876 (2001). http://dx.doi.org/10.1006/nimg.2000.0730

27. J. Ashburner and K. J. Friston, "Voxel-based morphometry: the methods," Neuroimage 11(6 Pt 1), 805–821 (2000). http://dx.doi.org/10.1006/nimg.2000.0582

28. V. Popescu et al., "Optimizing parameter choice for FSL-Brain Extraction Tool (BET) on 3D T1 images in multiple sclerosis," Neuroimage 61(4), 1484–1494 (2012). http://dx.doi.org/10.1016/j.neuroimage.2012.03.074

29. I. Santillán et al., "Morphological connected filtering on viscous lattices," J. Math. Imaging Vision 36(3), 254–269 (2010). http://dx.doi.org/10.1007/s10851-009-0184-8

30. J. D. Mendiola-Santibañez et al., "Application of morphological connected openings and levelings on magnetic resonance images of the brain," Int. J. Imaging Syst. Technol. 21(4), 336–348 (2011). http://dx.doi.org/10.1002/ima.v21.4

31. Center for Morphometric Analysis, Massachusetts General Hospital, The Internet Brain Segmentation Repository (IBSR) (1995), http://www.cma.mgh.harvard.edu/ibsr/ (accessed May 2014).

32. L. Vincent, "Morphological grayscale reconstruction in image analysis: applications and efficient algorithms," IEEE Trans. Image Process. 2(2), 176–201 (1993). http://dx.doi.org/10.1109/83.217222

33. F. Meyer and C. Vachier, "Image segmentation based on viscous flooding simulation," in Mathematical Morphology, 69–77, CSIRO Publishing, Melbourne (2002).

34. C. Vachier and F. Meyer, "The viscous watershed transform," J. Math. Imaging Vis. 22(2–3), 251–267 (2005). http://dx.doi.org/10.1007/s10851-005-4893-3

35. P. Maragos and C. Vachier, "A PDE formulation for viscous morphological operators with extensions to intensity-adaptive operators," in Proc. 15th IEEE Int. Conf. on Image Processing, 2200–2203 (2008).

36. C. Vachier and F. Meyer, "News from viscous land," in Proc. 8th Int. Symposium on Mathematical Morphology, 189–200 (2007).

37. L. Vincent, "Fast granulometric methods for the extraction of global image information," in Proc. 11th Annual Symposium of the South African Pattern Recognition Association, 119–133 (2000).

38. P. Jaccard, "The distribution of the flora in the alpine zone," New Phytol. 11(2), 37–50 (1912). http://dx.doi.org/10.1111/nph.1912.11.issue-2

39. L. R. Dice, "Measures of the amount of ecologic association between species," J. Ecol. 26(4), 297–302 (1945). http://dx.doi.org/10.2307/1932409

40. H. Zhang et al., "An automated and simple method for brain MR image extraction," Biomed. Eng. Online 10(1), 81 (2011). http://dx.doi.org/10.1186/1475-925X-10-81

41. H. Heijmans, Morphological Image Operators, Academic Press, Boston, Massachusetts (1994).

42. J. Serra and P. Salembier, "Connected operators and pyramids," Proc. SPIE 2030, 65–76 (1993). http://dx.doi.org/10.1117/12.146672

43. C. Rorden and M. Brett, "Stereotaxic display of brain lesions," Behav. Neurol. 12(4), 191–200 (2000). http://dx.doi.org/10.1155/2000/421719

Biography

Jorge Domingo Mendiola-Santibañez received his PhD degree from the Universidad Autónoma de Querétaro (UAQ), México. Currently, he is a professor/researcher at the Universidad Autónoma de Querétaro. His research interests include morphological image processing and computer vision.

Martín Gallegos-Duarte is an MD and a PhD student at the Universidad Autonoma de Querétaro. He is head of the Strabismus Service at the Institute for the Attention of Congenital Diseases and Ophthalmology-Pediatric Service in the Mexican Institute of Ophthalmology in the state of Queretaro, Mexico.

Miguel Octavio Arias-Estrada is a researcher in computer science at National Institute of Astrophysics, Optics and Electronics, Puebla, Mexico, with a PhD degree in electrical engineering (computer vision) from Laval University (Canada) and BEng and MEng degrees in electronic engineering from University of Guanajuato (Mexico). Currently, he is a researcher at INAOE (Puebla, México). His interests are computer vision, FPGA and GPU algorithm acceleration for three-dimensional machine vision.

Israel Marcos Santillán-Méndez received the BS degree in engineering from the Instituto Tecnológico de Estudios Superiores de Monterrey and his MS degree and PhD degree in engineering from Facultad de Ingeniería de la Universidad Autónoma de Querétaro (México). His research interests include models of biological sensory and perceptual systems and mathematical morphology.

Juvenal Rodríguez-Reséndiz received his MS degree in automation control from University of Querétaro and PhD degree at the same institution. Since 2004, he has been part of the Mechatronics Department at the UAQ. He is the head of the Automation Department. His research interest includes signal processing and motion control. He serves as vice president of IEEE in Queretaro State.

Iván Ramón Terol-Villalobos received his BSc degree from Instituto Politécnico Nacional (I.P.N. México), his MSc degree from Centro de Investigación y Estudios Avanzados del I.P.N. (México). He received his PhD degree from the Centre de Morphologie Mathématique, Ecole des Mines de Paris (France). Currently, he is a researcher at CIDETEQ (Querétaro, México). His main current research interests include morphological image processing, morphological probabilistic models, and computer vision.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Jorge Domingo Mendiola-Santibañez, Martín Gallegos-Duarte, Miguel Octavio Arias-Estrada, Israel Marcos Santillán-Méndez, Juvenal Rodríguez-Reséndiz, and Iván Ramón Terol-Villalobos "Sequential application of viscous opening and lower leveling for three-dimensional brain extraction on magnetic resonance imaging T1," Journal of Electronic Imaging 23(3), 033010 (3 June 2014). https://doi.org/10.1117/1.JEI.23.3.033010
Published: 3 June 2014
KEYWORDS: Brain, Image segmentation, Magnetic resonance imaging, Neuroimaging, Image processing, Databases, Skull
