Open Access Paper
17 October 2022
Dual-domain network with transfer learning for reducing bowtie-filter induced artifacts in half-fan cone-beam CT
Sungho Yun, Uijin Jeong, Donghyeon Lee, Hyeongseok Kim, Seungryong Cho
Proceedings Volume 12304, 7th International Conference on Image Formation in X-Ray Computed Tomography; 123041H (2022) https://doi.org/10.1117/12.2646923
Event: Seventh International Conference on Image Formation in X-Ray Computed Tomography (ICIFXCT 2022), 2022, Baltimore, United States
Abstract
In a cone-beam CT system, the use of a bowtie-filter may induce artifacts in the reconstructed images. Through a Monte-Carlo simulation study, we confirm that the bowtie-filter causes a spatially biased beam-energy distribution, thereby creating beam-hardening artifacts. We also note that cupping artifacts may manifest in conjunction with the object scatter and additional beam-hardening. In this study, we propose a dual-domain network for reducing the bowtie-filter induced artifacts by addressing the origins of the artifacts. In the projection domain, the network compensates for the filter-induced beam-hardening effects. In the image domain, the network reduces the cupping artifacts that generally appear in cone-beam CT images. A transfer-learning scheme was also adopted for the projection-domain network to reduce the total training cost and to increase utility in practical cases while maintaining the robustness of the dual-domain network: a projection-domain network pre-trained on simple elliptical cylinder phantoms was utilized. As a result, the proposed network produces denoised images with enhanced soft-tissue contrast and much reduced artifacts. For comparison, a single image-domain U-net was also implemented as an ablation study. The proposed dual-domain network outperforms, in terms of soft-tissue contrast and residual artifacts, a single-domain network that does not physically consider the cause of the artifacts.

1. INTRODUCTION

In a clinical cone-beam CT system, the bowtie-filter is often used to homogenize the projection data across the field-of-view, thereby better utilizing the detector response characteristics, and partly to reduce the amount of object scatter. However, it can induce eclipse-shaped artifacts that typically show a bright ring and a dark shade in the reconstructed image, as shown in Fig. 1. Artifacts from the bowtie-filter may build up in various forms depending on the geometric shape of the filter and the scanning system conditions. Since these artifacts can severely degrade the soft-tissue contrast in CT images, there have been studies aiming to reduce them and to clarify their physical causes [1-3].

Figure 1. Bowtie-filter artifacts under two different bowtie-filter designs. Eclipse artifacts, with a bright ring and a dark shade inside, appear in the images on top of the global cupping.

Because human anatomy is roughly elliptical in the transverse plane, the bowtie-filter gradually thickens toward the outside from the principal-ray projection position. Therefore, the incident x-ray beam from the source is hardened when it passes through the thicker part of the filter. This spatially varying incident beam spectrum, together with the nonlinear detector energy-response characteristics, can cause inconsistencies with the linear imaging model, resulting in peculiar beam-hardening artifacts such as the eclipse artifacts. M. Cai et al. [3] introduced a decoupling technique that decomposes the artifacts into bowtie-filter-induced beam-hardening and object-induced cupping artifacts. The decoupling scheme is valid from a physical perspective in that the artifacts originating from the bowtie-filter can be separated from the object-originated ones. However, the method suggested in [3] requires heuristic parameter optimization for each case and has to handle the correction mismatch in an iterative reconstruction framework, which incurs a heavy computational cost. A neural-network-based artifact correction method can be an alternative; however, a single-domain network (e.g., in the image domain) may not be a suitable candidate, since physical factors are hardly incorporated in such a network, possibly resulting in residual artifacts and structural distortions. In recent studies, a dual-domain network has been introduced to partly incorporate physical factors in the projection domain and has shown promising results in metal artifact reduction [4].

In this study, we propose a dual-domain network for reducing the bowtie-filter induced artifacts by efficiently addressing the causes of the artifacts in the respective domains. In the projection domain, the network compensates for the filter-induced beam-hardening effects. In the image domain, the network removes the cupping artifacts that are more associated with the imaged object. A transfer-learning scheme was also adopted in the projection-domain training to reduce the total training cost and to increase utility in practical cases.

2. METHOD

2.1 Decoupling of Bowtie Artifacts

Through a Monte-Carlo simulation study, we have confirmed that the bowtie-filter causes a spatially biased beam-energy distribution and creates eclipse-shaped beam-hardening artifacts in the image on top of the object-induced cupping artifacts. Figure 1 shows example slice images of a reconstructed uniform cylinder phantom. The two cases represent different shapes of the bowtie-filters. Overall, the images are subject to cupping due to the object scatter and beam-hardening, and the central ring-shaped blooming is due to the bowtie-filter. Thus, we aim to correct for the bowtie-filter artifacts in the projection domain and, by doing so, decouple the bowtie-filter artifacts from the object-related ones.

2.2 Dual-domain network with transfer learning

Transfer learning can be applied when the available dataset is small but a pre-trained network that performs a similar task exists. In our case, the transfer-learning scheme was adopted for the projection-domain network, which performs the filter-induced beam-hardening correction. This is due to the limited availability of patient projection data under specific geometric conditions (filter shapes or system geometry). Therefore, simple elliptical cylinder phantoms that are easy to implement were used to prepare a pre-trained network. After the projection-domain network, the image-domain network addresses the remaining cupping artifacts; the projection-domain output can be interpreted as a bottleneck feature in the transfer-learning scheme.
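As a rough illustration of this setup, the PyTorch sketch below loads hypothetical phantom-pretrained weights for the projection-domain network and freezes them, so that only the image-domain network is trained on patient data. The class name, checkpoint file, and channel layout are our own placeholders, not the authors' code, and the paper does not state whether the projection-domain network is further fine-tuned.

```python
import torch
import torch.nn as nn

class ResUNet(nn.Module):
    """Placeholder stand-in for the residual U-net used in both domains
    (a fuller residual U-net sketch is given in Sec. 2.5)."""
    def __init__(self, in_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1))
    def forward(self, x):
        return x[:, :1] + self.body(x)  # residual prediction on the first channel

# Projection-domain network: pre-trained on elliptical cylinder phantoms,
# used here as a fixed (frozen) pre-processing step.
proj_net = ResUNet(in_ch=2)  # 2 channels: object projection + bowtie-only projection
proj_net.load_state_dict(torch.load("proj_net_phantom.pt"))  # assumed checkpoint
proj_net.eval()
for p in proj_net.parameters():
    p.requires_grad = False

# Image-domain network: trained from scratch on the patient data.
img_net = ResUNet(in_ch=1)
optimizer = torch.optim.Adam(img_net.parameters(), lr=1e-4)
```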

2.3 Cone-beam CT system using Monte-Carlo simulation

The system geometry was designed according to Nano Focus Ray Inc.'s Phion v2.0 CBCT system, and a GPU-based Geant4 Monte-Carlo simulation tool was used [5]. The system uses a half-fan scan mode with a half-bowtie filter (aluminum-based). The source-to-detector distance (SDD) and source-to-object distance (SOD) are 835 mm and 480 mm, respectively. A detector array of 256×240 pixels was used for data acquisition, and the tube voltage was set to 110 kVp. The detailed scanning parameters are summarized in Table 1; a small configuration sketch in code follows the table.

Table 1. Scan geometry and acquisition parameters.

SDD: 835 mm
SOD: 480 mm
Detector pitch: 1.16 × 1.16 mm
Detector resolution: 256 × 240
Detector thickness: 20 mm
Tube voltage: 110 kVp
Angular step size: 1 degree
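For illustration only, the Table 1 parameters can be gathered into a single configuration object when setting up a simulation or reconstruction; the class and field names below are our own and are not part of the authors' simulation code.

```python
from dataclasses import dataclass

@dataclass
class HalfFanCBCTGeometry:
    """Scan parameters from Table 1 (illustrative container, not the authors' code)."""
    sdd_mm: float = 835.0          # source-to-detector distance
    sod_mm: float = 480.0          # source-to-object (isocenter) distance
    det_pitch_mm: float = 1.16     # square detector pixels, 1.16 x 1.16 mm
    det_cols: int = 256
    det_rows: int = 240
    det_thickness_mm: float = 20.0
    tube_kvp: float = 110.0
    angular_step_deg: float = 1.0  # one projection per degree over a full rotation

geom = HalfFanCBCTGeometry()
magnification = geom.sdd_mm / geom.sod_mm  # about 1.74 at the isocenter
```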

2.4 Projection domain training

For the projection-domain training, six elliptical cylinder water phantoms, each 400 mm in height, were prepared with different ellipticities. They have a fixed major-axis diameter of 265 mm (a value that nearly fills the field-of-view under the given geometry), and only the minor-axis diameter changes from 115 mm to 241 mm, which corresponds to ellipticities ranging from about 0.4 to 0.9. The patient cross-section is assumed to be approximately an elliptical cylinder with an ellipticity in this targeted range. For each phantom, 360 projections were acquired over a full rotation, and half of them were used as training data, considering the redundant information of the elliptical phantoms placed at the isocenter. A total of 1080 projections from the six phantoms were thus used. The projection data with the bowtie-filter were used as inputs, and the data without the bowtie-filter were used as labels. We also used the projection image of the bowtie-filter alone, i.e., without an imaged object, as an additional input. The data were divided into 810 pairs for the training set and 270 pairs for the validation set.

One thing we would like to note is that we used a frequency-splitting technique when preparing the input, considering the drastic difference between the simple phantoms and the patient data. In order to make the input patient data more consistent with the elliptical cylinder phantoms, the low-frequency information was extracted and fed into the network for removing the bowtie-filter artifacts, which are also low-frequency dominated. A Gaussian filter was used for the frequency splitting, and only the low-frequency part of the data was given as input to the network. After the projection-domain network, the high-frequency information was added back to form the corrected projection data.

By doing so, the beam-hardening correction can be effectively performed through low-frequency matching while the high-frequency components of the patient projection data are preserved. For the network model, a residual U-net was used with a mean-squared-error loss.
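A minimal sketch of this frequency-split correction is shown below, assuming NumPy/SciPy and an already trained projection-domain network wrapped as a callable `proj_net`; the Gaussian width `sigma` is illustrative, as the paper does not report the kernel size.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_projection(proj_with_bowtie, bowtie_only_proj, proj_net, sigma=8.0):
    """Frequency-split bowtie-filter correction (illustrative sketch).

    proj_with_bowtie : 2D projection of the object acquired with the bowtie-filter
    bowtie_only_proj : 2D projection of the bowtie-filter alone (additional network input)
    proj_net         : pre-trained projection-domain network mapping low-frequency
                       bowtie-filtered data to bowtie-free data
    sigma            : Gaussian filter width in pixels (assumed value)
    """
    low = gaussian_filter(proj_with_bowtie, sigma)    # low-frequency part fed to the network
    high = proj_with_bowtie - low                     # high-frequency detail kept aside
    low_corrected = proj_net(low, bowtie_only_proj)   # beam-hardening correction on low frequencies
    return low_corrected + high                       # re-attach the preserved high frequencies
```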

Figure 2. Total workflow of the study. The frequency-split technique was applied before the projection-domain training to prevent loss of the high-frequency information of the input.

2.5 Image domain training

For the image-domain training, five clinical patient CT volumes from the Mayo Clinic (AAPM low-dose CT challenge dataset) were used. The abdominal part of the clinical data was segmented into 30 different materials with different densities based on HU values to create ground-truth material maps. Then, a polychromatic forward projection of the maps with the known source energy spectrum was performed to acquire projection data, which were reconstructed to create ground-truth images for training. Also, to create bowtie-artifact-corrupted patient projections, a Monte-Carlo simulation with the bowtie-filter inserted was conducted. These data were passed through the pre-trained projection-domain network as explained above and reconstructed to serve as input data for the image-domain network training. The reconstructed volume size is 256×256×240 for each patient.
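The polychromatic forward projection can be sketched as follows, assuming the per-material path lengths of each ray have already been computed (e.g., by a ray tracer). The array shapes, names, and energy-bin discretization are our own and not the authors' implementation.

```python
import numpy as np

def polychromatic_projection(path_lengths, mu, spectrum):
    """Polychromatic forward projection (illustrative sketch).

    path_lengths : (M, H, W) intersection length of each ray with each of M materials [cm]
    mu           : (M, E) linear attenuation coefficient of each material per energy bin [1/cm]
    spectrum     : (E,) relative photon fluence of the 110 kVp source spectrum per energy bin
    """
    # line integral per energy bin: sum_m mu_m(E) * L_m
    line_integral = np.einsum('mhw,me->ehw', path_lengths, mu)
    # detected intensity: attenuated spectrum summed over energy bins
    intensity = np.einsum('e,ehw->hw', spectrum, np.exp(-line_integral))
    # log-normalized projection value
    return -np.log(intensity / spectrum.sum())
```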

Among a total of 1200 data pairs, 720 pairs were used for the training set, 240 pairs for validation, and 240 pairs for testing. For the network model, a residual U-net was used with a mean-squared-error loss.
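Both domains use a residual U-net trained with an MSE loss; a compact PyTorch sketch is given below. The depth, channel widths, and other hyperparameters are illustrative, since the exact architecture is not reported in the paper.

```python
import torch
import torch.nn as nn

class ResidualUNet(nn.Module):
    """Compact residual U-net sketch (two scales); architectural details are assumed."""

    def __init__(self, in_ch=1, base=64):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1 = block(in_ch, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.out = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return x + self.out(d1)   # residual learning: the network predicts the correction

model = ResidualUNet()
loss_fn = nn.MSELoss()
```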

3. RESULTS AND DISCUSSION

3.1 Results of the projection domain network

The network was trained for 24k epochs with a learning rate of 5e-6 using the Adam optimizer. The validation loss was nearly saturated beyond 16k epochs and converged to 5.50e-5. The network weights at the 23k-th epoch were used for this study; the training loss at that point was 3.03e-6 and the validation loss was 5.49e-5. In Fig. 3 (a), (b), and (c), results on validation images are shown. The compensation was successful, and the line profile of the projection closely matches that of the label data.
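A minimal training-loop sketch matching the reported settings (Adam, learning rate 5e-6, MSE loss) is shown below; the model and data loaders are assumed to exist, the checkpoint selection by validation loss is our simplification (the paper reports using the 23k-th epoch weights), and none of this is the authors' code.

```python
import torch

def train_projection_net(model, train_loader, val_loader, epochs=24_000, lr=5e-6):
    """Sketch of the projection-domain training described above (illustrative only)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    best_val = float("inf")
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:               # 810 training pairs
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():                   # 270 validation pairs
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader) / len(val_loader)
        if val < best_val:                      # keep the best-validation checkpoint
            best_val = val
            torch.save(model.state_dict(), "proj_net_best.pt")
```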

Figure 3. (a) Input image with line profile, (b) output image with line profile, (c) label image with line profile; (d) input patient projection with line profile, (e) output patient projection with line profile. The network successfully compensates for the beam-hardening effect and recovers the line profile. The display window is [0.00, 3.98] for (a), (b), and (c) and [0.00, 2.82] for (d) and (e).

The patient projection data were then also fed to this pre-trained network. The beam-hardening due to the bowtie-filter was well corrected, as the line profiles in Fig. 3 (d) and (e) show. Please note that there is no ground-truth for the patient projections in this case. These results were reconstructed and compared with the original images in Fig. 4.

Figure 4. (a) Original image corrupted by bowtie artifacts, (b) reconstructed output of the projection-domain network, (c) difference image between (a) and (b). The second row shows another body part in the same sequence.

As shown in Fig. 4, the eclipse artifact is clearly removed, and only the cupping artifact remains. Also, owing to the frequency-split technique, the high-frequency structures are well preserved. We believe that removing these eclipse artifacts helps the image-domain network remove the remaining cupping artifacts, whereas a single image-domain network would have to process the compounded artifacts without incorporating the different physical factors.

3.2 Results of the image domain network

The network was trained for 1.4k epochs with a learning rate of 1e-4 using the Adam optimizer. The training loss was 1.58e-7, the validation loss was 1.76e-7, and the test loss was 1.78e-7. To show the robustness of our dual-domain network, the results were compared with a single image-domain U-net. The compared network was trained in the image domain with the bowtie-artifact-corrupted patient images as inputs and the label patient images as targets. For the single image-domain network, training was performed for 900 epochs; slight overfitting was detected around the 500th epoch, so the weights at the 470th epoch were used for comparison. The same learning rate of 1e-4, network model, and optimizer as in the proposed network were used. The training loss was 2.05e-7, the validation loss was 2.10e-7, and the test loss was 2.12e-7. The compared test results are shown in Fig. 5.

Figure 5. (a) Output of the proposed network, (b) output of the single-domain U-net, (c) ground-truth image. The second row shows another body part in the same sequence. The display window is [0.017, 0.03].

In Fig. 5, the proposed network clearly removes the remaining cupping artifacts. Furthermore, it shows enhanced soft-tissue contrast and denoised outputs. It is observed that the output images are smoother than the label images; we note that this is due to the nature of the label images, in which scattering is neglected. In the Monte-Carlo generated data, scatter naturally comes into play, so the resulting images cannot be as sharp as the label images. The outputs of the compared network also have enhanced contrast and are denoised; however, some residual artifacts are still observed where the eclipse artifacts originally existed. Moreover, structural distortion was observed in the results of the compared network (highlighted with a red arrow in Fig. 5).

For quantitative analysis, the mean squared error (MSE) loss, root mean squared error (RMSE), and structural similarity index (SSIM) were evaluated and are presented in Table 2 (a sketch of the metric computation follows the table). The proposed dual-domain network outperforms a single-domain network that does not physically consider the cause of the artifacts.

Table 2. Quantitative analysis of the proposed dual-domain network and the single image-domain U-net.

MSE (loss)            Training    Validation    Test
Proposed network      1.58e-7     1.76e-7       1.78e-7
Single-domain U-net   2.05e-7     2.10e-7       2.12e-7

RMSE                  Training    Validation    Test
Proposed network      3.97e-4     4.19e-4       4.21e-4
Single-domain U-net   4.52e-4     4.57e-4       4.60e-4

SSIM                  Training    Validation    Test
Proposed network      0.963       0.949         0.967
Single-domain U-net   0.958       0.946         0.961
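For reference, the Table 2 metrics can be computed per image pair as in the sketch below, using scikit-image for SSIM; the data-range handling is our assumption, not a detail given in the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_pair(output, label):
    """MSE, RMSE, and SSIM for one output/label image pair (illustrative sketch)."""
    mse = float(np.mean((output - label) ** 2))
    rmse = float(np.sqrt(mse))
    ssim = structural_similarity(output, label,
                                 data_range=float(label.max() - label.min()))
    return mse, rmse, ssim
```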

4. CONCLUSION

In this study, we proposed a dual-domain network to reduce bowtie-filter induced image artifacts in cone-beam CT. The promise of the dual-domain network has been successfully demonstrated. Experimental validation of the proposed method is underway and will be presented in the near future.

REFERENCES

[1] H. Zhang, V. Kong, K. Huang, and J. Y. Jin, "Correction of Bowtie-Filter Normalization and Crescent Artifacts for a Clinical CBCT System," Technol. Cancer Res. Treat., 16 (1), (2017). https://doi.org/10.1177/1533034615627584

[2] Y. Cao, T. Ma, S. F. de Boer, and I. Z. Wang, "Image artifacts caused by incorrect bowtie filters in cone-beam CT image-guided radiotherapy," J. Appl. Clin. Med. Phys., 21 (7), (2020). https://doi.org/10.1002/acm2.v21.7

[3] M. Cai, M. Byrne, B. Archibald-Heeren, P. Metcalfe, A. Rosenfeld, and Y. Wang, "Decoupling of bowtie and object effects for beam hardening and scatter artefact reduction in iterative cone-beam CT," Phys. Eng. Sci. Med., 43 (4), (2020). https://doi.org/10.1007/s13246-020-00918-8

[4] W. A. Lin, H. Liao, C. Peng, X. Sun, J. Zhang, J. Luo, R. Chellappa, and S. K. Zhou, "DuDoNet: Dual domain network for CT metal artifact reduction," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2019). https://doi.org/10.1109/CVPR41558.2019

[5] J. Bert, H. Perez-Ponce, Z. El Bitar, S. Jan, Y. Boursier, D. Vintache, A. Bonissent, C. Morel, D. Brasse, and D. Visvikis, "Geant4-based Monte Carlo simulations on GPU for medical applications," Phys. Med. Biol., 58 (16), (2013). https://doi.org/10.1088/0031-9155/58/16/5593

[6] S. M. Lee, T. Bayaraa, H. Jeong, C. M. Hyun, and J. K. Seo, "A direct sinogram correction method to reduce metal-related beam-hardening in computed tomography," IEEE Access, 7 (2019).

KEYWORDS: Computed tomography, Sensors, Data modeling, Image filtering, Monte Carlo methods, Data acquisition, Nonlinear filtering
