The stochastic nature of 3-D Monte Carlo (MC) photon transport simulations requires simulating a large number of photons to achieve stable solutions. In this work, we explore state-of-the-art deep-learning (DL) based image denoising techniques, including a proposed cascade of DnCNN and UNet denoising networks, aiming to significantly reduce the stochastic noise in low-photon MC simulations and thereby achieve both high speed and high image quality. We demonstrate that all tested DL-based denoisers are significantly more effective than model-based denoising methods. In our benchmarks, our cascaded denoiser achieves a signal enhancement equivalent to running 25x to 78x more photons.
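As a concrete illustration of the cascade idea, the following minimal PyTorch sketch chains a small residual DnCNN-style stage with a single-level UNet-style stage operating on 3-D volumes. The channel counts, network depths, single-channel input, and log-scaled preprocessing are illustrative assumptions and do not reproduce the published configuration.

# Minimal sketch of a cascaded DnCNN -> UNet volumetric denoiser. All
# hyperparameters below are illustrative assumptions, not the authors' settings.
import torch
import torch.nn as nn


class DnCNN3D(nn.Module):
    """Residual denoiser: predicts the noise component and subtracts it."""

    def __init__(self, channels=1, features=32, depth=5):
        super().__init__()
        layers = [nn.Conv3d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv3d(features, features, 3, padding=1),
                       nn.BatchNorm3d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv3d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)   # residual learning: output = input - predicted noise


class UNet3D(nn.Module):
    """Single-level encoder/decoder with one skip connection."""

    def __init__(self, channels=1, features=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(channels, features, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.MaxPool3d(2)
        self.mid = nn.Sequential(nn.Conv3d(features, features * 2, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose3d(features * 2, features, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv3d(features * 2, features, 3, padding=1), nn.ReLU(inplace=True),
                                 nn.Conv3d(features, channels, 3, padding=1))

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        return self.dec(torch.cat([self.up(m), e], dim=1))


class CascadedDenoiser(nn.Module):
    """Run the two stages back to back on the same volume."""

    def __init__(self):
        super().__init__()
        self.stage1 = DnCNN3D()
        self.stage2 = UNet3D()

    def forward(self, x):
        return self.stage2(self.stage1(x))


if __name__ == "__main__":
    # Fluence spans many orders of magnitude, so a log transform is a common
    # preprocessing choice (an assumption here, not taken from the abstract).
    noisy = torch.rand(1, 1, 32, 32, 32)
    denoised = CascadedDenoiser()(torch.log1p(noisy))
    print(denoised.shape)  # torch.Size([1, 1, 32, 32, 32])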
KEYWORDS: Monte Carlo methods, Denoising, Signal to noise ratio, Photon transport, Computer simulations, Performance modeling, Model-based design, Image processing, Data modeling, Algorithm development
Significance: The Monte Carlo (MC) method is widely used as the gold standard for modeling light propagation inside turbid media, such as human tissues, but combating its inherent stochastic noise requires simulating a large number of photons, resulting in high computational burdens. Aim: We aim to develop an effective image denoising technique using deep learning (DL) to dramatically improve the quality of low-photon MC simulation results, equivalently bringing further acceleration to the MC method. Approach: We developed a cascade network combining DnCNN with UNet, while extending a range of established image denoising neural-network architectures, including DnCNN, UNet, DRUNet, and deep residual learning for denoising MC renderings (ResMCNet), to handle three-dimensional MC data, and compared their performance against model-based denoising algorithms. We also developed a simple yet effective approach to creating synthetic datasets for training DL-based MC denoisers. Results: Overall, DL-based image denoising algorithms exhibit significantly higher image quality improvements than traditional model-based denoising algorithms. Among the tested DL denoisers, our cascade network yields a 14 to 19 dB improvement in signal-to-noise ratio, which is equivalent to simulating 25× to 78× more photons. Other DL-based methods yielded similar results, with our method performing noticeably better with low-photon inputs and ResMCNet along with DRUNet performing better with high-photon inputs. Our cascade network achieved the highest quality when denoising complex domains, including brain and mouse atlases. Conclusions: Incorporating state-of-the-art DL denoising techniques can equivalently reduce the computation time of MC simulations by one to two orders of magnitude. Our open-source MC denoising codes and data can be freely accessed at http://mcx.space/.
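The mapping between the reported dB gains and equivalent photon counts follows from the standard shot-noise behavior of MC estimates, where the variance of the fluence is inversely proportional to the number of launched photons; the arithmetic below is a worked restatement of the abstract's figures under that standard assumption, not additional data.

\Delta\mathrm{SNR} = 20\log_{10}\frac{\sigma_{N_1}}{\sigma_{N_2}} = 10\log_{10}\frac{N_2}{N_1}\ \text{dB},
\qquad \frac{N_2}{N_1} = 10^{\Delta\mathrm{SNR}/10},

so a 14 dB gain corresponds to \(10^{1.4} \approx 25\times\) more photons and a 19 dB gain to \(10^{1.9} \approx 79\times\), consistent with the reported 25× to 78× range.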
KEYWORDS: Denoising, 3D image processing, X-ray computed tomography, Data modeling, Computed tomography, Image processing, Signal to noise ratio, RGB color model
CT continues to be one of the most widely used medical imaging modalities. Concerns about the long-term effects of x-ray radiation on patients have led to efforts to reduce the x-ray dose imparted during CT exams. Lowering CT dose results in a lower signal-to-noise ratio in CT data, which lowers CT image quality (IQ). Deep learning algorithms have shown competitive denoising results against state-of-the-art image-based denoising approaches. Among these deep learning algorithms, deep residual networks have demonstrated effectiveness for edge-preserving noise reduction and imaging performance improvement compared to traditional edge-preserving filters. The previously published Residual Encoder-Decoder Convolutional Neural Network (RED-CNN) showed significant achievements in noise suppression, structural preservation, and lesion detection. However, its 2D architecture makes it unsuitable for thin-slice and reformatted (sagittal, coronal) imaging. In this work, we present a novel 3D RED-CNN architecture, evaluate the effect of model parameters on performance and IQ, and show steps to improve optimization convergence. We use standard imaging metrics (SSIM, PSNR) to assess imaging performance and compare to previously published algorithms. Compared to 2D RED-CNN, our proposed 3D RED-CNN produces higher-quality 3D results, as shown by reformatted (sagittal, coronal) views, while maintaining all the advantages of the original RED-CNN in axial imaging.
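A minimal sketch of the residual encoder-decoder idea extended to 3-D is shown below, assuming three convolutional and three transposed-convolutional layers with symmetric residual shortcuts; the published network's exact depth, filter counts, and kernel sizes are not reproduced here.

# Sketch of a 3-D residual encoder-decoder CNN in the spirit of RED-CNN.
# Depth, kernel size, and filter count are illustrative assumptions.
import torch
import torch.nn as nn


class RED3D(nn.Module):
    def __init__(self, channels=1, features=32, kernel=5):
        super().__init__()
        pad = kernel // 2
        self.enc1 = nn.Conv3d(channels, features, kernel, padding=pad)
        self.enc2 = nn.Conv3d(features, features, kernel, padding=pad)
        self.enc3 = nn.Conv3d(features, features, kernel, padding=pad)
        self.dec3 = nn.ConvTranspose3d(features, features, kernel, padding=pad)
        self.dec2 = nn.ConvTranspose3d(features, features, kernel, padding=pad)
        self.dec1 = nn.ConvTranspose3d(features, channels, kernel, padding=pad)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Encoder: stack convolutions, keeping feature maps for residual skips.
        e1 = self.relu(self.enc1(x))
        e2 = self.relu(self.enc2(e1))
        e3 = self.relu(self.enc3(e2))
        # Decoder: transposed convolutions with symmetric residual shortcuts.
        d3 = self.relu(self.dec3(e3) + e2)
        d2 = self.relu(self.dec2(d3) + e1)
        return self.relu(self.dec1(d2) + x)   # final shortcut from the noisy input


if __name__ == "__main__":
    vol = torch.rand(1, 1, 16, 64, 64)   # thin-slice volumetric patch
    print(RED3D()(vol).shape)            # torch.Size([1, 1, 16, 64, 64])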
KEYWORDS: Monte Carlo methods, Denoising, Signal to noise ratio, Image filtering, Digital filtering, Photon transport, Photon counting, Computer simulations, Visualization, Gaussian filters
The Monte Carlo (MC) method is widely recognized as the gold standard for modeling light propagation inside turbid media. Due to the stochastic nature of this method, MC simulations suffer from inherent stochastic noise. Launching large numbers of photons can reduce noise but results in significantly greater computation times, even with graphics processing unit (GPU)-based acceleration. We develop a GPU-accelerated adaptive nonlocal means (ANLM) filter to denoise MC simulation outputs. This filter can effectively suppress the spatially varying stochastic noise present in low-photon MC simulations and improve the image signal-to-noise ratio (SNR) by over 5 dB, equivalent to the SNR improvement of running nearly 3.5× more photons. We validate this denoising approach using both homogeneous and heterogeneous domains at various photon counts. The ability to preserve rapid optical fluence changes is also demonstrated using domains with inclusions. We demonstrate that this GPU-ANLM filter can shorten simulation runtimes for most photon counts and domain settings, even when combined with our highly accelerated GPU MC simulations. We also compare this GPU-ANLM filter with its CPU version and report a threefold to fourfold speedup. The developed GPU-ANLM filter not only enhances three-dimensional MC photon simulation results but can also serve as a valuable tool for noise reduction in other volumetric images, such as MRI and CT scans.
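The short Python sketch below uses scikit-image's nonlocal-means filter as a CPU stand-in for the GPU-ANLM filter (the adaptive, spatially varying smoothing of ANLM is simplified to a single global strength here), and measures the SNR gain against a reference volume using one common reference-based dB definition, which is not necessarily the metric used in the paper.

# CPU illustration: nonlocal-means denoising of a synthetic 3-D fluence-like
# volume, plus a reference-based SNR gain in dB. Stand-in only, not GPU-ANLM.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma


def snr_db(x, ref):
    """Reference-based SNR in dB: signal power over residual power."""
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum((x - ref) ** 2))


# Synthetic stand-ins: a smooth "ground-truth" field and a noisy low-photon run.
z, y, x = np.mgrid[0:48, 0:48, 0:48]
ref = np.exp(-((x - 24) ** 2 + (y - 24) ** 2 + (z - 24) ** 2) / 200.0)
noisy = ref + np.random.default_rng(0).normal(0, 0.05, ref.shape) * np.sqrt(ref + 1e-3)

sigma = estimate_sigma(noisy)                       # rough global noise estimate
denoised = denoise_nl_means(noisy, patch_size=3, patch_distance=5,
                            h=0.8 * sigma, fast_mode=True)

print(f"noisy SNR:    {snr_db(noisy, ref):.1f} dB")
print(f"denoised SNR: {snr_db(denoised, ref):.1f} dB")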
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
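As a small illustration of device-level load balancing in a vendor-independent OpenCL setting, the pyopencl sketch below enumerates every available device and splits a photon budget in proportion to a simple capability score (compute units × clock frequency). This is an illustrative heuristic only, not the partitioning strategy actually used by the simulation platform.

# Enumerate OpenCL devices and apportion a photon budget by a capability score.
import pyopencl as cl

TOTAL_PHOTONS = 10**8

devices = [d for p in cl.get_platforms() for d in p.get_devices()]
scores = [d.max_compute_units * d.max_clock_frequency for d in devices]
total_score = sum(scores) or 1

for dev, score in zip(devices, scores):
    share = round(TOTAL_PHOTONS * score / total_score)
    print(f"{dev.name.strip():40s} -> {share:>12d} photons")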