Cerebral perfusion computed tomography (CPCT) imaging provides rapid, accurate, and noninvasive measurement of acute stroke by generating hemodynamic parameter maps in both a qualitative and a quantitative way. However, because it performs multiple consecutive scans over one area of the head, its radiation exposure is relatively higher than that of a routine protocol, and lowering the radiation dose in a CPCT protocol increases the amount of noise and hence influences the hemodynamic parameters estimated for patients with acute stroke. Some advanced methods have been proposed and show great potential for noise suppression in low-dose CPCT imaging, but most of them assume that the embedded noise is independent and identically distributed (i.i.d.), whereas the noise may be more complicated in practical scenarios. In this work, we first analyze the noise properties of low-dose CPCT images, and then present a novel perfusion deconvolution method with self-relative structure similarity information and a mixture-of-Gaussians (MoG) noise model (named SR-MoG) to accurately estimate the hemodynamic parameters directly at low radiation exposure. Experiments on a digital brain perfusion phantom verify that the presented SR-MoG method achieves promising gains over existing deconvolution approaches.
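The abstract does not give the SR-MoG formulation, but the core idea of replacing an i.i.d. Gaussian noise assumption with a mixture of Gaussians can be illustrated in a few lines. A minimal sketch, assuming hypothetical residual data and using scikit-learn's GaussianMixture:

```python
# Minimal sketch: modeling non-i.i.d. CPCT noise with a mixture of Gaussians.
# The residual data are simulated; the actual SR-MoG model is not given here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two noise populations with different variances, standing in for the
# spatially varying noise found in low-dose CPCT images.
residuals = np.concatenate([rng.normal(0.0, 5.0, 8000),
                            rng.normal(0.0, 20.0, 2000)])

# Fit a 2-component MoG; a single Gaussian (the i.i.d. assumption) would
# misrepresent the heavy-tailed component.
mog = GaussianMixture(n_components=2, random_state=0)
mog.fit(residuals.reshape(-1, 1))
print("weights:", mog.weights_)
print("stddevs:", np.sqrt(mog.covariances_).ravel())
```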
Energy-resolving CT (ErCT) with a photon counting detector (PCD) is able to generate multi-energy data with high spatial resolution, which can be used to improve the contrast-to-noise ratio (CNR) of iodinated tissues and to reduce beam-hardening artifacts. In addition, ErCT allows for generating virtual mono-energetic CT images with improved CNR. However, most ErCT scanners are lab-built and little used in clinical research. Deep learning-based methods can help generate ErCT images from energy-integrating CT (EiCT) images via convolutional neural networks (CNNs) because of their capability in learning features of the EiCT and ErCT images. Nevertheless, current CNNs usually generate ErCT images at one energy bin at a time, and there is large room for improvement, such as generating ErCT images at multiple energy bins at a time. Therefore, in this work, we investigate leveraging a deep generative model (IuGAN-ErCT) to simultaneously generate ErCT images at multiple energy bins from existing EiCT images. Specifically, a unified generative adversarial network (GAN) is employed: with a single generator, the network learns the latent correlation between EiCT and ErCT images to estimate ErCT images from EiCT images. Moreover, to maintain the value accuracy of the different ErCT images, we introduce a fidelity loss function. In the experiment, 1384 abdomen and chest images collected from 22 patients were used to train the proposed IuGAN-ErCT method, and 130 slices were used for testing. Results show that the IuGAN-ErCT method generates more accurate ErCT images than the uGAN-ErCT method in both quantitative and qualitative evaluation.
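The fidelity loss is only named in the abstract, so its exact form is unknown; a plausible sketch in PyTorch pairs the usual adversarial term with an L1 penalty tying each generated energy-bin image to its reference (generator_loss, fake_bins, and lam are hypothetical names):

```python
# Hedged sketch of a fidelity term added to the adversarial generator loss.
import torch
import torch.nn.functional as F

def generator_loss(fake_bins, real_bins, fake_logits, lam=10.0):
    # fake_bins/real_bins: (batch, n_bins, H, W) multi-energy ErCT stacks.
    adv = F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))  # fool the discriminator
    fidelity = F.l1_loss(fake_bins, real_bins)       # keep CT values accurate
    return adv + lam * fidelity
```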
Deep learning-based algorithms have been widely used in the low-dose CT imaging field and have achieved promising results. However, most of these algorithms only consider the information of the desired CT image itself, ignoring external information that can help improve imaging performance. Therefore, in this study, we present a convolutional neural network with non-local texture learning (NTL-CNN) for low-dose CT reconstruction. Specifically, different from traditional networks in CT imaging, the presented NTL-CNN approach takes into consideration the non-local features within adjacent slices of 3D CT images. Both the low-dose target CT images and the non-local features are then fed into a residual network to produce the desired high-quality CT images. Real patient datasets are used to evaluate the performance of the presented NTL-CNN. The corresponding experimental results demonstrate that the presented NTL-CNN approach obtains better CT images than the competing approaches, in terms of noise-induced artifact reduction and structure detail preservation.
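As a rough illustration of the input design described above, the sketch below (PyTorch, with placeholder layer sizes) concatenates the target slice with its adjacent slices as extra channels and learns the noise residually; the actual NTL-CNN feature extraction is not detailed in the abstract:

```python
# Sketch: adjacent-slice context fed alongside the target slice.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, context_slices=2):
        super().__init__()
        in_ch = 1 + context_slices            # target slice + its neighbors
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, target, neighbors):
        # target: (B,1,H,W); neighbors: (B,context_slices,H,W)
        x = torch.cat([target, neighbors], dim=1)
        return target - self.body(x)          # residual learning: predict noise

net = ResidualDenoiser()
out = net(torch.randn(1, 1, 64, 64), torch.randn(1, 2, 64, 64))
print(out.shape)
```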
Deep learning (DL) networks show great potential in the computed tomography (CT) imaging field. Most of them are supervised DL networks whose performance depends greatly on their capacity and on the amount of CT training data (i.e., low-dose CT measurements and high-quality counterparts). However, collecting large-scale CT datasets is time-consuming and expensive. In addition, supervised DL networks require the training and testing CT datasets to be highly similar in scan protocol (i.e., similar anatomical structure and the same kVp setting). These two issues are particularly critical in spectral CT imaging. In this work, to address them, we present an unsupervised data fidelity enhancement network (USENet) to produce high-quality spectral CT images. Specifically, the presented USENet consists of two parts, i.e., a supervised network and an unsupervised network. In the supervised network, spectral CT image pairs at 140 kVp (low-dose/high-dose CT images) are used for network training. It should be noted that there is a great difference in CT values between spectral CT images at 140 kVp and 80 kVp, so the supervised network trained with CT images at 140 kVp cannot be directly used for CT image reconstruction at 80 kVp. The unsupervised network therefore enrolls a physical model and the spectral CT measurements at 80 kVp to fine-tune the supervised network, which is the major contribution of the presented USENet method. Finally, accurate spectral CT reconstructions are achieved for the sparse-view and low-dose cases, which fully demonstrates the effectiveness of the presented USENet method.
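The fine-tuning step can be pictured as a measurement-consistency loss that needs no 80 kVp ground truth. A minimal sketch, with a toy linear operator A standing in for the CT forward projector (all names hypothetical):

```python
# Sketch: unsupervised fine-tuning via data fidelity to raw 80 kVp measurements.
import torch

def unsupervised_fidelity_loss(net, low_dose_img, sinogram_80kvp, A):
    recon = net(low_dose_img)                         # network's 80 kVp estimate
    return ((A(recon) - sinogram_80kvp) ** 2).mean()  # match the measurements

torch.manual_seed(0)
A_mat = torch.randn(32, 64)                   # toy stand-in for the projector
A = lambda x: x @ A_mat.T
net = torch.nn.Linear(64, 64)                 # placeholder for the trained network
x = torch.randn(4, 64)
y = A(x)                                      # unlabeled 80 kVp measurements
loss = unsupervised_fidelity_loss(net, x, y, A)
loss.backward()                               # gradients fine-tune the network
```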
With the development of deep learning (DL), many DL-based algorithms have been widely used in low-dose CT imaging and have achieved promising reconstruction performance. However, most DL-based algorithms need to pre-collect a large set of image pairs (low-dose/high-dose) and train networks in a supervised end-to-end manner. In clinical practice, it is not feasible to obtain such a large amount of paired training data, especially the high-dose images. Therefore, in this work, we present a semi-supervised learned sinogram restoration network (SLSR-Net) for low-dose CT image reconstruction. The presented SLSR-Net consists of a supervised sub-network and an unsupervised sub-network. Specifically, different from traditional supervised DL networks that only use low-dose/high-dose sinogram pairs, the presented SLSR-Net method is capable of feeding only a few supervised sinogram pairs and massive unsupervised low-dose sinograms into the network training procedure. The supervised pairs are used to capture critical latent features (i.e., noise distribution and tissue characteristics) in a supervised way, and the unsupervised sub-network efficiently learns these features using a conventional weighted least-squares model with a regularization term. Moreover, another contribution of the presented SLSR-Net method is to adaptively transfer the learned feature distribution from the supervised sub-network (with paired sinograms) to the unsupervised sub-network (with unlabeled low-dose sinograms) to obtain high-fidelity sinograms via a Kullback-Leibler divergence. Finally, the filtered backprojection algorithm is used to reconstruct CT images from the obtained sinograms. Real patient datasets are used to evaluate the performance of the presented SLSR-Net method, and the corresponding experimental results show that, compared with the traditional supervised learning method, the presented SLSR-Net method achieves competitive performance in terms of noise reduction and structure preservation in low-dose CT imaging.
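The abstract does not state how the KL transfer term is parameterized; assuming, for illustration, that each sub-network summarizes its feature distribution as a diagonal Gaussian, the penalty has the familiar closed form below:

```python
# Sketch: closed-form KL divergence between two diagonal Gaussian
# feature distributions (supervised sub-network p, unsupervised sub-network q).
import torch

def gaussian_kl(mu_p, logvar_p, mu_q, logvar_q):
    # KL( N(mu_p, var_p) || N(mu_q, var_q) ), elementwise, then averaged.
    var_p, var_q = logvar_p.exp(), logvar_q.exp()
    kl = 0.5 * (logvar_q - logvar_p + (var_p + (mu_p - mu_q) ** 2) / var_q - 1)
    return kl.mean()

mu_p, lv_p = torch.zeros(8), torch.zeros(8)
mu_q, lv_q = 0.1 * torch.ones(8), 0.2 * torch.ones(8)
print(gaussian_kl(mu_p, lv_p, mu_q, lv_q))
```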
Image quality assessment (IQA) is an important step in determining whether computed tomography (CT) images are suitable for diagnosis. Since high-dose CT images are usually not accessible in clinical practice, no-reference (NR) CT IQA should be used. Most deep learning-based NR-IQA methods for CT images focus on global information and ignore local characteristics, i.e., the contrast and edges of local regions. In this work, to address this issue, we present a new NR-IQA framework combining global and local information for CT images, termed NR-GL-IQA for simplicity. In particular, the presented NR-GL-IQA adopts a convolutional neural network to blindly predict the quality of an entire image without a reference image. In this stage, an elaborate strategy is used to automatically label the entire-image quality for neural network training, which copes with the time-consuming problem of manually annotating massive CT images. Second, in the presented NR-GL-IQA method, the Perception-based Image QUality Evaluator (PIQUE) is used to predict local region quality because PIQUE can adaptively capture local region characteristics. Finally, the overall image quality is estimated by combining the global and local IQA. The experimental results on the Mayo dataset demonstrate that the presented NR-GL-IQA method accurately predicts CT image quality and that the combination of global and local IQA is closer to the radiologists' assessment than either single assessment alone.
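The abstract only states that the global and local scores are combined; a simple weighted fusion, with a hypothetical weight alpha, conveys the idea:

```python
# Sketch: fusing the CNN's global score with PIQUE's local score.
# The fusion rule and weight are assumptions, not the published method.
def overall_quality(global_score, local_score, alpha=0.5):
    # alpha balances global (network) and local (PIQUE) assessments.
    return alpha * global_score + (1 - alpha) * local_score

print(overall_quality(0.82, 0.74))
```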
Recently, deep neural networks (DNNs) have been widely applied in the low-dose computed tomography (LDCT) imaging field. Their performance is highly related to the amount of pre-collected training data, which is usually hard to obtain, especially the high-dose CT (HDCT) images. Moreover, HDCT images sometimes contain undesired noise, which easily results in network overfitting. To address these two issues, we propose a cooperative meta-learning strategy for CT image reconstruction (CmetaCT) that combines a meta-learning strategy and a Co-teaching strategy. The meta-learning (teacher/student model) strategy allows training the network in a semi-supervised manner with a large number of LDCT images without corresponding HDCT images and only a small amount of labeled CT data. The Co-teaching strategy makes a trade-off between overfitting and introducing extra errors by including only a portion of the samples in every minibatch when updating model parameters. Owing to the capacity of meta-learning, the presented CmetaCT method is flexible enough to utilize any existing CT restoration/reconstruction network in the meta-learning framework. Finally, both quantitative and visual results indicate that the proposed CmetaCT method achieves superior performance in low-dose CT imaging compared with the DnCNN method.
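The sample-selection part of Co-teaching can be sketched as a small-loss criterion: only the fraction of minibatch samples with the lowest per-sample loss contributes to the update. The keep_ratio below is a hypothetical hyperparameter:

```python
# Sketch: small-loss sample selection within a minibatch, Co-teaching style.
import torch

def small_loss_selection(per_sample_loss, keep_ratio=0.8):
    k = max(1, int(keep_ratio * per_sample_loss.numel()))
    kept = torch.topk(per_sample_loss, k, largest=False).indices
    return per_sample_loss[kept].mean()   # update only on reliable samples

loss = small_loss_selection(torch.rand(16))
```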
Fully supervised deep learning (DL) methods have been widely used in the low-dose CT (LDCT) imaging field and can usually achieve highly accurate results. These methods require a large labeled training set consisting of pairs of LDCT images and their corresponding high-dose CT (HDCT) images. They successfully learn intermediate feature concepts describing important components of CT images, such as noise distribution and structure details, which are important for capturing the dependencies from LDCT images to HDCT ones. However, it is quite time-consuming and costly to obtain such a large set of labeled CT images, especially since HDCT images are limited in clinics. In comparison, unlabeled LDCT images are usually easily accessible, and the massive critical information latent in them can be leveraged to further boost restoration performance. Therefore, in this work, we present a semi-supervised noise distribution learning network, termed "SNDL-Net" for simplicity, to suppress noise-induced artifacts in LDCT images. The presented SNDL-Net consists of two sub-networks, i.e., a supervised network and an unsupervised network. In the supervised network, LDCT/HDCT image pairs are used for network training. The unsupervised network considers the complex noise distribution in the LDCT images, models the noise with a Gaussian mixture framework, and then learns the proper gradient of the LDCT images in a purely unsupervised manner. As in the supervised network training, the gradient information in a large set of unlabeled LDCT images can be used for unsupervised network training. Moreover, to learn the noise distribution accurately, the discrepancy between the noise distributions learned by the supervised and unsupervised networks is modeled by a Kullback-Leibler (KL) divergence. Experiments on the Mayo Clinic dataset verify that the method is effective for low-dose CT image restoration with only a small amount of labeled data compared to previous supervised deep learning methods.
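Since the KL divergence between two Gaussian mixtures has no closed form, one common choice (assumed here; the abstract does not say how the KL term is computed) is a Monte-Carlo estimate:

```python
# Sketch: Monte-Carlo KL between two 1D Gaussian-mixture noise models
# p (supervised network) and q (unsupervised network).
import torch

def mixture_log_prob(x, weights, means, stds):
    comp = torch.distributions.Normal(means, stds).log_prob(x.unsqueeze(-1))
    return torch.logsumexp(comp + weights.log(), dim=-1)

def mc_kl(p, q, n=10000):
    # p, q: (weights, means, stds) tuples; sample from p, compare log-densities.
    idx = torch.distributions.Categorical(p[0]).sample((n,))
    x = torch.distributions.Normal(p[1][idx], p[2][idx]).sample()
    return (mixture_log_prob(x, *p) - mixture_log_prob(x, *q)).mean()

p = (torch.tensor([0.7, 0.3]), torch.zeros(2), torch.tensor([1.0, 4.0]))
q = (torch.tensor([0.5, 0.5]), torch.zeros(2), torch.tensor([1.5, 3.0]))
print(mc_kl(p, q))
```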
Low-dose computed tomography (LDCT) examinations are essential in clinical applications because of the lower radiation-associated cancer risks in CT imaging. However, reductions in radiation dose produce severe noise and artifacts that can affect radiologists' diagnostic accuracy. Although many deep learning networks have been proposed, most of them rely on a large number of annotated CT image pairs (LDCT images/high-dose CT (HDCT) images). Moreover, it is challenging for these networks to cope with the growing amount of CT images, especially the large amount of medium-dose CT (MDCT) images, which are easy to collect and have a radiation dose lower than HDCT images and higher than LDCT images. Therefore, in this work, we propose a progressive transfer-learning network (PETNet) for low-dose CT image reconstruction with limited annotated CT data and abundant corrupted CT data. The presented PETNet consists of two phases. In the first phase, a network is trained on a large number of LDCT/MDCT image pairs, similar to the Noise2Noise network, which has shown potential in yielding promising results when corrupted data are used for network training. It should be noted that this network inevitably introduces undesired bias in the results due to the complex noise distribution in CT images. Then, in the second phase, we combine the pre-trained network and another simple network to construct the presented PETNet. In particular, the parameters of the pre-trained network are frozen and transferred directly to the presented PETNet, which is then trained on a small number of LDCT/HDCT image pairs. Experimental results on Mayo Clinic data demonstrate the superiority of the presented PETNet method, both qualitatively and quantitatively, compared with a network trained on LDCT/HDCT image pairs and a Noise2Noise method trained on LDCT/MDCT image pairs.
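The parameter freezing in the second phase corresponds to a standard PyTorch pattern; the toy architectures below are placeholders, not the published networks:

```python
# Sketch: freeze the phase-one network, train only the phase-two refiner.
import torch
import torch.nn as nn

pretrained = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(64, 1, 3, padding=1))   # phase-one network
refiner = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 1, 3, padding=1))      # phase-two network

for p in pretrained.parameters():
    p.requires_grad = False                  # frozen, transferred parameters

optimizer = torch.optim.Adam(refiner.parameters(), lr=1e-4)
x = torch.randn(2, 1, 64, 64)
out = refiner(pretrained(x))                 # cascade: frozen net, then refiner
```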
Photon counting computed tomography (PCCT) can simultaneously acquire measurements at multiple energies and is able to differentiate materials. However, the material decomposition strategy typically leads to signal-to-noise ratio degradation and noise amplification due to the limited photons detected in each energy bin in PCCT imaging. In this work, to address this issue, we present a statistical iterative material image reconstruction method to estimate materials accurately. Specifically, a patch-based enhanced 3D total variation (PE3DTV) regularization is introduced into the statistical iterative model. PE3DTV extracts non-local similarities among all the desired material images, stacks the similar patches to construct a 3D tensor, and calculates the sparsity on the subspace of the 3D tensor based on gradient maps, encoding the correlation across non-local structures among the material images. Numerical experiments show that the presented method leads to reduced statistical bias and improved material image quality compared to the conventional TV-based method.
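A basic 3D total-variation term on a stacked patch tensor, the building block PE3DTV elaborates on, looks as follows; the patch grouping and subspace projection of the actual method are not reproduced here:

```python
# Sketch: anisotropic 3D TV of a tensor of stacked similar patches.
import numpy as np

def tv3d(tensor):
    # Sum of absolute finite differences along each of the three axes.
    dx = np.abs(np.diff(tensor, axis=0)).sum()
    dy = np.abs(np.diff(tensor, axis=1)).sum()
    dz = np.abs(np.diff(tensor, axis=2)).sum()
    return dx + dy + dz

patches = np.random.rand(8, 8, 16)   # similar patches stacked along axis 2
print(tv3d(patches))
```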
In this study, we present a novel contrast-medium anisotropy-aware TTV (Cute-TTV) model to reflect the intrinsic sparsity configurations of a cerebral perfusion computed tomography (PCT) object. We also propose a PCT reconstruction scheme via the Cute-TTV model, referred to as CuteTTV-RECON, to improve the performance of PCT reconstruction in weak-radiation tasks, and we develop an efficient optimization algorithm for it. Preliminary simulation studies demonstrate that CuteTTV-RECON achieves significant improvements over existing state-of-the-art methods in terms of artifact suppression, structure preservation, and parametric-map accuracy under weak radiation.
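The abstract does not spell out the CuteTTV-RECON objective; a generic weak-radiation PCT reconstruction objective of the kind it likely instantiates, with anisotropy-aware directional weights $w_d$ as the Cute-TTV-specific ingredient, is:

```latex
% Schematic objective, assumed from the abstract's description:
% data fidelity plus directionally weighted total variation over
% the two spatial axes and the temporal axis of the PCT sequence.
\min_{\mathbf{x}} \; \frac{1}{2}\,\|A\mathbf{x} - \mathbf{y}\|_{2}^{2}
  \;+\; \lambda \sum_{d \in \{x,\, y,\, t\}} w_d \,\|\nabla_d \mathbf{x}\|_{1}
```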
KEYWORDS: CT reconstruction, Computed tomography, Signal to noise ratio, 3D image processing, Tissues, 3D displays, 3D modeling, 3D image reconstruction, Visualization, Lithium
With an advanced photon counting detector, multi-energy computed tomography (MECT) can classify photons according to preset thresholds and then acquire CT measurements in multiple energy bins. However, the number of photons in one energy bin is limited compared with that in the conventional polychromatic spectrum. Therefore, MECT images can suffer from noise-induced artifacts. To address this issue, in this work we present a MECT reconstruction scheme that incorporates low-rank tensor decomposition with spatial-spectral total variation (LRTD_SSTV) regularization. Additionally, prior information from the whole energy range, i.e., the average of the MECT images, is introduced into the LRTD_SSTV regularization to further improve reconstruction performance. This reconstruction scheme is termed "LRTD_SSTVavi". Experimental results with a digital phantom demonstrate that the presented method produces better MECT images and more accurate basis images compared with the RPCA, TDL, and LRTD_SSTV methods.
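A schematic objective consistent with this description (the exact LRTD_SSTVavi formulation may differ) combines a data-fidelity term, a low-rank tensor term, spatial-spectral TV, and the average-image prior:

```latex
% Schematic form, assumed from the abstract: X is the multi-energy image
% tensor, Y the measurements, A the forward operator, and \bar{X} the
% average image over all energy bins.
\min_{\mathcal{X}} \; \frac{1}{2}\,\|\mathcal{A}(\mathcal{X}) - \mathcal{Y}\|_{2}^{2}
  \;+\; \lambda_{1}\,\operatorname{rank}(\mathcal{X})
  \;+\; \lambda_{2}\,\mathrm{TV}_{\mathrm{ss}}(\mathcal{X})
  \;+\; \lambda_{3}\,\|\mathcal{X} - \bar{X}\|_{2}^{2}
```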
Computed tomography perfusion (CTP) imaging can be used to detect ischemic stroke via high-resolution, quantitative hemodynamic maps. However, due to its repeated scanning protocol, CTP imaging involves a substantial radiation dose, which might increase potential cancer risks. Therefore, reducing the radiation dose in CTP has raised significant research interest. In this work, we present a non-local convolutional neural network (NL-Net) to yield high-quality CTP images and high-precision hemodynamic maps in low-dose cases. Specifically, different from traditional networks in CT imaging, NL-Net takes the non-local information from adjacent frames as one of its inputs. The low-dose CTP images, combined with the non-local information, are then fed into the pre-trained network to produce the desired high-quality CTP images. Clinical patient data are used to demonstrate the performance of NL-Net, and the corresponding results indicate that the presented NL-Net obtains better CTP images and more accurate hemodynamic maps compared with the competing approaches.
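For context, hemodynamic maps such as cerebral blood flow (CBF) are conventionally derived from CTP time curves by deconvolution; a standard truncated-SVD sketch is shown below (NL-Net's own map computation is not described in the abstract, and the curves here are toy data):

```python
# Sketch: truncated-SVD deconvolution of a tissue curve by the arterial
# input function (AIF) to obtain the residue function, whose peak gives CBF.
import numpy as np

def tsvd_cbf(aif, tissue_curve, thresh=0.1):
    n = len(aif)
    # Lower-triangular convolution matrix built from the AIF.
    A = np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > thresh * s.max(), 1.0 / s, 0.0)  # truncate small SVs
    residue = Vt.T @ (s_inv * (U.T @ tissue_curve))       # residue function
    return residue.max()                  # CBF is proportional to the peak

t = np.arange(40)
aif = np.exp(-0.5 * ((t - 10) / 3.0) ** 2)                # toy arterial input
tissue = 0.05 * np.convolve(aif, np.exp(-t / 8.0))[:40]   # toy tissue curve
print(tsvd_cbf(aif, tissue))
```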
KEYWORDS: Computed tomography, Dual energy imaging, Gold, Convolution, Bone, Convolutional neural networks, Signal attenuation, Medical imaging, Surgery, Biological research
Dual energy computed tomography (DECT) usually scans the object twice using different energy spectra and is then able to obtain two material decompositions by directly performing signal decomposition; in general, one is the water-equivalent fraction and the other is the bone-equivalent fraction. Note that material decomposition usually depends on two or more different energy spectra. In this study, we present a deep learning-based framework to obtain basis material images directly from single-energy CT images via cascaded deep convolutional neural networks (CD-ConvNet). We denote this imaging procedure pseudo-DECT imaging. The CD-ConvNet is designed to learn the non-linear mapping from the measured energy-specific CT images to the desired basis material decomposition images. Specifically, the output of each preceding convolutional neural network (ConvNet) in the CD-ConvNet is used as part of the input to the following ConvNet to produce high-quality material decomposition images. Clinical patient data were used to validate and evaluate the performance of the presented CD-ConvNet. Experimental results demonstrate that the presented CD-ConvNet can yield qualitatively and quantitatively accurate results when compared against the gold standard. We conclude that the presented CD-ConvNet can help improve the research utility of CT in quantitative imaging, especially single-energy CT.
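The cascade can be pictured as below: each stage receives the original single-energy image together with the previous stage's material estimates. Stage depths and channel counts are placeholders, not the published architecture:

```python
# Sketch: two-stage cascade for pseudo-DECT material decomposition.
import torch
import torch.nn as nn

def stage(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, out_ch, 3, padding=1))

stage1 = stage(1, 2)                  # CT image -> coarse (water, bone) maps
stage2 = stage(3, 2)                  # CT image + coarse maps -> refined maps

ct = torch.randn(1, 1, 64, 64)
coarse = stage1(ct)
refined = stage2(torch.cat([ct, coarse], dim=1))
print(refined.shape)                  # (1, 2, 64, 64): two basis materials
```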
Computed tomography (CT) is one of the most important medical imaging modalities. CT images can be used to assist in the detection and diagnosis of lesions and to facilitate follow-up treatment. However, CT images are vulnerable to noise; there are two major sources that intrinsically cause CT data noise, i.e., X-ray photon statistics and the electronic noise background. Therefore, it is necessary to perform image quality assessment (IQA) in CT imaging before diagnosis and treatment. Most existing CT IQA methods are based on human observer studies. However, these methods are impractical in the clinic because they are complex and time-consuming. In this paper, we present a blind CT image quality assessment via a deep learning strategy. A database of 1500 CT images was constructed, containing 300 high-quality images and 1200 corresponding noisy images; specifically, the high-quality images were used to simulate the corresponding noisy images at four different dose levels. The images were then scored by experienced radiologists on the following attributes, using a five-point scale: image noise, artifacts, edge and structure, overall image quality, and tumor size and boundary estimation. We trained a network to learn the non-linear mapping from CT images to subjective evaluation scores, and then loaded the pre-trained model to yield a predicted score for each test image. To assess the performance of the deep learning network in IQA, two correlation coefficients are utilized: the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC). The experimental results demonstrate that the presented deep learning-based IQA strategy can be used for CT image quality assessment.
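The two agreement metrics named above are readily computed with SciPy; the scores here are toy values for illustration:

```python
# Sketch: PLCC and SROCC between predicted and radiologist scores.
from scipy.stats import pearsonr, spearmanr

predicted = [3.8, 2.1, 4.5, 1.2, 3.0]
subjective = [4.0, 2.0, 4.8, 1.0, 3.2]

plcc, _ = pearsonr(predicted, subjective)     # linear agreement
srocc, _ = spearmanr(predicted, subjective)   # rank-order agreement
print(f"PLCC={plcc:.3f}, SROCC={srocc:.3f}")
```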