1. INTRODUCTION

This work deals with projection-domain material decomposition of energy-resolved X-ray projections, i.e., decomposing energy-resolved projections onto a basis of material-specific functions (Ref. 1). Since the early work of Alvarez and Macovski (Ref. 2), it is known that the linear attenuation coefficient μ can be modeled as a linear combination of a small number M of energy-dependent basis functions fm:

μ(x, E) = Σm am(x) fm(E),   (1)

where fm can, for example, be the linear attenuation of material m (expressed in cm−1) and am(x) is the corresponding coefficient at position x. By material decomposition, we mean recovering the sinogram coefficients Am (the line integrals of the am) from the measurements mb (or from their log-transforms sb, see Equation 4 below). Several approaches have been proposed. A rigorous and natural way is to invert the forward model (m1, …, mB) = Φ(A1, …, AM) of Equation 2. This has been done with a maximum-likelihood approach in Ref. 3, further regularized in Ref. 4, or with a regularized least-squares approach in Ref. 5. In all cases, the forward model needs to be known, and the quality of the decomposition depends on the accuracy of the model. In particular, the effective spectra need to be calibrated, e.g., with a spectrometer for the source spectrum and monochromatic sources for the detector response. To avoid such a cumbersome procedure, it is possible to calibrate a parametric model of either the direct mapping Φ (Ref. 2) or the inverse mapping (A1, …, AM) = Φ−1(m1, …, mB) (or Φ−1(s1, …, sB)). In Ref. 6, the measured attenuations are related to the coefficients Am via a polynomial model. The polynomial coefficients are learnt from a set of calibration measurements at various combinations of basis material lengths, which cover the range of length combinations present in the imaged object. This procedure only requires a specific calibration phantom with known thicknesses of the basis materials, but it is time-consuming. In Ref. 7, the authors introduce an empirical dual-energy material decomposition method.
It proceeds in three steps: first, a calibration phantom made of the basis materials is scanned. Second, the reconstructed phantom image is segmented and regions of interest (ROIs) of each material are selected. Third, the coefficients of a polynomial approximation of the inverse mapping Φ−1 are estimated so that the reconstruction obtained by applying the polynomial to the measurements fits the segmented ROIs. The procedure is called empirical because the inverse mapping Φ−1 is indirectly estimated to retrieve the Am from the mb measurements without knowledge of the forward model Φ.

The aim of this work is to avoid the calibration scan in the material decomposition. The polynomial coefficients of the inverse mapping are estimated by enforcing data consistency conditions (DCC) on the material-specific sinograms. Consistency conditions have been successfully used in a number of CT artefact-correction problems, e.g., geometric calibration, beam-hardening correction and scatter correction. In this paper, we consider the 2D parallel geometry only and its corresponding set of consistency conditions, known as the Helgason-Ludwig consistency conditions. The proposed method does not require a calibration scan, only the scan of the object of interest. The decomposition of the sinograms is exclusively based on the raw data, plus a tiny a priori knowledge on the object.

2. THEORY

The method minimizes a consistency-based cost function (subject to some constraints), which is described in this section. For simplicity, we focus on a 2D parallel scanning geometry. Projections are acquired over a 180-degree angular range. In a coordinate system (O, x, y), we denote the projection angle θ (due to discretization, θ is assumed to vary in a set of discrete values Θ, whose cardinality is denoted |Θ|) and the corresponding unit vector θ = (cos θ, sin θ).

2.1 Photon counts and attenuations

Assuming a photon-counting detector with B bins, the expected photon counts in bin b are

mb(θ, p) = ∫ Sb(E) exp(−Σm Am(θ, p) fm(E)) dE,   (2)

where Sb is the effective spectrum of bin b (source spectrum times detector response). The photon counts may be corrupted with Poisson noise.
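The noiseless forward model can be made concrete with a few lines of NumPy. The spectra and basis functions below are smooth placeholders, not calibrated quantities; they are chosen only to mimic the qualitative energy dependence of a low-energy and a high-energy bin.

```python
import numpy as np

# Toy forward model (Equation 2): expected counts in bin b for given material
# line integrals A_m. The spectra S_b and attenuation curves f_m are made-up
# placeholders, not the calibrated quantities of the paper.
energies = np.arange(20.0, 121.0)                     # 1 keV grid
spectra = {                                           # effective spectra S_b(E)
    "LE": np.exp(-0.5 * ((energies - 50.0) / 15.0) ** 2),
    "HE": np.exp(-0.5 * ((energies - 70.0) / 20.0) ** 2),
}
f = {                                                 # basis functions f_m(E), 1/cm
    "water": 0.3 * (30.0 / energies) ** 1.5 + 0.02,
    "bone": 0.9 * (30.0 / energies) ** 2.0 + 0.05,
}

def forward(A):
    """m_b = sum_E S_b(E) exp(-sum_m A_m f_m(E)), Riemann sum on the 1 keV grid."""
    line = sum(A[m] * f[m] for m in A)                # sum_m A_m f_m(E)
    return {b: float((S * np.exp(-line)).sum()) for b, S in spectra.items()}

A = {"water": 3.0, "bone": 0.5}                       # path lengths in cm
m, m0 = forward(A), forward({"water": 0.0, "bone": 0.0})
s = {b: np.log(m0[b] / m[b]) for b in m}              # log-transform (Equation 4)
```

With these toy curves the low-energy bin sees larger attenuation, so s["LE"] > s["HE"]; this difference between bins is what the decomposition exploits.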
The measurements become Poisson random variables whose mean is the noiseless value of Equation 2:

mb(θ, p) ∼ Poisson(m̄b(θ, p)),   (3)

with m̄b(θ, p) the noiseless counts. We only use mb(θ, p) in the rest of the paper and explicitly indicate whether data are corrupted with noise. The projections are then log-transformed according to

sb(θ, p) = ln(m0,b / mb(θ, p)),   (4)

where m0,b is the number of photons without object.

2.2 The polynomial model

We look for a simplified model of the inverse mapping (A1, …, AM) = Φ−1(s1, …, sB). We choose a polynomial model. Each material sinogram Am is approximated by a polynomial ψm,D of degree D in the variables (s1, …, sB). Formally,

ψm,D(s1, …, sB) = Σ|k|≤D cm,k s1^k1 ⋯ sB^kB,

where k = (k1, …, kB) is a multi-index, |k| = k1 + ⋯ + kB, and the cm,k are the sought coefficients.

2.3 The consistency metric

In 2D parallel geometry, the sought Am are the Radon transform of the material maps am:

Am(θ, p) = ∫ am(x) δ(p − x · θ) dx.

To account for the spectral nature of the decomposition problem, we combine the material sinograms Am into mono-energetic sinograms Cn, in view of applying the consistency metric to them. We choose N energy levels En in the energy range of the source and form the mono-energetic sinograms

Cn(θ, p) = Σm fm(En) Am(θ, p).

The coefficients fm(En) are known (see Equation 1). A consistency condition of the Radon transform states that the integral of each projection (the order-0 moment) does not depend on the projection angle θ, since each of these integrals equals the integral of the attenuation coefficient over the object. We define the moment Jn(θ) of Cn by

Jn(θ) = ∫ Cn(θ, p) dp.

We use the variance of Jn over Θ to evaluate whether Jn is constant. The consistency function hence reads

ℓ(c) = Σn Varθ(Jn),   (10)

where Varθ(Jn) = (1/|Θ|) Σθ∈Θ (Jn(θ) − J̄n)², J̄n is the mean of Jn over Θ, and c gathers the polynomial coefficients cm,k. The consistency metric evaluates to zero on perfectly consistent material sinograms.

2.4 Minimization

Due to the hardening of the beam in each bin (see, e.g., Ref. 8), the measured attenuations sb do not satisfy the consistency condition. The loss ℓ is minimized with respect to the coefficients c to achieve mono-energetic and material sinograms which are as consistent as possible. Since we use only order-0 consistency conditions, a constant sinogram (i.e., Am(θ, p) = constant for all θ and p) is perfectly consistent.
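The order-0 condition and the variance-based metric are easy to check numerically. The following sketch (illustrative disc geometry, not the paper's phantom) verifies that an analytic Radon sinogram has an angle-independent order-0 moment, while an artificial angle-dependent perturbation is flagged by the variance:

```python
import numpy as np

# Order-0 Helgason-Ludwig condition: J(θ) = ∫ C(θ, p) dp must not depend on θ.
# Demonstrated on the analytic sinogram of a centered disc, whose projections
# are 2*sqrt(r^2 - p^2) at every angle.
p = np.linspace(-1.0, 1.0, 701)
thetas = np.linspace(0.0, np.pi, 180, endpoint=False)
r = 0.8
disc = 2.0 * np.sqrt(np.clip(r**2 - p**2, 0.0, None))   # same profile at every θ
sino = np.tile(disc, (thetas.size, 1))

def consistency(sinogram, dp):
    """Variance over θ of the order-0 moment J(θ) = Σ_p C(θ, p) dp."""
    J = sinogram.sum(axis=1) * dp
    return J.var()

dp = p[1] - p[0]
assert consistency(sino, dp) < 1e-20     # a true Radon sinogram is consistent
# An artificial angle-dependent scaling makes the sinogram inconsistent:
bad = sino * (1.0 + 0.1 * np.cos(thetas))[:, None]
assert consistency(bad, dp) > 1e-3
```

The disc's moment is its area π r² at every angle, so the variance vanishes up to floating point; the perturbed sinogram cannot be the Radon transform of any object.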
To prevent the minimization from outputting such an undesirable solution, we follow the idea of Ref. 8 and constrain the minimization with some known values. First, if there is no attenuation in any bin (sb = 0, ∀b ∈ {1, …, B}), we enforce zero in all material and mono-energetic sinograms by setting the constant coefficients of the polynomials to zero, i.e., ψm,D(0, …, 0) = 0. Second, the values of filtered back-projection (FBP) reconstructions of the decomposed sinograms are constrained at a few reference voxels where the material composition is known.

3. NUMERICAL EXPERIMENTS

3.1 Simulation of data

Numerical experiments used a 2D phantom made of an outer water disc of diameter 32 mm and five bone inserts (with diameters ranging from 2 to 5 mm) placed inside the water disc (see Figure 2). Two tiny inserts of bone and water (1 mm in diameter each) were placed outside the phantom. One voxel in each insert was chosen for the constraints. The material sinograms Am were analytically computed with RTK (Ref. 9). The simulated sinograms had 700 pixels with 0.05 mm spacing and 720 projections over a 180° angular range. Two effective spectra were used. The low-energy (LE) and high-energy (HE) spectra had tube voltages of 80 kV and 120 kV, respectively (Figure 1). Without object, the detector received a total of 1.3 × 10⁶ and 2.9 × 10⁶ photons for the LE and HE spectra, respectively. The photon counts mb were computed by applying Equation 2, then log-transformed according to Equation 4.

Figure 2. Top: Material maps obtained with the DCC-based decomposition. Grayscale: 1 ± 0.3. The red line indicates the profile used in Figure 4. Bottom: Profiles along the red line for the DCC-based and the calibrated water (left) and bone (right) maps. The consistency metric at convergence was ℓ(c) = 0.0208. Evaluated on the calibrated sinograms, the consistency function was 0.0018.

The degree of the sought polynomials was fixed to D = 3 and the consistency metric (Equation 10) was minimized under the constraints defined above with the Sequential Least Squares Programming (SLSQP) algorithm. The total number of estimated polynomial coefficients was 18. The initial guess was always set to zero for all coefficients.
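A toy version of this constrained minimization can be sketched with SciPy's SLSQP. Everything below is illustrative: a single material, two synthetic "bins" with a hardening-like distortion, a degree-2 polynomial (5 monomials, no constant term so that ψ(0, 0) = 0), and one known sinogram sample standing in for the paper's FBP-based voxel constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Toy DCC-constrained minimization (Section 2.4): one material, two bins.
# The phantom (two off-centre discs) and the distorted measurements are
# synthetic; all constants are illustrative only.
thetas = np.linspace(0.0, np.pi, 90, endpoint=False)
p = np.linspace(-1.0, 1.0, 401)

def disc_sino(x0, y0, r):
    """Radon transform of a uniform disc of radius r centred at (x0, y0)."""
    p0 = x0 * np.cos(thetas)[:, None] + y0 * np.sin(thetas)[:, None]
    return 2.0 * np.sqrt(np.clip(r**2 - (p[None, :] - p0) ** 2, 0.0, None))

A_true = disc_sino(0.3, 0.0, 0.3) + disc_sino(-0.2, 0.25, 0.25)
s1 = A_true + 0.15 * A_true**2          # bin 1, beam-hardening-like distortion
s2 = 0.7 * A_true + 0.05 * A_true**2    # bin 2

# Monomials of degree 1..2 in (s1, s2); no constant term, so psi(0, 0) = 0.
Phi = np.stack([s1, s2, s1**2, s1 * s2, s2**2], axis=-1)

def loss(c):
    """Variance over θ of the order-0 moment of the candidate sinogram."""
    J = (Phi @ c).sum(axis=1) * (p[1] - p[0])
    return J.var()

# One reference constraint: the peak ray of the first projection is known.
i0 = int(np.argmax(A_true[0]))
cons = [{"type": "eq", "fun": lambda c: Phi[0, i0] @ c - A_true[0, i0]}]
res = minimize(loss, np.zeros(5), method="SLSQP", constraints=cons)
A_rec = Phi @ res.x                     # decomposed sinogram
```

With these synthetic bins an exact degree-1 solution exists (c = (−10/11, 30/11, 0, 0, 0)), and the equality constraint rules out the trivial all-zero minimizer that the order-0 DCC alone would accept.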
3.2 Evaluation methods

Our method was compared to a calibration from a set of dedicated measurements with the same LE and HE spectra over a set of water and bone lengths. The set covered all combinations present in the phantom. More precisely, 100 equi-spaced lengths of each material were measured: water lengths ranged from 0 to 32.6 mm and bone lengths from 0 to 10.97 mm. All these combinations were irradiated with the same LE and HE spectra as the phantom. Then the polynomials ψm,D were fitted to these calibration measurements, providing the reference calibrated decomposition. We compared the poly-energetic reconstructions from raw data with mono-energetic images computed from our DCC-based material maps and from the reference calibrated material maps.

4. RESULTS

4.1 Noise-less data

Results of the decomposition are shown in Figure 2. Water and bone are adequately separated. The profiles in Figure 2 (bottom row) indicate residual cross-talk between the two material maps. In the center of the water phantom, the low-contrast feature is visible. Since the consistency is enforced on the mono-energetic images, Figure 3 compares poly-energetic, DCC-based and calibration-based mono-energetic images. Poly-energetic images clearly suffer from severe beam hardening, which is almost completely corrected in both mono-energetic images. The profiles presented in Figure 4 reveal that the DCC-based and calibrated 40 keV images can hardly be distinguished. A slight discrepancy between the 80 keV images subsists though, especially in the vicinity of the border of the phantom.

4.2 Robustness to noise

The photon-count measurements mb were corrupted with Poisson noise according to Equation 3. The reference photon flux is given by the spectra in Figure 1, i.e., 1.3 × 10⁶ and 2.9 × 10⁶ photons for the LE and HE spectra, respectively. The noise level was set by reducing the total number of emitted photons by factors of 1, 10 and 100. The influence of noise on the material maps is presented in Figure 5. The quality of the images is significantly degraded.
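The scaling of this degradation follows directly from the Poisson model: for large counts, the variance of the log-attenuation is roughly the inverse of the mean count, so downscaling the flux by 10 and 100 raises the noise variance by about the same factors. A short sketch (illustrative flux and attenuation values, not the paper's simulation):

```python
import numpy as np

# Poisson noise on the counts (Equation 3) and its effect on the log-attenuation
# (Equation 4): Var(s_b) ≈ 1/m_b for large counts, so dividing the flux by k
# multiplies the variance of s_b by roughly k. Values are illustrative.
rng = np.random.default_rng(0)
m0, attenuation = 1.3e6, 2.0             # photons without object, line attenuation
variances = []
for k in (1, 10, 100):
    mean_counts = (m0 / k) * np.exp(-attenuation)   # expected counts after downscaling
    counts = rng.poisson(mean_counts, size=200_000)
    s = np.log((m0 / k) / counts)                   # noisy log-attenuation samples
    variances.append(s.var())                       # ≈ 1 / mean_counts
```

Each tenfold flux reduction multiplies the empirical variance of s by roughly ten, matching the visual degradation between the noise levels of Figure 5.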
The consistency function value at convergence increases with the level of noise: 0.2108 at the reference noise level (factor 1), 1.4738 at factor 10 and 12.815 at factor 100. We expect the choice of the reference voxel to play a critical role in the presence of heavy noise.

Figure 5. Mono-energetic images at 40 keV (top) and 80 keV (bottom), for Poisson noise with photon counts downscaled by a factor of 1, 10 and 100 (from left to right). Grayscale for all images: 0 ± 250 HU. The consistency function at convergence was ℓ(c) = 0.2108, 1.4738 and 12.815 for factors 1, 10 and 100, respectively.

5. DISCUSSION AND CONCLUSION

We have demonstrated a consistency-based material decomposition which does not require any calibration procedure. The raw data and a tiny a priori knowledge on the object are sufficient to produce material-specific sinograms. This a priori knowledge can be provided in practice by placing inserts in the field-of-view of the scanner. The resulting sinograms are free from beam hardening. The method achieves, on simulated data, results that are close to those obtained with a standard calibration-based decomposition method. The choice of the reference voxels may play a critical role and should be further investigated. By choosing a voxel in the reconstruction as a reference, we indirectly incorporate in the constraint all the projection values from lines passing through the voxel (with filtered back-projection, all lines contribute, but the ramp filter decays rapidly, so mainly the lines through the voxel matter). If those lines "see" a wider range of length combinations, we expect the decomposition to improve. Finally, we used a parallel geometry for its simplicity, but order-0 DCC are also available for divergent-beam 3D data. In Ref. 8, such DCC are used to correct beam hardening in a circular acquisition.
We expect that the decomposition method presented in this work generalizes to multi-energy divergent projections.

REFERENCES

1. Rit, S., Mory, C., and Noël, P., "Image Formation in Spectral Computed Tomography," in Spectral, Photon Counting Computed Tomography, 355–372, CRC Press (2020). https://doi.org/10.1201/9780429486111
2. Alvarez, R. E. and Macovski, A., "Energy-selective reconstructions in X-ray computerized tomography," Physics in Medicine and Biology 21(5), 733–744 (1976). https://doi.org/10.1088/0031-9155/21/5/002
3. Roessl, E. and Proksa, R., "K-edge imaging in x-ray computed tomography using multi-bin photon counting detectors," Physics in Medicine and Biology (2007). https://doi.org/10.1088/0031-9155/52/15/020
4. Brendel, B., Bergner, F., Brown, K., and Koehler, T., "Penalized likelihood decomposition for dual layer spectral CT," in Proc. 4th Intl. Mtg. on Image Formation in X-ray CT, 41–44 (2016).
5. Ducros, N., Abascal, J. F. P.-J., Sixou, B., Rit, S., and Peyrin, F., "Regularization of nonlinear decomposition of spectral x-ray projection images," Medical Physics 44, e174–e187 (2017). https://doi.org/10.1002/mp.12283
6. Alvarez, R. E., "Estimator for photon counting energy selective x-ray imaging with multibin pulse height analysis," Medical Physics (2011). https://doi.org/10.1118/1.3570658
7. Stenner, P., Berkus, T., and Kachelriess, M., "Empirical dual energy calibration (EDEC) for cone-beam computed tomography," Medical Physics 34, 3630–3641 (2007). https://doi.org/10.1118/1.2769104
8. Würfl, T., Maaß, N., Dennerlein, F., Huang, X., and Maier, A. K., "Epipolar Consistency Guided Beam Hardening Reduction-ECC 2," in 14th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, 181–185 (2017).
9. Rit, S., Oliva, M. V., Brousmiche, S., Labarbe, R., Sarrut, D., and Sharp, G. C., "The Reconstruction Toolkit (RTK), an open-source cone-beam CT reconstruction toolkit based on the Insight Toolkit (ITK)," Journal of Physics: Conference Series, 012079 (2014).