System calibration method for Fourier ptychographic microscopy
An Pan, Yan Zhang, Tianyu Zhao, Zhaojun Wang, Dan Dan, Ming Lei, Baoli Yao
Abstract
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic errors come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore difficult to identify the dominant error from these degraded reconstructions without any preknowledge. In addition, the systematic error is generally a mixture of several error sources in real situations, and these sources cannot be separated because of their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, that calibrates the mixed systematic errors simultaneously from an overall perspective. It is based on the simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, and it involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any preknowledge, which makes FPM more pragmatic.

1.

Introduction

Fourier ptychographic microscopy (FPM)1–4 is a recently proposed computational imaging technique. By recording multiple low-resolution (LR) intensity images of the sample under angle-varied illumination and iteratively stitching these LR intensity images together in Fourier space, FPM recovers a high-resolution (HR) complex amplitude image of the sample over a large field of view (FOV), thereby overcoming the physical space-bandwidth-product limit of a low numerical aperture (NA) imaging system. The final reconstruction resolution is determined by the sum of the objective and illumination NAs.5 Owing to its flexible setup, good performance, and the rich redundancy of the acquired data, FPM has been widely applied to three-dimensional imaging,6,7 fluorescence imaging,8,9 multiplexing imaging,10–12 etc.

In current FPM imaging platforms, systematic error sources mainly come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely degrade the reconstruction results with similar artifacts despite their different generation mechanisms.13–17 It is therefore hard to determine which kind of error is to blame for the image degradation, and hence which mechanism-targeted algorithm13–16 should be applied. In this case, the obvious way to improve recovery quality is to try different algorithms successively. However, because these algorithms each target a specific error mechanism, they are of little use for calibrating mixed systematic errors.

Fortunately, these algorithms each solve part of the problem and share the same root, the alternating projection (AP) method,18,19 which offers great flexibility to be adapted to more complicated mathematical models for many advanced applications. Therefore, we set up a comprehensive mathematical model to account for all the error mechanisms and propose a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective. Four modules, numbered from 1 to 4, are involved in our procedure to address parameter imperfections, LED intensity fluctuation, and noise; they are based on the simulated annealing (SA) algorithm,14 the nonlinear regression process,14 the LED intensity correction method,15 and the adaptive step-size strategy,16 respectively. First, a number of initial iterations (around 10) for bright-field (BF) images with low illumination NAs are implemented: modules 1 and 2 correct the parameters of these low-frequency apertures, and module 3 calibrates their intensity measurements, so that the BF images suffer less from systematic noise and more precise initial parameters are obtained. After the correction of the BF images, all the captured raw images are empirically iterated only once by module 3 for intensity updating. Finally, the updated images are iterated several times by modules 1 and 2 to optimize the global parameters, together with module 4 to resist the fluctuation of the final reconstructions caused by noise. Note that the parameter re-estimation in module 2 is performed from a global perspective to enhance the iterative accuracy, and the updating of the coherent transfer function (CTF) is included in module 3 to offset aberrations,13 which differs from the original LED intensity correction.15 As validated in simulations and experiments, the proposed system calibration procedure improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, making FPM more practical. Owing to the universality and flexibility of the AP method, our work can be further combined with many excellent algorithms, such as motion deblurring20 and multiplexing imaging,10–12 to produce a better-performing FP approach. Once the system errors have been calibrated, prior knowledge can be added into the models, and noise-suppression methods21,22 may further improve the results. Source code is released in Ref. 23 for noncommercial use.

2.

Method

2.1.

Model of FPM

The experimental configuration and data acquisition process of FPM can be found in the literature1–4 and will not be detailed here. Numerically, for each element $\mathrm{LED}_{m,n}$ (row $m$ and column $n$) with illumination wave vector $(u_{m,n}, v_{m,n})$, the imaging sensor captures an LR intensity image, which is given by

Eq. (1)

$$I^{c}_{m,n}(x,y)=\left|\mathcal{F}^{-1}\{O(u-u_{m,n},\,v-v_{m,n})\cdot P(u,v)\}\right|^{2},$$
where $\mathcal{F}^{-1}$ is the inverse Fourier transform operator, $O$ is the Fourier spectrum of the sample's transmission function $o$, $P(u,v)$ is the CTF, which acts as the low-pass filter of the imaging system, and $(u,v)$ are the two-dimensional spatial frequency coordinates in the Fourier plane with respect to $(x,y)$. The incident wave vector $(u_{m,n},v_{m,n})$ can be expressed as

Eq. (2)

$$u_{m,n}=\frac{2\pi}{\lambda}\,\frac{x_0-x_{m,n}}{\sqrt{(x_0-x_{m,n})^{2}+(y_0-y_{m,n})^{2}+h^{2}}},\qquad v_{m,n}=\frac{2\pi}{\lambda}\,\frac{y_0-y_{m,n}}{\sqrt{(x_0-x_{m,n})^{2}+(y_0-y_{m,n})^{2}+h^{2}}},$$
where $(x_0,y_0)$ is the central position of each small segment, $x_{m,n}$ and $y_{m,n}$ denote the position of the LED element in row $m$ and column $n$, $\lambda$ is the illumination wavelength, and $h$ is the distance between the LED array and the sample. The corresponding spectrum region of the sample estimate is then updated as follows, termed the PIE-based algorithm10,14 (see details in Appendix A):

Eq. (3)

$$O_{i+1}(u-u_{m,n},v-v_{m,n})=O_{i}(u-u_{m,n},v-v_{m,n})+\alpha\,\frac{|P_{i}(u,v)|\,P_{i}^{*}(u,v)}{|P_{i}(u,v)|_{\max}\left[|P_{i}(u,v)|^{2}+\delta_{1}\right]}\,\Delta\varphi_{i,m,n},$$

Eq. (4)

$$P_{i+1}(u,v)=P_{i}(u,v)+\beta\,\frac{|O_{i}(u-u_{m,n},v-v_{m,n})|\,O_{i}^{*}(u-u_{m,n},v-v_{m,n})}{|O_{i}(u-u_{m,n},v-v_{m,n})|_{\max}\left[|O_{i}(u-u_{m,n},v-v_{m,n})|^{2}+\delta_{2}\right]}\,\Delta\varphi_{i,m,n},$$
where $\alpha$ and $\beta$ are the step sizes of the update, and usually $\alpha=\beta=1$ is employed.18,19 $\delta_1$ and $\delta_2$ are regularization constants that prevent the denominators from being zero, $i$ is the iteration index, and $\Delta\varphi_{i,m,n}$ is the auxiliary function of the updating process.10 We set $\delta_1=1$ and $\delta_2=1000$ in our procedure for the best robustness and convergence efficiency (see details in Appendix A).
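For concreteness, the following is a minimal numpy sketch of the forward model in Eq. (1) and the PIE-based updates in Eqs. (3) and (4). The array layout (a centered HR spectrum sliced at the sub-aperture center), the FFT-shift convention, and the function names are our own illustrative assumptions, not the released implementation of Ref. 23.

```python
import numpy as np

def forward_lowres(O, P, ku, kv):
    """Eq. (1): simulate one LR measurement. O is the centered HR spectrum,
    P is the CTF on the small (h, w) grid, and (ku, kv) is the pixel index of
    the sub-aperture center set by the illumination wave vector."""
    h, w = P.shape
    sub = O[ku - h // 2:ku + h // 2, kv - w // 2:kv + w // 2]   # O(u - u_mn, v - v_mn)
    psi = np.fft.ifft2(np.fft.ifftshift(sub * P))               # complex LR field
    return sub, psi

def pie_update(O, P, ku, kv, I_meas, alpha=1.0, beta=1.0, d1=1.0, d2=1000.0):
    """Eqs. (3) and (4): replace the LR modulus by the measurement, then update
    the spectrum patch and the CTF with the PIE-weighted difference."""
    h, w = P.shape
    sub, psi = forward_lowres(O, P, ku, kv)
    psi_new = np.sqrt(I_meas) * np.exp(1j * np.angle(psi))      # enforce measured modulus
    dphi = np.fft.fftshift(np.fft.fft2(psi_new - psi))          # auxiliary term in Fourier space
    O_step = alpha * np.abs(P) * np.conj(P) / (np.abs(P).max() * (np.abs(P) ** 2 + d1))
    P_step = beta * np.abs(sub) * np.conj(sub) / (np.abs(sub).max() * (np.abs(sub) ** 2 + d2))
    O[ku - h // 2:ku + h // 2, kv - w // 2:kv + w // 2] = sub + O_step * dphi
    P += P_step * dphi
    return O, P
```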

The whole iterative process is repeated until the solution converges, which is judged by evaluating an error metric at each iteration:

Eq. (5)

$$E_{i}=\frac{\sum_{x,y,m,n}\left[\left|\phi^{e}_{i,m,n}(x,y)\right|^{2}-I^{c}_{m,n}(x,y)\right]^{2}}{\sum_{x,y,m,n}I^{c}_{m,n}(x,y)}.$$
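A corresponding sketch of the error metric in Eq. (5), assuming the estimated LR fields and the captured images are stored in dictionaries keyed by the LED index (m, n); the data structure is purely illustrative.

```python
import numpy as np

def convergence_metric(estimated_fields, captured_images):
    """Eq. (5): squared mismatch between the estimated LR intensities |phi_e|^2
    and the captured images, normalized by the total captured energy.
    Both arguments are dicts keyed by the LED index (m, n)."""
    num = sum(np.sum((np.abs(phi) ** 2 - captured_images[mn]) ** 2)
              for mn, phi in estimated_fields.items())
    den = sum(np.sum(I) for I in captured_images.values())
    return num / den
```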

In an ideal FPM setup, the systematic parameters are accurate, but in real situations they are misaligned in a variety of forms. The model of parameter imperfections with rotation factor $\theta$, shift factors $\Delta x$, $\Delta y$ of the central LED along the x- and y-axes, and height factor $h$ has been presented in Fig. 2 of Ref. 14. Additional global or partial variables, such as the pitch angle or the distance between adjacent LED elements,21 could certainly be added to this model, but they would increase the computational burden. In fact, considering the good performance of PC-FPM,14 these four global variables are sufficient to establish the parameter imperfections model. The position of each LED element can be expressed as14

Eq. (6)

$$x_{m,n}=d_{\mathrm{LED}}\left[\cos(\theta)\,m+\sin(\theta)\,n\right]+\Delta x,\qquad y_{m,n}=d_{\mathrm{LED}}\left[-\sin(\theta)\,m+\cos(\theta)\,n\right]+\Delta y,$$
where $d_{\mathrm{LED}}$ denotes the distance between adjacent LED elements. In this paper, we set $d_{\mathrm{LED}}=4\ \mathrm{mm}$ in both simulations and experiments.
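The two geometric relations, Eqs. (2) and (6), can be coded directly. The sketch below assumes lengths in millimeters, the 632 nm wavelength used in the simulations, and one particular sign convention for the rotation; these choices are illustrative.

```python
import numpy as np

def led_position(m, n, theta_deg, dx, dy, d_led=4.0):
    """Eq. (6): rotated and shifted position (mm) of LED (m, n)."""
    t = np.deg2rad(theta_deg)
    x = d_led * (np.cos(t) * m + np.sin(t) * n) + dx
    y = d_led * (-np.sin(t) * m + np.cos(t) * n) + dy
    return x, y

def wave_vector(x0, y0, m, n, theta_deg, dx, dy, h=86.0, wavelength=632e-6):
    """Eq. (2): incident wave vector (u_mn, v_mn) for the segment centered at
    (x0, y0); all lengths in mm, so 632 nm is written as 632e-6 mm."""
    x_mn, y_mn = led_position(m, n, theta_deg, dx, dy)
    r = np.sqrt((x0 - x_mn) ** 2 + (y0 - y_mn) ** 2 + h ** 2)
    u = 2 * np.pi / wavelength * (x0 - x_mn) / r
    v = 2 * np.pi / wavelength * (y0 - y_mn) / r
    return u, v
```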

2.2.

Conflict Between Aberration Estimation and LED Intensity Correction

The SA algorithm,14 nonlinear regression process,14 and adaptive step-size strategy16 can easily be combined with a few modifications in step size and update order. However, there is a strong conflict between aberration estimation and LED intensity correction, as both interact with the CTF updating. More specifically, if the sample spectrum and the CTF are updated simultaneously during the LED intensity correction, a satisfactory recovery quality is unlikely to be obtained because of the mutual conversion between aberration error and LED intensity error, which also degrades the convergence of the iterative algorithms.

If LED intensity fluctuation exists, then Eq. (1) needs to be modified accordingly to

Eq. (7)

$$I^{u}_{m,n}(x,y)=c_{m,n}\cdot\left|\mathcal{F}^{-1}\left[O(u-u_{m,n},\,v-v_{m,n})\cdot P(u,v)\right]\right|^{2},$$
where $c_{m,n}$ is defined as15

Eq. (8)

$$c_{m,n}=\frac{\sum_{x,y}\left|\phi^{e}_{m,n}(x,y)\right|^{2}}{\sum_{x,y}I^{c}_{m,n}(x,y)}.$$

Then, the captured intensity images are updated by

Eq. (9)

$$I^{u}_{m,n}(x,y)=c_{m,n}\cdot I^{c}_{m,n}(x,y).$$
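A minimal sketch of this original correction, Eqs. (8) and (9), again assuming dictionaries keyed by the LED index (m, n); the names are illustrative.

```python
import numpy as np

def intensity_correction(estimated_fields, captured_images):
    """Eqs. (8) and (9): per-LED scale factor c_mn from the ratio of estimated
    to measured energy, then rescale each raw image (dicts keyed by (m, n))."""
    updated = {}
    for mn, I_c in captured_images.items():
        c_mn = np.sum(np.abs(estimated_fields[mn]) ** 2) / np.sum(I_c)
        updated[mn] = c_mn * I_c
    return updated
```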

An inverted light microscope, illuminated by a 15×15 LED matrix with a wavelength of 632 nm and equipped with a 4×/0.1 NA objective and an image sensor with a pixel size of 6.5 μm, is modeled in our simulations. Figure 1 illustrates the conflict between aberration estimation and LED intensity correction. Figures 1(a) and 1(b) show the HR input intensity and phase profiles within a small segment of 128×128 pixels, which serve as the ground truth of the complex sample. The distance between the sample and the LED array is h = 86 mm. The intensity and phase reconstruction accuracy is evaluated by the root-mean-square error (RMSE). For illustration purposes only, 200% intensity fluctuation is artificially introduced by multiplying each raw image by a random constant ranging from zero to two. In general, different types of aberrations can be quantified as different Zernike modes at the pupil plane.13 Here, we introduce $Z_3^1$ (coma) as an example, shown in Fig. 1(d). Figures 1(a1)–1(c1) show the images reconstructed by the original FPM algorithm at six iterations with only Eq. (3); they are blurred by the introduced intensity fluctuation, and the spectral artifacts can be clearly observed in Fig. 1(c1). The effectiveness of the LED intensity correction method can be observed in Figs. 1(a2)–1(c2), but the CTF updating process is not available in this method. Figures 1(a3)–1(d3) and 1(a4)–1(d4) show the results recovered by PIE-based intensity correction, which introduces the CTF updating into the intensity correction method, at the first iteration and at 30 iterations, respectively. Unexpectedly, the reconstructions are less satisfactory, showing a strong conflict between LED intensity correction and CTF updating (aberration correction), which may be attributed to the mutual transformation of different errors. In addition, the quality degrades as the iterations increase, and the best results appear at the first iteration. Note that even with the original LED intensity correction method, the recovery results are extremely unstable with sharp oscillations, as shown by the red line in Figs. 1(e1) and 1(e2), because Eq. (9) is not applied from an overall perspective: the intensity of each raw image is updated by a different coefficient. It is worth mentioning that if each raw image is multiplied by the same constant, that constant can be ignored in the FPM model and has no effect on the final reconstruction. To address this issue, the updating operation needs to be performed against a unified reference intensity. To reduce the conflict between aberration correction and intensity correction, the PIE-based intensity correction is employed only once. The modified solution is as follows.

Fig. 1

Conflict between aberration estimation and LED intensity correction. Groups (a–d) show the recovery results of intensity, phase, spectrum, and aberrations, respectively, with different algorithms. (e1) and (e2) present the intensity and phase reconstruction accuracy versus iteration times for different algorithms.


First, calculate the ratio $c_{m,n}$ after the first iteration and update the raw images for the second iteration. After that, without the intensity correction process, calibrate the aberration only through Eqs. (3) and (4) for the remaining iterations. Here, the central LED is set as the reference, which is assumed to be free of intensity fluctuation. Then, Eq. (8) needs to be modified to

Eq. (10)

$$c_{m,n}=\frac{\sum_{x,y}\left|\phi^{e}_{m,n}(x,y)\right|^{2}}{c_{0,0}\cdot\sum_{x,y}I^{c}_{m,n}(x,y)}\quad(m,n\neq 0),$$
where $c_{0,0}=\sum_{x,y}\left|\phi^{e}_{0,0}(x,y)\right|^{2}/\sum_{x,y}I^{c}_{0,0}(x,y)$, and the update operation is

Eq. (11)

$$I^{u}_{m,n}(x,y)=\begin{cases}c_{m,n}\cdot I^{c}_{m,n}(x,y) & (m,n\neq 0)\\ I^{c}_{0,0}(x,y) & (m=n=0).\end{cases}$$
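The corresponding sketch of the modified correction, Eqs. (10) and (11), normalizes every ratio by the center-LED factor so that all images are rescaled against the same reference (data structures as in the earlier sketches, purely illustrative):

```python
import numpy as np

def modified_intensity_correction(estimated_fields, captured_images):
    """Eqs. (10) and (11): normalize every ratio by the center-LED factor c_00
    so that all images are rescaled against the same reference; the center
    image itself is left untouched."""
    c00 = (np.sum(np.abs(estimated_fields[(0, 0)]) ** 2)
           / np.sum(captured_images[(0, 0)]))
    updated = {}
    for mn, I_c in captured_images.items():
        if mn == (0, 0):
            updated[mn] = I_c
        else:
            c_mn = np.sum(np.abs(estimated_fields[mn]) ** 2) / (c00 * np.sum(I_c))
            updated[mn] = c_mn * I_c
    return updated
```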

The reconstructions obtained by our modified LED intensity correction method at nine iterations are shown in Figs. 1(a5)–1(d5); the method recovers the aberration while correcting the intensity. In addition, the modified solution features better stability and stronger robustness than the original intensity correction method, as demonstrated by the pink line in Figs. 1(e1) and 1(e2).

Fig. 2

Flow chart of SC-FPM method.


2.3.

System Calibration Algorithm Framework

Figure 2 shows a flow chart of the SC-FPM procedure. First, an initial guess of the sample spectrum $O_0(u,v)$ and the CTF $P_0(u,v)$ is provided to start the algorithm. Second, we define the LED updating range $S_i$ for each iteration. Normally, all 225 LR images are iterated alternately to update the sample spectrum and the CTF. However, the low-frequency components dominate the iterative reconstruction, so the processing order of the captured images $I^c_{m,n}(x,y)$ matters. Consequently, at the beginning, a number of initial iterations for BF images with low illumination NAs are implemented: modules 1 and 2 correct the parameters of these low-frequency apertures, and module 3 calibrates their intensity measurements, so that the BF images suffer less from systematic noise and more precise initial parameters are obtained. For SC-FPM, in the first ten iterations ($i=1,\ldots,10$), the process repeats over the 5×5 BF images with the LED updating range $S_i=\{(m,n)\,|\,m=-2,\ldots,2,\ n=-2,\ldots,2\}$ to obtain initial values of the four global factors $(\theta,\Delta x,\Delta y,h)$. Ten initial iterations are employed in this work empirically, which enables an accurate correction of the BF apertures' positions even under extreme conditions. Note that $O_i(u,v)$ and $P_i(u,v)$ need to be reinitialized at the end of each iteration, as the correction of the low-frequency apertures would greatly distort the object's profile. Owing to the good performance of our modified intensity correction method, only the first iteration is used to calculate the ratio $c_{m,n}$ and update the captured raw images $I^c_{m,n}(x,y)$ for the next iteration. For convenience, two iterations constitute a group, and each even iteration is used for intensity initialization before the next update. After the 10 initial iterations (five groups) for the BF images, all the intensity measurements are iterated several times by module 4, namely the adaptive step-size strategy, to optimize the global parameters and resist the fluctuation of the final reconstructions caused by noise. During these iterations, module 2 is employed only at the 12th iteration, without initialization, to minimize the conflict between aberration estimation and LED intensity correction. Therefore, in SC-FPM, the LED updating range $S_i$ for each iteration is defined as

Eq. (12)

$$S_i=\begin{cases}\{(m,n)\,|\,m=-2,\ldots,2,\ n=-2,\ldots,2\} & i\le 10\\ \{(m,n)\,|\,m=-7,\ldots,7,\ n=-7,\ldots,7\} & \text{else}.\end{cases}$$
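A short sketch of this range selection (the 15×15 array indexed from −7 to 7, as assumed throughout):

```python
def led_update_range(i):
    """Eq. (12): the 5x5 bright-field LEDs for the first 10 iterations,
    the full 15x15 grid afterwards."""
    half = 2 if i <= 10 else 7
    return [(m, n) for m in range(-half, half + 1) for n in range(-half, half + 1)]
```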

The variable $\pm\Delta_{i,u,v}$ in module 1, defined as the searching step length of the SA algorithm, begins at a predefined value and gradually decreases to a small (or zero) value over a set number of iterations. We choose $\Delta_{1,u,v}=8$ in our procedure because of the extreme systematic errors introduced. The step length is then halved to compress the frequency searching range at each odd iteration within the first 10 initial iterations, but it is not allowed to fall below 2 before all the captured images have been iterated. The step-length update is expressed as follows:

Eq. (13)

$$\Delta_{i+1,u,v}=\begin{cases}\Delta_{i,u,v} & i=1,3\\ \Delta_{i,u,v}/2 & i=2,4\\ 2 & 6\le i\le 10\\ 1 & \text{else}.\end{cases}$$
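The following sketch paraphrases this schedule in code, following the description in the text (start at 8, halve after each bright-field group, never below 2 during the first 10 iterations, and 1 afterwards); the exact branch boundaries of Eq. (13) as printed may differ slightly.

```python
def next_step_length(delta_i, i, n_initial=10):
    """SA search-step schedule described in the text: Delta_1 = 8 is kept during
    each odd (updating) iteration, halved after each even (initialization)
    iteration, never allowed below 2 before all images are iterated, and set
    to 1 once the full LED grid is in use."""
    if i > n_initial:
        return 1
    if i % 2 == 1:                 # odd iteration: keep the current step
        return delta_i
    return max(delta_i / 2, 2)     # even iteration: halve, clamped at 2
```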

3.

Simulations

Before being employed in experiments, the effectiveness of SC-FPM is validated by several groups of simulations, with the same system parameters as in Sec. 2. The systematic errors, namely aberration, LED intensity fluctuation, parameter imperfections, and noise, are deliberately exaggerated in our simulations to better demonstrate the robustness and superiority of SC-FPM over existing algorithms. Positional misalignment is introduced through the four positional factors with random values. Here, for illustration, we set $\theta=5\ \mathrm{deg}$, $\Delta x=1\ \mathrm{mm}$, $\Delta y=1\ \mathrm{mm}$, $h=87\ \mathrm{mm}$ as the real situation, and $\theta=0\ \mathrm{deg}$, $\Delta x=0\ \mathrm{mm}$, $\Delta y=0\ \mathrm{mm}$, $h=86\ \mathrm{mm}$ as the ideal condition. Noise is artificially introduced by corrupting each LR image with 40% Gaussian noise with different variances. The noise level is quantified by the average mean absolute error (MAE),16 defined as $\mathrm{AMAE}=\overline{|I_n-I|}/\bar{I}$, where $\bar{I}$ is the mean value of all noise-free dark-field (DF) intensity images and $\overline{|I_n-I|}$ is the averaged MAE of the corresponding noisy images.
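A sketch of how such corruption and its quantification might be set up; the choice of a per-image standard deviation is one illustrative reading of "40% Gaussian noise with different variances," not necessarily the exact recipe used by the authors.

```python
import numpy as np

def add_gaussian_noise(images, level=0.4, rng=None):
    """Corrupt each LR image with zero-mean Gaussian noise whose standard
    deviation is `level` times that image's mean intensity."""
    rng = np.random.default_rng() if rng is None else rng
    return {mn: I + rng.normal(0.0, level * I.mean(), I.shape)
            for mn, I in images.items()}

def average_mae(noisy_df, clean_df):
    """AMAE of Ref. 16: averaged mean absolute error of the noisy dark-field
    images, normalized by the mean of the noise-free dark-field intensities."""
    mae = np.mean([np.mean(np.abs(noisy_df[mn] - clean_df[mn])) for mn in clean_df])
    return mae / np.mean([I.mean() for I in clean_df.values()])
```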

Figure 3 shows the performance of the original PIE-based algorithm with different systematic error sources, each with six iterations. Figures 3(a1)–3(c1) show the recovery results with aberrations only. Reasonably, the original PIE-based algorithm is able to compensate for the aberrations by simultaneously updating the sample spectrum and the CTF with Eqs. (3) and (4). As a result, the reconstructions are free of artifacts and close to the ground truth shown in Figs. 3(a) and 3(b). However, the other three errors each exhibit particular features under extreme conditions. Figures 3(a2)–3(c2) present the recovered images with 200% LED intensity fluctuation. Compared with Fig. 3(c1), the recovered sample spectrum is severely blurred and filled with patches, which are the signature of artifacts caused by LED intensity fluctuation. The reconstructions with parameter imperfections present obvious wrinkles, as shown in Figs. 3(a3) and 3(b3), and the upper-left part of the central bright spot in the spectrum is somewhat distorted, as shown in Fig. 3(c3), exhibiting characteristics different from those in Figs. 3(a2)–3(c2). Figures 3(a4)–3(c4) present the recovery results with 40% Gaussian noise, and Figs. 3(a5)–3(c5) show the reconstructions with the four errors mixed. Although the different errors are fused together in both the spatial and frequency domains, their respective features are retained in the final results.

Fig. 3

The performance of the original PIE-based algorithm with different systematic error sources. (a) HR input intensity and (b) phase profiles serve as the ground truth of the simulated complex sample. Groups (a–c) show the recovery results of intensity, phase, and spectrum, respectively, with the original PIE-based algorithm.


Figure 4 shows the recovery results obtained by different algorithms with mixed systematic errors. Groups (a), (b), and (c) show the recovered intensity, phase, and spectrum, respectively. The images in Figs. 4(a1)–4(c1), 4(a2)–4(c2), and 4(a3)–4(c3) are reconstructed by the original PIE-based algorithm, the LED intensity correction method, and adaptive FPM, respectively; little improvement in recovery quality is obtained from these error-specific algorithms. Figures 4(a4)–4(c4) present the recovery results of PC-FPM for parameter correction; however, it also fails to retrieve the complex sample, and the spectrum rotates clockwise, as shown in Fig. 4(c4). In fact, despite the similarities between SC-FPM and PC-FPM in the SA algorithm and nonlinear regression parts, SC-FPM still exhibits stronger robustness, as indicated in Figs. 4(a5)–4(c5), demonstrating its effectiveness under such extreme conditions.

Fig. 4

Recovery results by different algorithms with mixed systematic errors. Groups (a), (b), and (c) show the recovery results of intensity, phase, and spectrum, respectively.


Figure 5 shows the detailed results of each iteration of SC-FPM. As indicated in Fig. 5(a), the recovery results fluctuate within the first 10 iterations but stabilize after 15 iterations. The four positional factors, rotation factor $\theta$, shift factors $(\Delta x,\Delta y)$, and height factor $h$, converge to 5 deg, (1 mm, 1 mm), and 87 mm, respectively, as demonstrated in Figs. 5(b)–5(d). Figure 5(e) presents the central position of each aperture corresponding to the different illuminations in the frequency domain, where the ideal, real, and corrected positions are denoted by red triangles, green dots, and blue diamonds, respectively. Finally, the corrected parameters accurately converge to $\theta=4.97$ deg, $\Delta x=0.922$ mm, $\Delta y=0.974$ mm, $h=86.799$ mm, close to the real parameters introduced above, validating the strong robustness and adaptability of SC-FPM in real situations.

Fig. 5

The results of each iteration of SC-FPM in detail. (a) The RMSE of intensity and phase images. (b–d) The recovered four positional factors, rotation factor θ, shift factors Δx, Δy, and height factor h, respectively. (e) The central position of each aperture corresponding to different illuminations in the frequency domain, where the ideal, real, and corrected positions are denoted by red triangles, green dots, and blue diamonds, respectively.


4.

Experiments

To validate the effectiveness of SC-FPM experimentally, we first compare the recovered intensity and phase distributions of one segment (90×90 pixels) of a USAF target obtained with different algorithms. Figure 6 shows the schematic diagram of the experiments. All LR images are captured with a 4×/0.1 NA objective and a CCD camera with a pixel pitch of 3.75 μm (DMK23G445, Imaging Source Inc., Germany). A programmable 32×32 RGB LED array with 4 mm spacing, controlled by an Arduino, is placed 86 mm above the sample. The central 15×15 red LEDs (central wavelength 631.13 nm, 20 nm bandwidth) are employed to provide angle-varied illumination, resulting in a theoretical final synthetic NA of 0.5.
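As a quick sanity check on the quoted synthetic NA, the corner LED of the central 15×15 block sits seven 4-mm pitches away from the axis in each direction; the short sketch below performs this back-of-the-envelope estimate.

```python
import numpy as np

# Corner LED of the central 15x15 block: 7 pitches of 4 mm along each axis.
d_led, h, na_obj = 4.0, 86.0, 0.1                 # mm, mm, objective NA
r_max = 7 * d_led * np.sqrt(2)                    # ~39.6 mm off-axis
na_illum = r_max / np.sqrt(r_max ** 2 + h ** 2)   # ~0.42
print(round(na_obj + na_illum, 2))                # ~0.52, consistent with ~0.5
```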

Fig. 6

Schematic diagram of experiments. (a) The enlargement of 32×32 RGB LED array. (b) The enlargement of microscope with light path diagram. MO, microscope objective; TL, tube lens; M1 and M2, mirrors; BS, beam splitter.


Figure 7 shows the experimental results for one segment (90×90 pixels) of a USAF target obtained by different algorithms. Figure 7(a) presents the FOV of the USAF target, whereas (a1) shows the enlargement of a subregion of (a), which is low-pass filtered by the low NA of the employed objective. Groups (b), (c), and (d) show the recovered intensity, phase, and spectrum, respectively, with different algorithms. The images in Figs. 7(b1)–7(d1), 7(b2)–7(d2), and 7(b3)–7(d3) are reconstructed by the original PIE-based algorithm at 6 iterations, the intensity correction method at 6 iterations, and adaptive FPM at 18 iterations, respectively. From a comparison of these reconstructions, it can be inferred that the systematic errors mainly come from noise and parameter imperfections, the latter producing the obvious wrinkles in the recovered results. In addition, the intensity image in Fig. 7(b3) is greatly improved over (b1) and (b2), demonstrating the advantage of adaptive FPM in noise suppression. Figures 7(b4)–7(d4) show the recovery results of PC-FPM at 12 iterations, where the obvious wrinkles are eliminated and the positional parameters converge to $\theta=1.8$ deg, $\Delta x=1.274$ mm, $\Delta y=1.270$ mm, $h=94.399$ mm. However, the intensity image is less satisfactory, as group 8, elements 4 to 6, cannot be clearly resolved. Figures 7(b5)–7(d5) show the reconstructions by SC-FPM at 28 iterations with final corrected parameters of $\theta=1.6$ deg, $\Delta x=1.188$ mm, $\Delta y=1.003$ mm, $h=89.733$ mm. Compared with the other algorithms, SC-FPM produces better performance with higher contrast and improved resolution in the final reconstructions, where each line pair is clearly resolved against a uniformly distributed background, demonstrating its adaptability and strong robustness to unknown systematic errors. Among the corrected positional parameters, only the height factor differs markedly between PC-FPM and SC-FPM. The 94 mm obtained by PC-FPM deviates far from the practically measured h = 86 mm, whereas the 89 mm obtained by SC-FPM is more reasonable. In addition, the performance of the original FPM algorithm can be significantly enhanced by offsetting it with the parameters obtained by SC-FPM, as can be seen by comparing Figs. 7(b6)–7(d6) with 7(b1)–7(d1), further validating the reliability of SC-FPM.

Fig. 7

Experimental results of one segment (90×90  pixels) in a USAF target recovered by different algorithms. (a) The FOV captured with a 4×/0.1  NA objective. (a1) The enlargement of a subregion of (a). Groups (b–d) show the recovery results of intensity, phase, and spectrum, respectively.


In addition, we also test our method on a biological sample (stem transection of a dicotyledon) with different algorithms, as shown in Fig. 8. The LED array is placed 85.9 mm above the sample, with the central 9×9 red LEDs providing angle-varied illumination, resulting in a theoretical final synthetic NA of 0.35. The results differ from those above because of the different composition of systematic errors. The images in Figs. 8(b1)–8(d1), 8(b2)–8(d2), and 8(b3)–8(d3) are reconstructed by the original FPM algorithm at 6 iterations, the intensity correction method at 6 iterations, and adaptive FPM at 18 iterations, respectively, all of which fail, with poorly visible intensities or unevenly distributed phases. Figures 8(b4)–8(d4) present the recovery results of PC-FPM at 12 iterations with final corrected parameters of $\theta=4.9$ deg, $\Delta x=0.104$ mm, $\Delta y=0.384$ mm, and $h=84.522$ mm. The recovered phase image is considerably better than those in Figs. 8(c1) and 8(c3), but the contrast of the intensity image still remains to be improved. Figures 8(b5)–8(d5) show the reconstructions by SC-FPM at 34 iterations with final corrected parameters of $\theta=3.4$ deg, $\Delta x=0.427$ mm, $\Delta y=0.287$ mm, $h=84.733$ mm. The superiority of SC-FPM over the other algorithms can be observed in the higher contrast and improved resolution of the final reconstructions, as indicated in Figs. 8(b5) and 8(c5), demonstrating its effectiveness and adaptability in practical applications.

Fig. 8

Experimental results of one segment (200×200  pixels) in a biological sample (stem transection of dicotyledon) recovered by different algorithms. (a) The FOV captured with a 4×/0.1  NA objective. (a1) The enlargement of a subregion of (a). Groups (b–d) show the recovery results of intensity, phase, and spectrum, respectively.


5.

Conclusions

In this paper, we have theoretically and experimentally reported a system calibration procedure, termed SC-FPM, based on the SA algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy. SC-FPM can retrieve a high-quality, noise-robust complex object under extreme multiple errors, including aberrations, LED intensity fluctuation, parameter imperfections, and noise in a variety of forms. The effectiveness and robustness of SC-FPM have been demonstrated, and good performance has been achieved in both simulations and experiments.

Note that reasonable selection of parameters, such as the number of initial iterations and the step length of the SA algorithm, contributes substantially to the accuracy and efficiency of SC-FPM. Therefore, these parameters need to be carefully estimated according to the actual situation. Here, we have mainly focused on the quality of the reconstructions, and the aberration was introduced with a simple Zernike polynomial. The quantitative study of aberrations is another noteworthy issue, especially under such mixed system errors. Whether the recovered aberration is real or is a product of the different errors still needs to be analyzed theoretically and experimentally, which may be the subject of future work.

Appendices

Appendix A:

The Evaluation of δ1 and δ2

Note that Eqs. (3) and (4) in our procedure are based on the PIE algorithm,18 which is quite different from the ePIE-based EPRY-FPM algorithm.13,19 In fact, both the PIE and ePIE algorithms are widely used, but the PIE-based algorithm, namely Eqs. (3) and (4), is more robust to noise given a proper choice of $\delta_1$ and $\delta_2$, as shown in Fig. 9. On top of the idealized situation, each LR image is corrupted with 40% Gaussian noise with different variances, an extreme level. A set of 225 LR intensity images is simulated under this setting. Obviously, the recovery results cannot converge because of the extreme noise, but the best performance appears at a specific iteration, as shown in Figs. 9(c1) and 9(c2). Figure 9, groups (a) and (b), shows the best results under different parameters. Machine epsilon (eps) is the minimum distance by which two numbers can be distinguished in floating-point arithmetic in MATLAB®. It can be seen that, compared with the ePIE-based algorithm, the PIE-based algorithm is more robust to noise, especially when setting $\delta_1=1$, $\delta_2=1000$. Comparing Figs. 9(a2) and 9(b2) (pink line) with Figs. 9(a4) and 9(b4) (blue line), $\delta_1=1$ is much better than $\delta_1=\mathrm{eps}$, and the same holds for the comparison of Figs. 9(a3) and 9(b3) (green line) with Figs. 9(a5) and 9(b5) (indigo line). Likewise, $\delta_2=1000$ is better than $\delta_2=\mathrm{eps}$ according to the comparison of Figs. 9(a2) and 9(b2) (pink line) with Figs. 9(a3) and 9(b3) (green line), or of Figs. 9(a4) and 9(b4) (blue line) with Figs. 9(a5) and 9(b5) (indigo line). Other combinations of $\delta_1$ and $\delta_2$ have also been tested, and consistently, all these data indicate that the best robustness and convergence efficiency are achieved at $\delta_1=1$, $\delta_2=1000$.
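For reference, a short sketch contrasting the two weighting factors discussed here (the function names are ours, and the ePIE form is the standard one from Refs. 13 and 19):

```python
import numpy as np

def pie_spectrum_weight(P, d1=1.0):
    """Weight used in Eq. (3): |P| P* / (|P|_max (|P|^2 + d1))."""
    return np.abs(P) * np.conj(P) / (np.abs(P).max() * (np.abs(P) ** 2 + d1))

def epie_spectrum_weight(P):
    """ePIE-style weight (cf. Refs. 13 and 19): P* / |P|_max^2, with no regularizer."""
    return np.conj(P) / (np.abs(P).max() ** 2)
```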

Fig. 9

Comparison of recovery results with 40% Gaussian noise using different parameters. (a1) and (b1) are the best results with EPRY-FPM algorithm at 17 iterations. (a2) and (b2) are the best results with δ1=eps, δ2=eps at 6 iterations. (a3) and (b3) are the best results with δ1=eps, δ2=1000 at 20 iterations. (a4) and (b4) are the best results with δ1=1, δ2=eps at 6 iterations. (a5) and (b5) are the best results with δ1=1, δ2=1000 at 5 iterations. (c1) and (c2) are the intensity and phase reconstruction accuracy versus iteration time for different algorithms.


Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

The authors acknowledge the National Natural Science Foundation of China (NSFC) (61377008 and 81427802).

References

1. G. Zheng et al., "Wide-field, high-resolution Fourier ptychographic microscopy," Nat. Photonics 7(9), 739–745 (2013). http://dx.doi.org/10.1038/nphoton.2013.187

2. X. Ou et al., "Quantitative phase imaging via Fourier ptychographic microscopy," Opt. Lett. 38(22), 4845–4848 (2013). http://dx.doi.org/10.1364/OL.38.004845

3. X. Ou et al., "High numerical aperture Fourier ptychography: principle, implementation and characterization," Opt. Express 23(3), 3472–3491 (2015). http://dx.doi.org/10.1364/OE.23.003472

4. L. Bian et al., "Content adaptive illumination for Fourier ptychography," Opt. Lett. 39(23), 6648–6651 (2014). http://dx.doi.org/10.1364/OL.39.006648

5. S. Pacheco et al., "Transfer function analysis in epi-illumination Fourier ptychography," Opt. Lett. 40(22), 5343–5346 (2015). http://dx.doi.org/10.1364/OL.40.005343

6. L. Tian et al., "3D intensity and phase imaging from light field measurements in an LED array microscope," Optica 2(2), 104–111 (2015). http://dx.doi.org/10.1364/OPTICA.2.000104

7. X. Ou et al., "Aperture scanning Fourier ptychographic microscopy," Biomed. Opt. Express 7(8), 3140–3150 (2016). http://dx.doi.org/10.1364/BOE.7.003140

8. S. Dong et al., "High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography," Opt. Express 22(17), 20856–20870 (2014). http://dx.doi.org/10.1364/OE.22.020856

9. J. Chung et al., "Wide field-of-view fluorescence image deconvolution with aberration-estimation from Fourier ptychography," Biomed. Opt. Express 7(2), 352–368 (2016). http://dx.doi.org/10.1364/BOE.7.000352

10. S. Dong et al., "Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging," Biomed. Opt. Express 5(6), 1757–1767 (2014). http://dx.doi.org/10.1364/BOE.5.001757

11. L. Tian et al., "Multiplexed coded illumination for Fourier ptychography with an LED array microscope," Biomed. Opt. Express 5(7), 2376–2389 (2014). http://dx.doi.org/10.1364/BOE.5.002376

12. L. Tian et al., "Computational illumination for high-speed in vitro Fourier ptychographic microscopy," Optica 2(10), 904–911 (2015). http://dx.doi.org/10.1364/OPTICA.2.000904

13. X. Ou et al., "Embedded pupil function recovery for Fourier ptychographic microscopy," Opt. Express 22(5), 4960–4972 (2014). http://dx.doi.org/10.1364/OE.22.004960

14. J. Sun et al., "Efficient positional misalignment correction method for Fourier ptychographic microscopy," Biomed. Opt. Express 7(3), 1336–1350 (2016). http://dx.doi.org/10.1364/BOE.7.001336

15. Z. Bian et al., "Adaptive system correction for robust Fourier ptychographic imaging," Opt. Express 21(26), 32400–32410 (2013). http://dx.doi.org/10.1364/OE.21.032400

16. C. Zuo et al., "Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy," Opt. Express 24(18), 20724–20744 (2016). http://dx.doi.org/10.1364/OE.24.020724

17. L.-H. Yeh et al., "Experimental robustness of Fourier ptychography phase retrieval algorithms," Opt. Express 23(26), 33214–33240 (2015). http://dx.doi.org/10.1364/OE.23.033214

18. J. M. Rodenburg et al., "A phase retrieval algorithm for shifting illumination," Appl. Phys. Lett. 85(20), 4795–4797 (2004). http://dx.doi.org/10.1063/1.1823034

19. A. M. Maiden et al., "An improved ptychographical phase retrieval algorithm for diffractive imaging," Ultramicroscopy 109(10), 1256–1262 (2009). http://dx.doi.org/10.1016/j.ultramic.2009.05.012

20. L. Bian et al., "Motion-corrected Fourier ptychography," Biomed. Opt. Express 7(11), 4543–4553 (2016). http://dx.doi.org/10.1364/BOE.7.004543

21. L. Bian et al., "Fourier ptychographic reconstruction using Wirtinger flow optimization," Opt. Express 23(4), 4856–4866 (2015). http://dx.doi.org/10.1364/OE.23.004856

22. L. Bian et al., "Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient," Sci. Rep. 6, 27384 (2016). http://dx.doi.org/10.1038/srep27384

23. A. Pan et al., "Code for SC-FPM," https://www.sites.google.com/site/dranpanblog/home (April 2017).

Biography

An Pan received his BE degree in electronic science and technology from Nanjing University of Science and Technology in 2014. He is a PhD candidate at Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, China, under the guidance of professor Baoli Yao. His research focuses on computational optical microscopy and quantitative phase imaging. He was awarded the 2017 SPIE Optics and Photonics Education Scholarship as a dedicated advocate for optics outreach and will continue his involvement throughout the optics community.

Ming Lei received his BE degree from the School of Physics and Optoelectronic Engineering, Xidian University, in 2000 and his PhD from Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, in 2007. He is a professor at the State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences. He was trained as a postdoctoral research fellow in the Department of Chemistry, University of Konstanz, from 2008 to 2010. His current research is focused on super-resolution microscopy and optical trapping technologies.

Baoli Yao obtained his PhD in optics at Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, in 1997 and pursued his postdoctoral work at the Technical University of Munich, Germany, from 1997 to 1998. Currently, he is associated with the State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, and is the deputy director of the lab. His research areas include super-resolution optical microscopy, digital holographic microscopy, optical micromanipulation and microfabrication, optical data storage, and information processing.

Biographies for the other authors are not available.

© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2017/$25.00
An Pan, Yan Zhang, Tianyu Zhao, Zhaojun Wang, Dan Dan, Ming Lei, and Baoli Yao "System calibration method for Fourier ptychographic microscopy," Journal of Biomedical Optics 22(9), 096005 (12 September 2017). https://doi.org/10.1117/1.JBO.22.9.096005
Received: 24 June 2017; Accepted: 21 August 2017; Published: 12 September 2017