We report a recurrent neural network (RNN)-based cross-modality image inference framework, termed Recurrent-MZ+, that explicitly incorporates two or three 2D fluorescence images acquired at different axial planes to rapidly reconstruct fluorescence images at arbitrary axial positions within the sample volume, matching the 3D image stack of the same sample acquired with a confocal scanning microscope. We demonstrated the efficacy of Recurrent-MZ+ on transgenic C. elegans samples: using three wide-field fluorescence images as input, the sample volume reconstructed by Recurrent-MZ+ mitigates the deformations caused by the anisotropic point spread function of wide-field microscopy and matches the ground-truth confocal image stack of the same sample.
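To make the inference interface concrete, the following is a minimal toy sketch of the idea described above: 2-3 input 2D planes, each tagged with its axial position, are folded sequentially into a recurrent state, which is then decoded into an image at an arbitrary target axial position. All layer shapes, the NumPy implementation, and the function name `recurrent_mz_plus_infer` are illustrative assumptions; the weights here are untrained random placeholders, not the authors' trained network.

```python
import numpy as np

def recurrent_mz_plus_infer(input_planes, input_zs, target_z,
                            hidden_dim=8, rng_seed=0):
    """Toy sketch (NOT the published model): recurrently fuse 2-3
    axially tagged 2D planes, then decode a plane at target_z."""
    rng = np.random.default_rng(rng_seed)
    h, w = input_planes[0].shape
    state = np.zeros((h, w, hidden_dim))
    # placeholder (untrained) weights standing in for learned layers
    W_in = 0.1 * rng.standard_normal(hidden_dim)
    W_rec = 0.1 * rng.standard_normal((hidden_dim, hidden_dim))
    W_out = 0.1 * rng.standard_normal(hidden_dim)
    for plane, z in zip(input_planes, input_zs):
        # encode each plane together with its axial coordinate,
        # then update the recurrent state
        x = plane[..., None] * W_in + z
        state = np.tanh(x + state @ W_rec)
    # decode the fused state at the requested axial position
    return np.tanh(state @ W_out + 0.1 * target_z)  # (h, w) image
```

The key design point this sketch illustrates is that the recurrence decouples the number and spacing of input planes from the output: any axial position can be queried from the same fused state.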