We employed a Pix2Pix generative adversarial network to translate the multispectral fluorescence images into colored brightfield representations resembling H&E staining. The model was trained on paired 512×512-pixel image patches, with a manually stained image serving as the reference target and the corresponding fluorescence images as the input. The unmodified baseline model did not achieve high microscopic accuracy: it attributed incorrect colors to various biological structures and added or removed image features. However, after substituting the plain convolutions in the U-Net generator with dense convolution units, we observed improved similarity of microscopic structures and better color balance between the paired images. These improvements underscore the potential utility of virtual staining in histopathological analysis for veterinary oncology applications.
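The dense-unit substitution described above follows the DenseNet pattern, in which each layer within a unit receives the concatenation of the unit's input and all preceding layers' outputs. As a minimal sketch of the resulting channel arithmetic (framework-free, with illustrative rather than actual hyperparameters: the growth rate and layer count below are hypothetical), one can tabulate how the per-layer input width grows inside one such unit:

```python
# Channel bookkeeping for a DenseNet-style convolution unit:
# each internal layer sees the concatenation of the unit's input
# and every previous layer's output, so its input width grows
# linearly with the growth rate. Parameters here are illustrative.

def dense_unit_channels(in_channels, growth_rate, num_layers):
    """Return (input channels seen by each layer, final output channels)."""
    per_layer_inputs = []
    channels = in_channels
    for _ in range(num_layers):
        per_layer_inputs.append(channels)  # this layer sees all prior features
        channels += growth_rate            # its output is concatenated on
    return per_layer_inputs, channels

# Example: a generator stage receiving 64 channels,
# with 4 dense layers and a growth rate of 32.
inputs, out = dense_unit_channels(64, 32, 4)
# inputs -> [64, 96, 128, 160]; out -> 192
```

This concatenative reuse of earlier feature maps is a plausible reason the dense variant preserves fine microscopic structure better than plain convolutions, since low-level features remain directly available to deeper layers of the generator.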