Several magnetic resonance imaging (MRI) sequences are acquired for diagnosis and treatment planning. MRI sequences with excellent soft-tissue contrast are desirable for post-processing algorithms such as tumor segmentation; however, the performance of these algorithms drops markedly when imaging protocols vary or sequences are missing. This study proposes a co-training deep learning algorithm for segmenting vestibular schwannoma (VS) tumors and the cochlea. Our model was trained on both contrast-enhanced T1-weighted (ceT1W) and high-resolution T2-weighted (hrT2W) MRI sequences, and uses content- and style-matching mechanisms to transfer informative features from the network trained with the full set of modalities into the network trained with a missing modality. The model was trained on the publicly available Vestibular-Schwannoma-SEG dataset, which consists of 242 patients with ceT1W and hrT2W MRI sequences, split into two non-overlapping groups: training (n=210) and testing (n=32). Three metrics were reported: Dice score (DSC), relative volume error (RVE), and area under the receiver operating characteristic curve (AUC-ROC). Our method outperformed the baseline for tumor segmentation, with (DSC, RVE, AUC-ROC) of (0.89, 3.57, 0.96) and (0.94, 3.10, 0.97) when ceT1W and hrT2W were missing, respectively. Similar performance was observed for cochlea segmentation when hrT2W was missing, with (DSC, RVE, AUC-ROC) of (0.68, 14.06, 0.80). Our model is robust against missing sequences, which are common in clinical settings, and could benefit clinical centers with missing data or heterogeneous imaging protocols.
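The two overlap metrics reported above can be computed directly from binary segmentation masks. The following is a minimal sketch, not the authors' evaluation code; it assumes the common definitions of the Dice score and an unsigned relative volume error expressed as a percentage (RVE conventions vary between signed and unsigned forms in the literature):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice score (DSC) between two binary masks: 2|P∩G| / (|P|+|G|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * inter / denom if denom > 0 else 1.0

def relative_volume_error(pred, gt):
    """Unsigned relative volume error (RVE) in percent: 100*|Vp - Vg| / Vg."""
    gt_vol = gt.astype(bool).sum()
    pred_vol = pred.astype(bool).sum()
    return 100.0 * abs(pred_vol - gt_vol) / gt_vol

# Toy 2D example (real masks would be 3D MRI label volumes).
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 1, 0], [0, 0, 0]])
print(dice_score(pred, gt))             # 0.8
print(relative_volume_error(pred, gt))  # 50.0
```

AUC-ROC, in contrast, is computed from the network's voxel-wise probability outputs before thresholding, e.g. with `sklearn.metrics.roc_auc_score`.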