Clinical deployment of systems based on deep neural networks is hampered by their sensitivity to domain shift, caused by, e.g., new scanners or rare events, factors usually overcome by human supervision. We suggest a correct-then-predict approach, in which the user labels a few samples of the new data for each slide; these labels are then used to update the network. This few-shot meta-learning method is based on Model-Agnostic Meta-Learning (MAML), which trains a model to adapt quickly to new tasks. Here we adapt and apply the method to the histopathological setting by defining a task as a whole-slide image together with its corresponding classification problem. We evaluated the method on three datasets, purposefully holding out-of-distribution data out of the training data, such as whole-slide images from other centers, from other scanners, or with different tumor classes. Our results show that MAML outperforms conventionally trained baseline networks on all our datasets in average accuracy per slide. Furthermore, MAML serves as a robustness mechanism against out-of-distribution data: the model becomes less sensitive to differences between whole-slide images and is viable for clinical implementation when used with the correct-then-predict workflow. This reduces the need for data annotation when training networks, and reduces the risk of performance loss when domain-shifted data is encountered after deployment.
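The following is a minimal sketch (not the authors' code) of the correct-then-predict workflow at deployment time: a meta-trained model is copied and updated with a few user-labelled patches from a new whole-slide image (the MAML inner loop), then used to predict on the remaining patches of that slide. It assumes a PyTorch classifier; all names (`adapt_to_slide`, `inner_lr`, `n_inner_steps`) and hyperparameter values are illustrative assumptions, and meta-training itself is omitted.

```python
# Hypothetical sketch of per-slide adaptation ("correct") followed by
# inference on the rest of the slide ("predict"). Not the paper's code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def adapt_to_slide(model: nn.Module,
                   support_x: torch.Tensor,  # few user-labelled patches from the new slide
                   support_y: torch.Tensor,  # their labels (the "correct" step)
                   inner_lr: float = 0.01,   # assumed inner-loop learning rate
                   n_inner_steps: int = 5) -> nn.Module:
    """Return a slide-specific copy of the meta-trained model (MAML-style inner loop)."""
    adapted = copy.deepcopy(model)  # keep the meta-parameters intact for the next slide
    optimizer = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    adapted.train()
    for _ in range(n_inner_steps):  # a few gradient steps on the labelled support set
        optimizer.zero_grad()
        loss = F.cross_entropy(adapted(support_x), support_y)
        loss.backward()
        optimizer.step()
    return adapted


# Usage: label a handful of patches per slide, adapt, then predict the rest.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # stand-in classifier
support_x = torch.randn(8, 3, 32, 32)            # stand-in labelled patches
support_y = torch.randint(0, 2, (8,))
query_x = torch.randn(64, 3, 32, 32)             # unlabelled patches from the same slide

slide_model = adapt_to_slide(model, support_x, support_y)
slide_model.eval()
with torch.no_grad():
    predictions = slide_model(query_x).argmax(dim=1)  # the "predict" step
```

Because adaptation runs on a copy of the network, the meta-trained parameters stay fixed across slides, so each new whole-slide image starts from the same initialization, which is the property MAML optimizes for.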