AI guidance for compression ultrasound is challenging for 2D segmentation networks, which produce inconsistent labels. Registration can mitigate this, but classical untrained approaches cannot handle the large deformations and noisy background motion, and convolutional models such as VoxelMorph do not robustly reach the required accuracy. Large deformations are typically estimated with multi-warp networks built around "correlation layers", but these are resource-intensive and not easily deployed on end devices in a clinical context. We propose to replace the correlation layer with a differentiable convex optimisation block and to train the convolutional feature backbone end-to-end for improved performance.
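The key property of such a block is that it turns discrete displacement matching into a smooth operation through which gradients can flow to the feature backbone. The abstract does not give the exact formulation, so the following is only an illustrative sketch: a softmin-weighted expectation over a candidate-displacement cost volume, a common differentiable surrogate for the hard argmin of correlation-layer matching (function and parameter names are my own).

```python
import numpy as np

def soft_displacement(cost, disps, temperature=1.0):
    """Differentiable displacement estimate from a dissimilarity cost volume.

    cost:  (N, D) dissimilarity of N voxels against D candidate
           displacements (lower = better match).
    disps: (D, 3) candidate displacement vectors.
    Returns (N, 3): the expected displacement under a softmin
    distribution, a smooth stand-in for hard argmin matching.
    """
    w = np.exp(-cost / temperature)        # softmin weights
    w /= w.sum(axis=1, keepdims=True)      # normalise per voxel
    return w @ disps                        # expected displacement
```

With a low temperature the result approaches the discrete best match, while remaining differentiable with respect to the costs (and hence the features that produced them).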
Purpose: Image registration, the task of aligning images, is fundamental to medical image analysis. While many image-analysis tasks, such as segmentation, are now handled almost entirely by deep learning and exceed the accuracy of conventional algorithms, deformable image registration is often still performed with conventional methods. Deep learning methods for medical image registration have recently matched the accuracy of conventional algorithms, but they are often based on a weakly supervised learning scheme that uses multi-label image segmentations during training, and creating such detailed annotations is very time-consuming.
Approach: We propose a weakly supervised learning scheme for deformable image registration. By calculating the loss function based on only bounding box labels, we are able to train an image registration network for large displacement deformations without using densely labeled images. We evaluate our model on interpatient three-dimensional abdominal CT and MRI images.
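A bounding-box label can drive registration through an overlap loss: rasterise the box into a binary mask and maximise the overlap between the fixed mask and the warped moving mask. The exact loss used by the authors is not specified, so this is a minimal sketch of one plausible choice, a Dice overlap on rasterised boxes (all names are my own):

```python
import numpy as np

def box_mask(shape, box):
    """Rasterise an axis-aligned box (z0, y0, x0, z1, y1, x1) into a binary mask."""
    m = np.zeros(shape, dtype=float)
    z0, y0, x0, z1, y1, x1 = box
    m[z0:z1, y0:y1, x0:x1] = 1.0
    return m

def dice(a, b, eps=1e-6):
    """Soft Dice overlap between two masks (1 = perfect overlap)."""
    return (2.0 * (a * b).sum() + eps) / (a.sum() + b.sum() + eps)

def box_loss(fixed_boxes, warped_moving_masks, shape):
    """Weak-supervision loss: 1 minus mean Dice over all annotated boxes."""
    scores = [dice(box_mask(shape, fb), wm)
              for fb, wm in zip(fixed_boxes, warped_moving_masks)]
    return 1.0 - float(np.mean(scores))
```

In training, the moving box mask would be warped by the predicted deformation (e.g. via differentiable grid sampling) before the Dice term is evaluated.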
Results: The results show an improvement of ∼10% (for CT images) and ∼20% (for MRI images) over the unsupervised method. When the reduced annotation effort is taken into account, the performance also exceeds that of weakly supervised training with detailed image segmentations.
Conclusion: We show that the performance of image registration methods can be enhanced with little annotation effort using our proposed method.
Medical image registration has for many years been dominated by techniques that rely on expert annotations while leaving unlabelled data unused. Deep unsupervised architectures exploit this widely available unlabelled data to model anatomically induced patterns in a dataset. The Deformable Auto-encoder (DAE), an unsupervised group-wise registration technique, generates a deformed reconstruction of an input image and, in doing so, a global template that captures the deformations in a medical dataset. DAEs, however, have a significant weakness in propagating global information over long-range dependencies, which may affect registration performance both quantitatively and qualitatively. Our proposed method captures valuable information across the whole spatial extent using an attention mechanism. We present the Deformable Auto-encoder Attention ReLU Network (DA-AR-Net), an integration of the Attention ReLU (AReLU), an attention-based activation function, into the DAE framework. The template image is detached from the deformation field by encoding the spatial information into two separate latent code representations. Each latent code is followed by a separate decoder network, while a single encoder is used for feature encoding. DA-AR-Net was formalized after an extensive and systematic search across hyperparameters: the initial setting of the learnable AReLU parameters, the appropriate positioning of AReLU, the latent code dimensions, and the batch size. Our best architecture shows a significant improvement of 42% in MSE score over previous DAEs, and a 32% reduction is attained while generating visually sharp global templates.
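For readers unfamiliar with the activation, AReLU as published (Chen and Xu, 2020) attaches two learnable scalars to a ReLU-like function: negative inputs are scaled by a clamped attention coefficient, positive inputs are amplified. The sketch below shows the forward pass with fixed scalars standing in for the learnable parameters; treat the exact clamping range as an assumption of this illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def arelu(x, alpha=0.9, beta=2.0):
    """Attention ReLU (AReLU) forward pass.

    alpha, beta are learnable scalars in the original formulation;
    here they are fixed for illustration. Negative inputs are scaled
    by a clamped alpha, positive inputs by 1 + sigmoid(beta).
    """
    a = np.clip(alpha, 0.01, 0.99)          # keep the negative slope bounded
    return np.where(x < 0, a * x, (1.0 + sigmoid(beta)) * x)
```

Unlike plain ReLU, negative activations retain a (learned) nonzero gradient, and the positive branch can amplify responses, which is where the "attention" interpretation comes from.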
KEYWORDS: Image registration, Medical imaging, Data modeling, Computer programming, 3D modeling, Neuroimaging, Magnetic resonance imaging, Image restoration, Brain, 3D scanning
Robust groupwise registration methods are important for the analysis of large medical image datasets. We build upon the concept of deforming autoencoders, which decouples shape and appearance to represent anatomical variability in a robust and plausible manner. In this work we propose a deep learning model that is trained to generate templates and deformation fields. It employs a joint encoder block that provides latent representations for both shape and appearance, followed by two independent shape and appearance decoder paths. The model achieves image reconstruction by warping the template provided by the appearance decoder with the warping field estimated by the shape decoder. By restricting the embedding to a low-dimensional latent code, we obtain meaningful deformable templates. Our objective function ensures smooth and realistic deformation fields. It contains an invertibility loss term, which is novel for deforming autoencoders and induces backward consistency: warping the reconstructed image with the deformation field should ideally result in the template, and warping the template with the reversed deformation field should ideally produce the reconstructed image. We demonstrate the potential of our approach for two- and three-dimensional medical image data by training and evaluating it on labeled MRI brain scans. We show that adding the inverse consistency penalty to the objective function leads to improved and more robust registration results. When evaluated on unseen data with expert labels for accuracy estimation, our three-dimensional model achieves Dice scores increased by 5 percentage points.
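The inverse consistency idea can be stated compactly: if u is the forward displacement field and v its intended inverse, then composing them should return every point to itself, u(x) + v(x + u(x)) ≈ 0. The abstract does not give the exact penalty, so here is a minimal one-dimensional sketch with nearest-neighbour composition (real 3-D implementations would use trilinear sampling); all names are my own.

```python
import numpy as np

def inverse_consistency_penalty(u, v):
    """Mean squared residual of composing a forward displacement field u
    with its candidate inverse v on a 1-D grid.

    Ideal inverses satisfy u(x) + v(x + u(x)) = 0 at every grid point.
    Nearest-neighbour sampling is used for brevity; practical 3-D models
    use (tri)linear interpolation so the term stays differentiable.
    """
    n = len(u)
    x = np.arange(n)
    idx = np.clip(np.round(x + u).astype(int), 0, n - 1)  # where u sends x
    residual = u + v[idx]                                  # should vanish
    return float(np.mean(residual ** 2))
```

Adding this term (in both composition orders) to the reconstruction loss is what the abstract refers to as inducing backward consistency.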