DeCoLearn: Deformation-Compensated Learning for

Image Reconstruction without Ground Truth

IEEE Transactions on Medical Imaging, 2022

Joint image reconstruction and image registration without any ground-truth supervision


Weijie Gan1, Yu Sun1, Cihat Eldeniz2, Jiaming Liu3, Hongyu An2,3,4,5, Ulugbek S. Kamilov1,3

1Department of Computer Science & Engineering, Washington University in St. Louis, St. Louis, MO, USA
2Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, USA
3Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA
4Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
5Department of Neurology, Washington University in St. Louis, St. Louis, MO, USA

Paper Preprint Code

We are grateful to Vivian Chen for her contributions to this project website.

Banner

Figure 1: Illustration of the DeCoLearn training.


Abstract


Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to having a ground-truth. However, existing N2N-based methods are not suitable for learning from the measurements of an object undergoing nonrigid deformation. This paper addresses this issue by proposing the deformation-compensated learning (DeCoLearn) method for training deep reconstruction networks by compensating for object deformations. A key component of DeCoLearn is a deep registration module, which is jointly trained with the deep reconstruction network without any ground-truth supervision. DeCoLearn was extensively validated on both simulated and experimentally collected magnetic resonance imaging (MRI) data, showing that it can significantly improve imaging quality.
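In schematic form (our notation, a sketch of the training objective rather than the paper's exact formulation), DeCoLearn jointly trains a reconstruction network $g_\theta$ and a registration network $f_\phi$ by enforcing measurement-domain consistency across motion states:

$$\mathcal{L}(\theta, \phi) = \sum_{(i,j)} \left\| A_j\, \mathcal{W}\big(g_\theta(z_i),\, f_\phi(g_\theta(z_i),\, g_\theta(z_j))\big) - y_j \right\|_2^2 + \lambda\, \rho(f_\phi),$$

where $y_i$ and $y_j$ are measurements of the same object in two motion states, $z_i = A_i^{\mathsf{H}} y_i$ is the artifact-corrupted network input, $A_j$ is the forward (measurement) operator, $\mathcal{W}$ is the STN warping operator driven by the estimated motion field, and $\rho$ is a smoothness regularizer on that field.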

Model


Figure 2: DeCoLearn jointly trains two CNN modules: one for image reconstruction and one for image registration. DeCoLearn takes as input pairs of measurements of the same object at different motion states. The reconstruction module removes artifacts due to noise and undersampling. The registration module estimates the motion field characterizing the directional mapping between the coordinates of the two states. The warping operator, implemented as a Spatial Transformer Network (STN), registers the reconstructed images. The network is trained end-to-end without any ground-truth supervision.
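For concreteness, below is a minimal PyTorch-style sketch of one DeCoLearn training step under the schematic objective above. All identifiers (recon_net, regist_net, forward_op, decolearn_step, warp) are hypothetical placeholders chosen for illustration, not the API of the released code; see the GitHub repository for the actual implementation.

import torch
import torch.nn.functional as F

def warp(img, flow):
    # Spatial Transformer Network warping: bilinearly resample img along flow.
    # img: (B, C, H, W) reconstructed image; flow: (B, 2, H, W) pixel offsets (x, y).
    B, _, H, W = img.shape
    # Identity sampling grid in the normalized [-1, 1] coordinates of grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, H, device=img.device),
        torch.linspace(-1.0, 1.0, W, device=img.device),
        indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(B, H, W, 2)
    # Convert pixel offsets to normalized offsets and displace the grid.
    offs = torch.stack((flow[:, 0] / ((W - 1) / 2.0),
                        flow[:, 1] / ((H - 1) / 2.0)), dim=-1)
    return F.grid_sample(img, base + offs, align_corners=True)

def decolearn_step(recon_net, regist_net, forward_op, z_i, z_j, y_j, lam=0.1):
    # z_i, z_j: artifact-corrupted inputs (e.g., zero-filled inverses) of two
    # motion states of the same object; y_j: raw measurements of state j.
    x_i = recon_net(z_i)            # reconstruction module, state i
    x_j = recon_net(z_j)            # reconstruction module, state j
    flow = regist_net(x_i, x_j)     # registration module: motion field i -> j
    x_reg = warp(x_i, flow)         # STN-registered reconstruction
    # Measurement-domain consistency against the *other* motion state
    # (N2N-style, no ground truth), plus a total-variation smoothness
    # penalty on the estimated motion field.
    loss = F.mse_loss(forward_op(x_reg), y_j)
    loss = loss + lam * (flow.diff(dim=-1).abs().mean() +
                         flow.diff(dim=-2).abs().mean())
    return loss

The sketch treats forward_op as a generic callable that already captures the sampling pattern of state j; the actual MRI forward operator is complex-valued and multi-coil, which the sketch omits for brevity.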

Validation on Experimentally-Collected 4D MRI Data


Banner

Figure 3: Video of DeCoLearn reconstructed images across different slices. LEFT: inverse multi-coil non-uniform fast Fourier transform (MCNUFFT). RIGHT: DeCoLearn.

Banner

Figure 4: Video of DeCoLearn reconstructed images across different respiratory phases. LEFT: inverse multi-coil non-uniform fast Fourier transform (MCNUFFT). RIGHT: DeCoLearn.

Banner

Figure 5: Illustration of DeCoLearn reconstructed images across different respiratory phases.

Banner

Figure 6: Illustration of DeCoLearn reconstructed images compared against several baseline methods.

Validation on Simulated Data

See our GitHub repository for the code used to generate the following results.


Banner
Banner

Paper


Bibtex


@article{gan2021deformation,
  title={Deformation-Compensated Learning for Image Reconstruction without Ground Truth},
  author={Gan, Weijie and Sun, Yu and Eldeniz, Cihat and Liu, Jiaming and An, Hongyu and Kamilov, Ulugbek S},
  journal={IEEE Transactions on Medical Imaging},
  volume={41},
  number={9},
  pages={2371--2384},
  month={sep},
  year={2022}
}