Joint Image Reconstruction and Image Registration without Any Ground-Truth Supervision
1Department of Computer Science & Engineering, Washington University in St. Louis, St. Louis, MO, USA 2Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, USA 3Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA 4Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA 5Department of Neurology, Washington University in St. Louis, St. Louis, MO, USA
We are grateful to Vivian Chen for her contributions to this project website.
Figure 1: Illustration of the DeCoLearn training.
Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to having a ground-truth. However, existing N2N-based methods are not suitable for learning from the measurements of an object undergoing nonrigid deformation. This paper addresses this issue by proposing the deformation-compensated learning (DeCoLearn) method for training deep reconstruction networks by compensating for object deformations. A key component of DeCoLearn is a deep registration module, which is jointly trained with the deep reconstruction network without any ground-truth supervision. DeCoLearn was extensively validated on both simulated and experimentally collected magnetic resonance imaging (MRI) data, showing that it can significantly improve imaging quality.
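The idea of compensating for deformation inside an N2N-style loss can be sketched in a few lines. The sketch below is illustrative, not the authors' implementation: `recon`, `register`, `warp`, and `forward_op` are hypothetical placeholders for the reconstruction network, the registration network, the warping operator, and the imaging forward operator, and the loss is written as a simple mean-squared discrepancy.

```python
import numpy as np

def decolearn_loss(y1, y2, recon, register, warp, forward_op):
    """Illustrative DeCoLearn-style self-supervised loss.

    y1, y2 : two noisy, undersampled measurements of the same object
             acquired at different motion states (no ground truth).
    """
    x1 = recon(y1)                 # reconstruction of motion state 1
    x2 = recon(y2)                 # reconstruction of motion state 2
    flow = register(x2, x1)        # motion field mapping state 2 -> state 1
    x2_warped = warp(x2, flow)     # warp x2 into the coordinates of state 1
    # Consistency against the *other* measurement: if the reconstruction and
    # the motion field are both accurate, the warped reconstruction should
    # re-predict the first measurement after applying the forward operator.
    return np.mean((forward_op(x2_warped) - y1) ** 2)
```

Because both networks appear inside the same loss, minimizing it end-to-end trains the reconstruction and registration modules jointly.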
Figure 2: DeCoLearn jointly trains two CNN modules: one for image reconstruction and one for image registration. DeCoLearn takes as input pairs of measurements of the same object at different motion states. The reconstruction module removes artifacts due to noise and undersampling. The registration module estimates the motion field characterizing the directional mapping between the coordinates of the two states. The warping operator, implemented as a Spatial Transformer Network (STN), registers the reconstructed images. The network is trained end-to-end without any ground-truth supervision.
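The warping operator at the heart of the STN is differentiable bilinear resampling: each output pixel is looked up at a flow-displaced location and interpolated from its four neighbors. The minimal NumPy sketch below shows the resampling step only (the real module runs on autograd tensors so gradients flow back into the registration network); the displacement convention `(dy, dx)` is an assumption for illustration.

```python
import numpy as np

def bilinear_warp(img, flow):
    """Warp a 2-D image by a dense flow field via bilinear interpolation.

    img  : (H, W) float array
    flow : (2, H, W) array of per-pixel (dy, dx) displacements
    """
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Source coordinates for each output pixel, clamped to the image grid.
    sy = np.clip(ys + flow[0], 0, H - 1)
    sx = np.clip(xs + flow[1], 0, W - 1)
    y0 = np.floor(sy).astype(int); x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = sy - y0; wx = sx - x0
    # Interpolate horizontally on the two bracketing rows, then vertically.
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

A zero flow field reproduces the input image exactly, which is a convenient sanity check when wiring the operator into a training loop.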
Figure 3: Video of DeCoLearn reconstructed images across different slices. LEFT: inverse multi-coil non-uniform fast Fourier transform (MCNUFFT). RIGHT: DeCoLearn.
Figure 4: Video of DeCoLearn reconstructed images across different respiratory phases. LEFT: inverse multi-coil non-uniform fast Fourier transform (MCNUFFT). RIGHT: DeCoLearn.
Figure 5: Illustration of DeCoLearn reconstructed images across different respiratory phases.
Figure 6: Illustration of DeCoLearn reconstructed images compared against several baseline methods.
Check our GitHub repository for the code used to generate the results below.