CoRRECT: A Deep Unfolding Framework for Motion-Corrected Quantitative R2* Mapping

The first unified qMRI framework for recovering high-quality quantitative R2* maps directly from noisy, subsampled, and motion-corrupted MRI measurements.


Xiaojian Xu*1, Weijie Gan*1, Satya V. V. N. Kothapalli2,
Dmitriy A. Yablonskiy2, and Ulugbek S. Kamilov1

1Computational Imaging Group (CIG), Washington University in St. Louis, St. Louis, MO, USA
2Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, USA

Preprint

Abstract


Quantitative MRI (qMRI) refers to a class of MRI methods for quantifying the spatial distribution of biological tissue parameters. Traditional qMRI methods usually deal separately with artifacts arising from accelerated data acquisition, involuntary physical motion, and magnetic-field inhomogeneities, leading to suboptimal end-to-end performance. This paper presents CoRRECT, a unified deep unfolding (DU) framework for qMRI consisting of a model-based end-to-end neural network, a method for motion-artifact reduction, and a self-supervised learning scheme. The network is trained to produce R2* maps whose k-space representation matches the measured data while also accounting for motion and field inhomogeneities. When deployed, CoRRECT uses only the k-space data, without any pre-computed parameters for motion or inhomogeneity correction. Our results on experimentally collected multi-Gradient-Recalled Echo (mGRE) MRI data show that CoRRECT recovers motion- and inhomogeneity-artifact-free R2* maps in highly accelerated acquisition settings. This work opens the door to DU methods that integrate physical measurement models, biophysical signal models, and learned prior models for high-quality qMRI.
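For a concrete picture of the measurement model, the sketch below simulates the forward chain from an R2* map to subsampled, noisy k-space under a plain monoexponential decay. The echo times, image size, random mask, and noise level are illustrative assumptions, not the acquisition parameters used in the paper.

```python
import numpy as np

def mgre_signal(s0, r2s, echo_times):
    """Monoexponential mGRE magnitude decay: S(t) = S0 * exp(-R2* * t)."""
    return s0 * np.exp(-r2s * echo_times[:, None, None])

def subsampled_kspace(images, mask, noise_std=0.01):
    """Per-echo k-space: mask the centered 2D FFT and add complex Gaussian noise."""
    k = np.fft.fftshift(np.fft.fft2(images, axes=(-2, -1)), axes=(-2, -1))
    noise = noise_std * (np.random.randn(*k.shape) + 1j * np.random.randn(*k.shape))
    return mask * (k + noise)

# Illustrative phantom: 10 echoes spanning 5-50 ms; R2* in 1/s.
tes = np.linspace(0.005, 0.050, 10)
s0 = np.ones((64, 64))
r2s = np.full((64, 64), 20.0)
mask = np.random.rand(64, 64) < 0.25          # roughly x4 acceleration (random mask)
y = subsampled_kspace(mgre_signal(s0, r2s, tes), mask)
```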

Model


Figure 1: Overview of the CoRRECT framework for training an end-to-end deep network consisting of two modules: Rθ for reconstructing mGRE MRI images and Eφ for estimating the corresponding R2* maps. The network takes as input subsampled, noisy, and motion-corrupted k-space measurements. Rθ is implemented as a deep model-based architecture (DMBA) initialized with the zero-filled reconstruction. Eφ is implemented as a customized U-Net architecture mapping the output of Rθ to the desired R2* map. The whole network is trained end-to-end using fully-sampled mGRE sequence data, without any ground-truth quantitative R2* maps.
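A minimal PyTorch sketch of the two-module design, assuming a gradient-step unrolling for Rθ and small placeholder CNNs standing in for both the learned regularizer and Eφ; the iteration count, layer sizes, and single-coil Cartesian forward model are illustrative stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DMBA(nn.Module):
    """Unrolled reconstruction R_theta: alternate data-consistency gradient
    steps with a learned refinement, starting from the zero-filled image."""
    def __init__(self, iters=8):
        super().__init__()
        self.iters = iters
        self.gamma = nn.Parameter(torch.tensor(1.0))   # learned step size
        self.denoiser = nn.Sequential(                 # placeholder for the learned prior
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1))

    def forward(self, y, mask):
        x = torch.fft.ifft2(y)                         # zero-filled initialization
        for _ in range(self.iters):
            grad = torch.fft.ifft2(mask * (torch.fft.fft2(x) - y))
            x = x - self.gamma * grad                  # data-consistency step
            xr = torch.view_as_real(x).permute(0, 3, 1, 2)
            xr = xr - self.denoiser(xr)                # learned residual refinement
            x = torch.view_as_complex(xr.permute(0, 2, 3, 1).contiguous())
        return x

class R2StarHead(nn.Module):
    """Stand-in for E_phi: maps the stack of echo magnitudes to one R2* map."""
    def __init__(self, n_echoes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_echoes, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, mgre_mag):
        return self.net(mgre_mag)
```

Because the data-consistency step reuses the acquisition's forward model, supervision can come from fully-sampled k-space alone, consistent with the self-supervised scheme described above.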

Validation on Simulated Data



Figure 2: Quantitative and visual evaluation of CoRRECT on simulated data corrupted with synthetic motion, sampled using acceleration factor x4. The bottom-left corner of each image provides the SNR and SSIM values with respect to the ground truth. Arrows in the zoomed-in plots highlight brain regions that are well reconstructed using CoRRECT. The R2* maps corresponding to TV, RED, and DU are obtained using the recent LEARN-BIO network. Note the excellent quantitative performance of CoRRECT for mGRE reconstruction and R2* estimation.
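The SNR and SSIM annotations can be reproduced with standard definitions; the SNR convention below (20·log10 of the norm ratio with respect to the ground truth) is a common choice and an assumption here, as is the use of scikit-image for SSIM.

```python
import numpy as np
from skimage.metrics import structural_similarity

def snr_db(reference, estimate):
    """SNR in dB relative to ground truth: 20*log10(||ref|| / ||ref - est||)."""
    err = np.linalg.norm(reference - estimate)
    return 20 * np.log10(np.linalg.norm(reference) / err)

def ssim(reference, estimate):
    """SSIM over the reference's dynamic range."""
    rng = reference.max() - reference.min()
    return structural_similarity(reference, estimate, data_range=rng)
```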

Validation on Experimental Data



Figure 3: Visual evaluation of CoRRECT on experimentally collected data corrupted with real motion, sampled using acceleration factor x4. The mGRE image in the first column (denoted with x1) uses motion-corrupted but fully-sampled k-space data, while the ones in the other columns use motion-corrupted and subsampled k-space data. Note the excellent performance of CoRRECT for producing high-quality mGRE and R2* images. Note also the ability of CoRRECT trained on synthetic motion to address artifacts due to real object motion.


Figure 4: Visual evaluation of CoRRECT on experimentally collected data corrupted with real motion, subsampled using acceleration rates x2, x4, and x8. Arrows in the zoomed-in plots highlight brain regions that are well reconstructed using CoRRECT. Corrupted (x1) uses motion-corrupted but fully-sampled measurements, while ZF+NLLS, TV, RED, DU, and CoRRECT use motion-corrupted and subsampled measurements. Note the improvements due to CoRRECT across different sampling rates.
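The ZF+NLLS baseline starts from a zero-filled reconstruction; a minimal single-coil version, assuming the same centered-FFT convention as the simulation sketch above:

```python
import numpy as np

def zero_filled(y_subsampled):
    """Zero-filled reconstruction: inverse FFT of the masked k-space,
    with unsampled locations simply left at zero."""
    k = np.fft.ifftshift(y_subsampled, axes=(-2, -1))
    return np.abs(np.fft.ifft2(k, axes=(-2, -1)))
```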


Figure 5: Visual evaluation of CoRRECT on experimental data corrupted with real motion, sampled using acceleration rate x4. The first row shows several slices of reconstructed mGRE images from the whole brain volume of 72 slices, while the second row shows the corresponding estimated R2* maps. In each column of the first row, the images to the left of the dashed line are the mGRE images reconstructed from the fully-sampled, noisy, and motion-corrupted measurements, while the images to the right are the result of the CoRRECT reconstruction from subsampled, noisy, and motion-corrupted measurements. In each column of the second row, the R2* maps to the left of the dashed line are estimated using NLLS on the mGRE images in the first row, while those to the right are produced by CoRRECT. Arrows in the plots highlight brain regions that are well reconstructed using CoRRECT. Note how CoRRECT can remove artifacts across the whole brain volume.
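The NLLS estimation referenced here fits a decay model voxel by voxel. A minimal sketch with SciPy, assuming magnitude echo images and known echo times; the initial guess and the plain S0·exp(−R2*·t) model are simplifying assumptions (the paper additionally accounts for macroscopic field inhomogeneities).

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_r2star(mag_echoes, echo_times):
    """Voxelwise NLLS fit of S(t) = S0 * exp(-R2* * t) to echo magnitudes.
    mag_echoes: (n_echoes, H, W) array; returns an (H, W) R2* map in 1/s."""
    _, h, w = mag_echoes.shape
    r2s = np.zeros((h, w))
    model = lambda t, s0, r2: s0 * np.exp(-r2 * t)
    for i in range(h):
        for j in range(w):
            s = mag_echoes[:, i, j]
            if s[0] <= 0:
                continue  # skip background voxels
            try:
                (s0, r2), _ = curve_fit(model, echo_times, s, p0=(s[0], 20.0))
                r2s[i, j] = r2
            except RuntimeError:
                pass  # leave voxels where the fit fails at zero
    return r2s
```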

Paper


Bibtex


@article{xu2022correct,
  title={CoRRECT: A Deep Unfolding Framework for Motion-Corrected Quantitative R2* Mapping},
  author={Xu, Xiaojian and Gan, Weijie and Kothapalli, Satya VVN and Yablonskiy, Dmitriy A and Kamilov, Ulugbek S},
  journal={arXiv:2210.06330},
  year={2022}
}