Understanding PnP-ADMM under mismatched denoisers
1 Washington University in St. Louis, MO, USA   2 University of California, Riverside, CA, USA
Figure 1: Illustration of domain adaptation in PnP-ADMM. The mismatched denoiser is pre-trained on the source distribution (BreCaHAD) and adapted to the target distribution (MetFaces) using a few samples. The adapted prior is then plugged into the PnP-ADMM algorithm to reconstruct a sample from MetFaces.
Figure 2: An illustration of the influence of domain adaptation on PnP-ADMM with mismatched priors.
Plug-and-Play (PnP) priors are a widely used family of methods for solving imaging inverse problems by integrating physical measurement models with image priors specified through image denoisers. PnP methods have been shown to achieve state-of-the-art performance when the prior is obtained using powerful deep denoisers. Despite extensive work on PnP, the mismatch between the distributions of the training and testing data has often been overlooked in the literature. This paper presents new theoretical and numerical results on prior distribution mismatch and domain adaptation for the alternating direction method of multipliers (ADMM) variant of PnP. Our theoretical result provides an explicit error bound on PnP-ADMM due to the mismatch between the desired denoiser and the one used for inference. Our analysis contributes to the work in the area by considering the mismatch under nonconvex data-fidelity terms and expansive denoisers. Our first set of numerical results quantifies the impact of the prior distribution mismatch on the performance of PnP-ADMM for image super-resolution. Our second set of numerical results considers a simple and effective domain adaptation strategy that closes the performance gap due to the use of mismatched denoisers. Our results suggest that PnP-ADMM is relatively robust to prior distribution mismatch, while also showing that the performance gap can be significantly reduced with only a few training samples from the desired distribution.
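For readers unfamiliar with the algorithm, the following is a minimal sketch of the standard PnP-ADMM iterations; the notation (data-fidelity term $g$, penalty parameter $\gamma$, denoiser $\mathsf{D}_{\sigma}$) is assumed here for illustration and is not defined in this section.
\[
\begin{aligned}
x^{k+1} &= \operatorname{prox}_{\gamma g}\bigl(z^{k} - s^{k}\bigr) && \text{(proximal step enforcing the measurement model)}\\
z^{k+1} &= \mathsf{D}_{\sigma}\bigl(x^{k+1} + s^{k}\bigr) && \text{(denoiser replacing the proximal operator of the prior)}\\
s^{k+1} &= s^{k} + x^{k+1} - z^{k+1} && \text{(dual-variable update)}
\end{aligned}
\]
In this view, a mismatched prior corresponds to running the same iterations with a denoiser trained on a source distribution different from the target one; the error bound in our theoretical result quantifies how this substitution propagates through the iterations.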
Figure 3: Visual evaluation of PnP-ADMM on image super-resolution using denoisers trained on several datasets. Note how the disparities in the training distributions of the denoisers directly influence the performance of PnP. Denoisers trained on images most similar to MetFaces offer the best performance.
Figure 4: Visual evaluation of several priors on the image super-resolution task, reported in terms of PSNR (dB) and SSIM for an image from RxRx1. Note the influence of mismatched priors on the performance of PnP.
Figure 5: Illustration of domain adaptation in PnP-ADMM. The mismatched denoiser is pre-trained on the source distribution (BreCaHAD) and adapted to the target distribution (MetFaces) using a few samples. The adapted prior is then plugged into the PnP-ADMM algorithm to reconstruct a sample from MetFaces.
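A minimal sketch of the adaptation step described in the caption: a denoiser pre-trained on the source domain is fine-tuned on a small number of clean target-domain images using a standard denoising (AWGN) objective, and the resulting network is then used as the prior in PnP-ADMM. The function name, optimizer, noise level, and training schedule below are illustrative assumptions, not the exact procedure used in the paper.

```python
# Sketch (PyTorch) of adapting a source-trained denoiser to a few target-domain images.
# All hyperparameters here are placeholders, not the paper's settings.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def adapt_denoiser(pretrained_denoiser: nn.Module,
                   target_images: torch.Tensor,
                   noise_sigma: float = 0.1,
                   epochs: int = 100,
                   lr: float = 1e-5) -> nn.Module:
    """Fine-tune a pre-trained denoiser on a few clean target-domain images."""
    denoiser = pretrained_denoiser
    optimizer = torch.optim.Adam(denoiser.parameters(), lr=lr)
    loader = DataLoader(TensorDataset(target_images), batch_size=4, shuffle=True)
    denoiser.train()
    for _ in range(epochs):
        for (clean,) in loader:
            # Synthesize noisy/clean pairs with additive white Gaussian noise.
            noisy = clean + noise_sigma * torch.randn_like(clean)
            # Standard supervised denoising loss on the small target set.
            loss = nn.functional.mse_loss(denoiser(noisy), clean)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    denoiser.eval()
    return denoiser
```

In the experiments summarized in Figures 6 and 7, this kind of adaptation is performed with progressively larger subsets of target images, which is what drives the improvement over the mismatched prior.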
Figure 6: Visual comparison on super-resolution with target (MetFaces), mismatched (BreCaHAD), and adapted priors on two MetFaces test images. Note how the recovery performance improves as the mismatched prior is adapted to a larger set of images from the target distribution.
Figure 7: Visual comparison of image super-resolution with target (RxRx1), mismatched (CelebA), and adapted priors on a test image from RxRx1. Note how the recovery performance improves as the mismatched prior is adapted to a larger set of images from the target distribution.