FLAIR: A Conditional Diffusion Framework with Applications to Face Video Restoration

Zihao Zou*, Jiaming Liu*, Shirin Shoushtari, Yubo Wang, Weijie Gan, Ulugbek S. Kamilov

Computational Imaging Group (CIG), Washington University in St. Louis, St. Louis, MO, USA
*Equal contribution




Abstract

Face video restoration (FVR) is a challenging but important problem where one seeks to recover perceptually realistic face videos from low-quality inputs. While diffusion probabilistic models (DPMs) have been shown to achieve remarkable performance for face image restoration, they often fail to produce temporally coherent, high-quality videos, compromising the fidelity of the reconstructed faces. We present FLAIR, a new conditional diffusion framework for FVR. FLAIR ensures temporal consistency across frames in a computationally efficient fashion by converting a traditional image DPM into a video DPM. The conversion inserts a recurrent video refinement layer and temporal self-attention at multiple scales. FLAIR also uses a conditional iterative refinement process to balance perceptual and distortion quality during inference. This process consists of two key components: a data-consistency module that analytically ensures the generated video precisely matches its degraded observation, and a coarse-to-fine image enhancement module tailored to facial regions. Our extensive experiments show the superiority of FLAIR over the current state-of-the-art (SOTA) for video super-resolution, deblurring, JPEG restoration, and space-time frame interpolation on two high-quality face video datasets.
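One standard way to realize the data-consistency module mentioned above is a range-space replacement step (in the spirit of DDNM-style samplers): the part of the current estimate observable through the degradation operator is swapped for the observation, while the generated detail in the null space is kept. The snippet below is a minimal PyTorch sketch under the assumption of a known linear degradation operator A with pseudo-inverse A_pinv; the names data_consistency, A, and A_pinv are illustrative and not FLAIR's released code.

import torch
import torch.nn.functional as F

def data_consistency(x_gen, y, A, A_pinv):
    """Enforce A(x_dc) == y while keeping the generated detail of x_gen.

    Hypothetical helper (not FLAIR's released code): applies the classic
    range-space replacement x_dc = x_gen + A^+(y - A(x_gen)), which matches
    the observation exactly whenever A(A^+(r)) == r for the operator pair.
    """
    return x_gen + A_pinv(y - A(x_gen))

if __name__ == "__main__":
    # Toy degradation: 4x average pooling; nearest-neighbor upsampling acts as
    # a valid pseudo-inverse here because averaging a constant 4x4 block
    # recovers its value.
    scale = 4
    A = lambda x: F.avg_pool2d(x, scale)
    A_pinv = lambda r: F.interpolate(r, scale_factor=scale, mode="nearest")

    x_gen = torch.randn(2, 3, 64, 64)   # current diffusion estimate (two frames)
    y = A(torch.randn(2, 3, 64, 64))    # degraded observation of the true video
    x_dc = data_consistency(x_gen, y, A, A_pinv)

    # The corrected estimate now reproduces the observation exactly.
    print(torch.allclose(A(x_dc), y, atol=1e-5))  # True

In FLAIR, this kind of analytic correction is combined with a learned coarse-to-fine face enhancement module during the iterative refinement; the sketch only illustrates the consistency step for a linear degradation.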




Introduction




Results

Qualitative video results: JPEG restoration, ×4 Gaussian blur, spatial-temporal SR, ×4 motion blur, ×8 bicubic SR, and web video restoration.





Visual comparisons with CodeFormer, RestoreFormer++, DDNM, and VRT.




Paper




@article{zou2023flair,
  title={FLAIR: A Conditional Diffusion Framework with Applications to Face Video Restoration},
  author={Zihao Zou and Jiaming Liu and Shirin Shoushtari and Yubo Wang and Weijie Gan and Ulugbek S. Kamilov},
  journal={arXiv preprint arXiv:2311.15445},
  year={2023}
}