FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring

🌟 Enhance video quality with FMA-Net! 📹✨ This AI tool offers cutting-edge joint video super-resolution and deblurring using flow-guided dynamic filtering and multi-attention features. 🚀🔍 Elevate your video content effortlessly! #AI #VideoEnhancement #FMANet

  • FMA-Net focuses on joint video super-resolution and deblurring using flow-guided dynamic filtering (FGDF) and iterative feature refinement with multi-attention (FRMA).
  • FGDF in FMA-Net enables precise estimation of spatio-temporally-variant degradation and restoration kernels by making the filtering aware of learned motion trajectories, which allows large motions to be handled effectively with small-sized kernels (a minimal illustrative sketch follows this list).
  • FRMA blocks in FMA-Net refine features in a coarse-to-fine manner through iterative updates and are trained with a novel temporal anchor (TA) loss (a hedged refinement sketch follows this list).
  • Extensive experiments show the superiority of FMA-Net over existing methods in terms of quantitative and qualitative performance for joint video super-resolution and deblurring.
  • The FMA-Net architecture for joint video super-resolution and deblurring is built from stacked FRMA blocks combined with flow-guided dynamic filtering.
  • FMA-Net has been retrained on the REDS training dataset for video super-resolution and deblurring tasks.
  • The work on FMA-Net was supported by a grant from the Korea government for developing high-quality conversion technology for low-quality media.
  • FMA-Net authors express gratitude to the authors of Nerfies for open-sourcing the website template.
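
As a rough illustration of the FGDF idea above, the sketch below applies per-pixel dynamic filter weights whose sampling taps are shifted along an optical-flow trajectory instead of a fixed local grid. This is a minimal PyTorch sketch under assumed tensor layouts; the function name `flow_guided_dynamic_filter`, the single shared flow field, and the pre-normalized `kernels` input are illustrative assumptions, not the authors' implementation (which learns multiple flow-mask pairs).

```python
import torch
import torch.nn.functional as F

def flow_guided_dynamic_filter(feat, kernels, flow, k=3):
    """Illustrative sketch of flow-guided dynamic filtering (FGDF).

    feat:    (B, C, H, W) feature/image to be filtered
    kernels: (B, k*k, H, W) per-pixel dynamic filter weights (assumed normalized)
    flow:    (B, 2, H, W) flow in pixels; taps follow this motion trajectory
             instead of a fixed local window (simplified: one flow for all taps)
    """
    B, C, H, W = feat.shape
    # Base sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=feat.device),
        torch.linspace(-1, 1, W, device=feat.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1)  # (H, W, 2)

    out = torch.zeros_like(feat)
    r = k // 2
    idx = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            # Offset each tap by its window position plus the local flow vector.
            off_x = (dx + flow[:, 0]) * 2.0 / max(W - 1, 1)
            off_y = (dy + flow[:, 1]) * 2.0 / max(H - 1, 1)
            grid = base.unsqueeze(0) + torch.stack((off_x, off_y), dim=-1)
            tap = F.grid_sample(feat, grid, align_corners=True, padding_mode="border")
            # Weight the sampled tap by the corresponding per-pixel kernel entry.
            out = out + tap * kernels[:, idx:idx + 1]
            idx += 1
    return out
```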
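
And here is a hedged sketch of the coarse-to-fine iterative refinement pattern behind the FRMA blocks: a shared block repeatedly updates a feature from its previous estimate. The class name `IterativeRefiner` and the plain residual conv update are placeholders for illustration; the actual FRMA block uses multi-attention and also refines flow-mask pairs, and the TA loss supervises training rather than appearing in the forward pass.

```python
import torch
import torch.nn as nn

class IterativeRefiner(nn.Module):
    """Hedged sketch of coarse-to-fine iterative feature refinement."""

    def __init__(self, channels: int = 64, num_iters: int = 4):
        super().__init__()
        self.num_iters = num_iters
        # Stand-in refinement block: predicts a residual update from the
        # initial feature concatenated with the current estimate.
        self.refine = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        current = feat
        for _ in range(self.num_iters):
            # Each iteration progressively refines the estimate (coarse-to-fine).
            update = self.refine(torch.cat([feat, current], dim=1))
            current = current + update
        return current
```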