---
license: apache-2.0
library_name: MoG
---

# MoG: Motion-Aware Generative Frame Interpolation

MoG is a generative video frame interpolation (VFI) model that synthesizes intermediate frames between two input frames.
MoG is the first generative VFI model to explicitly incorporate motion guidance between the input frames, enhancing the motion awareness of the generative backbone. We show that the intermediate flow derived from flow-based VFI methods can effectively serve as motion guidance, and we propose a simple yet efficient way to integrate this prior into the network. As a result, MoG achieves significant improvements over existing open-source generative VFI methods, excelling in both real-world and animation scenarios.
Source code is available at https://github.com/MCG-NJU/MoG-VFI.
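
To make the idea concrete, below is a minimal PyTorch sketch of flow-guided warping, not the actual MoG implementation: the zero flows stand in for the intermediate flows that a flow-based VFI method would estimate, and `backward_warp` is an illustrative helper showing how such flows can turn the two inputs into a motion-aligned guidance frame for a generative backbone to condition on.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a frame (B, C, H, W) with a dense flow field (B, 2, H, W)
    via bilinear grid sampling."""
    b, _, h, w = frame.shape
    # Build a pixel-coordinate grid and displace it by the flow.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype),
        torch.arange(w, dtype=frame.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0) + flow.permute(0, 2, 3, 1)
    # Normalize pixel coordinates to [-1, 1] as grid_sample expects.
    grid[..., 0] = 2.0 * grid[..., 0] / (w - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(frame, grid, align_corners=True)

frame0 = torch.rand(1, 3, 256, 256)  # first input frame
frame1 = torch.rand(1, 3, 256, 256)  # second input frame

# Stand-ins for the intermediate flows (from the virtual middle frame
# toward each input) that a flow-based VFI method would predict.
flow_t0 = torch.zeros(1, 2, 256, 256)
flow_t1 = torch.zeros(1, 2, 256, 256)

# Warping both inputs with the intermediate flows yields a coarse,
# motion-aligned estimate of the middle frame; a generative model can
# use such a signal as explicit motion guidance.
guidance = 0.5 * (backward_warp(frame0, flow_t0) + backward_warp(frame1, flow_t1))
```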
## Network Architecture

## Model Description
- Developed by: Nanjing University, Tencent PCG
- Model type: Generative video frame interpolation model; takes two still video frames as input.
- arXiv paper: https://arxiv.org/pdf/2501.03699
- Project page: https://mcg-nju.github.io/MoG_Web/
- Repository: https://github.com/MCG-NJU/MoG-VFI
- License: Apache-2.0
## Usage
We build MoG on DynamiCrafter for real-world scenes and on ToonCrafter for animation scenes. Both checkpoints are available, and you can select the desired one by specifying the model parameter, as sketched below. Feel free to use MoG under the Apache-2.0 license.
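
As a rough illustration of that selection, here is a small Python sketch; the checkpoint filenames and the `select_checkpoint` helper are hypothetical, so consult the repository README for the actual inference entry point and parameter names.

```python
# Hypothetical mapping from the model parameter to a checkpoint file.
# The two options mirror the released checkpoints; the filenames below
# are illustrative assumptions, not the repo's actual artifact names.
CHECKPOINTS = {
    "real": "mog_dynamicrafter.ckpt",  # DynamiCrafter-based, real-world scenes
    "ani": "mog_tooncrafter.ckpt",     # ToonCrafter-based, animation scenes
}

def select_checkpoint(model: str) -> str:
    """Resolve the model parameter to a checkpoint file name."""
    try:
        return CHECKPOINTS[model]
    except KeyError:
        raise ValueError(f"unknown model {model!r}; choose 'real' or 'ani'")

print(select_checkpoint("real"))  # -> mog_dynamicrafter.ckpt
```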
## Citation
If you find our code useful or our work relevant, please consider citing:
```bibtex
@misc{zhang2024vfimambavideoframeinterpolation,
      title={VFIMamba: Video Frame Interpolation with State Space Models},
      author={Guozhen Zhang and Chunxu Liu and Yutao Cui and Xiaotong Zhao and Kai Ma and Limin Wang},
      year={2024},
      eprint={2407.02315},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.02315},
}
```