Pyramid Attention for Image Restoration
This repository is for PANet and PA-EDSR introduced in the following paper
Yiqun Mei, Yuchen Fan, Yulun Zhang, Jiahui Yu, Yuqian Zhou, Ding Liu, Yun Fu, Thomas S. Huang, and Honghui Shi, "Pyramid Attention for Image Restoration", [arXiv]
The code is built on EDSR (PyTorch) & RNAN and tested on Ubuntu 18.04 (Python 3.6, PyTorch 1.1) with Titan X/1080Ti/V100 GPUs.
Contents
Train
Prepare training data
Download DIV2K training data (800 training + 100 validation images) from the DIV2K dataset or SNU_CVLab.
Specify '--dir_data' in option.py according to the path of your HR and LR images.
Organize training data like:
DIV2K/
├── DIV2K_train_HR
├── DIV2K_train_LR_bicubic
│   ├── X10
│   ├── X30
│   ├── X50
│   └── X70
├── DIV2K_valid_HR
└── DIV2K_valid_LR_bicubic
    ├── X10
    ├── X30
    ├── X50
    └── X70
For more information, please refer to EDSR (PyTorch).
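As a sanity check before training, a minimal Python sketch like the one below (not part of the repository; the 'DIV2K' root path is an assumption and should match '--dir_data') can verify that the expected folders exist:

import os

root = 'DIV2K'  # should sit under the directory passed via '--dir_data'
expected = ['DIV2K_train_HR', 'DIV2K_train_LR_bicubic',
            'DIV2K_valid_HR', 'DIV2K_valid_LR_bicubic']
for name in expected:
    path = os.path.join(root, name)
    # report each required subfolder so a misplaced dataset is caught early
    print(path, 'ok' if os.path.isdir(path) else 'MISSING')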
Begin to train
(optional) All the pretrained models and visual results can be downloaded from Google Drive.
cd to 'PANet-PyTorch/[Task]/code' and run the following scripts to train models.
You can use the scripts in 'demo.sb' to train and test the models from our paper.
# Example usage (noise level N=50):
python main.py --n_GPUs 1 --lr 1e-4 --batch_size 16 --n_resblocks 80 --save_models --epoch 1000 --decay 200-400-600-800 --model PANET --scale 50 --patch_size 48 --reset --save PANET_N50 --n_feats 64 --data_train DIV2K --chop
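For the denoising task, '--scale' appears to carry the noise level N (here 50, matching the X50 folder and the PANET_N50 save name). Below is a minimal sketch of synthesizing such a noisy input, assuming additive white Gaussian noise at standard deviation N; this is an illustration, not the repository's data pipeline:

import torch

def add_gaussian_noise(clean, sigma=50.0):
    # clean: image tensor in [0, 255]; returns a noisy copy at level sigma
    noise = torch.randn_like(clean) * sigma
    return (clean + noise).clamp(0.0, 255.0)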
Test
Quick start
cd to 'PANet-PyTorch/[Task]/code' and run the following scripts.
You can use the scripts in 'demo.sb' to reproduce the results in our paper.
# No self-ensemble; use different test sets to reproduce the results in the paper.
# Example usage:
python main.py --model PANET --n_resblocks 80 --n_feats 64 --data_test Urban100 --scale 10 --save_results --chop --test_only --pre_train ../path_to_model
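The '--chop' flag enables memory-bounded, patch-wise forwarding for large inputs. A conceptual sketch of the idea follows, simplified to non-overlapping tiles (the actual EDSR-style chop recurses over overlapping quadrants; the names here are illustrative, not the repository's implementation):

import torch

@torch.no_grad()
def tiled_forward(model, x, tile=128):
    # x: (B, C, H, W); denoising output has the same spatial size as the input
    out = torch.zeros_like(x)
    _, _, h, w = x.shape
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            patch = x[:, :, top:top + tile, left:left + tile]
            out[:, :, top:top + tile, left:left + tile] = model(patch)
    return out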
The whole test pipeline
- Prepare test data. Organize test data like:
benchmark/
├── testset1
│   ├── HR
│   └── LR_bicubic
│       ├── X10
│       └── ..
└── testset2
Conduct image denoising.
See Quick start
Evaluate the results.
Run 'Evaluate_PSNR_SSIM.m' to obtain the PSNR/SSIM values reported in the paper.
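The MATLAB script is the authoritative evaluation; as a quick cross-check, PSNR can also be computed in Python as sketched below (note that SSIM and any border or Y-channel handling in the paper's protocol may differ):

import numpy as np

def psnr(ref, out, max_val=255.0):
    # ref, out: uint8 or float arrays of the same shape
    mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)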
Citation
If you find the code helpful in your research or work, please cite the following papers.
@article{mei2020pyramid,
  title={Pyramid Attention Networks for Image Restoration},
  author={Mei, Yiqun and Fan, Yuchen and Zhang, Yulun and Yu, Jiahui and Zhou, Yuqian and Liu, Ding and Fu, Yun and Huang, Thomas S and Shi, Honghui},
  journal={arXiv preprint arXiv:2004.13824},
  year={2020}
}

@InProceedings{Lim_2017_CVPR_Workshops,
  author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
  title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month = {July},
  year = {2017}
}
Acknowledgements
This code is built on EDSR (PyTorch), RNAN, and generative-inpainting-pytorch. We thank the authors for sharing their code.