---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
---

[Link to GitHub Release](https://github.com/Phhofm/models/releases/tag/2xHFA2kShallowESRGAN)

# 2xHFA2kShallowESRGAN

Name: 2xHFA2kShallowESRGAN

Author: Philip Hofmann

Release Date: 04.01.2024

License: CC BY 4.0

Network: Shallow ESRGAN (6 blocks; see the architecture sketch below)

Scale: 2

Purpose: 2x anime upscaler

Iterations: 180,000

Epoch: 167

Batch size: 12

HR size: 128

Dataset: hfa2k

Number of train images: 2568

OTF Training: Yes

Pretrained Model (G): None
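
The "Shallow ESRGAN" architecture is assumed here to be the standard ESRGAN (RRDBNet) generator with the block count reduced from 23 to 6. As a rough illustration only, such a network could be instantiated with basicsr as below; num_feat and num_grow_ch are the usual ESRGAN defaults and are not confirmed by this card.

```python
# Sketch only: assumes a standard RRDBNet with the block count reduced to 6.
# num_feat / num_grow_ch are the common ESRGAN defaults, not values taken
# from this model card.
from basicsr.archs.rrdbnet_arch import RRDBNet

net_g = RRDBNet(
    num_in_ch=3,
    num_out_ch=3,
    num_feat=64,     # assumed default
    num_block=6,     # "6 blocks" from the card
    num_grow_ch=32,  # assumed default
    scale=2,         # 2x upscaler
)
```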

Description:

A 2x Shallow ESRGAN version of the HFA2kCompact model.

This model should be usable with [FAST_Anime_VSR](https://github.com/Kiteretsu77/FAST_Anime_VSR) for fast TensorRT inference, as should my [2xHFA2kReal-CUGAN](https://drive.google.com/file/d/1wqlK-rQjPGKJ5pNoVgnK9gcNF1tA8EjV/view?usp=drive_link) model.
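
For quick local testing outside of FAST_Anime_VSR, a minimal PyTorch inference sketch is shown below. It assumes the released checkpoint is an ESRGAN-style state dict that [spandrel](https://github.com/chaiNNer-org/spandrel) can auto-detect; the file and image names are placeholders, not part of this release.

```python
# Minimal inference sketch (assumption: spandrel auto-detects this ESRGAN-style
# checkpoint; "2xHFA2kShallowESRGAN.pth", "input.png" and "output.png" are placeholders).
import numpy as np
import torch
from PIL import Image
from spandrel import ModelLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the checkpoint; spandrel wraps the detected architecture in a descriptor.
descriptor = ModelLoader().load_from_file("2xHFA2kShallowESRGAN.pth")
model = descriptor.model.to(device).eval()

# Read the low-resolution image as a float tensor in [0, 1], shape (1, 3, H, W).
img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float32) / 255.0
lr = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).to(device)

# Upscale 2x and convert back to an 8-bit image.
with torch.no_grad():
    sr = model(lr).clamp(0, 1)
out = (sr.squeeze(0).permute(1, 2, 0).cpu().numpy() * 255.0).round().astype(np.uint8)
Image.fromarray(out).save("output.png")
```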

Slow Pics examples:

[Example 1](https://slow.pics/c/RZj6GMwS)

[Example 2](https://slow.pics/c/Q3DHaU45)

[Ludvae1](https://slow.pics/c/fJi4IphY)

[Ludvae2](https://slow.pics/c/iIhgHokD)

![Example1](https://github.com/Phhofm/models/assets/14755670/367a6b77-a31a-4784-8a09-aca23596fc9d)

![Example2](https://github.com/Phhofm/models/assets/14755670/4c8a688a-8689-421c-a995-847d4de78e3f)

![Example3](https://github.com/Phhofm/models/assets/14755670/c0981f1c-6650-4604-9cc7-1869bfd8a91d)

![Example4](https://github.com/Phhofm/models/assets/14755670/9d14cdb4-829d-4fad-9887-7ff9780ea200)