Update README.md
README.md CHANGED
@@ -26,7 +26,7 @@ In Summary:
 
 Advantage of this dataset is its large quantity of normalized (512x512px) training tiles
 - When applying degradations to create a corresponding LR, the distribution of degradation strengths should be sufficient, even when using multiple degradations.
-- Big arch options in general can profit from the amount of learning content in this dataset (big transformers like [DRCT-L](https://github.com/ming053l/DRCT), [HMA](https://github.com/korouuuuu/HMA), [HAT-L](https://github.com/XPixelGroup/HAT), [HATFIR](https://github.com/Zdafeng/SwinFIR), [ATD](https://github.com/LabShuHangGU/Adaptive-Token-Dictionary), [CFAT](https://github.com/rayabhisek123/CFAT), [RGT](https://github.com/zhengchen1999/RGT), [DAT2](https://github.com/zhengchen1999/dat). Probably also diffusion based upscalers like [osediff](https://github.com/cswry/osediff), [s3diff](https://github.com/arctichare105/s3diff), [SRDiff](https://github.com/LeiaLi/SRDiff),[resshift](https://github.com/zsyoaoa/resshift), [sinsr](https://github.com/wyf0912/sinsr), [cdformer](https://github.com/i2-multimedia-lab/cdformer)). Since it takes a while to reach a new epoch, higher training iters is advised for the big arch options to profit from the full content. The filtering method used here made sure that metrics should not worsen during training (for example due to blockiness filtering).
+- Big arch options in general can profit from the amount of learning content in this dataset (big transformers like [DRCT-L](https://github.com/ming053l/DRCT), [HMA](https://github.com/korouuuuu/HMA), [HAT-L](https://github.com/XPixelGroup/HAT), [HATFIR](https://github.com/Zdafeng/SwinFIR), [ATD](https://github.com/LabShuHangGU/Adaptive-Token-Dictionary), [CFAT](https://github.com/rayabhisek123/CFAT), [RGT](https://github.com/zhengchen1999/RGT), [DAT2](https://github.com/zhengchen1999/dat). Probably also diffusion based upscalers like [osediff](https://github.com/cswry/osediff), [s3diff](https://github.com/arctichare105/s3diff), [SRDiff](https://github.com/LeiaLi/SRDiff), [resshift](https://github.com/zsyoaoa/resshift), [sinsr](https://github.com/wyf0912/sinsr), [cdformer](https://github.com/i2-multimedia-lab/cdformer)). Since it takes a while to reach a new epoch, higher training iters is advised for the big arch options to profit from the full content. The filtering method used here made sure that metrics should not worsen during training (for example due to blockiness filtering).
 - This dataset could still be distilled more to reach higher quality, if for example another promising filtering method is used in the future on this dataset
 
 ## Used Datasets
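
For the degradation bullet above (a sufficient distribution of degradation strengths when generating the LR counterparts), a minimal sketch of what that sampling could look like is shown below, assuming OpenCV and NumPy. The specific degradations and parameter ranges are illustrative assumptions, not the pipeline actually used for this dataset.

```python
# Sketch only: one possible way to derive an LR tile from an HR tile while
# sampling each degradation strength from a wide range, so the combined
# degradations cover mild to strong cases. Ranges are illustrative assumptions.
import cv2
import numpy as np

def make_lr(hr, scale=4, rng=None):
    rng = rng or np.random.default_rng()
    img = hr.astype(np.float32)

    # Blur: sigma sampled over a wide range (small values effectively disable it).
    sigma = rng.uniform(0.0, 3.0)
    if sigma > 0.1:
        img = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)

    # Downscale with a randomly chosen interpolation kernel.
    interp = int(rng.choice([cv2.INTER_AREA, cv2.INTER_LINEAR, cv2.INTER_CUBIC]))
    h, w = img.shape[:2]
    img = cv2.resize(img, (w // scale, h // scale), interpolation=interp)

    # Additive Gaussian noise with a sampled strength.
    noise_std = rng.uniform(0.0, 15.0)
    img = img + rng.normal(0.0, noise_std, img.shape).astype(np.float32)
    img = np.clip(img, 0, 255).astype(np.uint8)

    # JPEG compression with a sampled quality level.
    quality = int(rng.integers(40, 96))
    ok, buf = cv2.imencode(".jpg", img, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    if ok:
        img = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    return img
```

Re-sampling the strengths for every 512x512 tile (e.g. `lr = make_lr(hr_tile)`) keeps the overall degradation distribution broad even when several degradations are stacked.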
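On the note that it takes a while to reach a new epoch: a quick back-of-the-envelope calculation illustrates why big arch options need a high iteration count to see the full content. The tile count and batch size below are placeholders, not figures from this dataset.

```python
# Rough arithmetic only: iterations needed for one pass over the dataset.
# Both numbers are hypothetical placeholders, not values from this README.
num_tiles = 100_000   # assumed number of 512x512 HR tiles
batch_size = 8        # assumed per-GPU batch size

iters_per_epoch = num_tiles // batch_size
print(iters_per_epoch)  # 12500 -> a 100k-iteration schedule sees the data ~8 times
```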