Commit 84dfe08 (1 parent: 2e2550f)
nielsr (HF staff) committed: Improve model card

Hi,

Niels here from the community science team at HF. This PR improves the model card by
- adding a pipeline tag, so that people can find the model at https://huggingface.co/models?pipeline_tag=image-feature-extraction
- adding a "Usage" section on how to use the models
- linking to the paper: https://huggingface.co/papers/2409.04410.

Cheers!

Files changed (1)
  1. README.md +11 -1
README.md CHANGED
@@ -2,10 +2,16 @@
  license: apache-2.0
  language:
  - en
+ pipeline_tag: image-feature-extraction
  ---
+
  ## Open-MAGVIT2: Democratizing Autoregressive Visual Generation

- [[Project Page]](https://github.com/TencentARC/Open-MAGVIT2)
+ Code: https://github.com/TencentARC/Open-MAGVIT2
+
+ Paper: https://huggingface.co/papers/2409.04410
+
+ ## Introduction

  Until now, VQGAN, the original tokenizer, still plays an indispensable role in mainstream tasks, especially autoregressive visual generation. Limited by the bottleneck of codebook size and code utilization, the capability of AR generation with VQGAN has been underestimated.

@@ -18,3 +24,7 @@ ImageNet 128 × 128:

  ImageNet 256 × 256:
  - Model [ImageNet_256_Base.ckpt](https://huggingface.co/TencentARC/Open-MAGVIT2/blob/main/imagenet_256_B.ckpt)
+
+ ## Usage
+
+ Refer to the GitHub repository, which includes [scripts](https://github.com/TencentARC/Open-MAGVIT2/tree/main/scripts) for training, evaluation and inference.
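
As a companion to the new "Usage" section, here is a minimal, unofficial sketch of pulling the released 256 × 256 checkpoint from the Hub with `huggingface_hub` and inspecting it with PyTorch. The model class and configs live in the GitHub repository, so only the raw weights are touched here; the `state_dict` key is an assumption based on typical PyTorch Lightning checkpoints, not something stated in this model card.

```python
# Unofficial sketch: download the ImageNet 256x256 checkpoint from the Hub and
# inspect its weights. Instantiating the tokenizer itself requires the model
# code from https://github.com/TencentARC/Open-MAGVIT2.
from huggingface_hub import hf_hub_download
import torch

ckpt_path = hf_hub_download(
    repo_id="TencentARC/Open-MAGVIT2",
    filename="imagenet_256_B.ckpt",
)

ckpt = torch.load(ckpt_path, map_location="cpu")
# Lightning-style checkpoints usually nest the weights under "state_dict";
# fall back to the raw object otherwise (an assumption, not verified here).
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"Loaded {len(state_dict)} entries from {ckpt_path}")
```

For actual encoding, decoding, or sampling, the training, evaluation and inference scripts linked above remain the reference entry points.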