bytetriper committed
Commit a6e101e
1 Parent(s): d798e90

Update README.md

Files changed (1)
  1. README.md +57 -3

README.md CHANGED

---
license: apache-2.0
language:
- en
pipeline_tag: image-to-image
---
# Model Card for VIT-MAE-r

VIT-MAE-r is a fine-tuned version of MAE for image reconstruction. We release a version fine-tuned from [MAE-Large](https://huggingface.co/facebook/vit-mae-large).

## Model Details

VIT-MAE-r has already been converted to the Hugging Face format and can be loaded directly with `from_pretrained`.

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper:** [LM4LV: A Frozen Large Language Model for Low-level Vision Tasks](https://arxiv.org/abs/2405.15734v1)
- **Source model:** [MAE-Large](https://huggingface.co/facebook/vit-mae-large)

## How to Get Started with the Model

Use the code below to load the model:

```python
from transformers import AutoImageProcessor, AutoModelForPreTraining

# Load the image processor and the fine-tuned MAE model from the Hub
processor = AutoImageProcessor.from_pretrained("bytetriper/vit-mae-r")
model = AutoModelForPreTraining.from_pretrained("bytetriper/vit-mae-r")
```
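
Since the checkpoint is fine-tuned for reconstruction, a full forward pass can map an input image back to pixels. The following is a minimal, hedged sketch of end-to-end usage: the `mask_ratio = 0` setting (so no patches are hidden) and the `unpatchify` call follow the standard `transformers` ViT-MAE API, but the exact inference settings used by the authors are not documented here, and `input.png` is a hypothetical file.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForPreTraining

processor = AutoImageProcessor.from_pretrained("bytetriper/vit-mae-r")
model = AutoModelForPreTraining.from_pretrained("bytetriper/vit-mae-r")
model.eval()

# Assumption: for pure reconstruction (rather than masked pretraining),
# disable random masking so the model sees every patch.
model.config.mask_ratio = 0.0

image = Image.open("input.png").convert("RGB")  # hypothetical input file
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# `logits` holds per-patch pixel predictions; `unpatchify` folds them back
# into an image tensor of shape (batch, channels, height, width).
reconstruction = model.unpatchify(outputs.logits)
```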

## Evaluation

This model achieves an rFID of 1.24 on the ImageNet validation set, evaluated with the standard TensorFlow evaluation suite provided by [Guided-Diffusion](https://github.com/openai/guided-diffusion/tree/main/evaluations).
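
Reproducing this kind of number means collecting reconstructions of the validation images into the `.npz` batch format that the Guided-Diffusion evaluator consumes. Below is a minimal sketch under stated assumptions: the directory layout, the `mask_ratio = 0` setting, and the de-normalization step are illustrative guesses rather than the authors' documented protocol.

```python
import numpy as np
import torch
from pathlib import Path
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForPreTraining

val_dir = Path("imagenet/val")  # hypothetical path to ImageNet val images
processor = AutoImageProcessor.from_pretrained("bytetriper/vit-mae-r")
model = AutoModelForPreTraining.from_pretrained("bytetriper/vit-mae-r").eval()
model.config.mask_ratio = 0.0  # assumption: reconstruct without masking

samples = []
for path in sorted(val_dir.glob("*.JPEG")):
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        recon = model.unpatchify(model(**inputs).logits)[0]
    # Undo the processor's normalization and convert to a uint8 HWC array.
    mean = torch.tensor(processor.image_mean).view(3, 1, 1)
    std = torch.tensor(processor.image_std).view(3, 1, 1)
    array = ((recon * std + mean).clamp(0, 1) * 255).byte().permute(1, 2, 0).numpy()
    samples.append(array)

# The evaluator then compares a reference batch against these samples, e.g.:
#   python evaluator.py ref_batch.npz samples.npz
np.savez("samples.npz", arr_0=np.stack(samples))
```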

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

    @article{zheng2024lm4lv,
      title={LM4LV: A Frozen Large Language Model for Low-level Vision Tasks},
      author={Zheng, Boyang and Gu, Jinjin and Li, Shijun and Dong, Chao},
      journal={arXiv preprint arXiv:2405.15734},
      year={2024}
    }

## Model Card Authors

Boyang Zheng

## Model Card Contact