Tags: Text Generation · Safetensors · English · rwkv · rwkv7 · custom_code

Add paper link and library name
#2, opened by nielsr (HF staff)

Files changed (1):
  1. README.md (+6 -5)
README.md CHANGED
@@ -1,14 +1,15 @@
 ---
-license: apache-2.0
+base_model:
+- BlinkDL/rwkv-7-pile
 datasets:
 - EleutherAI/the_pile_deduplicated
 language:
 - en
+license: apache-2.0
 metrics:
 - accuracy
-base_model:
-- BlinkDL/rwkv-7-pile
 pipeline_tag: text-generation
+library_name: rwkv
 ---
 
 # rwkv7-168M-pile
@@ -37,7 +38,7 @@ This is RWKV-7 model under flash-linear attention format.
 <!-- Provide the basic links for the model. -->
 
 - **Repository:** https://github.com/fla-org/flash-linear-attention ; https://github.com/BlinkDL/RWKV-LM
-- **Paper:** https://arxiv.org/abs/2503.14456
+- **Paper:** https://huggingface.co/papers/2503.14456
 - **Weights:** Converted from https://modelscope.cn/models/RWKV/rwkv-7-pile/file/view/master?fileName=RWKV-x070-Pile-168M-20241120-ctx4096.pth
 
 ## Uses
@@ -81,4 +82,4 @@ This model is trained on the Pile with a total of 332 billion tokens.
 ## FAQ
 Q: safetensors metadata is none.
 
-A: upgrade transformers to >=4.48.0: `pip install 'transformers>=4.48.0'`
+A: upgrade transformers to >=4.48.0: `pip install 'transformers>=4.48.0'`
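
For context, the card's `custom_code` tag means the model class ships with the repo and loads through `transformers` with remote code enabled. Below is a minimal usage sketch, not the card's official example: the repo id `fla-hub/rwkv7-168M-pile` is inferred from the card title (an assumption), the prompt is illustrative, and `transformers>=4.48.0` is assumed per the FAQ above.

```python
# Minimal sketch, assuming transformers>=4.48.0 (see FAQ) and the repo id
# below, which is inferred from the card title rather than confirmed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fla-hub/rwkv7-168M-pile"  # assumption: taken from the card title

# The card's custom_code tag implies the model class is defined in the repo,
# so remote code must be trusted for AutoModel/AutoTokenizer to resolve it.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("The Pile is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The FAQ entry concerns checkpoints whose safetensors header carries no metadata, which older `transformers` versions refuse to load; per the FAQ, >=4.48.0 handles them. If useful, the header can be inspected directly with the `safetensors` library (the file path here is illustrative):

```python
from safetensors import safe_open

# Print the header metadata of a downloaded shard; None is the condition
# the FAQ describes, and transformers>=4.48.0 loads such files anyway.
with safe_open("model.safetensors", framework="pt") as f:
    print(f.metadata())
```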