---
license: apache-2.0
datasets:
  - EleutherAI/the_pile_deduplicated
language:
  - en
metrics:
  - accuracy
base_model:
  - BlinkDL/rwkv-7-pile
pipeline_tag: text-generation
---

# rwkv7-168m-pile

This is an RWKV-7 model in the flash-linear-attention format.

## Model Details

### Model Description

- **Developed by:** Bo Peng, Yu Zhang, Songlin Yang, Ruochong Zhang
- **Funded by:** Shenzhen Yuanshi Intelligent Co. Ltd.
- **Model type:** RWKV-7
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Parameter count:** 165M
- **Tokenizer:** GPT-NeoX 20B tokenizer

### Model Sources

## Uses

Install `flash-linear-attention` before using this model:

```bash
git clone https://github.com/fla-org/flash-linear-attention
cd flash-linear-attention
pip install -e .
```
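A quick way to verify the install (assuming the package exposes the `fla` module, as in the fla-org repository):

```python
# Verify flash-linear-attention is importable;
# the module name `fla` follows the fla-org repository layout
import fla
print("flash-linear-attention import OK")
```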

### Direct Use

You can use this model just like any other Hugging Face model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-168m-pile', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-168m-pile', trust_remote_code=True)
```
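Once loaded, text generation goes through the standard `generate` API. A minimal sketch (the prompt and sampling settings below are illustrative, not from the model card):

```python
prompt = "The Pile is a large and diverse dataset"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; adjust max_new_tokens and sampling as needed
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```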

## Training Details

### Training Data

This model was trained on the deduplicated Pile for a total of 332 billion tokens.

### Training Hyperparameters

- **Training regime:** bfloat16 precision; learning rate decayed from 8e-4 to 3e-5 on a cosine schedule; weight decay 0.1; batch size 8 × 30 × 4096 tokens
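For scale, 8 × 30 × 4096 ≈ 0.98M tokens per optimizer step, so 332B training tokens correspond to roughly 338,000 steps (assuming the batch size is counted in tokens).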

## Evaluation

### Metrics

| Task           | Perplexity | Accuracy |
|----------------|------------|----------|
| lambada_openai | 14.2       | 45.6%    |
| piqa           | -          | 65.5%    |