epinnock committed · Commit 2e10fb9 · 1 Parent(s): de0cf02

Update README.md

Files changed (1): README.md +5 -0
README.md CHANGED
@@ -1,3 +1,8 @@
 ---
 license: bigcode-openrail-m
+datasets:
+- tiiuae/falcon-refinedweb
+language:
+- en
 ---
+This is a ~90M-parameter assistant model for camelid models like LLaMA/Alpaca/Vicuna/Guanaco that use the LLaMA tokenizer, enabling speedups of up to 3x with greedy sampling. It is trained on 5.5 billion tokens of RefinedWeb, uses the GPTBigCode architecture, and has a context window of 1024 tokens. For usage, see this article on assisted generation: https://huggingface.co/blog/assisted-generation.
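
The assisted-generation usage the README points to can be sketched as below. This is a minimal sketch, not the author's documented snippet: the main-model repo id (`huggyllama/llama-7b`) and the assistant-model repo id are illustrative assumptions, and only the `assistant_model` argument to `generate` is the technique the card describes.

```python
# Sketch of assisted (speculative) generation with transformers.
# Assumes a LLaMA-tokenizer main model; both repo ids below are
# illustrative placeholders, not confirmed by the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
# The small ~90M model drafts tokens cheaply; the main model verifies them.
assistant = AutoModelForCausalLM.from_pretrained("epinnock/assistant-90m")  # hypothetical id

inputs = tokenizer("The capital of France is", return_tensors="pt")
# Greedy decoding (do_sample=False) is where the card claims up-to-3x speedups.
outputs = model.generate(
    **inputs,
    assistant_model=assistant,
    do_sample=False,
    max_new_tokens=32,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Both models must share a tokenizer, which is why the card restricts this assistant to LLaMA-tokenizer models.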