Anonymouspro committed on
Commit 436b96d · verified · 1 Parent(s): 3452672

Update README.md

Files changed (1)
  1. README.md +15 -3
README.md CHANGED
@@ -1,3 +1,15 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ ---
+ ##### Anonymouspro/Ttinyllama4b
+ https://github.com/jzhang38/TinyLlama
+
+ The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With proper optimization, this can be achieved in "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
+
+ We adopt exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be plugged into many open-source projects built upon Llama. TinyLlama is also compact, at only 1.1B parameters, which suits the many applications that demand a restricted computation and memory footprint.
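+
+ As an illustration of that drop-in compatibility, the standard Auto classes resolve TinyLlama to the stock Llama implementation with no custom code (a minimal sketch; the repo id is this card's model and is assumed to hold standard Llama-format weights):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # TinyLlama shares Llama 2's config and tokenizer, so the generic
+ # Auto classes load it exactly like any other Llama checkpoint.
+ tokenizer = AutoTokenizer.from_pretrained("Anonymouspro/Ttinyllama4b")
+ model = AutoModelForCausalLM.from_pretrained("Anonymouspro/Ttinyllama4b")
+ ```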
+
+ ##### This Model
+ This is the chat model fine-tuned on top of TinyLlama/TinyLlama-1.1B-intermediate-step-955k-2T, following HF's Zephyr training recipe. The model was "initially fine-tuned on a variant of the UltraChat dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with 🤗 TRL's DPOTrainer on the openbmb/UltraFeedback dataset, which contains 64k prompts and model completions that are ranked by GPT-4."
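+
+ As a rough sketch of that DPO step (illustrative only, not the exact Zephyr recipe; the dataset choice and hyperparameters below are assumptions):
+
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
+ from trl import DPOTrainer
+
+ base = "TinyLlama/TinyLlama-1.1B-intermediate-step-955k-2T"  # SFT checkpoint named above
+ model = AutoModelForCausalLM.from_pretrained(base)
+ tokenizer = AutoTokenizer.from_pretrained(base)
+
+ # DPOTrainer expects "prompt"/"chosen"/"rejected" columns; Zephyr used this
+ # pre-binarized UltraFeedback variant (messages may still need
+ # chat-template formatting before training).
+ dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
+
+ args = TrainingArguments(output_dir="tinyllama-dpo",
+                          per_device_train_batch_size=2, learning_rate=5e-7)
+ trainer = DPOTrainer(
+     model=model,
+     ref_model=None,        # None: TRL snapshots the model as the frozen reference
+     args=args,
+     beta=0.1,              # KL penalty; the Zephyr recipe used 0.1
+     train_dataset=dataset,
+     tokenizer=tokenizer,
+ )
+ trainer.train()
+ ```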
+
+ ##### How to use
+ You will need transformers>=4.34. Check the TinyLlama GitHub page for more information.
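+
+ A minimal usage sketch, mirroring the upstream TinyLlama chat example (generation settings here are illustrative):
+
+ ```python
+ import torch
+ from transformers import pipeline
+
+ pipe = pipeline("text-generation", model="Anonymouspro/Ttinyllama4b",
+                 torch_dtype=torch.bfloat16, device_map="auto")
+
+ # apply_chat_template (transformers>=4.34) formats the dialogue with the
+ # chat template baked into the tokenizer.
+ messages = [
+     {"role": "system", "content": "You are a friendly chatbot."},
+     {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
+ ]
+ prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False,
+                                             add_generation_prompt=True)
+
+ outputs = pipe(prompt, max_new_tokens=256, do_sample=True,
+                temperature=0.7, top_k=50, top_p=0.95)
+ print(outputs[0]["generated_text"])
+ ```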