Update README.md
README.md
CHANGED
@@ -5,14 +5,4 @@ license: apache-2.0
 Contact (WhatsApp): +8801622951671
 This model runs on older Android phones without a high-end CPU.
 No GPU is required.
-Even a normal phone can run 4B models.
-
-The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, this can be achieved within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
-
-We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama. Besides, TinyLlama is compact, with only 1.1B parameters, so it can serve the many applications that demand a restricted computation and memory footprint.
-
-This Model
-This is the chat model fine-tuned on top of TinyLlama/TinyLlama-1.1B-intermediate-step-955k-2T. We follow HF's Zephyr training recipe. The model was "initially fine-tuned on a variant of the UltraChat dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with 🤗 TRL's DPOTrainer on the openbmb/UltraFeedback dataset, which contains 64k prompts and model completions that are ranked by GPT-4."
-
-How to use
-You will need transformers>=4.34. Do check the TinyLlama GitHub page for more information.
+Even a normal phone can run 4B models.
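For context on the removed "How to use" note: with transformers>=4.34 (the version that adds chat-template support), loading the chat model looks roughly like the sketch below. The repository id is an assumption, since the diff does not state it; replace it with this model card's actual repo.

```python
# Minimal sketch (not part of the original card): running the chat model on CPU
# with transformers >= 4.34, which ships tokenizer chat-template support.
# MODEL_ID is an assumption for illustration; replace it with this repo's id.
import torch
from transformers import pipeline

MODEL_ID = "TinyLlama/TinyLlama-1.1B-Chat-v0.6"  # hypothetical placeholder

# No device argument: the pipeline defaults to CPU, so no GPU is needed.
pipe = pipeline("text-generation", model=MODEL_ID, torch_dtype=torch.float32)

messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "What is TinyLlama?"},
]
# Format the conversation with the model's built-in chat template.
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```

On a CPU-only device, expect generation to be slow but functional, in line with the "no GPU is required" note above.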