ukim4 committed
Commit 5f8b044 · 0 Parent(s)

Duplicate from localmodels/LLM

.gitattributes ADDED
@@ -0,0 +1,34 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q2_K.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6076bfae3aaaf67282c19687bfc27c2091c217d4715fc5dfe5cd0e8eeb973460
size 13600369568
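
Each `.bin` entry in this commit is a Git LFS pointer rather than the weights themselves: per the LFS spec, `oid` is the SHA-256 of the actual file contents and `size` is its byte count. A minimal sketch (not part of the original commit) for verifying a downloaded file against the q2_K pointer above; the local path is illustrative:

```
# Verify a downloaded model file against its Git LFS pointer.
# EXPECTED_OID / EXPECTED_SIZE are taken from the q2_K pointer above.
import hashlib

EXPECTED_OID = "6076bfae3aaaf67282c19687bfc27c2091c217d4715fc5dfe5cd0e8eeb973460"
EXPECTED_SIZE = 13600369568

def verify(path: str) -> bool:
    h = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so a 13+ GB file never sits in memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
            size += len(chunk)
    return h.hexdigest() == EXPECTED_OID and size == EXPECTED_SIZE

print(verify("OpenAssistant-SFT-7-Llama-30B.ggmlv3.q2_K.bin"))
```

The same check applies to every pointer file below, substituting that file's `oid` and `size`.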
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q3_K_L.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0bb45d1b683e273be14e7125ad5971ebc511d62f6e385d3d78bff2c64a7d23a3
size 17196361760
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q3_K_M.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8fe5826c2f4d77dd3f907ae1c663eb11a02811a257a1239e5c71b2337a02ba21
size 15637260320
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q3_K_S.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c6e243b92585ec45479e2ed9c72d8f7f23efcc0cdbac7d0dd5a03a2c91e1c2d
size 13980715040
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q4_0.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:435419e2fe8e3bc61e8d31571519036418edf0b7111f47b921f935a8f2dd7556
size 18300886688
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q4_K_M.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:56b0a1cb2d502b4181006ce0756bf4727e2a41052e52531a0e97ff65e29b7c09
size 19566059168
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q4_K_S.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e337e0f93e9a043188456db5d956c88a5040bbae69f5a24656c3a29a8192130b
size 18300886688
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_0.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4baeb9c02dc936ec1398a962e526bb92f59a24b7bc89e0903d51a73d7684af84
size 22366930592
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_1.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cb1c28b5ec102b35a22141905b53ea0a61f91bb90c772a77808c52e77fc4aea0
size 24399952544
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_K_M.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:386c1d60e6d0f352809b3a419af9ada445e3c7d3b3ce595d76de08df63a410f3
size 23018686112
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_K_S.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6ab991e10f1ccaa3cbd073865ee9649163a468d9b346ed29f681e473de6a3c23
size 22366930592
OpenAssistant-SFT-7-Llama-30B.ggmlv3.q6_K.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b1ac8b7a42e78bc7030b0f2bf38a640da5fa0276ad3dda8c2228128b90437ee
size 26687102240
README.md ADDED
@@ -0,0 +1,87 @@
---
duplicated_from: localmodels/LLM
---
# OpenAssistant SFT 7 LLaMA 30B ggml

From: https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor

---

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

Quantized with an older version of llama.cpp; compatible with llama.cpp as of the May 19 commit 2d5db48.

### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

New k-quant methods, compatible with llama.cpp as of the June 6 commit 2d43387.

---

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q2_K.bin | q2_K | 2 | 13.60 GB | 16.10 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.20 GB | 19.70 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.64 GB | 18.14 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 13.98 GB | 16.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.57 GB | 22.07 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.30 GB | 20.80 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, higher resource usage, and slower inference. |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.02 GB | 25.52 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.37 GB | 24.87 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| OpenAssistant-SFT-7-Llama-30B.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors. |

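Note that the Max RAM column above is consistently the file size plus about 2.5 GB of overhead. For reference, here is a minimal sketch (not part of the original card) of loading one of these files from Python via the llama-cpp-python bindings. It assumes an older, ggmlv3-era release of llama-cpp-python (current releases expect GGUF files), and the prompt follows OpenAssistant's `<|prompter|>`/`<|assistant|>` convention; the file choice and sampling settings are illustrative, not prescribed by this repo:

```
# Sketch: run a ggmlv3 file with llama-cpp-python (pre-GGUF release assumed).
from llama_cpp import Llama

llm = Llama(
    model_path="OpenAssistant-SFT-7-Llama-30B.ggmlv3.q4_K_M.bin",
    n_ctx=2048,    # matches the model's training max_length
    n_threads=8,   # tune to your CPU
)

out = llm(
    "<|prompter|>What is Git LFS?<|endoftext|><|assistant|>",
    max_tokens=256,
    temperature=0.7,
    stop=["<|endoftext|>"],
)
print(out["choices"][0]["text"])
```
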
---

# OpenAssistant LLaMA 30B SFT 7

### Configuration

```
llama-30b-sft-7:
  dtype: fp16
  log_dir: "llama_log_30b"
  learning_rate: 1e-5
  model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500
  #model_name: OpenAssistant/llama-30b-super-pretrain
  output_dir: llama_model_30b
  deepspeed_config: configs/zero3_config_sft.json
  weight_decay: 0.0
  residual_dropout: 0.0
  max_length: 2048
  use_flash_attention: true
  warmup_steps: 20
  gradient_checkpointing: true
  gradient_accumulation_steps: 12
  per_device_train_batch_size: 2
  per_device_eval_batch_size: 3
  eval_steps: 101
  save_steps: 485
  num_train_epochs: 4
  save_total_limit: 3
  use_custom_sampler: true
  sort_by_length: false
  #save_strategy: steps
  save_strategy: epoch
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
        val_split: 0.05
    - vicuna:
        val_split: 0.05
        max_val_set: 800
        fraction: 1.0
    - dolly15k:
        val_split: 0.05
        max_val_set: 300
    - grade_school_math_instructions:
        val_split: 0.05
    - code_alpaca:
        val_split: 0.05
        max_val_set: 250
```
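
One derived number worth noting: with `per_device_train_batch_size: 2` and `gradient_accumulation_steps: 12`, each GPU contributes an effective batch of 24 sequences per optimizer step; the global batch is that times the world size. A small sketch, where the GPU count is an assumption (the config does not record it):

```
# Effective global batch size implied by the config above.
per_device_train_batch_size = 2
gradient_accumulation_steps = 12
num_gpus = 8  # hypothetical world size; not stated in the config

effective_batch = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch)  # 192 with these numbers
```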

- **OASST dataset paper:** https://arxiv.org/abs/2304.07327