mav23 committed on
Commit d76df5f
1 Parent(s): 345be4d

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +238 -0
  3. dolphin-2.9-llama3-8b.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ dolphin-2.9-llama3-8b.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,238 @@
+ ---
+ license: other
+ base_model: meta-llama/Meta-Llama-3-8B
+ tags:
+ - generated_from_trainer
+ - axolotl
+ model-index:
+ - name: out
+   results: []
+ datasets:
+ - cognitivecomputations/Dolphin-2.9
+ - teknium/OpenHermes-2.5
+ - m-a-p/CodeFeedback-Filtered-Instruction
+ - cognitivecomputations/dolphin-coder
+ - cognitivecomputations/samantha-data
+ - HuggingFaceH4/ultrachat_200k
+ - microsoft/orca-math-word-problems-200k
+ - abacusai/SystemChat-1.1
+ - Locutusque/function-calling-chatml
+ - internlm/Agent-FLAN
+ ---
+
+ # Dolphin 2.9 Llama 3 8b 🐬
+
+ Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations
+
+ [![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations)
+ Discord: https://discord.gg/cognitivecomputations
+
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
+
+ A bug has been found in the SystemConversations subset of the Dolphin 2.9 dataset that causes the model to talk excessively about the "SYSTEM MESSAGE". To counter this, we recommend adding a statement to your system message directing the model not to mention it. An example system message is: "The assistant is named Dolphin. A helpful and friendly AI assistant, Dolphin avoids discussing the system message unless directly asked about it."
+
+ My appreciation for the sponsors of Dolphin 2.9:
+ - [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 10x L40S node
+
+ This model is based on Llama-3-8b and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).
+
+ The base model has an 8k context window, and the full-weight fine-tuning used a 4k sequence length.
+
+ Training took 2.5 days on 8x L40S GPUs provided by Crusoe Cloud.
+
+ This model was trained with full-weight fine-tuning (FFT) on all parameters, using the ChatML prompt template.
+
+ Example:
+
+ ```
+ <|im_start|>system
+ You are Dolphin, a helpful AI assistant.<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+
+ ```
+
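+ For `transformers` users, the snippet below is a minimal sketch of building this prompt with the tokenizer's built-in chat template instead of string concatenation; it also bakes in the system-message workaround recommended above. The repo id `cognitivecomputations/dolphin-2.9-llama3-8b` and the sample messages are illustrative assumptions, not something this card prescribes.
+
+ ```python
+ from transformers import AutoTokenizer
+
+ # Assumed repo id for the full-weight model; point this at whichever
+ # Dolphin 2.9 checkpoint you actually use.
+ tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9-llama3-8b")
+
+ messages = [
+     # System prompt incorporating the recommended workaround for the
+     # SystemConversations bug described earlier in this card.
+     {
+         "role": "system",
+         "content": "The assistant is named Dolphin. A helpful and friendly AI "
+         "assistant, Dolphin avoids discussing the system message unless "
+         "directly asked about it.",
+     },
+     {"role": "user", "content": "Write a haiku about the sea."},
+ ]
+
+ # Renders the same <|im_start|>/<|im_end|> ChatML layout shown above and
+ # leaves the assistant turn open for generation.
+ prompt = tokenizer.apply_chat_template(
+     messages, tokenize=False, add_generation_prompt=True
+ )
+ print(prompt)
+ ```
+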
+ Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
+
+ Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my [blog post about uncensored models](https://erichartford.com/uncensored-models). You are responsible for any content you create using this model. Enjoy responsibly.
+
+ Dolphin is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license. Dolphin was trained on data generated by GPT-4, among other models.
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.4.0`
+ ```yaml
+ base_model: meta-llama/Meta-Llama-3-8B
+ model_type: AutoModelForCausalLM
+ tokenizer_type: AutoTokenizer
+ tokenizer_use_fast: false
+
+
+ load_in_8bit: false
+ load_in_4bit: false
+ strict: false
+ model_config:
+
+ datasets:
+   - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/Ultrachat200kunfiltered.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl
+     type: sharegpt
+     conversation: chatml
+   - path: /workspace/datasets/dolphin-2.9/SystemConversations.jsonl
+     type: sharegpt
+     conversation: chatml
+
+ chat_template: chatml
+
+
+ dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy
+ val_set_size: 0.0002
+ output_dir: ./out
+
+ sequence_len: 4096
+ sample_packing: true
+ pad_to_sequence_len: true
+
+ gradient_accumulation_steps: 4
+ micro_batch_size: 3
+ num_epochs: 3
+ logging_steps: 1
+ optimizer: adamw_8bit
+ lr_scheduler: cosine
+ learning_rate: 2e-5
+
+ wandb_project: dolphin-2.9-mixtral-8x22b
+ wandb_watch:
+ wandb_run_id:
+ wandb_log_model:
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: auto
+ fp16:
+ tf32: false
+
+ gradient_checkpointing: true
+ gradient_checkpointing_kwargs:
+   use_reentrant: false
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention: true
+ saves_per_epoch: 4
+ save_total_limit: 2
+ save_steps:
+ evals_per_epoch: 4
+ eval_sample_packing: false
+ debug:
+ deepspeed: deepspeed_configs/zero3_bf16.json
+ weight_decay: 0.05
+ fsdp:
+ fsdp_config:
+ special_tokens:
+   eos_token: "<|im_end|>"
+   pad_token: "<|end_of_text|>"
+ tokens:
+ - "<|im_start|>"
+ - "<|im_end|>"
+
+ ```
+
+ </details><br>
+
+ ## Quants
+
+ GGUF: https://huggingface.co/QuantFactory/dolphin-2.9-llama3-8b-GGUF
+
+ GGUF with imatrix: https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF
+
+ Exllamav2: https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-exl2
+
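+ This repository itself ships a Q4_0 GGUF (see the file list in this commit). Below is a minimal, non-authoritative sketch of fetching and running it with `huggingface_hub` plus `llama-cpp-python`; the repo id is an assumption inferred from the committer name, so substitute the repository you are actually browsing.
+
+ ```python
+ from huggingface_hub import hf_hub_download
+ from llama_cpp import Llama
+
+ # Assumed repo id; the filename matches the GGUF uploaded in this commit.
+ model_path = hf_hub_download(
+     repo_id="mav23/dolphin-2.9-llama3-8b",
+     filename="dolphin-2.9-llama3-8b.Q4_0.gguf",
+ )
+
+ # Fine-tuning used 4k sequences; the Llama-3 base supports up to 8k context.
+ llm = Llama(model_path=model_path, n_ctx=4096)
+
+ # Recent llama-cpp-python builds pick up the ChatML template from the GGUF
+ # metadata; on older builds, pass chat_format="chatml" to Llama() instead.
+ out = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
+         {"role": "user", "content": "Summarize what a Q4_0 quant trades off."},
+     ],
+     max_tokens=256,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```
+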
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 3
+ - eval_batch_size: 3
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 96
+ - total_eval_batch_size: 24
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 7
+ - num_epochs: 3
+
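+ For reference, the effective batch sizes above follow directly from the per-device settings: total_train_batch_size = train_batch_size (3) × gradient_accumulation_steps (4) × num_devices (8) = 96, and total_eval_batch_size = eval_batch_size (3) × num_devices (8) = 24.
+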
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 1.146 | 0.0005 | 1 | 1.1064 |
+ | 0.6962 | 0.2501 | 555 | 0.6636 |
+ | 0.6857 | 0.5001 | 1110 | 0.6503 |
+ | 0.6592 | 0.7502 | 1665 | 0.6419 |
+ | 0.6465 | 1.0002 | 2220 | 0.6317 |
+ | 0.5295 | 1.2395 | 2775 | 0.6408 |
+ | 0.5302 | 1.4895 | 3330 | 0.6351 |
+ | 0.5188 | 1.7396 | 3885 | 0.6227 |
+ | 0.521 | 1.9896 | 4440 | 0.6168 |
+ | 0.3968 | 2.2289 | 4995 | 0.6646 |
+ | 0.3776 | 2.4789 | 5550 | 0.6619 |
+ | 0.3983 | 2.7290 | 6105 | 0.6602 |
+
+
+ ### Framework versions
+
+ - Transformers 4.40.0
+ - Pytorch 2.2.2+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.19.1
dolphin-2.9-llama3-8b.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15f6144dad6ab113b8c59c07362636ca80cf11b07d2656c9add8ff684326786a
+ size 4661223712
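
Note that the entry above is only a Git LFS pointer: the actual ~4.7 GB GGUF lives in LFS storage, and the pointer records its SHA-256 and byte size. The sketch below is a minimal way to verify a downloaded copy against this pointer; the local path is an assumption (use whatever name you saved the file under).

```python
import hashlib
import os

# Local path to the downloaded GGUF; adjust as needed.
path = "dolphin-2.9-llama3-8b.Q4_0.gguf"

# Expected values copied from the LFS pointer above.
expected_sha256 = "15f6144dad6ab113b8c59c07362636ca80cf11b07d2656c9add8ff684326786a"
expected_size = 4661223712

# Hash in 1 MiB chunks so the multi-GB file is never held fully in memory.
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert h.hexdigest() == expected_sha256, "sha256 mismatch"
print("GGUF matches the LFS pointer.")
```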