---
library_name: transformers
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-14B/blob/main/LICENSE
base_model: Qwen/Qwen2.5-14B
tags:
- generated_from_trainer
model-index:
- name: 14B-Qwen2.5-Freya-x1
  results: []
---

Aw, snap. Another Qwen 2.5 14B by the lord and savior, Sao.

I'm still refining my own settings for Qwen, but for those of you who are interested, here are my most recent settings:

Temp: 1.1-1.2 OR 0.75-0.85
<br>
Min P: 0.02-0.05 (Min P seems to help with 'oddities' in responses); 0.035 seems a decent midpoint.
<br>
Rep Penalty: 1.08
<br>
DRY: 0.3 multiplier, 1.75, 2
<br>

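For intuition on what the Min P value above actually does: it keeps only tokens whose probability is at least `min_p` times the top token's probability, then renormalizes. A toy sketch of the idea in plain Python (illustrative only, not any backend's actual implementation):

```python
def min_p_filter(probs, min_p=0.035):
    """Keep tokens whose probability is at least min_p * max(probs),
    then renormalize. Toy sketch of Min P sampling."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# With min_p=0.1, tokens below 10% of the top token's probability are dropped.
filtered = min_p_filter([0.5, 0.3, 0.15, 0.04, 0.01], min_p=0.1)
```

Higher Min P prunes the tail more aggressively, which is why it pairs well with the higher temperatures listed above.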
[This is the 8bpw EXL2 version of this model. For the original model, go here](https://huggingface.co/Sao10K/14B-Qwen2.5-Freya-x1)
<br>
[For the 6bpw version, go here](https://huggingface.co/Statuo/Sao10K_14B-Qwen2.5-Freya-v1-EXL2-6bpw)
<br>
[For the 4bpw version, go here](https://huggingface.co/Statuo/Sao10K_14B-Qwen2.5-Freya-v1-EXL2-4bpw)
<br>


---


![Freya](https://huggingface.co/Sao10K/14B-Qwen2.5-Freya-x1/resolve/main/sad.png)
*Me during failed runs*

# 14B-Qwen2.5-Freya-v1

I decided to mess around with training methods again, considering the re-emergence of methods like multi-step training. Some people began doing it again, so why not? Inspired by AshhLimaRP's methodology, but done my way.

Freya-S1
- LoRA trained on ~1.1GB of literature and raw text over Qwen 2.5's base model.
- Cleaned the text and literature as best I could; still, there may be issues here and there.

Freya-S2
- The first LoRA was applied over Qwen 2.5 Instruct, then I trained on top of that.
- Reduced the LoRA rank because it's mainly instruct, plus other details I won't get into.
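The two stages differ mainly in LoRA rank (64 for S1, 32 for S2 in the config below). As a reminder of what the rank and alpha control: a LoRA adapter adds a low-rank delta to a weight matrix, W' = W + (alpha / r) · B · A. A toy sketch of that math in plain Python (illustrative only; the actual training goes through peft/axolotl):

```python
def lora_update(W, A, B, r, alpha):
    """Apply a LoRA delta: W' = W + (alpha / r) * (B @ A).
    W is d_out x d_in, B is d_out x r, A is r x d_in.
    Toy sketch of the math only."""
    scale = alpha / r
    d_out, d_in = len(W), len(W[0])
    delta = [[scale * sum(B[i][k] * A[k][j] for k in range(r))
              for j in range(d_in)] for i in range(d_out)]
    return [[W[i][j] + delta[i][j] for j in range(d_in)] for i in range(d_out)]

# alpha=64 mirrors the config's lora_alpha; A and B stand in for the
# trained adapter weights and are placeholder values here.
W = [[0.0] * 4 for _ in range(4)]
A = [[0.1] * 4 for _ in range(2)]   # r x d_in
B = [[0.1] * 2 for _ in range(4)]   # d_out x r
W2 = lora_update(W, A, B, r=2, alpha=64)
```

A smaller `r` at fixed `alpha` means a stronger per-rank scale but less capacity in the delta, which is consistent with shrinking the rank for the lighter S2 instruct pass.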

Recommended Model Settings | *Look, I just use these, they work fine enough. I don't even know how DRY or other meme samplers work. Your system prompt matters more anyway.*
```
Prompt Format: ChatML
Temperature: 1+ # I don't know, man.
min_p: 0.05
```
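ChatML wraps each turn in `<|im_start|>role ... <|im_end|>` markers. A minimal formatter sketch for anyone wiring this up by hand (in practice, the tokenizer's built-in `apply_chat_template` is the safer route; the role names and messages here are just examples):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn for the model to complete.
        out += "<|im_start|>assistant\n"
    return out

prompt = to_chatml([
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write an opening line."},
])
```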

Training time in total was ~10 hours on an 8xH100 node, sponsored by the Government of Singapore or something. Thanks for the national service allowance, MHA.

https://sao10k.carrd.co/ for contact.

---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
base_model:
- s1: Qwen/Qwen2.5-14B
- s2: Qwen/Qwen2.5-14B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false
sequence_len: 16384
bf16: auto
fp16:
tf32: false
flash_attention: true
special_tokens:
  
adapter: lora # 16-bit
lora_r:
- s1: 64
- s2: 32
lora_alpha: 64
lora_dropout: 0.2
lora_fan_in_fan_out:
peft_use_rslora: true
lora_target_linear: true
  
# Data
dataset_prepared_path: dataset_run_freya
datasets:
# S1 - Writing / Completion
  - path: datasets/eBooks-cleaned-75K
    type: completion
  - path: datasets/novels-clean-dedupe-10K
    type: completion
# S2 - Instruct
  - path: datasets/10k-amoral-full-fixed-sys.json
    type: chat_template
    chat_template: chatml
    roles_to_train: ["gpt"]
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    train_on_eos: turn
  - path: datasets/44k-hespera-smartshuffle.json
    type: chat_template
    chat_template: chatml
    roles_to_train: ["gpt"]
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    train_on_eos: turn
  - path: datasets/5k_rpg_adventure_instruct-sys.json
    type: chat_template
    chat_template: chatml
    roles_to_train: ["gpt"]
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    train_on_eos: turn
shuffle_merged_datasets: true
warmup_ratio: 0.1

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

# Iterations
num_epochs:
- s1: 1
- s2: 2

# Sampling
sample_packing: true
pad_to_sequence_len: true
train_on_inputs: false
group_by_length: false

# Batching
gradient_accumulation_steps: 4
micro_batch_size: 2
gradient_checkpointing: unsloth

# Evaluation
val_set_size: 0.025
evals_per_epoch: 5
eval_table_size:
eval_max_new_tokens: 256
eval_sample_packing: false
eval_batch_size: 1

# Optimizer
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate:
- s1: 0.000002
- s2: 0.000004
weight_decay: 0.2
max_grad_norm: 10.0

# Garbage Collection
gc_steps: 10

# Misc
deepspeed: ./deepspeed_configs/zero2.json

```

</details><br>
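For reference, the effective batch size implied by the config above (assuming, as stated earlier, that all 8 GPUs of the H100 node were used for data parallelism):

```python
# Values from the axolotl config above; num_gpus is an assumption
# based on the 8xH100 node mentioned in the card.
micro_batch_size = 2
gradient_accumulation_steps = 4
num_gpus = 8

effective_batch = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch)  # 64 packed sequences (up to 16384 tokens each) per optimizer step
```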