# DarkHermes-Llama3.2

This model belongs to the DarkHermes-Llama3.2 collection: finetunes of NousResearch's Hermes-3-Llama3.2 models.
Built with [Axolotl](https://github.com/axolotl-ai-cloud/axolotl), version `0.6.0`. The training configuration:

```yaml
base_model: mrcuddle/Dark-Hermes3-Llama3.2-3B
dataloader_num_workers: 4
dataset_prepared_path: last_run_prepared
datasets:
  - path: llamafactory/alpaca_en
    type: alpaca
eval_steps: 500
evaluation_strategy: steps
fp16: true
gradient_accumulation_steps: 8
gradient_checkpointing: false
learning_rate: 2e-5
load_in_4bit: false
logging_dir: /content/outputs/logs
logging_steps: 10
lr_scheduler: cosine
lr_scheduler_type: cosine
micro_batch_size: 1
num_train_epochs: 3
optimizer: paged_adamw_8bit
output_dir: /content/outputs
overwrite_output_dir: true
per_device_train_batch_size: 4
save_steps: 500
save_total_limit: 2
use_peft: false
val_set_size: 0.05
warmup_steps: 100
```
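The `type: alpaca` setting tells axolotl to render each llamafactory/alpaca_en record with the Stanford Alpaca instruction template. The sketch below shows the canonical template; axolotl's exact rendering may differ slightly in whitespace, so treat this as illustrative rather than taken from this card.

```python
# Canonical Alpaca prompt template implied by `type: alpaca` (illustrative;
# axolotl's exact rendering may differ slightly).

def format_alpaca(example: dict) -> str:
    """Render one alpaca-style record (instruction/input/output) as a prompt."""
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )


print(format_alpaca({
    "instruction": "Summarize the text.",
    "input": "Axolotl is a fine-tuning framework.",
    "output": "Axolotl fine-tunes language models.",
}))
```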
This model is a fine-tuned version of [mrcuddle/Dark-Hermes3-Llama3.2-3B](https://huggingface.co/mrcuddle/Dark-Hermes3-Llama3.2-3B) on the llamafactory/alpaca_en dataset. It achieves the following results on the evaluation set:

- Loss: 1.1205

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
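Since the usage sections above are placeholders, here is a minimal inference sketch using Hugging Face `transformers`. Note the assumption: the repo id shown is the *base* model from the config, as this card does not name the fine-tuned checkpoint's repo; substitute the actual id.

```python
# Minimal inference sketch. ASSUMPTION: the repo id below is the base model,
# not the fine-tune; replace it with the fine-tuned checkpoint's repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrcuddle/Dark-Hermes3-Llama3.2-3B"  # stand-in repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches `fp16: true` in the training config
    device_map="auto",
)

prompt = "Name three primary colors."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because training used `type: alpaca`, prompts rendered with the Alpaca template shown earlier will match the training distribution most closely.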
## Training procedure

The following hyperparameters were used during training (taken from the axolotl config above; the effective batch size is micro_batch_size × gradient_accumulation_steps = 8):

- learning_rate: 2e-5
- micro_batch_size: 1
- gradient_accumulation_steps: 8
- optimizer: paged_adamw_8bit
- lr_scheduler: cosine, with 100 warmup steps
- num_train_epochs: 3
- mixed precision: fp16

### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0002 | 1    | 2.4030          |
| 1.2572        | 0.0814 | 500  | 1.1935          |
| 1.3061        | 0.1629 | 1000 | 1.1865          |
| 1.2733        | 0.2443 | 1500 | 1.1864          |
| 1.265         | 0.3258 | 2000 | 1.1753          |
| 1.2436        | 0.4072 | 2500 | 1.1542          |
| 1.2935        | 0.4887 | 3000 | 1.1448          |
| 1.2595        | 0.5701 | 3500 | 1.1348          |
| 1.2896        | 0.6515 | 4000 | 1.1295          |
| 1.2081        | 0.7330 | 4500 | 1.1236          |
| 1.2451        | 0.8144 | 5000 | 1.1212          |
| 1.2134        | 0.8959 | 5500 | 1.1205          |
| 1.2437        | 0.9773 | 6000 | 1.1205          |
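As a rough sanity check, the Epoch and Step columns line up with the config's batch settings. The dataset size used below is an assumption (the original Alpaca set has 52,002 examples; check the llamafactory/alpaca_en dataset card for the exact count).

```python
# Rough sanity check relating the table's Epoch/Step columns to the config.
# ASSUMPTION: llamafactory/alpaca_en has ~52,002 examples (the original
# Alpaca dataset's size); verify against the dataset card.

dataset_size = 52_002
val_set_size = 0.05        # from the config
micro_batch_size = 1       # from the config
grad_accum = 8             # from the config

train_examples = int(dataset_size * (1 - val_set_size))  # ~49,401
effective_batch = micro_batch_size * grad_accum           # 8
steps_per_epoch = train_examples // effective_batch       # ~6,175

print(f"effective batch size: {effective_batch}")
print(f"estimated steps per epoch: {steps_per_epoch}")

# The table implies 500 / 0.0814 ≈ 6,142 steps per epoch, in the same
# ballpark as the estimate above.
```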