Trained on Discord chat logs from this dataset.
Uses the Llama 3.1 prompt format.
Merged model: mpasila/Llama-3.1-Discord-Short-8B
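
As a quick usage sketch (generation settings here are placeholders, not from this card; the tokenizer's bundled chat template applies the Llama 3.1 format):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mpasila/Llama-3.1-Discord-Short-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# apply_chat_template renders the Llama 3.1 format:
# <|start_header_id|>role<|end_header_id|> ... <|eot_id|>
messages = [{"role": "user", "content": "hey, what's up?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```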
Trained with regular LoRA (not quantized/QLoRA), with LoRA rank 128 and alpha set to 32. Trained for 1 epoch on an A40 for about 5.5 hours, using the training arguments below.
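
A minimal sketch of how that adapter configuration might look with Unsloth's API (the sequence length, target modules, and the inclusion of the embedding layers are assumptions based on Unsloth's continued-pretraining examples, not stated in this card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B",
    max_seq_length = 2048,  # assumption; not stated in the card
    load_in_4bit = False,   # regular LoRA, not QLoRA
)

model = FastLanguageModel.get_peft_model(
    model,
    r = 128,          # LoRA rank from above
    lora_alpha = 32,  # alpha from above
    # Target modules are an assumption (Unsloth's usual defaults, plus the
    # embedding layers implied by embedding_learning_rate below):
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",
                      "embed_tokens", "lm_head"],
)
```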
```python
from unsloth import UnslothTrainingArguments, is_bfloat16_supported

args = UnslothTrainingArguments(
    per_device_train_batch_size = 1,
    gradient_accumulation_steps = 8,  # effective batch size of 8
    warmup_ratio = 0.1,
    num_train_epochs = 1,
    learning_rate = 5e-5,
    embedding_learning_rate = 5e-6,   # lower LR for the embedding layers
    fp16 = not is_bfloat16_supported(),
    bf16 = is_bfloat16_supported(),
    logging_steps = 1,
    optim = "adamw_8bit",
    weight_decay = 0.00,
    lr_scheduler_type = "cosine",
    seed = 3407,
    output_dir = "outputs",
)
```
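
These arguments would then be handed to the trainer; a sketch continuing from the setup above (the dataset variable and text field are placeholders for the linked Discord dataset, not taken from this card):

```python
from unsloth import UnslothTrainer

trainer = UnslothTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,      # placeholder for the Discord chat log dataset
    dataset_text_field = "text",  # assumption; depends on the dataset's schema
    max_seq_length = 2048,        # assumption, matching the sketch above
    args = args,
)
trainer.train()
```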
Uploaded model
- Developed by: mpasila
- License: Llama 3.1 Community License Agreement
- Finetuned from model: unsloth/Meta-Llama-3.1-8B

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.