---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
  - sft
datasets:
  - emessy/flash_fiction_1
---

# Uploaded model

- **Developed by:** emessy
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

## Configure LoRA

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                      # rank of the low-rank update matrices
    lora_alpha=16,             # scaling factor applied to the LoRA update
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```
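To get a feel for how lightweight this adapter is, the trainable-parameter count implied by the config above can be estimated with plain arithmetic. This sketch assumes standard Llama-3.1-8B dimensions (hidden size 4096, 32 decoder layers, grouped-query attention with 8 KV heads of dim 128, so k/v projections output 1024); verify against the actual model if precision matters.

```python
# Rough count of trainable parameters added by the LoRA config above.
# Assumed Llama-3.1-8B dimensions: hidden_size=4096, 32 decoder layers,
# grouped-query attention with 8 KV heads of dim 128 (k/v output dim = 1024).
r = 16
hidden = 4096
kv_out = 8 * 128  # 1024

def lora_params(d_in, d_out, r=r):
    # Each adapted Linear(d_in, d_out) gains two matrices:
    # A (r x d_in) and B (d_out x r), i.e. r * (d_in + d_out) parameters.
    return r * (d_in + d_out)

per_layer = (
    lora_params(hidden, hidden)    # q_proj
    + lora_params(hidden, kv_out)  # k_proj
    + lora_params(hidden, kv_out)  # v_proj
    + lora_params(hidden, hidden)  # o_proj
)
total = 32 * per_layer
print(per_layer, total)  # 425984 13631488
```

Roughly 13.6M trainable parameters, a small fraction of the 8B base model, which is what makes LoRA fine-tuning feasible on a single GPU.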

## Training arguments

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=5,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    fp16=True,  # use half-precision
    logging_steps=10,
    save_steps=50,
    eval_steps=50,
)
```
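One practical consequence of these settings worth noting: with gradient accumulation, the optimizer only steps after several forward/backward passes, so the effective batch size is larger than the per-device value. A minimal sketch, assuming single-GPU training (the device count is not stated in the card):

```python
# Effective batch size implied by the TrainingArguments above.
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
num_gpus = 1  # assumption: single-device training

# Gradients from 4 micro-batches of 4 are accumulated before each optimizer step.
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 16
```

So each optimizer step sees 16 examples, while only 4 need to fit in GPU memory at a time.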