---
base_model: EleutherAI/pythia-160m-deduped
library_name: transformers
license: apache-2.0
tags:
- axolotl
- relora
- generated_from_trainer
datasets:
- jtatman/storywriting_combined_instruct
metrics:
- accuracy
- bleu
- rouge
model-index:
- name: pythia-160m-storytelling
  results: []
---
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
base_model: EleutherAI/pythia-160m-deduped
load_in_8bit:
datasets:
- path: jtatman/storywriting_combined_instruct
type: alpaca
dataset_prepared_path: ds-storytelling
chat_template: inst
val_set_size: 0.01
adapter: lora
lora_model_dir:
sequence_len: 2048
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- query_key_value
lora_target_linear: true
lora_fan_in_fan_out: true # pythia/GPTNeoX lora specific
lora_modules_to_save:
- embed_in
- embed_out
- lm_head
lora_on_cpu: false
# ReLoRA configuration
# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed
# relora_steps: # Number of steps per ReLoRA restart
# relora_warmup_steps: # Number of per-restart warmup steps
# relora_anneal_steps: # Number of anneal steps for each relora cycle
# relora_prune_ratio: # threshold for optimizer magnitude when pruning
# relora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings
relora_steps: 200
relora_warmup_steps: 10
relora_cpu_offload: false
wandb_project: pythia
wandb_entity:
wandb_watch:
wandb_name: pythia-160m-storytelling
wandb_log_model:
output_dir: ./outputs/lora-alpaca-pythia-160m-storytelling
gradient_accumulation_steps: 16
micro_batch_size: 1
num_epochs: 3
learning_rate: 0.004
lr_scheduler: cosine_with_restarts
#cosine_min_lr_ratio: 0.1
train_on_inputs: false
group_by_length: false
#bf16: auto
#fp16: true
#tf32: false
float16: true
flash_attn:
xformers_attention: true
optimizer: paged_adamw_8bit
gpu_memory_limit: 8GiB
hub_model_id: jtatman/pythia-160m-storytelling
early_stopping_patience: 3
#resume_from_checkpoint: outputs/lora-alpaca-pythia-125m/checkpoint-51040
auto_resume_from_checkpoints: true
local_rank:
weight_decay: 0.0
#evals_per_epoch: 4
eval_steps: 200
logging_steps: 1
save_steps: 200
save_total_limit: 5
warmup_steps: 100
tokens:
- "[INST]"
- "[/INST]"
# pythia-160m-storytelling
This model is a fine-tuned version of [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped) on the [jtatman/storywriting_combined_instruct](https://huggingface.co/datasets/jtatman/storywriting_combined_instruct) dataset. It achieves the following results on the evaluation set:
- Loss: 5.0097
## Model description
More information needed
## Intended uses & limitations
More information needed
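In lieu of a fuller description, a minimal inference sketch, assuming the checkpoint at `jtatman/pythia-160m-storytelling` loads directly with `transformers` (adjust if only the LoRA adapter was uploaded) and that prompts use the `[INST]`/`[/INST]` tokens added during training:

```python
# Minimal usage sketch; model id and prompt format per this card's config.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jtatman/pythia-160m-storytelling")
model = AutoModelForCausalLM.from_pretrained("jtatman/pythia-160m-storytelling")

prompt = "[INST] Write a short story about a lighthouse keeper. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # pythia has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```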
## Training and evaluation data
More information needed
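Per the config above, training used [jtatman/storywriting_combined_instruct](https://huggingface.co/datasets/jtatman/storywriting_combined_instruct) in alpaca format, with 1% held out for evaluation (`val_set_size: 0.01`). The `[INST]`/`[/INST]` markers were added as new tokens, which resizes the embedding matrices; that is why `embed_in`, `embed_out`, and `lm_head` appear in `lora_modules_to_save` rather than being adapted with LoRA. A minimal sketch of that step:

```python
# Sketch of the token-addition step implied by the config's `tokens:` list.
# New tokens enlarge the vocabulary, so the embedding layers must be resized
# (and therefore trained and saved in full).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m-deduped")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m-deduped")

num_added = tokenizer.add_tokens(["[INST]", "[/INST]"])
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocab is now {len(tokenizer)}")
```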
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `TrainingArguments` equivalent follows the list):
- learning_rate: 0.004
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: paged AdamW (8-bit) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
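As noted above, these settings map closely onto `transformers` `TrainingArguments`. A hedged sketch of the equivalent configuration (axolotl assembles this internally; field names are from `transformers`, values from this card):

```python
# Approximate TrainingArguments equivalent of the hyperparameters above.
# For orientation only; axolotl builds the actual trainer configuration.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./outputs/lora-alpaca-pythia-160m-storytelling",
    learning_rate=4e-3,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,   # effective batch size: 1 * 16 = 16
    num_train_epochs=3,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    optim="paged_adamw_8bit",
    weight_decay=0.0,
    eval_strategy="steps",            # `evaluation_strategy` before v4.41
    eval_steps=200,
    save_steps=200,
    save_total_limit=5,
    logging_steps=1,
    seed=42,
)
```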
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.5185        | 0.0012 | 1    | 4.8238          |
| 4.2012        | 0.2348 | 200  | 4.1556          |
| 4.4185        | 0.4696 | 400  | 4.8159          |
| 5.0973        | 0.7043 | 600  | 5.0363          |
| 8.1159        | 0.9391 | 800  | 8.4966          |
| 6.7656        | 1.1739 | 1000 | 7.1575          |
| 7.0548        | 1.4087 | 1200 | 7.3539          |
| 5.9982        | 1.6445 | 1400 | 5.9954          |
| 5.7662        | 1.8792 | 1600 | 6.0222          |
| 4.8094        | 2.1140 | 1800 | 5.0097          |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
## Metrics

```json
"Open LLM Leaderboard": {
    "exact_match,flexible-extract": 0.022,
    "exact_match_stderr,flexible-extract": 0.006566447781940106,
    "acc_norm,none": 0.318,
    "acc_norm_stderr,none": 0.014487919091408506,
    "acc,none": 0.2664044125478186,
    "acc_stderr,none": 0.003623534644130716,
    "bleu_diff,none": -0.6500479549286462,
    "bleu_diff_stderr,none": 0.6420841882903697,
    "rougeL_diff,none": -0.7765084899781842,
    "rougeL_diff_stderr,none": 1.0033586571635116,
    "exact_match,strict-match": 0.006,
    "exact_match_stderr,strict-match": 0.003457152557758373,
    "rouge2_acc,none": 0.192,
    "rouge2_acc_stderr,none": 0.017632180454360994,
    "rouge1_acc,none": 0.37,
    "rouge1_acc_stderr,none": 0.02161328916516578,
    "bleu_acc,none": 0.436,
    "bleu_acc_stderr,none": 0.0221989546414768,
    "rouge1_diff,none": -1.5563905118333812,
    "rouge1_diff_stderr,none": 1.022327995054994,
    "rouge2_diff,none": -3.3177627227020277,
    "rouge2_diff_stderr,none": 0.9477297777821475,
    "bleu_max,none": 15.229235419512532,
    "bleu_max_stderr,none": 0.6713582602539528,
    "rouge2_max,none": 16.487324929036955,
    "rouge2_max_stderr,none": 1.0171593586088354,
    "rouge1_max,none": 36.3549677399668,
    "rouge1_max_stderr,none": 0.9461627463383844,
    "rougeL_max,none": 33.87976960164143,
    "rougeL_max_stderr,none": 0.9366539036852334,
    "rougeL_acc,none": 0.386,
    "rougeL_acc_stderr,none": 0.021793529219281158,
    "alias": "Open LLM Leaderboard"
},
```
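These fields follow the output shape of lm-evaluation-harness. A hedged sketch of regenerating comparable numbers; the exact task list behind these figures is not stated on the card, so the tasks below are an assumption:

```python
# Hedged: the task selection here is a guess; the card does not list the
# Open LLM Leaderboard tasks behind the numbers above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=jtatman/pythia-160m-storytelling",
    tasks=["hellaswag", "gsm8k", "truthfulqa_gen"],
)
print(results["results"])
```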