---
library_name: transformers
license: other
base_model: IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml
tags:
  - generated_from_trainer
model-index:
  - name: outputs/out
    results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)

See axolotl config below (axolotl version: `0.4.1`):

```yaml

base_model: IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
    type: sharegpt
    conversation: chatml
  - path: NewEden/Kalo-Opus-Instruct-22k-Refusal-Murdered
    type: sharegpt
    conversation: chatml
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: NewEden/Gryphe-Sonnet-3.5-35k-Subset
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/stheno-filtered-v1.1
    type: sharegpt
    conversation: chatml
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: ResplendentAI/bluemoon
    type: sharegpt
    conversation: chatml
  - path: openerotica/freedom-rp
    type: sharegpt
    conversation: chatml
  - path: jeiku/Nitral_Medical_Dialog_Fixed
    type: sharegpt
    conversation: chatml
  - path: MinervaAI/Aesir-Preview
    type: sharegpt
    conversation: chatml
  - path: jeiku/jeikutxt
    type: completion
  - path: ResplendentAI/Sissification_Hypno_1k
    type: alpaca
  - path: ResplendentAI/theory_of_mind_fixed_output
    type: alpaca
  - path: ResplendentAI/Synthetic_Soul_1k
    type: alpaca

chat_template: chatml

val_set_size: 0.01
output_dir: ./outputs/out

adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:

sequence_len: 8192
# sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

wandb_project: Final4B
wandb_entity:
wandb_watch:
wandb_name: Final4B
wandb_log_model:

gradient_accumulation_steps: 32
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000008
weight_decay: 0.05

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2

debug:
deepspeed: deepspeed_configs/zero3.json
fsdp:
fsdp_config:

special_tokens:
  pad_token: <|finetune_right_pad_id|>
```
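
All conversational datasets in this config are mapped to ChatML (`conversation: chatml`, `chat_template: chatml`). As a rough illustration, not taken from this repository, the sketch below renders a ShareGPT-style exchange with the base model's tokenizer; it assumes that tokenizer already carries a ChatML chat template, as the `-chatml` suffix of the base model suggests.

```python
# Illustrative sketch only: shows how a conversation is rendered under the ChatML
# template this config trains against. Assumes the base-model tokenizer exposes a
# ChatML chat template (not verified here).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write one sentence about axolotls."},
]

# ChatML wraps each turn as <|im_start|>{role}\n{content}<|im_end|>.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```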


# outputs/out

This model is a fine-tuned version of [IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml), trained on the datasets listed in the Axolotl config above. It achieves the following results on the evaluation set:

- Loss: 2.4101

## Model description

More information needed

## Intended uses & limitations

More information needed
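
No usage guidance is given in the card, so the following is a minimal inference sketch. It assumes the published repository id is `jeiku/Poe_4B` (inferred from the page, not confirmed) and that the fine-tuned tokenizer carries the ChatML template set in the config.

```python
# Minimal inference sketch under the assumptions stated above; adjust the repo id
# to the actual published model if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "jeiku/Poe_4B"  # assumption: the actual repository id may differ
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce yourself in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# eval_max_new_tokens in the config is 128; reuse that as a conservative default.
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```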

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 8e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 23
- num_epochs: 2
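
The listed total_train_batch_size follows directly from the per-device batch size, gradient accumulation, and device count. A quick check, with the tokens-per-step figure only approximate because sample packing merges several examples into each 8192-token sequence:

```python
# Sanity check of the effective batch size reported above.
micro_batch_size = 2             # per-device batch size
gradient_accumulation_steps = 32
num_devices = 2

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)    # 2 * 32 * 2 = 128

# Rough upper bound on tokens per optimizer step (sample packing fills sequences,
# so this is approximate): 128 sequences * 8192 tokens ≈ 1.05M tokens.
print(total_train_batch_size * 8192)
```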

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7832        | 0.0079 | 1    | 2.8360          |
| 1.4689        | 0.2514 | 32   | 2.5697          |
| 1.4206        | 0.5028 | 64   | 2.4846          |
| 1.3664        | 0.7542 | 96   | 2.4440          |
| 1.3767        | 1.0056 | 128  | 2.4197          |
| 1.2487        | 1.2465 | 160  | 2.4203          |
| 1.2787        | 1.4979 | 192  | 2.4099          |
| 1.2786        | 1.7493 | 224  | 2.4101          |

### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
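
To compare a local environment against the versions listed above, a small convenience snippet (not part of the original card):

```python
# Print installed versions to compare against the ones listed in this card.
import transformers, torch, datasets, tokenizers

print("transformers", transformers.__version__)  # card: 4.45.0.dev0
print("torch", torch.__version__)                # card: 2.4.0+cu121
print("datasets", datasets.__version__)          # card: 2.20.0
print("tokenizers", tokenizers.__version__)      # card: 0.19.1
```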