Built with Axolotl

See axolotl config (axolotl version 0.4.1):

base_model: Delta-Vector/Holland-4B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: NewEden/xlam-function-calling-60k-shareGPT
    type: sharegpt
    conversation: chatml
  - path: gardner/glaive-function-calling-v2-sharegpt
    type: sharegpt
    conversation: chatml

chat_template: chatml

val_set_size: 0.01
output_dir: ./outputs/out

adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:

sequence_len: 8192
# sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

wandb_project: GnX Func Calling
wandb_entity:
wandb_watch:
wandb_name: Func Calling GnX
wandb_log_model:

gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00002
weight_decay: 0.05

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1

debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json
fsdp:
fsdp_config:

special_tokens:
  pad_token: <|finetune_right_pad_id|>
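
With axolotl 0.4.x, a run from a file like this is typically launched with `accelerate launch -m axolotl.cli.train config.yml` (the filename is illustrative). The `deepspeed` entry selects ZeRO-2 sharding, which lines up with the two-GPU setup reported under the training hyperparameters below.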

GnX

This model is a fine-tuned version of Delta-Vector/Holland-4B on the NewEden/xlam-function-calling-60k-shareGPT and gardner/glaive-function-calling-v2-sharegpt datasets. It achieves the following results on the evaluation set:

  • Loss: 0.1388

Model description

GnX is a fine-tune of the 4B-parameter Delta-Vector/Holland-4B aimed at function calling. It was trained with Axolotl on two sharegpt-format function-calling corpora, with conversations rendered through the ChatML template (see the config above).

Intended uses & limitations

The model is intended for function/tool-calling use with ChatML-formatted prompts, the format it was trained on. Its behaviour outside that setting, and its limitations in general, have not been documented or evaluated beyond the held-out split reported below.
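
The card ships no usage snippet; below is a minimal inference sketch with Transformers. The repo id comes from this card, but the system prompt, the tool description, and the assumption that the saved tokenizer carries the ChatML chat template are illustrative, not confirmed by the card:

```python
# Minimal ChatML function-calling inference sketch (assumptions noted above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Edens-Gate/GnX"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="bfloat16", device_map="auto"
)

# Illustrative tool description; real schemas come from your application.
messages = [
    {"role": "system", "content": "You can call get_weather(city: str)."},
    {"role": "user", "content": "What's the weather in Paris?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)  # matches eval_max_new_tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```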

Training and evaluation data

Training used NewEden/xlam-function-calling-60k-shareGPT and gardner/glaive-function-calling-v2-sharegpt, both consumed as sharegpt data with the chatml conversation template. A 1% slice of the combined data (val_set_size: 0.01) was held out for evaluation.
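
Both corpora are public Hub datasets; a minimal sketch for inspecting them with the `datasets` library (the `train` split name and sharegpt-style record layout are assumptions, not stated in this card):

```python
# Sketch: peek at the two training corpora. Assumes both Hub repos
# expose a "train" split with sharegpt-style conversation records.
from datasets import load_dataset

xlam = load_dataset("NewEden/xlam-function-calling-60k-shareGPT", split="train")
glaive = load_dataset("gardner/glaive-function-calling-v2-sharegpt", split="train")

print(len(xlam), len(glaive))  # corpus sizes
print(xlam[0])                 # one conversation record
```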

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 64 (derived below)
  • total_eval_batch_size: 2 (derived below)
  • optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9, 0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 28
  • num_epochs: 2
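
The two total batch sizes are derived rather than set directly; a quick check of the arithmetic, using the per-device values from the list above:

```python
# Sanity-check the derived batch sizes reported above
# (standard multi-GPU accounting; values from this card).
micro_batch_size = 1              # per-device train batch size
eval_batch_size = 1               # per-device eval batch size
gradient_accumulation_steps = 32
num_devices = 2

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = eval_batch_size * num_devices

print(total_train_batch_size)  # 64
print(total_eval_batch_size)   # 2
```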

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7847        | 0.0069 | 1    | 0.7059          |
| 0.442         | 0.2485 | 36   | 0.1606          |
| 0.4421        | 0.4970 | 72   | 0.1495          |
| 0.4312        | 0.7455 | 108  | 0.1445          |
| 0.4094        | 0.9940 | 144  | 0.1407          |
| 0.3017        | 1.2224 | 180  | 0.1420          |
| 0.3244        | 1.4709 | 216  | 0.1405          |
| 0.3106        | 1.7194 | 252  | 0.1392          |
| 0.3132        | 1.9679 | 288  | 0.1388          |

Framework versions

  • Transformers 4.45.0.dev0
  • PyTorch 2.4.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1