Built with Axolotl

See axolotl config

axolotl version: 0.4.1

```yaml
adapter: null
base_model: /var/lib/condor/execute/slot1/dir_873933/llama_model
bf16: auto
dataset_prepared_path: /var/lib/condor/execute/slot1/dir_873933/prepare
dataset_processes: 48
datasets:
- conversation: llama-3
  path: RLHFlow/pair-preference-dataset-mix1
  split: train
  train_on_split: train
  type: sharegpt.load_ultrachat
ddp: null
debug: null
deepspeed: null
early_stopping_patience: null
eval_steps: null
eval_table_max_new_tokens: null
eval_table_size: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 2
lora_model_dir: null
lr_scheduler: cosine
max_grad_norm: 1.0
micro_batch_size: 2
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch_fused
output_dir: /var/lib/condor/execute/slot1/dir_873933/output
pad_to_sequence_len: true
sample_packing: true
save_safetensors: true
save_strategy: epoch
save_total_limit: 1
sequence_len: 3072
special_tokens: null
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_log_model: null
wandb_name: llama-8b-it_data-preference_mix3_bs128_lr5e-6
wandb_watch: null
warmup_steps: 40
weight_decay: 0.0
xformers_attention: null
```
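
The datasets entry in the config points at RLHFlow/pair-preference-dataset-mix1. A minimal sketch for inspecting that data locally (assuming the Hugging Face datasets library; the exact columns depend on the dataset itself):

```python
from datasets import load_dataset

# Load the train split named in the axolotl config above.
ds = load_dataset("RLHFlow/pair-preference-dataset-mix1", split="train")

# Inspect the schema and a sample record to see the conversation format
# that the sharegpt.load_ultrachat loader consumes.
print(ds.column_names)
print(ds[0])
```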

/var/lib/condor/execute/slot1/dir_873933/output

This model was fine-tuned with axolotl from a local Llama base model checkpoint (the base_model path in the config above) on the RLHFlow/pair-preference-dataset-mix1 dataset.

Model description

More information needed. The config suggests a Llama-3 8B Instruct base (the wandb run name llama-8b-it_data-preference_mix3_bs128_lr5e-6 and the llama-3 conversation template) fine-tuned on pairwise preference data, but the card does not confirm this.

Intended uses & limitations

More information needed
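
No usage guidance is provided. As an illustrative placeholder only, here is a minimal generation sketch using the AutoModelForCausalLM and AutoTokenizer classes named in the config; the repository id RyanYr/pm is an assumption taken from the collection name, and since this looks like a preference model rather than a chat assistant, the prompt is purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RyanYr/pm"  # assumption: the Hub repository id for this model

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # config sets bf16: auto
    trust_remote_code=True,      # matches trust_remote_code: true in the config
)

# Llama-3 chat formatting, matching the llama-3 conversation type in the config.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```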

Training and evaluation data

Per the config, training used the train split of RLHFlow/pair-preference-dataset-mix1 with the llama-3 conversation template. No validation split was held out (val_set_size: 0.0).

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 64 (see the derivation after this list)
  • total_eval_batch_size: 8
  • optimizer: AdamW (adamw_torch_fused) with betas=(0.9, 0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 40
  • num_epochs: 1
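
The total train batch size follows from the per-device micro batch size, gradient accumulation steps, and device count:

```python
# Effective global batch size, computed from the values listed above.
micro_batch_size = 2
gradient_accumulation_steps = 8
num_devices = 4

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 64
```

Note that the wandb run name says bs128, while the values recorded here multiply out to 64.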

Training results

No evaluation results are reported; the config held out no validation data (val_set_size: 0.0) and set no eval_steps.

Framework versions

  • Transformers 4.42.4
  • Pytorch 2.1.2+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
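
To reproduce the training environment, a small sanity check against the versions above (assuming all four packages are importable; PyTorch builds other than cu121 will trip the torch assertion):

```python
# Verify that the local environment matches the versions used for training.
import datasets
import tokenizers
import torch
import transformers

expected = {
    transformers: "4.42.4",
    torch: "2.1.2+cu121",
    datasets: "2.19.1",
    tokenizers: "0.19.1",
}
for module, version in expected.items():
    assert module.__version__ == version, (module.__name__, module.__version__)
```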