---
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama31-it-preference_data_v2_800K_wsafety
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
See axolotl config

axolotl version: `0.4.1`

```yaml
adapter: null
base_model: /var/lib/condor/execute/slot1/dir_2782837/llama31_pretrain_pad
bf16: auto
dataset_prepared_path: /var/lib/condor/execute/slot1/dir_2782837/prepare
dataset_processes: 48
datasets:
- conversation: llama-3
  path: RLHFlow/preference_data_v2_80K_wsafety
  split: train
  train_on_split: train
  type: sharegpt.load_ultrachat
ddp: null
debug: null
deepspeed: null
early_stopping_patience: null
eval_steps: null
eval_table_max_new_tokens: null
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: RyanYr/llama31-it-preference_data_v2_800K_wsafety
hub_strategy: every_save
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 2
lora_model_dir: null
lr_scheduler: cosine
max_grad_norm: 1.0
micro_batch_size: 2
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch_fused
output_dir: /var/lib/condor/execute/slot1/dir_2782837/output-08-11-2024-18:22
pad_to_sequence_len: true
sample_packing: true
save_safetensors: true
save_steps: 100
save_strategy: steps
save_total_limit: 1
sequence_len: 2048
special_tokens: null
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: yyr
wandb_log_model: null
wandb_name: llama31-8b-it_preference_data_v2_80K_wsafety
wandb_project: preference-models
wandb_watch: null
warmup_steps: 40
weight_decay: 0.0
xformers_attention: null
```
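For completeness, the sketch below shows how a run with this config is typically launched under Axolotl 0.4.x. It is a hedged assumption, not part of the original card: the config filename (`config.yaml`) and the launcher invocation are illustrative.

```python
# Hypothetical reproduction sketch (Axolotl 0.4.x): assumes the YAML above is saved
# locally as config.yaml and that axolotl and accelerate are installed.
import subprocess

# Optional: tokenize/pack the dataset ahead of time (writes to dataset_prepared_path).
subprocess.run(["python", "-m", "axolotl.cli.preprocess", "config.yaml"], check=True)

# Launch training across the available GPUs via accelerate.
subprocess.run(["accelerate", "launch", "-m", "axolotl.cli.train", "config.yaml"], check=True)
```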

# llama31-it-preference_data_v2_800K_wsafety

This model was fine-tuned from a local Llama 3.1 checkpoint (`llama31_pretrain_pad`) on the RLHFlow/preference_data_v2_80K_wsafety dataset (see the Axolotl config above).

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.44.0
- Pytorch 2.1.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
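The card gives no usage instructions. The following is a minimal, hedged sketch of how the published checkpoint could be loaded with Transformers, assuming it behaves as a standard Llama-3-style causal LM (per `model_type: AutoModelForCausalLM` and the `llama-3` conversation format in the config). The prompt, dtype, and generation settings are illustrative assumptions, not values from the training run.

```python
# Illustrative loading sketch (not from the original card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RyanYr/llama31-it-preference_data_v2_800K_wsafety"  # hub_model_id from the config

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference, mirroring `bf16: auto` in training
    device_map="auto",
)

# Assumes the tokenizer ships a Llama-3 chat template; the prompt is a placeholder.
messages = [{"role": "user", "content": "Explain what a preference dataset is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```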