See axolotl config

axolotl version: `0.6.0`

```yaml
datasets:
  - path: phunguyen01/OpenR1-Math-220k-default-train-10k
    subset: default
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
    roles:
      system:
        - system
      user:
        - user
      assistant:
        - assistant
test_datasets: null
sequence_len: 8192
chat_template: qwen_25
base_model: Qwen/Qwen2.5-3B
micro_batch_size: 4
num_epochs: 2
learning_rate: 5.0e-06
output_dir: experiments/Qwen-3B-R1-Math-10k
dataset_prepared_path: experiments/Qwen-3B-R1-Math-10k/dataset_prepared
seed: 42
gradient_accumulation_steps: 1
gradient_checkpointing: true
flash_attention: true
train_on_inputs: false
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
bf16: auto
logging_steps: 5
lr_scheduler: cosine
warmup_ratio: 0.1
weight_decay: 0.01
save_strategy: epoch
evals_per_epoch: 1
wandb_project: i1
wandb_name: Qwen-3B-R1-Math-10k
hub_model_id: phunguyen01/Qwen-3B-R1-Math-10k
push_to_hub: true
special_tokens:
  eos_token: <|im_end|>
  additional_special_tokens: ["<think>", "</think>"]
deepspeed: zero2.json
```
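The `datasets` block above maps each example's `messages` list onto the `qwen_25` chat template. A minimal sketch of what that mapping consumes, assuming the dataset follows the `messages`/`role`/`content` schema declared in the config (the field names come straight from that config; the tokenizer-template behavior is an assumption, since axolotl applies its own `qwen_25` template internally):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Pull the 10k training split referenced in the config.
ds = load_dataset("phunguyen01/OpenR1-Math-220k-default-train-10k", split="train")

# Each row should carry a `messages` list of {role, content} dicts,
# matching field_messages / message_field_role / message_field_content above.
example = ds[0]
print(example["messages"][0]["role"])  # e.g. "system" or "user"

# Render one conversation roughly the way the qwen_25 template does during
# training. Assumption: the base model's tokenizer ships a compatible
# ChatML-style template.
tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")
print(tok.apply_chat_template(example["messages"], tokenize=False))
```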
# Qwen-3B-R1-Math-10k
This model is a fine-tuned version of Qwen/Qwen2.5-3B on the phunguyen01/OpenR1-Math-220k-default-train-10k dataset.
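Because training registered `<think>` and `</think>` as special tokens and used the ChatML-style `qwen_25` template, generation goes through the chat template. A hedged usage sketch with standard transformers APIs (the prompt and sampling settings are illustrative, not taken from the training run):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phunguyen01/Qwen-3B-R1-Math-10k"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Solve: what is 17 * 23?"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The model was tuned on reasoning traces, so it may emit its working between
# <think> ... </think> before the final answer; leave room in max_new_tokens.
out = model.generate(inputs, max_new_tokens=1024)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=False))
```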
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (`adamw_hf`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 49
- num_epochs: 2.0
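The derived values above follow directly from the config; a quick check of the arithmetic (the ~490 total optimizer steps figure is an inference from the warmup ratio, not reported by the run):

```python
micro_batch_size = 4
num_devices = 8
gradient_accumulation_steps = 1

# total_train_batch_size = 4 * 8 * 1 = 32, matching the value reported above.
total_train_batch_size = micro_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 32

# warmup_ratio 0.1 with 49 warmup steps implies roughly 490 optimizer steps
# across the 2 epochs (an estimate: sample packing makes the exact step
# count data-dependent).
warmup_steps = 49
approx_total_steps = warmup_steps / 0.1
print(total_train_batch_size, approx_total_steps)  # 32 490.0
```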
### Training results

### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0