ExLlamaV2 quant (EXL2 / 4.25 bpw) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| Quant | Model Size | lm_head |
|---|---|---|
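To run this quant locally, here is a minimal loading sketch using the ExLlamaV2 Python API (class and method names as of roughly v0.0.21; the local model path, sampling settings, and prompt are placeholders, not part of this repo):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at a local download of this repo (hypothetical path).
config = ExLlamaV2Config()
config.model_dir = "./shisa-ai_shisa-v1-phi3-14b-4_25bpw_exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # KV cache allocated as layers load
model.load_autosplit(cache)                # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7                 # illustrative sampling choice

# ChatML prompt, matching the chat_template used for fine-tuning (see config below).
prompt = "<|im_start|>user\nWho are you?<|im_end|>\n<|im_start|>assistant\n"
print(generator.generate_simple(prompt, settings, 200))
```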
See axolotl config
axolotl version: 0.4.0
base_model: microsoft/Phi-3-medium-128k-instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
load_in_8bit: false
load_in_4bit: false
strict: false
use_wandb: true
wandb_project: shisa-v2
wandb_entity: augmxnt
wandb_name: shisa-llama3-70b-v1.8e6
chat_template: chatml
datasets:
  - path: augmxnt/ultra-orca-boros-en-ja-v1
    type: sharegpt
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./outputs/phi3-medium-128k-14b.8e6
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
neftune_noise_alpha: 5
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_8bit
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: linear
learning_rate: 0.000008
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: True
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed: axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
special_tokens:
  pad_token: "<|endoftext|>"
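The config trains with `chat_template: chatml` and pads with `<|endoftext|>`, so inference prompts should follow the ChatML layout. A minimal, hand-rolled sketch of that format (the example messages are illustrative; verify the special tokens against the shipped tokenizer_config.json):

```python
def chatml_prompt(messages, add_generation_prompt=True):
    """Assemble a ChatML prompt from {"role", "content"} dicts."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

print(chatml_prompt([
    {"role": "system", "content": "You are a helpful bilingual (EN/JA) assistant."},
    {"role": "user", "content": "こんにちは。自己紹介してください。"},
]))
```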
outputs/phi3-medium-128k-14b.8e6
This model is a fine-tuned version of microsoft/Phi-3-medium-128k-instruct on the augmxnt/ultra-orca-boros-en-ja-v1 dataset (see the axolotl config above). It achieves the following results on the evaluation set:
- Loss: 0.3339
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64 (derived below)
- total_eval_batch_size: 16
- optimizer: paged AdamW (8-bit) with betas=(0.9, 0.95) and epsilon=1e-05
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
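For reference, the total batch sizes above follow directly from the per-device settings; a quick arithmetic check:

```python
# Values taken from the axolotl config / hyperparameter list above.
micro_batch_size = 2               # per-device train (and eval) batch size
gradient_accumulation_steps = 4
num_devices = 8

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = micro_batch_size * num_devices  # no accumulation during evaluation

print(total_train_batch_size)  # 64
print(total_eval_batch_size)   # 16
```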
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 2.8309 | 0.0021 | 1 | 2.3406 |
| 0.7688 | 0.2513 | 121 | 0.4958 |
| 0.6435 | 0.5026 | 242 | 0.3830 |
| 0.5286 | 0.7539 | 363 | 0.3626 |
| 0.5559 | 1.0052 | 484 | 0.3549 |
| 0.4651 | 1.2425 | 605 | 0.3486 |
| 0.5294 | 1.4938 | 726 | 0.3432 |
| 0.5453 | 1.7451 | 847 | 0.3392 |
| 0.5258 | 1.9964 | 968 | 0.3376 |
| 0.4805 | 2.2331 | 1089 | 0.3357 |
| 0.4552 | 2.4844 | 1210 | 0.3352 |
| 0.5358 | 2.7357 | 1331 | 0.3339 |
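Assuming the validation loss is the usual mean per-token cross-entropy in nats (the Hugging Face Trainer convention), the final value maps to a perplexity of roughly exp(0.3339) ≈ 1.40 on the 5% held-out split:

```python
import math

final_val_loss = 0.3339                 # last row of the table above
perplexity = math.exp(final_val_loss)   # valid if the loss is mean token cross-entropy in nats
print(f"{perplexity:.3f}")              # ≈ 1.396
```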
Framework versions
- Transformers 4.40.2
- PyTorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
Model tree for Zoyd/shisa-ai_shisa-v1-phi3-14b-4_25bpw_exl2
Base model: microsoft/Phi-3-medium-128k-instruct