---
base_model: pints-ai/1.5-Pints-16K-v0.1
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tangledgroup/tangled-llama-pints-1.5b-v0.2-instruct
  results: []
datasets:
- tangledgroup/tangled-llama-pints-1.5b-v0.2-dataset
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: pints-ai/1.5-Pints-16K-v0.1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: tangledgroup/tangled-llama-pints-1.5b-v0.2-dataset
    type: sharegpt
    conversation: chatml
chat_template: chatml
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/qlora-out

adapter: qlora
lora_model_dir:

sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
# optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

loss_watchdog_threshold: 15.0
loss_watchdog_patience: 3

warmup_steps: 10
evals_per_epoch: 3
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
```

</details><br>
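
For readers who do not use axolotl, the following is a rough PEFT/bitsandbytes equivalent of the quantization and adapter settings above. It is a minimal sketch, not the exact training code: axolotl resolves `lora_target_linear: true` to the model's linear layers at runtime, which PEFT's `"all-linear"` shorthand only approximates, and the bf16 compute dtype is an assumption here.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantization, mirroring `load_in_4bit: true` (compute dtype assumed bf16)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "pints-ai/1.5-Pints-16K-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # standard QLoRA prep for a quantized base

# LoRA hyperparameters taken from the config above
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",  # approximates lora_target_linear: true
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()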

# outputs/qlora-out

This model is a fine-tuned version of [pints-ai/1.5-Pints-16K-v0.1](https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1) on the [tangledgroup/tangled-llama-pints-1.5b-v0.2-dataset](https://huggingface.co/datasets/tangledgroup/tangled-llama-pints-1.5b-v0.2-dataset) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9847

## Model description

A QLoRA (4-bit) instruction-tuning adapter for [pints-ai/1.5-Pints-16K-v0.1](https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1), a 1.5B-parameter model with a 16K-token context window. Conversations were formatted with the ChatML template during training.

## Intended uses & limitations

Intended for chat-style instruction following using the ChatML prompt format. The Open LLM Leaderboard results below give a sense of its current capability; as a small model it scores low on knowledge- and reasoning-heavy benchmarks.
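
A minimal inference sketch, assuming the repository ships a tokenizer carrying the ChatML chat template used in training (the prompt below is illustrative):

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the adapter together with its base model
model = AutoPeftModelForCausalLM.from_pretrained(
    "tangledgroup/tangled-llama-pints-1.5b-v0.2-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("tangledgroup/tangled-llama-pints-1.5b-v0.2-instruct")

# The model was trained on ChatML conversations (chat_template: chatml)
messages = [{"role": "user", "content": "Explain LoRA in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```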

## Training and evaluation data

The model was trained on [tangledgroup/tangled-llama-pints-1.5b-v0.2-dataset](https://huggingface.co/datasets/tangledgroup/tangled-llama-pints-1.5b-v0.2-dataset), a ShareGPT-format conversation dataset, with 5% of the data held out as the evaluation split (`val_set_size: 0.05`).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: paged AdamW 32-bit (`paged_adamw_32bit`) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
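
The total train batch size follows from micro_batch_size × gradient_accumulation_steps = 2 × 4 = 8. For reference, a sketch of roughly equivalent `transformers` `TrainingArguments`; axolotl configures more than is shown here, so treat this as illustrative:

```python
from transformers import TrainingArguments

# Values taken from the hyperparameter list above
args = TrainingArguments(
    output_dir="./outputs/qlora-out",
    per_device_train_batch_size=2,  # micro_batch_size
    gradient_accumulation_steps=4,  # effective batch size: 2 * 4 = 8
    num_train_epochs=3,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    optim="paged_adamw_32bit",      # paged AdamW from bitsandbytes
    weight_decay=0.0,
    bf16=True,                      # bf16: auto resolves to bf16 on supported GPUs
    gradient_checkpointing=True,
    logging_steps=1,
    seed=42,
)
```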

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1396        | 0.0011 | 1    | 1.1313          |
| 1.0777        | 0.3332 | 295  | 1.0278          |
| 1.0219        | 0.6665 | 590  | 1.0119          |
| 1.0006        | 0.9997 | 885  | 1.0020          |
| 1.0385        | 1.3307 | 1180 | 0.9954          |
| 0.9405        | 1.6639 | 1475 | 0.9902          |
| 0.9249        | 1.9972 | 1770 | 0.9867          |
| 0.9951        | 2.3282 | 2065 | 0.9856          |
| 0.9713        | 2.6616 | 2360 | 0.9848          |
| 0.9576        | 2.9949 | 2655 | 0.9847          |

### Framework versions

- PEFT 0.12.0
- Transformers 4.45.0.dev0
- Pytorch 2.4.1
- Datasets 2.21.0
- Tokenizers 0.19.1

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_tangledgroup__tangled-llama-pints-1.5b-v0.2-instruct).

| Metric              | Value |
|---------------------|------:|
| Avg.                |  4.66 |
| IFEval (0-shot)     | 17.24 |
| BBH (3-shot)        |  4.08 |
| MATH Lvl 5 (4-shot) |  0.76 |
| GPQA (0-shot)       |  0.00 |
| MuSR (0-shot)       |  4.57 |
| MMLU-PRO (5-shot)   |  1.30 |