---
tags:
- generated_from_trainer
base_model: Locutusque/TinyMistral-248M-v2
model-index:
- name: TinyMistral-v2-Test1/
results: []
datasets:
- JeanKaddour/minipile
- epfl-llm/guidelines
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.3.0`
```yaml
base_model: Locutusque/TinyMistral-248M-v2
model_type: MistralForCausalLM
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
dataset_processes: 20
datasets:
- path: epfl-llm/guidelines
type: completion
field: clean_text
- path: JeanKaddour/minipile
type: completion
field: text
dataset_prepared_path: TinyMistral-FFT-data
val_set_size: 0.001
output_dir: ./TinyMistral-FFT
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: true
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
# wandb configuration
wandb_project: TinyMistral-FFT
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_32bit
lr_scheduler: constant
cosine_min_lr_ratio:
learning_rate: 0.00005
train_on_inputs: true
group_by_length: false
bf16: false
fp16: false
tf32: true
gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
auto_resume_from_checkpoints: false
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
flash_attn_cross_entropy: false
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: true
warmup_steps: 10
evals_per_epoch: 100
# eval_steps: 10
eval_table_size:
saves_per_epoch: 50
debug:
deepspeed: #deepspeed/zero2.json # multi-gpu only
weight_decay: 0
# tokens:
special_tokens:
bos_token: "<|bos|>"
eos_token: "<|endoftext|>"
unk_token: "<unk>"
```
</details><br>
# TinyMistral-StructureEvaluator
This model is a full-parameter fine-tune of Locutusque/TinyMistral-248M-v2, further trained on the epfl-llm/guidelines and JeanKaddour/minipile datasets.
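A minimal text-generation sketch with the Hugging Face Transformers API is shown below. The repository id is an assumption based on this card's name and the base model's namespace, and may differ from where the checkpoint is actually hosted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id (namespace borrowed from the base model); adjust as needed.
repo_id = "Locutusque/TinyMistral-v2-Test1"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# The axolotl config above sets eos_token to "<|endoftext|>".
inputs = tokenizer("Clinical practice guidelines recommend", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```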
## Model description
A 248M-parameter Mistral-architecture causal language model (`MistralForCausalLM`), continued from Locutusque/TinyMistral-248M-v2 via a full-parameter fine-tune; see the axolotl config above for the exact setup.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data consists of the `clean_text` field of epfl-llm/guidelines and the `text` field of JeanKaddour/minipile, both treated as plain completion (causal language modeling) text. Per the config above, 0.1% of the prepared data (`val_set_size: 0.001`) was held out for evaluation.
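A rough sketch of pulling the same two text fields with the `datasets` library (the guidelines dataset may require accepting its terms of use on the Hub):

```python
from datasets import load_dataset

# Same sources and text fields as the axolotl dataset config above.
guidelines = load_dataset("epfl-llm/guidelines", split="train")
minipile = load_dataset("JeanKaddour/minipile", split="train")

# axolotl's "completion" type reads one plain-text field per dataset.
sample_texts = [row["clean_text"] for row in guidelines.select(range(2))]
sample_texts += [row["text"] for row in minipile.select(range(2))]
print(sample_texts[0][:200])
```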
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Paged AdamW (32-bit) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 39460
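
The effective batch size and a rough upper bound on tokens processed follow directly from these values; a quick arithmetic sketch (assuming every example is padded to the full 2048-token context, per `pad_to_sequence_len: true`):

```python
micro_batch_size = 1
gradient_accumulation_steps = 8
sequence_len = 2048
training_steps = 39_460

total_train_batch_size = micro_batch_size * gradient_accumulation_steps  # 8
max_tokens_processed = total_train_batch_size * sequence_len * training_steps
print(total_train_batch_size, f"{max_tokens_processed:,}")  # 8  646,512,640
```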
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0