NeuralNovel committed
Commit d80480a
1 Parent(s): c780db7

Delete .ipynb_checkpoints

.ipynb_checkpoints/README-checkpoint.md DELETED
@@ -1,130 +0,0 @@
- ---
- tags:
- - generated_from_trainer
- model-index:
- - name: out
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- <details><summary>See axolotl config</summary>
-
- axolotl version: `0.4.0`
- ```yaml
- base_model: out/Mistral-DPO
- model_type: AutoModelForCausalLM
- tokenizer_type: AutoTokenizer
- is_mistral_derived_model: true
-
- load_in_8bit: false
- load_in_4bit: false
- strict: false
-
- rl: dpo
- datasets:
-   - path: NeuralNovel/Neural-DPO
-     type: chatml.intel
-     split: train
-     format: "[INST] {instruction} [/INST]"
-     no_input_format: "[INST] {instruction} [/INST]"
-
- dataset_prepared_path:
- val_set_size: 0.05
- output_dir: ./out
-
- sequence_len: 8192
- sample_packing: false
- pad_to_sequence_len: true
- eval_sample_packing: false
-
- wandb_project:
- wandb_entity:
- wandb_watch:
- wandb_name:
- wandb_log_model:
-
- gradient_accumulation_steps: 4
- micro_batch_size: 2
- num_epochs: 6
- optimizer: adamw_bnb_8bit
- lr_scheduler: cosine
- learning_rate: 0.000005
-
- train_on_inputs: false
- group_by_length: false
- bf16: auto
- fp16:
- tf32: false
-
- gradient_checkpointing: true
- early_stopping_patience:
- resume_from_checkpoint:
- local_rank:
- logging_steps: 1
- xformers_attention:
- flash_attention: true
-
- warmup_steps: 10
- evals_per_epoch: 4
- eval_table_size:
- eval_max_new_tokens: 128
- saves_per_epoch: 0
- debug:
- deepspeed:
- weight_decay: 0.0
- fsdp:
- fsdp_config:
- special_tokens:
-   bos_token: "<s>"
-   eos_token: "</s>"
-   unk_token: "<unk>"
-
- ```
-
- </details><br>
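
Editor's note (not part of the deleted card): with axolotl `0.4.0`, a config like the one above is typically launched via `accelerate launch -m axolotl.cli.train config.yml`. A minimal inference sketch for the resulting checkpoint follows; the `./out` path comes from the config's `output_dir`, and the prompt string is a hypothetical example using the config's `[INST]` format:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "./out" is the output_dir from the config above; adjust to wherever the
# checkpoint actually lives (the path here is an assumption).
model = AutoModelForCausalLM.from_pretrained("./out")
tokenizer = AutoTokenizer.from_pretrained("./out")

# The config formats prompts as "[INST] {instruction} [/INST]".
prompt = "[INST] Summarize direct preference optimization in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```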
-
- # out
-
- This model was trained from scratch on an unknown dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-06
- - train_batch_size: 2
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 8
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 10
- - training_steps: 801
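
Editor's note (not part of the deleted card): the reported `total_train_batch_size` follows from `train_batch_size` × `gradient_accumulation_steps` = 2 × 4 = 8, i.e. a single-device run. The card lists plain Adam while the config above specifies `adamw_bnb_8bit`; the sketch below approximates the same schedule with stock PyTorch and `transformers` utilities, and is illustrative only:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Stand-in parameters; in practice pass model.parameters().
params = [torch.nn.Parameter(torch.zeros(1))]

# AdamW approximates the card's Adam(betas=(0.9, 0.999), eps=1e-8); the actual
# run used bitsandbytes' 8-bit AdamW (adamw_bnb_8bit in the config above).
optimizer = torch.optim.AdamW(
    params, lr=5e-6, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)

# Cosine decay with 10 warmup steps over the 801 reported training steps.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=801
)
```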
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.38.0.dev0
- - Pytorch 2.2.0+cu121
- - Datasets 2.17.1
- - Tokenizers 0.15.0
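
Editor's note: `4.38.0.dev0` is a development build of Transformers, which implies an install from source (for example `pip install git+https://github.com/huggingface/transformers`) rather than a PyPI release.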