RyanYr committed
Commit 226fbb8
1 Parent(s): f6b3e4a

Model save

README.md CHANGED
@@ -1,7 +1,7 @@
 ---
-base_model: RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter2
+base_model: RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter3_lr1e-7
 library_name: transformers
-model_name: self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter3_lr1e-7
+model_name: self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter4_lr1e-7
 tags:
 - generated_from_trainer
 - trl
@@ -9,9 +9,9 @@ tags:
 licence: license
 ---
 
-# Model Card for self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter3_lr1e-7
+# Model Card for self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter4_lr1e-7
 
-This model is a fine-tuned version of [RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter2](https://huggingface.co/RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter2).
+This model is a fine-tuned version of [RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter3_lr1e-7](https://huggingface.co/RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter3_lr1e-7).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -20,14 +20,14 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 from transformers import pipeline
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter3_lr1e-7", device="cuda")
+generator = pipeline("text-generation", model="RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter4_lr1e-7", device="cuda")
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/gdrfivg0)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/md0w7ied)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
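The card above names the method (DPO via TRL) but not the training script. For orientation, here is a minimal sketch of what one such iteration looks like with TRL's `DPOTrainer` — not the author's actual code. The preference pairs are placeholders, and the learning rate is inferred from the `_lr1e-7` suffix in the model name; argument names vary across TRL versions (older releases take `tokenizer=` instead of `processing_class=`).

```python
# Minimal DPO iteration sketch with TRL; placeholder data, inferred hyperparameters.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Base checkpoint for this iteration, as recorded in the README diff above.
base = "RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter3_lr1e-7"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference data: each row pairs a prompt with a preferred
# ("chosen") and a dispreferred ("rejected") completion.
train_dataset = Dataset.from_dict({
    "prompt": ["What is 15% of 80?"],
    "chosen": ["15% of 80 is 0.15 * 80 = 12."],
    "rejected": ["15% of 80 is 15."],
})

args = DPOConfig(
    output_dir="self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter4_lr1e-7",
    learning_rate=1e-7,  # inferred from the "_lr1e-7" suffix, not confirmed
    per_device_train_batch_size=1,
)
trainer = DPOTrainer(
    model=model,                 # reference model is created internally when omitted
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL releases call this `tokenizer`
)
trainer.train()
```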
last_checkpoint/config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter2",
+  "_name_or_path": "RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter3_lr1e-7",
   "architectures": [
     "MistralForCausalLM"
   ],
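The config diff confirms the checkpoint is a `MistralForCausalLM` whose `_name_or_path` now records the iter3 base. If you want this raw checkpoint rather than the top-level model, `from_pretrained` can target the subfolder directly; a sketch, assuming the shards and config in `last_checkpoint/` are complete:

```python
from transformers import AutoModelForCausalLM

# Load the "last_checkpoint" subfolder updated in this commit.
model = AutoModelForCausalLM.from_pretrained(
    "RyanYr/self-correct_Ministral-8B-Instruct-2410_metaMathQA_dpo_iter4_lr1e-7",
    subfolder="last_checkpoint",
)
```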
last_checkpoint/model-00001-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b63950ac26992f37b69fac54d7257fdf06f817db5c7912ebda1581cddbeeda60
+oid sha256:a307e383b148cd1e63ea4f206644e5d2930a4bfa0e2192a4bdfdb6ec7ee9e5cd
 size 4983016096
last_checkpoint/model-00002-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:478b4753d75a532db9d1811e86b9fec4137f72540c9397e1f8eb86e696a4b945
+oid sha256:0e41928d9aba5bd1e1fe9f469329e2c59603c6f268c4ed9dae8157a8790d722c
 size 4999836776
last_checkpoint/model-00003-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:89ce15203f434e9a1b6da0065e8cc1057369f77e5d34d4871b85c95dd7fbbbc1
+oid sha256:3670140d8801a73d40fa130e15adf4c5cc45830b5d844ef71ae16ca8e6b4f6e5
 size 4983067960
last_checkpoint/model-00004-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9a1c3cbdaeed31d9ec1743500409cb0f311917dbcb15c83b80e67dff94c172a9
+oid sha256:b6ede8c313fefb7942fb139d0e853b9f00116f8a5e2d7f945e1266c8bf07ba57
 size 1073750144
last_checkpoint/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0961bc15186ae81c5777e901bef7d15ec9c87865609f5706bc520911646c0e74
+oid sha256:88af4b6c33d5b8a38584956dd23c9d0fb2d2bde238277ce3724b976369b9b088
 size 8056
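Each pointer file above stores only the `oid sha256:` and `size` of the blob it stands for; the weights themselves live in LFS storage. A quick way to check that a downloaded file matches its pointer is to hash it locally and compare against the oid. A minimal sketch, using the `training_args.bin` oid from this commit (the local path is a placeholder):

```python
import hashlib
from pathlib import Path

def lfs_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256, matching how git-lfs computes its oid."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Expected oid taken from the updated training_args.bin pointer above.
expected = "88af4b6c33d5b8a38584956dd23c9d0fb2d2bde238277ce3724b976369b9b088"
actual = lfs_sha256("last_checkpoint/training_args.bin")  # placeholder local path
print("match" if actual == expected else f"mismatch: {actual}")
```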