RyanYr committed
Commit 5ba238f
1 Parent(s): d74a580

Model save
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter1
+base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2
 library_name: transformers
 model_name: self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-2e-7
 tags:
@@ -11,7 +11,7 @@ licence: license
 
 # Model Card for self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-2e-7
 
-This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter1](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter1).
+This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -27,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/ad8tjcps)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/ayg0m6dv)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
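The card's Quick start section is unchanged by this commit; only its closing context line `print(output["generated_text"])` is visible in the hunk header above. A minimal sketch of that style of usage, assuming the standard `transformers` text-generation pipeline; the model ID is the repository name, while the prompt and generation settings are illustrative rather than taken from the README:

```python
# Minimal sketch, assuming the transformers text-generation pipeline API.
# The prompt and generation settings are placeholders, not the README's actual snippet.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-2e-7",
    device_map="auto",
)

# Chat-style prompt; recent transformers versions accept a list of role/content messages.
messages = [{"role": "user", "content": "What is 12 * 7? Show your reasoning."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```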
last_checkpoint/config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter1",
+  "_name_or_path": "RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2",
   "architectures": [
     "LlamaForCausalLM"
   ],
last_checkpoint/model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6fb2799e5b6f00356faab943f5fa5ea0997abb5074b50513a371fb74a9d5fb4e
+oid sha256:48285efd2a2d12c175f27353e90c8e5b90328b0e53c03de7d05d32ba39fa4863
 size 4965805240
last_checkpoint/model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5eab915dce4a1e12fb5e6b67d44967431e00d3645d931a26a68c0856586271f2
+oid sha256:54999f6bdcab056b7cb203a9afd47a0e2e4a116fb1f6df0d7313c8f534d29a47
 size 2247741136
last_checkpoint/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:48c8404eb5b6ad6a01839595210894121cf1db47e4a8c23b48326688d1bf3296
+oid sha256:84e91f5c83f4b70acc8f8241d7225a10d8bb5ef12e6cef5c1804a1430b7c9ecc
 size 7608
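Per the updated model card, training used DPO via TRL, now starting from the iter2 base model, with a 2e-7 learning rate suggested by the model name. The exact arguments are serialized in `last_checkpoint/training_args.bin` and are not readable from this diff; the sketch below only illustrates an assumed TRL `DPOTrainer` setup with a placeholder preference dataset, not the author's actual training script:

```python
# Minimal sketch of DPO fine-tuning with TRL, assuming the DPOTrainer/DPOConfig API.
# The dataset name and most hyperparameters are placeholders, not taken from this commit.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO expects preference data with "prompt", "chosen", and "rejected" columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # placeholder dataset

config = DPOConfig(
    output_dir="self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-2e-7",
    learning_rate=2e-7,  # suggested by the "-2e-7" suffix in the model name
    beta=0.1,            # TRL's default DPO temperature
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
)
trainer.train()
```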