yimingliang committed on
Commit
70aaedb
1 Parent(s): 3b22eb7

Upload 117 files

Browse files
This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. qwen4B_models/qwen_4B_d0_iter1_model/README.md +67 -0
  2. qwen4B_models/qwen_4B_d0_iter1_model/adapter_config.json +34 -0
  3. qwen4B_models/qwen_4B_d0_iter1_model/adapter_model.safetensors +3 -0
  4. qwen4B_models/qwen_4B_d0_iter1_model/added_tokens.json +5 -0
  5. qwen4B_models/qwen_4B_d0_iter1_model/all_results.json +12 -0
  6. qwen4B_models/qwen_4B_d0_iter1_model/eval_results.json +7 -0
  7. qwen4B_models/qwen_4B_d0_iter1_model/lora.log +0 -0
  8. qwen4B_models/qwen_4B_d0_iter1_model/merge.log +7 -0
  9. qwen4B_models/qwen_4B_d0_iter1_model/merges.txt +0 -0
  10. qwen4B_models/qwen_4B_d0_iter1_model/runs/Jul15_07-59-39_t-20240715144706-mwmm8-worker-0/events.out.tfevents.1721030446.t-20240715144706-mwmm8-worker-0.40318.0 +3 -0
  11. qwen4B_models/qwen_4B_d0_iter1_model/runs/Jul15_07-59-39_t-20240715144706-mwmm8-worker-0/events.out.tfevents.1721030806.t-20240715144706-mwmm8-worker-0.40318.1 +3 -0
  12. qwen4B_models/qwen_4B_d0_iter1_model/special_tokens_map.json +27 -0
  13. qwen4B_models/qwen_4B_d0_iter1_model/tokenizer_config.json +45 -0
  14. qwen4B_models/qwen_4B_d0_iter1_model/train_results.json +8 -0
  15. qwen4B_models/qwen_4B_d0_iter1_model/trainer_log.jsonl +13 -0
  16. qwen4B_models/qwen_4B_d0_iter1_model/trainer_state.json +120 -0
  17. qwen4B_models/qwen_4B_d0_iter1_model/training_args.bin +3 -0
  18. qwen4B_models/qwen_4B_d0_iter1_model/training_eval_loss.png +0 -0
  19. qwen4B_models/qwen_4B_d0_iter1_model/training_loss.png +0 -0
  20. qwen4B_models/qwen_4B_d0_iter1_model/vocab.json +0 -0
  21. qwen4B_models/qwen_4B_d1_iter2_model/README.md +67 -0
  22. qwen4B_models/qwen_4B_d1_iter2_model/adapter_config.json +34 -0
  23. qwen4B_models/qwen_4B_d1_iter2_model/adapter_model.safetensors +3 -0
  24. qwen4B_models/qwen_4B_d1_iter2_model/added_tokens.json +5 -0
  25. qwen4B_models/qwen_4B_d1_iter2_model/all_results.json +12 -0
  26. qwen4B_models/qwen_4B_d1_iter2_model/eval_results.json +7 -0
  27. qwen4B_models/qwen_4B_d1_iter2_model/lora.log +0 -0
  28. qwen4B_models/qwen_4B_d1_iter2_model/merge.log +7 -0
  29. qwen4B_models/qwen_4B_d1_iter2_model/merges.txt +0 -0
  30. qwen4B_models/qwen_4B_d1_iter2_model/runs/Jul15_10-01-16_t-20240715144706-mwmm8-worker-0/events.out.tfevents.1721037741.t-20240715144706-mwmm8-worker-0.91732.0 +3 -0
  31. qwen4B_models/qwen_4B_d1_iter2_model/runs/Jul15_10-01-16_t-20240715144706-mwmm8-worker-0/events.out.tfevents.1721038213.t-20240715144706-mwmm8-worker-0.91732.1 +3 -0
  32. qwen4B_models/qwen_4B_d1_iter2_model/special_tokens_map.json +27 -0
  33. qwen4B_models/qwen_4B_d1_iter2_model/tokenizer_config.json +45 -0
  34. qwen4B_models/qwen_4B_d1_iter2_model/train_results.json +8 -0
  35. qwen4B_models/qwen_4B_d1_iter2_model/trainer_log.jsonl +16 -0
  36. qwen4B_models/qwen_4B_d1_iter2_model/trainer_state.json +141 -0
  37. qwen4B_models/qwen_4B_d1_iter2_model/training_args.bin +3 -0
  38. qwen4B_models/qwen_4B_d1_iter2_model/training_eval_loss.png +0 -0
  39. qwen4B_models/qwen_4B_d1_iter2_model/training_loss.png +0 -0
  40. qwen4B_models/qwen_4B_d1_iter2_model/vocab.json +0 -0
  41. qwen4B_models/qwen_4B_d2_iter3_model/README.md +67 -0
  42. qwen4B_models/qwen_4B_d2_iter3_model/adapter_config.json +34 -0
  43. qwen4B_models/qwen_4B_d2_iter3_model/adapter_model.safetensors +3 -0
  44. qwen4B_models/qwen_4B_d2_iter3_model/added_tokens.json +5 -0
  45. qwen4B_models/qwen_4B_d2_iter3_model/all_results.json +12 -0
  46. qwen4B_models/qwen_4B_d2_iter3_model/eval_results.json +7 -0
  47. qwen4B_models/qwen_4B_d2_iter3_model/lora.log +0 -0
  48. qwen4B_models/qwen_4B_d2_iter3_model/merge.log +7 -0
  49. qwen4B_models/qwen_4B_d2_iter3_model/merges.txt +0 -0
  50. qwen4B_models/qwen_4B_d2_iter3_model/runs/Jul15_12-07-19_t-20240715144706-mwmm8-worker-0/events.out.tfevents.1721045373.t-20240715144706-mwmm8-worker-0.144068.0 +3 -0
qwen4B_models/qwen_4B_d0_iter1_model/README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ license: other
+ library_name: peft
+ tags:
+ - llama-factory
+ - lora
+ - generated_from_trainer
+ base_model: /ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B
+ model-index:
+ - name: qwen_4B_d0_iter1_model
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # qwen_4B_d0_iter1_model
+
+ This model is a fine-tuned version of [/ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B](https://huggingface.co//ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B) on the qwen_4B_d0_iter1_model dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.9073
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 16
+ - total_eval_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 20
+ - num_epochs: 2.0
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.8416        | 1.9608 | 100  | 0.9073          |
+
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.41.2
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.19.1
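
Note that the card describes a LoRA adapter, not a standalone checkpoint, and that the effective batch size follows from the listed settings: total_train_batch_size 16 = train_batch_size 1 × num_devices 8 × gradient_accumulation_steps 2. Since the adapter must be attached to the base model at load time, here is a minimal loading sketch against the listed framework versions (PEFT 0.10.0, Transformers 4.41.2); the public `Qwen/Qwen1.5-4B` checkpoint is an assumed stand-in for the private base-model path in the card, and the adapter directory is the one uploaded in this commit:

```python
# Sketch: attach the uploaded LoRA adapter to an assumed public base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-4B"  # assumption: public stand-in for the private base path
adapter_dir = "qwen4B_models/qwen_4B_d0_iter1_model"  # adapter files from this commit

tokenizer = AutoTokenizer.from_pretrained(adapter_dir)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_dir)  # reads adapter_config.json + adapter_model.safetensors

# ChatML-style prompt matching the tokenizer's chat_template below
prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```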
qwen4B_models/qwen_4B_d0_iter1_model/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "/ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "v_proj",
+     "down_proj",
+     "k_proj",
+     "gate_proj",
+     "up_proj",
+     "q_proj",
+     "o_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
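
This JSON is the serialized form of PEFT's `LoraConfig`: rank-8 adapters with alpha 16 (an effective scale of alpha / r = 2.0) on every attention and MLP projection. A sketch of the equivalent config object, as one might construct it to reproduce the setup (not the authors' training script):

```python
# Sketch: the LoraConfig corresponding to the adapter_config.json above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,             # LoRA rank ("r" in the JSON)
    lora_alpha=16,   # scaling numerator; effective scale = alpha / r = 2.0
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # all attn + MLP projections
)
```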
qwen4B_models/qwen_4B_d0_iter1_model/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8175c84e0e918587940396bf30b208103503e4658dd185b843cd045f35004eea
+ size 31367672
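
These three lines are not the weights themselves but a Git LFS pointer: the real `adapter_model.safetensors` (about 31 MB) is stored out of band and addressed by its SHA-256. A downloaded copy can be checked against the pointer; a small sketch, where the local filename is an assumption:

```python
# Sketch: verify a downloaded file against its Git LFS pointer (oid sha256:... / size ...).
import hashlib
import os

path = "adapter_model.safetensors"  # assumption: local download location
expected_oid = "8175c84e0e918587940396bf30b208103503e4658dd185b843cd045f35004eea"
expected_size = 31367672

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert h.hexdigest() == expected_oid, "sha256 mismatch"
```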
qwen4B_models/qwen_4B_d0_iter1_model/added_tokens.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "<|endoftext|>": 151643,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644
+ }
qwen4B_models/qwen_4B_d0_iter1_model/all_results.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "epoch": 2.0,
+   "eval_loss": 0.9073211550712585,
+   "eval_runtime": 5.944,
+   "eval_samples_per_second": 15.141,
+   "eval_steps_per_second": 2.019,
+   "total_flos": 22867310280704.0,
+   "train_loss": 1.0144876452053295,
+   "train_runtime": 349.3742,
+   "train_samples_per_second": 4.631,
+   "train_steps_per_second": 0.292
+ }
qwen4B_models/qwen_4B_d0_iter1_model/eval_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "epoch": 2.0,
+   "eval_loss": 0.9073211550712585,
+   "eval_runtime": 5.944,
+   "eval_samples_per_second": 15.141,
+   "eval_steps_per_second": 2.019
+ }
qwen4B_models/qwen_4B_d0_iter1_model/lora.log ADDED
The diff for this file is too large to render. See raw diff
 
qwen4B_models/qwen_4B_d0_iter1_model/merge.log ADDED
@@ -0,0 +1,7 @@
+ [2024-07-15 08:06:57,661] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
+ 07/15/2024 08:06:59 - INFO - llmtuner.data.template - Add <|im_end|>,<|endoftext|> to stop words.
+ 07/15/2024 08:06:59 - INFO - llmtuner.model.patcher - Using KV cache for faster generation.
+ 07/15/2024 08:07:00 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
+ 07/15/2024 08:07:03 - INFO - llmtuner.model.adapter - Merged 1 adapter(s).
+ 07/15/2024 08:07:03 - INFO - llmtuner.model.adapter - Loaded adapter(s): /ML-A100/team/mm/eamon/self_instruction/seed_ppl/qwen4B_models/qwen_4B_d0_iter1_model
+ 07/15/2024 08:07:03 - INFO - llmtuner.model.loader - all params: 3950369280
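
This log records LLaMA-Factory folding the LoRA adapter back into the base weights ("Merged 1 adapter(s)"; the final parameter count, 3950369280, is the full ~4B model). The same merge can be done directly with PEFT; a sketch, with paths assumed as in the loading example above:

```python
# Sketch: the merge step the log records, done directly with PEFT (paths are assumptions).
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-4B", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "qwen4B_models/qwen_4B_d0_iter1_model")
merged = model.merge_and_unload()  # folds W + (alpha/r) * B @ A into the base weights
merged.save_pretrained("qwen_4B_d0_iter1_merged")  # standalone checkpoint, no adapter needed
```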
qwen4B_models/qwen_4B_d0_iter1_model/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
qwen4B_models/qwen_4B_d0_iter1_model/runs/Jul15_07-59-39_t-20240715144706-mwmm8-worker-0/events.out.tfevents.1721030446.t-20240715144706-mwmm8-worker-0.40318.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:944398a9a11683d59b27cd3a49d1dc3f15d334fe687b1d5c723df96bc176a13c
+ size 8006
qwen4B_models/qwen_4B_d0_iter1_model/runs/Jul15_07-59-39_t-20240715144706-mwmm8-worker-0/events.out.tfevents.1721030806.t-20240715144706-mwmm8-worker-0.40318.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f56a1f13348f9fafaaabdcae298ba36f54aa16bdec9112769d4b0af92b6b3f3f
+ size 354
qwen4B_models/qwen_4B_d0_iter1_model/special_tokens_map.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     }
+   ],
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
qwen4B_models/qwen_4B_d0_iter1_model/tokenizer_config.json ADDED
@@ -0,0 +1,45 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|endoftext|>"
+   ],
+   "bos_token": null,
+   "chat_template": "{% set system_message = 'You are a helpful assistant.' %}{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{% if system_message is defined %}{{ '<|im_start|>system\\n' + system_message + '<|im_end|>\\n' }}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|im_start|>user\\n' + content + '<|im_end|>\\n<|im_start|>assistant\\n' }}{% elif message['role'] == 'assistant' %}{{ content + '<|endoftext|>' + '\\n' }}{% endif %}{% endfor %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "errors": "replace",
+   "model_max_length": 32768,
+   "pad_token": "<|endoftext|>",
+   "padding_side": "right",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
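
The `chat_template` above is ChatML-style Jinja: system and user turns are wrapped in `<|im_start|>`/`<|im_end|>`, each user turn is followed immediately by an `<|im_start|>assistant\n` header (so the template itself supplies the generation prompt), and assistant turns are closed with `<|endoftext|>`, matching the `eos_token`. A sketch of rendering it through the tokenizer, with the adapter path assumed as before:

```python
# Sketch: render the chat_template above via the tokenizer (path is an assumption).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("qwen4B_models/qwen_4B_d0_iter1_model")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is LoRA?"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False)
print(text)
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# What is LoRA?<|im_end|>
# <|im_start|>assistant
```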
qwen4B_models/qwen_4B_d0_iter1_model/train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "epoch": 2.0,
+   "total_flos": 22867310280704.0,
+   "train_loss": 1.0144876452053295,
+   "train_runtime": 349.3742,
+   "train_samples_per_second": 4.631,
+   "train_steps_per_second": 0.292
+ }
qwen4B_models/qwen_4B_d0_iter1_model/trainer_log.jsonl ADDED
@@ -0,0 +1,13 @@
+ {"current_steps": 10, "total_steps": 102, "loss": 1.302, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 2.5e-05, "epoch": 0.19607843137254902, "percentage": 9.8, "elapsed_time": "0:00:48", "remaining_time": "0:07:25"}
+ {"current_steps": 20, "total_steps": 102, "loss": 1.2195, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 5e-05, "epoch": 0.39215686274509803, "percentage": 19.61, "elapsed_time": "0:01:19", "remaining_time": "0:05:25"}
+ {"current_steps": 30, "total_steps": 102, "loss": 0.9886, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 4.8187561277552374e-05, "epoch": 0.5882352941176471, "percentage": 29.41, "elapsed_time": "0:01:50", "remaining_time": "0:04:24"}
+ {"current_steps": 40, "total_steps": 102, "loss": 1.0972, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 4.301303984001967e-05, "epoch": 0.7843137254901961, "percentage": 39.22, "elapsed_time": "0:02:21", "remaining_time": "0:03:39"}
+ {"current_steps": 50, "total_steps": 102, "loss": 0.9888, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 3.5226715929283506e-05, "epoch": 0.9803921568627451, "percentage": 49.02, "elapsed_time": "0:02:52", "remaining_time": "0:02:59"}
+ {"current_steps": 60, "total_steps": 102, "loss": 0.9882, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 2.595756834225089e-05, "epoch": 1.1764705882352942, "percentage": 58.82, "elapsed_time": "0:03:23", "remaining_time": "0:02:22"}
+ {"current_steps": 70, "total_steps": 102, "loss": 0.9159, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 1.6549578039787436e-05, "epoch": 1.3725490196078431, "percentage": 68.63, "elapsed_time": "0:03:54", "remaining_time": "0:01:47"}
+ {"current_steps": 80, "total_steps": 102, "loss": 0.8981, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 8.36685749586087e-06, "epoch": 1.5686274509803921, "percentage": 78.43, "elapsed_time": "0:04:25", "remaining_time": "0:01:13"}
+ {"current_steps": 90, "total_steps": 102, "loss": 0.9455, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 2.595861075973613e-06, "epoch": 1.7647058823529411, "percentage": 88.24, "elapsed_time": "0:04:56", "remaining_time": "0:00:39"}
+ {"current_steps": 100, "total_steps": 102, "loss": 0.8416, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 7.335497040648898e-08, "epoch": 1.9607843137254903, "percentage": 98.04, "elapsed_time": "0:05:27", "remaining_time": "0:00:06"}
+ {"current_steps": 100, "total_steps": 102, "loss": null, "eval_loss": 0.9073211550712585, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": null, "epoch": 1.9607843137254903, "percentage": 98.04, "elapsed_time": "0:05:27", "remaining_time": "0:00:06"}
+ {"current_steps": 102, "total_steps": 102, "loss": null, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": null, "epoch": 2.0, "percentage": 100.0, "elapsed_time": "0:05:49", "remaining_time": "0:00:00"}
+ {"current_steps": 12, "total_steps": 12, "loss": null, "eval_loss": 0.9073211550712585, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": null, "epoch": 2.0, "percentage": 100.0, "elapsed_time": "0:06:00", "remaining_time": "0:00:00"}
qwen4B_models/qwen_4B_d0_iter1_model/trainer_state.json ADDED
@@ -0,0 +1,120 @@
+ {
+   "best_metric": 0.9073211550712585,
+   "best_model_checkpoint": "/ML-A100/team/mm/eamon/self_instruction/seed_ppl/qwen4B_models/qwen_4B_d0_iter1_model/checkpoint-100",
+   "epoch": 2.0,
+   "eval_steps": 100,
+   "global_step": 102,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.19607843137254902,
+       "grad_norm": 1.8514470549546265,
+       "learning_rate": 2.5e-05,
+       "loss": 1.302,
+       "step": 10
+     },
+     {
+       "epoch": 0.39215686274509803,
+       "grad_norm": 0.3635169007023558,
+       "learning_rate": 5e-05,
+       "loss": 1.2195,
+       "step": 20
+     },
+     {
+       "epoch": 0.5882352941176471,
+       "grad_norm": 0.6423674064129872,
+       "learning_rate": 4.8187561277552374e-05,
+       "loss": 0.9886,
+       "step": 30
+     },
+     {
+       "epoch": 0.7843137254901961,
+       "grad_norm": 1.7652163975371618,
+       "learning_rate": 4.301303984001967e-05,
+       "loss": 1.0972,
+       "step": 40
+     },
+     {
+       "epoch": 0.9803921568627451,
+       "grad_norm": 0.8125720365482934,
+       "learning_rate": 3.5226715929283506e-05,
+       "loss": 0.9888,
+       "step": 50
+     },
+     {
+       "epoch": 1.1764705882352942,
+       "grad_norm": 0.5160887085744938,
+       "learning_rate": 2.595756834225089e-05,
+       "loss": 0.9882,
+       "step": 60
+     },
+     {
+       "epoch": 1.3725490196078431,
+       "grad_norm": 0.9475974337041686,
+       "learning_rate": 1.6549578039787436e-05,
+       "loss": 0.9159,
+       "step": 70
+     },
+     {
+       "epoch": 1.5686274509803921,
+       "grad_norm": 0.6992885675683204,
+       "learning_rate": 8.36685749586087e-06,
+       "loss": 0.8981,
+       "step": 80
+     },
+     {
+       "epoch": 1.7647058823529411,
+       "grad_norm": 0.6476094908491726,
+       "learning_rate": 2.595861075973613e-06,
+       "loss": 0.9455,
+       "step": 90
+     },
+     {
+       "epoch": 1.9607843137254903,
+       "grad_norm": 0.852996571624781,
+       "learning_rate": 7.335497040648898e-08,
+       "loss": 0.8416,
+       "step": 100
+     },
+     {
+       "epoch": 1.9607843137254903,
+       "eval_loss": 0.9073211550712585,
+       "eval_runtime": 10.152,
+       "eval_samples_per_second": 8.865,
+       "eval_steps_per_second": 1.182,
+       "step": 100
+     },
+     {
+       "epoch": 2.0,
+       "step": 102,
+       "total_flos": 22867310280704.0,
+       "train_loss": 1.0144876452053295,
+       "train_runtime": 349.3742,
+       "train_samples_per_second": 4.631,
+       "train_steps_per_second": 0.292
+     }
+   ],
+   "logging_steps": 10,
+   "max_steps": 102,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 2,
+   "save_steps": 100,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 22867310280704.0,
+   "train_batch_size": 1,
+   "trial_name": null,
+   "trial_params": null
+ }
qwen4B_models/qwen_4B_d0_iter1_model/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:48c5fdf5bb26762d6b5ea66c5dcd14133315ee9223ab03c8e9dc85c8c2c3a758
+ size 7032
qwen4B_models/qwen_4B_d0_iter1_model/training_eval_loss.png ADDED
qwen4B_models/qwen_4B_d0_iter1_model/training_loss.png ADDED
qwen4B_models/qwen_4B_d0_iter1_model/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
qwen4B_models/qwen_4B_d1_iter2_model/README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ license: other
+ library_name: peft
+ tags:
+ - llama-factory
+ - lora
+ - generated_from_trainer
+ base_model: /ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B
+ model-index:
+ - name: qwen_4B_d1_iter2_model
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # qwen_4B_d1_iter2_model
+
+ This model is a fine-tuned version of [/ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B](https://huggingface.co//ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B) on the qwen_4B_d1_iter2_model dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.7857
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 16
+ - total_eval_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 20
+ - num_epochs: 2.0
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.6978        | 1.4706 | 100  | 0.7857          |
+
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.41.2
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.19.1
qwen4B_models/qwen_4B_d1_iter2_model/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "/ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "k_proj",
+     "down_proj",
+     "o_proj",
+     "up_proj",
+     "q_proj",
+     "gate_proj",
+     "v_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
qwen4B_models/qwen_4B_d1_iter2_model/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:223f5dc1d3695c84b692cdd8615a694fbe0fef2c71f008057debcec98e041b99
+ size 31367672
qwen4B_models/qwen_4B_d1_iter2_model/added_tokens.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "<|endoftext|>": 151643,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644
+ }
qwen4B_models/qwen_4B_d1_iter2_model/all_results.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "epoch": 2.0,
+   "eval_loss": 0.7856930494308472,
+   "eval_runtime": 7.9389,
+   "eval_samples_per_second": 15.241,
+   "eval_steps_per_second": 2.015,
+   "total_flos": 21111522983936.0,
+   "train_loss": 0.8778598624117234,
+   "train_runtime": 458.5184,
+   "train_samples_per_second": 4.746,
+   "train_steps_per_second": 0.297
+ }
qwen4B_models/qwen_4B_d1_iter2_model/eval_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "epoch": 2.0,
+   "eval_loss": 0.7856930494308472,
+   "eval_runtime": 7.9389,
+   "eval_samples_per_second": 15.241,
+   "eval_steps_per_second": 2.015
+ }
qwen4B_models/qwen_4B_d1_iter2_model/lora.log ADDED
The diff for this file is too large to render. See raw diff
 
qwen4B_models/qwen_4B_d1_iter2_model/merge.log ADDED
@@ -0,0 +1,7 @@
+ [2024-07-15 10:10:22,449] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
+ 07/15/2024 10:10:24 - INFO - llmtuner.data.template - Add <|im_end|>,<|endoftext|> to stop words.
+ 07/15/2024 10:10:24 - INFO - llmtuner.model.patcher - Using KV cache for faster generation.
+ 07/15/2024 10:10:25 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
+ 07/15/2024 10:10:28 - INFO - llmtuner.model.adapter - Merged 1 adapter(s).
+ 07/15/2024 10:10:28 - INFO - llmtuner.model.adapter - Loaded adapter(s): /ML-A100/team/mm/eamon/self_instruction/seed_ppl/qwen4B_models/qwen_4B_d1_iter2_model
+ 07/15/2024 10:10:28 - INFO - llmtuner.model.loader - all params: 3950369280
qwen4B_models/qwen_4B_d1_iter2_model/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
qwen4B_models/qwen_4B_d1_iter2_model/runs/Jul15_10-01-16_t-20240715144706-mwmm8-worker-0/events.out.tfevents.1721037741.t-20240715144706-mwmm8-worker-0.91732.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c04452716c83e238a9ac89ae188b5a3b3b9562d3492997299769ca77c6845b79
+ size 8637
qwen4B_models/qwen_4B_d1_iter2_model/runs/Jul15_10-01-16_t-20240715144706-mwmm8-worker-0/events.out.tfevents.1721038213.t-20240715144706-mwmm8-worker-0.91732.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93f38c90c8428f4bf948c6b41ee9f7fc583e9ea3840c292b1df818734fb21620
+ size 359
qwen4B_models/qwen_4B_d1_iter2_model/special_tokens_map.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     }
+   ],
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
qwen4B_models/qwen_4B_d1_iter2_model/tokenizer_config.json ADDED
@@ -0,0 +1,45 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|endoftext|>"
+   ],
+   "bos_token": null,
+   "chat_template": "{% set system_message = 'You are a helpful assistant.' %}{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{% if system_message is defined %}{{ '<|im_start|>system\\n' + system_message + '<|im_end|>\\n' }}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|im_start|>user\\n' + content + '<|im_end|>\\n<|im_start|>assistant\\n' }}{% elif message['role'] == 'assistant' %}{{ content + '<|endoftext|>' + '\\n' }}{% endif %}{% endfor %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "errors": "replace",
+   "model_max_length": 32768,
+   "pad_token": "<|endoftext|>",
+   "padding_side": "right",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
qwen4B_models/qwen_4B_d1_iter2_model/train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "epoch": 2.0,
+   "total_flos": 21111522983936.0,
+   "train_loss": 0.8778598624117234,
+   "train_runtime": 458.5184,
+   "train_samples_per_second": 4.746,
+   "train_steps_per_second": 0.297
+ }
qwen4B_models/qwen_4B_d1_iter2_model/trainer_log.jsonl ADDED
@@ -0,0 +1,16 @@
+ {"current_steps": 10, "total_steps": 136, "loss": 1.438, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 2.5e-05, "epoch": 0.14705882352941177, "percentage": 7.35, "elapsed_time": "0:00:48", "remaining_time": "0:10:07"}
+ {"current_steps": 20, "total_steps": 136, "loss": 1.2455, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 5e-05, "epoch": 0.29411764705882354, "percentage": 14.71, "elapsed_time": "0:01:19", "remaining_time": "0:07:41"}
+ {"current_steps": 30, "total_steps": 136, "loss": 0.8748, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 4.908874981298057e-05, "epoch": 0.4411764705882353, "percentage": 22.06, "elapsed_time": "0:01:50", "remaining_time": "0:06:30"}
+ {"current_steps": 40, "total_steps": 136, "loss": 0.8399, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 4.642142940418973e-05, "epoch": 0.5882352941176471, "percentage": 29.41, "elapsed_time": "0:02:21", "remaining_time": "0:05:40"}
+ {"current_steps": 50, "total_steps": 136, "loss": 0.8371, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 4.2192486471335585e-05, "epoch": 0.7352941176470589, "percentage": 36.76, "elapsed_time": "0:02:52", "remaining_time": "0:04:56"}
+ {"current_steps": 60, "total_steps": 136, "loss": 0.8346, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 3.671021101749476e-05, "epoch": 0.8823529411764706, "percentage": 44.12, "elapsed_time": "0:03:23", "remaining_time": "0:04:18"}
+ {"current_steps": 70, "total_steps": 136, "loss": 0.85, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 3.0374261005275607e-05, "epoch": 1.0294117647058822, "percentage": 51.47, "elapsed_time": "0:03:55", "remaining_time": "0:03:41"}
+ {"current_steps": 80, "total_steps": 136, "loss": 0.8218, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 2.3646527285364565e-05, "epoch": 1.1764705882352942, "percentage": 58.82, "elapsed_time": "0:04:26", "remaining_time": "0:03:06"}
+ {"current_steps": 90, "total_steps": 136, "loss": 0.8021, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 1.7017461746600506e-05, "epoch": 1.3235294117647058, "percentage": 66.18, "elapsed_time": "0:04:57", "remaining_time": "0:02:32"}
+ {"current_steps": 100, "total_steps": 136, "loss": 0.6978, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 1.0970323365940444e-05, "epoch": 1.4705882352941178, "percentage": 73.53, "elapsed_time": "0:05:28", "remaining_time": "0:01:58"}
+ {"current_steps": 100, "total_steps": 136, "loss": null, "eval_loss": 0.7856930494308472, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": null, "epoch": 1.4705882352941178, "percentage": 73.53, "elapsed_time": "0:05:28", "remaining_time": "0:01:58"}
+ {"current_steps": 110, "total_steps": 136, "loss": 0.7205, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 5.945948621809091e-06, "epoch": 1.6176470588235294, "percentage": 80.88, "elapsed_time": "0:06:17", "remaining_time": "0:01:29"}
+ {"current_steps": 120, "total_steps": 136, "loss": 0.787, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 2.310614508226078e-06, "epoch": 1.7647058823529411, "percentage": 88.24, "elapsed_time": "0:06:48", "remaining_time": "0:00:54"}
+ {"current_steps": 130, "total_steps": 136, "loss": 0.7046, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": 3.293369364618465e-07, "epoch": 1.9117647058823528, "percentage": 95.59, "elapsed_time": "0:07:19", "remaining_time": "0:00:20"}
+ {"current_steps": 136, "total_steps": 136, "loss": null, "eval_loss": null, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": null, "epoch": 2.0, "percentage": 100.0, "elapsed_time": "0:07:38", "remaining_time": "0:00:00"}
+ {"current_steps": 16, "total_steps": 16, "loss": null, "eval_loss": 0.7856930494308472, "predict_loss": null, "reward": null, "accuracy": null, "learning_rate": null, "epoch": 2.0, "percentage": 100.0, "elapsed_time": "0:07:51", "remaining_time": "0:00:00"}
qwen4B_models/qwen_4B_d1_iter2_model/trainer_state.json ADDED
@@ -0,0 +1,141 @@
+ {
+   "best_metric": 0.7856930494308472,
+   "best_model_checkpoint": "/ML-A100/team/mm/eamon/self_instruction/seed_ppl/qwen4B_models/qwen_4B_d1_iter2_model/checkpoint-100",
+   "epoch": 2.0,
+   "eval_steps": 100,
+   "global_step": 136,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.14705882352941177,
+       "grad_norm": 1.1161658260127314,
+       "learning_rate": 2.5e-05,
+       "loss": 1.438,
+       "step": 10
+     },
+     {
+       "epoch": 0.29411764705882354,
+       "grad_norm": 1.141386114916924,
+       "learning_rate": 5e-05,
+       "loss": 1.2455,
+       "step": 20
+     },
+     {
+       "epoch": 0.4411764705882353,
+       "grad_norm": 0.908747056691058,
+       "learning_rate": 4.908874981298057e-05,
+       "loss": 0.8748,
+       "step": 30
+     },
+     {
+       "epoch": 0.5882352941176471,
+       "grad_norm": 1.160724824332908,
+       "learning_rate": 4.642142940418973e-05,
+       "loss": 0.8399,
+       "step": 40
+     },
+     {
+       "epoch": 0.7352941176470589,
+       "grad_norm": 1.7354204311716015,
+       "learning_rate": 4.2192486471335585e-05,
+       "loss": 0.8371,
+       "step": 50
+     },
+     {
+       "epoch": 0.8823529411764706,
+       "grad_norm": 0.989621687985043,
+       "learning_rate": 3.671021101749476e-05,
+       "loss": 0.8346,
+       "step": 60
+     },
+     {
+       "epoch": 1.0294117647058822,
+       "grad_norm": 1.4467052607165527,
+       "learning_rate": 3.0374261005275607e-05,
+       "loss": 0.85,
+       "step": 70
+     },
+     {
+       "epoch": 1.1764705882352942,
+       "grad_norm": 1.022122556125995,
+       "learning_rate": 2.3646527285364565e-05,
+       "loss": 0.8218,
+       "step": 80
+     },
+     {
+       "epoch": 1.3235294117647058,
+       "grad_norm": 1.168833164765407,
+       "learning_rate": 1.7017461746600506e-05,
+       "loss": 0.8021,
+       "step": 90
+     },
+     {
+       "epoch": 1.4705882352941178,
+       "grad_norm": 0.4995411891529773,
+       "learning_rate": 1.0970323365940444e-05,
+       "loss": 0.6978,
+       "step": 100
+     },
+     {
+       "epoch": 1.4705882352941178,
+       "eval_loss": 0.7856930494308472,
+       "eval_runtime": 12.019,
+       "eval_samples_per_second": 10.067,
+       "eval_steps_per_second": 1.331,
+       "step": 100
+     },
+     {
+       "epoch": 1.6176470588235294,
+       "grad_norm": 1.6403596296455754,
+       "learning_rate": 5.945948621809091e-06,
+       "loss": 0.7205,
+       "step": 110
+     },
+     {
+       "epoch": 1.7647058823529411,
+       "grad_norm": 0.8006724271557235,
+       "learning_rate": 2.310614508226078e-06,
+       "loss": 0.787,
+       "step": 120
+     },
+     {
+       "epoch": 1.9117647058823528,
+       "grad_norm": 1.7764723276974967,
+       "learning_rate": 3.293369364618465e-07,
+       "loss": 0.7046,
+       "step": 130
+     },
+     {
+       "epoch": 2.0,
+       "step": 136,
+       "total_flos": 21111522983936.0,
+       "train_loss": 0.8778598624117234,
+       "train_runtime": 458.5184,
+       "train_samples_per_second": 4.746,
+       "train_steps_per_second": 0.297
+     }
+   ],
+   "logging_steps": 10,
+   "max_steps": 136,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 2,
+   "save_steps": 100,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 21111522983936.0,
+   "train_batch_size": 1,
+   "trial_name": null,
+   "trial_params": null
+ }
qwen4B_models/qwen_4B_d1_iter2_model/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a826000e4020639bf4b260a06a4ff200ad767f6f072c88c333b42fbbd4e4a65
+ size 7032
qwen4B_models/qwen_4B_d1_iter2_model/training_eval_loss.png ADDED
qwen4B_models/qwen_4B_d1_iter2_model/training_loss.png ADDED
qwen4B_models/qwen_4B_d1_iter2_model/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
qwen4B_models/qwen_4B_d2_iter3_model/README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ license: other
+ library_name: peft
+ tags:
+ - llama-factory
+ - lora
+ - generated_from_trainer
+ base_model: /ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B
+ model-index:
+ - name: qwen_4B_d2_iter3_model
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # qwen_4B_d2_iter3_model
+
+ This model is a fine-tuned version of [/ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B](https://huggingface.co//ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B) on the qwen_4B_d2_iter3_model dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.7803
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 16
+ - total_eval_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 20
+ - num_epochs: 2.0
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.6777        | 1.3072 | 100  | 0.7803          |
+
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.41.2
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.19.1
qwen4B_models/qwen_4B_d2_iter3_model/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "/ML-A100/team/mm/eamon/self_instruction/models/Qwen1_5_4B",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "q_proj",
+     "gate_proj",
+     "down_proj",
+     "up_proj",
+     "v_proj",
+     "o_proj",
+     "k_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
qwen4B_models/qwen_4B_d2_iter3_model/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e6fa97eb51fb31d6156d36d6468dbadb1a8edc635ecf9219b866b6c42c3291b
+ size 31367672
qwen4B_models/qwen_4B_d2_iter3_model/added_tokens.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "<|endoftext|>": 151643,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644
+ }
qwen4B_models/qwen_4B_d2_iter3_model/all_results.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "epoch": 1.9869281045751634,
+   "eval_loss": 0.7803115844726562,
+   "eval_runtime": 8.3907,
+   "eval_samples_per_second": 16.208,
+   "eval_steps_per_second": 2.026,
+   "total_flos": 20289411874816.0,
+   "train_loss": 0.8229627185746243,
+   "train_runtime": 505.2345,
+   "train_samples_per_second": 4.822,
+   "train_steps_per_second": 0.301
+ }
qwen4B_models/qwen_4B_d2_iter3_model/eval_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "epoch": 1.9869281045751634,
+   "eval_loss": 0.7803115844726562,
+   "eval_runtime": 8.3907,
+   "eval_samples_per_second": 16.208,
+   "eval_steps_per_second": 2.026
+ }
qwen4B_models/qwen_4B_d2_iter3_model/lora.log ADDED
The diff for this file is too large to render. See raw diff
 
qwen4B_models/qwen_4B_d2_iter3_model/merge.log ADDED
@@ -0,0 +1,7 @@
+ [2024-07-15 12:18:22,063] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
+ 07/15/2024 12:18:24 - INFO - llmtuner.data.template - Add <|im_end|>,<|endoftext|> to stop words.
+ 07/15/2024 12:18:24 - INFO - llmtuner.model.patcher - Using KV cache for faster generation.
+ 07/15/2024 12:18:24 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
+ 07/15/2024 12:18:27 - INFO - llmtuner.model.adapter - Merged 1 adapter(s).
+ 07/15/2024 12:18:27 - INFO - llmtuner.model.adapter - Loaded adapter(s): /ML-A100/team/mm/eamon/self_instruction/seed_ppl/qwen4B_models/qwen_4B_d2_iter3_model
+ 07/15/2024 12:18:27 - INFO - llmtuner.model.loader - all params: 3950369280
qwen4B_models/qwen_4B_d2_iter3_model/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
qwen4B_models/qwen_4B_d2_iter3_model/runs/Jul15_12-07-19_t-20240715144706-mwmm8-worker-0/events.out.tfevents.1721045373.t-20240715144706-mwmm8-worker-0.144068.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b4992da340f51fafb479efca3bc0c9e5a3a44ce71daf064f0f85e6b335e4286
+ size 9059