  0%|                                                                                                                  | 0/1000 [00:00<?, ?it/s]
  1%|▉                                                                                                      | 9/1000 [04:05<11:53:17, 43.19s/it]
{'loss': 4.0894, 'learning_rate': 0.0198, 'epoch': 8.42}
  1%|█                                                                                                      | 10/1000 [04:20<9:27:19, 34.38s/it]
[INFO|configuration_utils.py:457] 2023-04-22 15:58:29,414 >> Configuration saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-10\config.json
[INFO|configuration_utils.py:362] 2023-04-22 15:58:29,416 >> Configuration saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-10\generation_config.json
[INFO|modeling_utils.py:1762] 2023-04-22 15:58:29,657 >> Model weights saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-10\pytorch_model.bin
[INFO|tokenization_utils_base.py:2163] 2023-04-22 15:58:29,662 >> tokenizer config file saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-10\tokenizer_config.json
[INFO|tokenization_utils_base.py:2170] 2023-04-22 15:58:29,664 >> Special tokens file saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-10\special_tokens_map.json
  2%|█▉                                                                                                     | 19/1000 [08:55<8:52:26, 32.56s/it]
{'loss': 1.6609, 'learning_rate': 0.0196, 'epoch': 16.84}
  2%|██                                                                                                     | 20/1000 [09:09<7:20:43, 26.98s/it]
[INFO|configuration_utils.py:457] 2023-04-22 16:03:18,188 >> Configuration saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-20\config.json
[INFO|configuration_utils.py:362] 2023-04-22 16:03:18,191 >> Configuration saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-20\generation_config.json
[INFO|modeling_utils.py:1762] 2023-04-22 16:03:18,399 >> Model weights saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-20\pytorch_model.bin
[INFO|tokenization_utils_base.py:2163] 2023-04-22 16:03:18,403 >> tokenizer config file saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-20\tokenizer_config.json
[INFO|tokenization_utils_base.py:2170] 2023-04-22 16:03:18,405 >> Special tokens file saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-20\special_tokens_map.json
  3%|██▉                                                                                                    | 29/1000 [12:01<4:03:58, 15.08s/it]
{'loss': 0.38, 'learning_rate': 0.0194, 'epoch': 25.26}
  3%|███                                                                                                    | 30/1000 [12:14<3:54:04, 14.48s/it]
[INFO|configuration_utils.py:457] 2023-04-22 16:06:23,084 >> Configuration saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-30\config.json
[INFO|configuration_utils.py:362] 2023-04-22 16:06:23,086 >> Configuration saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-30\generation_config.json
[INFO|modeling_utils.py:1762] 2023-04-22 16:06:23,292 >> Model weights saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-30\pytorch_model.bin
[INFO|tokenization_utils_base.py:2163] 2023-04-22 16:06:23,296 >> tokenizer config file saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-30\tokenizer_config.json
[INFO|tokenization_utils_base.py:2170] 2023-04-22 16:06:23,296 >> Special tokens file saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-30\special_tokens_map.json
  4%|████                                                                                                   | 39/1000 [14:51<4:40:29, 17.51s/it]
{'loss': 0.0535, 'learning_rate': 0.0192, 'epoch': 33.68}
  4%|████                                                                                                   | 40/1000 [15:11<4:49:19, 18.08s/it]
[INFO|configuration_utils.py:457] 2023-04-22 16:09:20,023 >> Configuration saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-40\config.json
[INFO|configuration_utils.py:362] 2023-04-22 16:09:20,027 >> Configuration saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-40\generation_config.json
[INFO|modeling_utils.py:1762] 2023-04-22 16:09:20,233 >> Model weights saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-40\pytorch_model.bin
[INFO|tokenization_utils_base.py:2163] 2023-04-22 16:09:20,237 >> tokenizer config file saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-40\tokenizer_config.json
[INFO|tokenization_utils_base.py:2170] 2023-04-22 16:09:20,238 >> Special tokens file saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-40\special_tokens_map.json
  5%|█████▏                                                                                                 | 50/1000 [18:38<5:22:50, 20.39s/it]
[INFO|configuration_utils.py:457] 2023-04-22 16:12:47,553 >> Configuration saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-50\config.json
[INFO|configuration_utils.py:362] 2023-04-22 16:12:47,556 >> Configuration saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-50\generation_config.json
[INFO|modeling_utils.py:1762] 2023-04-22 16:12:47,773 >> Model weights saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-50\pytorch_model.bin
[INFO|tokenization_utils_base.py:2163] 2023-04-22 16:12:47,780 >> tokenizer config file saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-50\tokenizer_config.json
[INFO|tokenization_utils_base.py:2170] 2023-04-22 16:12:47,781 >> Special tokens file saved in output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-50\special_tokens_map.json
{'loss': 0.0295, 'learning_rate': 0.019, 'epoch': 42.11}
Saving PrefixEncoder
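
The "Saving PrefixEncoder" message is printed by the ptuning trainer during the checkpoint save above (stdout and stderr are buffered separately, which is likely why it appears after the loss line): with P-Tuning v2 enabled via pre_seq_len, only the trainable transformer.prefix_encoder.* tensors are written to each checkpoint's pytorch_model.bin rather than the full 6B model, which also matches the sub-second save times in the timestamps. A minimal sketch of loading such a checkpoint for inference, following the loading pattern from the ChatGLM-6B README; the checkpoint path is taken from this run, and pre_seq_len=128 matches the "pt-128" in the output directory name:

    import os
    import torch
    from transformers import AutoConfig, AutoModel, AutoTokenizer

    MODEL_NAME = "THUDM/chatglm-6b"
    CHECKPOINT = r"output\adgen-chatglm-6b-pt-128-2e-2\checkpoint-50"

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
    # pre_seq_len must match the value used for training (128 in this run)
    config = AutoConfig.from_pretrained(MODEL_NAME, trust_remote_code=True, pre_seq_len=128)
    model = AutoModel.from_pretrained(MODEL_NAME, config=config, trust_remote_code=True)

    # The checkpoint holds only the prefix-encoder tensors; strip the module
    # prefix from the keys and load them into the freshly initialized encoder.
    prefix_state_dict = torch.load(os.path.join(CHECKPOINT, "pytorch_model.bin"))
    new_prefix_state_dict = {}
    for k, v in prefix_state_dict.items():
        if k.startswith("transformer.prefix_encoder."):
            new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
    model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)

    # Half-precision inference; the prefix encoder stays in fp32 as in the README.
    model = model.half().cuda()
    model.transformer.prefix_encoder.float()
    model = model.eval()
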
  5%|█████▎                                                                                                 | 51/1000 [18:59<5:26:08, 20.62s/it]
Traceback (most recent call last):
  File "main.py", line 444, in <module>
    main()
  File "main.py", line 383, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "E:\Documents\Desktop\ChatGLM-6B\ptuning\trainer.py", line 1635, in train
    return inner_training_loop(
  File "E:\Documents\Desktop\ChatGLM-6B\ptuning\trainer.py", line 1904, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "E:\Documents\Desktop\ChatGLM-6B\ptuning\trainer.py", line 2665, in training_step
    loss.backward()
  File "D:\Program\Python38\lib\site-packages\torch\_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "D:\Program\Python38\lib\site-packages\torch\autograd\__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
KeyboardInterrupt
Error in sys.excepthook:
Traceback (most recent call last):
  File "D:\Program\Python38\lib\site-packages\rich\console.py", line 1694, in print
    extend(render(renderable, render_options))
  File "D:\Program\Python38\lib\site-packages\rich\console.py", line 1330, in render
    yield from self.render(render_output, _options)
  File "D:\Program\Python38\lib\site-packages\rich\console.py", line 1326, in render
    for render_output in iter_render:
  File "D:\Program\Python38\lib\site-packages\rich\constrain.py", line 29, in __rich_console__
    yield from console.render(self.renderable, child_options)
  File "D:\Program\Python38\lib\site-packages\rich\console.py", line 1326, in render
    for render_output in iter_render:
  File "D:\Program\Python38\lib\site-packages\rich\panel.py", line 220, in __rich_console__
    lines = console.render_lines(renderable, child_options, style=style)
  File "D:\Program\Python38\lib\site-packages\rich\console.py", line 1366, in render_lines
    lines = list(
  File "D:\Program\Python38\lib\site-packages\rich\segment.py", line 292, in split_and_crop_lines
    for segment in segments:
  File "D:\Program\Python38\lib\site-packages\rich\console.py", line 1326, in render
    for render_output in iter_render:
  File "D:\Program\Python38\lib\site-packages\rich\padding.py", line 97, in __rich_console__
    lines = console.render_lines(
  File "D:\Program\Python38\lib\site-packages\rich\console.py", line 1366, in render_lines
    lines = list(
  File "D:\Program\Python38\lib\site-packages\rich\segment.py", line 292, in split_and_crop_lines
    for segment in segments:
  File "D:\Program\Python38\lib\site-packages\rich\console.py", line 1330, in render
    yield from self.render(render_output, _options)
  File "D:\Program\Python38\lib\site-packages\rich\console.py", line 1326, in render
    for render_output in iter_render:
  File "D:\Program\Python38\lib\site-packages\rich\syntax.py", line 609, in __rich_console__
    segments = Segments(self._get_syntax(console, options))
  File "D:\Program\Python38\lib\site-packages\rich\segment.py", line 668, in __init__
    self.segments = list(segments)
  File "D:\Program\Python38\lib\site-packages\rich\syntax.py", line 637, in _get_syntax
    text = self.highlight(processed_code, self.line_range)
  File "D:\Program\Python38\lib\site-packages\rich\syntax.py", line 509, in highlight
    text.append_tokens(tokens_to_spans())
  File "D:\Program\Python38\lib\site-packages\rich\text.py", line 995, in append_tokens
    for content, style in tokens:
  File "D:\Program\Python38\lib\site-packages\rich\syntax.py", line 497, in tokens_to_spans
    _token_type, token = next(tokens)
  File "D:\Program\Python38\lib\site-packages\rich\syntax.py", line 484, in line_tokenize
    for token_type, token in lexer.get_tokens(code):
  File "D:\Program\Python38\lib\site-packages\pygments\lexer.py", line 190, in streamer
    for _, t, v in self.get_tokens_unprocessed(text):
  File "D:\Program\Python38\lib\site-packages\pygments\lexer.py", line 632, in get_tokens_unprocessed
    m = rexmatch(text, pos)
KeyboardInterrupt
Original exception was:
Traceback (most recent call last):
  File "main.py", line 444, in <module>
    main()
  File "main.py", line 383, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "E:\Documents\Desktop\ChatGLM-6B\ptuning\trainer.py", line 1635, in train
    return inner_training_loop(
  File "E:\Documents\Desktop\ChatGLM-6B\ptuning\trainer.py", line 1904, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "E:\Documents\Desktop\ChatGLM-6B\ptuning\trainer.py", line 2665, in training_step
    loss.backward()
  File "D:\Program\Python38\lib\site-packages\torch\_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "D:\Program\Python38\lib\site-packages\torch\autograd\__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
KeyboardInterrupt
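
Both tracebacks record the same event: the run was interrupted manually (Ctrl+C raising KeyboardInterrupt) inside loss.backward() at step 51, and rich's pretty-printing excepthook was itself interrupted while rendering the first traceback, so Python reports "Error in sys.excepthook:" and then falls back to printing the original exception. Nothing beyond the partial step is lost: main.py already passes resume_from_checkpoint into trainer.train() (see the frame at main.py line 383), so the run can be continued from checkpoint-50, the last checkpoint the log shows being written. A minimal sketch using the stock transformers helper; trainer stands for the trainer instance constructed in main.py and is assumed to be in scope:

    from transformers.trainer_utils import get_last_checkpoint

    # Resolves to checkpoint-50, the newest checkpoint-* directory in the output dir.
    last_checkpoint = get_last_checkpoint(r"output\adgen-chatglm-6b-pt-128-2e-2")

    # Continues training from step 50; optimizer and scheduler state are
    # restored if they were written alongside the model weights.
    train_result = trainer.train(resume_from_checkpoint=last_checkpoint)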