Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
[2024-05-19 14:19:05,764] [    INFO] distributed_strategy.py:214 - distributed strategy initialized
======================= Modified FLAGS detected =======================
FLAGS(name='FLAGS_selected_gpus', current_value='1', default_value='')
=======================================================================
I0519 14:19:05.765990   199 tcp_utils.cc:130] Successfully connected to 172.19.2.2:47457
I0519 14:19:05.766265   199 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:19:05,766] [    INFO] topology.py:358 - Total 2 pipe comm group(s) create successfully!
W0519 14:19:05.767158   199 gpu_resources.cc:119] Please NOTE: device: 1, GPU Compute Capability: 7.5, Driver API Version: 12.2, Runtime API Version: 11.8
W0519 14:19:05.784675   199 gpu_resources.cc:164] device: 1, cuDNN Version: 8.9.
I0519 14:19:05.968580   199 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:19:05,968] [    INFO] topology.py:358 - Total 1 data comm group(s) create successfully!
[2024-05-19 14:19:05,968] [    INFO] topology.py:358 - Total 2 model comm group(s) create successfully!
[2024-05-19 14:19:05,968] [    INFO] topology.py:358 - Total 2 sharding comm group(s) create successfully!
I0519 14:19:05.969017   199 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:19:05,969] [    INFO] topology.py:288 - HybridParallelInfo: rank_id: 1, mp_degree: 1, sharding_degree: 1, pp_degree: 1, dp_degree: 2, sep_degree: 1, mp_group: [1],  sharding_group: [1], pp_group: [1], dp_group: [0, 1], sep:group: None, check/clip group: [1]
Config annotation datasets/VisDrone/datasets/VisDrone/annotations_VisDrone_train.json is not a file, dataset config is not valid
Traceback (most recent call last):
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 183, in <module>
    main()
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 179, in main
    run(FLAGS, cfg)
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 126, in run
    trainer = Trainer(cfg, mode='train')
  File "/kaggle/working/ObjectDetection/DETR/ppdet/engine/trainer.py", line 77, in __init__
    self.loader = create('{}Reader'.format(capital_mode))(
  File "/kaggle/working/ObjectDetection/DETR/ppdet/data/reader.py", line 167, in __call__
    self.dataset.check_or_download_dataset()
  File "/kaggle/working/ObjectDetection/DETR/ppdet/data/source/dataset.py", line 105, in check_or_download_dataset
    self.dataset_dir = get_dataset_path(self.dataset_dir, self.anno_path,
  File "/kaggle/working/ObjectDetection/DETR/ppdet/utils/download.py", line 190, in get_dataset_path
    raise ValueError(
ValueError: Dataset /kaggle/working/ObjectDetection/DETR/datasets/VisDrone is not valid for reason above, please check again.
I0519 14:19:06.217675   199 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 14:19:06.217739   199 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 14:19:06.217757   199 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
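
This failure (it repeats once more in the next attempt below) comes from the doubled annotation path in the config check ("datasets/VisDrone/datasets/VisDrone/annotations_VisDrone_train.json"): anno_path in the dataset config already carries the dataset_dir prefix, and ppdet joins the two before checking for the file, so anno_path should be given relative to dataset_dir. A minimal sketch of the joining behaviour, with paths taken from the error message (the exact join inside get_dataset_path is an assumption):

    import os

    dataset_dir = "datasets/VisDrone"

    anno_path = "datasets/VisDrone/annotations_VisDrone_train.json"   # already prefixed -> doubled path
    print(os.path.join(dataset_dir, anno_path))
    # datasets/VisDrone/datasets/VisDrone/annotations_VisDrone_train.json

    anno_path = "annotations_VisDrone_train.json"                     # relative to dataset_dir -> valid
    print(os.path.join(dataset_dir, anno_path))
    # datasets/VisDrone/annotations_VisDrone_train.json
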
Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
[2024-05-19 14:21:15,561] [    INFO] distributed_strategy.py:214 - distributed strategy initialized
======================= Modified FLAGS detected =======================
FLAGS(name='FLAGS_selected_gpus', current_value='1', default_value='')
=======================================================================
I0519 14:21:15.562182   275 tcp_utils.cc:130] Successfully connected to 172.19.2.2:58840
I0519 14:21:15.601843   275 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:21:15,602] [    INFO] topology.py:358 - Total 2 pipe comm group(s) create successfully!
W0519 14:21:15.602743   275 gpu_resources.cc:119] Please NOTE: device: 1, GPU Compute Capability: 7.5, Driver API Version: 12.2, Runtime API Version: 11.8
W0519 14:21:15.604059   275 gpu_resources.cc:164] device: 1, cuDNN Version: 8.9.
I0519 14:21:15.770931   275 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:21:15,771] [    INFO] topology.py:358 - Total 1 data comm group(s) create successfully!
[2024-05-19 14:21:15,771] [    INFO] topology.py:358 - Total 2 model comm group(s) create successfully!
[2024-05-19 14:21:15,771] [    INFO] topology.py:358 - Total 2 sharding comm group(s) create successfully!
I0519 14:21:15.771346   275 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:21:15,771] [    INFO] topology.py:288 - HybridParallelInfo: rank_id: 1, mp_degree: 1, sharding_degree: 1, pp_degree: 1, dp_degree: 2, sep_degree: 1, mp_group: [1],  sharding_group: [1], pp_group: [1], dp_group: [0, 1], sep:group: None, check/clip group: [1]
Config annotation datasets/VisDrone/datasets/VisDrone/annotations_VisDrone_train.json is not a file, dataset config is not valid
Traceback (most recent call last):
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 183, in <module>
    main()
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 179, in main
    run(FLAGS, cfg)
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 126, in run
    trainer = Trainer(cfg, mode='train')
  File "/kaggle/working/ObjectDetection/DETR/ppdet/engine/trainer.py", line 77, in __init__
    self.loader = create('{}Reader'.format(capital_mode))(
  File "/kaggle/working/ObjectDetection/DETR/ppdet/data/reader.py", line 167, in __call__
    self.dataset.check_or_download_dataset()
  File "/kaggle/working/ObjectDetection/DETR/ppdet/data/source/dataset.py", line 105, in check_or_download_dataset
    self.dataset_dir = get_dataset_path(self.dataset_dir, self.anno_path,
  File "/kaggle/working/ObjectDetection/DETR/ppdet/utils/download.py", line 190, in get_dataset_path
    raise ValueError(
ValueError: Dataset /kaggle/working/ObjectDetection/DETR/datasets/VisDrone is not valid for reason above, please check again.
I0519 14:21:16.015441   275 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 14:21:16.015496   275 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 14:21:16.015507   275 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
[2024-05-19 14:25:56,459] [    INFO] distributed_strategy.py:214 - distributed strategy initialized
======================= Modified FLAGS detected =======================
FLAGS(name='FLAGS_selected_gpus', current_value='1', default_value='')
=======================================================================
I0519 14:25:56.461030   341 tcp_utils.cc:107] Retry to connect to 172.19.2.2:58530 while the server is not yet listening.
I0519 14:25:59.461315   341 tcp_utils.cc:130] Successfully connected to 172.19.2.2:58530
I0519 14:25:59.489733   341 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:25:59,490] [    INFO] topology.py:358 - Total 2 pipe comm group(s) create successfully!
W0519 14:25:59.490772   341 gpu_resources.cc:119] Please NOTE: device: 1, GPU Compute Capability: 7.5, Driver API Version: 12.2, Runtime API Version: 11.8
W0519 14:25:59.492075   341 gpu_resources.cc:164] device: 1, cuDNN Version: 8.9.
I0519 14:25:59.615023   341 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:25:59,615] [    INFO] topology.py:358 - Total 1 data comm group(s) create successfully!
[2024-05-19 14:25:59,615] [    INFO] topology.py:358 - Total 2 model comm group(s) create successfully!
[2024-05-19 14:25:59,615] [    INFO] topology.py:358 - Total 2 sharding comm group(s) create successfully!
I0519 14:25:59.615588   341 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:25:59,615] [    INFO] topology.py:288 - HybridParallelInfo: rank_id: 1, mp_degree: 1, sharding_degree: 1, pp_degree: 1, dp_degree: 2, sep_degree: 1, mp_group: [1],  sharding_group: [1], pp_group: [1], dp_group: [0, 1], sep:group: None, check/clip group: [1]
loading annotations into memory...
Done (t=2.02s)
creating index...
index created!
Traceback (most recent call last):
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 183, in <module>
    main()
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 179, in main
    run(FLAGS, cfg)
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 126, in run
    trainer = Trainer(cfg, mode='train')
  File "/kaggle/working/ObjectDetection/DETR/ppdet/engine/trainer.py", line 77, in __init__
    self.loader = create('{}Reader'.format(capital_mode))(
  File "/kaggle/working/ObjectDetection/DETR/ppdet/data/reader.py", line 168, in __call__
    self.dataset.parse_dataset()
  File "/kaggle/working/ObjectDetection/DETR/ppdet/data/source/coco.py", line 186, in parse_dataset
    gt_class[i][0] = self.catid2clsid[catid]
KeyError: 0
I0519 14:26:02.515522   341 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 14:26:02.515580   341 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 14:26:02.515591   341 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
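
The KeyError: 0 above comes from the COCO loader's catid2clsid lookup: an annotation entry uses category_id 0, but that id is not declared in the JSON's "categories" list, from which ppdet builds the mapping (in the original VisDrone label scheme, id 0 is the "ignored regions" class). A quick check of the annotation file, assuming the path from the earlier error messages:

    import json

    with open("datasets/VisDrone/annotations_VisDrone_train.json") as f:
        coco = json.load(f)

    declared = {c["id"] for c in coco["categories"]}
    used = {a["category_id"] for a in coco["annotations"]}
    print("declared category ids:", sorted(declared))
    print("used but not declared:", sorted(used - declared))   # any id printed here triggers the KeyError

Re-exporting the annotations so that every category_id used by an annotation is also declared (or dropping/remapping the offending ids) gets past this stage, as the later attempts in this log suggest.
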
Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
[2024-05-19 14:31:28,629] [    INFO] distributed_strategy.py:214 - distributed strategy initialized
======================= Modified FLAGS detected =======================
FLAGS(name='FLAGS_selected_gpus', current_value='1', default_value='')
=======================================================================
I0519 14:31:28.630641   422 tcp_utils.cc:107] Retry to connect to 172.19.2.2:52124 while the server is not yet listening.
I0519 14:31:31.630841   422 tcp_utils.cc:130] Successfully connected to 172.19.2.2:52124
I0519 14:31:31.659746   422 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:31:31,660] [    INFO] topology.py:358 - Total 2 pipe comm group(s) create successfully!
W0519 14:31:31.660879   422 gpu_resources.cc:119] Please NOTE: device: 1, GPU Compute Capability: 7.5, Driver API Version: 12.2, Runtime API Version: 11.8
W0519 14:31:31.662217   422 gpu_resources.cc:164] device: 1, cuDNN Version: 8.9.
I0519 14:31:31.785684   422 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:31:31,785] [    INFO] topology.py:358 - Total 1 data comm group(s) create successfully!
[2024-05-19 14:31:31,785] [    INFO] topology.py:358 - Total 2 model comm group(s) create successfully!
[2024-05-19 14:31:31,786] [    INFO] topology.py:358 - Total 2 sharding comm group(s) create successfully!
I0519 14:31:31.786106   422 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:31:31,786] [    INFO] topology.py:288 - HybridParallelInfo: rank_id: 1, mp_degree: 1, sharding_degree: 1, pp_degree: 1, dp_degree: 2, sep_degree: 1, mp_group: [1],  sharding_group: [1], pp_group: [1], dp_group: [0, 1], sep:group: None, check/clip group: [1]
loading annotations into memory...
Done (t=1.99s)
creating index...
index created!
Found an invalid bbox in annotations: im_id: 201, area: 0.0 x1: 611, y1: 158, x2: 615, y2: 158.
W0519 14:32:10.965493   422 reducer.cc:721] All parameters are involved in the backward pass. It is recommended to set find_unused_parameters to False to improve performance. However, if unused parameters appear in subsequent iterative training, then an error will occur. Please make it clear that in the subsequent training, there will be no parameters that are not used in the backward pass, and then set find_unused_parameters
Traceback (most recent call last):
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 183, in <module>
    main()
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 179, in main
    run(FLAGS, cfg)
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 135, in run
    trainer.train(FLAGS.eval)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/engine/trainer.py", line 377, in train
    outputs = model(data)
  File "/opt/conda/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/paddle/distributed/parallel.py", line 528, in forward
    outputs = self._layers(*inputs, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/architectures/meta_arch.py", line 60, in forward
    out = self.get_loss()
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/architectures/detr.py", line 113, in get_loss
    return self._forward()
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/architectures/detr.py", line 87, in _forward
    out_transformer = self.transformer(body_feats, pad_mask, self.inputs)
  File "/opt/conda/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/transformers/rtdetr_transformer.py", line 434, in forward
    out_bboxes, out_logits = self.decoder(
  File "/opt/conda/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/transformers/rtdetr_transformer.py", line 206, in forward
    output = layer(output, ref_points_input, memory,
  File "/opt/conda/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/transformers/rtdetr_transformer.py", line 160, in forward
    tgt2 = self.self_attn(q, k, value=tgt, attn_mask=attn_mask)
  File "/opt/conda/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/layers.py", line 1287, in forward
    product = product * scaling
MemoryError: 

--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0   paddle::pybind::CallScalarFuction(paddle::Tensor const&, double, std::string)
1   scale_ad_func(paddle::Tensor const&, paddle::experimental::ScalarBase<paddle::Tensor>, float, bool)
2   paddle::experimental::scale(paddle::Tensor const&, paddle::experimental::ScalarBase<paddle::Tensor> const&, float, bool)
3   void phi::ScaleKernel<float, phi::GPUContext>(phi::GPUContext const&, phi::DenseTensor const&, paddle::experimental::ScalarBase<phi::DenseTensor> const&, float, bool, phi::DenseTensor*)
4   float* phi::DeviceContext::Alloc<float>(phi::TensorBase*, unsigned long, bool) const
5   phi::DeviceContext::Impl::Alloc(phi::TensorBase*, phi::Place const&, phi::DataType, unsigned long, bool, bool) const
6   phi::DenseTensor::AllocateFrom(phi::Allocator*, phi::DataType, unsigned long, bool)
7   paddle::memory::allocation::Allocator::Allocate(unsigned long)
8   paddle::memory::allocation::StatAllocator::AllocateImpl(unsigned long)
9   paddle::memory::allocation::Allocator::Allocate(unsigned long)
10  paddle::memory::allocation::Allocator::Allocate(unsigned long)
11  paddle::memory::allocation::Allocator::Allocate(unsigned long)
12  paddle::memory::allocation::Allocator::Allocate(unsigned long)
13  paddle::memory::allocation::CUDAAllocator::AllocateImpl(unsigned long)
14  std::string phi::enforce::GetCompleteTraceBackString<std::string >(std::string&&, char const*, int)
15  phi::enforce::GetCurrentTraceBackString[abi:cxx11](bool)

----------------------
Error Message Summary:
----------------------
ResourceExhaustedError: 

Out of memory error on GPU 1. Cannot allocate 71.296875MB memory on GPU 1, 14.733398GB memory has been allocated and available memory is only 15.062500MB.

Please check whether there is any other process using GPU 1.
1. If yes, please stop them, or start PaddlePaddle on another GPU.
2. If no, please decrease the batch size of your model. 
 (at /paddle/paddle/fluid/memory/allocation/cuda_allocator.cc:86)

I0519 14:32:58.850878   422 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 14:32:58.851009   422 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 14:32:58.851034   422 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
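
This attempt gets past data loading and dies in the RT-DETR decoder's self-attention with an out-of-memory error on GPU 1 (Compute Capability 7.5 with roughly 15 GB usable points at a Kaggle T4; that is an inference, not something the log states). The numbers in the message are consistent with a card that is simply full:

    # Values copied from the ResourceExhaustedError above.
    requested_mb = 71.296875
    allocated_gb = 14.733398
    free_mb = 15.0625
    print(allocated_gb * 1024 + free_mb)   # ~15102 MB visible to the allocator in total
    print(requested_mb > free_mb)          # True -> the allocation must fail

Since no other process is expected on GPU 1, the message's own remedy applies: lower the training batch size (or the input resolution) in the reader config. The same failure recurs once more later in this log.
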
Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
usage: train.py [-h] [-c CONFIG] [-o [OPT ...]] [--eval] [-r RESUME]
                [--slim_config SLIM_CONFIG] [--enable_ce ENABLE_CE] [--amp]
                [--fleet] [--use_vdl USE_VDL] [--vdl_log_dir VDL_LOG_DIR]
                [--use_wandb USE_WANDB] [--save_prediction_only]
                [--profiler_options PROFILER_OPTIONS] [--save_proposals]
                [--proposals_path PROPOSALS_PATH] [--to_static]
train.py: error: argument --use_wandb: expected one argument
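
The short failure above is an argument-parsing error, not a training error: --use_wandb is declared with a value, so passing it as a bare flag aborts before anything starts. A minimal argparse sketch reproducing the behaviour (the exact type used for this option in ppdet's train.py is an assumption):

    import argparse

    parser = argparse.ArgumentParser(prog="train.py")
    parser.add_argument("--use_wandb", type=bool, default=False)

    try:
        parser.parse_args(["--use_wandb"])                    # bare flag, as in the failing command
    except SystemExit:
        pass                                                  # argparse prints: expected one argument

    print(parser.parse_args(["--use_wandb", "True"]).use_wandb)   # the form the usage text expects
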
Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
[2024-05-19 14:36:22,326] [    INFO] distributed_strategy.py:214 - distributed strategy initialized
======================= Modified FLAGS detected =======================
FLAGS(name='FLAGS_selected_gpus', current_value='1', default_value='')
=======================================================================
I0519 14:36:22.328052   629 tcp_utils.cc:107] Retry to connect to 172.19.2.2:37848 while the server is not yet listening.
I0519 14:36:25.328375   629 tcp_utils.cc:130] Successfully connected to 172.19.2.2:37848
I0519 14:36:25.358937   629 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:36:25,359] [    INFO] topology.py:358 - Total 2 pipe comm group(s) create successfully!
W0519 14:36:25.360067   629 gpu_resources.cc:119] Please NOTE: device: 1, GPU Compute Capability: 7.5, Driver API Version: 12.2, Runtime API Version: 11.8
W0519 14:36:25.361441   629 gpu_resources.cc:164] device: 1, cuDNN Version: 8.9.
I0519 14:36:25.489009   629 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:36:25,489] [    INFO] topology.py:358 - Total 1 data comm group(s) create successfully!
[2024-05-19 14:36:25,489] [    INFO] topology.py:358 - Total 2 model comm group(s) create successfully!
[2024-05-19 14:36:25,489] [    INFO] topology.py:358 - Total 2 sharding comm group(s) create successfully!
I0519 14:36:25.489432   629 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:36:25,489] [    INFO] topology.py:288 - HybridParallelInfo: rank_id: 1, mp_degree: 1, sharding_degree: 1, pp_degree: 1, dp_degree: 2, sep_degree: 1, mp_group: [1],  sharding_group: [1], pp_group: [1], dp_group: [0, 1], sep:group: None, check/clip group: [1]
loading annotations into memory...
Done (t=1.98s)
creating index...
index created!
Found an invalid bbox in annotations: im_id: 201, area: 0.0 x1: 611, y1: 158, x2: 615, y2: 158.
W0519 14:36:40.838984   629 reducer.cc:721] All parameters are involved in the backward pass. It is recommended to set find_unused_parameters to False to improve performance. However, if unused parameters appear in subsequent iterative training, then an error will occur. Please make it clear that in the subsequent training, there will be no parameters that are not used in the backward pass, and then set find_unused_parameters
Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
[2024-05-19 14:39:04,754] [    INFO] distributed_strategy.py:214 - distributed strategy initialized
======================= Modified FLAGS detected =======================
FLAGS(name='FLAGS_selected_gpus', current_value='1', default_value='')
=======================================================================
I0519 14:39:04.756042   798 tcp_utils.cc:130] Successfully connected to 172.19.2.2:58013
I0519 14:39:04.769721   798 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:39:04,770] [    INFO] topology.py:358 - Total 2 pipe comm group(s) create successfully!
W0519 14:39:04.770710   798 gpu_resources.cc:119] Please NOTE: device: 1, GPU Compute Capability: 7.5, Driver API Version: 12.2, Runtime API Version: 11.8
W0519 14:39:04.772061   798 gpu_resources.cc:164] device: 1, cuDNN Version: 8.9.
I0519 14:39:04.955181   798 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:39:04,955] [    INFO] topology.py:358 - Total 1 data comm group(s) create successfully!
[2024-05-19 14:39:04,955] [    INFO] topology.py:358 - Total 2 model comm group(s) create successfully!
[2024-05-19 14:39:04,955] [    INFO] topology.py:358 - Total 2 sharding comm group(s) create successfully!
I0519 14:39:04.955638   798 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:39:04,955] [    INFO] topology.py:288 - HybridParallelInfo: rank_id: 1, mp_degree: 1, sharding_degree: 1, pp_degree: 1, dp_degree: 2, sep_degree: 1, mp_group: [1],  sharding_group: [1], pp_group: [1], dp_group: [0, 1], sep:group: None, check/clip group: [1]
loading annotations into memory...
Done (t=2.35s)
creating index...
index created!
Found an invalid bbox in annotations: im_id: 201, area: 0.0 x1: 611, y1: 158, x2: 615, y2: 158.
wandb not found, please install wandb. Use: `pip install wandb`.
Traceback (most recent call last):
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 183, in <module>
    main()
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 179, in main
    run(FLAGS, cfg)
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 126, in run
    trainer = Trainer(cfg, mode='train')
  File "/kaggle/working/ObjectDetection/DETR/ppdet/engine/trainer.py", line 150, in __init__
    self._init_callbacks()
  File "/kaggle/working/ObjectDetection/DETR/ppdet/engine/trainer.py", line 162, in _init_callbacks
    self._callbacks.append(WandbCallback(self))
  File "/kaggle/working/ObjectDetection/DETR/ppdet/engine/callbacks.py", line 323, in __init__
    raise e
  File "/kaggle/working/ObjectDetection/DETR/ppdet/engine/callbacks.py", line 318, in __init__
    import wandb
  File "/opt/conda/lib/python3.10/site-packages/wandb/__init__.py", line 27, in <module>
    from wandb import sdk as wandb_sdk
  File "/opt/conda/lib/python3.10/site-packages/wandb/sdk/__init__.py", line 25, in <module>
    from .artifacts.artifact import Artifact
  File "/opt/conda/lib/python3.10/site-packages/wandb/sdk/artifacts/artifact.py", line 46, in <module>
    from wandb.apis.normalize import normalize_exceptions
  File "/opt/conda/lib/python3.10/site-packages/wandb/apis/__init__.py", line 43, in <module>
    from .internal import Api as InternalApi  # noqa
  File "/opt/conda/lib/python3.10/site-packages/wandb/apis/internal.py", line 3, in <module>
    from wandb.sdk.internal.internal_api import Api as InternalApi
  File "/opt/conda/lib/python3.10/site-packages/wandb/sdk/internal/internal_api.py", line 48, in <module>
    from ..lib import retry
  File "/opt/conda/lib/python3.10/site-packages/wandb/sdk/lib/retry.py", line 17, in <module>
    from .mailbox import ContextCancelledError
  File "/opt/conda/lib/python3.10/site-packages/wandb/sdk/lib/mailbox.py", line 102, in <module>
    class _MailboxSlot:
  File "/opt/conda/lib/python3.10/site-packages/wandb/sdk/lib/mailbox.py", line 103, in _MailboxSlot
    _result: Optional[pb.Result]
AttributeError: module 'wandb.proto.wandb_internal_pb2' has no attribute 'Result'
I0519 14:39:20.125581   798 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 14:39:20.126137   798 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 14:39:20.126168   798 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
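
Here training fails while ppdet's WandbCallback does `import wandb`: the package is present (so the "wandb not found" message is misleading), but its generated protobuf stubs don't expose the Result message. That pattern usually indicates a wandb/protobuf version mismatch or a half-upgraded install; reinstalling wandb together with a protobuf release it supports, or simply not enabling --use_wandb, avoids it. A quick way to see which versions are actually in the environment (the compatible pairing has to be checked against wandb's own requirements):

    import importlib.metadata as md

    for pkg in ("wandb", "protobuf"):
        try:
            print(pkg, md.version(pkg))
        except md.PackageNotFoundError:
            print(pkg, "not installed")
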
Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
[2024-05-19 14:47:12,782] [    INFO] distributed_strategy.py:214 - distributed strategy initialized
======================= Modified FLAGS detected =======================
FLAGS(name='FLAGS_selected_gpus', current_value='1', default_value='')
=======================================================================
I0519 14:47:12.783265   179 tcp_utils.cc:107] Retry to connect to 172.19.2.2:41929 while the server is not yet listening.
I0519 14:47:15.783504   179 tcp_utils.cc:130] Successfully connected to 172.19.2.2:41929
I0519 14:47:15.811667   179 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:47:15,812] [    INFO] topology.py:358 - Total 2 pipe comm group(s) create successfully!
W0519 14:47:15.812635   179 gpu_resources.cc:119] Please NOTE: device: 1, GPU Compute Capability: 7.5, Driver API Version: 12.2, Runtime API Version: 11.8
W0519 14:47:15.826958   179 gpu_resources.cc:164] device: 1, cuDNN Version: 8.9.
I0519 14:47:15.988464   179 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:47:15,988] [    INFO] topology.py:358 - Total 1 data comm group(s) create successfully!
[2024-05-19 14:47:15,988] [    INFO] topology.py:358 - Total 2 model comm group(s) create successfully!
[2024-05-19 14:47:15,988] [    INFO] topology.py:358 - Total 2 sharding comm group(s) create successfully!
I0519 14:47:15.988860   179 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-19 14:47:15,988] [    INFO] topology.py:288 - HybridParallelInfo: rank_id: 1, mp_degree: 1, sharding_degree: 1, pp_degree: 1, dp_degree: 2, sep_degree: 1, mp_group: [1],  sharding_group: [1], pp_group: [1], dp_group: [0, 1], sep:group: None, check/clip group: [1]
loading annotations into memory...
Done (t=1.97s)
creating index...
index created!
Found an invalid bbox in annotations: im_id: 201, area: 0.0 x1: 611, y1: 158, x2: 615, y2: 158.
W0519 14:48:20.116335   179 reducer.cc:721] All parameters are involved in the backward pass. It is recommended to set find_unused_parameters to False to improve performance. However, if unused parameters appear in subsequent iterative training, then an error will occur. Please make it clear that in the subsequent training, there will be no parameters that are not used in the backward pass, and then set find_unused_parameters
Traceback (most recent call last):
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 183, in <module>
    main()
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 179, in main
    run(FLAGS, cfg)
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 135, in run
    trainer.train(FLAGS.eval)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/engine/trainer.py", line 377, in train
    outputs = model(data)
  File "/root/.local/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/paddle/distributed/parallel.py", line 528, in forward
    outputs = self._layers(*inputs, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/architectures/meta_arch.py", line 60, in forward
    out = self.get_loss()
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/architectures/detr.py", line 113, in get_loss
    return self._forward()
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/architectures/detr.py", line 87, in _forward
    out_transformer = self.transformer(body_feats, pad_mask, self.inputs)
  File "/root/.local/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/transformers/rtdetr_transformer.py", line 434, in forward
    out_bboxes, out_logits = self.decoder(
  File "/root/.local/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/transformers/rtdetr_transformer.py", line 206, in forward
    output = layer(output, ref_points_input, memory,
  File "/root/.local/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/transformers/rtdetr_transformer.py", line 160, in forward
    tgt2 = self.self_attn(q, k, value=tgt, attn_mask=attn_mask)
  File "/root/.local/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1429, in __call__
    return self.forward(*inputs, **kwargs)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/modeling/layers.py", line 1287, in forward
    product = product * scaling
MemoryError: 

--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0   paddle::pybind::CallScalarFuction(paddle::Tensor const&, double, std::string)
1   scale_ad_func(paddle::Tensor const&, paddle::experimental::ScalarBase<paddle::Tensor>, float, bool)
2   paddle::experimental::scale(paddle::Tensor const&, paddle::experimental::ScalarBase<paddle::Tensor> const&, float, bool)
3   void phi::ScaleKernel<float, phi::GPUContext>(phi::GPUContext const&, phi::DenseTensor const&, paddle::experimental::ScalarBase<phi::DenseTensor> const&, float, bool, phi::DenseTensor*)
4   float* phi::DeviceContext::Alloc<float>(phi::TensorBase*, unsigned long, bool) const
5   phi::DeviceContext::Impl::Alloc(phi::TensorBase*, phi::Place const&, phi::DataType, unsigned long, bool, bool) const
6   phi::DenseTensor::AllocateFrom(phi::Allocator*, phi::DataType, unsigned long, bool)
7   paddle::memory::allocation::Allocator::Allocate(unsigned long)
8   paddle::memory::allocation::StatAllocator::AllocateImpl(unsigned long)
9   paddle::memory::allocation::Allocator::Allocate(unsigned long)
10  paddle::memory::allocation::Allocator::Allocate(unsigned long)
11  paddle::memory::allocation::Allocator::Allocate(unsigned long)
12  paddle::memory::allocation::Allocator::Allocate(unsigned long)
13  paddle::memory::allocation::CUDAAllocator::AllocateImpl(unsigned long)
14  std::string phi::enforce::GetCompleteTraceBackString<std::string >(std::string&&, char const*, int)
15  phi::enforce::GetCurrentTraceBackString[abi:cxx11](bool)

----------------------
Error Message Summary:
----------------------
ResourceExhaustedError: 

Out of memory error on GPU 1. Cannot allocate 540.382812MB memory on GPU 1, 14.481445GB memory has been allocated and available memory is only 273.062500MB.

Please check whether there is any other process using GPU 1.
1. If yes, please stop them, or start PaddlePaddle on another GPU.
2. If no, please decrease the batch size of your model. 
 (at /paddle/paddle/fluid/memory/allocation/cuda_allocator.cc:86)

I0519 23:16:03.615808   179 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 23:16:03.616861   179 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0519 23:16:03.616915   179 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
[2024-05-20 01:06:18,200] [    INFO] distributed_strategy.py:214 - distributed strategy initialized
======================= Modified FLAGS detected =======================
FLAGS(name='FLAGS_selected_gpus', current_value='1', default_value='')
=======================================================================
I0520 01:06:18.201324   214 tcp_utils.cc:107] Retry to connect to 172.19.2.2:45378 while the server is not yet listening.
I0520 01:06:21.201637   214 tcp_utils.cc:130] Successfully connected to 172.19.2.2:45378
I0520 01:06:21.230141   214 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-20 01:06:21,230] [    INFO] topology.py:358 - Total 2 pipe comm group(s) create successfully!
W0520 01:06:21.231158   214 gpu_resources.cc:119] Please NOTE: device: 1, GPU Compute Capability: 7.5, Driver API Version: 12.2, Runtime API Version: 11.8
W0520 01:06:21.242787   214 gpu_resources.cc:164] device: 1, cuDNN Version: 8.9.
I0520 01:06:21.446182   214 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-20 01:06:21,446] [    INFO] topology.py:358 - Total 1 data comm group(s) create successfully!
[2024-05-20 01:06:21,446] [    INFO] topology.py:358 - Total 2 model comm group(s) create successfully!
[2024-05-20 01:06:21,446] [    INFO] topology.py:358 - Total 2 sharding comm group(s) create successfully!
I0520 01:06:21.446622   214 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-20 01:06:21,446] [    INFO] topology.py:288 - HybridParallelInfo: rank_id: 1, mp_degree: 1, sharding_degree: 1, pp_degree: 1, dp_degree: 2, sep_degree: 1, mp_group: [1],  sharding_group: [1], pp_group: [1], dp_group: [0, 1], sep:group: None, check/clip group: [1]
loading annotations into memory...
Done (t=2.07s)
creating index...
index created!
Found an invalid bbox in annotations: im_id: 201, area: 0.0 x1: 611, y1: 158, x2: 615, y2: 158.
Traceback (most recent call last):
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 183, in <module>
    main()
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 179, in main
    run(FLAGS, cfg)
  File "/kaggle/working/ObjectDetection/DETR/tools/train.py", line 130, in run
    trainer.resume_weights(FLAGS.resume)
  File "/kaggle/working/ObjectDetection/DETR/ppdet/engine/trainer.py", line 259, in resume_weights
    self.start_epoch = load_weight(self.model, weights, self.optimizer,
  File "/kaggle/working/ObjectDetection/DETR/ppdet/utils/checkpoint.py", line 55, in load_weight
    raise ValueError("Model pretrain path {} does not "
ValueError: Model pretrain path /kaggle/working/ObjectDetection/DETR/output/rtdetr_hgnetv2_x_6x_coco/latest.pdparams does not exists.
I0520 01:06:37.745325   214 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0520 01:06:37.745388   214 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
I0520 01:06:37.745405   214 process_group_nccl.cc:132] ProcessGroupNCCL destruct 
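
This attempt was launched with a resume option pointing at output/rtdetr_hgnetv2_x_6x_coco/latest, but no checkpoint had been written there yet, so load_weight() raises. Either drop the resume flag for a fresh run or point it at a checkpoint that exists. A hedged pre-flight check (the path is copied from the error message; load_weight appending the .pdparams extension is inferred from that message):

    import os

    resume = "/kaggle/working/ObjectDetection/DETR/output/rtdetr_hgnetv2_x_6x_coco/latest"
    print(os.path.exists(resume + ".pdparams"))   # False here, hence the ValueError
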
Warning: Unable to use MOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
Warning: Unable to use MCMOT metric, please install motmetrics, for example: `pip install motmetrics`, see https://github.com/longcw/py-motmetrics
[2024-05-20 01:07:01,880] [    INFO] distributed_strategy.py:214 - distributed strategy initialized
======================= Modified FLAGS detected =======================
FLAGS(name='FLAGS_selected_gpus', current_value='1', default_value='')
=======================================================================
I0520 01:07:01.881565   364 tcp_utils.cc:130] Successfully connected to 172.19.2.2:36395
I0520 01:07:01.921115   364 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-20 01:07:01,921] [    INFO] topology.py:358 - Total 2 pipe comm group(s) create successfully!
W0520 01:07:01.922013   364 gpu_resources.cc:119] Please NOTE: device: 1, GPU Compute Capability: 7.5, Driver API Version: 12.2, Runtime API Version: 11.8
W0520 01:07:01.923400   364 gpu_resources.cc:164] device: 1, cuDNN Version: 8.9.
I0520 01:07:02.108011   364 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-20 01:07:02,108] [    INFO] topology.py:358 - Total 1 data comm group(s) create successfully!
[2024-05-20 01:07:02,108] [    INFO] topology.py:358 - Total 2 model comm group(s) create successfully!
[2024-05-20 01:07:02,108] [    INFO] topology.py:358 - Total 2 sharding comm group(s) create successfully!
I0520 01:07:02.108417   364 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
[2024-05-20 01:07:02,108] [    INFO] topology.py:288 - HybridParallelInfo: rank_id: 1, mp_degree: 1, sharding_degree: 1, pp_degree: 1, dp_degree: 2, sep_degree: 1, mp_group: [1],  sharding_group: [1], pp_group: [1], dp_group: [0, 1], sep:group: None, check/clip group: [1]
loading annotations into memory...
Done (t=1.99s)
creating index...
index created!
Found an invalid bbox in annotations: im_id: 201, area: 0.0 x1: 611, y1: 158, x2: 615, y2: 158.
W0520 01:07:37.872128   364 reducer.cc:721] All parameters are involved in the backward pass. It is recommended to set find_unused_parameters to False to improve performance. However, if unused parameters appear in subsequent iterative training, then an error will occur. Please make it clear that in the subsequent training, there will be no parameters that are not used in the backward pass, and then set find_unused_parameters


--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0   paddle::pybind::eager_api_sync_batch_norm_(_object*, _object*, _object*)
1   sync_batch_norm__ad_func(paddle::Tensor const&, paddle::Tensor&, paddle::Tensor&, paddle::Tensor const&, paddle::Tensor const&, bool, float, float, std::string, bool, bool)
2   paddle::experimental::sync_batch_norm_(paddle::Tensor const&, paddle::Tensor&, paddle::Tensor&, paddle::Tensor const&, paddle::Tensor const&, bool, float, float, std::string const&, bool, bool)
3   void phi::SyncBatchNormKernel<float, phi::GPUContext>(phi::GPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, bool, float, float, std::string const&, bool, bool, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*)

----------------------
Error Message Summary:
----------------------
FatalError: `Termination signal` is detected by the operating system.
  [TimeInfo: *** Aborted at 1716178280 (unix time) try "date -d @1716178280" if you are using GNU date ***]
  [SignalInfo: *** SIGTERM (@0x15c) received by PID 364 (TID 0x7d69c38aa740) from PID 348 ***]