Update log.txt
log.txt ADDED
@@ -0,0 +1,41 @@
+Writing logs to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/albert-base-v2-glue:wnli-2020-06-29-11:21/log.txt.
+Loading nlp dataset glue, subset wnli, split train.
+Loading nlp dataset glue, subset wnli, split validation.
+Loaded dataset. Found: 2 labels: ([0, 1])
+Loading transformers AutoModelForSequenceClassification: albert-base-v2
+Tokenizing training data. (len: 635)
+Tokenizing eval data (len: 71)
+Loaded data and tokenized in 4.413618564605713s
+Training model across 4 GPUs
+***** Running training *****
+Num examples = 635
+Batch size = 64
+Max sequence length = 256
+Num steps = 45
+Num epochs = 5
+Learning rate = 2e-05
+Eval accuracy: 59.154929577464785%
+Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/albert-base-v2-glue:wnli-2020-06-29-11:21/.
+Eval accuracy: 47.88732394366197%
+Eval accuracy: 45.07042253521127%
+Eval accuracy: 47.88732394366197%
+Eval accuracy: 50.70422535211267%
+Saved tokenizer <textattack.models.tokenizers.auto_tokenizer.AutoTokenizer object at 0x7f9b70a4ba60> to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/albert-base-v2-glue:wnli-2020-06-29-11:21/.
+Wrote README to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/albert-base-v2-glue:wnli-2020-06-29-11:21/README.md.
+Wrote training args to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/albert-base-v2-glue:wnli-2020-06-29-11:21/train_args.json.
+Writing logs to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/albert-base-v2-glue:wnli-2020-06-29-11:21/log.txt.
+Loading nlp dataset glue, subset wnli, split train.
+Loading nlp dataset glue, subset wnli, split validation.
+Loaded dataset. Found: 2 labels: ([0, 1])
+Loading transformers AutoModelForSequenceClassification: albert-base-v2
+Tokenizing training data. (len: 635)
+Tokenizing eval data (len: 71)
+Loaded data and tokenized in 4.476848840713501s
+Training model across 4 GPUs
+***** Running training *****
+Num examples = 635
+Batch size = 128
+Max sequence length = 256
+Num steps = 20
+Num epochs = 5
+Learning rate = 2e-05