Commit · fd2e5db
Parent(s): b1c53f2
initial commit

Files changed:
- README.md +18 -0
- config.json +26 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +1 -0
- tokenizer.json +0 -0
- tokenizer_config.json +1 -0
- training.log +41 -0
- vocab.txt +0 -0
README.md
ADDED
@@ -0,0 +1,18 @@
+---
+language: en
+tags:
+- bert
+- rte
+- glue
+- kd
+- torchdistill
+license: apache-2.0
+datasets:
+- rte
+metrics:
+- accuracy
+---
+
+`bert-base-uncased` fine-tuned on the RTE dataset, using a fine-tuned `bert-large-uncased` as the teacher model and [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) with [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
+The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/rte/kd/bert_base_uncased_from_bert_large_uncased.yaml).
+I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
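For readers who want to try the distilled model, a minimal inference sketch with the `transformers` library follows. The commit does not state where the model is published, so `repo_id` below is a placeholder assumption; a local checkout of these files works the same way.

```python
# Minimal inference sketch. The repo id is a placeholder assumption -- this
# commit does not say where the model is published.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "path-or-hub-id-of-this-model"  # placeholder: substitute the real location
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# RTE is a sentence-pair entailment task: premise + hypothesis.
premise = "A man is playing a guitar on stage."
hypothesis = "A man is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # 0 = entailment, 1 = not_entailment (GLUE RTE label convention)
```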
config.json
ADDED
@@ -0,0 +1,26 @@
+{
+  "_name_or_path": "bert-base-uncased",
+  "architectures": [
+    "BertForSequenceClassification"
+  ],
+  "attention_probs_dropout_prob": 0.1,
+  "finetuning_task": "rte",
+  "gradient_checkpointing": false,
+  "hidden_act": "gelu",
+  "hidden_dropout_prob": 0.1,
+  "hidden_size": 768,
+  "initializer_range": 0.02,
+  "intermediate_size": 3072,
+  "layer_norm_eps": 1e-12,
+  "max_position_embeddings": 512,
+  "model_type": "bert",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 12,
+  "pad_token_id": 0,
+  "position_embedding_type": "absolute",
+  "problem_type": "single_label_classification",
+  "transformers_version": "4.6.1",
+  "type_vocab_size": 2,
+  "use_cache": true,
+  "vocab_size": 30522
+}
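This config describes a standard 12-layer BERT-base encoder with a single-label classification head. As a quick sanity check, here is a sketch (assuming only the stock `transformers` API) that instantiates the architecture from this file:

```python
# Sketch: instantiate the architecture described by config.json. The weights
# here are randomly initialized; the trained weights live in pytorch_model.bin.
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig.from_json_file("config.json")
model = BertForSequenceClassification(config)

# Values match the file above: 12 layers, hidden size 768.
assert config.num_hidden_layers == 12 and config.hidden_size == 768
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # ~110M for BERT-base
```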
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8d70e71e4e304fc905099faa9090838b449b426749817ec6ac6ad141f5bf340
+size 438024457
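`pytorch_model.bin` is stored as a Git LFS pointer: the repo tracks only the object's SHA-256 and byte size, while the ~438 MB payload lives in LFS storage. A short sketch for verifying a downloaded file against this pointer:

```python
# Verify a downloaded pytorch_model.bin against the LFS pointer's oid and size.
import hashlib
import os

expected_oid = "a8d70e71e4e304fc905099faa9090838b449b426749817ec6ac6ad141f5bf340"
expected_size = 438024457

assert os.path.getsize("pytorch_model.bin") == expected_size, "size mismatch"
h = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:
    # Hash in 1 MiB chunks to avoid loading the whole file into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert h.hexdigest() == expected_oid, "checksum mismatch"
```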
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
+{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
+{"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "do_lower": true, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "bert-base-uncased"}
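The tokenizer files are the stock `bert-base-uncased` WordPiece tokenizer, with lower-casing on and a 512-token limit matching the model's `max_position_embeddings`. A sketch, assuming the files from this commit sit in the working directory:

```python
# Sketch: build the tokenizer from the committed files and encode an RTE pair.
from transformers import BertTokenizerFast

tok = BertTokenizerFast(
    tokenizer_file="tokenizer.json",
    vocab_file="vocab.txt",
    do_lower_case=True,
    model_max_length=512,
)
enc = tok("The premise sentence.", "The hypothesis sentence.")
# Sentence pairs are packed as [CLS] premise [SEP] hypothesis [SEP],
# with token_type_ids distinguishing the two segments.
print(tok.convert_ids_to_tokens(enc["input_ids"]))
```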
training.log
ADDED
@@ -0,0 +1,41 @@
+2021-05-31 17:50:29,582 INFO __main__ Namespace(adjust_lr=False, config='torchdistill/configs/sample/glue/rte/kd/bert_base_uncased_from_bert_large_uncased.yaml', log='log/glue/rte/kd/bert_base_uncased_from_bert_large_uncased.txt', private_output='leaderboard/glue/kd/bert_base_uncased_from_bert_large_uncased/', seed=None, student_only=False, task_name='rte', test_only=False, world_size=1)
+2021-05-31 17:50:29,613 INFO __main__ Distributed environment: NO
+Num processes: 1
+Process index: 0
+Local process index: 0
+Device: cuda
+Use FP16 precision: True
+
+2021-05-31 17:50:39,943 WARNING datasets.builder Reusing dataset glue (/root/.cache/huggingface/datasets/glue/rte/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
+2021-05-31 17:50:41,587 INFO __main__ Start training
+2021-05-31 17:50:41,587 INFO torchdistill.models.util [teacher model]
+2021-05-31 17:50:41,587 INFO torchdistill.models.util Using the original teacher model
+2021-05-31 17:50:41,588 INFO torchdistill.models.util [student model]
+2021-05-31 17:50:41,588 INFO torchdistill.models.util Using the original student model
+2021-05-31 17:50:41,588 INFO torchdistill.core.distillation Loss = 1.0 * OrgLoss
+2021-05-31 17:50:41,588 INFO torchdistill.core.distillation Freezing the whole teacher model
+2021-05-31 17:50:45,473 INFO torchdistill.misc.log Epoch: [0] [ 0/78] eta: 0:01:02 lr: 9.957264957264958e-05 sample/s: 5.016064534395792 loss: 0.0097 (0.0097) time: 0.8039 data: 0.0065 max mem: 3804
+2021-05-31 17:51:25,664 INFO torchdistill.misc.log Epoch: [0] [50/78] eta: 0:00:22 lr: 7.820512820512821e-05 sample/s: 5.000885583822104 loss: 0.0065 (0.0081) time: 0.8035 data: 0.0049 max mem: 5115
+2021-05-31 17:51:47,044 INFO torchdistill.misc.log Epoch: [0] Total time: 0:01:02
+2021-05-31 17:51:48,268 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/rte/default_experiment-1-0.arrow
+2021-05-31 17:51:48,268 INFO __main__ Validation: accuracy = 0.6570397111913358
+2021-05-31 17:51:48,268 INFO __main__ Updating ckpt at ./resource/ckpt/glue/rte/kd/rte-bert-base-uncased_from_bert-large-uncased
+2021-05-31 17:51:50,012 INFO torchdistill.misc.log Epoch: [1] [ 0/78] eta: 0:01:02 lr: 6.623931623931624e-05 sample/s: 5.007037577218317 loss: 0.0042 (0.0042) time: 0.8055 data: 0.0066 max mem: 5115
+2021-05-31 17:52:30,204 INFO torchdistill.misc.log Epoch: [1] [50/78] eta: 0:00:22 lr: 4.4871794871794874e-05 sample/s: 4.994946466605415 loss: 0.0033 (0.0032) time: 0.8037 data: 0.0050 max mem: 5115
+2021-05-31 17:52:51,720 INFO torchdistill.misc.log Epoch: [1] Total time: 0:01:02
+2021-05-31 17:52:52,943 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/rte/default_experiment-1-0.arrow
+2021-05-31 17:52:52,943 INFO __main__ Validation: accuracy = 0.6859205776173285
+2021-05-31 17:52:52,943 INFO __main__ Updating ckpt at ./resource/ckpt/glue/rte/kd/rte-bert-base-uncased_from_bert-large-uncased
+2021-05-31 17:52:54,985 INFO torchdistill.misc.log Epoch: [2] [ 0/78] eta: 0:01:03 lr: 3.290598290598291e-05 sample/s: 5.004806067380402 loss: 0.0014 (0.0014) time: 0.8145 data: 0.0153 max mem: 5115
+2021-05-31 17:53:35,181 INFO torchdistill.misc.log Epoch: [2] [50/78] eta: 0:00:22 lr: 1.153846153846154e-05 sample/s: 5.00735140278351 loss: 0.0012 (0.0013) time: 0.8039 data: 0.0051 max mem: 5115
+2021-05-31 17:53:56,746 INFO torchdistill.misc.log Epoch: [2] Total time: 0:01:02
+2021-05-31 17:53:57,970 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/rte/default_experiment-1-0.arrow
+2021-05-31 17:53:57,970 INFO __main__ Validation: accuracy = 0.6823104693140795
+2021-05-31 17:53:58,006 INFO __main__ [Teacher: bert-large-uncased]
+2021-05-31 17:54:01,530 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/rte/default_experiment-1-0.arrow
+2021-05-31 17:54:01,530 INFO __main__ Test: accuracy = 0.740072202166065
+2021-05-31 17:54:04,735 INFO __main__ [Student: bert-base-uncased]
+2021-05-31 17:54:05,975 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/rte/default_experiment-1-0.arrow
+2021-05-31 17:54:05,976 INFO __main__ Test: accuracy = 0.6859205776173285
+2021-05-31 17:54:05,976 INFO __main__ Start prediction for private dataset(s)
+2021-05-31 17:54:05,977 INFO __main__ rte/test: 3000 samples
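The log reports the training criterion only as `Loss = 1.0 * OrgLoss`, a single weighted term assembled from the linked YAML config. For reference, here is a generic knowledge-distillation objective in the style of Hinton et al. (2015); the temperature and weighting below are illustrative assumptions, not values taken from that config:

```python
# Generic KD objective for reference. temperature/alpha are illustrative
# assumptions, not the hyperparameters of the linked torchdistill config.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.9):
    # Soft-target term: KL divergence between temperature-softened
    # student and (frozen) teacher distributions, scaled by T^2 so
    # gradient magnitudes stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard-target term: ordinary cross-entropy against the gold RTE labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```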
vocab.txt
ADDED
The diff for this file is too large to render.
See raw diff