persian-flair-pos / training.log
commit d5e53ea ("Update model") by hamedkhaledi
2022-08-07 16:00:48,261 ----------------------------------------------------------------------------------------------------
2022-08-07 16:00:48,267 Model: "SequenceTagger(
(embeddings): StackedEmbeddings(
(list_embedding_0): WordEmbeddings('fa')
(list_embedding_1): FlairEmbeddings(
(lm): LanguageModel(
(drop): Dropout(p=0.1, inplace=False)
(encoder): Embedding(5105, 100)
(rnn): LSTM(100, 2048)
(decoder): Linear(in_features=2048, out_features=5105, bias=True)
)
)
(list_embedding_2): FlairEmbeddings(
(lm): LanguageModel(
(drop): Dropout(p=0.1, inplace=False)
(encoder): Embedding(5105, 100)
(rnn): LSTM(100, 2048)
(decoder): Linear(in_features=2048, out_features=5105, bias=True)
)
)
)
(word_dropout): WordDropout(p=0.05)
(locked_dropout): LockedDropout(p=0.5)
(embedding2nn): Linear(in_features=4396, out_features=4396, bias=True)
(rnn): LSTM(4396, 256, batch_first=True, bidirectional=True)
(linear): Linear(in_features=512, out_features=32, bias=True)
(beta): 1.0
(weights): None
(weight_tensor) None
)"
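The stacked embedding width in the summary above can be sanity-checked by hand: each FlairEmbeddings language model contributes its 2048-dimensional LSTM hidden state, and the remaining 4396 - 2 * 2048 = 300 dimensions come from WordEmbeddings('fa') (assumed here to be 300-dim fastText vectors; the word-embedding width is not printed in the log itself). A minimal check in plain Python, no Flair required:

```python
# Sanity-check the stacked embedding width reported in the model summary.
# Assumption (not stated in the log): WordEmbeddings('fa') is 300-dimensional,
# inferred from 4396 - 2 * 2048 = 300.
word_dim = 300            # list_embedding_0: WordEmbeddings('fa')
flair_lm_hidden = 2048    # LSTM(100, 2048) in each FlairEmbeddings LM

stacked_dim = word_dim + 2 * flair_lm_hidden
print(stacked_dim)  # -> 4396, matching embedding2nn: in_features=4396
```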
2022-08-07 16:00:48,272 ----------------------------------------------------------------------------------------------------
2022-08-07 16:00:48,276 Corpus: "Corpus: 24000 train + 3000 dev + 3000 test sentences"
2022-08-07 16:00:48,281 ----------------------------------------------------------------------------------------------------
2022-08-07 16:00:48,282 Parameters:
2022-08-07 16:00:48,285 - learning_rate: "0.1"
2022-08-07 16:00:48,289 - mini_batch_size: "8"
2022-08-07 16:00:48,293 - patience: "3"
2022-08-07 16:00:48,295 - anneal_factor: "0.5"
2022-08-07 16:00:48,296 - max_epochs: "5"
2022-08-07 16:00:48,297 - shuffle: "True"
2022-08-07 16:00:48,300 - train_with_dev: "False"
2022-08-07 16:00:48,301 - batch_growth_annealing: "False"
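The iteration counts in the epochs below follow directly from these parameters and the corpus size: 24000 training sentences at mini_batch_size 8 give 3000 batches per epoch, which is why every epoch runs to "iter 3000/3000".

```python
# Batches per epoch implied by the corpus size and mini_batch_size above.
train_sentences = 24_000
mini_batch_size = 8
print(train_sentences // mini_batch_size)  # -> 3000, matching "iter 3000/3000"
```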
2022-08-07 16:00:48,303 ----------------------------------------------------------------------------------------------------
2022-08-07 16:00:48,306 Model training base path: "/content/drive/MyDrive/project/data/pos/model2"
2022-08-07 16:00:48,309 ----------------------------------------------------------------------------------------------------
2022-08-07 16:00:48,316 Device: cuda:0
2022-08-07 16:00:48,317 ----------------------------------------------------------------------------------------------------
2022-08-07 16:00:48,318 Embeddings storage mode: none
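With patience 3 and anneal_factor 0.5, the trainer halves the learning rate only after several epochs without dev-set improvement; in this run every epoch improved ("BAD EPOCHS: 0" throughout), so the lr stays at 0.1 for all five epochs. A minimal sketch of that patience-based schedule logic in plain Python (an illustration of the idea, not Flair's actual implementation):

```python
def anneal_schedule(dev_scores, lr=0.1, anneal_factor=0.5, patience=3):
    """Return the learning rate used in each epoch under patience-based
    annealing: halve lr once more than `patience` epochs pass without a
    new best dev score (mirroring PyTorch's ReduceLROnPlateau behavior)."""
    best, bad_epochs, lrs = float("-inf"), 0, []
    for score in dev_scores:
        lrs.append(lr)            # the epoch trains at the current lr
        if score > best:
            best, bad_epochs = score, 0
        else:
            bad_epochs += 1
            if bad_epochs > patience:
                lr *= anneal_factor
                bad_epochs = 0
    return lrs

# Dev f1 improved every epoch in this log, so the lr never anneals:
print(anneal_schedule([0.9601, 0.9708, 0.9731, 0.9744, 0.9746]))
# -> [0.1, 0.1, 0.1, 0.1, 0.1]
```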
2022-08-07 16:00:48,337 ----------------------------------------------------------------------------------------------------
2022-08-07 16:02:01,728 epoch 1 - iter 300/3000 - loss 0.75227154 - samples/sec: 32.71 - lr: 0.100000
2022-08-07 16:03:44,240 epoch 1 - iter 600/3000 - loss 0.54616157 - samples/sec: 23.58 - lr: 0.100000
2022-08-07 16:05:07,940 epoch 1 - iter 900/3000 - loss 0.46940731 - samples/sec: 28.91 - lr: 0.100000
2022-08-07 16:06:48,542 epoch 1 - iter 1200/3000 - loss 0.41914715 - samples/sec: 24.03 - lr: 0.100000
2022-08-07 16:08:31,313 epoch 1 - iter 1500/3000 - loss 0.38015901 - samples/sec: 23.52 - lr: 0.100000
2022-08-07 16:10:05,508 epoch 1 - iter 1800/3000 - loss 0.35604709 - samples/sec: 25.67 - lr: 0.100000
2022-08-07 16:11:31,898 epoch 1 - iter 2100/3000 - loss 0.33691470 - samples/sec: 28.01 - lr: 0.100000
2022-08-07 16:13:00,338 epoch 1 - iter 2400/3000 - loss 0.32109903 - samples/sec: 27.35 - lr: 0.100000
2022-08-07 16:14:32,548 epoch 1 - iter 2700/3000 - loss 0.31528796 - samples/sec: 26.23 - lr: 0.100000
2022-08-07 16:16:09,123 epoch 1 - iter 3000/3000 - loss 0.30213703 - samples/sec: 25.03 - lr: 0.100000
2022-08-07 16:16:09,831 ----------------------------------------------------------------------------------------------------
2022-08-07 16:16:09,836 EPOCH 1 done: loss 0.3021 - lr 0.1000000
2022-08-07 16:21:08,895 DEV : loss 0.1289350390434265 - f1-score (micro avg) 0.9601
2022-08-07 16:21:08,937 BAD EPOCHS (no improvement): 0
2022-08-07 16:21:10,769 saving best model
2022-08-07 16:21:12,532 ----------------------------------------------------------------------------------------------------
2022-08-07 16:22:54,846 epoch 2 - iter 300/3000 - loss 0.21020090 - samples/sec: 23.46 - lr: 0.100000
2022-08-07 16:24:33,507 epoch 2 - iter 600/3000 - loss 0.20664426 - samples/sec: 24.50 - lr: 0.100000
2022-08-07 16:26:17,056 epoch 2 - iter 900/3000 - loss 0.20271364 - samples/sec: 23.33 - lr: 0.100000
2022-08-07 16:27:59,228 epoch 2 - iter 1200/3000 - loss 0.20055706 - samples/sec: 23.65 - lr: 0.100000
2022-08-07 16:29:39,722 epoch 2 - iter 1500/3000 - loss 0.19912427 - samples/sec: 24.05 - lr: 0.100000
2022-08-07 16:31:27,754 epoch 2 - iter 1800/3000 - loss 0.19760227 - samples/sec: 22.36 - lr: 0.100000
2022-08-07 16:33:12,162 epoch 2 - iter 2100/3000 - loss 0.19795635 - samples/sec: 23.14 - lr: 0.100000
2022-08-07 16:34:53,586 epoch 2 - iter 2400/3000 - loss 0.19672791 - samples/sec: 23.84 - lr: 0.100000
2022-08-07 16:36:42,505 epoch 2 - iter 2700/3000 - loss 0.19643492 - samples/sec: 22.19 - lr: 0.100000
2022-08-07 16:38:22,496 epoch 2 - iter 3000/3000 - loss 0.19530593 - samples/sec: 24.17 - lr: 0.100000
2022-08-07 16:38:23,157 ----------------------------------------------------------------------------------------------------
2022-08-07 16:38:23,162 EPOCH 2 done: loss 0.1953 - lr 0.1000000
2022-08-07 16:43:34,928 DEV : loss 0.10149012506008148 - f1-score (micro avg) 0.9708
2022-08-07 16:43:34,973 BAD EPOCHS (no improvement): 0
2022-08-07 16:43:36,767 saving best model
2022-08-07 16:43:38,486 ----------------------------------------------------------------------------------------------------
2022-08-07 16:45:23,089 epoch 3 - iter 300/3000 - loss 0.17774341 - samples/sec: 22.95 - lr: 0.100000
2022-08-07 16:47:08,214 epoch 3 - iter 600/3000 - loss 0.17596867 - samples/sec: 22.98 - lr: 0.100000
2022-08-07 16:48:50,711 epoch 3 - iter 900/3000 - loss 0.17436321 - samples/sec: 23.58 - lr: 0.100000
2022-08-07 16:50:35,039 epoch 3 - iter 1200/3000 - loss 0.17306311 - samples/sec: 23.16 - lr: 0.100000
2022-08-07 16:52:20,808 epoch 3 - iter 1500/3000 - loss 0.17261464 - samples/sec: 22.84 - lr: 0.100000
2022-08-07 16:54:02,750 epoch 3 - iter 1800/3000 - loss 0.17438407 - samples/sec: 23.71 - lr: 0.100000
2022-08-07 16:55:42,154 epoch 3 - iter 2100/3000 - loss 0.17363800 - samples/sec: 24.31 - lr: 0.100000
2022-08-07 16:57:21,978 epoch 3 - iter 2400/3000 - loss 0.17156485 - samples/sec: 24.21 - lr: 0.100000
2022-08-07 16:59:05,968 epoch 3 - iter 2700/3000 - loss 0.17042576 - samples/sec: 23.23 - lr: 0.100000
2022-08-07 17:00:46,166 epoch 3 - iter 3000/3000 - loss 0.16937353 - samples/sec: 24.12 - lr: 0.100000
2022-08-07 17:00:46,857 ----------------------------------------------------------------------------------------------------
2022-08-07 17:00:46,860 EPOCH 3 done: loss 0.1694 - lr 0.1000000
2022-08-07 17:05:58,652 DEV : loss 0.09684865176677704 - f1-score (micro avg) 0.9731
2022-08-07 17:05:58,703 BAD EPOCHS (no improvement): 0
2022-08-07 17:06:00,477 saving best model
2022-08-07 17:06:02,321 ----------------------------------------------------------------------------------------------------
2022-08-07 17:07:44,646 epoch 4 - iter 300/3000 - loss 0.16212096 - samples/sec: 23.46 - lr: 0.100000
2022-08-07 17:09:25,119 epoch 4 - iter 600/3000 - loss 0.15843816 - samples/sec: 24.05 - lr: 0.100000
2022-08-07 17:11:07,080 epoch 4 - iter 900/3000 - loss 0.15900626 - samples/sec: 23.70 - lr: 0.100000
2022-08-07 17:12:47,149 epoch 4 - iter 1200/3000 - loss 0.15764029 - samples/sec: 24.15 - lr: 0.100000
2022-08-07 17:14:33,737 epoch 4 - iter 1500/3000 - loss 0.16000098 - samples/sec: 22.66 - lr: 0.100000
2022-08-07 17:16:21,024 epoch 4 - iter 1800/3000 - loss 0.15931205 - samples/sec: 22.52 - lr: 0.100000
2022-08-07 17:18:01,785 epoch 4 - iter 2100/3000 - loss 0.15961928 - samples/sec: 23.99 - lr: 0.100000
2022-08-07 17:19:44,524 epoch 4 - iter 2400/3000 - loss 0.15845056 - samples/sec: 23.52 - lr: 0.100000
2022-08-07 17:21:27,429 epoch 4 - iter 2700/3000 - loss 0.15771950 - samples/sec: 23.49 - lr: 0.100000
2022-08-07 17:23:10,018 epoch 4 - iter 3000/3000 - loss 0.15777116 - samples/sec: 23.56 - lr: 0.100000
2022-08-07 17:23:10,788 ----------------------------------------------------------------------------------------------------
2022-08-07 17:23:10,794 EPOCH 4 done: loss 0.1578 - lr 0.1000000
2022-08-07 17:28:23,406 DEV : loss 0.09011354297399521 - f1-score (micro avg) 0.9744
2022-08-07 17:28:23,451 BAD EPOCHS (no improvement): 0
2022-08-07 17:28:25,515 saving best model
2022-08-07 17:28:27,346 ----------------------------------------------------------------------------------------------------
2022-08-07 17:30:06,455 epoch 5 - iter 300/3000 - loss 0.14466099 - samples/sec: 24.22 - lr: 0.100000
2022-08-07 17:31:44,351 epoch 5 - iter 600/3000 - loss 0.14401223 - samples/sec: 24.70 - lr: 0.100000
2022-08-07 17:33:27,083 epoch 5 - iter 900/3000 - loss 0.14768050 - samples/sec: 23.53 - lr: 0.100000
2022-08-07 17:35:07,577 epoch 5 - iter 1200/3000 - loss 0.14646819 - samples/sec: 24.05 - lr: 0.100000
2022-08-07 17:36:47,275 epoch 5 - iter 1500/3000 - loss 0.14604558 - samples/sec: 24.25 - lr: 0.100000
2022-08-07 17:38:24,129 epoch 5 - iter 1800/3000 - loss 0.14788483 - samples/sec: 24.96 - lr: 0.100000
2022-08-07 17:40:04,518 epoch 5 - iter 2100/3000 - loss 0.14695063 - samples/sec: 24.08 - lr: 0.100000
2022-08-07 17:41:51,964 epoch 5 - iter 2400/3000 - loss 0.14697433 - samples/sec: 22.49 - lr: 0.100000
2022-08-07 17:43:32,173 epoch 5 - iter 2700/3000 - loss 0.14745015 - samples/sec: 24.12 - lr: 0.100000
2022-08-07 17:45:17,557 epoch 5 - iter 3000/3000 - loss 0.14917362 - samples/sec: 22.93 - lr: 0.100000
2022-08-07 17:45:18,255 ----------------------------------------------------------------------------------------------------
2022-08-07 17:45:18,263 EPOCH 5 done: loss 0.1492 - lr 0.1000000
2022-08-07 17:50:33,128 DEV : loss 0.08973350375890732 - f1-score (micro avg) 0.9746
2022-08-07 17:50:33,176 BAD EPOCHS (no improvement): 0
2022-08-07 17:50:34,869 saving best model
2022-08-07 17:50:38,774 ----------------------------------------------------------------------------------------------------
2022-08-07 17:50:38,811 loading file /content/drive/MyDrive/project/data/pos/model2/best-model.pt
2022-08-07 17:55:05,420 0.9637 0.9637 0.9637 0.9637
2022-08-07 17:55:05,422
Results:
- F-score (micro) 0.9637
- F-score (macro) 0.8989
- Accuracy 0.9637
By class:
              precision    recall  f1-score   support

      N_SING     0.9724    0.9521    0.9621     30553
           P     0.9577    0.9919    0.9745      9951
        DELM     0.9982    0.9996    0.9989      8122
         ADJ     0.8768    0.9334    0.9042      7466
         CON     0.9905    0.9786    0.9845      6823
        N_PL     0.9719    0.9644    0.9681      5163
        V_PA     0.9753    0.9756    0.9755      2873
       V_PRS     0.9922    0.9852    0.9887      2841
         NUM     0.9907    0.9982    0.9944      2232
         PRO     0.9823    0.9349    0.9580      2258
         DET     0.9429    0.9800    0.9611      1853
      CLITIC     1.0000    1.0000    1.0000      1259
        V_PP     0.9398    0.9836    0.9612      1158
       V_SUB     0.9746    0.9680    0.9713      1031
         ADV     0.8180    0.8375    0.8276       880
    ADV_TIME     0.9238    0.9673    0.9451       489
       V_AUX     0.9947    0.9947    0.9947       379
     ADJ_SUP     0.9925    0.9815    0.9870       270
    ADJ_CMPR     0.9372    0.9275    0.9323       193
     ADV_NEG     0.9071    0.8523    0.8789       149
       ADV_I     0.8345    0.8286    0.8315       140
     ADJ_INO     0.8846    0.5476    0.6765       168
          FW     0.8442    0.5285    0.6500       123
    ADV_COMP     0.8072    0.8816    0.8428        76
     ADV_LOC     0.9342    0.9726    0.9530        73
       V_IMP     0.7826    0.6429    0.7059        56
        PREV     0.8276    0.7500    0.7869        32
         INT     0.8333    0.4167    0.5556        24

   micro avg     0.9637    0.9637    0.9637     86635
   macro avg     0.9245    0.8848    0.8989     86635
weighted avg     0.9643    0.9637    0.9637     86635
 samples avg     0.9637    0.9637    0.9637     86635
2022-08-07 17:55:05,427 ----------------------------------------------------------------------------------------------------
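The summary averages can be re-derived from the per-class rows in the report: for single-label per-token tagging, micro-averaged precision/recall/F1 all coincide with accuracy (hence the repeated 0.9637), while macro F1 is the unweighted mean of the 28 class F1 scores. A quick check in plain Python, using the (rounded) F1 values as printed above:

```python
# Per-class f1 scores copied from the classification report above,
# in table order from N_SING down to INT.
f1 = [0.9621, 0.9745, 0.9989, 0.9042, 0.9845, 0.9681, 0.9755, 0.9887,
      0.9944, 0.9580, 0.9611, 1.0000, 0.9612, 0.9713, 0.8276, 0.9451,
      0.9947, 0.9870, 0.9323, 0.8789, 0.8315, 0.6765, 0.6500, 0.8428,
      0.9530, 0.7059, 0.7869, 0.5556]

macro_f1 = sum(f1) / len(f1)
print(round(macro_f1, 4))  # -> 0.8989, matching the reported "F-score (macro)"
```

The mean of the already-rounded per-class values happens to agree with the reported macro score to all four printed decimals.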