Madhura committed
Commit c82fbcd · Parent(s): bafdfc4

Update README.md

Files changed (1): README.md (+33 -1)
README.md CHANGED

---
metrics:
- accuracy
- precision
pipeline_tag: token-classification
---

# tokenclass-wnut

This model is a fine-tuned version of distilbert-base-uncased on the wnut_17 dataset. It achieves the following results on the evaluation set:

- Loss: 0.2858
- Precision: 0.4846
- Recall: 0.2632
- F1: 0.3411
- Accuracy: 0.9386
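
For quick inference, here is a minimal usage sketch with the Transformers pipeline API. The repo id is a placeholder, since the card does not state the published model path:

```python
from transformers import pipeline

# Hypothetical repo id; substitute the actual Hub path or a local
# checkpoint directory for this model.
ner = pipeline(
    "token-classification",
    model="Madhura/tokenclass-wnut",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Heading to San Francisco for the WNUT emerging-entities workshop"))
# Each prediction carries entity_group, score, word, start, and end offsets.
```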

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
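
The card does not include the training script itself; the following is a minimal sketch of a run matching the hyperparameters above, assuming the standard Transformers token-classification recipe (helper names such as tokenize_and_align are illustrative, not from the source):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

wnut = load_dataset("wnut_17")
label_list = wnut["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(label_list)
)

def tokenize_and_align(examples):
    # Tokenize pre-split words and align word-level tags to sub-word tokens;
    # special tokens and continuation pieces get the ignore index -100.
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous, ids = None, []
        for word_id in word_ids:
            if word_id is None or word_id == previous:
                ids.append(-100)
            else:
                ids.append(tags[word_id])
            previous = word_id
        labels.append(ids)
    tokenized["labels"] = labels
    return tokenized

tokenized_wnut = wnut.map(tokenize_and_align, batched=True)

args = TrainingArguments(
    output_dir="tokenclass-wnut",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    seed=42,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results below
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08, plus the linear LR schedule,
# are the TrainingArguments defaults, so no explicit flags are needed for them.

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_wnut["train"],
    eval_dataset=tokenized_wnut["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```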

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 213  | 0.2976          | 0.3873    | 0.1974 | 0.2615 | 0.9352   |
| No log        | 2.0   | 426  | 0.2858          | 0.4846    | 0.2632 | 0.3411 | 0.9386   |
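
The card does not say how these metrics were computed; for WNUT-style NER they are conventionally entity-level scores from seqeval, so the following compute_metrics hook is an assumption rather than the author's confirmed setup. It reuses label_list from the training sketch above and uses the load_metric API as it exists in Datasets 2.1.0:

```python
import numpy as np
from datasets import load_metric  # load_metric exists in Datasets 2.1.0; needs `pip install seqeval`

seqeval = load_metric("seqeval")

def compute_metrics(eval_pred):
    # Convert logits to predicted label ids, drop positions masked with -100,
    # then score entity-level precision/recall/F1 and token-level accuracy.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=2)
    true_preds = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```

Passing this as compute_metrics to the Trainer would produce per-epoch columns like those in the table above.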

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1