espejelomar committed
Commit 4b98b98 · 1 Parent(s): 7a714c3

update model card README.md

Files changed (1): README.md +9 −11
README.md CHANGED

```diff
@@ -1,7 +1,6 @@
 ---
 license: apache-2.0
 tags:
-- text-classification
 - generated_from_trainer
 datasets:
 - glue
@@ -23,10 +22,10 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.8578431372549019
+      value: 0.8553921568627451
     - name: F1
       type: f1
-      value: 0.8982456140350877
+      value: 0.897391304347826
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -34,11 +33,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # platzi-distilroberta-base-mrpc-glue-omar-espejel
 
-This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
+This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6574
-- Accuracy: 0.8578
-- F1: 0.8982
+- Loss: 0.6342
+- Accuracy: 0.8554
+- F1: 0.8974
 
 ## Model description
 
@@ -63,15 +62,14 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 4
+- num_epochs: 3
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
-| 0.5047        | 1.09  | 500  | 0.7064          | 0.8284   | 0.8768 |
-| 0.3665        | 2.18  | 1000 | 0.6574          | 0.8578   | 0.8982 |
-| 0.2025        | 3.27  | 1500 | 0.7720          | 0.8505   | 0.8932 |
+| 0.5004        | 1.09  | 500  | 0.6371          | 0.8113   | 0.8715 |
+| 0.33          | 2.18  | 1000 | 0.6342          | 0.8554   | 0.8974 |
 
 
 ### Framework versions
```
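As a side note, the long decimals in both the old and new accuracy values look like exact fractions over the GLUE MRPC validation split, which contains 408 sentence pairs. A small stdlib-Python sanity check of that reading (the 408-pair split size is an assumption about the dataset, not something stated in the diff):

```python
# Sanity check (assumption: the model was evaluated on the GLUE MRPC
# validation split, which has 408 sentence pairs).
n_eval = 408

old_acc = 350 / n_eval  # ≈ 0.8578431372549019, the value this commit removes
new_acc = 349 / n_eval  # ≈ 0.8553921568627451, the value this commit adds

print(f"old: {old_acc}")
print(f"new: {new_acc}")
```

If that assumption holds, the accuracy change in this commit corresponds to exactly one fewer correct prediction (350 → 349 out of 408) after retraining with 3 epochs instead of 4.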