Update README.md
README.md CHANGED
@@ -101,6 +101,7 @@ When fine-tuned on those datasets, this model (the first row of the table) achie
 |Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
 |TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |94.90 |**95.49** |**76.07** |
 
+To conclude, this ELECTRA model loses to the other models but is still fairly competitive with our roberta-large models, considering that it has 110M parameters while the roberta-large models have 355M.
 
 ## Team Members
 
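As a side note on the parameter counts cited in the added line, they can be sanity-checked by loading the checkpoints with the `transformers` library. A minimal sketch follows; the ELECTRA repo id used below is an assumption (this commit does not name the checkpoint), so substitute the actual model id from the README.

```python
# Minimal sketch: compare parameter counts of the models discussed above.
# Assumes the Hugging Face `transformers` library is installed. The ELECTRA
# repo id is an assumed placeholder -- replace it with the id from the README.
from transformers import AutoModel

for repo_id in [
    "Finnish-NLP/electra-base-discriminator-finnish",  # assumed id, ~110M params
    "Finnish-NLP/roberta-large-finnish",               # ~355M params
]:
    model = AutoModel.from_pretrained(repo_id)
    print(f"{repo_id}: {model.num_parameters() / 1e6:.0f}M parameters")
```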