Update README.md
README.md CHANGED
@@ -140,9 +140,16 @@ For further testing our decoder, in addition to the testing data described above
 | **LLaMA-2 (English)** | **0.2458** | 0.2903 | 0.0913 | 0.1034 |
 | **LLaMA-2 Chat (English)** | 0.2231 | **0.2959** | 0.5546 | 0.1750 |
 
-<br>
 
 
+In comparison with other decoders of the same dimension, namely Sabiá 1.5B, Gervásio shows superior
+or competitive performance on the tasks in PTBR, while being the sole decoder of 1.5B dimension for the PTPT
+variant of Portuguese, and thus the state of the art
+in this respect at the time of its publishing. For further evaluation data,
+see the respective [publication](https://arxiv.org/abs/2402.18766).
+
+<br>
+
 # How to use
 
 You can use this model directly with a pipeline for causal language modeling:
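The hunk ends at the sentence introducing the README's usage example, which falls outside the diff context. For reference, a minimal sketch of such a pipeline call with the Hugging Face `transformers` library could look like the following; the model identifier is a placeholder, since the actual repository name is not shown in this hunk.

```python
# Minimal sketch of the kind of usage the README describes, assuming the
# Hugging Face transformers library; the model id below is a placeholder.
from transformers import pipeline

generator = pipeline(
    "text-generation",          # pipeline task for causal language modeling
    model="PORTULAN/gervasio",  # placeholder id; substitute the actual repository name
)

result = generator(
    "A língua portuguesa é",    # example Portuguese prompt
    max_new_tokens=50,
    do_sample=True,
)
print(result[0]["generated_text"])
```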