Update README.md
The PreTENS task aims at focusing on semantic competence with specific attention
We collected the Italian part of the original dataset, and more specifically only the first sub-task: **acceptability sentence classification**.
## Example
Here you can see the structure of a single sample in the dataset.
```json
{
    "text": string, # text of the sentence
    "label": int,   # 0: Ambiguo, 1: Non Ambiguo
}
```
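
As a quick illustration (a hypothetical helper, not part of the dataset tooling), a sample shaped like the schema above can be checked in Python; the sample values below are illustrative, not taken from the dataset:

```python
# Minimal sketch: validate that a dict matches the sample schema above.
def validate_sample(sample: dict) -> bool:
    """Check that a sample has a string 'text' and an int 'label' in {0, 1}."""
    return (
        isinstance(sample.get("text"), str)
        and sample.get("label") in (0, 1)
    )

# Illustrative sample (not from the dataset).
sample = {"text": "Mi piacciono gli alberi, e in particolare le querce.", "label": 1}
print(validate_sample(sample))  # True
```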

## Statistics

Training: -

Test: -

## Proposed Prompts

Here we describe the prompts given to the model, over which we compute the perplexity score; as the model's answer we choose the prompt with the lower perplexity.
Moreover, for each subtask we define a description that is prepended to the prompts, which the model needs in order to understand the task.
Description of the task: ""
Label (**Ambiguo**): ""
Label (**Non Ambiguo**): ""
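
The perplexity-based selection described above can be sketched as follows. This is a minimal illustration that assumes per-token log-probabilities of each description + prompt have already been obtained from the language model; the function names and the numbers are hypothetical, not real model output:

```python
import math

def perplexity(token_log_probs):
    # Perplexity = exp(-average per-token log-likelihood).
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

def choose_label(log_probs_per_label):
    # log_probs_per_label maps each label to the per-token log-probs
    # of its description + prompt; the predicted label is the one
    # whose prompt has the LOWEST perplexity.
    return min(log_probs_per_label, key=lambda lbl: perplexity(log_probs_per_label[lbl]))

scores = {
    "Ambiguo": [-2.1, -1.8, -2.5],       # illustrative numbers only
    "Non Ambiguo": [-0.9, -1.1, -0.7],
}
print(choose_label(scores))  # Non Ambiguo
```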
## Some Results

| PreTENS | ACCURACY |
| :--------: | :------: |
| Mistral-7B | 0 |
| ZEFIRO | 0 |
| Llama-3 | 0 |
| ANITA | 0 |