---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 314451
    num_examples: 5837
  - name: test
    num_bytes: 839852
    num_examples: 14560
  download_size: 345578
  dataset_size: 1154303
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Presupposed Taxonomies: Evaluating Neural Network Semantics (PreTENS) 

Original Paper: https://aclanthology.org/2022.semeval-1.29.pdf

This dataset comes from the SemEval-2022 shared task.

The PreTENS task focuses on semantic competence, with particular attention to evaluating language models on the recognition of appropriate taxonomic relations between two nominal arguments.

We collected the Italian portion of the original dataset, and more specifically only the first sub-task: **acceptability sentence classification**.

## Example

Here you can see the structure of a single sample in the dataset.

```json
{
  "text": string, # sample's text
  "label": int, # 0: non ha senso, 1: ha senso
}
```
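
To load the dataset with the 🤗 `datasets` library, a minimal sketch follows; the repository ID is a placeholder, replace it with the actual Hub path of this dataset.

```python
# Minimal loading sketch; "<org>/<pretens-it>" is a placeholder repository ID.
from datasets import load_dataset

dataset = load_dataset("<org>/<pretens-it>")  # placeholder, use the actual Hub path

print(dataset)               # DatasetDict with "train" and "test" splits
print(dataset["train"][0])   # e.g. {"text": "...", "label": 0}
```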

## Statistics

| PRETENS | Label 0 | Label 1 |
| :--------: | :----: | :----: |
| Training | 3029 | 2808 |
| Test | 7707 | 6853 |
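
The counts above can be reproduced directly from the loaded splits; here is a minimal sketch, reusing the placeholder repository ID from the example above.

```python
# Count label occurrences per split; the repository ID is the same placeholder as above.
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("<org>/<pretens-it>")  # placeholder, use the actual Hub path
for split in ("train", "test"):
    print(split, Counter(dataset[split]["label"]))
```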

## Proposed Prompts

Below we describe the prompts given to the model, over which we compute the perplexity score; the model's answer is the prompt with the lower perplexity (a minimal scoring sketch follows the prompt templates below).
Moreover, for each sub-task we define a description that is prepended to the prompts, so that the model understands the task.

Description of the task: "Indica se le seguenti frasi hanno senso.\n\n" ("Indicate whether the following sentences make sense.")

### Cloze Style:

Label (**non ha senso**): "{{text}}\nLa frase precedente non ha senso"

Label (**ha senso**): "{{text}}\nLa frase precedente ha senso"

### MCQA Style:

```txt
{{text}}\nDomanda: La frase precedente ha senso? Rispondi sì o no:
```
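
Below is a minimal sketch of the Cloze-style scoring described above: the prompt (task description plus filled template) with the lower perplexity gives the predicted label. The checkpoint name is only an illustrative assumption; any causal LM from the Hub can be used. Note that the results below were obtained with 15 in-context examples, while this sketch scores a single sentence zero-shot.

```python
# Cloze-style scoring sketch: pick the label whose completed prompt has the
# lower perplexity. The model checkpoint is an illustrative assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"  # assumed checkpoint, any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

DESCRIPTION = "Indica se le seguenti frasi hanno senso.\n\n"
TEMPLATES = {
    0: "{text}\nLa frase precedente non ha senso",
    1: "{text}\nLa frase precedente ha senso",
}

def perplexity(prompt: str) -> float:
    """Return the perplexity of the full prompt under the language model."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids yields the average next-token cross-entropy.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def classify(text: str) -> int:
    """Pick the label whose filled-in template has the lower perplexity."""
    scores = {
        label: perplexity(DESCRIPTION + template.format(text=text))
        for label, template in TEMPLATES.items()
    }
    return min(scores, key=scores.get)

print(classify("Mi piacciono i cani, e in generale gli animali."))
```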

## Results

The following results were obtained with Cloze-style prompting on several English and Italian-adapted LLMs.

| PRETENS | ACCURACY (15-shot) |
| :-----: | :--: |
| Gemma-2B | 53.5 |
| QWEN2-1.5B | 56.47 |
| Mistral-7B | 66.5 |
| ZEFIRO | 62 |
| Llama-3-8B | 72.34 |
| Llama-3-8B-IT | 65.58 |
| ANITA | 66.1 |

## Acknowledgements

We would like to thank the authors of this resource for publicly releasing such an intriguing benchmark.

Additionally, we extend our gratitude to the students of the [MNLP-2024 course](https://naviglinlp.blogspot.com/), whose first homework explored various interesting prompting strategies.

The original dataset is freely available for download [here](https://github.com/shammur/SemEval2022Task3).

## License

The data are released under the [MIT](https://opensource.org/license/mit) license.