|
--- |
|
license: cc-by-4.0 |
|
task_categories: |
|
- text-classification |
|
- zero-shot-classification |
|
task_ids: |
|
- multi-label-classification |
|
language: |
|
- en |
|
tags: |
|
- Human Values |
|
- Value Detection |
|
- Multi-Label |
|
pretty_name: Human Value Detection Dataset |
|
size_categories: |
|
- 1K<n<10K |
|
--- |
|
# The Touché23-ValueEval Dataset |
|
|
|
## Table of Contents |
|
- [Table of Contents](#table-of-contents) |
|
- [Dataset Description](#dataset-description) |
|
- [Dataset Summary](#dataset-summary) |
|
- [Dataset Usage](#dataset-usage) |
|
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) |
|
- [Languages](#languages) |
|
- [Dataset Structure](#dataset-structure) |
|
- [Argument Instances](#argument-instances) |
|
- [Metadata Instances](#metadata-instances) |
|
- [Additional Information](#additional-information) |
|
- [Dataset Curators](#dataset-curators) |
|
- [Licensing Information](#licensing-information) |
|
- [Citation Information](#citation-information) |
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** [https://webis.de/data/touche23-valueeval.html](https://webis.de/data/touche23-valueeval.html) |
|
- **Repository:** [Zenodo](https://doi.org/10.5281/zenodo.6814563) |
|
- **Paper:** [The Touché23-ValueEval Dataset for Identifying Human Values behind Arguments.](https://webis.de/downloads/publications/papers/mirzakhmedova_2023a.pdf) |
|
- **Leaderboard:** [https://touche.webis.de/semeval23/touche23-web/index.html#results](https://touche.webis.de/semeval23/touche23-web/index.html#results)
|
- **Point of Contact:** [Webis Group](https://webis.de/people.html) |
|
|
|
### Dataset Summary |
|
|
|
The Touché23-ValueEval Dataset comprises 9324 arguments from six different sources. An argument's source is indicated by the first letter of its `Argument ID`:
|
- `A`: [IBM-ArgQ-Rank-30kArgs](https://research.ibm.com/haifa/dept/vst/debating_data.shtml#Argument%20Quality) |
|
- `C`: the Chinese question-answering website [Zhihu](https://www.zhihu.com)
|
- `D`: [Group Discussion Ideas (GD IDEAS)](https://www.groupdiscussionideas.com)
|
- `E`: [The Conference on the Future of Europe](https://futureu.europa.eu)
|
- `F`: Contribution by the language.ml lab (Doratossadat, Omid, Mohammad, Ehsaneddin) [1]:
|
  arguments from the "Nahj al-Balagha" [2] and "Ghurar al-Hikam wa Durar al-Kalim" [3]
|
- `G`: [The New York Times](https://www.nytimes.com)
|
|
|
The annotated labels are based on the value taxonomy published in |
|
[Identifying the Human Values behind Arguments](https://webis.de/publications.html#kiesel_2022b) (Kiesel et al. 2022) at ACL'22. |
|
|
|
[1] https://language.ml |
|
[2] https://en.wikipedia.org/wiki/Nahj_al-Balagha |
|
[3] https://en.wikipedia.org/wiki/Ghurar_al-Hikam_wa_Durar_al-Kalim |
|
|
|
### Dataset Usage |
|
|
|
The default configuration name is `main`. |
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("webis/Touche23-ValueEval") |
|
print(dataset['train'].info.description) |
|
for argument in iter(dataset['train']): |
|
print(f"{argument['Argument ID']}: {argument['Stance']} '{argument['Conclusion']}': {argument['Premise']}") |
|
``` |
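
All configurations listed in the sections below can also be discovered programmatically. A minimal example using the standard `datasets` helper (assuming a `datasets` version that still supports this dataset's loading script):

```python
from datasets import get_dataset_config_names

# List all configuration names of the dataset, e.g. "main", "zhihu", "ibm-meta", ...
print(get_dataset_config_names("webis/Touche23-ValueEval"))
```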
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
Human Value Detection |
|
|
|
### Languages |
|
|
|
The [Argument Instances](#argument-instances) are monolingual; they include only English (mostly en-US) documents.
|
The [Metadata Instances](#metadata-instances) for some dataset parts additionally state the arguments in their original language and phrasing. |
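
For illustration, the original phrasing can be looked up by joining an argument configuration with its metadata configuration on `Argument ID` (both are described below). A minimal sketch using the `zhihu` and `zhihu-meta` configurations:

```python
from datasets import load_dataset

# Hedged sketch: retrieve the original Chinese phrasing of a `zhihu` argument
# via the `zhihu-meta` configuration (field names as documented below).
arguments = load_dataset("webis/Touche23-ValueEval", name="zhihu", split="validation")
metadata = load_dataset("webis/Touche23-ValueEval", name="zhihu-meta", split="meta")

# Index the metadata rows by `Argument ID` for a simple in-memory join.
by_id = {row["Argument ID"]: row for row in metadata}

argument = arguments[0]
meta = by_id.get(argument["Argument ID"])
if meta is not None:
    print(argument["Premise"])       # English premise
    print(meta["Premise Chinese"])   # original Chinese premise
```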
|
|
|
## Dataset Structure |
|
|
|
### Argument Instances |
|
|
|
Each argument instance has the following attributes: |
|
- `Argument ID`: The unique identifier for the argument within the dataset |
|
- `Conclusion`: Conclusion text of the argument |
|
- `Stance`: Stance of the `Premise` towards the `Conclusion`; one of "in favor of", "against"
|
- `Premise`: Premise text of the argument |
|
- `Labels`: For each example, an array of 1s (the argument resorts to the value) and 0s (the argument does not resort to the value). The order is the same as in the original files.
|
|
|
Additionally, the labels are separated into *value-categories*, i.e., the level 2 labels of the value taxonomy (Kiesel et al. 2022), and *human values*, i.e., the level 1 labels of the value taxonomy.
|
This distinction is also reflected in the configuration names: |
|
- `<config>`: As the [Task](https://touche.webis.de/semeval23/touche23-web/) is focused mainly on the detection of value-categories, |
|
each base configuration ([listed below](#p-list-base-configs)) has the 20 value-categories as labels: |
|
```python |
|
labels = ["Self-direction: thought", "Self-direction: action", "Stimulation", "Hedonism", "Achievement", "Power: dominance", "Power: resources", "Face", "Security: personal", "Security: societal", "Tradition", "Conformity: rules", "Conformity: interpersonal", "Humility", "Benevolence: caring", "Benevolence: dependability", "Universalism: concern", "Universalism: nature", "Universalism: tolerance", "Universalism: objectivity"] |
|
``` |
|
- `<config>-level1`: The 54 human values from level 1 of the value taxonomy are not used for the 2023 task
|
(except for the annotation), but are still listed here, as some might find them useful for understanding the value
|
categories. Their order is also the same as in the original files. For more details see the [value-categories](#metadata-instances) configuration. |
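
For illustration, the 0/1 vector in `Labels` can be mapped back to label names. A minimal sketch, assuming the `main` configuration and the label order listed above:

```python
from datasets import load_dataset

# The 20 value categories in the order listed above (base configurations).
value_categories = [
    "Self-direction: thought", "Self-direction: action", "Stimulation", "Hedonism",
    "Achievement", "Power: dominance", "Power: resources", "Face",
    "Security: personal", "Security: societal", "Tradition", "Conformity: rules",
    "Conformity: interpersonal", "Humility", "Benevolence: caring",
    "Benevolence: dependability", "Universalism: concern", "Universalism: nature",
    "Universalism: tolerance", "Universalism: objectivity",
]

dataset = load_dataset("webis/Touche23-ValueEval", split="train")
argument = dataset[0]
# Keep the names of the value categories the argument resorts to (label == 1).
resorted = [name for name, label in zip(value_categories, argument["Labels"]) if label == 1]
print(f"{argument['Argument ID']}: {resorted}")
```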
|
|
|
<p id="p-list-base-configs">The configuration names (as replacements for <code>&lt;config&gt;</code>) in this dataset are:</p>
|
|
|
- `main`: 8865 arguments (sources: `A`, `D`, `E`) with splits `train`, `validation`, and `test` (default configuration name) |
|
```python |
|
dataset_main_train = load_dataset("webis/Touche23-ValueEval", split="train") |
|
dataset_main_validation = load_dataset("webis/Touche23-ValueEval", split="validation") |
|
dataset_main_test = load_dataset("webis/Touche23-ValueEval", split="test") |
|
``` |
|
- `nahjalbalagha`: 279 arguments (source: `F`) with split `test` |
|
```python |
|
dataset_nahjalbalagha_test = load_dataset("webis/Touche23-ValueEval", name="nahjalbalagha", split="test") |
|
``` |
|
- `nyt`: 80 arguments (source: `G`) with split `test` |
|
```python |
|
dataset_nyt_test = load_dataset("webis/Touche23-ValueEval", name="nyt", split="test") |
|
``` |
|
- `zhihu`: 100 arguments (source: `C`) with split `validation` |
|
```python |
|
dataset_zhihu_validation = load_dataset("webis/Touche23-ValueEval", name="zhihu", split="validation") |
|
``` |
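
Following the `<config>`/`<config>-level1` naming scheme described above, each base configuration should also be loadable with the 54 level 1 labels, e.g. `main-level1` (a name inferred from the scheme, not separately listed here). A minimal sketch comparing the label granularity:

```python
from datasets import load_dataset

# Hedged sketch: compare the 20 value-category labels of the base configuration
# with the 54 human-value labels of its `-level1` counterpart.
main = load_dataset("webis/Touche23-ValueEval", name="main", split="train")
main_level1 = load_dataset("webis/Touche23-ValueEval", name="main-level1", split="train")

print(len(main[0]["Labels"]))         # expected: 20 value categories
print(len(main_level1[0]["Labels"]))  # expected: 54 human values
```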
|
|
|
Please note that, due to copyright reasons, there is currently no direct download link to the arguments taken from The New York Times.

Accessing any of the `nyt` or `nyt-level1` configurations will therefore use the specifically created
|
[nyt-downloader program](https://github.com/touche-webis-de/touche-code/tree/main/semeval23/human-value-detection/nyt-downloader) |
|
to create and access the arguments locally. See the program's |
|
[README](https://github.com/touche-webis-de/touche-code/blob/main/semeval23/human-value-detection/nyt-downloader/README.md) |
|
for further details. |
|
|
|
### Metadata Instances |
|
|
|
The following lists all configuration names for metadata. Each configuration only has a single split named `meta`. |
|
- `ibm-meta`: Each row corresponds to one argument (IDs starting with `A`) from the [IBM-ArgQ-Rank-30kArgs](https://research.ibm.com/haifa/dept/vst/debating_data.shtml#Argument%20Quality) dataset
|
- `Argument ID`: The unique identifier for the argument |
|
- `WA`: the quality label according to the weighted-average scoring function |
|
- `MACE-P`: the quality label according to the MACE-P scoring function |
|
- `stance_WA`: the stance label according to the weighted-average scoring function |
|
- `stance_WA_conf`: the confidence in the stance label according to the weighted-average scoring function |
|
```python |
|
dataset_ibm_metadata = load_dataset("webis/Touche23-ValueEval", name="ibm-meta", split="meta") |
|
``` |
|
- `zhihu-meta`: Each row corresponds to one argument (IDs starting with `C`) from the Chinese question-answering website [Zhihu](https://www.zhihu.com) |
|
- `Argument ID`: The unique identifier for the argument |
|
  - `Conclusion Chinese`: The original Chinese conclusion statement
|
  - `Premise Chinese`: The original Chinese premise statement
|
- `URL`: Link to the original statement the argument was taken from |
|
```python |
|
dataset_zhihu_metadata = load_dataset("webis/Touche23-ValueEval", name="zhihu-meta", split="meta") |
|
``` |
|
- `gdi-meta`: Each row corresponds to one argument (IDs starting with `D`) from [GD IDEAS](https://www.groupdiscussionideas.com/) |
|
- `Argument ID`: The unique identifier for the argument |
|
- `URL`: Link to the topic the argument was taken from |
|
```python |
|
dataset_gdi_metadata = load_dataset("webis/Touche23-ValueEval", name="gdi-meta", split="meta") |
|
``` |
|
- `cofe-meta`: Each row corresponds to one argument (IDs starting with `E`) from [the Conference on the Future of Europe](https://futureu.europa.eu)
|
- `Argument ID`: The unique identifier for the argument |
|
- `URL`: Link to the comment the argument was taken from |
|
```python |
|
dataset_cofe_metadata = load_dataset("webis/Touche23-ValueEval", name="cofe-meta", split="meta") |
|
``` |
|
- `nahjalbalagha-meta`: Each row corresponds to one argument (IDs starting with `F`). This configuration contains information on the 279 arguments in `nahjalbalagha` (or `nahjalbalagha-level1`)
|
and 1047 additional arguments that have not yet been labeled. This data was contributed by the language.ml lab.
|
- `Argument ID`: The unique identifier for the argument |
|
- `Conclusion Farsi`: Conclusion text of the argument in Farsi |
|
- `Stance Farsi`: Stance of the `Premise` towards the `Conclusion`, in Farsi |
|
- `Premise Farsi`: Premise text of the argument in Farsi |
|
- `Conclusion English`: Conclusion text of the argument in English (translated from Farsi) |
|
- `Stance English`: Stance of the `Premise` towards the `Conclusion`; one of "in favor of", "against" |
|
- `Premise English`: Premise text of the argument in English (translated from Farsi) |
|
  - `Source`: Source text of the argument; one of "Nahj al-Balagha" or "Ghurar al-Hikam wa Durar al-Kalim"; their Farsi translations were used
|
  - `Method`: How the premise was extracted from the source; one of "extracted" (directly taken) or "deduced"; the conclusions are deduced
|
```python |
|
dataset_nahjalbalagha_metadata = load_dataset("webis/Touche23-ValueEval", name="nahjalbalagha-meta", split="meta") |
|
``` |
|
- `nyt-meta`: Each row corresponds to one argument (IDs starting with `G`) from [The New York Times](https://www.nytimes.com) |
|
- `Argument ID`: The unique identifier for the argument |
|
- `URL`: Link to the article the argument was taken from |
|
- `Internet Archive timestamp`: Timestamp of the article's version in the Internet Archive that was used |
|
```python |
|
dataset_nyt_metadata = load_dataset("webis/Touche23-ValueEval", name="nyt-meta", split="meta") |
|
``` |
|
- `value-categories`: Contains a single JSON entry describing the level 2 and level 1 values of the value taxonomy:
|
``` |
|
{ |
|
"<value category>": { |
|
"<level 1 value>": [ |
|
"<exemplary effect a corresponding argument might target>", |
|
... |
|
], ... |
|
}, ... |
|
} |
|
``` |
|
As this configuration contains just a single entry, an example usage could be: |
|
```python |
|
value_categories = load_dataset("webis/Touche23-ValueEval", name="value-categories", split="meta")[0] |
|
``` |
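
Assuming the loaded entry mirrors the JSON structure shown above, the taxonomy can be traversed like this (a sketch, not part of the official tooling):

```python
from datasets import load_dataset

# Hedged sketch: walk the value taxonomy from level 2 (value categories)
# down to level 1 (human values) and their exemplary effects.
value_categories = load_dataset("webis/Touche23-ValueEval", name="value-categories", split="meta")[0]

for category, level1_values in value_categories.items():
    print(category)  # level 2 value category
    for value, effects in level1_values.items():
        print(f"  {value} ({len(effects)} exemplary effects)")
```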
|
|
|
## Additional Information |
|
|
|
### Dataset Curators |
|
|
|
[More Information Needed] |
|
|
|
### Licensing Information |
|
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) |
|
|
|
### Citation Information |
|
|
|
``` |
|
@Article{mirzakhmedova:2023a, |
|
  author = {Nailia Mirzakhmedova and Johannes Kiesel and Milad Alshomary and Maximilian Heinrich and Nicolas Handke
|
  and Xiaoni Cai and Valentin Barriere and Doratossadat Dastgheib and Omid Ghahroodi and {Mohammad Ali} Sadraei
|
and Ehsaneddin Asgari and Lea Kawaletz and Henning Wachsmuth and Benno Stein}, |
|
doi = {10.48550/arXiv.2301.13771}, |
|
journal = {CoRR}, |
|
month = jan, |
|
publisher = {arXiv}, |
|
title = {{The Touch{\'e}23-ValueEval Dataset for Identifying Human Values behind Arguments}}, |
|
volume = {abs/2301.13771}, |
|
year = 2023 |
|
} |
|
``` |