---
license: apache-2.0
dataset_info:
- config_name: default
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 254697574
num_examples: 202364
- name: test
num_bytes: 1782208
num_examples: 1750
download_size: 138163772
dataset_size: 256479782
- config_name: demographics
features:
- name: user_id
dtype: string
- name: age_range
sequence: int64
- name: gender
dtype: string
- name: country
dtype: string
- name: languages
sequence: string
- name: dialects
sequence: string
splits:
- name: train
num_bytes: 202127
num_examples: 1456
download_size: 113702
dataset_size: 202127
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: demographics
data_files:
- split: train
path: demographics/train-*
annotations_creators:
- crowdsourced
- expert-generated
language:
- amh
- arb
- ary
- ars
- acq
- arz
- apc
- ben
- ceb
- dan
- deu
- ell
- eng
- eus
- fil
- fin
- fra
- gle
- guj
- hat
- hau
- hin
- hun
- ibo
- ind
- ita
- jav
- jpn
- kan
- kir
- kor
- kur
- lit
- mal
- mar
- mlg
- msa
- mya
- nep
- nld
- nso
- nya
- pan
- pes
- pol
- por
- pus
- rus
- sin
- sna
- snd
- som
- spa
- sqi
- srp
- sun
- swa
- swe
- tam
- tel
- tha
- tur
- ukr
- urd
- vie
- wol
- xho
- yor
- zho
- zul
language_creators:
- crowdsourced
- expert-generated
multilinguality:
- multilingual
pretty_name: Aya Dataset
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- other
task_ids: []
---
![Aya Header](https://huggingface.co/datasets/CohereForAI/aya_dataset/resolve/main/aya_header.png)
# Dataset Summary
The `Aya Dataset` is a multilingual instruction fine-tuning dataset curated by an open-science community via the [Aya Annotation Platform](https://aya.for.ai/) from Cohere For AI. The dataset contains a total of 204k human-annotated prompt-completion pairs along with the demographics data of the annotators.<br>
This dataset can be used to train, finetune, and evaluate multilingual LLMs.
- **Curated by:** Contributors of [Aya Open Science Initiative](https://aya.for.ai/).
- **Language(s):** 65 languages (71 including dialects & scripts).
- **License:** [Apache 2.0](https://opensource.org/license/apache-2-0)
- **Aya Datasets Family:**
| Name | Explanation |
|------|--------------|
| [aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) | Human-annotated multilingual instruction finetuning dataset, comprising over 204K instances across 65 languages. |
| [aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection) | Created by applying instruction-style templates from fluent speakers to 44 datasets, including translations of 19 instruction-style datasets into 101 languages, providing 513 million instances for various tasks.|
| [aya_evaluation_suite](https://huggingface.co/datasets/CohereForAI/aya_evaluation_suite) | A diverse evaluation set for multilingual open-ended generation, featuring 250 culturally grounded prompts in 7 languages, 200 translated prompts in 24 languages, and human-edited versions selected for cross-cultural relevance from English Dolly in 6 languages.|
# Dataset
The `Aya Dataset` comprises two types of data:
1. **Human Annotations:** Original annotations (brand new prompts and completions written by annotators) and re-annotations (human edits of automatically generated prompts and completions).
2. **Demographics Data:** Anonymized information for each annotator.
## Load with Datasets
To load this dataset (both the prompt-completion pairs and the annotator demographics) with `datasets`, install the library with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
# Load the annotations dataset
aya_dataset = load_dataset("CohereForAI/aya_dataset")
# Load the demographics dataset
aya_demographics = load_dataset("CohereForAI/aya_dataset", "demographics")
```
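As a quick sanity check after loading, the splits and an individual record can be inspected with the standard `datasets` API. The following is a minimal sketch that reuses the `aya_dataset` object loaded above:
```python
# Inspect the available splits and their sizes
print(aya_dataset)                      # DatasetDict with 'train' and 'test' splits
print(aya_dataset["train"].features)    # schema of the default config

# Look at a single prompt-completion pair
example = aya_dataset["train"][0]
print(example["language"], example["annotation_type"])
print(example["inputs"][:200])
```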
## Data Fields
### Human Annotations (Default)
The data fields are the same among all splits:
- `inputs`: Prompt or input to the language model.
- `targets`: Completion or output of the language model.
- `language`: The language of the `inputs` and `targets`.
- `language_code`: The ISO code for the language of the `inputs` and `targets`.
- `annotation_type`: The value denoting whether `inputs` and `targets` are 'original-annotations' or 're-annotations'.
- `user_id`: Unique identifier of the annotator who submitted the prompt-completion pair.
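These fields can be used, for example, to subset the training split by language or annotation type. The following is a minimal sketch using the standard `Dataset.filter` API; the Telugu language code and the 'original-annotations' value are illustrative choices:
```python
from datasets import load_dataset

aya_dataset = load_dataset("CohereForAI/aya_dataset")

# Keep only Telugu pairs that annotators wrote from scratch
# (language code and annotation type below are illustrative choices)
telugu_original = aya_dataset["train"].filter(
    lambda ex: ex["language_code"] == "tel"
    and ex["annotation_type"] == "original-annotations"
)
print(len(telugu_original))
```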
### Demographics Data
The data fields are the same among all splits:
- `user_id`: Unique identifier of the annotator who submitted the prompt-completion pair.
- `age_range`: Age range of the annotator, reported as a pair of integers (e.g., `[25, 35]`). Values range from 0 to 121.
- `gender`: Gender of the annotator. The values are 'male', 'female', 'prefer not to say', 'non-binary' and 'others'.
- `languages`: List of languages spoken by the annotator.
- `dialects`: Dialects reported by the annotator.
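Because the demographics config shares the `user_id` field with the default config, annotator metadata can be joined onto each prompt-completion pair. The following is a minimal sketch using pandas via `Dataset.to_pandas()`; the left-join and the columns selected for display are illustrative choices, not part of the dataset itself:
```python
from datasets import load_dataset

# Prompt-completion pairs and annotator demographics as pandas DataFrames
annotations = load_dataset("CohereForAI/aya_dataset", split="train").to_pandas()
demographics = load_dataset("CohereForAI/aya_dataset", "demographics", split="train").to_pandas()

# Left-join so pairs without a matching demographics record are kept
merged = annotations.merge(demographics, on="user_id", how="left")
print(merged[["language", "annotation_type", "gender", "country"]].head())
```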
## Data Splits
### Human Annotations (Default)
The following are the splits of the data:
| Split | No. of instances | Language Coverage |
|-------|------------------|-------------------|
| train | 202,364 | All |
| test | 1,750 | 7 ('Standard Arabic', 'Yoruba', 'Turkish', 'English', 'Simplified Chinese', 'Portuguese', 'Telugu')|
### Demographics Data
The following are the splits of the data:
| Split | No. of Instances |
|-------|------------------|
| train | 1,456 |
## Data Instances
### Human Annotations (Default)
An example of `train` looks as follows:
```json
{
"inputs": "What cultural events or festivals add vibrancy to Colombo's calendar...",
"targets": "Colombo's cultural calendar is adorned with diverse events and festivals that celebrate the city's rich tapestry of traditions...",
"language": "English",
"language_code": "eng",
"annotation_type": "original-annotations",
"user_id": "f0ff69570af705b75c5a0851883e..."
}
```
### Demographics Data
An example of `train` looks as follows:
```json
{
"user_id": "f0ff69570af705b75c5a0851883e...",
"age_range": [ 25, 35 ],
"gender": "female",
"languages": [ "English", "Hausa" ],
"dialects": [ "Hausa" ]
}
```
## Statistics
### Annotation Types
The following is the breakdown of original annotations and re-annotations in the final dataset.
| Type of Annotation | Instances |
|--------------------|-----------|
| Original Annotations | 138,844 |
| Re-Annotations | 65,270 |
| Total | 204,114|
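These counts can be reproduced from the released data by tallying the `annotation_type` field across both splits, for example with a sketch like the following (exact totals depend on the dataset version downloaded):
```python
from collections import Counter
from datasets import load_dataset

aya_dataset = load_dataset("CohereForAI/aya_dataset")

counts = Counter()
for split in ("train", "test"):
    counts.update(aya_dataset[split]["annotation_type"])

print(counts)                   # per-type counts
print(sum(counts.values()))     # total number of instances (204,114)
```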
### Languages
The dataset covers 65 languages: 28 high-resource, 12 mid-resource, and 31 low-resource languages. Details about the languages, dialects, and scripts included in the dataset are given below.
<details>
<summary> Languages Info </summary>
| ISO Code | Language | Resources |
|----------|----------|-----------|
| `amh` | Amharic | Low |
| `arb`, `ary`, `ars`, `acq`, `arz` & `apc` | Arabic (Standard, Moroccan, Najdi, Ta'izzi-Adeni, Egyptian & South Levantine) | High |
| `ben` | Bengali | Mid |
| `ceb` | Cebuano | Mid |
| `dan` | Danish | Mid |
| `deu` | German | High |
| `ell` | Greek | Mid |
| `eng` | English | High |
| `eus` | Basque | High |
| `fil` | Filipino | Mid |
| `fin` | Finnish | Mid |
| `fra` | French | High |
| `gle` | Irish | Low |
| `guj` | Gujarati | Low |
| `hat` | Haitian Creole | Low |
| `hau` | Hausa | Low |
| `hin` | Hindi | High |
| `hun` | Hungarian | High |
| `ibo` | Igbo | Low |
| `ind` | Indonesian | Mid |
| `ita` | Italian | High |
| `jav` | Javanese | Low |
| `jpn` | Japanese | High |
| `kan` | Kannada | Low |
| `kir` | Kyrgyz | Low |
| `kor` | Korean | Mid |
| `kur` | Kurdish | Low |
| `lit` | Lithuanian | Mid |
| `mal` | Malayalam | Low |
| `mar` | Marathi | Low |
| `mlg` | Malagasy | Low |
| `msa` | Malay | Mid |
| `mya` | Burmese | Low |
| `nep` | Nepali | Low |
| `nld` | Dutch | High |
| `nso` | Northern Sotho | Low |
| `nya` | Chichewa | Low |
| `pan` | Punjabi | Low |
| `pes` | Persian | High |
| `pol` | Polish | High |
| `por` | Portuguese | High |
| `pus` | Pashto | Low |
| `rus` | Russian | High |
| `sin` | Sinhala | Low |
| `sna` | Shona | Low |
| `snd` | Sindhi | Low |
| `som` | Somali | Low |
| `spa` | Spanish | High |
| `sqi` | Albanian | Low |
| `srp` | Serbian | High |
| `sun` | Sundanese | Low |
| `swa` | Swahili | Low |
| `swe` | Swedish | High |
| `tam` | Tamil | Mid |
| `tel` | Telugu | Low |
| `tha` | Thai | Mid |
| `tur` | Turkish | High |
| `ukr` | Ukrainian | Mid |
| `urd` | Urdu | Mid |
| `vie` | Vietnamese | High |
| `wol` | Wolof | Low |
| `xho` | Xhosa | Low |
| `yor` | Yorùbá | Low |
| `zho` | Chinese (Traditional & Simplified) | High |
| `zul` | Zulu | Low |
</details>
<br>
# Motivations & Intentions
- **Curation Rationale:** The curation effort employed an open-science approach to create a diverse instruction-style dataset through annotators across the globe, aiming for comprehensive representation across languages. The success of the effort, led by volunteers from diverse backgrounds, was driven in large part by their hope to bring meaningful NLP advancements to their own languages.
# Known Limitations
- **Language and dialect coverage:** The dataset covers only a small fraction of the world's linguistic diversity, with 93% of the world's languages not represented. It also faces challenges in distinguishing between languages and dialects, lacks coverage for many regional dialects, and excludes programming languages.
- **Uneven distribution of contributions:** Annotation contributions are unevenly distributed, with a 'long tail' of annotators making only one or two contributions. This leads to potential dataset imbalances across languages and a lack of diversity within certain language annotations.
- **Cultural and Personal Bias:** In the dataset, certain languages have limited representation due to a few dominant annotators, potentially leading to a narrow viewpoint and skewed distribution of content, particularly towards certain domains like news.
- **Gendered Pronouns:** Many of the languages in the Aya Dataset either contain only explicitly gendered pronouns (e.g., Arabic) or lack gender-neutral third-person pronouns (e.g., Estonian).
- **Formality Distinctions:** The dataset encompasses languages with diverse formality distinctions, involving honorifics and situational choices in pronoun use, reflecting varying levels of standardization influenced by regional, cultural, and identity factors.
- **Toxic or Offensive Speech:** The Aya Annotation Platform lacked specific flags for toxic speech, relying on human verification and peer review to mitigate offensive content, but there's no guarantee that all potentially offensive data points were removed during the annotation process.
- **Accounting for mislabeled data:** The Aya Annotation Platform lacks re-labeling capabilities, leading to potential mislabeled data in the Aya Dataset, including instances of incorrect language assignments and non-compliance with instruction-style formatting.
# Additional Information
## Provenance
- **Methods Used:** Crowd-sourced through volunteer annotations, followed by a quality assessment phase in which samples from the dataset were checked.
- **Methodology Details:**
- *Source:* Original annotations and edits of open-source NLP datasets
- *Platform:* [Aya Annotation Platform](https://aya.for.ai/)
- *Dates of Collection:* Jun 2023 - Dec 2023
## Dataset Version and Maintenance
- **Maintenance Status:** Actively Maintained
- **Version Details:**
- *Current version:* 1.0
- *Last Update:* 12/2023
- *First Release:* 02/2024
- **Maintenance Plan:** Updates will be periodically made available based on volunteer contributions.
## Authorship
- **Publishing Organization:** [Cohere For AI](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
- **Contact Details:** https://aya.for.ai/
## Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
## Citation Information
```bibtex
@misc{singh2024aya,
title={Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning},
author={Shivalika Singh and Freddie Vargus and Daniel Dsouza and Börje F. Karlsson and Abinaya Mahendiran and Wei-Yin Ko and Herumb Shandilya and Jay Patel and Deividas Mataciunas and Laura OMahony and Mike Zhang and Ramith Hettiarachchi and Joseph Wilson and Marina Machado and Luisa Souza Moura and Dominik Krzemiński and Hakimeh Fadaei and Irem Ergün and Ifeoma Okoh and Aisha Alaagib and Oshan Mudannayake and Zaid Alyafeai and Vu Minh Chien and Sebastian Ruder and Surya Guthikonda and Emad A. Alghamdi and Sebastian Gehrmann and Niklas Muennighoff and Max Bartolo and Julia Kreutzer and Ahmet Üstün and Marzieh Fadaee and Sara Hooker},
year={2024},
eprint={2402.06619},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |