datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---
Jay-Rajput/DIS_IPL_Preds | Jay-Rajput | "2024-05-27T06:26:15Z" | 12,494 | 0 | [
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | null | "2024-04-06T09:18:15Z" | ---
configs:
- config_name: predictions
data_files: predictions/*.json
---
---
license: apache-2.0
---
|
jacobbieker/eumetsat-cloudmask-rss | jacobbieker | "2024-02-28T20:56:15Z" | 12,440 | 0 | [
"license:mit",
"doi:10.57967/hf/1642",
"region:us"
] | null | "2024-01-12T18:51:32Z" | ---
license: mit
---
|
IGNF/PASTIS-HD | IGNF | "2024-10-04T13:39:24Z" | 12,345 | 9 | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"license:etalab-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"arxiv:2107.07933",
"arxiv:2112.07558",
"arxiv:2404.08351",
"region:us",
"remote sensing",
"Agricultural"
] | [
"image-classification",
"image-segmentation"
] | "2024-04-02T14:58:15Z" | ---
license: etalab-2.0
task_categories:
- image-classification
- image-segmentation
tags:
- remote sensing
- Agricultural
size_categories:
- 1K<n<10K
---
# 🌱 PASTIS-HD 🌿 Panoptic Agricultural Satellite TIme Series: optical time series, radar time series and very high resolution imagery
[PASTIS](https://github.com/VSainteuf/pastis-benchmark) is a benchmark dataset for panoptic and semantic segmentation of agricultural parcels from satellite time series.
It contains 2,433 patches within the French metropolitan territory with panoptic annotations (instance index + semantic label for each pixel).
Each patch is a Sentinel-2 multispectral image time series of variable length.
The dataset was extended in 2021 with aligned radar Sentinel-1 observations for all 2,433 patches.
For each patch, it contains approximately 70 observations of Sentinel-1 in ascending orbit and 70 observations in descending orbit. Each Sentinel-1 observation is assembled into a 3-channel image: vertical polarization (VV), horizontal polarization (VH), and the ratio of vertical over horizontal polarization (VV/VH). This extension is named PASTIS-R.
We extend PASTIS with aligned very high resolution satellite images from the SPOT 6-7 constellation for all 2,433 patches, in addition to the Sentinel-1 and 2 time series.
The images are resampled to a 1m resolution and converted to 8 bits.
This enhancement significantly improves the dataset's spatial content, providing more granular information for agricultural parcel segmentation.
**PASTIS-HD** can be used to evaluate multi-modal fusion methods (with optical time series, radar time series and VHR images) for parcel-based classification, semantic segmentation, and panoptic segmentation.
## Dataset in numbers
🛰️ Sentinel 2 | 🛰️ Sentinel 1 | 🛰️ **SPOT 6-7 VHR** | 🗻 Annotations
:-------------------------------------------- | :-------------------------------------------------- | :------------------------------| :------------------------------
➡️ 2,433 time series | ➡️ 2 × 2,433 time series | ➡️ **2,433 images** | 124,422 individual parcels
➡️ 10m / pixel | ➡️ 10m / pixel | ➡️ **1.5m / pixel** | covers ~4,000 km²
➡️ 128x128 pixels / images | ➡️ 128x128 pixels / images | ➡️ **1280x1280 pixels / images** | over 2B pixels
➡️ 38-61 acquisitions / series | ➡️ ~ 70 acquisitions / series | ➡️ **One observation** | 18 crop types
➡️ 10 spectral bands |➡️ 2 spectral bands | ➡️ **3 spectral bands** |
⚠️ The **SPOT data are natively 1.5m resolution**, but we over-sampled them to 1m to align them pixel-perfectly with the Sentinel data.
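The pixel-perfect alignment can be checked with a quick footprint calculation: a 128×128 Sentinel patch at 10 m/pixel spans the same ground extent as a 1280×1280 SPOT image at 1 m/pixel.

```python
# Footprint check: both modalities cover the same 1280 m x 1280 m patch.
s2_extent_m = 128 * 10    # Sentinel-1/2: 128 px at 10 m/px
spot_extent_m = 1280 * 1  # SPOT 6-7 (over-sampled): 1280 px at 1 m/px
assert s2_extent_m == spot_extent_m == 1280
```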
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6582b7dd75754a803e484487/sxmnCAGs0p2u_PALLsqyN.jpeg)
## Data loading
The GitHub repository associated with this dataset provides a PyTorch dataset class, [from the OmniSat repository](https://github.com/gastruc/OmniSat/blob/main/src/data/Pastis.py), that can be readily used to load data for training models on PASTIS-HD.
The time series contained in PASTIS have variable lengths.
The Sentinel-1 and 2 time series are stored as numpy arrays, and the SPOT images are in TIFF format.
The annotations are stored as numpy arrays as well.
⚠️ Unlike the labels folder, the S2 and S1 folders contain more than 2,433 files: some patches are not labelled and are not used for training.
The relevant information can be found in the metadata.geojson file (with 2,433 entries), which is used as an index by the dataloader.
### Remark about the folder names
⚠️ The **DATA_S1A** folder contains the Sentinel-1 **ascending** images, whereas the **DATA_S1D** folder contains the Sentinel-1 **descending** images.
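As a minimal sketch of how a dataloader might resolve these folders, the snippet below maps a patch id to its per-modality files. The folder names follow the card, but the file-name patterns are assumptions and should be checked against the actual archive and the OmniSat `Pastis.py` class.

```python
import os

def patch_files(root: str, patch_id: int) -> dict:
    """Hypothetical mapping from a patch id to its per-modality files.
    Folder names follow the card; file-name patterns are assumed."""
    return {
        "s2": os.path.join(root, "DATA_S2", f"S2_{patch_id}.npy"),
        "s1_ascending": os.path.join(root, "DATA_S1A", f"S1A_{patch_id}.npy"),
        "s1_descending": os.path.join(root, "DATA_S1D", f"S1D_{patch_id}.npy"),
        "spot": os.path.join(root, "DATA_SPOT", f"SPOT_{patch_id}.tiff"),
        "labels": os.path.join(root, "ANNOTATIONS", f"TARGET_{patch_id}.npy"),
    }

files = patch_files("PASTIS-HD", 10000)
```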
## Ground Truth Annotations
The agricultural parcels are grouped into 18 different crop classes, as shown in the table below. The background class corresponds to non-agricultural land, and the void label is used for parcels that are mostly outside their patch.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6582b7dd75754a803e484487/aHQB0uq4cqBX-7hkCkpFn.png)
Additional information about the dataset can be found in the documentation/pastis-documentation.pdf document.
## Credits
- The Sentinel imagery used in PASTIS was retrieved from [THEIA](https://www.theia.land.fr):
"Value-added data processed by the CNES for the Theia (www.theia.land.fr) data cluster using Copernicus data.
The treatments use algorithms developed by Theia's Scientific Expertise Centres."
- The annotations used in PASTIS stem from the French [land parcel identification system](https://www.data.gouv.fr/en/datasets/registre-parcellaire-graphique-rpg-contours-des-parcelles-et-ilots-culturaux-et-leur-groupe-de-cultures-majoritaire/) produced
by IGN.
- The SPOT images are open data thanks to the Dataterra Dinamis initiative, under the ["Couverture France DINAMIS"](https://dinamis.data-terra.org/opendata/) program.
## References
If you use PASTIS please cite the [related paper](https://arxiv.org/abs/2107.07933):
```
@article{garnot2021panoptic,
title={Panoptic Segmentation of Satellite Image Time Series
with Convolutional Temporal Attention Networks},
author={Sainte Fare Garnot, Vivien and Landrieu, Loic},
journal={ICCV},
year={2021}
}
```
For the PASTIS-R optical-radar fusion dataset, please also cite [this paper](https://arxiv.org/abs/2112.07558v1):
```
@article{garnot2021mmfusion,
title = {Multi-modal temporal attention models for crop mapping from satellite time series},
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
year = {2022},
doi = {https://doi.org/10.1016/j.isprsjprs.2022.03.012},
author = {Vivien {Sainte Fare Garnot} and Loic Landrieu and Nesrine Chehata},
}
```
For PASTIS-HD, which combines the three modalities (optical and radar time series plus VHR images), please also cite [this paper](https://arxiv.org/abs/2404.08351):
```
@article{astruc2024omnisat,
title={Omni{S}at: {S}elf-Supervised Modality Fusion for {E}arth Observation},
author={Astruc, Guillaume and Gonthier, Nicolas and Mallet, Clement and Landrieu, Loic},
journal={ECCV},
year={2024}
}
``` |
luulinh90s/chm-corr-prj-giang | luulinh90s | "2024-07-06T14:42:17Z" | 12,335 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-10-03T01:26:35Z" | ---
license: mit
---
|
liwu/MNBVC | liwu | "2024-08-23T02:21:05Z" | 12,306 | 500 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:zh",
"license:mit",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2023-02-13T14:00:47Z" | ---
annotations_creators:
- other
language:
- zh
language_creators:
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: MNBVC
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for MNBVC
## Table of Contents
- [Dataset Card for MNBVC](#dataset-card-for-mnbvc)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Introduction](#dataset-introduction)
- [Dataset Subsets](#dataset-subsets)
- [Data Format](#data-format)
- [Text Data](#text-data)
- [Q&A Data](#qa-data)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://mnbvc.253874.net/
- **Repository:** https://github.com/esbatmop/MNBVC
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Introduction
On 2023-01-01, the LiWu (里屋) community, the oldest and most mysterious community on the Chinese internet (bar none), solemnly announced that, under the leadership of its wise and valiant moderators, it is determined to put the community's strengths to work and help the open-source community maintain, over the long term, the largest corpus of the Chinese internet.
The MNBVC dataset on Hugging Face is updated incrementally; visit [https://github.com/esbatmop/MNBVC](https://github.com/esbatmop/MNBVC) for additional data that has not yet been cleaned.
The dataset can be loaded with the following script:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'law_judgement', split='train', streaming=True)
next(iter(dataset))  # get the first line
```
## Dataset Subsets
The MNBVC dataset contains several subsets:
- `law_judgement`: text from legal documents.
- `gov_xuexiqiangguo`: text from Xuexi Qiangguo (学习强国).
- `gov_report`: text from government work reports.
- `co_ann_report`: corporate annual reports.
- `code_metadata`: code metadata.
- `qa_zhihu`: Q&A data from [Zhihu](https://huggingface.co/datasets/wangrui6/Zhihu-KOL).
- `qa_wikihow`: Q&A data from wikiHow.
- `qa_mfa`: Q&A data from the Ministry of Foreign Affairs.
- `news_peoples_daily`: text from the People's Daily.
- `wikipedia`: text from Wikipedia.
- `qa_stackexchange`: Q&A data from StackExchange.
- `qa_chatgpt`: a Q&A corpus constructed with ChatGPT; thanks to [genggui001](https://github.com/genggui001) for contributing it.
- `math`:
  - `math_qa`: Q&A data related to mathematics.
  - `emath`: corpus from a Chinese mathematics enthusiasts' forum.
  - `math_chat`: dialogue data related to mathematics, which can improve a model's chain-of-thought ability.
- `crawler_oscar`: general text data cleaned from CommonCrawl.
- `game`: parallel corpora from games.
  - `Hogwarts_legacy`: Hogwarts Legacy.
  - `The_Wither_3`: The Witcher 3.
## Data Format
The MNBVC dataset currently contains the following categories of data:
- general text
- Q&A corpora
- code corpora
- multi-turn dialogue
- forum corpora
- parallel corpora
The specific format of each category is documented on the [MNBVC wiki page](https://wiki.mnbvc.org/doku.php/%E7%8E%B0%E6%9C%89%E8%AF%AD%E6%96%99%E6%A0%BC%E5%BC%8F).
Data uploaded in the early stages of the project used the following format; this format will be deprecated, and the corresponding data will be re-uploaded:
```json
{
"text": datasets.Value("string"),
"meta": datasets.Value("string")
}
```
### Contributions
Thanks to the [Liwu community](http://mnbvc.253874.net/) for constructing this dataset.
Thanks to [silver](https://github.com/silverriver) and [jiaming](https://huggingface.co/Yjiaming) for adding and uploading this dataset to Huggingface.
### Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{mnbvc,
author = {{MOP-LIWU Community} and {MNBVC Team}},
title = {MNBVC: Massive Never-ending BT Vast Chinese corpus},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/esbatmop/MNBVC}},
}
```
|
common-canvas/commoncatalog-cc-by-nc-nd | common-canvas | "2024-05-16T19:46:41Z" | 12,240 | 2 | [
"task_categories:text-to-image",
"language:en",
"license:cc-by-nc-nd-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.16825",
"region:us"
] | [
"text-to-image"
] | "2023-10-19T02:10:48Z" | ---
license: cc-by-nc-nd-4.0
dataset_info:
features:
- name: jpg
dtype: image
- name: blip2_caption
dtype: string
- name: caption
dtype: string
- name: licensename
dtype: string
- name: licenseurl
dtype: string
- name: width
dtype: int32
- name: height
dtype: int32
- name: original_width
dtype: int32
- name: original_height
dtype: int32
- name: photoid
dtype: int64
- name: uid
dtype: string
- name: unickname
dtype: string
- name: datetaken
dtype: timestamp[us]
- name: dateuploaded
dtype: int64
- name: capturedevice
dtype: string
- name: title
dtype: string
- name: usertags
dtype: string
- name: machinetags
dtype: string
- name: longitude
dtype: float64
- name: latitude
dtype: float64
- name: accuracy
dtype: int64
- name: pageurl
dtype: string
- name: downloadurl
dtype: string
- name: serverid
dtype: int64
- name: farmid
dtype: int64
- name: secret
dtype: string
- name: secretoriginal
dtype: string
- name: ext
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: string
- name: exif
dtype: string
- name: sha256
dtype: string
- name: description
dtype: string
task_categories:
- text-to-image
language:
- en
---
# Dataset Card for CommonCatalog CC-BY-NC-ND
This dataset is a large collection of high-resolution Creative Commons images (composed of different licenses; see Table 1 in the Appendix of the paper) collected in 2014 from users of Yahoo Flickr.
The dataset contains images of up to 4K resolution, making this one of the highest-resolution captioned image datasets.
## Dataset Details
### Dataset Description
We provide synthetic captions for approximately 100 million high-resolution images collected from Yahoo Flickr Creative Commons (YFCC).
- **Curated by:** Aaron Gokaslan
- **Language(s) (NLP):** en
- **License:** See relevant yaml tag / dataset name.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/mosaicml/diffusion
- **Paper:** https://arxiv.org/abs/2310.16825
- **Demo:** See CommonCanvas Gradios
## Uses
We use CommonCatalog to train a family of latent diffusion models called CommonCanvas.
The goal is to produce a model that is competitive with Stable Diffusion 2, but to do so using an easily accessible dataset of known provenance.
Doing so makes replicating the model significantly easier, and provides a clearer mechanism for applying training-data attribution techniques.
### Direct Use
Training text-to-image models
Training image-to-text models
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
* Commercial use
* Crafting content that is offensive or injurious towards individuals, including negative portrayals of their living conditions, cultural backgrounds, religious beliefs, etc.
* Deliberately creating or spreading content that is discriminatory or reinforces harmful stereotypes.
* Falsely representing individuals without their permission.
* Generating sexual content that may be seen by individuals without their consent.
* Producing or disseminating false or misleading information.
* Creating content that depicts extreme violence or bloodshed.
* Distributing content that modifies copyrighted or licensed material in a way that breaches its usage terms.
## Dataset Structure
The dataset is divided into 10 subsets, each containing parquet files of about 4GB. Each subfolder contains a resolution range of the images and their respective aspect ratios.
The dataset is also divided between images licensed for commercial use (C) and those that are not (NC).
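The aspect-ratio grouping mentioned above can be sketched as a simple nearest-bucket assignment. The bucket boundaries below are invented for illustration, since the card does not list the actual ranges.

```python
def aspect_bucket(width: int, height: int,
                  buckets=(0.5, 0.75, 1.0, 1.33, 1.78)) -> float:
    """Assign an image to the nearest aspect-ratio bucket (illustrative values)."""
    ratio = width / height
    return min(buckets, key=lambda b: abs(b - ratio))

aspect_bucket(1920, 1080)  # a 16:9 landscape image falls in the 1.78 bucket
```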
## Dataset Creation
### Curation Rationale
Creating a standardized, accessible dataset with synthetic captions and releasing it so that others can train on a common dataset for open-source image generation.
### Source Data
Yahoo Flickr Creative Commons 100M Dataset and Synthetically Generated Caption Data.
#### Data Collection and Processing
All synthetic captions were generated with BLIP2. See paper for more details.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Users of Flickr
## Bias, Risks, and Limitations
See the Yahoo Flickr Creative Commons 100M dataset for more information. The data was collected circa 2014 and is known to have a bias towards internet-connected Western countries; some areas, such as the Global South, lack representation.
## Citation
**BibTeX:**
```
@article{gokaslan2023commoncanvas,
title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images},
author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2310.16825},
year={2023}
}
```
## Dataset Card Authors
[Aaron Gokaslan](https://huggingface.co/Skylion007)
## Dataset Card Contact
[Aaron Gokaslan](https://huggingface.co/Skylion007)
|
mlabonne/FineTome-100k | mlabonne | "2024-07-29T09:52:30Z" | 12,145 | 135 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-27T18:34:47Z" | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: source
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 239650960.7474458
num_examples: 100000
download_size: 116531415
dataset_size: 239650960.7474458
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# FineTome-100k
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/75I3ffI4XnRlheOQ7kNJ3.jpeg)
The FineTome dataset is a subset of [arcee-ai/The-Tome](https://huggingface.co/datasets/arcee-ai/The-Tome) (without arcee-ai/qwen2-72b-magpie-en), re-filtered using [HuggingFaceFW/fineweb-edu-classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier).
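Per the schema above, each example carries a list of `from`/`value` messages. A minimal sketch for converting them to role/content chat messages might look like this; the `human`/`gpt` role names are an assumption based on the common ShareGPT convention, not guaranteed by the card.

```python
def to_chat(conversations):
    """Convert assumed ShareGPT-style messages to role/content dicts."""
    role_map = {"system": "system", "human": "user", "gpt": "assistant"}
    return [{"role": role_map.get(m["from"], m["from"]), "content": m["value"]}
            for m in conversations]

msgs = to_chat([{"from": "human", "value": "Hi"},
                {"from": "gpt", "value": "Hello!"}])
```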
It was made for my article ["Fine-tune Llama 3.1 Ultra-Efficiently with Unsloth"](https://huggingface.co/blog/mlabonne/sft-llama3). |
kamilakesbi/transformers_image_doc | kamilakesbi | "2024-04-22T15:51:29Z" | 12,138 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-04-22T15:50:03Z" | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 406434.0
num_examples: 2
download_size: 381914
dataset_size: 406434.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cardiffnlp/databench | cardiffnlp | "2024-12-01T18:25:22Z" | 12,131 | 6 | [
"task_categories:table-question-answering",
"task_categories:question-answering",
"language:en",
"language:es",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"table-question-answering",
"table",
"qa"
] | [
"table-question-answering",
"question-answering"
] | "2023-12-21T08:08:56Z" | ---
language:
- en
- es
pretty_name: " 💾🏋️💾 DataBench 💾🏋️💾"
tags:
- table-question-answering
- table
- qa
license: mit
task_categories:
- table-question-answering
- question-answering
default: qa
configs:
- config_name: qa
data_files:
- data/001_Forbes/qa.parquet
- data/002_Titanic/qa.parquet
- data/003_Love/qa.parquet
- data/004_Taxi/qa.parquet
- data/005_NYC/qa.parquet
- data/006_London/qa.parquet
- data/007_Fifa/qa.parquet
- data/008_Tornados/qa.parquet
- data/009_Central/qa.parquet
- data/010_ECommerce/qa.parquet
- data/011_SF/qa.parquet
- data/012_Heart/qa.parquet
- data/013_Roller/qa.parquet
- data/014_Airbnb/qa.parquet
- data/015_Food/qa.parquet
- data/016_Holiday/qa.parquet
- data/017_Hacker/qa.parquet
- data/018_Staff/qa.parquet
- data/019_Aircraft/qa.parquet
- data/020_Real/qa.parquet
- data/021_Telco/qa.parquet
- data/022_Airbnbs/qa.parquet
- data/023_Climate/qa.parquet
- data/024_Salary/qa.parquet
- data/025_Data/qa.parquet
- data/026_Predicting/qa.parquet
- data/027_Supermarket/qa.parquet
- data/028_Predict/qa.parquet
- data/029_NYTimes/qa.parquet
- data/030_Professionals/qa.parquet
- data/031_Trustpilot/qa.parquet
- data/032_Delicatessen/qa.parquet
- data/033_Employee/qa.parquet
- data/034_World/qa.parquet
- data/035_Billboard/qa.parquet
- data/036_US/qa.parquet
- data/037_Ted/qa.parquet
- data/038_Stroke/qa.parquet
- data/039_Happy/qa.parquet
- data/040_Speed/qa.parquet
- data/041_Airline/qa.parquet
- data/042_Predict/qa.parquet
- data/043_Predict/qa.parquet
- data/044_IMDb/qa.parquet
- data/045_Predict/qa.parquet
- data/046_120/qa.parquet
- data/047_Bank/qa.parquet
- data/048_Data/qa.parquet
- data/049_Boris/qa.parquet
- data/050_ING/qa.parquet
- data/051_Pokemon/qa.parquet
- data/052_Professional/qa.parquet
- data/053_Patents/qa.parquet
- data/054_Joe/qa.parquet
- data/055_German/qa.parquet
- data/056_Emoji/qa.parquet
- data/057_Spain/qa.parquet
- data/058_US/qa.parquet
- data/059_Second/qa.parquet
- data/060_Bakery/qa.parquet
- data/061_Disneyland/qa.parquet
- data/062_Trump/qa.parquet
- data/063_Influencers/qa.parquet
- data/064_Clustering/qa.parquet
- data/065_RFM/qa.parquet
# - split: 001_Forbes
# path: data/001_Forbes/qa.parquet
# - split: 002_Titanic
# path: data/002_Titanic/qa.parquet
# - split: 003_Love
# path: data/003_Love/qa.parquet
# - split: 004_Taxi
# path: data/004_Taxi/qa.parquet
# - split: 005_NYC
# path: data/005_NYC/qa.parquet
# - split: 006_London
# path: data/006_London/qa.parquet
# - split: 007_Fifa
# path: data/007_Fifa/qa.parquet
# - split: 008_Tornados
# path: data/008_Tornados/qa.parquet
# - split: 009_Central
# path: data/009_Central/qa.parquet
# - split: 010_ECommerce
# path: data/010_ECommerce/qa.parquet
# - split: 011_SF
# path: data/011_SF/qa.parquet
# - split: 012_Heart
# path: data/012_Heart/qa.parquet
# - split: 013_Roller
# path: data/013_Roller/qa.parquet
# - split: 014_Airbnb
# path: data/014_Airbnb/qa.parquet
# - split: 015_Food
# path: data/015_Food/qa.parquet
# - split: 016_Holiday
# path: data/016_Holiday/qa.parquet
# - split: 017_Hacker
# path: data/017_Hacker/qa.parquet
# - split: 018_Staff
# path: data/018_Staff/qa.parquet
# - split: 019_Aircraft
# path: data/019_Aircraft/qa.parquet
# - split: 020_Real
# path: data/020_Real/qa.parquet
# - split: 021_Telco
# path: data/021_Telco/qa.parquet
# - split: 022_Airbnbs
# path: data/022_Airbnbs/qa.parquet
# - split: 023_Climate
# path: data/023_Climate/qa.parquet
# - split: 024_Salary
# path: data/024_Salary/qa.parquet
# - split: 025_Data
# path: data/025_Data/qa.parquet
# - split: 026_Predicting
# path: data/026_Predicting/qa.parquet
# - split: 027_Supermarket
# path: data/027_Supermarket/qa.parquet
# - split: 028_Predict
# path: data/028_Predict/qa.parquet
# - split: 029_NYTimes
# path: data/029_NYTimes/qa.parquet
# - split: 030_Professionals
# path: data/030_Professionals/qa.parquet
# - split: 031_Trustpilot
# path: data/031_Trustpilot/qa.parquet
# - split: 032_Delicatessen
# path: data/032_Delicatessen/qa.parquet
# - split: 033_Employee
# path: data/033_Employee/qa.parquet
# - split: 034_World
# path: data/034_World/qa.parquet
# - split: 035_Billboard
# path: data/035_Billboard/qa.parquet
# - split: 036_US
# path: data/036_US/qa.parquet
# - split: 037_Ted
# path: data/037_Ted/qa.parquet
# - split: 038_Stroke
# path: data/038_Stroke/qa.parquet
# - split: 039_Happy
# path: data/039_Happy/qa.parquet
# - split: 040_Speed
# path: data/040_Speed/qa.parquet
# - split: 041_Airline
# path: data/041_Airline/qa.parquet
# - split: 042_Predict
# path: data/042_Predict/qa.parquet
# - split: 043_Predict
# path: data/043_Predict/qa.parquet
# - split: 044_IMDb
# path: data/044_IMDb/qa.parquet
# - split: 045_Predict
# path: data/045_Predict/qa.parquet
# - split: "046_120"
# path: data/046_120/qa.parquet
# - split: 047_Bank
# path: data/047_Bank/qa.parquet
# - split: 048_Data
# path: data/048_Data/qa.parquet
# - split: 049_Boris
# path: data/049_Boris/qa.parquet
# - split: 050_ING
# path: data/050_ING/qa.parquet
# - split: 051_Pokemon
# path: data/051_Pokemon/qa.parquet
# - split: 052_Professional
# path: data/052_Professional/qa.parquet
# - split: 053_Patents
# path: data/053_Patents/qa.parquet
# - split: 054_Joe
# path: data/054_Joe/qa.parquet
# - split: 055_German
# path: data/055_German/qa.parquet
# - split: 056_Emoji
# path: data/056_Emoji/qa.parquet
# - split: 057_Spain
# path: data/057_Spain/qa.parquet
# - split: 058_US
# path: data/058_US/qa.parquet
# - split: 059_Second
# path: data/059_Second/qa.parquet
# - split: 060_Bakery
# path: data/060_Bakery/qa.parquet
# - split: 061_Disneyland
# path: data/061_Disneyland/qa.parquet
# - split: 062_Trump
# path: data/062_Trump/qa.parquet
# - split: 063_Influencers
# path: data/063_Influencers/qa.parquet
# - split: 064_Clustering
# path: data/064_Clustering/qa.parquet
# - split: 065_RFM
# path: data/065_RFM/qa.parquet
# - config_name: 001_Forbes
# data_files:
# - split: full
# path: data/001_Forbes/all.parquet
# - split: lite
# path: data/001_Forbes/sample.parquet
# - config_name: 002_Titanic
# data_files:
# - split: full
# path: data/002_Titanic/all.parquet
# - split: lite
# path: data/002_Titanic/sample.parquet
# - config_name: 003_Love
# data_files:
# - split: full
# path: data/003_Love/all.parquet
# - split: lite
# path: data/003_Love/sample.parquet
# - config_name: 004_Taxi
# data_files:
# - split: full
# path: data/004_Taxi/all.parquet
# - split: lite
# path: data/004_Taxi/sample.parquet
# - config_name: 005_NYC
# data_files:
# - split: full
# path: data/005_NYC/all.parquet
# - split: lite
# path: data/005_NYC/sample.parquet
# - config_name: 006_London
# data_files:
# - split: full
# path: data/006_London/all.parquet
# - split: lite
# path: data/006_London/sample.parquet
# - config_name: 007_Fifa
# data_files:
# - split: full
# path: data/007_Fifa/all.parquet
# - split: lite
# path: data/007_Fifa/sample.parquet
# - config_name: 008_Tornados
# data_files:
# - split: full
# path: data/008_Tornados/all.parquet
# - split: lite
# path: data/008_Tornados/sample.parquet
# - config_name: 009_Central
# data_files:
# - split: full
# path: data/009_Central/all.parquet
# - split: lite
# path: data/009_Central/sample.parquet
# - config_name: 010_ECommerce
# data_files:
# - split: full
# path: data/010_ECommerce/all.parquet
# - split: lite
# path: data/010_ECommerce/sample.parquet
# - config_name: 011_SF
# data_files:
# - split: full
# path: data/011_SF/all.parquet
# - split: lite
# path: data/011_SF/sample.parquet
# - config_name: 012_Heart
# data_files:
# - split: full
# path: data/012_Heart/all.parquet
# - split: lite
# path: data/012_Heart/sample.parquet
# - config_name: 013_Roller
# data_files:
# - split: full
# path: data/013_Roller/all.parquet
# - split: lite
# path: data/013_Roller/sample.parquet
# - config_name: 014_Airbnb
# data_files:
# - split: full
# path: data/014_Airbnb/all.parquet
# - split: lite
# path: data/014_Airbnb/sample.parquet
# - config_name: 015_Food
# data_files:
# - split: full
# path: data/015_Food/all.parquet
# - split: lite
# path: data/015_Food/sample.parquet
# - config_name: 016_Holiday
# data_files:
# - split: full
# path: data/016_Holiday/all.parquet
# - split: lite
# path: data/016_Holiday/sample.parquet
# - config_name: 017_Hacker
# data_files:
# - split: full
# path: data/017_Hacker/all.parquet
# - split: lite
# path: data/017_Hacker/sample.parquet
# - config_name: 018_Staff
# data_files:
# - split: full
# path: data/018_Staff/all.parquet
# - split: lite
# path: data/018_Staff/sample.parquet
# - config_name: 019_Aircraft
# data_files:
# - split: full
# path: data/019_Aircraft/all.parquet
# - split: lite
# path: data/019_Aircraft/sample.parquet
# - config_name: 020_Real
# data_files:
# - split: full
# path: data/020_Real/all.parquet
# - split: lite
# path: data/020_Real/sample.parquet
# - config_name: 021_Telco
# data_files:
# - split: full
# path: data/021_Telco/all.parquet
# - split: lite
# path: data/021_Telco/sample.parquet
# - config_name: 022_Airbnbs
# data_files:
# - split: full
# path: data/022_Airbnbs/all.parquet
# - split: lite
# path: data/022_Airbnbs/sample.parquet
# - config_name: 023_Climate
# data_files:
# - split: full
# path: data/023_Climate/all.parquet
# - split: lite
# path: data/023_Climate/sample.parquet
# - config_name: 024_Salary
# data_files:
# - split: full
# path: data/024_Salary/all.parquet
# - split: lite
# path: data/024_Salary/sample.parquet
# - config_name: 025_Data
# data_files:
# - split: full
# path: data/025_Data/all.parquet
# - split: lite
# path: data/025_Data/sample.parquet
# - config_name: 026_Predicting
# data_files:
# - split: full
# path: data/026_Predicting/all.parquet
# - split: lite
# path: data/026_Predicting/sample.parquet
# - config_name: 027_Supermarket
# data_files:
# - split: full
# path: data/027_Supermarket/all.parquet
# - split: lite
# path: data/027_Supermarket/sample.parquet
# - config_name: 028_Predict
# data_files:
# - split: full
# path: data/028_Predict/all.parquet
# - split: lite
# path: data/028_Predict/sample.parquet
# - config_name: 029_NYTimes
# data_files:
# - split: full
# path: data/029_NYTimes/all.parquet
# - split: lite
# path: data/029_NYTimes/sample.parquet
# - config_name: 030_Professionals
# data_files:
# - split: full
# path: data/030_Professionals/all.parquet
# - split: lite
# path: data/030_Professionals/sample.parquet
# - config_name: 031_Trustpilot
# data_files:
# - split: full
# path: data/031_Trustpilot/all.parquet
# - split: lite
# path: data/031_Trustpilot/sample.parquet
# - config_name: 032_Delicatessen
# data_files:
# - split: full
# path: data/032_Delicatessen/all.parquet
# - split: lite
# path: data/032_Delicatessen/sample.parquet
# - config_name: 033_Employee
# data_files:
# - split: full
# path: data/033_Employee/all.parquet
# - split: lite
# path: data/033_Employee/sample.parquet
# - config_name: 034_World
# data_files:
# - split: full
# path: data/034_World/all.parquet
# - split: lite
# path: data/034_World/sample.parquet
# - config_name: 035_Billboard
# data_files:
# - split: full
# path: data/035_Billboard/all.parquet
# - split: lite
# path: data/035_Billboard/sample.parquet
# - config_name: 036_US
# data_files:
# - split: full
# path: data/036_US/all.parquet
# - split: lite
# path: data/036_US/sample.parquet
# - config_name: 037_Ted
# data_files:
# - split: full
# path: data/037_Ted/all.parquet
# - split: lite
# path: data/037_Ted/sample.parquet
# - config_name: 038_Stroke
# data_files:
# - split: full
# path: data/038_Stroke/all.parquet
# - split: lite
# path: data/038_Stroke/sample.parquet
# - config_name: 039_Happy
# data_files:
# - split: full
# path: data/039_Happy/all.parquet
# - split: lite
# path: data/039_Happy/sample.parquet
# - config_name: 040_Speed
# data_files:
# - split: full
# path: data/040_Speed/all.parquet
# - split: lite
# path: data/040_Speed/sample.parquet
# - config_name: 041_Airline
# data_files:
# - split: full
# path: data/041_Airline/all.parquet
# - split: lite
# path: data/041_Airline/sample.parquet
# - config_name: 042_Predict
# data_files:
# - split: full
# path: data/042_Predict/all.parquet
# - split: lite
# path: data/042_Predict/sample.parquet
# - config_name: 043_Predict
# data_files:
# - split: full
# path: data/043_Predict/all.parquet
# - split: lite
# path: data/043_Predict/sample.parquet
# - config_name: 044_IMDb
# data_files:
# - split: full
# path: data/044_IMDb/all.parquet
# - split: lite
# path: data/044_IMDb/sample.parquet
# - config_name: 045_Predict
# data_files:
# - split: full
# path: data/045_Predict/all.parquet
# - split: lite
# path: data/045_Predict/sample.parquet
# - config_name: "046_120"
# data_files:
# - split: full
# path: data/046_120/all.parquet
# - split: lite
# path: data/046_120/sample.parquet
# - config_name: 047_Bank
# data_files:
# - split: full
# path: data/047_Bank/all.parquet
# - split: lite
# path: data/047_Bank/sample.parquet
# - config_name: 048_Data
# data_files:
# - split: full
# path: data/048_Data/all.parquet
# - split: lite
# path: data/048_Data/sample.parquet
# - config_name: 049_Boris
# data_files:
# - split: full
# path: data/049_Boris/all.parquet
# - split: lite
# path: data/049_Boris/sample.parquet
# - config_name: 050_ING
# data_files:
# - split: full
# path: data/050_ING/all.parquet
# - split: lite
# path: data/050_ING/sample.parquet
# - config_name: 051_Pokemon
# data_files:
# - split: full
# path: data/051_Pokemon/all.parquet
# - split: lite
# path: data/051_Pokemon/sample.parquet
# - config_name: 052_Professional
# data_files:
# - split: full
# path: data/052_Professional/all.parquet
# - split: lite
# path: data/052_Professional/sample.parquet
# - config_name: 053_Patents
# data_files:
# - split: full
# path: data/053_Patents/all.parquet
# - split: lite
# path: data/053_Patents/sample.parquet
# - config_name: 054_Joe
# data_files:
# - split: full
# path: data/054_Joe/all.parquet
# - split: lite
# path: data/054_Joe/sample.parquet
# - config_name: 055_German
# data_files:
# - split: full
# path: data/055_German/all.parquet
# - split: lite
# path: data/055_German/sample.parquet
# - config_name: 056_Emoji
# data_files:
# - split: full
# path: data/056_Emoji/all.parquet
# - split: lite
# path: data/056_Emoji/sample.parquet
# - config_name: 057_Spain
# data_files:
# - split: full
# path: data/057_Spain/all.parquet
# - split: lite
# path: data/057_Spain/sample.parquet
# - config_name: 058_US
# data_files:
# - split: full
# path: data/058_US/all.parquet
# - split: lite
# path: data/058_US/sample.parquet
# - config_name: 059_Second
# data_files:
# - split: full
# path: data/059_Second/all.parquet
# - split: lite
# path: data/059_Second/sample.parquet
# - config_name: 060_Bakery
# data_files:
# - split: full
# path: data/060_Bakery/all.parquet
# - split: lite
# path: data/060_Bakery/sample.parquet
# - config_name: 061_Disneyland
# data_files:
# - split: full
# path: data/061_Disneyland/all.parquet
# - split: lite
# path: data/061_Disneyland/sample.parquet
# - config_name: 062_Trump
# data_files:
# - split: full
# path: data/062_Trump/all.parquet
# - split: lite
# path: data/062_Trump/sample.parquet
# - config_name: 063_Influencers
# data_files:
# - split: full
# path: data/063_Influencers/all.parquet
# - split: lite
# path: data/063_Influencers/sample.parquet
# - config_name: 064_Clustering
# data_files:
# - split: full
# path: data/064_Clustering/all.parquet
# - split: lite
# path: data/064_Clustering/sample.parquet
# - config_name: 065_RFM
# data_files:
# - split: full
# path: data/065_RFM/all.parquet
# - split: lite
# path: data/065_RFM/sample.parquet
- config_name: semeval
data_files:
- split: train
path:
- data/001_Forbes/qa.parquet
- data/002_Titanic/qa.parquet
- data/003_Love/qa.parquet
- data/004_Taxi/qa.parquet
- data/005_NYC/qa.parquet
- data/006_London/qa.parquet
- data/007_Fifa/qa.parquet
- data/008_Tornados/qa.parquet
- data/009_Central/qa.parquet
- data/010_ECommerce/qa.parquet
- data/011_SF/qa.parquet
- data/012_Heart/qa.parquet
- data/013_Roller/qa.parquet
- data/014_Airbnb/qa.parquet
- data/015_Food/qa.parquet
- data/016_Holiday/qa.parquet
- data/017_Hacker/qa.parquet
- data/018_Staff/qa.parquet
- data/019_Aircraft/qa.parquet
- data/020_Real/qa.parquet
- data/021_Telco/qa.parquet
- data/022_Airbnbs/qa.parquet
- data/023_Climate/qa.parquet
- data/024_Salary/qa.parquet
- data/025_Data/qa.parquet
- data/026_Predicting/qa.parquet
- data/027_Supermarket/qa.parquet
- data/028_Predict/qa.parquet
- data/029_NYTimes/qa.parquet
- data/030_Professionals/qa.parquet
- data/031_Trustpilot/qa.parquet
- data/032_Delicatessen/qa.parquet
- data/033_Employee/qa.parquet
- data/034_World/qa.parquet
- data/035_Billboard/qa.parquet
- data/036_US/qa.parquet
- data/037_Ted/qa.parquet
- data/038_Stroke/qa.parquet
- data/039_Happy/qa.parquet
- data/040_Speed/qa.parquet
- data/041_Airline/qa.parquet
- data/042_Predict/qa.parquet
- data/043_Predict/qa.parquet
- data/044_IMDb/qa.parquet
- data/045_Predict/qa.parquet
- data/046_120/qa.parquet
- data/047_Bank/qa.parquet
- data/048_Data/qa.parquet
- data/049_Boris/qa.parquet
- split: dev
path:
- data/050_ING/qa.parquet
- data/051_Pokemon/qa.parquet
- data/052_Professional/qa.parquet
- data/053_Patents/qa.parquet
- data/054_Joe/qa.parquet
- data/055_German/qa.parquet
- data/056_Emoji/qa.parquet
- data/057_Spain/qa.parquet
- data/058_US/qa.parquet
- data/059_Second/qa.parquet
- data/060_Bakery/qa.parquet
- data/061_Disneyland/qa.parquet
- data/062_Trump/qa.parquet
- data/063_Influencers/qa.parquet
- data/064_Clustering/qa.parquet
- data/065_RFM/qa.parquet
---
# 💾🏋️💾 DataBench 💾🏋️💾
This repository contains the original 65 datasets used for the paper [Question Answering over Tabular Data with DataBench:
A Large-Scale Empirical Evaluation of LLMs](https://huggingface.co/datasets/cardiffnlp/databench/resolve/main/Databench-LREC-Coling-2024.pdf) which appeared in LREC-COLING 2024.
Large Language Models (LLMs) are showing emerging abilities, and one of the latest recognized ones is tabular
reasoning in question answering on tabular data. Although there are some available datasets to assess question
answering systems on tabular data, they are not large and diverse enough to evaluate this new ability of LLMs.
To this end, we provide a corpus of 65 real-world datasets, with 3,269,975 rows and 1,615 columns in total, and 1,300 questions to evaluate your models for the task of QA over Tabular Data.
## Usage
```python
from datasets import load_dataset
# Load all QA pairs
all_qa = load_dataset("cardiffnlp/databench", name="qa", split="train")
# Load SemEval 2025 task 8 Question-Answer splits
semeval_train_qa = load_dataset("cardiffnlp/databench", name="semeval", split="train")
semeval_dev_qa = load_dataset("cardiffnlp/databench", name="semeval", split="dev")
```
You can use any of the individual [integrated libraries](https://huggingface.co/docs/hub/datasets-libraries#libraries) to load the actual data where the answer is to be retrieved.
For example, using pandas in Python:
```python
import pandas as pd
# "001_Forbes", the id of the dataset
ds_id = all_qa['dataset'][0]
# full dataset
df = pd.read_parquet(f"hf://datasets/cardiffnlp/databench/data/{ds_id}/all.parquet")
# sample dataset
df = pd.read_parquet(f"hf://datasets/cardiffnlp/databench/data/{ds_id}/sample.parquet")
```
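Once the QA pairs and the underlying tables are loaded, a common first baseline is to compare model outputs against the gold answers. The snippet below is a minimal sketch of a naive exact-match scorer; the string normalization is an illustrative assumption, not the official DataBench evaluation metric.

```python
def normalize(answer):
    """Naively normalize an answer for comparison (illustrative only)."""
    return str(answer).strip().lower()

def exact_match_accuracy(predictions, gold_answers):
    """Fraction of predictions that exactly match the gold answers."""
    if len(predictions) != len(gold_answers):
        raise ValueError("predictions and gold_answers must have the same length")
    hits = sum(normalize(p) == normalize(g)
               for p, g in zip(predictions, gold_answers))
    return hits / len(gold_answers)

# Two of the three pairs match after normalization
print(exact_match_accuracy(["Yes", " 42 ", "apple"], ["yes", "42", "orange"]))
```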
## 📚 Datasets
By clicking on each name in the table below, you will be able to explore each dataset.
| | Name | Rows | Cols | Domain | Source (Reference) |
|---:|:-------------------------------|-------:|-------:|:---------------------------|:-----------------------------------------------------------------------------------------------------------------------------------|
| 1 | [Forbes](https://public.graphext.com/0b211530c7e213d3/index.html?section=data) | 2668 | 17 | Business | [Forbes](https://www.forbes.com/billionaires/)|
| 2 | [Titanic](https://public.graphext.com/8577225c5ffd88fd/index.html) | 887 | 8 | Travel and Locations | [Kaggle](https://www.kaggle.com/competitions/titanic/data)|
| 3 | [Love](https://public.graphext.com/be7a566b0c485916/index.html) | 373 | 35 | Social Networks and Surveys | [Graphext](https://public.graphext.com/1de78f6820cfd5ba/index.html) |
| 4 | [Taxi](https://public.graphext.com/bcee13c23070f333/index.html) | 100000 | 20 | Travel and Locations | [Kaggle](https://www.kaggle.com/competitions/nyc-taxi-trip-duration/overview) |
| 5 | [NYC Calls](https://public.graphext.com/1ce2f5fae408621e/index.html) | 100000 | 46 | Business | [City of New York](https://data.cityofnewyork.us/Social-Services/NYC-311-Data/jrb2-thup) |
| 6 | [London Airbnbs](https://public.graphext.com/6bbf4bbd3ff279c0/index.html) | 75241 | 74 | Travel and Locations | [Kaggle](https://www.kaggle.com/datasets/labdmitriy/airbnb) |
| 7 | [Fifa](https://public.graphext.com/37bca51494c10a79/index.html) | 14620 | 59 | Sports and Entertainment | [Kaggle](https://www.kaggle.com/datasets/stefanoleone992/fifa-21-complete-player-dataset) |
| 8 | [Tornados](https://public.graphext.com/4be9872e031199c3/index.html) | 67558 | 14 | Health | [Kaggle](https://www.kaggle.com/datasets/danbraswell/us-tornado-dataset-1950-2021) |
| 9 | [Central Park](https://public.graphext.com/7b3d3a4d7bf1e9b5/index.html) | 56245 | 6 | Travel and Locations | [Kaggle](https://www.kaggle.com/datasets/danbraswell/new-york-city-weather-18692022) |
| 10 | [ECommerce Reviews](https://public.graphext.com/a5b8911b215958ad/index.html) | 23486 | 10 | Business | [Kaggle](https://www.kaggle.com/datasets/nicapotato/womens-ecommerce-clothing-reviews) |
| 11 | [SF Police](https://public.graphext.com/ab815ab14f88115c/index.html) | 713107 | 35 | Social Networks and Surveys | [US Gov](https://catalog.data.gov/dataset/police-department-incident-reports-2018-to-present) |
| 12 | [Heart Failure](https://public.graphext.com/245cec64075f5542/index.html) | 918 | 12 | Health | [Kaggle](https://www.kaggle.com/datasets/fedesoriano/heart-failure-prediction) |
| 13 | [Roller Coasters](https://public.graphext.com/1e550e6c24fc1930/index.html) | 1087 | 56 | Sports and Entertainment | [Kaggle](https://www.kaggle.com/datasets/robikscube/rollercoaster-database) |
| 14 | [Madrid Airbnbs](https://public.graphext.com/77265ea3a63e650f/index.html) | 20776 | 75 | Travel and Locations | [Inside Airbnb](http://data.insideairbnb.com/spain/comunidad-de-madrid/madrid/2023-09-07/data/listings.parquet.gz) |
| 15 | [Food Names](https://public.graphext.com/5aad4c5d6ef140b3/index.html) | 906 | 4 | Business | [Data World](https://data.world/alexandra/generic-food-database) |
| 16 | [Holiday Package Sales](https://public.graphext.com/fbc34d3f24282e46/index.html) | 4888 | 20 | Travel and Locations | [Kaggle](https://www.kaggle.com/datasets/susant4learning/holiday-package-purchase-prediction) |
| 17 | [Hacker News](https://public.graphext.com/f20501a9d616b5a5/index.html) | 9429 | 20 | Social Networks and Surveys | [Kaggle](https://www.kaggle.com/datasets/hacker-news/hacker-news) |
| 18 | [Staff Satisfaction](https://public.graphext.com/6822ac1ce6307fec/index.html) | 14999 | 11 | Business | [Kaggle](https://www.kaggle.com/datasets/mohamedharris/employee-satisfaction-index-dataset) |
| 19 | [Aircraft Accidents](https://public.graphext.com/1802117b1b14f5c5/index.html) | 23519 | 23 | Health | [Kaggle](https://www.kaggle.com/datasets/ramjasmaurya/aviation-accidents-history1919-april-2022) |
| 20 | [Real Estate Madrid](https://public.graphext.com/5f83ec219a7ea84f/index.html) | 26026 | 59 | Business | [Idealista](https://public.graphext.com/5f83ec219a7ea84f/index.html) |
| 21 | [Telco Customer Churn](https://public.graphext.com/362cd8e3e96f70d4/index.html) | 7043 | 21 | Business | [Kaggle](https://www.kaggle.com/datasets/blastchar/telco-customer-churn) |
| 22 | [Airbnbs Listings NY](https://public.graphext.com/77265ea3a63e650f/index.html) | 37012 | 33 | Travel and Locations | [Kaggle](https://www.kaggle.com/datasets/dgomonov/new-york-city-airbnb-open-data) |
| 23 | [Climate in Madrid](https://public.graphext.com/83a75b4f1cea8df4/index.html?section=data) | 36858 | 26 | Travel and Locations | [AEMET](https://public.graphext.com/83a75b4f1cea8df4/index.html?section=data) |
| 24 | [Salary Survey Spain 2018](https://public.graphext.com/24d1e717ba01aa3d/index.html) | 216726 | 29 | Business | [INE](https://www.ine.es) |
| 25 | [Data Driven SEO ](https://public.graphext.com/4e5b1cac9ebdfa44/index.html) | 62 | 5 | Business | [Graphext](https://www.graphext.com/post/data-driven-seo-a-keyword-optimization-guide-using-web-scraping-co-occurrence-analysis-graphext-deepnote-adwords) |
| 26 | [Predicting Wine Quality](https://public.graphext.com/de04acf5d18a9aea/index.html) | 1599 | 12 | Business | [Kaggle](https://www.kaggle.com/datasets/yasserh/wine-quality-dataset) |
| 27 | [Supermarket Sales](https://public.graphext.com/9a6742da6a8d8f7f/index.html) | 1000 | 17 | Business | [Kaggle](https://www.kaggle.com/datasets/aungpyaeap/supermarket-sales) |
| 28 | [Predict Diabetes](https://public.graphext.com/def4bada27af324c/index.html) | 768 | 9 | Health | [Kaggle](https://www.kaggle.com/datasets/iammustafatz/diabetes-prediction-dataset) |
| 29 | [NYTimes World In 2021](https://public.graphext.com/af4c8eef1757973c/index.html?section=data) | 52588 | 5 | Travel and Locations | [New York Times](https://public.graphext.com/af4c8eef1757973c/index.html) |
| 30 | [Professionals Kaggle Survey](https://public.graphext.com/3a2e87f90363a85d/index.html) | 19169 | 64 | Business | [Kaggle](https://www.kaggle.com/c/kaggle-survey-2021/data) |
| 31 | [Trustpilot Reviews](https://public.graphext.com/367e29432331fbfd/index.html?section=data) | 8020 | 6 | Business | [TrustPilot](https://public.graphext.com/367e29432331fbfd/index.html?section=data) |
| 32 | [Delicatessen Customers](https://public.graphext.com/a1687589fbde07bc/index.html) | 2240 | 29 | Business | [Kaggle](https://www.kaggle.com/datasets/rodsaldanha/arketing-campaign) |
| 33 | [Employee Attrition](https://public.graphext.com/07a91a15ecf2b8f6/index.html) | 14999 | 11 | Business | [Kaggle(modified)](https://www.kaggle.com/datasets/pavan9065/predicting-employee-attrition) |
| 34 | [World Happiness Report 2020](https://public.graphext.com/754c83ff0a7ba087/index.html) | 153 | 20 | Social Networks and Surveys | [World Happiness](https://worldhappiness.report/data/) |
| 35 | [Billboard Lyrics](https://public.graphext.com/7e0b009e8d0af719/index.html) | 5100 | 6 | Sports and Entertainment | [Brown University](https://cs.brown.edu/courses/cs100/students/project11/) |
| 36 | [US Migrations 2012-2016](https://public.graphext.com/dbdadf87a5c21695/index.html) | 288300 | 9 | Social Networks and Surveys | [US Census](https://www.census.gov/topics/population/migration/guidance/county-to-county-migration-flows.html) |
| 37 | [Ted Talks](https://public.graphext.com/07e48466fb670904/index.html) | 4005 | 19 | Social Networks and Surveys | [Kaggle](https://www.kaggle.com/datasets/ashishjangra27/ted-talks) |
| 38 | [Stroke Likelihood](https://public.graphext.com/20ccfee9e84948e3/index.html) | 5110 | 12 | Health | [Kaggle](https://www.kaggle.com/datasets/kamilpytlak/personal-key-indicators-of-heart-disease) |
| 39 | [Happy Moments](https://public.graphext.com/9b86efff48989701/index.html) | 100535 | 11 | Social Networks and Surveys | [Kaggle](https://www.kaggle.com/datasets/ritresearch/happydb) |
| 40 | [Speed Dating](https://public.graphext.com/f1912daad7870be0/index.html) | 8378 | 123 | Social Networks and Surveys | [Kaggle](https://www.kaggle.com/datasets/ulrikthygepedersen/speed-dating) |
| 41 | [Airline Mentions X (former Twitter)](https://public.graphext.com/29cb7f73f6e17a38/index.html) | 14640 | 15 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/7e6999327d1f83fd/index.html) |
| 42 | [Predict Student Performance](https://public.graphext.com/def4bada27af324c/index.html) | 395 | 33 | Business | [Kaggle](https://www.kaggle.com/datasets/impapan/student-performance-data-set) |
| 43 | [Loan Defaults](https://public.graphext.com/0c7fb68ab8071a1f/index.html) | 83656 | 20 | Business | [SBA](https://www.kaggle.com/datasets/mirbektoktogaraev/should-this-loan-be-approved-or-denied) |
| 44 | [IMDb Movies](https://public.graphext.com/e23e33774872c496/index.html) | 85855 | 22 | Sports and Entertainment | [Kaggle](https://www.kaggle.com/datasets/harshitshankhdhar/imdb-dataset-of-top-1000-movies-and-tv-shows) |
| 45 | [Spotify Song Popularity](https://public.graphext.com/def4bada27af324c/index.html) | 21000 | 19 | Sports and Entertainment | [Spotify](https://www.kaggle.com/datasets/tomigelo/spotify-audio-features) |
| 46 | [120 Years Olympics](https://public.graphext.com/e57d5e2f172c9a99/index.html) | 271116 | 15 | Sports and Entertainment | [Kaggle](https://www.kaggle.com/datasets/heesoo37/120-years-of-olympic-history-athletes-and-results) |
| 47 | [Bank Customer Churn](https://public.graphext.com/e8f7aeacd209f74a/index.html) | 7088 | 15 | Business | [Kaggle](https://www.kaggle.com/datasets/mathchi/churn-for-bank-customers) |
| 48 | [Data Science Salary Data](https://public.graphext.com/4e5b1cac9ebdfa44/index.html) | 742 | 28 | Business | [Kaggle](https://www.kaggle.com/datasets/ruchi798/data-science-job-salaries) |
| 49 | [Boris Johnson UK PM Tweets](https://public.graphext.com/f6623a1ca0f41c8e/index.html) | 3220 | 34 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/f6623a1ca0f41c8e/index.html) |
| 50 | [ING 2019 X Mentions](https://public.graphext.com/075030310aa702c6/index.html) | 7244 | 22 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/075030310aa702c6/index.html) |
| 51 | [Pokemon Features](https://public.graphext.com/f30d4d863a2e6b01/index.html) | 1072 | 13 | Business | [Kaggle](https://www.kaggle.com/datasets/rounakbanik/pokemon) |
| 52 | [Professional Map](https://public.graphext.com/70af2240cb751968/index.html) | 1227 | 12 | Business | [Kern et al, PNAS'20](https://github.com/behavioral-ds/VocationMap) |
| 53 | [Google Patents](https://public.graphext.com/a262300e31874716/index.html) | 9999 | 20 | Business | [BigQuery](https://www.kaggle.com/datasets/bigquery/patents/data) |
| 54 | [Joe Biden Tweets](https://public.graphext.com/33fa2efa41541ab1/index.html) | 491 | 34 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/339cee259f0a9b32/index.html?section=data) |
| 55 | [German Loans](https://public.graphext.com/d3f5e425e9d4b0a1/index.html) | 1000 | 18 | Business | [Kaggle](https://www.kaggle.com/datasets/uciml/german-credit/data) |
| 56 | [Emoji Diet](https://public.graphext.com/e721cc7d790c06d4/index.html) | 58 | 35 | Health | [Kaggle](https://www.kaggle.com/datasets/ofrancisco/emoji-diet-nutritional-data-sr28) |
| 57 | [Spain Survey 2015](https://public.graphext.com/90ca7539b160fdfa/index.html?section=data) | 20000 | 45 | Social Networks and Surveys | [CIS](https://public.graphext.com/90ca7539b160fdfa/index.html?section=data) |
| 58 | [US Polls 2020](https://public.graphext.com/dbdadf87a5c21695/index.html) | 3523 | 52 | Social Networks and Surveys | [Brandwatch](https://www.brandwatch.com/p/us-election-raw-polling-data/) |
| 59 | [Second Hand Cars](https://public.graphext.com/543d0c49d7120ca0/index.html) | 50000 | 21 | Business | [DataMarket](https://www.kaggle.com/datasets/datamarket/venta-de-coches) |
| 60 | [Bakery Purchases](https://public.graphext.com/6f2102e80f47a192/index.html) | 20507 | 5 | Business | [Kaggle](https://www.kaggle.com/code/xvivancos/market-basket-analysis/report) |
| 61 | [Disneyland Customer Reviews](https://public.graphext.com/b1037bb566b7b316/index.html) | 42656 | 6 | Travel and Locations | [Kaggle](https://www.kaggle.com/datasets/arushchillar/disneyland-reviews) |
| 62 | [Trump Tweets](https://public.graphext.com/7aff94c3b7f159fc/index.html) | 15039 | 20 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/be903c098a90e46f/index.html?section=data) |
| 63 | [Influencers](https://public.graphext.com/e097f1ea03d761a9/index.html) | 1039 | 14 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/e097f1ea03d761a9/index.html) |
| 64 | [Clustering Zoo Animals](https://public.graphext.com/d1b66902e46a712a/index.html) | 101 | 18 | Health | [Kaggle](https://www.kaggle.com/datasets/jirkadaberger/zoo-animals) |
| 65 | [RFM Analysis](https://public.graphext.com/4db2e54e29006a21/index.html) | 541909 | 8 | Business | [UCI ML](https://www.kaggle.com/datasets/carrie1/ecommerce-data) |
## 🏗️ Folder structure
Each folder represents one dataset. You will find the following files within:
* all.parquet: the processed data, with each column tagged with our typing system, in [parquet](https://arrow.apache.org/docs/python/parquet.html).
* qa.parquet: contains the human-made set of questions, tagged by type and columns used, for the dataset (sample_answer indicates the answers for DataBench lite)
* sample.parquet: sample containing 20 rows of the original dataset (DataBench lite)
* info.yml: additional information about the dataset
## 🗂️ Column typing system
In an effort to set the stage for later analysis, we have categorized the columns by type. This information allows us to segment different kinds of data so that we can subsequently analyze the model's behavior on each column type separately. All parquet files have been cast to their smallest viable data type using the open-source [Lector](https://github.com/graphext/lector) reader.
What this means is that the data types carry more granular information that lets us know whether a column contains NaNs (following pandas' convention of Int vs int), as well as whether small numerical values contain negatives (UInt vs Int) and their range. We also have dates with potential timezone information (although for now they're all UTC), as well as information about categories' cardinality coming from the Arrow types.
In the table below you can see all the data types assigned to the columns, as well as the number of columns of each type. The most common data types are numbers and categories, which account for 1,336 of the 1,615 columns included in DataBench. These are followed by other, rarer types such as URLs, booleans, dates, and lists of elements.
| Type | Columns | Example |
| -------------- | ------- | ----------------------- |
| number | 788 | 55 |
| category | 548 | apple |
| date | 50 | 1970-01-01 |
| text | 46 | A red fox ran... |
| url | 31 | google.com |
| boolean | 18 | True |
| list[number] | 14 | [1,2,3] |
| list[category] | 112 | [apple, orange, banana] |
| list[url] | 8 | [google.com, apple.com] |
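The Int-vs-int and UInt-vs-Int distinctions above follow pandas' nullable extension dtypes, which a short snippet can make concrete (this is generic pandas behavior, not DataBench-specific code):

```python
import pandas as pd

# Capitalized dtypes are nullable extension types: "Int64" admits missing
# values (pd.NA), while lowercase "int64" is the plain NumPy dtype.
nullable = pd.Series([1, 2, None], dtype="Int64")
plain = pd.Series([1, 2, 3], dtype="int64")

print(nullable.dtype)              # Int64
print(int(nullable.isna().sum()))  # 1
print(plain.dtype)                 # int64

# Unsigned types additionally encode "no negatives" and a tighter range.
unsigned = pd.Series([0, 255], dtype="UInt8")
print(unsigned.dtype)              # UInt8
```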
## 🔗 Reference
You can download the paper [here](https://huggingface.co/datasets/cardiffnlp/databench/resolve/main/Databench-LREC-Coling-2024.pdf).
If you use this resource, please use the following reference:
```
@inproceedings{oses-etal-2024-databench,
title = "Question Answering over Tabular Data with DataBench: A Large-Scale Empirical Evaluation of LLMs",
author = "Jorge Osés Grijalba and Luis Alfonso Ureña-López and
Eugenio Martínez Cámara and Jose Camacho-Collados",
booktitle = "Proceedings of LREC-COLING 2024",
year = "2024",
address = "Turin, Italy"
}
``` |
bigscience/evaluation-results | bigscience | "2023-05-28T00:13:53Z" | 11,991 | 10 | [
"task_categories:other",
"size_categories:100M<n<1B",
"region:us"
] | [
"other"
] | "2022-08-01T18:35:58Z" | ---
pretty_name: evaluation-results
size_categories:
- 100M<n<1B
task_categories:
- other
---
# BigScience BLOOM Evaluation Results
This repository contains evaluation results & original predictions of BLOOM & friends.
## Usage
You can load numeric results via:
```python
from datasets import load_dataset
ds = load_dataset("bigscience/evaluation-results", "bloom")
```
If it takes too long, it may be faster to clone the repository and load the data from disk:
```python
!git clone https://huggingface.co/datasets/bigscience/evaluation-results
ds = load_dataset("evaluation-results", "bloom")
```
For the example generations (`.jsonl` files), you need to browse the repository manually.
## Structure
For the `bigsciencelmevalharness`, `lmevalharness` & `codeeval` evaluation frameworks, the structure is:
`model_name > evaluation_framework > checkpoint_type > dataset_name > data`
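As a sketch of how that layout could be traversed after cloning the repository, the helper below groups result files by the four path components; it assumes only the directory nesting shown above:

```python
from pathlib import Path

def collect_results(root):
    """Group result files by (model, framework, checkpoint_type, dataset).

    Assumes the layout described above:
    model_name/evaluation_framework/checkpoint_type/dataset_name/<files>
    """
    root = Path(root)
    grouped = {}
    for path in root.glob("*/*/*/*/*"):
        if path.is_file():
            key = path.relative_to(root).parts[:4]
            grouped.setdefault(key, []).append(path.name)
    return grouped
```

Calling `collect_results("evaluation-results")` on a local clone would then map keys like `("bloom", "lmevalharness", ...)` to the result files found underneath them.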
## Evaluation Procedure
- `bigsciencelmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/291
- https://github.com/bigscience-workshop/lm-evaluation-harness
- `lmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed
- https://github.com/EleutherAI/lm-evaluation-harness
- `codeeval` files were created using the HumanEval code dataset with the below:
- https://github.com/loubnabnl/bloom-code-evaluation
|
OpenDILabCommunity/LMDrive | OpenDILabCommunity | "2023-12-25T13:28:07Z" | 11,983 | 13 | [
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2312.07488",
"region:us"
] | null | "2023-11-30T08:56:30Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: navigation_instruction_list.txt
sep: " "
default: true
license: apache-2.0
language:
- en
size_categories:
- n>1T
---
# LMDrive 64K Dataset Card
LMDrive Dataset consists of 64K instruction-sensor-control data clips collected in the CARLA simulator, where each clip includes one navigation instruction, several notice instructions, a sequence of multi-modal multi-view sensor data, and control signals. Each clip spans from 2 to 20 seconds.
## Dataset details
- `data/`: dataset folder, the entire dataset contains about 2T of data.
- `data/Town01`: sub dataset folder, which only consists of the data folder for the Town01
- `data/Town02`: sub dataset folder, which only consists of the data folder for the Town02
- ...
- `dataset_index.txt`: the data list for pretraining the vision encoder
- `navigation_instruction_list.txt`: the data list for instruction finetuning
- `notice_instruction_list.json`: the data list for instruction finetuning (optional if the notice instruction data is not engaged in the training)
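The plain-text data lists above (e.g. `dataset_index.txt`) can be read with a few lines of standard Python; note that the meaning of the tokens on each line is an assumption here — consult the project README for the authoritative format.

```python
def read_data_list(path):
    """Read a whitespace-separated data-list file into a list of token lists.

    Each non-empty line is split on whitespace; what the tokens mean
    (clip folder, frame count, ...) depends on the specific list file.
    """
    records = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            tokens = line.split()
            if tokens:
                records.append(tokens)
    return records
```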
**Dataset date:**
LMDrive-1.0 Dataset was collected in September 2023.
**Paper or resources for more information:**
Github: https://github.com/opendilab/LMDrive/README.md
Paper: https://arxiv.org/abs/2312.07488
**License:**
Attribution-NonCommercial 4.0 International
**Where to send questions or comments about the model:**
https://github.com/opendilab/LMDrive/issues
## Intended use
**Primary intended uses:**
The primary use of LMDrive is research on large multimodal models for autonomous driving.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, large multimodal model, autonomous driving, and artificial intelligence. |
Voxel51/PIDray | Voxel51 | "2024-06-06T14:46:34Z" | 11,978 | 2 | [
"task_categories:object-detection",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"library:fiftyone",
"arxiv:2108.07020",
"region:us",
"fiftyone",
"image",
"object-detection"
] | [
"object-detection"
] | "2024-04-29T13:57:49Z" | ---
annotations_creators: []
language: en
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- object-detection
task_ids: []
pretty_name: pidray
tags:
- fiftyone
- image
- object-detection
dataset_summary: '
![image/png](dataset_preview.gif)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 9482 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include ''max_samples'', etc
dataset = fouh.load_from_hub("voxel51/PIDray")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for pidray
PIDray is a large-scale dataset which covers various cases in real-world scenarios for prohibited item detection, especially for deliberately hidden items. The dataset contains 12 categories of prohibited items in 47,677 X-ray images with high-quality annotated segmentation masks and bounding boxes.
![image/png](dataset_preview.gif)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 9482 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("Voxel51/PIDray")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
### Dataset Description
From _Towards Real-World Prohibited Item Detection: A Large-Scale X-ray Benchmark_:
Automatic security inspection using computer vision technology is a challenging task in real-world scenarios due to various factors, including intra-class variance, class imbalance, and occlusion. Most previous methods rarely address the cases in which prohibited items are deliberately hidden in messy objects, owing to the lack of large-scale datasets, which restricts their application in real-world scenarios. Towards real-world prohibited item detection, we collect a large-scale dataset, named PIDray, which covers various cases in real-world scenarios for prohibited item detection, especially for deliberately hidden items. With an intensive amount of effort, our dataset contains 12 categories of prohibited items in 47,677 X-ray images with high-quality annotated segmentation masks and bounding boxes. To the best of our knowledge, it is the largest prohibited item detection dataset to date. Meanwhile, we design the selective dense attention network (SDANet) to construct a strong baseline, which consists of the dense attention module and the dependency refinement module. The dense attention module, formed by the spatial and channel-wise dense attentions, is designed to learn discriminative features to boost performance. The dependency refinement module is used to exploit the dependencies of multi-scale features. Extensive experiments conducted on the collected PIDray dataset demonstrate that the proposed method performs favorably against state-of-the-art methods, especially for detecting deliberately hidden items.
- **Language(s) (NLP):** en
- **License:** apache-2.0
The images and the corresponding annotations in PIDray Dataset can be used ONLY for academic purposes, NOT for commercial purposes.
Copyright © 2021 Institute of Software Chinese Academy of Sciences, University of Chinese Academy of Sciences
All rights reserved.
### Dataset Sources
- **Repository:** https://github.com/bywang2018/security-dataset
- **Paper [optional]:** https://arxiv.org/abs/2108.07020
## Uses
This is used for academic research on airport security screening machines and the detection of objects being scanned.
### Out-of-Scope Use
Any non-academic work is out of scope and prohibited.
## Citation
@inproceedings{wang2021towards,
title={Towards Real-World Prohibited Item Detection: A Large-Scale X-ray Benchmark},
author={Wang, Boying and Zhang, Libo and Wen, Longyin and Liu, Xianglong and Wu, Yanjun},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={5412--5421},
year={2021}
}
|
argmaxinc/whisperkit-evals | argmaxinc | "2024-12-24T09:07:06Z" | 11,964 | 4 | [
"region:us",
"whisper",
"whisperkit",
"coreml",
"asr",
"quantized"
] | null | "2024-02-28T08:05:36Z" |
---
pretty_name: "WhisperKit ASR Evaluation Results"
viewer: false
library_name: whisperkit
tags:
- whisper
- whisperkit
- coreml
- asr
- quantized
---
# WhisperKit
WhisperKit is an on-device speech recognition framework for Apple Silicon:
https://github.com/argmaxinc/WhisperKit
For performance and accuracy benchmarks on real devices, please see:
https://huggingface.co/spaces/argmaxinc/whisperkit-benchmarks
|
Voxel51/emnist-letters-tiny | Voxel51 | "2024-07-23T18:58:23Z" | 11,938 | 2 | [
"task_categories:image-classification",
"language:en",
"size_categories:10K<n<100K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"library:fiftyone",
"arxiv:1702.05373",
"region:us",
"fiftyone",
"image",
"image-classification"
] | [
"image-classification"
] | "2024-07-23T18:43:35Z" | ---
annotations_creators: []
language: en
size_categories:
- 10K<n<100K
task_categories:
- image-classification
task_ids: []
pretty_name: EMNIST-Letters-10k
tags:
- fiftyone
- image
- image-classification
dataset_summary: '
![image/png](dataset_preview.png)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10000 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include ''max_samples'', etc
dataset = load_from_hub("Voxel51/emnist-letters-tiny")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for EMNIST-Letters-10k
<!-- Provide a quick summary of the dataset. -->
A random subset of the train and test splits from the letters portion of [EMNIST](https://pytorch.org/vision/0.18/generated/torchvision.datasets.EMNIST.html)
![image/png](dataset_preview.png)
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10000 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("Voxel51/emnist-letters-tiny")
# Launch the App
session = fo.launch_app(dataset)
```
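Once loaded, a quick sanity check is to count samples per class. A minimal sketch, assuming the labels live in a `ground_truth` classification field (the actual field name in your loaded dataset may differ):

```python
def class_counts(dataset, label_field="ground_truth.label"):
    """Return a {label: count} mapping for a FiftyOne dataset.

    `label_field` is an assumption about where the classification labels
    are stored; adjust it to match the loaded dataset's schema.
    """
    # count_values aggregates occurrences of each distinct field value
    return dataset.count_values(label_field)
```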
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Homepage:** https://www.nist.gov/itl/products-and-services/emnist-dataset
- **Paper:** https://arxiv.org/abs/1702.05373
## Citation
**BibTeX:**
```bibtex
@misc{cohen2017emnistextensionmnisthandwritten,
title={EMNIST: an extension of MNIST to handwritten letters},
author={Gregory Cohen and Saeed Afshar and Jonathan Tapson and André van Schaik},
year={2017},
eprint={1702.05373},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/1702.05373},
}
```
## Dataset Card Author
[Jacob Marks](https://huggingface.co/jamarks)
|
qmeeus/voxpopuli | qmeeus | "2024-02-06T23:13:46Z" | 11,879 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-06T15:17:31Z" | ---
dataset_info:
- config_name: de
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 61603981153.568
num_examples: 108473
- name: validation
num_bytes: 1149586917.507
num_examples: 2109
download_size: 52060225655
dataset_size: 62753568071.075
- config_name: es
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 36533665201.936
num_examples: 50922
- name: validation
num_bytes: 1173444834.383
num_examples: 1631
download_size: 1005381345
dataset_size: 37707110036.319
- config_name: fr
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 48346650213.26
num_examples: 73561
- name: validation
num_bytes: 1149779276.605
num_examples: 1727
download_size: 17314564262
dataset_size: 49496429489.865005
- config_name: nl
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 10436544940.608
num_examples: 20968
- name: validation
num_bytes: 636925883.64
num_examples: 1230
download_size: 9404833804
dataset_size: 11073470824.248
configs:
- config_name: de
data_files:
- split: train
path: de/train-*
- split: validation
path: de/validation-*
- config_name: es
data_files:
- split: train
path: es/train-*
- split: validation
path: es/validation-*
- config_name: fr
data_files:
- split: train
path: fr/train-*
- split: validation
path: fr/validation-*
- config_name: nl
data_files:
- split: train
path: nl/train-*
- split: validation
path: nl/validation-*
---
# Dataset Card for "voxpopuli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
allenai/ZebraLogicBench-private | allenai | "2024-07-04T04:21:32Z" | 11,879 | 5 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-04T04:15:11Z" | ---
dataset_info:
- config_name: grid_mode
features:
- name: id
dtype: string
- name: size
dtype: string
- name: puzzle
dtype: string
- name: solution
struct:
- name: header
sequence: string
- name: rows
sequence:
sequence: string
- name: created_at
dtype: string
splits:
- name: test
num_bytes: 1545275
num_examples: 1000
download_size: 345826
dataset_size: 1545275
- config_name: mc_mode
features:
- name: id
dtype: string
- name: puzzle
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: created_at
dtype: string
splits:
- name: test
num_bytes: 5039993
num_examples: 3259
download_size: 826292
dataset_size: 5039993
configs:
- config_name: grid_mode
data_files:
- split: test
path: grid_mode/test-*
- config_name: mc_mode
data_files:
- split: test
path: mc_mode/test-*
---
|
DeliberatorArchiver/asmr-archive-data | DeliberatorArchiver | "2024-12-25T16:33:56Z" | 11,865 | 5 | [
"language:ja",
"license:agpl-3.0",
"size_categories:n>1T",
"region:us",
"not-for-all-audiences"
] | null | "2024-10-07T12:52:51Z" | ---
license: agpl-3.0
language:
- ja
tags:
- not-for-all-audiences
pretty_name: ASMR Archive Dataset
size_categories:
- n>1T
viewer: false
---
# ASMR Media Archive Storage
This repository contains an archive of ASMR works.
All data in this repository is uploaded for **educational and research purposes only.** **All use is at your own risk.**
> [!IMPORTANT]
> This repository contains **>= 32 TB** of files.
> Because Git LFS keeps a second copy of every file under `.git/lfs`, `git clone` consumes twice the disk space and is not recommended. The [Hugging Face CLI](https://huggingface.co/docs/huggingface_hub/guides/cli) or [Python libraries](https://huggingface.co/docs/huggingface_hub/index) allow you to select and download only a subset of files.
**\>\>\> [CLICK HERE or on the IMAGE BELOW for a list of works](https://asmr-archive-data.daydreamer-json.cc/) \<\<\<**
<a href="https://asmr-archive-data.daydreamer-json.cc/"><img width="500" src="./front_page_screenshot.jpg"></a> |
lmms-lab/LLaVA-OneVision-Data | lmms-lab | "2024-10-22T06:47:46Z" | 11,860 | 149 | [
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2408.03326",
"arxiv:2310.05126",
"region:us"
] | null | "2024-07-25T15:25:28Z" | ---
language:
- en
- zh
license: apache-2.0
pretty_name: llava-onevision-data
dataset_info:
- config_name: CLEVR-Math(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 791346970
num_examples: 5280
download_size: 441208499
dataset_size: 791346970
- config_name: FigureQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 463326576.625
num_examples: 17587
download_size: 258197193
dataset_size: 463326576.625
- config_name: GEOS(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1503641
num_examples: 498
download_size: 684471
dataset_size: 1503641
- config_name: GeoQA+(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 53579705.75
num_examples: 17162
download_size: 33480538
dataset_size: 53579705.75
- config_name: Geometry3K(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 218085473.5
num_examples: 9724
download_size: 125914780
dataset_size: 218085473.5
- config_name: IconQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 208430568.375
num_examples: 22589
download_size: 117222488
dataset_size: 208430568.375
- config_name: MapQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 384120915.875
num_examples: 5225
download_size: 215768443
dataset_size: 384120915.875
- config_name: PMC-VQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 571444866.5
num_examples: 35948
download_size: 326541003
dataset_size: 571444866.5
- config_name: Super-CLEVR(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2795082410.75
num_examples: 8642
download_size: 1580301917
dataset_size: 2795082410.75
- config_name: TabMWP(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 307726997.5
num_examples: 22452
download_size: 173938487
dataset_size: 307726997.5
- config_name: UniGeo(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 38296693.375
num_examples: 11949
download_size: 24170743
dataset_size: 38296693.375
- config_name: VisualWebInstruct(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 36317112275.0
num_examples: 263584
download_size: 36239916454
dataset_size: 36317112275.0
- config_name: VizWiz(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1170333936.5
num_examples: 6604
download_size: 660752297
dataset_size: 1170333936.5
- config_name: ai2d(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 438572782.375
num_examples: 2429
download_size: 437348514
dataset_size: 438572782.375
- config_name: ai2d(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 866076731
num_examples: 4864
download_size: 860306578
dataset_size: 866076731
- config_name: ai2d(internvl)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1832787249.625
num_examples: 12403
download_size: 527493895
dataset_size: 1832787249.625
- config_name: allava_instruct_laion4v
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5981767621.25
num_examples: 49990
download_size: 5873046236
dataset_size: 5981767621.25
- config_name: allava_instruct_vflan4v
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2680974558.25
num_examples: 19990
download_size: 2670088751
dataset_size: 2680974558.25
- config_name: aokvqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6896420844.25
num_examples: 16534
download_size: 6894236970
dataset_size: 6896420844.25
- config_name: chart2text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1145458729.5
num_examples: 26956
download_size: 1123681047
dataset_size: 1145458729.5
- config_name: chartqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 815335215.5
num_examples: 18260
download_size: 803084541
dataset_size: 815335215.5
- config_name: chrome_writting
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 44422597.875
num_examples: 8825
download_size: 39611257
dataset_size: 44422597.875
- config_name: clevr(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 10528974543.625
num_examples: 69995
download_size: 10460536445
dataset_size: 10528974543.625
- config_name: diagram_image_to_text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 18858266
num_examples: 295
download_size: 18659115
dataset_size: 18858266
- config_name: dvqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4487270615.625
num_examples: 199995
download_size: 4277056467
dataset_size: 4487270615.625
- config_name: figureqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2351194509.625
num_examples: 99995
download_size: 2222640639
dataset_size: 2351194509.625
- config_name: geo170k(align)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 204236256.75
num_examples: 60242
download_size: 58185410
dataset_size: 204236256.75
- config_name: geo170k(qa)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 266040519.125
num_examples: 67823
download_size: 160022430
dataset_size: 266040519.125
- config_name: geo3k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 42634333.625
num_examples: 2091
download_size: 41097851
dataset_size: 42634333.625
- config_name: geomverse(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2263893609.75
num_examples: 9298
download_size: 2211726352
dataset_size: 2263893609.75
- config_name: hateful_memes(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 3057252325.125
num_examples: 8495
download_size: 3055839880
dataset_size: 3057252325.125
- config_name: hitab(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 161706881.125
num_examples: 2495
download_size: 157871287
dataset_size: 161706881.125
- config_name: hme100k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 273229915.5
num_examples: 74492
download_size: 241005430
dataset_size: 273229915.5
- config_name: iam(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1131633206.75
num_examples: 5658
download_size: 1128371221
dataset_size: 1131633206.75
- config_name: iconqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 331284932.25
num_examples: 27302
download_size: 327005220
dataset_size: 331284932.25
- config_name: iiit5k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 21821437.25
num_examples: 1990
download_size: 21623116
dataset_size: 21821437.25
- config_name: image_textualization(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5218283253.375
num_examples: 99573
download_size: 5164176816
dataset_size: 5218283253.375
- config_name: infographic(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 713657496.25
num_examples: 1982
download_size: 656276080
dataset_size: 713657496.25
- config_name: infographic_vqa
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1528953078.75
num_examples: 4394
download_size: 1419340319
dataset_size: 1528953078.75
- config_name: infographic_vqa_llava_format
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1765315696.875
num_examples: 2113
download_size: 1764548536
dataset_size: 1765315696.875
- config_name: intergps(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 24973395.625
num_examples: 1275
download_size: 24736545
dataset_size: 24973395.625
- config_name: k12_printing
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1205153118.5
num_examples: 256636
download_size: 1108572712
dataset_size: 1205153118.5
- config_name: llavar_gpt4_20k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 633833350.25
num_examples: 19790
download_size: 625365542
dataset_size: 633833350.25
- config_name: lrv_chart
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 99338686
num_examples: 1776
download_size: 97979446
dataset_size: 99338686
- config_name: lrv_normal(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 422589381.75
num_examples: 10490
download_size: 406958773
dataset_size: 422589381.75
- config_name: magpie_pro(l3_80b_mt)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1657129141
num_examples: 299988
download_size: 885893066
dataset_size: 1657129141
- config_name: magpie_pro(l3_80b_st)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1033666690
num_examples: 299990
download_size: 562771564
dataset_size: 1033666690
- config_name: magpie_pro(qwen2_72b_st)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 703489344
num_examples: 299982
download_size: 361433408
dataset_size: 703489344
- config_name: mapqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 3355751195.5
num_examples: 37412
download_size: 3305639218
dataset_size: 3355751195.5
- config_name: mathqa
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 18318538
num_examples: 29827
download_size: 7857130
dataset_size: 18318538
- config_name: mavis_math_metagen
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2304025372.5
num_examples: 87348
download_size: 322776224
dataset_size: 2304025372.5
- config_name: mavis_math_rule_geo
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 14313211512.25
num_examples: 99990
download_size: 5841283073
dataset_size: 14313211512.25
- config_name: multihiertt(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 300319803.25
num_examples: 7614
download_size: 295638314
dataset_size: 300319803.25
- config_name: orand_car_a
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 23602442.125
num_examples: 1999
download_size: 23333412
dataset_size: 23602442.125
- config_name: raven(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1706160514.625
num_examples: 41995
download_size: 1693150088
dataset_size: 1706160514.625
- config_name: rendered_text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11082594894.625
num_examples: 9995
download_size: 11081962044
dataset_size: 11082594894.625
- config_name: robut_sqa(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 685580779.375
num_examples: 8509
download_size: 678666263
dataset_size: 685580779.375
- config_name: robut_wikisql(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6200499653
num_examples: 74984
download_size: 6168399217
dataset_size: 6200499653
- config_name: robut_wtq(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4091776188.875
num_examples: 38241
download_size: 4062777449
dataset_size: 4091776188.875
- config_name: scienceqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 286843125.625
num_examples: 4971
download_size: 282896809
dataset_size: 286843125.625
- config_name: scienceqa(nona_context)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2111029055
num_examples: 19208
download_size: 2053942726
dataset_size: 2111029055
- config_name: screen2words(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 7977502095.375
num_examples: 15725
download_size: 7962327904
dataset_size: 7977502095.375
- config_name: sharegpt4o
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6968025789.5
num_examples: 57284
download_size: 6772195470
dataset_size: 6968025789.5
- config_name: sharegpt4v(coco)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2620153362.875
num_examples: 50017
download_size: 2595583499
dataset_size: 2620153362.875
- config_name: sharegpt4v(knowledge)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 372100773.5
num_examples: 1988
download_size: 369799318
dataset_size: 372100773.5
- config_name: sharegpt4v(llava)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 781795487.25
num_examples: 29990
download_size: 400344187
dataset_size: 781795487.25
- config_name: sharegpt4v(sam)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4437405218.25
num_examples: 8990
download_size: 4428597081
dataset_size: 4437405218.25
- config_name: sroie
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 117810195
num_examples: 33616
download_size: 103647636
dataset_size: 117810195
- config_name: st_vqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5771194098.75
num_examples: 17242
download_size: 5768888141
dataset_size: 5771194098.75
- config_name: tabmwp(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 311192518.375
num_examples: 22717
download_size: 306092255
dataset_size: 311192518.375
- config_name: tallyqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 35998988065.625
num_examples: 98675
download_size: 35982430394
dataset_size: 35998988065.625
- config_name: textcaps
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2222268476.25
num_examples: 21942
download_size: 2217838132
dataset_size: 2222268476.25
- config_name: textocr(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2581655353
num_examples: 25104
download_size: 2574418106
dataset_size: 2581655353
- config_name: tqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 331203026.25
num_examples: 27302
download_size: 326999466
dataset_size: 331203026.25
- config_name: ureader_cap
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 9269857109.75
num_examples: 91434
download_size: 2292099971
dataset_size: 9269857109.75
- config_name: ureader_ie
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11871457209.75
num_examples: 17322
download_size: 1999083115
dataset_size: 11871457209.75
- config_name: vision_flan(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 24847242604.5
num_examples: 186060
download_size: 24750561877
dataset_size: 24847242604.5
- config_name: vistext(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 550187184.5
num_examples: 9964
download_size: 452795103
dataset_size: 550187184.5
- config_name: visual7w(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4451436523.875
num_examples: 14361
download_size: 4441971985
dataset_size: 4451436523.875
- config_name: visualmrc(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2938154124.25
num_examples: 3022
download_size: 2909296079
dataset_size: 2938154124.25
- config_name: vqarad(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 95533417
num_examples: 308
download_size: 95410398
dataset_size: 95533417
- config_name: vsr(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 891981646
num_examples: 2152
download_size: 891572866
dataset_size: 891981646
- config_name: websight(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11209715828.625
num_examples: 9995
download_size: 11144460985
dataset_size: 11209715828.625
configs:
- config_name: CLEVR-Math(MathV360K)
data_files:
- split: train
path: CLEVR-Math(MathV360K)/train-*
- config_name: FigureQA(MathV360K)
data_files:
- split: train
path: FigureQA(MathV360K)/train-*
- config_name: GEOS(MathV360K)
data_files:
- split: train
path: GEOS(MathV360K)/train-*
- config_name: GeoQA+(MathV360K)
data_files:
- split: train
path: GeoQA+(MathV360K)/train-*
- config_name: Geometry3K(MathV360K)
data_files:
- split: train
path: Geometry3K(MathV360K)/train-*
- config_name: IconQA(MathV360K)
data_files:
- split: train
path: IconQA(MathV360K)/train-*
- config_name: MapQA(MathV360K)
data_files:
- split: train
path: MapQA(MathV360K)/train-*
- config_name: PMC-VQA(MathV360K)
data_files:
- split: train
path: PMC-VQA(MathV360K)/train-*
- config_name: Super-CLEVR(MathV360K)
data_files:
- split: train
path: Super-CLEVR(MathV360K)/train-*
- config_name: TabMWP(MathV360K)
data_files:
- split: train
path: TabMWP(MathV360K)/train-*
- config_name: UniGeo(MathV360K)
data_files:
- split: train
path: UniGeo(MathV360K)/train-*
- config_name: VisualWebInstruct(filtered)
data_files:
- split: train
path: VisualWebInstruct(filtered)/train-*
- config_name: VizWiz(MathV360K)
data_files:
- split: train
path: VizWiz(MathV360K)/train-*
- config_name: ai2d(cauldron,llava_format)
data_files:
- split: train
path: ai2d(cauldron,llava_format)/train-*
- config_name: ai2d(gpt4v)
data_files:
- split: train
path: ai2d(gpt4v)/train-*
- config_name: ai2d(internvl)
data_files:
- split: train
path: ai2d(internvl)/train-*
- config_name: allava_instruct_laion4v
data_files:
- split: train
path: allava_instruct_laion4v/train-*
- config_name: allava_instruct_vflan4v
data_files:
- split: train
path: allava_instruct_vflan4v/train-*
- config_name: aokvqa(cauldron,llava_format)
data_files:
- split: train
path: aokvqa(cauldron,llava_format)/train-*
- config_name: chart2text(cauldron)
data_files:
- split: train
path: chart2text(cauldron)/train-*
- config_name: chartqa(cauldron,llava_format)
data_files:
- split: train
path: chartqa(cauldron,llava_format)/train-*
- config_name: chrome_writting
data_files:
- split: train
path: chrome_writting/train-*
- config_name: clevr(cauldron,llava_format)
data_files:
- split: train
path: clevr(cauldron,llava_format)/train-*
- config_name: diagram_image_to_text(cauldron)
data_files:
- split: train
path: diagram_image_to_text(cauldron)/train-*
- config_name: dvqa(cauldron,llava_format)
data_files:
- split: train
path: dvqa(cauldron,llava_format)/train-*
- config_name: figureqa(cauldron,llava_format)
data_files:
- split: train
path: figureqa(cauldron,llava_format)/train-*
- config_name: geo170k(align)
data_files:
- split: train
path: geo170k(align)/train-*
- config_name: geo170k(qa)
data_files:
- split: train
path: geo170k(qa)/train-*
- config_name: geo3k
data_files:
- split: train
path: geo3k/train-*
- config_name: geomverse(cauldron)
data_files:
- split: train
path: geomverse(cauldron)/train-*
- config_name: hateful_memes(cauldron,llava_format)
data_files:
- split: train
path: hateful_memes(cauldron,llava_format)/train-*
- config_name: hitab(cauldron,llava_format)
data_files:
- split: train
path: hitab(cauldron,llava_format)/train-*
- config_name: hme100k
data_files:
- split: train
path: hme100k/train-*
- config_name: iam(cauldron)
data_files:
- split: train
path: iam(cauldron)/train-*
- config_name: iconqa(cauldron,llava_format)
data_files:
- split: train
path: iconqa(cauldron,llava_format)/train-*
- config_name: iiit5k
data_files:
- split: train
path: iiit5k/train-*
- config_name: image_textualization(filtered)
data_files:
- split: train
path: image_textualization(filtered)/train-*
- config_name: infographic(gpt4v)
data_files:
- split: train
path: infographic(gpt4v)/train-*
- config_name: infographic_vqa
data_files:
- split: train
path: infographic_vqa/train-*
- config_name: infographic_vqa_llava_format
data_files:
- split: train
path: infographic_vqa_llava_format/train-*
- config_name: intergps(cauldron,llava_format)
data_files:
- split: train
path: intergps(cauldron,llava_format)/train-*
- config_name: k12_printing
data_files:
- split: train
path: k12_printing/train-*
- config_name: llavar_gpt4_20k
data_files:
- split: train
path: llavar_gpt4_20k/train-*
- config_name: lrv_chart
data_files:
- split: train
path: lrv_chart/train-*
- config_name: lrv_normal(filtered)
data_files:
- split: train
path: lrv_normal(filtered)/train-*
- config_name: magpie_pro(l3_80b_mt)
data_files:
- split: train
path: magpie_pro(l3_80b_mt)/train-*
- config_name: magpie_pro(l3_80b_st)
data_files:
- split: train
path: magpie_pro(l3_80b_st)/train-*
- config_name: magpie_pro(qwen2_72b_st)
data_files:
- split: train
path: magpie_pro(qwen2_72b_st)/train-*
- config_name: mapqa(cauldron,llava_format)
data_files:
- split: train
path: mapqa(cauldron,llava_format)/train-*
- config_name: mathqa
data_files:
- split: train
path: mathqa/train-*
- config_name: mavis_math_metagen
data_files:
- split: train
path: mavis_math_metagen/train-*
- config_name: mavis_math_rule_geo
data_files:
- split: train
path: mavis_math_rule_geo/train-*
- config_name: multihiertt(cauldron)
data_files:
- split: train
path: multihiertt(cauldron)/train-*
- config_name: orand_car_a
data_files:
- split: train
path: orand_car_a/train-*
- config_name: raven(cauldron)
data_files:
- split: train
path: raven(cauldron)/train-*
- config_name: rendered_text(cauldron)
data_files:
- split: train
path: rendered_text(cauldron)/train-*
- config_name: robut_sqa(cauldron)
data_files:
- split: train
path: robut_sqa(cauldron)/train-*
- config_name: robut_wikisql(cauldron)
data_files:
- split: train
path: robut_wikisql(cauldron)/train-*
- config_name: robut_wtq(cauldron,llava_format)
data_files:
- split: train
path: robut_wtq(cauldron,llava_format)/train-*
- config_name: scienceqa(cauldron,llava_format)
data_files:
- split: train
path: scienceqa(cauldron,llava_format)/train-*
- config_name: scienceqa(nona_context)
data_files:
- split: train
path: scienceqa(nona_context)/train-*
- config_name: screen2words(cauldron)
data_files:
- split: train
path: screen2words(cauldron)/train-*
- config_name: sharegpt4o
data_files:
- split: train
path: sharegpt4o/train-*
- config_name: sharegpt4v(coco)
data_files:
- split: train
path: sharegpt4v(coco)/train-*
- config_name: sharegpt4v(knowledge)
data_files:
- split: train
path: sharegpt4v(knowledge)/train-*
- config_name: sharegpt4v(llava)
data_files:
- split: train
path: sharegpt4v(llava)/train-*
- config_name: sharegpt4v(sam)
data_files:
- split: train
path: sharegpt4v(sam)/train-*
- config_name: sroie
data_files:
- split: train
path: sroie/train-*
- config_name: st_vqa(cauldron,llava_format)
data_files:
- split: train
path: st_vqa(cauldron,llava_format)/train-*
- config_name: tabmwp(cauldron)
data_files:
- split: train
path: tabmwp(cauldron)/train-*
- config_name: tallyqa(cauldron,llava_format)
data_files:
- split: train
path: tallyqa(cauldron,llava_format)/train-*
- config_name: textcaps
data_files:
- split: train
path: textcaps/train-*
- config_name: textocr(gpt4v)
data_files:
- split: train
path: textocr(gpt4v)/train-*
- config_name: tqa(cauldron,llava_format)
data_files:
- split: train
path: tqa(cauldron,llava_format)/train-*
- config_name: ureader_cap
data_files:
- split: train
path: ureader_cap/train-*
- config_name: ureader_ie
data_files:
- split: train
path: ureader_ie/train-*
- config_name: vision_flan(filtered)
data_files:
- split: train
path: vision_flan(filtered)/train-*
- config_name: vistext(cauldron)
data_files:
- split: train
path: vistext(cauldron)/train-*
- config_name: visual7w(cauldron,llava_format)
data_files:
- split: train
path: visual7w(cauldron,llava_format)/train-*
- config_name: visualmrc(cauldron)
data_files:
- split: train
path: visualmrc(cauldron)/train-*
- config_name: vqarad(cauldron,llava_format)
data_files:
- split: train
path: vqarad(cauldron,llava_format)/train-*
- config_name: vsr(cauldron,llava_format)
data_files:
- split: train
path: vsr(cauldron,llava_format)/train-*
- config_name: websight(cauldron)
data_files:
- split: train
path: websight(cauldron)/train-*
---
# Dataset Card for LLaVA-OneVision
**[2024-09-01]: Uploaded VisualWebInstruct(filtered), it's used in OneVision Stage**
> Almost all subsets are uploaded in HF's required format; you can use the recommended interface to download them and follow our code below to convert them.
> The subsets `ureader_kg` and `ureader_qa` are uploaded as processed JSONs with tar.gz image folders.
> You may directly download them from the following URL.
> https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data/tree/main/ureader_kg
In this dataset, we include the data splits used in both the final image stage and the one-vision stage. For more details, please check our [paper](https://arxiv.org/abs/2408.03326) and our [training doc](https://github.com/LLaVA-VL/LLaVA-NeXT/tree/main/scripts/train#about-the-llava-onevision-data).
## Dataset Description
- **Curated by:** Bo Li, Kaichen Zhang, Hao Zhang, Yuanhan Zhang, Renrui Zhang, Feng Li, Dong Guo
- **Language(s) (NLP):** English, Chinese
- **License:** Apache License 2.0
## Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Dataset Collection:** We include a few subsets from existing dataset collection [Cambrian](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M), [Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron), [UReader](https://arxiv.org/abs/2310.05126). Since we only used a few subsets from these datasets, and applied the cleaning and re-annotation process, we uploaded our processed version of these datasets into our own repository and thank the authors for providing the original datasets.
- **Other Datasets:** For the remaining single-source datasets, such as AI2D and OKVQA, we cite and link the original sources in our paper.
## Uses
This dataset is used for the training of the LLaVA-OneVision model. We only allow the use of this dataset for academic research and education purposes. For OpenAI GPT-4 generated data, we recommend that users check the [OpenAI Usage Policy](https://openai.com/policies/usage-policies/).
## Dataset Structure
We explain the data composition for the mid-stage and final-stage training at our repo in the [**training doc**](https://github.com/LLaVA-VL/LLaVA-NeXT/tree/main/scripts/train#about-the-llava-onevision-data).
### Statistics
We provide the statistics of the dataset in the following figures, and refer the audience to our paper for further details.
![](https://i.postimg.cc/2y989XZJ/WX20240802-145215-2x.png)
![](https://i.postimg.cc/MZ9TGXFD/WX20240802-145226-2x.png)
### Code Guidance
To help the audience better understand our dataset, we upload it in a Hugging Face Datasets–compatible format. During LLaVA-OneVision training, we use the `json` and `image/video` folder to store the data.
> The subsets `ureader_kg` and `ureader_qa` are uploaded as processed JSONs with tar.gz image folders. You may directly download them from the following URL.
> https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data/tree/main/ureader_kg
Here we provide code guidance to convert the dataset into the LLaVA-OneVision format and conduct training of the LLaVA-OneVision model with the converted dataset.
```python
import os
from datasets import load_dataset
from tqdm import tqdm
import json
# Each subset is a separate config; pass its name when loading, e.g.:
data = load_dataset("lmms-lab/LLaVA-OneVision-Data", "CLEVR-Math(MathV360K)", split="train")
image_folder = "<your_image_folder>"
converted_data = []
for da in tqdm(data):
json_data = {}
json_data["id"] = da["id"]
if da["image"] is not None:
json_data["image"] = f"{da['id']}.jpg"
da["image"].save(os.path.join(image_folder, json_data["image"]))
json_data["conversations"] = da["conversations"]
converted_data.append(json_data)
with open("<your_json_file>.json", "w") as f:
json.dump(converted_data, f, indent=4, ensure_ascii=False)
```
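As a quick sanity check on the converted records, a small validator like the following can catch malformed entries before training. This is a minimal sketch of our own (not part of the official tooling); it only checks the JSON layout produced by the conversion snippet above:

```python
def validate_record(record):
    """Check that a converted record matches the LLaVA-style JSON layout."""
    if "id" not in record or "conversations" not in record:
        return False
    convs = record["conversations"]
    if not isinstance(convs, list) or not convs:
        return False
    # Each turn must carry a speaker tag ("from") and a text value ("value").
    return all({"from", "value"} <= set(turn) for turn in convs)

record = {
    "id": "demo_0",
    "image": "demo_0.jpg",
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is shown?"},
        {"from": "gpt", "value": "A demo image."},
    ],
}
assert validate_record(record)
assert not validate_record({"id": "demo_1", "conversations": []})
```

Running this over `converted_data` before dumping the JSON is a cheap way to fail fast on records with missing fields.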
## Citation
**BibTeX:**
[More Information Needed]
## Glossary
The dataset collection process was conducted by all of the authors. We thank Feng Li and Renrui Zhang for providing [LLaVA-M4-Instruct Data](https://huggingface.co/datasets/lmms-lab/M4-Instruct-Data) and Yuanhan Zhang for providing the [Video datasets](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K).
After the dataset collection, the cleaning and re-annotation process, including the final mixture of the dataset, was conducted by Bo Li with the great help of Kaichen Zhang.
## Dataset Card Authors
The dataset is curated by the following authors:
Bo Li, Kaichen Zhang, Hao Zhang, Yuanhan Zhang, Renrui Zhang, Feng Li
## Dataset Card Contact
[Bo Li](https://brianboli.com/): [email protected]
[Kaichen Zhang](https://www.linkedin.com/in/kaichen-zhang-014b17219/?originalSubdomain=sg) |
DL3DV/DL3DV-ALL-2K | DL3DV | "2024-09-03T11:38:35Z" | 11,759 | 0 | [
"size_categories:n>1T",
"region:us",
"3D Vision",
"NeRF",
"3D Gaussian",
"Dataset",
"Novel View Synthesis",
"Text to 3D",
"Image to 3D"
] | null | "2024-03-05T06:03:15Z" | ---
tags:
- 3D Vision
- NeRF
- 3D Gaussian
- Dataset
- Novel View Synthesis
- Text to 3D
- Image to 3D
pretty_name: Dl3DV-Dataset
size_categories:
- n>1T
---
# DL3DV-Dataset
This repo contains all the 2K frames with camera poses of the DL3DV-10K dataset. We are working hard to review the entire dataset to avoid sensitive information. Thank you for your patience.
# Download
If you have enough space, you can use git to download the dataset from Hugging Face. See this [link](https://huggingface.co/docs/hub/en/datasets-downloading). The [480P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-480P)/[960P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-960P) versions should satisfy most needs.
If you do not have enough space, we also provide a [download script](https://github.com/DL3DV-10K/Dataset/blob/main/scripts/download.py) to download a subset. Usage:
```Bash
usage: download.py [-h] --odir ODIR --subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K} --resolution {4K,2K,960P,480P} --file_type {images+poses,video,colmap_cache} [--hash HASH]
[--clean_cache]
optional arguments:
-h, --help show this help message and exit
--odir ODIR output directory
--subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K}
The subset of the benchmark to download
--resolution {4K,2K,960P,480P}
                        The resolution to download
--file_type {images+poses,video,colmap_cache}
The file type to download
--hash HASH If set subset=hash, this is the hash code of the scene to download
--clean_cache If set, will clean the huggingface cache to save space
```
Here are some examples:
```Bash
# Make sure you have applied for the access.
# Use this to download the download.py script
wget https://raw.githubusercontent.com/DL3DV-10K/Dataset/main/scripts/download.py
# Download 2K resolution images and poses, 0~1K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 1K --resolution 2K --file_type images+poses --clean_cache
# Download 2K resolution images and poses, 1K~2K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 2K --resolution 2K --file_type images+poses --clean_cache
```
You can also download a specific scene with its hash. The scene-hash pair visualization can be found [here](https://htmlpreview.github.io/?https://github.com/DL3DV-10K/Dataset/blob/main/visualize/index.html).
```Bash
python download.py --odir DL3DV-10K --subset 2K --resolution 2K --file_type images+poses --hash e2cedefea8a0ed2d0ffbd5bdc08acbe7e1f85c96f72f7b790e9dfe1c98963047 --clean_cache
```
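After a download completes, you may want to sanity-check what landed on disk. The sketch below is our own helper, not part of the official script, and it assumes scenes are stored as one directory per scene hash under the output directory (the exact layout can vary by resolution and file type):

```python
import os

def list_scenes(root):
    """Return the scene directory names found under the download root, sorted."""
    if not os.path.isdir(root):
        return []
    return sorted(
        name
        for name in os.listdir(root)
        if os.path.isdir(os.path.join(root, name))
    )
```

For example, `list_scenes("DL3DV-10K")` would list the downloaded scene hashes, which you can compare against the scene-hash visualization page linked above.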
# News
- [x] DL3DV-1K, 2K, 3K, 4K
- [ ] DL3DV-5K ~ 10K
|
bigscience/xP3 | bigscience | "2023-05-30T15:49:59Z" | 11,752 | 108 | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"language:gu",
"language:hi",
"language:id",
"language:ig",
"language:ki",
"language:kn",
"language:lg",
"language:ln",
"language:ml",
"language:mr",
"language:ne",
"language:nso",
"language:ny",
"language:or",
"language:pa",
"language:pt",
"language:rn",
"language:rw",
"language:sn",
"language:st",
"language:sw",
"language:ta",
"language:te",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:ur",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:zu",
"license:apache-2.0",
"size_categories:100M<n<1B",
"arxiv:2211.01786",
"region:us"
] | [
"other"
] | "2022-10-10T10:38:53Z" | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></t>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
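These two fields map directly onto a sequence-to-sequence training pair. The sketch below illustrates one way to do the mapping; the field names come from the dataset, but the EOS handling is an illustrative choice of ours, not the official preprocessing:

```python
def to_seq2seq_pair(example, eos="</s>"):
    """Map an xP3 record to (source, target) strings for an encoder-decoder model."""
    source = example["inputs"].strip()
    target = example["targets"].strip() + eos
    return source, target

example = {
    "inputs": "Question: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
    "targets": "Yes",
}
src, tgt = to_seq2seq_pair(example)
# → ("Question: Can we rewrite Sentence 1 to Sentence 2? Yes or No?", "Yes</s>")
```

For decoder-only models such as BLOOMZ, the usual alternative is to concatenate `inputs` and `targets` into a single sequence and compute the loss only on the target tokens.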
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` contribute only single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. Adding a new language is very simple; you can take [this script adding Russian](https://huggingface.co/datasets/bs-la/xP3ru/blob/main/xp3_ru.py) as an example.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.34|
|bm|107056|0.11|265180|0.34|
|ak|108096|0.11|265071|0.34|
|eu|108112|0.11|269973|0.34|
|ca|110608|0.12|271191|0.34|
|fon|113072|0.12|265063|0.34|
|st|114080|0.12|265063|0.34|
|ki|115040|0.12|265180|0.34|
|tum|116032|0.12|265063|0.34|
|wo|122560|0.13|365063|0.46|
|ln|126304|0.13|365060|0.46|
|as|156256|0.16|265063|0.34|
|or|161472|0.17|265063|0.34|
|kn|165456|0.17|265063|0.34|
|ml|175040|0.18|265864|0.34|
|rn|192992|0.2|318189|0.4|
|nso|229712|0.24|915051|1.16|
|tn|235536|0.25|915054|1.16|
|lg|235936|0.25|915021|1.16|
|rw|249360|0.26|915043|1.16|
|ts|250256|0.26|915044|1.16|
|sn|252496|0.27|865056|1.1|
|xh|254672|0.27|915058|1.16|
|zu|263712|0.28|915061|1.16|
|ny|272128|0.29|915063|1.16|
|ig|325232|0.34|950097|1.2|
|yo|352784|0.37|918416|1.16|
|ne|393680|0.41|315754|0.4|
|pa|523248|0.55|339210|0.43|
|gu|560688|0.59|347499|0.44|
|sw|560896|0.59|1114455|1.41|
|mr|666240|0.7|417269|0.53|
|bn|832720|0.88|428843|0.54|
|ta|924496|0.97|410633|0.52|
|te|1332912|1.4|573364|0.73|
|ur|1918272|2.02|855756|1.08|
|vi|3101408|3.27|1667306|2.11|
|code|4330752|4.56|2707724|3.43|
|hi|4393696|4.63|1543441|1.96|
|zh|4589904|4.83|3560556|4.51|
|id|4606288|4.85|2627392|3.33|
|ar|4677264|4.93|2148955|2.72|
|fr|5546688|5.84|5055942|6.41|
|pt|6129584|6.46|3562772|4.52|
|es|7571808|7.98|5151349|6.53|
|en|37261104|39.25|31495184|39.93|
|total|94941936|100.0|78883588|100.0|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI datasets & HumanEval)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
knkarthick/dialogsum | knkarthick | "2023-10-03T10:56:21Z" | 11,691 | 190 | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dialogue-summary",
"one-liner-summary",
"meeting-title",
"email-subject"
] | [
"summarization",
"text2text-generation",
"text-generation"
] | "2022-06-28T10:17:20Z" | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
- text2text-generation
- text-generation
task_ids: []
pretty_name: DIALOGSum Corpus
tags:
- dialogue-summary
- one-liner-summary
- meeting-title
- email-subject
---
# Dataset Card for DIALOGSum Corpus
## Dataset Description
### Links
- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (plus 100 holdout dialogues for topic generation) with corresponding manually labeled summaries and topics.
### Languages
English
## Dataset Structure
### Data Instances
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (plus 1,000 additional test dialogues) split into train, validation and test.
The first instance in the training set:
{'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': "get a check-up"}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.
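As a quick illustration of these fields, a record shaped like the training instance above can be handled as a plain dictionary (values abbreviated here):

```python
# A DialogSum-shaped record with the four documented fields
# (values abbreviated from the 'train_0' instance shown above).
record = {
    "id": "train_0",
    "dialogue": "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. ...",
    "summary": "Mr. Smith's getting a check-up, and Doctor Hawkins "
               "advises him to have one every year.",
    "topic": "get a check-up",
}

# Each example exposes exactly these fields.
assert set(record) == {"id", "dialogue", "summary", "topic"}
```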
### Data Splits
- train: 12460
- val: 500
- test: 1500
- holdout: 100 [Only 3 features: id, dialogue, topic]
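A minimal sketch checking that these split sizes line up with the counts quoted earlier (13,460 dialogues plus 1,000 extra test dialogues; the 100-example holdout is separate):

```python
# Split sizes as documented on this card.
splits = {"train": 12460, "val": 500, "test": 1500, "holdout": 100}

# The 13,460 core dialogues plus the 1,000 additional test dialogues
# are distributed over train/val/test; the holdout split is separate.
assert splits["train"] + splits["val"] + splits["test"] == 13460 + 1000

total = sum(splits.values())
print(total)  # 14560 examples across all four splits
```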
## Dataset Creation
### Curation Rationale
In paper:
We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, and travel. Most conversations take place between friends and colleagues, and between service providers and customers.
Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
- Under rich real-life scenarios, including more diverse task-oriented scenarios;
- Have clear communication patterns and intents, which makes them valuable as summarization sources;
- Have a reasonable length, which suits the purpose of automatic summarization.
We ask annotators to summarize each dialogue based on the following criteria:
- Convey the most salient information;
- Be brief;
- Preserve important named entities within the conversation;
- Be written from an observer perspective;
- Be written in formal language.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
CC BY-NC-SA 4.0
## Citation Information
```
@inproceedings{chen-etal-2021-dialogsum,
title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
author = "Chen, Yulong and
Liu, Yang and
Chen, Liang and
Zhang, Yue",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.449",
doi = "10.18653/v1/2021.findings-acl.449",
    pages = "5062--5074",
}
```
## Contributions
Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset. |
HuggingFaceTB/smoltalk | HuggingFaceTB | "2024-11-26T11:02:25Z" | 11,675 | 258 | [
"language:en",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"synthetic"
] | null | "2024-11-17T15:52:41Z" | ---
language:
- en
tags:
- synthetic
pretty_name: SmolTalk
size_categories:
- 1M<n<10M
configs:
- config_name: all
data_files:
- split: train
path: data/all/train-*
- split: test
path: data/all/test-*
- config_name: smol-magpie-ultra
data_files:
- split: train
path: data/smol-magpie-ultra/train-*
- split: test
path: data/smol-magpie-ultra/test-*
- config_name: smol-constraints
data_files:
- split: train
path: data/smol-constraints/train-*
- split: test
path: data/smol-constraints/test-*
- config_name: smol-rewrite
data_files:
- split: train
path: data/smol-rewrite/train-*
- split: test
path: data/smol-rewrite/test-*
- config_name: smol-summarize
data_files:
- split: train
path: data/smol-summarize/train-*
- split: test
path: data/smol-summarize/test-*
- config_name: apigen-80k
data_files:
- split: train
path: data/apigen-80k/train-*
- split: test
path: data/apigen-80k/test-*
- config_name: everyday-conversations
data_files:
- split: train
path: data/everyday-conversations/train-*
- split: test
path: data/everyday-conversations/test-*
- config_name: explore-instruct-rewriting
data_files:
- split: train
path: data/explore-instruct-rewriting/train-*
- split: test
path: data/explore-instruct-rewriting/test-*
- config_name: longalign
data_files:
- split: train
path: data/longalign/train-*
- split: test
path: data/longalign/test-*
- config_name: metamathqa-50k
data_files:
- split: train
path: data/metamathqa-50k/train-*
- split: test
path: data/metamathqa-50k/test-*
- config_name: numina-cot-100k
data_files:
- split: train
path: data/numina-cot-100k/train-*
- split: test
path: data/numina-cot-100k/test-*
- config_name: openhermes-100k
data_files:
- split: train
path: data/openhermes-100k/train-*
- split: test
path: data/openhermes-100k/test-*
- config_name: self-oss-instruct
data_files:
- split: train
path: data/self-oss-instruct/train-*
- split: test
path: data/self-oss-instruct/test-*
- config_name: systemchats-30k
data_files:
- split: train
path: data/systemchats-30k/train-*
- split: test
path: data/systemchats-30k/test-*
---
# SmolTalk
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/PlVz8O5yJ1FGGlJeLP4n-.png)
## Dataset description
This is a synthetic dataset designed for supervised finetuning (SFT) of LLMs. It was used to build the [SmolLM2-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) family of models and contains 1M samples.
During the development of SmolLM2, we observed that models finetuned on public SFT datasets underperformed compared to other models with proprietary instruction datasets. To address this gap, we created new synthetic datasets that improve instruction following while covering diverse tasks including text editing, rewriting, summarization, and reasoning.
Through a series of data ablations at 1.7B scale, we enhanced our SFT mix by incorporating public datasets to strengthen specific capabilities such as mathematics, coding, system prompt following and long-context understanding.
All the new datasets were generated with [distilabel](https://github.com/argilla-io/distilabel); you can find the generation code at https://github.com/huggingface/smollm/tree/main/distilabel_pipelines.
You can load the dataset using:
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")
# to load the train split of a specific subset such as smol-magpie-ultra, you can do
ds = load_dataset("HuggingFaceTB/smoltalk", "smol-magpie-ultra", split="train")
```
## Dataset composition
The mix consists of:
**New datasets**
- *Smol-Magpie-Ultra*: the core component of our mix, consisting of 400K samples generated using the Magpie pipeline with Llama-3.1-405B-Instruct. We also heavily curate and filter this dataset compared to the original Magpie-Pro pipeline. SmolLM models trained on this dataset alone outperform those trained on popular public datasets like OpenHermes and Magpie Pro across key benchmarks including IFEval and MT-Bench.
- *Smol-constraints*: a 36K-sample dataset that trains models to follow specific constraints, such as generating responses with a fixed number of sentences or words, or incorporating specified words in the output. The dataset has been decontaminated against IFEval to prevent overlap.
- *Smol-rewrite*: a 50K-sample collection focused on text rewriting tasks, such as adjusting tone to be more friendly or professional. Note that Smol-Magpie-Ultra also includes some rewriting, editing, and summarization examples.
- *Smol-summarize*: a 100K-sample dataset specialized in email and news summarization.
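To illustrate the kind of instruction constraints targeted by the constraints subset, a response can be checked with a few lines of plain Python — a hypothetical checker sketched for this card, not part of the dataset's tooling:

```python
def satisfies_constraints(response, max_words=None, required_words=()):
    """Check a response against simple generation constraints:
    an optional word budget and words that must appear in the output."""
    if max_words is not None and len(response.split()) > max_words:
        return False
    lowered = response.lower()
    return all(word.lower() in lowered for word in required_words)

# A response that must mention 'datasets' and stay within 10 words.
assert satisfies_constraints("SmolTalk mixes several synthetic datasets.",
                             max_words=10, required_words=("datasets",))
# A response that blows the word budget fails the check.
assert not satisfies_constraints("word " * 20, max_words=8)
```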
**Existing public datasets**
To enhance capabilities in mathematics, coding, system prompts, and long-context understanding, we fine-tuned SmolLM2-1.7B on various public SFT datasets and included subsets of the best performing ones using tuned ratios. These include:
- OpenHermes2.5: we added 100k samples from [OpenHermes2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5), since we found that it helps preserve and boost benchmarks such as MMLU, WinoGrande, and BBH.
- MetaMathQA: we add 50k random samples from this [dataset](https://huggingface.co/datasets/meta-math/MetaMathQA) to improve the model on mathematics and reasoning.
- NuminaMath-CoT: we find that this [dataset](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) helps with mathematics, especially the hard problems found in benchmarks such as MATH.
- Self-Oss-Starcoder2-Instruct: we use this [dataset](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k) to improve coding capabilities.
- SystemChats2.0: to make the model support a variety of system prompt formats, we add 30k samples from the [SystemChat-2.0](https://huggingface.co/datasets/cognitivecomputations/SystemChat-2.0) dataset. Note that the Smol-rewrite and Smol-summarize datasets also include system prompts.
- LongAlign: we find that finetuning the model on only short samples makes it lose long-context abilities beyond 2048 tokens, so we add English samples (with fewer than 16k tokens) from the [LongAlign-10k](https://huggingface.co/datasets/THUDM/LongAlign-10k) dataset and train with an 8192-token sequence length.
- Everyday-conversations: this [dataset](https://huggingface.co/datasets/HuggingFaceTB/everyday-conversations-llama3.1-2k) includes multi-turn everyday conversations such as greetings, and was used in SmolLM v1 post-training.
- APIGen-Function-Calling: we use 80k samples from [apigen-function-calling](https://huggingface.co/datasets/argilla/apigen-function-calling) which is a mix of [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1) and [xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) datasets.
- Explore-Instruct-Rewriting: 30k samples from this rewriting [dataset](https://huggingface.co/datasets/Wanfq/Explore_Instruct_Rewriting_32k).
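The tuned ratios above boil down to drawing a fixed number of samples from each public source before concatenating them into the mix; a minimal sketch of that subsampling step (toy data and toy budgets, not the actual pipeline):

```python
import random

# Toy stand-ins for two public SFT sources, with per-source sample
# budgets (the real budgets are the counts listed above, e.g. 100k
# for OpenHermes2.5 and 50k for MetaMathQA).
sources = {
    "openhermes": [f"oh_{i}" for i in range(500)],
    "metamathqa": [f"mm_{i}" for i in range(300)],
}
budgets = {"openhermes": 100, "metamathqa": 50}

rng = random.Random(0)  # fixed seed so the subsample is reproducible
mixture = []
for name, pool in sources.items():
    mixture.extend(rng.sample(pool, budgets[name]))
rng.shuffle(mixture)  # interleave the sources

assert len(mixture) == sum(budgets.values())  # 150 examples in the mix
```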
You can find the code for generating the new datasets with [distilabel](https://github.com/argilla-io/distilabel) here: https://github.com/huggingface/smollm. The ablation details will be included in an upcoming blog post.
## License
All the new datasets (Smol-Magpie-Ultra, Smol-constraints, Smol-rewrite, Smol-summarize) are licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). For the existing public datasets, please refer to the original dataset cards listed in [Dataset composition](#dataset-composition) for their licenses.
## Evaluation
We compare SmolTalk to the recent [Orca AgentInstruct 1M](https://huggingface.co/datasets/microsoft/orca-agentinstruct-1M-v1) dataset by finetuning SmolLM2 on both datasets using the same training setup (we train for 2 epochs, using a learning rate of 3e-04, a sequence length of 8192 and a global batch size of 16).
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/W7TOuHqb5rILneQ-QkIDU.png)
We also observe significant improvements at 7B scale when fine-tuning [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.3) on SmolTalk, notably on IFEval, BBH, GSM8K and MATH.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/M5EzF6PFZij7hLI8fTxEV.png)
## Smol-SmolTalk
For SmolLM2-135M-Instruct and SmolLM2-360M-Instruct, we use a subset of the dataset that is more suitable for these smaller models. For instance, we only include samples from Smol-Magpie-Ultra with more concise conversations and exclude advanced math datasets. You can find the dataset here: https://huggingface.co/datasets/HuggingFaceTB/smol-smoltalk
The training code is available here https://github.com/huggingface/alignment-handbook/tree/main/recipes/smollm2
## Citation
```bibtex
@misc{allal2024SmolLM2,
title={SmolLM2 - with great data, comes great performance},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
year={2024},
}
``` |
japanese-asr/whisper_transcriptions.reazon_speech_all.wer_10.0.vectorized | japanese-asr | "2024-09-17T13:53:02Z" | 11,515 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-12T10:10:35Z" | ---
dataset_info:
- config_name: subset_0
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44407083236
num_examples: 28889
download_size: 6430216790
dataset_size: 44407083236
- config_name: subset_1
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44089216600
num_examples: 28682
download_size: 6385763048
dataset_size: 44089216600
- config_name: subset_10
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43927652252
num_examples: 28577
download_size: 6336100250
dataset_size: 43927652252
- config_name: subset_100
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44365586824
num_examples: 28862
download_size: 6435201244
dataset_size: 44365586824
- config_name: subset_101
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44324247868
num_examples: 28835
download_size: 6431762006
dataset_size: 44324247868
- config_name: subset_102
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43821526656
num_examples: 28508
download_size: 6367882564
dataset_size: 43821526656
- config_name: subset_103
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44084293668
num_examples: 28679
download_size: 6363475471
dataset_size: 44084293668
- config_name: subset_104
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44042930672
num_examples: 28652
download_size: 6381242681
dataset_size: 44042930672
- config_name: subset_106
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43912140892
num_examples: 28567
download_size: 6343450605
dataset_size: 43912140892
- config_name: subset_107
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43931998624
num_examples: 28580
download_size: 6358400755
dataset_size: 43931998624
- config_name: subset_108
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44042913000
num_examples: 28652
download_size: 6405970862
dataset_size: 44042913000
- config_name: subset_109
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44642253680
num_examples: 29042
download_size: 6437990632
dataset_size: 44642253680
- config_name: subset_11
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44256762756
num_examples: 28791
download_size: 6393712860
dataset_size: 44256762756
- config_name: subset_110
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43889022688
num_examples: 28552
download_size: 6360561092
dataset_size: 43889022688
- config_name: subset_111
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44129144280
num_examples: 28708
download_size: 6408022759
dataset_size: 44129144280
- config_name: subset_112
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44041454396
num_examples: 28651
download_size: 6391629995
dataset_size: 44041454396
- config_name: subset_113
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44218161920
num_examples: 28766
download_size: 6397865173
dataset_size: 44218161920
- config_name: subset_114
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44311827300
num_examples: 28827
download_size: 6392228352
dataset_size: 44311827300
- config_name: subset_115
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43438751460
num_examples: 28259
download_size: 6261293593
dataset_size: 43438751460
- config_name: subset_116
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43895154544
num_examples: 28556
download_size: 6347517025
dataset_size: 43895154544
- config_name: subset_117
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43969041880
num_examples: 28604
download_size: 6375498562
dataset_size: 43969041880
- config_name: subset_118
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44027316104
num_examples: 28642
download_size: 6354466340
dataset_size: 44027316104
- config_name: subset_119
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44289059560
num_examples: 28812
download_size: 6416432647
dataset_size: 44289059560
- config_name: subset_12
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44293612564
num_examples: 28815
download_size: 6433586401
dataset_size: 44293612564
- config_name: subset_120
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44530056588
num_examples: 28969
download_size: 6437978882
dataset_size: 44530056588
- config_name: subset_121
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 3074160
num_examples: 2
download_size: 556271
dataset_size: 3074160
- config_name: subset_122
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44136739628
num_examples: 28713
download_size: 6404302139
dataset_size: 44136739628
- config_name: subset_123
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44096634284
num_examples: 28687
download_size: 6389251368
dataset_size: 44096634284
- config_name: subset_124
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44001467124
num_examples: 28625
download_size: 6385493649
dataset_size: 44001467124
- config_name: subset_125
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44128863696
num_examples: 28708
download_size: 6364505444
dataset_size: 44128863696
- config_name: subset_126
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44382486420
num_examples: 28873
download_size: 6441197752
dataset_size: 44382486420
- config_name: subset_127
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44006092176
num_examples: 28628
download_size: 6361537304
dataset_size: 44006092176
- config_name: subset_128
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43759809728
num_examples: 28468
download_size: 6336544958
dataset_size: 43759809728
- config_name: subset_129
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44024331328
num_examples: 28640
download_size: 6359644430
dataset_size: 44024331328
- config_name: subset_13
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44357930276
num_examples: 28857
download_size: 6420201483
dataset_size: 44357930276
- config_name: subset_130
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44192010836
num_examples: 28749
download_size: 6422867143
dataset_size: 44192010836
- config_name: subset_131
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44099663532
num_examples: 28689
download_size: 6371664563
dataset_size: 44099663532
- config_name: subset_132
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44095360096
num_examples: 28686
download_size: 6383911332
dataset_size: 44095360096
- config_name: subset_133
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43730827940
num_examples: 28449
download_size: 6313519416
dataset_size: 43730827940
- config_name: subset_134
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44007518388
num_examples: 28629
download_size: 6389179458
dataset_size: 44007518388
- config_name: subset_135
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43707840616
num_examples: 28434
download_size: 6317643688
dataset_size: 43707840616
- config_name: subset_136
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44033774672
num_examples: 28646
download_size: 6373240832
dataset_size: 44033774672
- config_name: subset_137
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 3074344
num_examples: 2
download_size: 557594
dataset_size: 3074344
- config_name: subset_138
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43841655788
num_examples: 28521
download_size: 6370669259
dataset_size: 43841655788
- config_name: subset_139
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43793963000
num_examples: 28490
download_size: 6351019624
dataset_size: 43793963000
- config_name: subset_14
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44413389620
num_examples: 28893
download_size: 6406524573
dataset_size: 44413389620
- config_name: subset_140
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43978329680
num_examples: 28610
download_size: 6341082690
dataset_size: 43978329680
- config_name: subset_141
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44030464856
num_examples: 28644
download_size: 6383471765
dataset_size: 44030464856
- config_name: subset_142
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43793910464
num_examples: 28490
download_size: 6348275681
dataset_size: 43793910464
- config_name: subset_143
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44448732656
num_examples: 28916
download_size: 6450504968
dataset_size: 44448732656
- config_name: subset_144
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43686238792
num_examples: 28420
download_size: 6334779676
dataset_size: 43686238792
- config_name: subset_145
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44141228568
num_examples: 28716
download_size: 6363170999
dataset_size: 44141228568
- config_name: subset_146
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43800179728
num_examples: 28494
download_size: 6358878988
dataset_size: 43800179728
- config_name: subset_147
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44294909712
num_examples: 28816
download_size: 6412779644
dataset_size: 44294909712
- config_name: subset_148
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43886264664
num_examples: 28550
download_size: 6377384251
dataset_size: 43886264664
- config_name: subset_149
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44030547976
num_examples: 28644
download_size: 6383895865
dataset_size: 44030547976
- config_name: subset_15
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44014998072
num_examples: 28634
download_size: 6373512015
dataset_size: 44014998072
- config_name: subset_150
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43899790252
num_examples: 28559
download_size: 6346605145
dataset_size: 43899790252
- config_name: subset_151
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43913769264
num_examples: 28568
download_size: 6389364151
dataset_size: 43913769264
- config_name: subset_152
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44493036076
num_examples: 28945
download_size: 6441659355
dataset_size: 44493036076
- config_name: subset_153
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4611236
num_examples: 3
download_size: 671590
dataset_size: 4611236
- config_name: subset_154
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43904573672
num_examples: 28562
download_size: 6353845259
dataset_size: 43904573672
- config_name: subset_155
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44414946296
num_examples: 28894
download_size: 6399004665
dataset_size: 44414946296
- config_name: subset_156
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43787907000
num_examples: 28486
download_size: 6361131234
dataset_size: 43787907000
- config_name: subset_157
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43838676140
num_examples: 28519
download_size: 6377464479
dataset_size: 43838676140
- config_name: subset_158
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43920002016
num_examples: 28572
download_size: 6365562506
dataset_size: 43920002016
- config_name: subset_159
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44079873808
num_examples: 28676
download_size: 6385289404
dataset_size: 44079873808
- config_name: subset_16
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44135044504
num_examples: 28712
download_size: 6367990267
dataset_size: 44135044504
- config_name: subset_160
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44261370184
num_examples: 28794
download_size: 6435970157
dataset_size: 44261370184
- config_name: subset_161
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44096758836
num_examples: 28687
download_size: 6411447660
dataset_size: 44096758836
- config_name: subset_162
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43901416400
num_examples: 28560
download_size: 6394315107
dataset_size: 43901416400
- config_name: subset_163
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44322671320
num_examples: 28834
download_size: 6421064852
dataset_size: 44322671320
- config_name: subset_164
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43978582144
num_examples: 28610
download_size: 6362813793
dataset_size: 43978582144
- config_name: subset_165
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44018298496
num_examples: 28636
download_size: 6376999923
dataset_size: 44018298496
- config_name: subset_166
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44251922632
num_examples: 28788
download_size: 6419837278
dataset_size: 44251922632
- config_name: subset_167
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44104251680
num_examples: 28692
download_size: 6408687778
dataset_size: 44104251680
- config_name: subset_168
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43790884880
num_examples: 28488
download_size: 6371985468
dataset_size: 43790884880
- config_name: subset_169
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6147752
num_examples: 4
download_size: 527132
dataset_size: 6147752
- config_name: subset_17
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44179626060
num_examples: 28741
download_size: 6410813569
dataset_size: 44179626060
- config_name: subset_170
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44238190244
num_examples: 28779
download_size: 6425085842
dataset_size: 44238190244
- config_name: subset_171
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43855344672
num_examples: 28530
download_size: 6351374612
dataset_size: 43855344672
- config_name: subset_172
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43744717352
num_examples: 28458
download_size: 6322671761
dataset_size: 43744717352
- config_name: subset_173
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43993634824
num_examples: 28620
download_size: 6324282823
dataset_size: 43993634824
- config_name: subset_174
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44381122280
num_examples: 28872
download_size: 6448679863
dataset_size: 44381122280
- config_name: subset_175
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44391843308
num_examples: 28879
download_size: 6448621992
dataset_size: 44391843308
- config_name: subset_176
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44158323572
num_examples: 28727
download_size: 6408233260
dataset_size: 44158323572
- config_name: subset_177
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44033693424
num_examples: 28646
download_size: 6415876282
dataset_size: 44033693424
- config_name: subset_178
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42685714068
num_examples: 27769
download_size: 6200737024
dataset_size: 42685714068
- config_name: subset_179
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42648659092
num_examples: 27745
download_size: 6171525632
dataset_size: 42648659092
- config_name: subset_18
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43836770424
num_examples: 28518
download_size: 6326151956
dataset_size: 43836770424
- config_name: subset_180
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42551809752
num_examples: 27682
download_size: 6168382243
dataset_size: 42551809752
- config_name: subset_181
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42434879408
num_examples: 27606
download_size: 6123055947
dataset_size: 42434879408
- config_name: subset_182
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42408752772
num_examples: 27589
download_size: 6152174336
dataset_size: 42408752772
- config_name: subset_183
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42966849416
num_examples: 27952
download_size: 6194170724
dataset_size: 42966849416
- config_name: subset_184
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42540803548
num_examples: 27675
download_size: 6179994976
dataset_size: 42540803548
- config_name: subset_185
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4610940
num_examples: 3
download_size: 510678
dataset_size: 4610940
- config_name: subset_186
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42273847412
num_examples: 27501
download_size: 6135274899
dataset_size: 42273847412
- config_name: subset_187
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42547162108
num_examples: 27679
download_size: 6140828239
dataset_size: 42547162108
- config_name: subset_188
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42513408276
num_examples: 27657
download_size: 6141115163
dataset_size: 42513408276
- config_name: subset_189
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42247299832
num_examples: 27484
download_size: 6114021654
dataset_size: 42247299832
- config_name: subset_19
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43870784704
num_examples: 28540
download_size: 6361457035
dataset_size: 43870784704
- config_name: subset_190
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42559396388
num_examples: 27687
download_size: 6144933007
dataset_size: 42559396388
- config_name: subset_191
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42727058440
num_examples: 27796
download_size: 6159613829
dataset_size: 42727058440
- config_name: subset_192
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42465891192
num_examples: 27626
download_size: 6137572406
dataset_size: 42465891192
- config_name: subset_193
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42687083448
num_examples: 27770
download_size: 6156875941
dataset_size: 42687083448
- config_name: subset_194
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43269701988
num_examples: 28149
download_size: 6279255539
dataset_size: 43269701988
- config_name: subset_195
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43100379428
num_examples: 28039
download_size: 6244533477
dataset_size: 43100379428
- config_name: subset_196
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43182000120
num_examples: 28092
download_size: 6246268592
dataset_size: 43182000120
- config_name: subset_197
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42488819788
num_examples: 27641
download_size: 6178356059
dataset_size: 42488819788
- config_name: subset_198
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43151315408
num_examples: 28072
download_size: 6236447434
dataset_size: 43151315408
- config_name: subset_199
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43020760060
num_examples: 27987
download_size: 6246173797
dataset_size: 43020760060
- config_name: subset_2
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43987672944
num_examples: 28616
download_size: 6372442472
dataset_size: 43987672944
- config_name: subset_20
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44078517716
num_examples: 28675
download_size: 6385824155
dataset_size: 44078517716
- config_name: subset_200
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43286671192
num_examples: 28160
download_size: 6280144588
dataset_size: 43286671192
- config_name: subset_201
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 3073928
num_examples: 2
download_size: 379680
dataset_size: 3073928
- config_name: subset_202
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42862469264
num_examples: 27884
download_size: 6203880452
dataset_size: 42862469264
- config_name: subset_203
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42893042416
num_examples: 27904
download_size: 6220561824
dataset_size: 42893042416
- config_name: subset_204
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43033034108
num_examples: 27995
download_size: 6252547275
dataset_size: 43033034108
- config_name: subset_205
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43129968864
num_examples: 28058
download_size: 6242739407
dataset_size: 43129968864
- config_name: subset_206
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43139090800
num_examples: 28064
download_size: 6235515866
dataset_size: 43139090800
- config_name: subset_207
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43152809356
num_examples: 28073
download_size: 6283290397
dataset_size: 43152809356
- config_name: subset_208
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42942228856
num_examples: 27936
download_size: 6201443185
dataset_size: 42942228856
- config_name: subset_209
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42900706308
num_examples: 27909
download_size: 6209468923
dataset_size: 42900706308
- config_name: subset_21
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 3073968
num_examples: 2
download_size: 340735
dataset_size: 3073968
- config_name: subset_210
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43097615852
num_examples: 28037
download_size: 6250699366
dataset_size: 43097615852
- config_name: subset_211
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43345131936
num_examples: 28198
download_size: 6290127680
dataset_size: 43345131936
- config_name: subset_212
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43369720992
num_examples: 28214
download_size: 6322218871
dataset_size: 43369720992
- config_name: subset_213
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43273017772
num_examples: 28151
download_size: 6290984482
dataset_size: 43273017772
- config_name: subset_214
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43339017792
num_examples: 28194
download_size: 6291790140
dataset_size: 43339017792
- config_name: subset_215
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43148309288
num_examples: 28070
download_size: 6274426221
dataset_size: 43148309288
- config_name: subset_216
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43554083872
num_examples: 28334
download_size: 6316086000
dataset_size: 43554083872
- config_name: subset_217
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6148384
num_examples: 4
download_size: 787021
dataset_size: 6148384
- config_name: subset_218
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43388064416
num_examples: 28226
download_size: 6284993121
dataset_size: 43388064416
- config_name: subset_219
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43400316424
num_examples: 28234
download_size: 6293046087
dataset_size: 43400316424
- config_name: subset_22
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44242802888
num_examples: 28782
download_size: 6406171080
dataset_size: 44242802888
- config_name: subset_220
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43246544032
num_examples: 28134
download_size: 6276081988
dataset_size: 43246544032
- config_name: subset_221
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43091341748
num_examples: 28033
download_size: 6246844874
dataset_size: 43091341748
- config_name: subset_222
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43282260444
num_examples: 28157
download_size: 6273569814
dataset_size: 43282260444
- config_name: subset_223
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43218862392
num_examples: 28116
download_size: 6267480974
dataset_size: 43218862392
- config_name: subset_53
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43020592356
num_examples: 27987
download_size: 6237193214
dataset_size: 43020592356
- config_name: subset_105
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43217581324
num_examples: 28115
download_size: 6241162732
dataset_size: 43217581324
- config_name: subset_23
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44087547940
num_examples: 28681
download_size: 6378825677
dataset_size: 44087547940
- config_name: subset_24
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44250388180
num_examples: 28787
download_size: 6399288392
dataset_size: 44250388180
- config_name: subset_25
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44682379040
num_examples: 29068
download_size: 6472664846
dataset_size: 44682379040
- config_name: subset_26
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43988774372
num_examples: 28617
download_size: 6351536356
dataset_size: 43988774372
- config_name: subset_27
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44124322548
num_examples: 28705
download_size: 6384396942
dataset_size: 44124322548
- config_name: subset_28
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44274970012
num_examples: 28803
download_size: 6405118297
dataset_size: 44274970012
- config_name: subset_29
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44127365308
num_examples: 28707
download_size: 6394981446
dataset_size: 44127365308
- config_name: subset_3
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44010774700
num_examples: 28631
download_size: 6385129614
dataset_size: 44010774700
- config_name: subset_30
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43959947880
num_examples: 28598
download_size: 6351099073
dataset_size: 43959947880
- config_name: subset_31
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43939721468
num_examples: 28585
download_size: 6349698481
dataset_size: 43939721468
- config_name: subset_32
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43701336432
num_examples: 28430
download_size: 6317498365
dataset_size: 43701336432
- config_name: subset_33
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43912133780
num_examples: 28567
download_size: 6347741424
dataset_size: 43912133780
- config_name: subset_34
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43924879268
num_examples: 28575
download_size: 6385061613
dataset_size: 43924879268
- config_name: subset_35
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44198269620
num_examples: 28753
download_size: 6417152268
dataset_size: 44198269620
- config_name: subset_36
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43958143980
num_examples: 28597
download_size: 6371530333
dataset_size: 43958143980
- config_name: subset_37
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1536892
num_examples: 1
download_size: 145043
dataset_size: 1536892
- config_name: subset_38
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43907738296
num_examples: 28564
download_size: 6370745101
dataset_size: 43907738296
- config_name: subset_39
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43783169540
num_examples: 28483
download_size: 6360636678
dataset_size: 43783169540
- config_name: subset_4
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44035016260
num_examples: 28647
download_size: 6356360790
dataset_size: 44035016260
- config_name: subset_40
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43876677072
num_examples: 28544
download_size: 6363545223
dataset_size: 43876677072
- config_name: subset_41
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44039928304
num_examples: 28650
download_size: 6400395515
dataset_size: 44039928304
- config_name: subset_42
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43550868688
num_examples: 28332
download_size: 6288205442
dataset_size: 43550868688
- config_name: subset_43
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43772245200
num_examples: 28476
download_size: 6312411517
dataset_size: 43772245200
- config_name: subset_44
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44043101784
num_examples: 28652
download_size: 6367757278
dataset_size: 44043101784
- config_name: subset_45
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43839830568
num_examples: 28520
download_size: 6302918743
dataset_size: 43839830568
- config_name: subset_46
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44418011720
num_examples: 28896
download_size: 6420581627
dataset_size: 44418011720
- config_name: subset_47
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44239609176
num_examples: 28780
download_size: 6409168799
dataset_size: 44239609176
- config_name: subset_48
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43441872132
num_examples: 28261
download_size: 6279351848
dataset_size: 43441872132
- config_name: subset_49
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43803148032
num_examples: 28496
download_size: 6348966745
dataset_size: 43803148032
- config_name: subset_5
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 3073752
num_examples: 2
download_size: 269532
dataset_size: 3073752
- config_name: subset_50
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43892315672
num_examples: 28554
download_size: 6352365538
dataset_size: 43892315672
- config_name: subset_51
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44030510104
num_examples: 28644
download_size: 6357746911
dataset_size: 44030510104
- config_name: subset_52
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44004611300
num_examples: 28627
download_size: 6395577673
dataset_size: 44004611300
- config_name: subset_54
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43924607164
num_examples: 28575
download_size: 6394467746
dataset_size: 43924607164
- config_name: subset_55
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43766336872
num_examples: 28472
download_size: 6382887005
dataset_size: 43766336872
- config_name: subset_56
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43924612260
num_examples: 28575
download_size: 6358387007
dataset_size: 43924612260
- config_name: subset_57
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44125903328
num_examples: 28706
download_size: 6429743630
dataset_size: 44125903328
- config_name: subset_58
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44061228392
num_examples: 28664
download_size: 6403276947
dataset_size: 44061228392
- config_name: subset_59
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44005810400
num_examples: 28628
download_size: 6399433408
dataset_size: 44005810400
- config_name: subset_6
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44378012200
num_examples: 28870
download_size: 6424397700
dataset_size: 44378012200
- config_name: subset_60
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44225890868
num_examples: 28771
download_size: 6419332378
dataset_size: 44225890868
- config_name: subset_61
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43650843212
num_examples: 28397
download_size: 6326376655
dataset_size: 43650843212
- config_name: subset_62
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43827520656
num_examples: 28512
download_size: 6330616794
dataset_size: 43827520656
- config_name: subset_63
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44172218520
num_examples: 28736
download_size: 6409944210
dataset_size: 44172218520
- config_name: subset_64
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43918314476
num_examples: 28571
download_size: 6359242235
dataset_size: 43918314476
- config_name: subset_65
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43906125500
num_examples: 28563
download_size: 6375398199
dataset_size: 43906125500
- config_name: subset_66
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44075027964
num_examples: 28673
download_size: 6398349127
dataset_size: 44075027964
- config_name: subset_67
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43609456344
num_examples: 28370
download_size: 6307862180
dataset_size: 43609456344
- config_name: subset_68
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43666361020
num_examples: 28407
download_size: 6328770887
dataset_size: 43666361020
- config_name: subset_69
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44025932180
num_examples: 28641
download_size: 6372276607
dataset_size: 44025932180
- config_name: subset_7
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44059710956
num_examples: 28663
download_size: 6383885034
dataset_size: 44059710956
- config_name: subset_70
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43781700552
num_examples: 28482
download_size: 6318262101
dataset_size: 43781700552
- config_name: subset_71
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44175190528
num_examples: 28738
download_size: 6420404767
dataset_size: 44175190528
- config_name: subset_72
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44059988804
num_examples: 28663
download_size: 6403791239
dataset_size: 44059988804
- config_name: subset_73
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44242682800
num_examples: 28782
download_size: 6393278746
dataset_size: 44242682800
- config_name: subset_74
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43664734768
num_examples: 28406
download_size: 6293869164
dataset_size: 43664734768
- config_name: subset_75
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43830625696
num_examples: 28514
download_size: 6347303356
dataset_size: 43830625696
- config_name: subset_76
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43924502708
num_examples: 28575
download_size: 6368149688
dataset_size: 43924502708
- config_name: subset_77
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43754158544
num_examples: 28464
download_size: 6347205297
dataset_size: 43754158544
- config_name: subset_78
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43781508304
num_examples: 28482
download_size: 6362656422
dataset_size: 43781508304
- config_name: subset_79
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43978478208
num_examples: 28610
download_size: 6398609121
dataset_size: 43978478208
- config_name: subset_8
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44007563004
num_examples: 28629
download_size: 6358760125
dataset_size: 44007563004
- config_name: subset_80
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43807663524
num_examples: 28499
download_size: 6383713010
dataset_size: 43807663524
- config_name: subset_81
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43958216180
num_examples: 28597
download_size: 6360362244
dataset_size: 43958216180
- config_name: subset_82
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44018307032
num_examples: 28636
download_size: 6388770182
dataset_size: 44018307032
- config_name: subset_83
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43431184792
num_examples: 28254
download_size: 6273446746
dataset_size: 43431184792
- config_name: subset_84
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4611316
num_examples: 3
download_size: 813473
dataset_size: 4611316
- config_name: subset_85
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43873788512
num_examples: 28542
download_size: 6358732185
dataset_size: 43873788512
- config_name: subset_86
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43505081840
num_examples: 28302
download_size: 6336792534
dataset_size: 43505081840
- config_name: subset_87
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44099477124
num_examples: 28689
download_size: 6376905811
dataset_size: 44099477124
- config_name: subset_88
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43800091792
num_examples: 28494
download_size: 6331140342
dataset_size: 43800091792
- config_name: subset_89
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44185886628
num_examples: 28745
download_size: 6399823294
dataset_size: 44185886628
- config_name: subset_9
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43959761872
num_examples: 28598
download_size: 6369092508
dataset_size: 43959761872
- config_name: subset_90
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43943002092
num_examples: 28587
download_size: 6384008687
dataset_size: 43943002092
- config_name: subset_91
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43709159980
num_examples: 28435
download_size: 6348468066
dataset_size: 43709159980
- config_name: subset_92
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43803194856
num_examples: 28496
download_size: 6384519799
dataset_size: 43803194856
- config_name: subset_93
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43806228672
num_examples: 28498
download_size: 6353242379
dataset_size: 43806228672
- config_name: subset_94
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43918235972
num_examples: 28571
download_size: 6359165774
dataset_size: 43918235972
- config_name: subset_95
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44013722788
num_examples: 28633
download_size: 6372836215
dataset_size: 44013722788
- config_name: subset_96
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43912328076
num_examples: 28567
download_size: 6360540190
dataset_size: 43912328076
- config_name: subset_97
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43784551296
num_examples: 28484
download_size: 6341270112
dataset_size: 43784551296
- config_name: subset_98
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44568669984
num_examples: 28994
download_size: 6461359260
dataset_size: 44568669984
- config_name: subset_99
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43989120876
num_examples: 28617
download_size: 6385093647
dataset_size: 43989120876
configs:
- config_name: subset_0
data_files:
- split: train
path: subset_0/train-*
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_145
data_files:
- split: train
path: subset_145/train-*
- config_name: subset_146
data_files:
- split: train
path: subset_146/train-*
- config_name: subset_147
data_files:
- split: train
path: subset_147/train-*
- config_name: subset_148
data_files:
- split: train
path: subset_148/train-*
- config_name: subset_149
data_files:
- split: train
path: subset_149/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_150
data_files:
- split: train
path: subset_150/train-*
- config_name: subset_151
data_files:
- split: train
path: subset_151/train-*
- config_name: subset_152
data_files:
- split: train
path: subset_152/train-*
- config_name: subset_153
data_files:
- split: train
path: subset_153/train-*
- config_name: subset_154
data_files:
- split: train
path: subset_154/train-*
- config_name: subset_155
data_files:
- split: train
path: subset_155/train-*
- config_name: subset_156
data_files:
- split: train
path: subset_156/train-*
- config_name: subset_157
data_files:
- split: train
path: subset_157/train-*
- config_name: subset_158
data_files:
- split: train
path: subset_158/train-*
- config_name: subset_159
data_files:
- split: train
path: subset_159/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_160
data_files:
- split: train
path: subset_160/train-*
- config_name: subset_161
data_files:
- split: train
path: subset_161/train-*
- config_name: subset_162
data_files:
- split: train
path: subset_162/train-*
- config_name: subset_163
data_files:
- split: train
path: subset_163/train-*
- config_name: subset_164
data_files:
- split: train
path: subset_164/train-*
- config_name: subset_165
data_files:
- split: train
path: subset_165/train-*
- config_name: subset_166
data_files:
- split: train
path: subset_166/train-*
- config_name: subset_167
data_files:
- split: train
path: subset_167/train-*
- config_name: subset_168
data_files:
- split: train
path: subset_168/train-*
- config_name: subset_169
data_files:
- split: train
path: subset_169/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_170
data_files:
- split: train
path: subset_170/train-*
- config_name: subset_171
data_files:
- split: train
path: subset_171/train-*
- config_name: subset_172
data_files:
- split: train
path: subset_172/train-*
- config_name: subset_173
data_files:
- split: train
path: subset_173/train-*
- config_name: subset_174
data_files:
- split: train
path: subset_174/train-*
- config_name: subset_175
data_files:
- split: train
path: subset_175/train-*
- config_name: subset_176
data_files:
- split: train
path: subset_176/train-*
- config_name: subset_177
data_files:
- split: train
path: subset_177/train-*
- config_name: subset_178
data_files:
- split: train
path: subset_178/train-*
- config_name: subset_179
data_files:
- split: train
path: subset_179/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_180
data_files:
- split: train
path: subset_180/train-*
- config_name: subset_181
data_files:
- split: train
path: subset_181/train-*
- config_name: subset_182
data_files:
- split: train
path: subset_182/train-*
- config_name: subset_183
data_files:
- split: train
path: subset_183/train-*
- config_name: subset_184
data_files:
- split: train
path: subset_184/train-*
- config_name: subset_185
data_files:
- split: train
path: subset_185/train-*
- config_name: subset_186
data_files:
- split: train
path: subset_186/train-*
- config_name: subset_187
data_files:
- split: train
path: subset_187/train-*
- config_name: subset_188
data_files:
- split: train
path: subset_188/train-*
- config_name: subset_189
data_files:
- split: train
path: subset_189/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_190
data_files:
- split: train
path: subset_190/train-*
- config_name: subset_191
data_files:
- split: train
path: subset_191/train-*
- config_name: subset_192
data_files:
- split: train
path: subset_192/train-*
- config_name: subset_193
data_files:
- split: train
path: subset_193/train-*
- config_name: subset_194
data_files:
- split: train
path: subset_194/train-*
- config_name: subset_195
data_files:
- split: train
path: subset_195/train-*
- config_name: subset_196
data_files:
- split: train
path: subset_196/train-*
- config_name: subset_197
data_files:
- split: train
path: subset_197/train-*
- config_name: subset_198
data_files:
- split: train
path: subset_198/train-*
- config_name: subset_199
data_files:
- split: train
path: subset_199/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_200
data_files:
- split: train
path: subset_200/train-*
- config_name: subset_201
data_files:
- split: train
path: subset_201/train-*
- config_name: subset_202
data_files:
- split: train
path: subset_202/train-*
- config_name: subset_203
data_files:
- split: train
path: subset_203/train-*
- config_name: subset_204
data_files:
- split: train
path: subset_204/train-*
- config_name: subset_205
data_files:
- split: train
path: subset_205/train-*
- config_name: subset_206
data_files:
- split: train
path: subset_206/train-*
- config_name: subset_207
data_files:
- split: train
path: subset_207/train-*
- config_name: subset_208
data_files:
- split: train
path: subset_208/train-*
- config_name: subset_209
data_files:
- split: train
path: subset_209/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_210
data_files:
- split: train
path: subset_210/train-*
- config_name: subset_211
data_files:
- split: train
path: subset_211/train-*
- config_name: subset_212
data_files:
- split: train
path: subset_212/train-*
- config_name: subset_213
data_files:
- split: train
path: subset_213/train-*
- config_name: subset_214
data_files:
- split: train
path: subset_214/train-*
- config_name: subset_215
data_files:
- split: train
path: subset_215/train-*
- config_name: subset_216
data_files:
- split: train
path: subset_216/train-*
- config_name: subset_217
data_files:
- split: train
path: subset_217/train-*
- config_name: subset_218
data_files:
- split: train
path: subset_218/train-*
- config_name: subset_219
data_files:
- split: train
path: subset_219/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_220
data_files:
- split: train
path: subset_220/train-*
- config_name: subset_221
data_files:
- split: train
path: subset_221/train-*
- config_name: subset_222
data_files:
- split: train
path: subset_222/train-*
- config_name: subset_223
data_files:
- split: train
path: subset_223/train-*
- config_name: subset_224
data_files:
- split: train
path: subset_224/train-*
- config_name: subset_225
data_files:
- split: train
path: subset_225/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
HuggingFaceM4/OBELICS | HuggingFaceM4 | "2023-08-22T20:50:09Z" | 11,493 | 143 | [
"language:en",
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.16527",
"region:us"
] | null | "2023-05-30T23:06:14Z" | ---
language:
- en
license: cc-by-4.0
size_categories:
- 100M<n<1B
pretty_name: OBELICS
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: opt_out_docs_removed_2023_07_12
data_files:
- split: train
path: opt_out_docs_removed_2023_07_12/train-*
dataset_info:
- config_name: default
features:
- name: images
sequence: string
- name: metadata
dtype: string
- name: general_metadata
dtype: string
- name: texts
sequence: string
splits:
- name: train
num_bytes: 715724717192
num_examples: 141047697
download_size: 71520629655
dataset_size: 715724717192
- config_name: opt_out_docs_removed_2023_07_12
features:
- name: images
sequence: string
- name: metadata
dtype: string
- name: general_metadata
dtype: string
- name: texts
sequence: string
splits:
- name: train
num_bytes: 684638314215
num_examples: 134648855
download_size: 266501092920
dataset_size: 684638314215
---
# Dataset Card for OBELICS
## Dataset Description
- **Visualization of OBELICS web documents:** https://huggingface.co/spaces/HuggingFaceM4/obelics_visualization
- **Paper:** [OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents](https://arxiv.org/abs/2306.16527)
- **Repository:** https://github.com/huggingface/OBELICS
- **Point of Contact:** [email protected]
`OBELICS` is an open, massive, and curated collection of interleaved image-text web documents, containing 141M English documents, 115B text tokens, and 353M images, extracted from Common Crawl dumps between February 2020 and February 2023. The collection and filtering steps are described in our [paper](https://huggingface.co/papers/2306.16527).
Interleaved image-text web documents are a succession of text paragraphs interleaved by images, such as web pages that contain images. Models trained on these web documents outperform vision and language models trained solely on image-text pairs on various benchmarks. They can also generate long and coherent text about a set of multiple images. As an example, we trained [IDEFICS](https://huggingface.co/HuggingFaceM4/idefics-80b), a visual language model that accepts arbitrary sequences of image and text inputs and produces text outputs.
We provide an [interactive visualization](https://atlas.nomic.ai/map/f2fba2aa-3647-4f49-a0f3-9347daeee499/ee4a84bd-f125-4bcc-a683-1b4e231cb10f) of OBELICS that allows exploring the content of OBELICS. The map shows a subset of 11M of the 141M documents.
[![OBELICS Nomic map](assets/nomic_map.png)](https://atlas.nomic.ai/map/f2fba2aa-3647-4f49-a0f3-9347daeee499/ee4a84bd-f125-4bcc-a683-1b4e231cb10f)
## Data Fields
An example of a sample looks as follows:
```
# The example has been cropped
{
'images': [
'https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg',
None
],
'metadata': '[{"document_url": "https://lamborghinichat.com/forum/news/vw-group-allegedly-receives-offer-to-sell-lamborghini-for-9-2-billion.728/", "unformatted_src": "https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg", "src": "https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg", "formatted_filename": "lamborghini urus original carbon fiber accessories", "alt_text": "VW Group Allegedly Receives Offer To Sell Lamborghini For $9.2 Billion", "original_width": 1920, "original_height": 1080, "format": "jpeg"}, null]',
'general_metadata': '{"url": "https://lamborghinichat.com/forum/news/vw-group-allegedly-receives-offer-to-sell-lamborghini-for-9-2-billion.728/", "warc_filename": "crawl-data/CC-MAIN-2021-25/segments/1623488528979.69/warc/CC-MAIN-20210623011557-20210623041557-00312.warc.gz", "warc_record_offset": 322560850, "warc_record_length": 17143}',
'texts': [
None,
'The buyer would get everything, including Lambo\'s headquarters.\n\nThe investment groupQuantum Group AG has submitted a€7.5 billion ($9.2 billion at current exchange rates) offer to purchase Lamborghini from Volkswagen Group, Autocar reports. There\'s no info yet about whether VW intends to accept the offer or further negotiate the deal.\n\nQuantum ... Group Chief Executive Herbert Diess said at the time.'
]
}
```
Each sample is composed of the same 4 fields: `images`, `texts`, `metadata`, and `general_metadata`. `images` and `texts` are two lists of the same size, where, at each index, exactly one of the two elements is not `None`. For example, for the interleaved web document `<image_1>text<image_2>`, we would find `[image_1, None, image_2]` in `images` and `[None, text, None]` in `texts`.
The images are replaced by their URLs, and users need to download the images themselves, for instance with the library [img2dataset](https://github.com/rom1504/img2dataset).
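As a sketch of how the parallel `images`/`texts` lists can be consumed, the pairing logic below reassembles a document in reading order (the `sample` dict is a hypothetical stand-in mirroring the structure shown above, not real OBELICS content):

```python
def interleave(sample):
    """Yield ("image", url) or ("text", paragraph) in document order."""
    for image, text in zip(sample["images"], sample["texts"]):
        # At each index, exactly one of the two entries is non-None.
        if image is not None:
            yield ("image", image)
        else:
            yield ("text", text)

# Hypothetical stand-in for one OBELICS document.
sample = {
    "images": ["https://example.com/img_1.jpg", None],
    "texts": [None, "A paragraph following the image."],
}

print(list(interleave(sample)))
# → [('image', 'https://example.com/img_1.jpg'), ('text', 'A paragraph following the image.')]
```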
`metadata` is the string representation of a list containing information about each of the images. It has the same length as `texts` and `images` and logs for each image relevant information such as original source document, unformatted source, alternative text if present, etc.
`general_metadata` is the string representation of a dictionary containing the URL of the document, and information regarding the extraction from Common Crawl snapshots.
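Because `metadata` and `general_metadata` are stored as JSON strings rather than nested objects, a consumer typically decodes them first. A minimal sketch (the strings below are truncated, hypothetical examples):

```python
import json

# `metadata` decodes to a list aligned with `images`/`texts`
# (a dict per image, None at text positions); `general_metadata`
# decodes to a single dict describing the source document.
metadata = '[{"src": "https://example.com/img_1.jpg", "alt_text": "a car"}, null]'
general_metadata = '{"url": "https://example.com/doc", "warc_record_offset": 123}'

per_image = json.loads(metadata)
doc_info = json.loads(general_metadata)

print(per_image[0]["alt_text"], doc_info["url"])
```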
## Size and Data Splits
There is only one split, `train`, that contains 141,047,697 documents.
`OBELICS` with images replaced by their URLs weighs 666.6 GB (😈) in arrow format and 377 GB in the uploaded `parquet` format.
## Considerations for Using the Data
### Discussion of Biases
A ~50k-document subset of this dataset's `train` split was evaluated using the Data Measurements Tool, with a particular focus on the nPMI metric
> nPMI scores for a word help to identify potentially problematic associations, ranked by how close the association is.
> nPMI bias scores for paired words help to identify how word associations are skewed between the selected words (Aka et al., 2021).
> You can select from gender and sexual orientation identity terms that appear in the dataset at least 10 times.
> The resulting ranked words are those that co-occur with both identity terms.
> The more positive the score, the more associated the word is with the first identity term. The more negative the score, the more associated the word is with the second identity term.
While occupation-related words such as _`government`_ and _`jobs`_ skewed positively towards she/her, and masculine and feminine words were attributed in similar measure to they/them, more harmful word attributions such as _`escort`_ and even _`colour`_ skewed more strongly towards she/her and him/his, respectively.
![Data Measurement Tool Associations Eval](assets/DMT_eval.png)
We welcome users to explore the [Data Measurements nPMI Visualitons for OBELICS](https://huggingface.co/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool) further and to see the [idefics-9b model card](https://huggingface.co/HuggingFaceM4/idefics-9b) for further Bias considerations.
## Opted-out content
To respect the preferences of content creators, we removed from OBELICS all images for which creators explicitly opted out of AI model training. We used the [Spawning API](https://api.spawning.ai/spawning-api) to verify that the images in the dataset respect the original copyright owners’ choices.
However, due to an error on our side, we did not remove entire documents (i.e., URLs) that opted out of AI model training. As of July 12, 2023, it represents 4.25% of the totality of OBELICS. The config `opt_out_docs_removed_2023_07_12` applies the correct filtering at the web document level as of July 2023: `ds = load_dataset("HuggingFaceM4/OBELICS", "opt_out_docs_removed_2023_07_12")`.
We recommend that users of OBELICS regularly check every document against the API.
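One possible client-side pattern for applying image-level opt-outs is sketched below; dropping an index entirely (rather than just nulling the image) keeps the `images`/`texts` parallel-list invariant intact. The `OPTED_OUT` set is an assumption for illustration — in practice it would be built from Spawning API responses:

```python
# Sketch: remove opted-out images from a sample. OPTED_OUT is a
# hypothetical set of image URLs; in practice it would come from
# checking each URL in the sample against the Spawning API.
OPTED_OUT = {"https://example.com/img_1.jpg"}

def scrub(sample, opted_out):
    # Keep every index whose image URL is not opted out
    # (text positions have image URL None, so they are always kept).
    keep = [i for i, url in enumerate(sample["images"]) if url not in opted_out]
    return {
        "images": [sample["images"][i] for i in keep],
        "texts": [sample["texts"][i] for i in keep],
    }

sample = {
    "images": ["https://example.com/img_1.jpg", None, "https://example.com/img_2.jpg"],
    "texts": [None, "some text", None],
}
print(scrub(sample, OPTED_OUT))
```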
## Content warnings
Despite our efforts in filtering, OBELICS contains a small proportion of documents that are not suitable for all audiences. For instance, while navigating the interactive map, you might find the cluster named "Sex" which predominantly contains descriptions of pornographic movies along with pornographic images. Other clusters would contain advertising for sex workers or reports of violent shootings. In our experience, these documents represent a small proportion of all the documents.
## Terms of Use
By using the dataset, you agree to comply with the original licenses of the source content as well as the dataset license (CC-BY-4.0). Additionally, if you use this dataset to train a Machine Learning model, you agree to disclose your use of the dataset when releasing the model or an ML application using the model.
### Licensing Information
License CC-BY-4.0.
### Citation Information
If you are using this dataset, please cite
```
@misc{laurencon2023obelics,
title={OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents},
author={Hugo Laurençon and Lucile Saulnier and Léo Tronchon and Stas Bekman and Amanpreet Singh and Anton Lozhkov and Thomas Wang and Siddharth Karamcheti and Alexander M. Rush and Douwe Kiela and Matthieu Cord and Victor Sanh},
year={2023},
eprint={2306.16527},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
|
zalando-datasets/fashion_mnist | zalando-datasets | "2024-08-08T06:10:25Z" | 11,382 | 49 | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1708.07747",
"region:us"
] | [
"image-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: fashion-mnist
pretty_name: FashionMNIST
dataset_info:
config_name: fashion_mnist
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': T - shirt / top
'1': Trouser
'2': Pullover
'3': Dress
'4': Coat
'5': Sandal
'6': Shirt
'7': Sneaker
'8': Bag
'9': Ankle boot
splits:
- name: train
num_bytes: 31049107.0
num_examples: 60000
- name: test
num_bytes: 5192560.0
num_examples: 10000
download_size: 36106894
dataset_size: 36241667.0
configs:
- config_name: fashion_mnist
data_files:
- split: train
path: fashion_mnist/train-*
- split: test
path: fashion_mnist/test-*
default: true
---
# Dataset Card for FashionMNIST
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)
- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)
- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of Zalando's article into one of 10 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-fashion-mnist).
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A data point comprises an image and its label.
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x27601169DD8>,
'label': 9
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `label`: an integer between 0 and 9 representing the classes with the following mapping:
| Label | Description |
| --- | --- |
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
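For convenience, the mapping above can be expressed as a small helper (the list below is transcribed from the table, not imported from the dataset):

```python
# Class names transcribed from the label table above.
FASHION_MNIST_LABELS = [
    "T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
    "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot",
]

def label_name(label: int) -> str:
    """Map an integer label (0-9) to its human-readable class name."""
    return FASHION_MNIST_LABELS[label]
```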
### Data Splits
The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.
## Dataset Creation
### Curation Rationale
**From the arXiv paper:**
The original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all", they said. "Well, if it does work on MNIST, it may still fail on others."
Here are some good reasons:
- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read "Most pairs of MNIST digits can be distinguished pretty well by just one pixel."
- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.
- MNIST can not represent modern CV tasks, as noted by deep learning expert and Keras author François Chollet in an April 2017 Twitter thread.
### Source Data
#### Initial Data Collection and Normalization
**From the arXiv paper:**
Fashion-MNIST is based on the assortment on Zalando’s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 × 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.
We use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, white-color products are not included in the dataset as they have low contrast to the background. The thumbnails (51 × 73) are then fed into the following conversion pipeline:
1. Converting the input to a PNG image.
2. Trimming any edges that are close to the color of the corner pixels. The “closeness” is defined by the distance within 5% of the maximum possible intensity in RGB space.
3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.
4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.
5. Extending the shortest edge to 28 and put the image to the center of the canvas.
6. Negating the intensities of the image.
7. Converting the image to 8-bit grayscale pixels.
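A rough sketch of steps 3, 5, 6 and 7 of this pipeline in Pillow (trimming and sharpening are omitted; this is an illustrative approximation, not the original conversion code):

```python
from PIL import Image, ImageOps

def to_fashion_mnist_style(img: Image.Image) -> Image.Image:
    """Approximate steps 3, 5, 6 and 7: resize the longest edge to 28,
    center on a 28x28 canvas, negate intensities, convert to grayscale."""
    img = ImageOps.contain(img, (28, 28))          # longest edge -> 28
    canvas = Image.new("RGB", (28, 28), "white")   # extend shortest edge
    canvas.paste(img, ((28 - img.width) // 2, (28 - img.height) // 2))
    return ImageOps.invert(canvas).convert("L")    # negate, 8-bit grayscale
```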
#### Who are the source language producers?
**From the arXiv paper:**
Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.
### Annotations
#### Annotation process
**From the arXiv paper:**
For the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Zalando is Europe’s largest online fashion platform. Each product contains only one silhouette code.
#### Who are the annotators?
**From the arXiv paper:**
The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Han Xiao and Kashif Rasul and Roland Vollgraf
### Licensing Information
MIT Licence
### Citation Information
```
@article{DBLP:journals/corr/abs-1708-07747,
author = {Han Xiao and
Kashif Rasul and
Roland Vollgraf},
title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning
Algorithms},
journal = {CoRR},
volume = {abs/1708.07747},
year = {2017},
url = {http://arxiv.org/abs/1708.07747},
archivePrefix = {arXiv},
eprint = {1708.07747},
timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
nyanko7/danbooru2023 | nyanko7 | "2024-05-22T18:43:24Z" | 11,375 | 219 | [
"task_categories:image-classification",
"task_categories:image-to-image",
"task_categories:text-to-image",
"language:en",
"language:ja",
"license:mit",
"size_categories:1M<n<10M",
"region:us"
] | [
"image-classification",
"image-to-image",
"text-to-image"
] | "2024-01-04T13:28:13Z" | ---
license: mit
task_categories:
- image-classification
- image-to-image
- text-to-image
language:
- en
- ja
pretty_name: danbooru2023
size_categories:
- 1M<n<10M
viewer: false
---
<img src="https://huggingface.co/datasets/nyanko7/danbooru2023/resolve/main/cover.webp" alt="cover" width="750"/>
# Danbooru2023: A Large-Scale Crowdsourced and Tagged Anime Illustration Dataset
<!-- Provide a quick summary of the dataset. -->
Danbooru2023 is a large-scale anime image dataset with over 5 million images contributed and annotated in detail by an enthusiast community. Image tags cover aspects like characters, scenes, copyrights, artists, etc., with an average of 30 tags per image.
Danbooru is a veteran anime image board with high-quality images and extensive tag metadata. The dataset can be used to train image classification, multi-label tagging, character detection, generative models, and other computer vision tasks.
- **Shared by:** Nyanko Devs
- **Language(s):** English, Japanese
- **License:** MIT
This dataset is built on top of [danbooru2021](https://gwern.net/danbooru2021). We expand the dataset to include images up to ID #6,857,737, adding over 1.8 million additional images; the total size is now approximately 8 terabytes (8,000 GB).
## Use
## Format
The goal of the dataset is to be as easy as possible to use immediately, avoiding obscure file formats, while allowing simultaneous research & seeding of the torrent, with easy updates.
Images are provided in the full original form (be that JPG, PNG, GIF or otherwise) for reference/archival purposes, and bucketed into 1000 subdirectories 0000–0999 (0-padded), which is the Danbooru ID modulo 1000 (ie. all images in 0999/ have an ID ending in ‘999’); IDs can be turned into paths by dividing & padding (eg. in Bash, BUCKET=$(printf "%04d" $(( ID % 1000 )) )) and then the file is at {original,512px}/$BUCKET/$ID.$EXT.
The reason for the bucketing is that a single directory would cause pathological filesystem performance, and modulo ID is a simple hash which spreads images evenly without requiring additional future directories to be made or a filesystem IO to check where the file is. The ID is not zero-padded and files end in the relevant extension, hence the file layout looks like this:
```bash
$ tree / | less
/
├── danbooru2023 -> /mnt/diffusionstorage/workspace/danbooru/
│ ├── metadata
│ ├── readme.md
│ ├── original
│ │ ├── 0000 -> data-0000.tar
│ │ ├── 0001 -> data-0001.tar
│ │ │ ├── 10001.jpg
│ │ │ ├── 210001.png
│ │ │ ├── 3120001.webp
│ │ │ ├── 6513001.jpg
│ │
│ ├── recent
│ │ ├── 0000 -> data-1000.tar
│ │ ├── 0001 -> data-1001.tar
│ │
│ ├── updates
│ │ ├── 20240319
│ │ │ ├── dataset-0.tar
│ │ │ ├── dataset-1.tar
│ │ │
│ │ ├── 2024xxxx
│ │ │ ├── dataset-0.tar
│ │ │ ├── dataset-1.tar
```
Where `data-{1000..1999}.tar` refer to recent update files (should be updated every few months) and `updates` refer to fast patches (should be updated every few days to few weeks).
Currently represented file extensions are: avi/bmp/gif/html/jpeg/jpg/mp3/mp4/mpg/pdf/png/rar/swf/webm/wmv/zip.
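The ID-to-path computation described above can also be sketched in Python (a convenience mirroring the Bash snippet, not part of the dataset tooling):

```python
def danbooru_path(post_id: int, ext: str, root: str = "original") -> str:
    """Bucket is the post ID modulo 1000, zero-padded to four digits."""
    bucket = f"{post_id % 1000:04d}"
    return f"{root}/{bucket}/{post_id}.{ext}"
```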
Raw original files are treacherous. Be careful if working with the original dataset. There are many odd files: truncated, non-sRGB colorspace, wrong file extensions (eg. some PNGs have .jpg extensions like original/0146/1525146.jpg or original/0558/1422558.jpg), etc. |
appier-ai-research/StreamBench_public | appier-ai-research | "2024-10-13T13:22:20Z" | 11,374 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-03T05:32:46Z" | ---
dataset_info:
- config_name: bird
features:
- name: db_id
dtype: string
- name: question
dtype: string
- name: evidence
dtype: string
- name: SQL
dtype: string
- name: question_id
dtype: int64
- name: difficulty
dtype: string
splits:
- name: test
num_bytes: 586056.0
num_examples: 1534
download_size: 251724
dataset_size: 586056.0
- config_name: ddxplus
features:
- name: AGE
dtype: int64
- name: SEX
dtype: string
- name: INITIAL_EVIDENCE_ENG
dtype: string
- name: EVIDENCES_ENG
dtype: string
- name: PATIENT_PROFILE
dtype: string
- name: DIFFERENTIAL_DIAGNOSIS
dtype: string
- name: PATHOLOGY
dtype: string
- name: INITIAL_EVIDENCE
dtype: string
- name: EVIDENCES
dtype: string
splits:
- name: test
num_bytes: 3318154.0
num_examples: 1764
download_size: 845456
dataset_size: 3318154.0
configs:
- config_name: bird
data_files:
- split: test
path: bird/test-*
- config_name: ddxplus
data_files:
- split: test
path: ddxplus/test-*
---
|
EpicPinkPenguin/procgen | EpicPinkPenguin | "2024-11-20T14:26:06Z" | 11,372 | 0 | [
"task_categories:reinforcement-learning",
"language:en",
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1707.06347",
"region:us",
"procgen",
"bigfish",
"benchmark",
"openai",
"bossfight",
"caveflyer",
"chaser",
"climber",
"dodgeball",
"fruitbot",
"heist",
"jumper",
"leaper",
"maze",
"miner",
"ninja",
"plunder",
"starpilot"
] | [
"reinforcement-learning"
] | "2024-06-02T07:31:08Z" | ---
language:
- en
license: apache-2.0
size_categories:
- 10M<n<100M
task_categories:
- reinforcement-learning
pretty_name: Procgen Benchmark Dataset
dataset_info:
- config_name: bigfish
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 129932068797
dataset_size: 289372500000
- config_name: bossfight
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 198057598671
dataset_size: 289372500000
- config_name: caveflyer
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 149023406845
dataset_size: 289372500000
- config_name: chaser
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 63831099402
dataset_size: 289372500000
- config_name: climber
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 63990304413
dataset_size: 289372500000
- config_name: coinrun
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 76990220716
dataset_size: 289372500000
- config_name: dodgeball
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 104691253324
dataset_size: 289372500000
- config_name: fruitbot
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 271549939959
dataset_size: 289372500000
- config_name: heist
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 74316944819
dataset_size: 289372500000
- config_name: jumper
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 101573987650
dataset_size: 289372500000
- config_name: leaper
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 66796546658
dataset_size: 289372500000
- config_name: maze
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 75397896559
dataset_size: 289372500000
- config_name: miner
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 57170722948
dataset_size: 289372500000
- config_name: ninja
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 99759972643
dataset_size: 289372500000
- config_name: plunder
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 103307437365
dataset_size: 289372500000
- config_name: starpilot
features:
- name: observation
dtype:
array3_d:
shape:
- 64
- 64
- 3
dtype: uint8
- name: action
dtype: uint8
- name: reward
dtype: float32
- name: done
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 260435250000
num_examples: 9000000
- name: test
num_bytes: 28937250000
num_examples: 1000000
download_size: 170031712117
dataset_size: 289372500000
configs:
- config_name: bigfish
data_files:
- split: train
path: bigfish/train-*
- split: test
path: bigfish/test-*
- config_name: bossfight
data_files:
- split: train
path: bossfight/train-*
- split: test
path: bossfight/test-*
- config_name: caveflyer
data_files:
- split: train
path: caveflyer/train-*
- split: test
path: caveflyer/test-*
- config_name: chaser
data_files:
- split: train
path: chaser/train-*
- split: test
path: chaser/test-*
- config_name: climber
data_files:
- split: train
path: climber/train-*
- split: test
path: climber/test-*
- config_name: coinrun
data_files:
- split: train
path: coinrun/train-*
- split: test
path: coinrun/test-*
- config_name: dodgeball
data_files:
- split: train
path: dodgeball/train-*
- split: test
path: dodgeball/test-*
- config_name: fruitbot
data_files:
- split: train
path: fruitbot/train-*
- split: test
path: fruitbot/test-*
- config_name: heist
data_files:
- split: train
path: heist/train-*
- split: test
path: heist/test-*
- config_name: jumper
data_files:
- split: train
path: jumper/train-*
- split: test
path: jumper/test-*
- config_name: leaper
data_files:
- split: train
path: leaper/train-*
- split: test
path: leaper/test-*
- config_name: maze
data_files:
- split: train
path: maze/train-*
- split: test
path: maze/test-*
- config_name: miner
data_files:
- split: train
path: miner/train-*
- split: test
path: miner/test-*
- config_name: ninja
data_files:
- split: train
path: ninja/train-*
- split: test
path: ninja/test-*
- config_name: plunder
data_files:
- split: train
path: plunder/train-*
- split: test
path: plunder/test-*
- config_name: starpilot
data_files:
- split: train
path: starpilot/train-*
- split: test
path: starpilot/test-*
tags:
- procgen
- bigfish
- benchmark
- openai
- bossfight
- caveflyer
- chaser
- climber
- dodgeball
- fruitbot
- heist
- jumper
- leaper
- maze
- miner
- ninja
- plunder
- starpilot
---
# Procgen Benchmark
This dataset contains expert trajectories generated by a [PPO](https://arxiv.org/abs/1707.06347) reinforcement learning agent trained on each of the 16 procedurally-generated gym environments from the [Procgen Benchmark](https://openai.com/index/procgen-benchmark/). The environments were created on `distribution_mode=easy` and with unlimited levels.
Disclaimer: This is not an official repository from OpenAI.
## Dataset Usage
Regular usage (for environment bigfish):
```python
from datasets import load_dataset
train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="train")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="test")
```
Usage with PyTorch (for environment bossfight):
```python
from datasets import load_dataset
train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="train").with_format("torch")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="test").with_format("torch")
```
## Agent Performance
The PPO RL agent was trained for 25M steps on each environment and obtained the following final performance metrics on the evaluation environment. These values attain or surpass the performance described in "Easy Difficulty Baseline Results" in Appendix I of the paper.
| Environment | Steps (Train) | Steps (Test) | Return | Observation |
|:------------|:----------------|:---------------|:-------|:------------|
| bigfish | 9,000,000 | 1,000,000 | 29.72 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/lHQXBqLdoWicXlt68I9QX.mp4"></video> |
| bossfight | 9,000,000 | 1,000,000 | 11.13 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/LPoafGi4YBWqqkuFlEN_l.mp4"></video> |
| caveflyer | 9,000,000 | 1,000,000 | 08.95 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/XVqRwu_9yfX4ECQc4At4G.mp4"></video> |
| chaser | 9,000,000 | 1,000,000 | 10.98 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/FIKVv48SThqiC1Z2PYQ7U.mp4"></video> |
| climber | 9,000,000 | 1,000,000 | 11.66 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/XJQlA7IyF9_gwUiw-FkND.mp4"></video> |
| coinrun | 9,000,000 | 1,000,000 | 09.61 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/Ucv3HZttewMRQzTL8r_Tw.mp4"></video> |
| dodgeball | 9,000,000 | 1,000,000 | 11.07 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/5HetbKuXBpO-v1jcVyLTU.mp4"></video> |
| fruitbot | 9,000,000 | 1,000,000 | 32.49 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/zKCyxXvauXjUac-5kEAWz.mp4"></video> |
| heist | 9,000,000 | 1,000,000 | 08.37 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/AdZ6XNmUN5_00BKd9BN8R.mp4"></video> |
| jumper | 9,000,000 | 1,000,000 | 08.46 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/s5k31gWK2Vc6Lp6QVzQXA.mp4"></video> |
| leaper | 9,000,000 | 1,000,000 | 07.11 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/_hDMocxjmzutc0t5FfoTX.mp4"></video> |
| maze | 9,000,000 | 1,000,000 | 09.95 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/uhNdDPuNhZpxVns91Ba-9.mp4"></video> |
| miner | 9,000,000 | 1,000,000 | 12.21 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/ElpJ8l2WHJGrprZ3-giHU.mp4"></video> |
| ninja | 9,000,000 | 1,000,000 | 08.88 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/b9i-fb2Twh8XmBBNf2DRG.mp4"></video> |
| plunder | 9,000,000 | 1,000,000 | 22.19 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/JPeGNOVzrotuYUjfzZj40.mp4"></video> |
| starpilot | 9,000,000 | 1,000,000 | 49.94 | <video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/wY9lZgkw5tor19hCWmm6A.mp4"></video> |
## Dataset Structure
### Data Instances
Each data instance represents a single step, i.e. a tuple of the form (observation, action, reward, done, truncated) = (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1}).
```json
{'action': 1,
'done': False,
'observation': [[[0, 166, 253],
[0, 174, 255],
[0, 170, 251],
[0, 191, 255],
[0, 191, 255],
[0, 221, 255],
[0, 243, 255],
[0, 248, 255],
[0, 243, 255],
[10, 239, 255],
[25, 255, 255],
[0, 241, 255],
[0, 235, 255],
[17, 240, 255],
[10, 243, 255],
[27, 253, 255],
[39, 255, 255],
[58, 255, 255],
[85, 255, 255],
[111, 255, 255],
[135, 255, 255],
[151, 255, 255],
[173, 255, 255],
...
[0, 0, 37],
[0, 0, 39]]],
'reward': 0.0,
'truncated': False}
```
### Data Fields
- `observation`: The current RGB observation from the environment.
- `action`: The action predicted by the agent for the current observation.
- `reward`: The received reward from stepping the environment with the current action.
- `done`: If the new observation is the start of a new episode. Obtained after stepping the environment with the current action.
- `truncated`: If the new observation is the start of a new episode due to truncation. Obtained after stepping the environment with the current action.
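Given these fields, per-episode returns can be recovered by accumulating rewards until `done` (or `truncated`) is set — a minimal sketch over any iterable of step dictionaries:

```python
def episode_returns(steps):
    """Sum rewards per episode; `done`/`truncated` on a step flags the
    episode boundary, since they describe the observation that follows."""
    returns, total = [], 0.0
    for step in steps:
        total += step["reward"]
        if step["done"] or step["truncated"]:
            returns.append(total)
            total = 0.0
    return returns
```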
### Data Splits
The dataset is divided into a `train` (90%) and `test` (10%) split. Each environment-dataset contains 10M steps (data points) in total.
## Dataset Creation
The dataset was created by training an RL agent with [PPO](https://arxiv.org/abs/1707.06347) for 25M steps in each environment. The trajectories were generated by sampling from the predicted action distribution at each step (not taking the argmax). The environments were created on `distribution_mode=easy` and with unlimited levels.
## Procgen Benchmark
The [Procgen Benchmark](https://openai.com/index/procgen-benchmark/), released by OpenAI, consists of 16 procedurally-generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience, high diversity within and across environments, and is ideal for evaluating both sample efficiency and generalization. The benchmark allows for distinct training and test sets in each environment, making it a standard research platform for the OpenAI RL team. It aims to address the need for more diverse RL benchmarks compared to complex environments like Dota and StarCraft. |
haonan-li/cmmlu | haonan-li | "2023-07-13T10:19:29Z" | 11,366 | 63 | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"language:zh",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2306.09212",
"region:us",
"chinese",
"llm",
"evaluation"
] | [
"multiple-choice",
"question-answering"
] | "2023-06-25T16:37:44Z" | ---
license: cc-by-nc-4.0
task_categories:
- multiple-choice
- question-answering
language:
- zh
tags:
- chinese
- llm
- evaluation
pretty_name: CMMLU
size_categories:
- 10K<n<100K
---
# CMMLU: Measuring massive multitask language understanding in Chinese
- **Homepage:** [https://github.com/haonan-li/CMMLU](https://github.com/haonan-li/CMMLU)
- **Repository:** [https://huggingface.co/datasets/haonan-li/cmmlu](https://huggingface.co/datasets/haonan-li/cmmlu)
- **Paper:** [CMMLU: Measuring Chinese Massive Multitask Language Understanding](https://arxiv.org/abs/2306.09212).
## Table of Contents
- [Introduction](#introduction)
- [Leaderboard](#leaderboard)
- [Data](#data)
- [Citation](#citation)
- [License](#license)
## Introduction
CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs within the Chinese language and cultural context.
CMMLU covers a wide range of subjects, comprising 67 topics that span from elementary to advanced professional levels. It includes subjects that require computational expertise, such as physics and mathematics, as well as disciplines within humanities and social sciences.
Many of these tasks are not easily translatable from other languages due to their specific contextual nuances and wording.
Furthermore, numerous tasks within CMMLU have answers that are specific to China and may not be universally applicable or considered correct in other regions or languages.
## Leaderboard
Latest leaderboard is in our [github](https://github.com/haonan-li/CMMLU).
## Data
We provide development and test datasets for each of the 67 subjects, with 5 questions in the development set and 100+ questions in the test set.
Each question in the dataset is a multiple-choice question with 4 choices and only one correct answer.
Here are two examples:
```
题目:同一物种的两类细胞各产生一种分泌蛋白,组成这两种蛋白质的各种氨基酸含量相同,但排列顺序不同。其原因是参与这两种蛋白质合成的:
A. tRNA种类不同
B. 同一密码子所决定的氨基酸不同
C. mRNA碱基序列不同
D. 核糖体成分不同
答案是:C
```
```
题目:某种植物病毒V是通过稻飞虱吸食水稻汁液在水稻间传播的。稻田中青蛙数量的增加可减少该病毒在水稻间的传播。下列叙述正确的是:
A. 青蛙与稻飞虱是捕食关系
B. 水稻和病毒V是互利共生关系
C. 病毒V与青蛙是寄生关系
D. 水稻与青蛙是竞争关系
答案是:
```
#### Load data
```python
from datasets import load_dataset
cmmlu = load_dataset("haonan-li/cmmlu", "agronomy")
print(cmmlu['test'][0])
```
#### Load all data at once
```python
task_list = ['agronomy', 'anatomy', 'ancient_chinese', 'arts', 'astronomy', 'business_ethics', 'chinese_civil_service_exam', 'chinese_driving_rule', 'chinese_food_culture', 'chinese_foreign_policy', 'chinese_history', 'chinese_literature',
'chinese_teacher_qualification', 'clinical_knowledge', 'college_actuarial_science', 'college_education', 'college_engineering_hydrology', 'college_law', 'college_mathematics', 'college_medical_statistics', 'college_medicine', 'computer_science',
'computer_security', 'conceptual_physics', 'construction_project_management', 'economics', 'education', 'electrical_engineering', 'elementary_chinese', 'elementary_commonsense', 'elementary_information_and_technology', 'elementary_mathematics',
'ethnology', 'food_science', 'genetics', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_geography', 'high_school_mathematics', 'high_school_physics', 'high_school_politics', 'human_sexuality',
'international_law', 'journalism', 'jurisprudence', 'legal_and_moral_basis', 'logical', 'machine_learning', 'management', 'marketing', 'marxist_theory', 'modern_chinese', 'nutrition', 'philosophy', 'professional_accounting', 'professional_law',
'professional_medicine', 'professional_psychology', 'public_relations', 'security_study', 'sociology', 'sports_science', 'traditional_chinese_medicine', 'virology', 'world_history', 'world_religions']
from datasets import load_dataset
cmmlu = {k: load_dataset("haonan-li/cmmlu", k) for k in task_list}
```
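#### Build a few-shot prompt
The five development questions per subject are typically used as few-shot exemplars. Below is a minimal sketch of prompt assembly on toy data; the column names (`Question`, `A`–`D`, `Answer`) are assumptions based on the examples above, not guaranteed by this card — check the loaded dataset's features before relying on them.

```python
def build_few_shot_prompt(dev_examples, test_question):
    """Assemble a Chinese multiple-choice prompt from development examples.

    Assumed column names: 'Question', 'A', 'B', 'C', 'D', 'Answer'.
    """
    template = "题目:{Question}\nA. {A}\nB. {B}\nC. {C}\nD. {D}\n答案是:"
    # Development examples carry their answer; the test question does not.
    parts = [template.format(**ex) + ex["Answer"] for ex in dev_examples]
    parts.append(template.format(**test_question))
    return "\n\n".join(parts)


dev = [{"Question": "1+1=?", "A": "1", "B": "2", "C": "3", "D": "4", "Answer": "B"}]
test_q = {"Question": "2+2=?", "A": "2", "B": "3", "C": "4", "D": "5"}
print(build_few_shot_prompt(dev, test_q))
```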
## Citation
```
@misc{li2023cmmlu,
title={CMMLU: Measuring massive multitask language understanding in Chinese},
author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin},
year={2023},
eprint={2306.09212},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
The CMMLU dataset is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
|
alvations/c4p0-x1-en-ja | alvations | "2024-03-24T03:55:23Z" | 11,278 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-23T09:54:37Z" | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 49764
num_examples: 42
download_size: 37636
dataset_size: 49764
configs:
- config_name: default
data_files:
- split: train
path: 66034f82c5c65ae4/train-*
---
|
KBLab/overlim | KBLab | "2022-10-25T06:13:06Z" | 11,251 | 3 | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-classification",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:translation",
"source_datasets:extended|glue",
"source_datasets:extended|super_glue",
"language:sv",
"language:da",
"language:nb",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"modality:tabular",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"qa-nli",
"paraphrase-identification"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- other
language_creators:
- other
language:
- sv
- da
- nb
license:
- cc-by-4.0
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- extended|glue
- extended|super_glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-classification
- sentiment-classification
- text-scoring
pretty_name: overlim
tags:
- qa-nli
- paraphrase-identification
---
# Dataset Card for OverLim
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The _OverLim_ dataset contains some of the GLUE and SuperGLUE tasks automatically
translated to Swedish, Danish, and Norwegian (bokmål), using the OpusMT models
for MarianMT.
The translation quality was not manually checked and may thus be faulty.
Results on these datasets should therefore be interpreted carefully.
If you want an easy script to train and evaluate your models, have a look [here](https://github.com/kb-labb/overlim_eval).
### Supported Tasks and Leaderboards
The data contains the following tasks from GLUE and SuperGLUE:
- GLUE
- `mnli`
- `mrpc`
- `qnli`
- `qqp`
- `rte`
- `sst`
- `stsb`
- `wnli`
- SuperGLUE
- `boolq`
- `cb`
- `copa`
- `rte`
### Languages
- Swedish
- Danish
- Norwegian (bokmål)
## Dataset Structure
### Data Instances
Every task has its own set of features, but all share an `idx` and a `label`.
- GLUE
- `mnli`
- `premise`, `hypothesis`
- `mrpc`
- `text_a`, `text_b`
- `qnli`
- `premise`, `hypothesis`
- `qqp`
- `text_a`, `text_b`
- `sst`
- `text`
- `stsb`
- `text_a`, `text_b`
- `wnli`
- `premise`, `hypothesis`
- SuperGLUE
- `boolq`
- `question`, `passage`
- `cb`
- `premise`, `hypothesis`
- `copa`
- `premise`, `choice1`, `choice2`, `question`
- `rte`
- `premise`, `hypothesis`
### Data Splits
In order to have a test split, we repurpose the original validation split as the
test split, and divide the original training split into new training and
validation splits with an 80/20 distribution.
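The repartitioning described above can be sketched in plain Python (a schematic illustration on toy data, not the actual script used to build the dataset):

```python
def repartition(original_train, original_validation, train_fraction=0.8):
    """Repurpose the validation split as test, and carve a new
    validation split out of the training data (80/20)."""
    cut = int(len(original_train) * train_fraction)
    new_train = original_train[:cut]
    new_validation = original_train[cut:]
    new_test = original_validation
    return new_train, new_validation, new_test


train, validation, test = repartition(list(range(100)), list(range(100, 120)))
print(len(train), len(validation), len(test))  # 80 20 20
```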
## Dataset Creation
For more information about the individual tasks see (https://gluebenchmark.com) and (https://super.gluebenchmark.com).
### Curation Rationale
Training non-English models is easy, but there is a lack of evaluation datasets to compare their actual performance.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@kb-labb](https://github.com/kb-labb) for adding this dataset.
|
openbmb/UltraInteract_sft | openbmb | "2024-04-05T14:29:52Z" | 11,117 | 118 | [
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2404.02078",
"region:us"
] | null | "2024-04-02T15:45:18Z" | ---
language:
- en
license: mit
pretty_name: UltraInteract_sft
configs:
- config_name: default
data_files:
- split: train
path: 0000_sft.parquet
dataset_info:
features:
- name: task
dtype: string
- name: dataset
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: id
dtype: string
- name: parent_id
dtype: string
splits:
- name: train
num_bytes: 687238
num_examples: 288579
download_size: 687238
dataset_size: 687238
---
## Introduction
- 📜 [Paper](https://arxiv.org/abs/2404.02078)
- 🤗 [Eurus Collection](https://huggingface.co/collections/openbmb/eurus-660bc40bec5376b3adc9d1c5)
- 🤗 UltraInteract
- [SFT](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
- [Preference Learning](https://huggingface.co/datasets/openbmb/UltraInteract_pair)
- [GitHub Repo](https://github.com/OpenBMB/Eurus)
UltraInteract is a large-scale, high-quality alignment dataset specifically designed for complex reasoning tasks. For each instruction, it includes a preference tree consisting of
- (1) reasoning chains with diverse planning strategies in a unified format
- (2) multi-turn interaction trajectories with the environment and the critique
- (3) pairwise data to facilitate preference learning
## Structure
UltraInteract collects a preference tree for each instruction, with the instruction being the root and each action a node. A trajectory is a root-to-leaf path consisting of a sequence of actions. In each preference tree, all nodes of correct actions and all trajectories ending with correct actions can be used for SFT. Paired correct and incorrect nodes or trajectories can be used for preference learning.
<img src="./figures/tree.png" alt="tree" style="zoom: 20%;" />
## Illustrative Example
Here is an illustrative example of an UltraInteract trajectory over two turns. In each turn, the actor model generates step-by-step reasoning chains, and the environment and the critique model provide observations and textual critique respectively.
<img src="./figures/ui_example.png" alt="ui_example" style="zoom: 25%;" />
## Stats
Below are some statistics about UltraInteract. It consists of 86k instructions, 286k correct answers, and 219k pairs.
<img src="./figures/stats.png" alt="stats" style="zoom: 40%;" />
## Dataset Format (SFT)
Coding
```JSONC
{
"id": "92a7356beaa75391539cb79624bdf85b7e5d36d5e74e46a2f1c4001f40b3e055_1_1", // unique data id, contains three parts, the unique string, the depth of the preference tree, and current turn, separated by '_'
"parent_id": "535550938ff58bd73cd7fb85be834bb90b7600daf6b6f6a16b64091e5ecbfe43", // data with the same parent_id have the same instruction but different actions.
"task": "Coding", // task type, choices = ["Coding", "Math_Cot", "Math_PoT", "Logic"]
"dataset": "codecontest", // the dataset where the instruction comes from
"instruction": "Write Python code to solve the task.\nMike has a string s consisting of only lowercase English letters. He wants to change exactly one character from the string so that the resulting one is a palindrome. \n\nA palindrome is a string that reads the same backward as forward, for example strings \"z\", \"aaa\", \"aba\", \"abccba\" are palindromes, but strings \"codeforces\", \"reality\", \"ab\" are not.\n\nInput\n\nThe first and single line contains string s (1 ≤ |s| ≤ 15).\n\nOutput\n\nPrint \"YES\" (without quotes) if Mike can change exactly one character so that the resulting string is palindrome or \"NO\" (without quotes) otherwise. \n\nExamples\n\nInput\n\nabccaa\n\n\nOutput\n\nYES\n\n\nInput\n\nabbcca\n\n\nOutput\n\nNO\n\n\nInput\n\nabcda\n\n\nOutput\n\nYES", // the task description.
"response": "
Step 1: This problem asks us to determine if it is possible to change exactly one character in the string to make it a palindrome.
Step 2: We can start by iterating through the characters in the string and counting the number of differences between corresponding characters from the beginning and end of the string.
Step 3: If the length of the string is odd and there is only one difference, or if the length is even and there is exactly one difference, then it is possible to change exactly one character to make the string a palindrome.
Step 4: We need to consider edge cases such as a single character string or an empty string.
```python
# Step 5: Take the input string
s = input()
# Step 6: Calculate the length of the string
n = len(s)
# Step 7: Initialize a counter to keep track of the number of differences
cnt = 0
# Step 8: Iterate through the first half of the string and count the number of differences between corresponding characters from the beginning and end of the string.
for i in range(n//2):
if s[i] != s[n-i-1]:
cnt += 1
# Step 9: If the length of the string is odd and there are no differences, or if the length is even and there is exactly one difference, print "YES" (without quotes)
if (n % 2 == 1 and cnt == 0) or (cnt == 1):
print('YES')
# Step 10: Otherwise, print "NO" (without quotes)
else:
print('NO')
```" // The step-by-step solution.
}
```
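The three-part `id` layout described in the comment above can be unpacked with a small helper (a sketch; splitting from the right keeps it robust even if the unique string contained underscores):

```python
def parse_ultrainteract_id(example_id: str):
    """Split an UltraInteract id into (unique string, tree depth, turn)."""
    uid, depth, turn = example_id.rsplit("_", 2)
    return uid, int(depth), int(turn)


uid, depth, turn = parse_ultrainteract_id(
    "92a7356beaa75391539cb79624bdf85b7e5d36d5e74e46a2f1c4001f40b3e055_1_1"
)
print(depth, turn)  # 1 1
```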
Math
```JSONC
{
"id": "647b2e8706cf459a8e3ec8e99727d2d51ff3414bf187db43d92cff87f8012713_1_1",
"parent_id": "499d250c5852eddcd48a28b6c173f1a2349a16555990a1cd6fb5c7954cb94587",
"task": "Math_PoT",
"dataset": "mathqa",
"instruction": "Tool available:\n[1] Python interpreter\nWhen you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment.\nSolve the following math problem step-by-step.\nthe length of a rectangle is doubled while its width is doubled . what is the % change in area ?",
"response": "
Step 1: The area of a rectangle is given by the product of its length and width. If the length is doubled, the area changes from l*w to 2l*w. Similarly, if the width is doubled, the area changes from l*w to l*2w.
Step 2: If both the length and width are doubled, the area changes from l*w to 2l*2w. The percentage change in area is given by ((new area - old area) / old area)*100.
Step 3: We can calculate the percentage change in area using the formula above.
Step 4: By creating a tool to calculate the percentage change in area, we can easily solve the question.
```python
# Step 5: Define a tool to calculate the percentage change in area
def calculate_percentage_change(length, width):
# Step 6: Calculate the old and new areas
old_area = length * width
new_area = 2 * length * 2 * width
# Step 7: Calculate the percentage change
percentage_change = ((new_area - old_area) / old_area) * 100
return percentage_change
# Step 8: Call the function to solve the question
percentage_change = calculate_percentage_change(2, 2)
print(percentage_change)
```"
}
```
Logic
```JSONC
{
"id": "92a19848da4fb0fc0b414f061f7500a81ea4cbd8f922ca1e32a6a15ef6f7e1a0_1_1",
"parent_id": "107cc9147d8c09e62e2ba9e74cb41d9dbafd9a3adf62b67ba3da2b31c81b3300",
"task": "Logic",
"dataset": "hotpotqa",
"instruction": "
Tool available:\n[1] wikipedia_search(query: str) -> str\nThe Wikipedia Search tool provides access to a vast collection of articles covering a wide range of topics.\nCan query specific keywords or topics to retrieve accurate and comprehensive information.
Solve the following problem step-by-step:\nWhich musician or group is from the US, 3OH!3 or Regina Spektor?
Step 1: To determine which musician or group is from the US, we can search for information about each of them.
```python
# Step 2: Searching for the information about 3OH!3.
print(wikipedia_search('3OH!3'))
```
Page: 3OH!3
Summary: 3OH!3 (pronounced \"three oh three\") is an American electronic music duo from Boulder, Colorado, made up of Sean Foreman and Nathaniel Motte. They are best known for their single \"DONTTRUSTME\" from their album Want, which reached number seven on the Billboard Hot 100. Their second single, a remix of \"Starstrukk\" featuring Katy Perry from Want, was a top ten hit in the United Kingdom, Ireland, Finland, Poland, and Australia. They gained further recognition by featuring Kesha on the song \"My First Kiss\", which was made the lead single from their album Streets of Gold. The album later peaked at number seven on the Billboard 200.\n\n\n\nPage: 3OH!3 (album)\nSummary: 3OH!3 is the debut studio album by American electronic music band 3OH!3. The tracks \"Holler Til You Pass Out\" and \"Chokechain\" were later remixed for the band's major label debut. \"Holler Til You Pass Out\", \"Chokechain\", and \"Dance with Me\" are excluded from the iTunes release of the album.",
"response": "
Step 1: The provided information confirms that 3OH!3 is an American electronic music duo from Boulder, Colorado.
Step 2: We no longer need to perform a search for Regina Spektor as we were only tasked to determine which of the two options is from the US and we have already confirmed 3OH!3's origin.
Step 3: We can confidently conclude that 3OH!3 is the musician/group from the US.
Answer:3OH!3"
}
```
## Citation
```bib
@misc{yuan2024advancing,
title={Advancing LLM Reasoning Generalists with Preference Trees},
author={Lifan Yuan and Ganqu Cui and Hanbin Wang and Ning Ding and Xingyao Wang and Jia Deng and Boji Shan and Huimin Chen and Ruobing Xie and Yankai Lin and Zhenghao Liu and Bowen Zhou and Hao Peng and Zhiyuan Liu and Maosong Sun},
year={2024},
eprint={2404.02078},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
|
mhenrichsen/alpaca_2k_test | mhenrichsen | "2023-07-22T19:48:57Z" | 11,047 | 25 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-07-22T19:48:22Z" | ---
license: apache-2.0
---
|
ArmelR/the-pile-splitted | ArmelR | "2023-09-06T09:53:16Z" | 11,047 | 21 | [
"size_categories:10M<n<100M",
"format:arrow",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2101.00027",
"arxiv:2201.07311",
"region:us"
] | null | "2023-07-30T14:21:26Z" | ---
configs:
- config_name: all
data_files:
- split: train
path:
- "data/ArXiv/train/*.arrow"
- "data/BookCorpus2/train/*.arrow"
- "data/Books3/train/*.arrow"
- "data/DM Mathematics/train/*.arrow"
- "data/Enron Emails/train/*.arrow"
- "data/EuroParl/train/*.arrow"
- "data/FreeLaw/train/*.arrow"
- "data/Github/train/*.arrow"
- "data/Gutenberg (PG-19)/train/*.arrow"
- "data/HackerNews/train/*.arrow"
- "data/NIH ExPorter/train/*.arrow"
- "data/OpenSubtitles/train/*.arrow"
- "data/OpenWebText2/train/*.arrow"
- "data/PhilPapers/train/*.arrow"
- "data/Pile-CC/train/*.arrow"
- "data/PubMed Abstracts/train/*.arrow"
- "data/PubMed Central/train/*.arrow"
- "data/StackExchange/train/*.arrow"
- "data/UPSTO Backgrounds/train/*.arrow"
- "data/Ubuntu IRC/train/*.arrow"
- "data/Wikipedia (en)/train/*.arrow"
- "data/YoutubeSubtitles/train/*.arrow"
- split: test
path:
- "data/ArXiv/test/*.arrow"
- "data/BookCorpus2/test/*.arrow"
- "data/Books3/test/*.arrow"
- "data/DM Mathematics/test/*.arrow"
- "data/Enron Emails/test/*.arrow"
- "data/EuroParl/test/*.arrow"
- "data/FreeLaw/test/*.arrow"
- "data/Github/test/*.arrow"
- "data/Gutenberg (PG-19)/test/*.arrow"
- "data/HackerNews/test/*.arrow"
- "data/NIH ExPorter/test/*.arrow"
- "data/OpenSubtitles/test/*.arrow"
- "data/OpenWebText2/test/*.arrow"
- "data/PhilPapers/test/*.arrow"
- "data/Pile-CC/test/*.arrow"
- "data/PubMed Abstracts/test/*.arrow"
- "data/PubMed Central/test/*.arrow"
- "data/StackExchange/test/*.arrow"
- "data/UPSTO Backgrounds/test/*.arrow"
- "data/Ubuntu IRC/test/*.arrow"
- "data/Wikipedia (en)/test/*.arrow"
- "data/YoutubeSubtitles/test/*.arrow"
default: true
- config_name: ArXiv
data_files:
- split: train
path: "data/ArXiv/train/*.arrow"
- split: test
path: "data/ArXiv/test/*.arrow"
- config_name: BookCorpus2
data_files:
- split: train
path: "data/BookCorpus2/train/*.arrow"
- split: test
path: "data/BookCorpus2/test/*.arrow"
- config_name: Books3
data_files:
- split: train
path: "data/Books3/train/*.arrow"
- split: test
path: "data/Books3/test/*.arrow"
- config_name: DM Mathematics
data_files:
- split: train
path: "data/DM Mathematics/train/*.arrow"
- split: test
path: "data/DM Mathematics/test/*.arrow"
- config_name: Enron Emails
data_files:
- split: train
path: "data/Enron Emails/train/*.arrow"
- split: test
path: "data/Enron Emails/test/*.arrow"
- config_name: EuroParl
data_files:
- split: train
path: "data/EuroParl/train/*.arrow"
- split: test
path: "data/EuroParl/test/*.arrow"
- config_name: FreeLaw
data_files:
- split: train
path: "data/FreeLaw/train/*.arrow"
- split: test
path: "data/FreeLaw/test/*.arrow"
- config_name: Github
data_files:
- split: train
path: "data/Github/train/*.arrow"
- split: test
path: "data/Github/test/*.arrow"
- config_name: Gutenberg (PG-19)
data_files:
- split: train
path: "data/Gutenberg (PG-19)/train/*.arrow"
- split: test
path: "data/Gutenberg (PG-19)/test/*.arrow"
- config_name: HackerNews
data_files:
- split: train
path: "data/HackerNews/train/*.arrow"
- split: test
path: "data/HackerNews/test/*.arrow"
- config_name: NIH ExPorter
data_files:
- split: train
path: "data/NIH ExPorter/train/*.arrow"
- split: test
path: "data/NIH ExPorter/test/*.arrow"
- config_name: OpenSubtitles
data_files:
- split: train
path: "data/OpenSubtitles/train/*.arrow"
- split: test
path: "data/OpenSubtitles/test/*.arrow"
- config_name: OpenWebText2
data_files:
- split: train
path: "data/OpenWebText2/train/*.arrow"
- split: test
path: "data/OpenWebText2/test/*.arrow"
- config_name: PhilPapers
data_files:
- split: train
path: "data/PhilPapers/train/*.arrow"
- split: test
path: "data/PhilPapers/test/*.arrow"
- config_name: Pile-CC
data_files:
- split: train
path: "data/Pile-CC/train/*.arrow"
- split: test
path: "data/Pile-CC/test/*.arrow"
- config_name: PubMed Abstracts
data_files:
- split: train
path: "data/PubMed Abstracts/train/*.arrow"
- split: test
path: "data/PubMed Abstracts/test/*.arrow"
- config_name: PubMed Central
data_files:
- split: train
path: "data/PubMed Central/train/*.arrow"
- split: test
path: "data/PubMed Central/test/*.arrow"
- config_name: StackExchange
data_files:
- split: train
path: "data/StackExchange/train/*.arrow"
- split: test
path: "data/StackExchange/test/*.arrow"
- config_name: UPSTO Backgrounds
data_files:
- split: train
path: "data/UPSTO Backgrounds/train/*.arrow"
- split: test
path: "data/UPSTO Backgrounds/test/*.arrow"
- config_name: Ubuntu IRC
data_files:
- split: train
path: "data/Ubuntu IRC/train/*.arrow"
- split: test
path: "data/Ubuntu IRC/test/*.arrow"
- config_name: Wikipedia (en)
data_files:
- split: train
path: "data/Wikipedia (en)/train/*.arrow"
- split: test
path: "data/Wikipedia (en)/test/*.arrow"
- config_name: YoutubeSubtitles
data_files:
- split: train
path: "data/YoutubeSubtitles/train/*.arrow"
- split: test
path: "data/YoutubeSubtitles/test/*.arrow"
---
# Dataset description
[The pile](https://arxiv.org/abs/2101.00027) is an 800GB dataset of english text
designed by EleutherAI to train large-scale language models. The original version of
the dataset can be found [here](https://huggingface.co/datasets/EleutherAI/pile).
The dataset is divided into 22 smaller high-quality datasets. For more information about
each of them, please refer to [the datasheet for the pile](https://arxiv.org/abs/2201.07311).
However, the current version of the dataset, available on the Hub, is not split accordingly.
We solved this problem in order to improve the user experience when it comes to dealing with
the pile via the Hub.
Here is an instance of the pile
```
{
'meta': {'pile_set_name': 'Pile-CC'},
'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
}
```
We used the `meta` column to properly divide the dataset into subsets. Each instance `example` belongs to the subset
`domain`, where `domain = example['meta']['pile_set_name']`. By doing this, we were able to create a [new version of the pile](https://huggingface.co/datasets/ArmelR/sharded-pile)
that is properly divided, each instance having a new column `domain`.
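The grouping logic described here amounts to reading one metadata field per instance (a sketch on the sample instance shown above; `get_domain` is a hypothetical helper, not part of the dataset's tooling):

```python
def get_domain(example):
    """Return the subset ('domain') a raw pile instance belongs to."""
    return example["meta"]["pile_set_name"]


sample = {
    "meta": {"pile_set_name": "Pile-CC"},
    "text": "It is done, and submitted. You can play ...",
}
print(get_domain(sample))  # Pile-CC
```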
We further split each subset into train/test (97%/3%) portions to build the current dataset, which has the following structure
```
data
ArXiv
train
test
BookCorpus2
train
test
Books3
train
test
```
# Usage
```python
from datasets import load_dataset
dataset = load_dataset(
"ArmelR/the-pile-splitted",
subset_of_interest,
num_proc=8
)
```
Using `subset_of_interest = "default"` will load the whole dataset.
|
labelmaker/arkit_labelmaker | labelmaker | "2024-10-22T19:00:08Z" | 10,877 | 1 | [
"language:en",
"license:bsd",
"size_categories:1K<n<10K",
"arxiv:2410.13924",
"doi:10.57967/hf/2389",
"region:us",
"3D semantic segmentation",
"indoor 3D scene dataset"
] | null | "2024-04-24T17:17:33Z" | ---
viewer: false
license: bsd
language:
- en
tags:
- 3D semantic segmentation
- indoor 3D scene dataset
pretty_name: arkit_labelmaker
size_categories:
- 1K<n<10K
---
# ARKit Labelmaker: A New Scale for Indoor 3D Scene Understanding
[[arxiv]](https://arxiv.org/abs/2410.13924) [[website]](https://labelmaker.org/)
We complement the ARKitScenes dataset with dense semantic annotations that are automatically generated at scale. This produces the first large-scale, real-world 3D dataset with dense semantic annotations.
Training on this auto-generated data, we push forward the state-of-the-art performance of prevalent 3D semantic segmentation models on ScanNet and ScanNet200.
|
McAuley-Lab/Amazon-Reviews-2023 | McAuley-Lab | "2024-12-08T22:21:49Z" | 10,876 | 91 | [
"language:en",
"size_categories:10B<n<100B",
"arxiv:2403.03952",
"region:us",
"recommendation",
"reviews"
] | null | "2024-01-23T04:53:25Z" | ---
language:
- en
tags:
- recommendation
- reviews
size_categories:
- 10B<n<100B
dataset_info:
- config_name: raw_meta_All_Beauty
features:
- name: main_category
dtype: string
- name: title
dtype: string
- name: average_rating
dtype: float64
- name: rating_number
dtype: int64
- name: features
sequence: string
- name: description
sequence: string
- name: price
dtype: string
- name: images
sequence:
- name: hi_res
dtype: string
- name: large
dtype: string
- name: thumb
dtype: string
- name: variant
dtype: string
- name: videos
sequence:
- name: title
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: store
dtype: string
- name: categories
sequence: string
- name: details
dtype: string
- name: parent_asin
dtype: string
- name: bought_together
dtype: string
- name: subtitle
dtype: string
- name: author
dtype: string
splits:
- name: full
num_bytes: 172622243
num_examples: 112590
download_size: 59635138
dataset_size: 172622243
- config_name: raw_meta_Arts_Crafts_and_Sewing
features:
- name: main_category
dtype: string
- name: title
dtype: string
- name: average_rating
dtype: float64
- name: rating_number
dtype: int64
- name: features
sequence: string
- name: description
sequence: string
- name: price
dtype: string
- name: images
sequence:
- name: hi_res
dtype: string
- name: large
dtype: string
- name: thumb
dtype: string
- name: variant
dtype: string
- name: videos
sequence:
- name: title
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: store
dtype: string
- name: categories
sequence: string
- name: details
dtype: string
- name: parent_asin
dtype: string
- name: bought_together
dtype: string
- name: subtitle
dtype: string
- name: author
dtype: string
splits:
- name: full
num_bytes: 1893257069
num_examples: 801446
download_size: 806711170
dataset_size: 1893257069
- config_name: raw_meta_Cell_Phones_and_Accessories
features:
- name: main_category
dtype: string
- name: title
dtype: string
- name: average_rating
dtype: float64
- name: rating_number
dtype: int64
- name: features
sequence: string
- name: description
sequence: string
- name: price
dtype: string
- name: images
sequence:
- name: hi_res
dtype: string
- name: large
dtype: string
- name: thumb
dtype: string
- name: variant
dtype: string
- name: videos
sequence:
- name: title
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: store
dtype: string
- name: categories
sequence: string
- name: details
dtype: string
- name: parent_asin
dtype: string
- name: bought_together
dtype: string
- name: subtitle
dtype: string
- name: author
dtype: string
splits:
- name: full
num_bytes: 3497596478
num_examples: 1288490
download_size: 1262072469
dataset_size: 3497596478
- config_name: raw_meta_Electronics
features:
- name: main_category
dtype: string
- name: title
dtype: string
- name: average_rating
dtype: float64
- name: rating_number
dtype: int64
- name: features
sequence: string
- name: description
sequence: string
- name: price
dtype: string
- name: images
sequence:
- name: hi_res
dtype: string
- name: large
dtype: string
- name: thumb
dtype: string
- name: variant
dtype: string
- name: videos
sequence:
- name: title
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: store
dtype: string
- name: categories
sequence: string
- name: details
dtype: string
- name: parent_asin
dtype: string
- name: bought_together
dtype: string
- name: subtitle
dtype: string
- name: author
dtype: string
splits:
- name: full
num_bytes: 4603602269
num_examples: 1610012
download_size: 1955009715
dataset_size: 4603602269
- config_name: raw_meta_Gift_Cards
features:
- name: main_category
dtype: string
- name: title
dtype: string
- name: average_rating
dtype: float64
- name: rating_number
dtype: int64
- name: features
sequence: string
- name: description
sequence: string
- name: price
dtype: string
- name: images
sequence:
- name: hi_res
dtype: string
- name: large
dtype: string
- name: thumb
dtype: string
- name: variant
dtype: string
- name: videos
sequence:
- name: title
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: store
dtype: string
- name: categories
sequence: string
- name: details
dtype: string
- name: parent_asin
dtype: string
- name: bought_together
dtype: string
- name: subtitle
dtype: string
- name: author
dtype: string
splits:
- name: full
num_bytes: 1740761
num_examples: 1137
download_size: 401887
dataset_size: 1740761
- config_name: raw_meta_Handmade_Products
features:
- name: main_category
dtype: string
- name: title
dtype: string
- name: average_rating
dtype: float64
- name: rating_number
dtype: int64
- name: features
sequence: string
- name: description
sequence: string
- name: price
dtype: string
- name: images
sequence:
- name: hi_res
dtype: string
- name: large
dtype: string
- name: thumb
dtype: string
- name: variant
dtype: string
- name: videos
sequence:
- name: title
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: store
dtype: string
- name: categories
sequence: string
- name: details
dtype: string
- name: parent_asin
dtype: string
- name: bought_together
dtype: string
- name: subtitle
dtype: string
- name: author
dtype: string
splits:
- name: full
num_bytes: 340772183
num_examples: 164817
download_size: 132049123
dataset_size: 340772183
- config_name: raw_meta_Industrial_and_Scientific
features:
- name: main_category
dtype: string
- name: title
dtype: string
- name: average_rating
dtype: float64
- name: rating_number
dtype: int64
- name: features
sequence: string
- name: description
sequence: string
- name: price
dtype: string
- name: images
sequence:
- name: hi_res
dtype: string
- name: large
dtype: string
- name: thumb
dtype: string
- name: variant
dtype: string
- name: videos
sequence:
- name: title
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: store
dtype: string
- name: categories
sequence: string
- name: details
dtype: string
- name: parent_asin
dtype: string
- name: bought_together
dtype: string
- name: subtitle
dtype: string
- name: author
dtype: string
splits:
- name: full
num_bytes: 986632649
num_examples: 427564
download_size: 425007659
dataset_size: 986632649
- config_name: raw_meta_Musical_Instruments
features:
- name: main_category
dtype: string
- name: title
dtype: string
- name: average_rating
dtype: float64
- name: rating_number
dtype: int64
- name: features
sequence: string
- name: description
sequence: string
- name: price
dtype: string
- name: images
sequence:
- name: hi_res
dtype: string
- name: large
dtype: string
- name: thumb
dtype: string
- name: variant
dtype: string
- name: videos
sequence:
- name: title
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: store
dtype: string
- name: categories
sequence: string
- name: details
dtype: string
- name: parent_asin
dtype: string
- name: bought_together
dtype: string
- name: subtitle
dtype: string
- name: author
dtype: string
splits:
- name: full
num_bytes: 553296301
num_examples: 213593
download_size: 229633633
dataset_size: 553296301
- config_name: raw_meta_Toys_and_Games
features:
- name: main_category
dtype: string
- name: title
dtype: string
- name: average_rating
dtype: float64
- name: rating_number
dtype: int64
- name: features
sequence: string
- name: description
sequence: string
- name: price
dtype: string
- name: images
sequence:
- name: hi_res
dtype: string
- name: large
dtype: string
- name: thumb
dtype: string
- name: variant
dtype: string
- name: videos
sequence:
- name: title
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: store
dtype: string
- name: categories
sequence: string
- name: details
dtype: string
- name: parent_asin
dtype: string
- name: bought_together
dtype: string
- name: subtitle
dtype: string
- name: author
dtype: string
splits:
- name: full
num_bytes: 2291736294
num_examples: 890874
download_size: 972667016
dataset_size: 2291736294
configs:
- config_name: raw_meta_All_Beauty
data_files:
- split: full
path: raw_meta_All_Beauty/full-*
- config_name: raw_meta_Arts_Crafts_and_Sewing
data_files:
- split: full
path: raw_meta_Arts_Crafts_and_Sewing/full-*
- config_name: raw_meta_Cell_Phones_and_Accessories
data_files:
- split: full
path: raw_meta_Cell_Phones_and_Accessories/full-*
- config_name: raw_meta_Electronics
data_files:
- split: full
path: raw_meta_Electronics/full-*
- config_name: raw_meta_Gift_Cards
data_files:
- split: full
path: raw_meta_Gift_Cards/full-*
- config_name: raw_meta_Handmade_Products
data_files:
- split: full
path: raw_meta_Handmade_Products/full-*
- config_name: raw_meta_Industrial_and_Scientific
data_files:
- split: full
path: raw_meta_Industrial_and_Scientific/full-*
- config_name: raw_meta_Musical_Instruments
data_files:
- split: full
path: raw_meta_Musical_Instruments/full-*
- config_name: raw_meta_Toys_and_Games
data_files:
- split: full
path: raw_meta_Toys_and_Games/full-*
---
# Amazon Reviews 2023
**Please also visit [amazon-reviews-2023.github.io/](https://amazon-reviews-2023.github.io/) for more details, loading scripts, and preprocessed benchmark files.**
**[April 7, 2024]** We have added two useful files:
1. `all_categories.txt`: 34 lines (33 categories + "Unknown"), each line contains a category name.
2. `asin2category.json`: A mapping between `parent_asin` (item ID) to its corresponding category name.
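A minimal sketch of using the mapping (the helper names are illustrative; download `asin2category.json` from this repository first):

```python
import json

def load_asin2category(path: str) -> dict:
    """Load the parent_asin -> category-name mapping from asin2category.json."""
    with open(path) as f:
        return json.load(f)

def category_of(mapping: dict, parent_asin: str) -> str:
    # Items whose category is not recorded fall back to "Unknown",
    # mirroring the extra line in all_categories.txt.
    return mapping.get(parent_asin, "Unknown")
```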
---
<!-- Provide a quick summary of the dataset. -->
This is a large-scale **Amazon Reviews** dataset, collected in **2023** by [McAuley Lab](https://cseweb.ucsd.edu/~jmcauley/), and it includes rich features such as:
1. **User Reviews** (*ratings*, *text*, *helpfulness votes*, etc.);
2. **Item Metadata** (*descriptions*, *price*, *raw image*, etc.);
3. **Links** (*user-item* / *bought together* graphs).
## What's New?
In the Amazon Reviews'23, we provide:
1. **Larger Dataset:** We collected 571.54M reviews, 245.2% larger than the last version;
2. **Newer Interactions:** Current interactions range from May. 1996 to Sep. 2023;
3. **Richer Metadata:** More descriptive features in item metadata;
4. **Fine-grained Timestamp:** Interaction timestamp at the second or finer level;
5. **Cleaner Processing:** Cleaner item metadata than previous versions;
6. **Standard Splitting:** Standard data splits to encourage RecSys benchmarking.
## Basic Statistics
> We define the <b>#R_Tokens</b> as the number of [tokens](https://pypi.org/project/tiktoken/) in user reviews and <b>#M_Tokens</b> as the number of [tokens](https://pypi.org/project/tiktoken/) if treating the dictionaries of item attributes as strings. We emphasize them as important statistics in the era of LLMs.
> We count the number of items based on user reviews rather than item metadata files. Note that some items lack metadata.
### Compared to Previous Versions
| Year | #Review | #User | #Item | #R_Token | #M_Token | #Domain | Timespan |
| ----------- | ---------: | -------: | -------: | ---------: | ------------: | ------------: | ------------: |
| [2013](https://snap.stanford.edu/data/web-Amazon-links.html) | 34.69M | 6.64M | 2.44M | 5.91B | -- | 28 | Jun'96 - Mar'13 |
| [2014](https://cseweb.ucsd.edu/~jmcauley/datasets/amazon/links.html) | 82.83M | 21.13M | 9.86M | 9.16B | 4.14B | 24 | May'96 - Jul'14 |
| [2018](https://cseweb.ucsd.edu/~jmcauley/datasets/amazon_v2/) | 233.10M | 43.53M | 15.17M | 15.73B | 7.99B | 29 | May'96 - Oct'18 |
| <b>[2023](https://)</b> | **571.54M** | **54.51M** | **48.19M** | **30.14B** | **30.78B** | **33** | **May'96 - Sep'23** |
### Grouped by Category
| Category | #User | #Item | #Rating | #R_Token | #M_Token | Download |
| ------------------------ | ------: | ------: | --------: | -------: | -------: | ------------------------------: |
| All_Beauty | 632.0K | 112.6K | 701.5K | 31.6M | 74.1M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/All_Beauty.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_All_Beauty.jsonl.gz' download> meta </a> |
| Amazon_Fashion | 2.0M | 825.9K | 2.5M | 94.9M | 510.5M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Amazon_Fashion.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Amazon_Fashion.jsonl.gz' download> meta </a> |
| Appliances | 1.8M | 94.3K | 2.1M | 92.8M | 95.3M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Appliances.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Appliances.jsonl.gz' download> meta </a> |
| Arts_Crafts_and_Sewing | 4.6M | 801.3K | 9.0M | 350.0M | 695.4M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Arts_Crafts_and_Sewing.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Arts_Crafts_and_Sewing.jsonl.gz' download> meta </a> |
| Automotive | 8.0M | 2.0M | 20.0M | 824.9M | 1.7B | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Automotive.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Automotive.jsonl.gz' download> meta </a> |
| Baby_Products | 3.4M | 217.7K | 6.0M | 323.3M | 218.6M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Baby_Products.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Baby_Products.jsonl.gz' download> meta </a> |
| Beauty_and_Personal_Care | 11.3M | 1.0M | 23.9M | 1.1B | 913.7M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Beauty_and_Personal_Care.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Beauty_and_Personal_Care.jsonl.gz' download> meta </a> |
| Books | 10.3M | 4.4M | 29.5M | 2.9B | 3.7B | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Books.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Books.jsonl.gz' download> meta </a> |
| CDs_and_Vinyl | 1.8M | 701.7K | 4.8M | 514.8M | 287.5M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/CDs_and_Vinyl.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_CDs_and_Vinyl.jsonl.gz' download> meta </a> |
| Cell_Phones_and_Accessories | 11.6M | 1.3M | 20.8M | 935.4M | 1.3B | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Cell_Phones_and_Accessories.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Cell_Phones_and_Accessories.jsonl.gz' download> meta </a> |
| Clothing_Shoes_and_Jewelry | 22.6M | 7.2M | 66.0M | 2.6B | 5.9B | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Clothing_Shoes_and_Jewelry.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Clothing_Shoes_and_Jewelry.jsonl.gz' download> meta </a> |
| Digital_Music | 101.0K | 70.5K | 130.4K | 11.4M | 22.3M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Digital_Music.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Digital_Music.jsonl.gz' download> meta </a> |
| Electronics | 18.3M | 1.6M | 43.9M | 2.7B | 1.7B | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Electronics.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Electronics.jsonl.gz' download> meta </a> |
| Gift_Cards | 132.7K | 1.1K | 152.4K | 3.6M | 630.0K | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Gift_Cards.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Gift_Cards.jsonl.gz' download> meta </a> |
| Grocery_and_Gourmet_Food | 7.0M | 603.2K | 14.3M | 579.5M | 462.8M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Grocery_and_Gourmet_Food.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Grocery_and_Gourmet_Food.jsonl.gz' download> meta </a> |
| Handmade_Products | 586.6K | 164.7K | 664.2K | 23.3M | 125.8M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Handmade_Products.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Handmade_Products.jsonl.gz' download> meta </a> |
| Health_and_Household | 12.5M | 797.4K | 25.6M | 1.2B | 787.2M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Health_and_Household.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Health_and_Household.jsonl.gz' download> meta </a> |
| Health_and_Personal_Care | 461.7K | 60.3K | 494.1K | 23.9M | 40.3M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Health_and_Personal_Care.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Health_and_Personal_Care.jsonl.gz' download> meta </a> |
| Home_and_Kitchen | 23.2M | 3.7M | 67.4M | 3.1B | 3.8B | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Home_and_Kitchen.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Home_and_Kitchen.jsonl.gz' download> meta </a> |
| Industrial_and_Scientific | 3.4M | 427.5K | 5.2M | 235.2M | 363.1M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Industrial_and_Scientific.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Industrial_and_Scientific.jsonl.gz' download> meta </a> |
| Kindle_Store | 5.6M | 1.6M | 25.6M | 2.2B | 1.7B | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Kindle_Store.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Kindle_Store.jsonl.gz' download> meta </a> |
| Magazine_Subscriptions | 60.1K | 3.4K | 71.5K | 3.8M | 1.3M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Magazine_Subscriptions.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Magazine_Subscriptions.jsonl.gz' download> meta </a> |
| Movies_and_TV | 6.5M | 747.8K | 17.3M | 1.0B | 415.5M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Movies_and_TV.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Movies_and_TV.jsonl.gz' download> meta </a> |
| Musical_Instruments | 1.8M | 213.6K | 3.0M | 182.2M | 200.1M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Musical_Instruments.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Musical_Instruments.jsonl.gz' download> meta </a> |
| Office_Products | 7.6M | 710.4K | 12.8M | 574.7M | 682.8M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Office_Products.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Office_Products.jsonl.gz' download> meta </a> |
| Patio_Lawn_and_Garden | 8.6M | 851.7K | 16.5M | 781.3M | 875.1M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Patio_Lawn_and_Garden.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Patio_Lawn_and_Garden.jsonl.gz' download> meta </a> |
| Pet_Supplies | 7.8M | 492.7K | 16.8M | 905.9M | 511.0M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Pet_Supplies.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Pet_Supplies.jsonl.gz' download> meta </a> |
| Software | 2.6M | 89.2K | 4.9M | 179.4M | 67.1M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Software.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Software.jsonl.gz' download> meta </a> |
| Sports_and_Outdoors | 10.3M | 1.6M | 19.6M | 986.2M | 1.3B | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Sports_and_Outdoors.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Sports_and_Outdoors.jsonl.gz' download> meta </a> |
| Subscription_Boxes | 15.2K | 641 | 16.2K | 1.0M | 447.0K | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Subscription_Boxes.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Subscription_Boxes.jsonl.gz' download> meta </a> |
| Tools_and_Home_Improvement | 12.2M | 1.5M | 27.0M | 1.3B | 1.5B | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Tools_and_Home_Improvement.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Tools_and_Home_Improvement.jsonl.gz' download> meta </a> |
| Toys_and_Games | 8.1M | 890.7K | 16.3M | 707.9M | 848.3M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Toys_and_Games.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Toys_and_Games.jsonl.gz' download> meta </a> |
| Video_Games | 2.8M | 137.2K | 4.6M | 347.9M | 137.3M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Video_Games.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Video_Games.jsonl.gz' download> meta </a> |
| Unknown | 23.1M | 13.2M | 63.8M | 3.3B | 232.8M | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Unknown.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Unknown.jsonl.gz' download> meta </a> |
> Check Pure ID files and corresponding data splitting strategies in <b>[Common Data Processing](https://amazon-reviews-2023.github.io/data_processing/index.html)</b> section.
## Quick Start
### Load User Reviews
```python
from datasets import load_dataset
dataset = load_dataset("McAuley-Lab/Amazon-Reviews-2023", "raw_review_All_Beauty", trust_remote_code=True)
print(dataset["full"][0])
```
```json
{'rating': 5.0,
'title': 'Such a lovely scent but not overpowering.',
'text': "This spray is really nice. It smells really good, goes on really fine, and does the trick. I will say it feels like you need a lot of it though to get the texture I want. I have a lot of hair, medium thickness. I am comparing to other brands with yucky chemicals so I'm gonna stick with this. Try it!",
'images': [],
'asin': 'B00YQ6X8EO',
'parent_asin': 'B00YQ6X8EO',
'user_id': 'AGKHLEW2SOWHNMFQIJGBECAF7INQ',
'timestamp': 1588687728923,
'helpful_vote': 0,
'verified_purchase': True}
```
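Note that the `timestamp` in the sample record above is Unix time in milliseconds. A minimal sketch for converting it to a timezone-aware datetime (the helper name is illustrative):

```python
from datetime import datetime, timezone

def review_time(timestamp_ms: int) -> datetime:
    # Timestamps in the sample records are Unix time in milliseconds,
    # so divide by 1000 before converting.
    return datetime.fromtimestamp(timestamp_ms / 1000, tz=timezone.utc)

print(review_time(1588687728923))  # timestamp from the example record
```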
### Load Item Metadata
```python
dataset = load_dataset("McAuley-Lab/Amazon-Reviews-2023", "raw_meta_All_Beauty", split="full", trust_remote_code=True)
print(dataset[0])
```
```json
{'main_category': 'All Beauty',
'title': 'Howard LC0008 Leather Conditioner, 8-Ounce (4-Pack)',
'average_rating': 4.8,
'rating_number': 10,
'features': [],
'description': [],
'price': 'None',
'images': {'hi_res': [None,
'https://m.media-amazon.com/images/I/71i77AuI9xL._SL1500_.jpg'],
'large': ['https://m.media-amazon.com/images/I/41qfjSfqNyL.jpg',
'https://m.media-amazon.com/images/I/41w2yznfuZL.jpg'],
'thumb': ['https://m.media-amazon.com/images/I/41qfjSfqNyL._SS40_.jpg',
'https://m.media-amazon.com/images/I/41w2yznfuZL._SS40_.jpg'],
'variant': ['MAIN', 'PT01']},
'videos': {'title': [], 'url': [], 'user_id': []},
'store': 'Howard Products',
'categories': [],
'details': '{"Package Dimensions": "7.1 x 5.5 x 3 inches; 2.38 Pounds", "UPC": "617390882781"}',
'parent_asin': 'B01CUPMQZE',
'bought_together': None,
'subtitle': None,
'author': None}
```
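The `details` field arrives as a JSON-encoded string rather than a dict. A minimal decoding sketch (the fallback for empty or malformed strings is a defensive assumption, not documented behaviour):

```python
import json

def parse_details(raw: str) -> dict:
    # `details` is serialized as a JSON string; decode it back into a dict.
    # Fall back to an empty dict for empty or malformed strings.
    try:
        return json.loads(raw) if raw else {}
    except json.JSONDecodeError:
        return {}

details = parse_details('{"Package Dimensions": "7.1 x 5.5 x 3 inches; 2.38 Pounds", "UPC": "617390882781"}')
print(details["UPC"])  # → 617390882781
```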
> Check data loading examples and Huggingface datasets APIs in <b>[Common Data Loading](https://amazon-reviews-2023.github.io/data_loading/index.html)</b> section.
## Data Fields
### For User Reviews
| Field | Type | Explanation |
| ----- | ---- | ----------- |
| rating | float | Rating of the product (from 1.0 to 5.0). |
| title | str | Title of the user review. |
| text | str | Text body of the user review. |
| images | list | Images that users post after they have received the product. Each image has different sizes (small, medium, large), represented by the small_image_url, medium_image_url, and large_image_url respectively. |
| asin | str | ID of the product. |
| parent_asin | str | Parent ID of the product. Note: Products with different colors, styles, sizes usually belong to the same parent ID. The “asin” in previous Amazon datasets is actually parent ID. <b>Please use parent ID to find product meta.</b> |
| user_id | str | ID of the reviewer. |
| timestamp | int | Time of the review (Unix time; millisecond precision in the raw records). |
| verified_purchase | bool | Whether the purchase was verified. |
| helpful_vote | int | Number of helpful votes the review received. |
### For Item Metadata
| Field | Type | Explanation |
| ----- | ---- | ----------- |
| main_category | str | Main category (i.e., domain) of the product. |
| title | str | Name of the product. |
| average_rating | float | Rating of the product shown on the product page. |
| rating_number | int | Number of ratings of the product. |
| features | list | Bullet-point format features of the product. |
| description | list | Description of the product. |
| price | float | Price in US dollars (at time of crawling). |
| images | list | Images of the product. Each image has different sizes (thumb, large, hi_res). The “variant” field shows the position of the image. |
| videos | list | Videos of the product including title and url. |
| store | str | Store name of the product. |
| categories | list | Hierarchical categories of the product. |
| details | dict | Product details, including materials, brand, sizes, etc. |
| parent_asin | str | Parent ID of the product. |
| bought_together | list | Recommended bundles from the websites. |
## Citation
```bibtex
@article{hou2024bridging,
title={Bridging Language and Items for Retrieval and Recommendation},
author={Hou, Yupeng and Li, Jiacheng and He, Zhankui and Yan, An and Chen, Xiusi and McAuley, Julian},
journal={arXiv preprint arXiv:2403.03952},
year={2024}
}
```
## Contact Us
- **Report Bugs**: To report bugs in the dataset, please file an issue on our [GitHub](https://github.com/hyp1231/AmazonReviews2023/issues/new).
- **Others**: For research collaborations or other questions, please email **yphou AT ucsd.edu**. |
openclimatefix/dwd-icon-eu | openclimatefix | "2024-12-25T08:31:44Z" | 10,856 | 10 | [
"license:mit",
"size_categories:1K<n<10K",
"doi:10.57967/hf/0879",
"region:us",
"climate"
] | null | "2023-03-18T09:42:30Z" | ---
license: mit
tags:
- climate
pretty_name: DWD ICON-EU Forecasts
size_categories:
- 1K<n<10K
---
# Dataset Card for DWD ICON-EU Forecast
This dataset comprises forecasts from the German Weather Service's (DWD) ICON-EU model. From 2020 to March 2023 the forecasts contain variables relevant to solar and wind
forecasting; from March 2023 to the present, all variables are included. Each forecast runs up to 5 days into the future, and the model is run 4 times per day. This data is an archive of
the publicly available data at https://opendata.dwd.de/weather/nwp/, converted to Zarr format with Xarray. No other processing of the data is performed.
## Dataset Details
- **Curated by:** Jacob Bieker, Sol Cotton, Open Climate Fix
- **License:** German Government Open Data License
### Dataset Sources
- **Raw files:** https://opendata.dwd.de/weather/nwp/
Note: The raw files are deleted after 24 hours, and there is no long-term archive available publicly.
## Uses
This data is intended for use in renewable energy forecasting, weather forecasting, and anything that can use high-quality weather forecasts over Europe.
## Dataset Structure
The dataset comprises one Zarr file per forecast initialization time; each forecast extends between 48 and 120 hours into the future. The files are located at data/year/month/day/YYYYMMDDHH.zarr.zip.
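A minimal sketch for constructing the path of a given initialization time (zero-padded month, day, and hour are assumed from the layout above; the helper name is illustrative):

```python
from datetime import datetime

def forecast_path(init_time: datetime) -> str:
    # Files live at data/year/month/day/YYYYMMDDHH.zarr.zip;
    # zero-padding of month/day/hour is an assumption based on the layout above.
    return init_time.strftime("data/%Y/%m/%d/%Y%m%d%H.zarr.zip")

print(forecast_path(datetime(2023, 3, 18, 6)))  # → data/2023/03/18/2023031806.zarr.zip
```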
## Dataset Creation
### Curation Rationale
The DWD ICON-EU model provides high-quality, high-resolution forecasts of European weather that are publicly available and free of charge. The model should generally outperform
NOAA's GFS forecast model, and has a higher temporal and spatial resolution. The main downside is that the raw files are only publicly available for a short period, so this dataset
was set up to provide a public archive of the forecasts for use by researchers in many fields, especially renewable energy forecasting and weather forecasting.
### Source Data
The source data is the grib2 files from the DWD Open Data Server.
#### Data Collection and Processing
The data is collected every day, around 6-8 hours after forecast initialization time, to ensure the forecast has finished running before the data is pulled. The grib2 files are opened
with Xarray and collated into a single Xarray Dataset, with one data variable per ICON variable. Surface variables have "_s" appended to their names to differentiate them from multi-level variables.
The Dataset is then written to Zarr using "ocf_blosc2" to encode and compress the variables. No scaling or alteration of the variables' values is performed.
#### Who are the source data producers?
German Weather Service (DWD)
### Recommendations
These files can be opened directly from HuggingFace and streamed in with Xarray. HuggingFace is fairly slow, though, so the recommended approach is to download the files you want
and open them locally. In either case, you can access the data as follows:
```python
import ocf_blosc2
import xarray as xr
data = xr.open_zarr("path/to/zarr/file")
print(data)
```
Alternatively, for forecasting workloads there is the `ocf_datapipes` package, which handles loading data and training renewable energy forecasting models with multi-modal inputs,
including not only ICON but also satellite data, PV readings, etc.
## Dataset Card Contact
OCF Data Team: [email protected] |
openslr/librispeech_asr | openslr | "2024-08-14T10:48:50Z" | 10,804 | 131 | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"automatic-speech-recognition",
"audio-classification"
] | "2022-03-02T23:29:22Z" | ---
pretty_name: LibriSpeech
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: librispeech-1
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speaker-identification
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.100
num_bytes: 6619683041
num_examples: 28539
- name: train.360
num_bytes: 23898214592
num_examples: 104014
- name: validation
num_bytes: 359572231
num_examples: 2703
- name: test
num_bytes: 367705423
num_examples: 2620
download_size: 30121377654
dataset_size: 31245175287
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.500
num_bytes: 31810256902
num_examples: 148688
- name: validation
num_bytes: 337283304
num_examples: 2864
- name: test
num_bytes: 352396474
num_examples: 2939
download_size: 31236565377
dataset_size: 32499936680
- config_name: all
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 6627791685
num_examples: 28539
- name: train.clean.360
num_bytes: 23927767570
num_examples: 104014
- name: train.other.500
num_bytes: 31852502880
num_examples: 148688
- name: validation.clean
num_bytes: 359505691
num_examples: 2703
- name: validation.other
num_bytes: 337213112
num_examples: 2864
- name: test.clean
num_bytes: 368449831
num_examples: 2620
- name: test.other
num_bytes: 353231518
num_examples: 2939
download_size: 61357943031
dataset_size: 63826462287
---
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:[email protected])
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
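The indexing-order advice above can be illustrated with a toy model of lazy column decoding. This is not the real `datasets` implementation — just a self-contained sketch of why `dataset[0]["audio"]` decodes one file while `dataset["audio"][0]` decodes the whole column:

```python
# Toy model of lazy audio decoding (illustrative only, not `datasets` internals).
decode_count = 0

class LazyAudio:
    """Stands in for an undecoded audio file; decoding is the expensive step."""
    def __init__(self, path):
        self.path = path

    def decode(self):
        global decode_count
        decode_count += 1
        return {"path": self.path, "array": [0.0], "sampling_rate": 16000}

class ToyDataset:
    def __init__(self, rows):
        self.rows = rows

    def __getitem__(self, key):
        if isinstance(key, int):
            # Row access: decode only this row's audio.
            row = dict(self.rows[key])
            row["audio"] = row["audio"].decode()
            return row
        # Column access: every row's audio must be decoded.
        return [r[key].decode() if key == "audio" else r[key]
                for r in self.rows]

ds = ToyDataset([{"audio": LazyAudio(f"{i}.flac"), "text": "..."}
                 for i in range(1000)])

ds[0]["audio"]          # decodes a single file
assert decode_count == 1
ds["audio"][0]          # decodes all 1000 files just to read one
assert decode_count == 1001
```

The same asymmetry is why the card recommends querying the sample index before the `"audio"` column.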
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings of people who have donated their voices online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
lithium0003/findtextCenterNet_dataset | lithium0003 | "2024-11-16T15:43:06Z" | 10,799 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | null | "2024-01-14T08:58:51Z" | ---
license: mit
---
|
deepghs/gelbooru_full | deepghs | "2024-12-26T01:15:52Z" | 10,789 | 36 | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"task_categories:text-to-image",
"annotations_creators:no-annotation",
"source_datasets:gelbooru",
"language:en",
"license:other",
"size_categories:10M<n<100M",
"region:us",
"art",
"anime",
"not-for-all-audiences"
] | [
"image-classification",
"zero-shot-image-classification",
"text-to-image"
] | "2024-04-26T18:57:28Z" | ---
license: other
task_categories:
- image-classification
- zero-shot-image-classification
- text-to-image
language:
- en
tags:
- art
- anime
- not-for-all-audiences
size_categories:
- 10M<n<100M
annotations_creators:
- no-annotation
source_datasets:
- gelbooru
---
# Gelbooru Full Dataset
This is the full dataset of [gelbooru.com](https://gelbooru.com/), with all of the original images maintained here.
# How to Painlessly Use This
Use [cheesechaser](https://github.com/deepghs/cheesechaser) to quickly get images from this repository.
Before using this code, you have to **request and be granted access to this gated repository**, and then **set your personal Hugging Face token in the `HF_TOKEN` environment variable** so the code is authorized to read this repository.
```python
from cheesechaser.datapool import GelbooruDataPool
pool = GelbooruDataPool()
pool.batch_download_to_directory(
# download images #7000000-7000100, any ranges or id lists are okay
resource_ids=range(7000000, 7000100),
# save to directory /data/gelbooru
dst_dir='/data/gelbooru',
)
```
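One way to supply the token from Python itself before running the snippet above — the token value here is a placeholder, not a real credential:

```python
import os

# Placeholder token — substitute your real Hugging Face access token,
# or export HF_TOKEN in your shell before launching Python.
os.environ.setdefault("HF_TOKEN", "hf_your_token_here")
assert os.environ["HF_TOKEN"], "HF_TOKEN must be set before downloading"
```

Setting the variable before importing/using cheesechaser ensures the gated-repository download is authenticated.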
# Information
## Images
There are 10,083,708 images in total, with a maximum image ID of 11,191,848. Last updated at `2024-12-26 10:23:03 JST`.
The information for the 50 most recent images is listed below:
| id | filename | width | height | mimetype | tags | file_size | file_url |
|---------:|:--------------|--------:|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------:|:-----------------------------------------------------------------------------|
| 11191848 | 11191848.jpg | 3000 | 3000 | image/jpeg | 1boy absurdres artist_name black_fur bracelet cape christmas chromatic_aberration closed_mouth commentary feet_out_of_frame film_grain fur-trimmed_headwear fur_trim furry furry_male glint gloves hat hedgehog_boy highres instagram_logo instagram_username jewelry looking_at_viewer male_focus red_cape red_eyes red_fur red_hat santa_hat shadow_the_hedgehog shiiyou shoes smile snowflakes solo sonic_(series) symbol-only_commentary twitter_username twitter_x_logo two-tone_fur white_footwear white_gloves | 2152633 | https://img3.gelbooru.com/images/f4/23/f423150983af5d9a5c54808bb0e10d0c.jpg |
| 11191845 | 11191845.png | 1012 | 1433 | image/png | 1girl :p bike_shorts blue_eyes blush breasts brown_hair cameltoe clothes_pull covering_breasts covering_privates creatures_(company) game_freak grey_background groin highres holding huuuna large_breasts licking_lips looking_at_viewer may_(pokemon) navel nintendo pokemon pokemon_rse pulling_own_clothes red_shirt shirt short_hair shorts skirt skirt_pull solo thighs tongue tongue_out undressing | 746214 | https://img3.gelbooru.com/images/cc/d3/ccd32a632db8e9d2e7eadfacc03473a0.png |
| 11191803 | 11191803.jpg | 2397 | 3871 | image/jpeg | 1girl absurdres aged_down armlet barefoot black_dress blush bracer branch detached_sleeves dress feet gold hands_in_hair highres holding holding_own_hair hololive hololive_english jewelry legs long_hair looking_at_viewer mori_calliope mori_calliope_(1st_costume) multicolored_hair no_shoes pink_hair plant red_eyes single_detached_sleeve single_thighhigh sitting skindentation smile soles solo thigh_strap thighhighs thighs tiara toes very_long_hair virtual_youtuber white_background zserty555 | 5590034 | https://img3.gelbooru.com/images/c1/dc/c1dc83ec7faec617050652b2ceb39a1e.jpg |
| 11191793 | 11191793.jpg | 2906 | 4096 | image/jpeg | 2girls :3 :d absurdres animal_ears black_cat black_eyes black_hair black_hoodie blush_stickers bright_pupils brown_eyes cat cat_ears cat_girl christmas christmas_lights christmas_ornaments christmas_tree commentary_request grey_pants highres hood hood_down hoodie indoors jacket long_hair long_sleeves megateru motion_blur multiple_girls open_clothes open_jacket open_mouth original outstretched_arms pants ponytail purple_skirt red_hair shirt sidelocks signature skirt smile socks solo_focus star_ornament surprised turtleneck turtleneck_shirt white_jacket white_pupils white_socks | 1875473 | https://img3.gelbooru.com/images/c7/1d/c71d9be511d90e18707fa52ee73fd229.jpg |
| 11191775 | 11191775.png | 2800 | 2600 | image/png | 1boy 1girl absurdly_long_hair absurdres akari_(pokemon) arms_behind_back ass bdsm black_bodysuit black_eyes blue_hair blue_jacket blush bodysuit bondage bound bound_wrists breasts creatures_(company) crotch_rope cursed_delta drooling fellatio female_focus fur-trimmed_sleeves fur_trim game_freak headband headscarf highres huge_ass jacket large_breasts long_hair long_sleeves looking_at_viewer looking_to_the_side looking_up multiple_views nintendo obi obijime oral penis pokemon pokemon_legends:_arceus red_scarf rope saliva sash scarf seductive_gaze seductive_smile shibari shibari_over_clothes short_sleeves sidelocks simple_background skin_tight skindentation smile solo_focus thick_thighs thighs very_long_hair white_background white_headband white_headscarf | 2722576 | https://img3.gelbooru.com/images/95/8c/958c49c10bda5cc4ca817b35ea7d7234.png |
| 11191762 | 11191762.jpg | 1668 | 2388 | image/jpeg | 1girl black_serafuku black_skirt blush brown_eyes brown_hair chibi chibi_inset envelope grey_background highres holding holding_envelope irasutoya irasutoya_challenge kagenone long_sleeves medium_hair neckerchief pale_skin red_neckerchief school_uniform serafuku skirt | 4096724 | https://img3.gelbooru.com/images/30/d9/30d9cf039413877951a134b8d15f8b54.jpg |
| 11191750 | 11191750.jpg | 1920 | 1200 | image/jpeg | 1boy 3girls ahoge antonio_(ragnarok_online) arms_up bell black_gloves boots bow box breasts brown_footwear brown_hair brown_jacket brown_skirt brown_socks cape christmas christmas_tree copyright_notice creator_(ragnarok_online) dress exyagi from_above full_body fur-trimmed_footwear fur-trimmed_headwear fur_trim gift gift_box gloves green_bow green_eyes green_hair guest_art hair_bow hat highres jacket lif_(ragnarok_online) logo long_hair long_sleeves looking_at_viewer medium_breasts merchant_(ragnarok_online) multiple_girls navel official_art official_wallpaper open_clothes open_jacket open_mouth pink_hair plunging_neckline pom_pom_(clothes) ragnarok_online red_cape red_hat santa_hat shirt shoes short_dress skirt sleigh smile snow snowing socks teeth very_long_hair white_gloves white_shirt | 1268939 | https://img3.gelbooru.com/images/e4/ab/e4ab421a022bd0143c70b2a9cc7daa99.jpg |
| 11191739 | 11191739.jpg | 1500 | 1500 | image/jpeg | animal_focus artist_name ball black_hole border_collie brown_eyes commission device dog drooling highres kihara_utsuru novelty_glasses pickles_(reverse:1999) pixiv_commission reverse:1999 signature soccer_ball space speech_bubble thought_bubble | 1271817 | https://img3.gelbooru.com/images/d0/3a/d03ada1a3e1cbe3c86ce85f9e797c1e4.jpg |
| 11191738 | 11191738.jpg | 1668 | 2388 | image/jpeg | 1girl autumn_leaves blush book braided_twintails brown_eyes brown_hair brown_sweater highres holding holding_book kagenone leaf long_hair long_sleeves original solo sweater upper_body wide_sleeves | 2326928 | https://img3.gelbooru.com/images/cf/86/cf86585c82c5ebd78ec4f090d8985522.jpg |
| 11191737 | 11191737.png | 2586 | 3776 | image/png | 1girl absurdres black_pantyhose blue_eyes breasts brown_hair cowboy_shot creatures_(company) double_bun doughnut_hair_bun game_freak grey_background hair_bun highres keiz looking_at_viewer medium_breasts nintendo nipples open_mouth pantyhose pantyhose_under_shorts pokemon rosa_(pokemon) see-through_clothes see-through_shirt shirt shorts simple_background solo visor_cap wet wet_clothes wet_shirt yellow_shorts | 2230557 | https://img3.gelbooru.com/images/a4/3c/a43c05614b01563b289716e3ee911fb6.png |
| 11191727 | 11191727.png | 2894 | 4093 | image/png | 1girl absurdres animal_ear_piercing animal_ears ball bare_shoulders beachball bikini bikini_top_only bow breasts brown_hair brown_tail camouflage_bikini_top collarbone commentary_request cowlick double_bikini eyewear_on_head fangs full_body gold_necklace hair_ornament hairclip highres horse_ears horse_girl horse_tail jewelry jungle_pocket_(umamusume) layered_bikini light_blush looking_at_viewer navel nebusoku necklace open_mouth sidelocks sky small_breasts solo swimsuit tail umamusume water water_drop wet yellow_bikini yellow_eyes | 4552461 | https://img3.gelbooru.com/images/17/b9/17b99f3804aa0881a0e9b26fa6727212.png |
| 11191726 | 11191726.jpg | 688 | 912 | image/jpeg | 1girl alternate_costume bare_shoulders blue_archive blush bow cross_hair_ornament dress foulo_paints green_bow green_eyes green_halo grey_hair hair_ornament hairclip halo hare_(blue_archive) long_hair open_mouth ponytail red_dress santa_dress sidelocks simple_background solo white_background | 263343 | https://img3.gelbooru.com/images/02/34/02345898623075346a7029057bdbcb73.jpg |
| 11191722 | 11191722.jpeg | 2507 | 3541 | image/jpeg | 1girl akan_mori areola_slip breasts dot_nose dragon_girl dragon_horns female_focus functionally_nude gigantic_breasts grey_eyes horns japanese_text looking_at_viewer lovely_labrynth_of_the_silver_castle matching_hair/eyes open_mouth pointy_ears shiny_skin smile solo standing sweat translation_request white_background white_hair wide_hips yu-gi-oh! | 532172 | https://img3.gelbooru.com/images/68/a7/68a73548c2cbcefcae9e231eaf9a7c4d.jpeg |
| 11191720 | 11191720.jpeg | 2507 | 3541 | image/jpeg | 1girl ahoge akan_mori anila_(granblue_fantasy) areola_slip blonde_hair breasts bursting_breasts dot_nose female_focus gigantic_breasts granblue_fantasy horns loincloth long_hair open_mouth shiny_skin smile solo standing thick_thighs thighhighs thighs white_background white_thighhighs wide_hips wide_sleeves | 695982 | https://img3.gelbooru.com/images/1b/b0/1bb0d29d5eaeb66e6e7799ccfe0c4be9.jpeg |
| 11191714 | 11191714.png | 1500 | 2200 | image/png | 1girl ahoge alternate_costume animal_ears beach black_hair blue_archive blush breasts cleanerjay clenched_hands collarbone competition_swimsuit day double-parted_bangs fang from_above hair_between_eyes hair_bun hair_ornament halo hands_up highres hip_vent huge_breasts kaede_(blue_archive) kaede_(swimsuit)_(blue_archive) large_breasts looking_at_viewer magatama ocean one-piece_swimsuit open_mouth oppai_loli outdoors pink_halo sand short_hair sidelocks single_side_bun skin_fang skindentation smile solo swimsuit thighhighs thighs white_one-piece_swimsuit white_thighhighs yellow_eyes | 2461625 | https://img3.gelbooru.com/images/a1/48/a14869c5100df87a3b42248a10eae9da.png |
| 11191713 | 11191713.jpg | 4096 | 2642 | image/jpeg | 1girl absurdres animal_ears antlers black_hair bleach breasts candy candy_cane christmas_present christmas_tree couch cushion deer_ears elbow_gloves fake_animal_ears fake_antlers food fur-trimmed_gloves fur-trimmed_thighhighs fur_trim gift gloves hair_between_eyes highleg highleg_leotard highres horns kuchiki_rukia leotard lying on_back on_couch purple_eyes red_gloves red_leotard red_thighhighs reindeer_antlers ribbon short_hair small_breasts solo thighhighs waligner | 7319148 | https://img3.gelbooru.com/images/c9/ee/c9ee0daaf2259b49de1209e1d522b05a.jpg |
| 11191709 | 11191709.jpg | 1277 | 1381 | image/jpeg | 1girl aqua_eyes blush christmas commentary_request cool-kyou_shinja full_body gloves grey_hair hat highres kanna_kamui kobayashi-san_chi_no_maidragon long_hair long_sleeves looking_to_the_side low_twintails official_art open_mouth red_gloves red_hat red_thighhighs santa_costume santa_hat speech_bubble thighhighs translation_request twintails | 196150 | https://img3.gelbooru.com/images/30/ba/30baa75409510c4f82fbf5939cc94275.jpg |
| 11191700 | 11191700.png | 852 | 1309 | image/png | 1girl belt black_hair breasts cassandra_(tangled) closed_mouth collared_shirt cowboy_shot expressionless eyewear_on_head gloves hair_between_eyes highres looking_down medium_breasts monochrome pants shirt short_hair solo standing tangled tangled:_the_series yukima | 1195882 | https://img3.gelbooru.com/images/c0/0a/c00a3b7f06738829801db0c377b55157.png |
| 11191699 | 11191699.jpg | 1000 | 1088 | image/jpeg | alternate_costume animal_ear_fluff animal_ears ayamy black_skirt blonde_hair blue_eyes blue_hair breasts cheerleader cowboy_shot cropped_shirt dog_ears dog_girl dog_tail fuwawa_abyssgard hololive hololive_english large_breasts long_hair miniskirt mococo_abyssgard multicolored_hair one_eye_closed pink_eyes pink_hair pleated_skirt pom_pom_(cheerleading) shirt short_hair siblings sisters skirt sleeveless sleeveless_shirt small_breasts streaked_hair tail twins two-tone_hair two_side_up v virtual_youtuber white_background white_shirt | 228372 | https://img3.gelbooru.com/images/fe/bd/febd3505f8919d9a00d11515caefed24.jpg |
| 11191695 | 11191695.jpg | 6253 | 4388 | image/jpeg | 1girl :o absurdres alternate_costume animal_collar animal_ear_fluff animal_ear_hairband animal_ears arms_up bell bespectacled black-framed_eyewear black_collar black_eyes black_footwear black_hairband black_jacket black_pants black_thighhighs blue_eyes blue_hair blue_nails blue_ribbon blue_thighhighs bracelet buttons cat_ear_hairband cat_ears cat_tail chain chain_necklace character_name chibi chinese_commentary closed_mouth coat collar collared_coat commentary_request copyright_name diamond-shaped_pupils diamond_(shape) eyelashes fake_animal_ears fake_tail fortissimo frilled_jacket frills glasses gloves gradient_thighhighs grey_pupils hair_between_eyes hair_ornament hairband hairclip hand_on_another's_chin hands_on_eyewear heart highres jacket jacket_partially_removed jewelry lips long_sleeves looking_at_viewer mahou_shoujo_madoka_magica mahou_shoujo_madoka_magica_(anime) miki_sayaka multiple_bracelets multiple_rings multiple_views musical_note musical_note_hair_ornament nail_polish neck_bell necklace nervous no_mouth ouge_dz pants parted_lips paw_pose pink_lips ribbon ring shirt shoes short_hair sneakers solid_oval_eyes spoken_heart sweat symbol-shaped_pupils tail thighhighs turtleneck turtleneck_shirt upper_body weibo_watermark white_background white_coat white_footwear white_gloves white_shirt | 5259522 | https://img3.gelbooru.com/images/1e/c6/1ec6566b4027b4ca179727e99b392b9a.jpg |
| 11191691 | 11191691.jpg | 1600 | 1200 | image/jpeg | 1girl azure_striker_gunvolt azure_striker_gunvolt_3 blue_eyes breasts brown_hair china_dress chinese_clothes chinese_zodiac cleavage cowboy_shot dress gunvolt highres horns kirin_(armor) large_breasts multicolored_hair official_art pink_hair red_dress red_hair translation_request wide_sleeves year_of_the_dragon | 1470362 | https://img3.gelbooru.com/images/19/dd/19dd6069593eb8d78958d948e1e8979f.jpg |
| 11191689 | 11191689.png | 1179 | 982 | image/png | 1boy 1girl apron belt black_hair blue_eyes blush boots brown_shirt cassandra_(tangled) chibi dagger eyewear_on_head freckles full_body gloves green_eyes green_shirt hair_between_eyes heart holding_hands knife long_sleeves off-shoulder_shirt off_shoulder pants red_shirt shirt shirt_under_shirt short_hair striped_clothes striped_pants tangled tangled:_the_series varian_(tangled) vertical-striped_clothes vertical-striped_pants walking weapon yukima | 481898 | https://img3.gelbooru.com/images/b0/37/b037b1f72ac11d92d1a6671a41afd9ee.png |
| 11191683 | 11191683.jpeg | 727 | 1200 | image/jpeg | 1girl blue_archive gun kayoko_(blue_archive) muginosa tagme weapon | 1047363 | https://img3.gelbooru.com/images/63/60/636037b767e63881d61765344f756e62.jpeg |
| 11191680 | 11191680.jpeg | 1200 | 810 | image/jpeg | 2girls angel angel_wings catfight jirai_kei muginosa multiple_girls school_uniform tagme wings | 1436180 | https://img3.gelbooru.com/images/83/73/837340945882471a5839a9808d149c99.jpeg |
| 11191679 | 11191679.jpeg | 849 | 1200 | image/jpeg | 1girl arms_behind_head artist_logo breasts brown_eyes brown_hair christmas christmas_tree cleavage deviantart_username discord_username facebook_username fire fireplace grin hat indoors jadenkaiba large_breasts logo long_hair looking_at_viewer matching_hair/eyes navel original panties pixiv_username santa_costume santa_hat shadow smile solo striped_clothes striped_panties striped_thighhighs thick_thighs thighhighs thighs underwear watermark web_address | 939438 | https://img3.gelbooru.com/images/e0/28/e028abaa781d7bb8ace89ef449f172fc.jpeg |
| 11191672 | 11191672.jpg | 1060 | 1800 | image/jpeg | 1girl alternate_costume areola_slip bare_shoulders bell bikini black_thighhighs blush breasts brown_hair cleavage commentary cowboy_shot elbow_gloves english_commentary front-tie_bikini_top front-tie_top genshin_impact gift gloves green_eyes hair_ornament hand_up highres huge_breasts lisa_(genshin_impact) looking_at_viewer medium_hair navel parted_lips red_bikini red_gloves sack side-tie_bikini_bottom sideboob smile solo stomach swimsuit thighhighs underboob zaphn | 1786012 | https://img3.gelbooru.com/images/b9/91/b9917d3ac26fa807a44be22bf7bda9e4.jpg |
| 11191655 | 11191655.jpg | 2893 | 4093 | image/jpeg | 1girl absurdres animal_ears blue_eyes blush boots breasts collarbone commentary_request freely2327 full_body grey_hair hair_between_eyes hair_ornament highres horse_ears horse_girl horse_tail long_hair looking_at_viewer pantyhose simple_background solo tail thick_eyebrows umamusume wonder_acute_(umamusume) | 3235653 | https://img3.gelbooru.com/images/c9/ab/c9abbf876bda585598318d8c4964cc85.jpg |
| 11191653 | 11191653.jpg | 1000 | 1000 | image/jpeg | 1boy arm_up baseball_cap collarbone commentary english_commentary fangs hand_on_own_face hat head_tilt male_focus no_eyes off_(game) pastel_colors pink_background shirt solo sparkle t-shirt tanblaque teeth the_batter_(off) | 63350 | https://img3.gelbooru.com/images/d7/17/d7175671d85e22a11f0fa81077b2c2ce.jpg |
| 11191649 | 11191649.jpg | 1200 | 1200 | image/jpeg | 2girls :d arashi_chisato black_hair black_jacket black_shorts blue_dress blunt_bangs bow bright_pupils chibi closed_mouth collarbone commentary_request cropped_shoulders double_bun dress grey_hair hair_bow hair_bun hair_down hazuki_ren heart high_ponytail highres holding_hands jacket long_hair long_sleeves looking_at_viewer looking_back love_live! love_live!_superstar!! miyaco_(shirataki_168) multiple_girls multiple_views off_shoulder open_clothes open_jacket open_mouth pink_eyes pink_shirt ponytail shirt shorts sidelocks smile split_mouth translation_request twintails very_long_hair white_background white_bow white_pupils white_shirt yellow_eyes yellow_shirt yuri | 718946 | https://img3.gelbooru.com/images/a4/39/a439354e177571968ffcfaef7173d6c3.jpg |
| 11191623 | 11191623.png | 3300 | 5500 | image/png | 1girl absurdres armband black_armband black_gloves black_horns black_jacket black_pants breasts button_gap closed_mouth commentary_request cowboy_shot curled_horns demon_tail earrings f.trigger gloves half_gloves hand_on_table highres horns jacket jacket_on_shoulders jewelry key_earrings large_breasts long_hair looking_at_viewer multicolored_hair necktie pants partial_commentary red_eyes red_necktie revision sleeves_rolled_up solo streaked_hair tail white_hair wuthering_waves zani_(wuthering_waves) | 5586730 | https://img3.gelbooru.com/images/57/77/577786e815ccfd59cf2ad744a90d83d8.png |
| 11191618 | 11191618.jpg | 1137 | 1491 | image/jpeg | 1girl alternate_costume blunt_bangs breasts capelet cleavage crescent crescent_hat_ornament green_ribbon hair_ribbon hat hat_ornament highres long_hair medium_breasts midriff navel patchouli_knowledge purple_eyes purple_hair red_capelet red_thighhighs reddizen ribbon santa_hat sidelocks simple_background solo thighhighs touhou very_long_hair white_background | 536827 | https://img3.gelbooru.com/images/db/fe/dbfe87da2679fadbe91bc6e4a50a93af.jpg |
| 11191609 | 11191609.jpg | 2257 | 1301 | image/jpeg | 1boy 9175x911 coat derivative_work dilated_pupils hair_over_one_eye highres homicipher looking_at_viewer male_focus monitoring_(vocaloid) mr._scarletella pov pov_peephole red_coat red_eyes red_hair short_hair solo | 417490 | https://img3.gelbooru.com/images/a3/20/a320ac304aaaa60847474fe6323f3699.jpg |
| 11191608 | 11191608.jpg | 1087 | 1235 | image/jpeg | 1girl black_ribbon breasts capelet christmas_present closed_mouth elbow_gloves fur-trimmed_capelet fur-trimmed_gloves fur_trim gift gloves green_capelet green_gloves green_hair green_thighhighs highres holding holding_scissors jk_arts large_breasts merry_christmas naked_ribbon purple_eyes ribbon scissors side_ponytail smile solo sousou_no_frieren striped_clothes striped_gloves striped_thighhighs thighhighs ubel_(sousou_no_frieren) | 469028 | https://img3.gelbooru.com/images/db/31/db31b54d37e047214a74fbc77cef7ffe.jpg |
| 11191599 | 11191599.jpg | 860 | 1214 | image/jpeg | 1girl bare_shoulders blush breasts dress elf frieren green_eyes grey_hair highres long_hair looking_at_viewer lying mimic mimic_chest pantyhose parted_bangs pointy_ears red_dress red_ribbon ribbon small_breasts solo_focus sousou_no_frieren ten-chan_(eternal_s) thighs tongue tongue_out twintails white_pantyhose | 145853 | https://img3.gelbooru.com/images/9b/fa/9bfae4c181f543f5860c0effe09efdcd.jpg |
| 11191597 | 11191597.png | 3000 | 4000 | image/png | 1girl absurdres aqua_eyes aqua_hair arm_warmers ascot bare_shoulders blush breasts christmas christmas_tree_hair_ornament cleavage detached_collar detached_sleeves dress full_body fur-trimmed_dress fur-trimmed_sleeves fur_trim hair_ornament hairclip highres holding holding_sack kantai_collection large_breasts long_hair looking_at_viewer orange_ascot red_dress sack santa_costume santa_dress smile solo strapless strapless_dress suzuya_(christmas_mode)_(kancolle) suzuya_(kancolle) thighhighs white_background yoko_hata zoom_layer | 8248923 | https://img3.gelbooru.com/images/c3/cd/c3cdefcc4d11a5c971e4893026b8218d.png |
| 11191593 | 11191593.jpg | 1900 | 2000 | image/jpeg | 2girls bell breasts dildo double_dildo highres mdthetest multiple_girls neck_bell santa_costume sex_toy small_breasts thighhighs | 376210 | https://img3.gelbooru.com/images/26/07/26072bd72f72e6dd9b1a521430d5cf22.jpg |
| 11191591 | 11191591.jpg | 900 | 1277 | image/jpeg | 1girl blue_eyes brown_hair commentary english_commentary full_body goggles goggles_on_headwear green_jacket hat highres jacket long_hair original red_hat sandragh signature simple_background smile solo sparkle twintails | 185098 | https://img3.gelbooru.com/images/ed/38/ed38cbd122cebfe0fce2583f85a5b69f.jpg |
| 11191587 | 11191587.png | 3000 | 4385 | image/png | 1girl absurdres animal_ears bikini breasts christmas elbow_gloves full_moon gloves gold_ship_(umamusume) green_scarf grey_hair hat highres horse_ears horse_girl horse_tail large_breasts long_hair looking_at_viewer moon navel night night_sky outdoors pink_eyes red_bikini red_gloves red_thighhighs santa_hat scarf sky smile snow solo standing swimsuit tail thighhighs thighs ugohato umamusume | 10013126 | https://img3.gelbooru.com/images/b2/ed/b2edecd00d1a11b42ee09af7efbfb488.png |
| 11191586 | 11191586.png | 3000 | 4000 | image/png | 1girl absurdres aqua_hair blue_eyes blush boots breasts brown_hair capelet christmas dress full_body fur-trimmed_capelet fur-trimmed_dress fur_trim gloves hair_ornament hairclip highres kantai_collection kumano_(christmas_mode)_(kancolle) kumano_(kancolle) long_hair looking_at_viewer mittens open_mouth ponytail red_capelet red_dress red_gloves red_mittens santa_costume small_breasts smile solo white_background yoko_hata zoom_layer | 7290383 | https://img3.gelbooru.com/images/3f/a9/3fa94e79b4bc369fa70c7d4342e0fcfb.png |
| 11191574 | 11191574.jpg | 1024 | 1862 | image/jpeg | 1girl 987645321o belt black_belt black_pantyhose boots brown_footwear capelet earrings elf frieren full_body green_eyes highres jewelry long_hair pantyhose pointy_ears shirt skirt solo sousou_no_frieren standing striped_clothes striped_shirt twintails white_background white_capelet white_hair white_skirt | 668212 | https://img3.gelbooru.com/images/0f/ad/0fad74acf5d0e4c2f59b2f1619888f35.jpg |
| 11191569 | 11191569.jpg | 3512 | 2551 | image/jpeg | 3girls ;3 ^_^ absurdres blush bow brown_hair capelet christmas closed_eyes closed_mouth clueless commentary crossed_bangs english_text flying_sweatdrops fujishima_megumi fur-trimmed_capelet fur_trim fuyu_no_okurimono_(love_live!) girl_sandwich glaring green_background green_eyes hair_bow half_up_braid hanazora_satsuki heart high-waist_skirt highres hinoshita_kaho jealous link!_like!_love_live! long_hair looking_at_another love_live! medium_hair multiple_girls musical_note neck_ribbon official_alternate_costume official_alternate_hairstyle one_eye_closed orange_hair otomune_kozue outline purple_eyes purple_hair red_bow red_capelet red_ribbon red_skirt ribbon sandwiched shirt sidelocks skirt split_mouth striped_ribbon symbol-only_commentary twintails two_side_up virtual_youtuber white_capelet white_outline white_ribbon white_shirt yuri | 737568 | https://img3.gelbooru.com/images/b3/f8/b3f811bb0c062e26ab482e2dbb1a206b.jpg |
| 11191563 | 11191563.jpg | 1920 | 1200 | image/jpeg | 1boy 1girl bell black_coat black_eyes black_hair blue_background blush bow briefs closed_mouth coat cone_horns copyright_notice cross cross_necklace detached_sleeves expressionless fake_horns flower gold_trim green_eyes guest_art gypsy_(ragnarok_online) hair_bell hair_bow hair_ornament highres holding holding_flower holding_underwear holding_unworn_clothes horns jewelry jingle_bell kankiriko logo long_hair long_sleeves looking_at_viewer male_underwear multicolored_coat necklace official_art official_wallpaper open_mouth pink_bow pink_sleeves polka_dot polka_dot_background priest_(ragnarok_online) purple_hair ragnarok_online red_coat short_hair sunflower sweatdrop torn_underwear two-tone_coat underwear unworn_male_underwear upper_body white_male_underwear yellow_flower yellow_horns | 1426344 | https://img3.gelbooru.com/images/b8/ca/b8cad78e216a6065ab1c88594ffcecd2.jpg |
| 11191561 | 11191561.jpg | 1821 | 1290 | image/jpeg | 2girls apron averting_eyes black_dress blue_eyes blue_hair blue_ribbon blush bow_background braid breasts burger closed_eyes commentary crown_braid dark_blue_hair diagonal_bangs dollchestra dress facing_another feeding floating_hair food french_fries frilled_apron frills grey_hair hair_ribbon heart highres holding holding_burger holding_food juliet_sleeves link!_like!_love_live! long_hair long_sleeves looking_to_the_side love_live! low_twin_braids low_twintails medium_breasts mimori_(cotton_heart) multiple_girls murano_sayaka official_alternate_costume open_mouth pink_background puffy_sleeves ribbon short_hair short_twintails sideways_glance striped_clothes striped_dress symbol-only_commentary twin_braids twintails two-tone_dress upper_body vertical-striped_clothes vertical-striped_dress virtual_youtuber waist_apron waitress wendy's white_dress yellow_apron yugiri_tsuzuri yuri | 292691 | https://img3.gelbooru.com/images/45/42/45427e4052f298c7eda5daae62f49abc.jpg |
| 11191560 | 11191560.jpeg | 1049 | 744 | image/jpeg | 2girls borrowed_character catfight commission mixed_martial_arts multiple_girls original pixiv_commission sage_tory tagme | 572195 | https://img3.gelbooru.com/images/37/6f/376feef555143507476be4a2054b467e.jpeg |
| 11191558 | 11191558.png | 2300 | 3000 | image/png | 1boy 1girl 2koma after_vaginal aftersex bar_censor bed blush breasts bukkake censored comic cum cum_in_pussy cum_on_body cum_on_breasts cum_on_hair cum_on_lower_body cum_on_upper_body heart huge_penis japanese_text long_hair lying monochrome on_bed open_mouth original penis smile speech_bubble twintails veins veiny_penis | 3267940 | https://img3.gelbooru.com/images/4e/95/4e953b7f950899ea5e12f7021a234cb5.png |
| 11191550 | 11191550.jpeg | 1024 | 768 | image/jpeg | air_master black_eyes black_hair black_jacket headband jacket kinjiro_kitaeda red_headband shirt sign watch white_shirt wristwatch | 87745 | https://img3.gelbooru.com/images/8d/7a/8d7acee8805a8af2f0edec3042b95fcd.jpeg |
| 11191543 | 11191543.png | 3035 | 4299 | image/png | 1boy 1girl absurdres black-framed_eyewear breasts censored completely_nude dark-skinned_male dark_skin english_text feet full_nelson glasses heart highres imsofckinlost interracial large_penis mosaic_censoring nipples nude orange_hair penis persona persona_5 purple_eyes pussy sakura_futaba sex soles sound_effects steam sweat tongue tongue_out vaginal | 17576059 | https://img3.gelbooru.com/images/b9/05/b9052937f0834f77538744bd54c29fb3.png |
| 11191540 | 11191540.jpg | 1000 | 1000 | image/jpeg | 1boy animal_ears antlers aynoh breath brown_jacket cup deer_antlers deer_boy deer_ears green_eyes grey_background holding holding_cup horns hot_chocolate jacket looking_at_viewer male_focus mug open_mouth orange_hair original red_scarf scarf short_hair simple_background solo upper_body very_short_hair | 135211 | https://img3.gelbooru.com/images/a8/18/a8184d7e4f5edb93102e938eb55acfb1.jpg |
| 11191533 | 11191533.jpg | 2048 | 1072 | image/jpeg | 1girl blonde_hair blush_stickers breasts cheek_pinching chibi closed_eyes ebora fate/grand_order fate_(series) gareth_(fate) green_eyes hair_flaps highres medium_breasts multiple_views open_mouth pinching shirt short_hair short_sleeves smile white_shirt | 156439 | https://img3.gelbooru.com/images/95/77/9577d226a701f537a84e11078ab9e1a8.jpg |
| 11191530 | 11191530.jpg | 1500 | 2300 | image/jpeg | 1girl black_gloves blue_background blue_eyes blue_hair boots c: capelet closed_mouth commentary_request dark_blue_hair embarrassed feet_out_of_frame floating_hair gloves hair_intakes hair_ribbon hat high-waist_skirt highres kanduki_kamibukuro knee_boots link!_like!_love_live! long_hair looking_at_viewer love_live! low_twintails miniskirt murano_sayaka nervous_smile pleated_skirt red_capelet red_footwear red_hat red_ribbon red_skirt ribbon santa_costume santa_hat shirt skirt skirt_hold smile solo sweatdrop twintails virtual_youtuber white_shirt | 297251 | https://img3.gelbooru.com/images/9d/4a/9d4a0fcc1d56853fa5ccb77fd91ccd58.jpg |
## Tags
There are 967837 tags in total.
These are the top 30 tags (125 tags in total) of type `unknown (-1)`:
| id | name | type | count | ambiguous |
|-----:|:--------------------------------------------------------------------------------------------------------------------------------------------|-------:|--------:|:------------|
| -1 | straightchromia | -1 | 263 | False |
| -1 | gekijigen_tag:_blanc_+_neptune_vs_zombie_gundan | -1 | 171 | False |
| -1 | seiki_kyushu | -1 | 23 | False |
| -1 | toyotaro | -1 | 15 | False |
| -1 | ensemble_stars!;character:akehoshi_subaru;happy_elements;male | -1 | 9 | False |
| -1 | _double_dash!! | -1 | 7 | False |
| -1 | dash!! | -1 | 7 | False |
| -1 | fubuki_kyoko | -1 | 7 | False |
| -1 | mario_k | -1 | 7 | False |
| -1 | star_\(symbol\) | -1 | 7 | False |
| -1 | \// | -1 | 6 | False |
| -1 | € | -1 | 6 | False |
| -1 | kami-sama_onegai!_onii-chan_no_aka-chan_ninshin_shitai_no!_~tsundere_imouto_&_seijun_imouto_to_ecchi_na_kiseki_de_trouble_kozukuri_zanmai♪~ | -1 | 5 | False |
| -1 | shrug_\(clothing\) | -1 | 5 | False |
| -1 | \\// | -1 | 4 | False |
| -1 | akizuki_rasenn | -1 | 4 | False |
| -1 | juju_(pixiv4563634) | -1 | 4 | False |
| -1 | source:https://nijie.info/view.php?id=151930 | -1 | 4 | False |
| -1 | +\l.l./+_(path_to_nowhere) | -1 | 3 | False |
| -1 | handing_breasts | -1 | 3 | False |
| -1 | pom_pom_\(clothes\) | -1 | 3 | False |
| -1 | slime_\(creature\) | -1 | 3 | False |
| -1 | star_\(sky\) | -1 | 3 | False |
| -1 | /tm | -1 | 2 | False |
| -1 | compl\pussy | -1 | 2 | False |
| -1 | mahitoⅶ | -1 | 2 | False |
| -1 | to_heart:_remember_my_memories | -1 | 2 | False |
| -1 | ulquiorra_schiffer | -1 | 2 | False |
| -1 | violet_plan | -1 | 2 | False |
| -1 | "artist: | -1 | 1 | False |
These are the top 30 tags (453388 tags in total) of type `general (0)`:
| id | name | type | count | ambiguous |
|-------:|:------------------|-------:|--------:|:------------|
| 152532 | 1girl | 0 | 6958797 | False |
| 12336 | solo | 0 | 5455769 | False |
| 265 | long_hair | 0 | 4657350 | True |
| 27 | breasts | 0 | 4401214 | False |
| 33975 | looking_at_viewer | 0 | 3638931 | False |
| 92 | blush | 0 | 3537662 | False |
| 796 | smile | 0 | 3084785 | False |
| 1100 | open_mouth | 0 | 2658791 | False |
| 52 | short_hair | 0 | 2400755 | False |
| 67 | blue_eyes | 0 | 2037042 | False |
| 28545 | simple_background | 0 | 1992127 | False |
| 21905 | large_breasts | 0 | 1988659 | False |
| 271 | blonde_hair | 0 | 1789137 | False |
| 1999 | shirt | 0 | 1776330 | False |
| 66 | black_hair | 0 | 1703108 | True |
| 337 | brown_hair | 0 | 1700323 | False |
| 179739 | white_background | 0 | 1629836 | False |
| 138893 | 1boy | 0 | 1617673 | False |
| 107 | skirt | 0 | 1586227 | False |
| 98 | gloves | 0 | 1420724 | True |
| 1864 | hair_ornament | 0 | 1420019 | False |
| 175068 | multiple_girls | 0 | 1409693 | False |
| 51 | red_eyes | 0 | 1389635 | True |
| 13957 | long_sleeves | 0 | 1368362 | False |
| 3477 | navel | 0 | 1343018 | False |
| 432 | nipples | 0 | 1325691 | False |
| 123 | dress | 0 | 1308851 | False |
| 23 | thighhighs | 0 | 1296314 | False |
| 6383 | holding | 0 | 1229633 | False |
| 153 | animal_ears | 0 | 1184287 | False |
These are the top 30 tags (281286 tags in total) of type `artist (1)`:
| id | name | type | count | ambiguous |
|--------:|:-------------------------|-------:|--------:|:------------|
| 46733 | qp:flapper | 1 | 15595 | False |
| 555502 | kagami_hirotaka | 1 | 8249 | False |
| 219408 | nel-zel_formula | 1 | 8063 | False |
| 594229 | ebifurya | 1 | 5768 | False |
| 719488 | aoi_nagisa_(metalder) | 1 | 5400 | False |
| 470499 | haruyama_kazunori | 1 | 5386 | False |
| 25270 | lolita_channel | 1 | 4910 | False |
| 401040 | hammer_(sunset_beach) | 1 | 4808 | False |
| 603058 | butcha-u | 1 | 4537 | False |
| 56027 | yaegashi_nan | 1 | 4460 | False |
| 67040 | piromizu | 1 | 4309 | False |
| 38088 | yoko_juusuke | 1 | 4149 | False |
| 21718 | drawfag | 1 | 4000 | False |
| 652987 | ruu_(tksymkw) | 1 | 3874 | False |
| 118829 | kanon_(kurogane_knights) | 1 | 3872 | False |
| 487842 | boris_(noborhys) | 1 | 3757 | False |
| 76506 | circle_anco | 1 | 3714 | False |
| 410 | azasuke | 1 | 3622 | False |
| 1128557 | kou_hiyoyo | 1 | 3404 | False |
| 1069930 | tony_taka | 1 | 3396 | False |
| 380097 | matsunaga_kouyou | 1 | 3394 | False |
| 481438 | itomugi-kun | 1 | 3284 | False |
| 729154 | naga_u | 1 | 3175 | False |
| 1051176 | hara_(harayutaka) | 1 | 3060 | False |
| 569895 | ojipon | 1 | 3044 | False |
| 354817 | bow_(bhp) | 1 | 3023 | False |
| 445614 | blade_(galaxist) | 1 | 2989 | False |
| 355711 | rebecca_(keinelove) | 1 | 2960 | False |
| 14795 | awa | 1 | 2856 | True |
| 509171 | neocoill | 1 | 2811 | False |
These are the top 30 tags (39857 tags in total) of type `copyright (3)`:
| id | name | type | count | ambiguous |
|--------:|:----------------------------|-------:|--------:|:------------|
| 118 | original | 3 | 1269200 | False |
| 126 | touhou | 3 | 780335 | False |
| 44106 | nintendo | 3 | 621025 | False |
| 448625 | kantai_collection | 3 | 423218 | True |
| 43567 | pokemon | 3 | 359839 | False |
| 306228 | game_freak | 3 | 357543 | False |
| 875834 | creatures_(company) | 3 | 357526 | False |
| 342429 | fate_(series) | 3 | 330978 | False |
| 1037340 | blue_archive | 3 | 246333 | False |
| 545521 | fate/grand_order | 3 | 238769 | False |
| 86 | idolmaster | 3 | 235114 | True |
| 943985 | genshin_impact | 3 | 224899 | False |
| 865663 | hololive | 3 | 216894 | False |
| 44086 | vocaloid | 3 | 162444 | False |
| 705390 | love_live! | 3 | 150528 | False |
| 807658 | arknights | 3 | 135366 | False |
| 805915 | azur_lane | 3 | 130239 | False |
| 338818 | idolmaster_cinderella_girls | 3 | 120598 | False |
| 5074 | fire_emblem | 3 | 112235 | False |
| 924 | digimon | 3 | 110069 | False |
| 851189 | umamusume | 3 | 102137 | False |
| 238 | final_fantasy | 3 | 98764 | False |
| 878809 | honkai_(series) | 3 | 88571 | False |
| 247 | one_piece | 3 | 81582 | False |
| 374082 | girls_und_panzer | 3 | 66219 | False |
| 237493 | mahou_shoujo_madoka_magica | 3 | 64433 | False |
| 1048700 | hololive_english | 3 | 63157 | False |
| 845788 | nijisanji | 3 | 62122 | False |
| 1121184 | girls'_frontline | 3 | 61878 | False |
| 7 | gundam | 3 | 60290 | True |
These are the top 30 tags (189418 tags in total) of type `character (4)`:
| id | name | type | count | ambiguous |
|--------:|:-------------------------|-------:|--------:|:------------|
| 14087 | hatsune_miku | 4 | 110042 | False |
| 855 | hakurei_reimu | 4 | 60110 | False |
| 130 | kirisame_marisa | 4 | 50889 | False |
| 486 | flandre_scarlet | 4 | 46303 | False |
| 850 | remilia_scarlet | 4 | 40796 | False |
| 1141830 | artoria_pendragon_(fate) | 4 | 37520 | False |
| 849 | izayoi_sakuya | 4 | 34149 | False |
| 36382 | komeiji_koishi | 4 | 30381 | False |
| 848 | konpaku_youmu | 4 | 29123 | False |
| 1293 | cirno | 4 | 27945 | False |
| 127 | alice_margatroid | 4 | 26925 | False |
| 484 | patchouli_knowledge | 4 | 26738 | False |
| 14543 | kochiya_sanae | 4 | 26429 | False |
| 658 | yakumo_yukari | 4 | 25281 | False |
| 83295 | souryuu_asuka_langley | 4 | 23165 | False |
| 481 | shameimaru_aya | 4 | 22616 | False |
| 237491 | akemi_homura | 4 | 21488 | False |
| 847 | reisen_udongein_inaba | 4 | 21351 | False |
| 237492 | kaname_madoka | 4 | 21296 | False |
| 24290 | nami_(one_piece) | 4 | 20817 | False |
| 36383 | komeiji_satori | 4 | 20736 | False |
| 761745 | saber_(fate) | 4 | 20331 | False |
| 125 | fujiwara_no_mokou | 4 | 20195 | False |
| 493 | saigyouji_yuyuko | 4 | 20135 | False |
| 14131 | kagamine_rin | 4 | 20122 | False |
| 15099 | inubashiri_momiji | 4 | 20037 | False |
| 804254 | artoria_pendragon_(all) | 4 | 20024 | False |
| 1059472 | ganyu_(genshin_impact) | 4 | 18770 | False |
| 881 | link | 4 | 18167 | False |
| 2335 | tifa_lockhart | 4 | 17409 | False |
These are the top 30 tags (407 tags in total) of type `metadata (5)`:
| id | name | type | count | ambiguous |
|--------:|:-----------------------|-------:|--------:|:------------|
| 262 | highres | 5 | 5938133 | False |
| 559 | absurdres | 5 | 1987478 | False |
| 150649 | commentary_request | 5 | 1184836 | False |
| 136261 | bad_id | 5 | 830999 | False |
| 760546 | bad_pixiv_id | 5 | 698199 | False |
| 25266 | commentary | 5 | 650901 | False |
| 136 | translation_request | 5 | 521034 | False |
| 1306 | official_art | 5 | 323671 | False |
| 52372 | tagme | 5 | 299899 | False |
| 23213 | artist_request | 5 | 247376 | False |
| 831896 | english_commentary | 5 | 241830 | False |
| 69 | game_cg | 5 | 189181 | True |
| 13710 | translated | 5 | 155560 | False |
| 209468 | commission | 5 | 155146 | False |
| 2229 | lowres | 5 | 113399 | False |
| 47252 | character_request | 5 | 110530 | False |
| 755092 | bad_twitter_id | 5 | 103797 | False |
| 19982 | traditional_media | 5 | 84244 | False |
| 789724 | chinese_commentary | 5 | 83177 | False |
| 888401 | non-web_source | 5 | 72887 | False |
| 426 | scan | 5 | 64325 | False |
| 853984 | third-party_edit | 5 | 64310 | False |
| 66551 | copyright_request | 5 | 63146 | False |
| 323949 | revision | 5 | 56891 | False |
| 1139885 | symbol-only_commentary | 5 | 56642 | False |
| 1034097 | skeb_commission | 5 | 53326 | False |
| 1223605 | paid_reward_available | 5 | 44343 | False |
| 191513 | md5_mismatch | 5 | 39237 | False |
| 2481 | source_request | 5 | 38976 | False |
| 63275 | huge_filesize | 5 | 34141 | True |
These are the top 30 tags (3356 tags in total) of type `deprecated (6)`:
| id | name | type | count | ambiguous |
|--------:|:------------------|-------:|--------:|:------------|
| 275 | silver_hair | 6 | 336681 | False |
| 205 | striped | 6 | 264448 | False |
| 24219 | see-through | 6 | 160050 | False |
| 286 | uniform | 6 | 159560 | True |
| 214641 | grabbing | 6 | 145174 | False |
| 3404 | plaid | 6 | 137227 | False |
| 889718 | black_headwear | 6 | 112049 | False |
| 264199 | black_legwear | 6 | 102143 | False |
| 56035 | light_brown_hair | 6 | 87604 | False |
| 850221 | white_headwear | 6 | 82957 | False |
| 264263 | white_legwear | 6 | 80489 | False |
| 47407 | looking_away | 6 | 65435 | False |
| 2818 | ass_grab | 6 | 59392 | False |
| 918581 | red_headwear | 6 | 41345 | False |
| 82157 | french_braid | 6 | 36466 | False |
| 54997 | multiple_penises | 6 | 36285 | False |
| 853048 | blue_headwear | 6 | 35686 | False |
| 17164 | breast_hold | 6 | 35279 | False |
| 18310 | vertical_stripes | 6 | 34542 | False |
| 1048560 | light_purple_hair | 6 | 33258 | False |
| 26046 | arm_grab | 6 | 26380 | False |
| 712440 | red_neckwear | 6 | 26251 | False |
| 488916 | oni_horns | 6 | 25061 | False |
| 670 | wallpaper | 6 | 24516 | False |
| 268269 | torn_legwear | 6 | 21955 | False |
| 51586 | screencap | 6 | 20521 | False |
| 842746 | green_headwear | 6 | 20206 | False |
| 918583 | brown_headwear | 6 | 20205 | False |
| 265499 | striped_legwear | 6 | 19590 | False |
| 20303 | turret | 6 | 17887 | False |
|
TempoFunk/tempofunk-sdance | TempoFunk | "2023-05-07T07:38:48Z" | 10,772 | 5 | [
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:video-classification",
"task_categories:image-classification",
"language:en",
"license:agpl-3.0",
"size_categories:1K<n<10K",
"region:us"
] | [
"text-to-video",
"text-to-image",
"video-classification",
"image-classification"
] | "2023-04-19T05:08:11Z" | ---
task_categories:
- text-to-video
- text-to-image
- video-classification
- image-classification
language:
- en
size_categories:
- 1K<n<10K
license: agpl-3.0
---
# TempoFunk S(mall)Dance
10k samples of metadata and encoded latents & prompts of videos themed around **dance**.
## Data format
- Video frame latents
- Numpy arrays
- 120 frames, 512x512 source size
- Encoded shape (120, 4, 64, 64)
- CLIP (openai) encoded prompts
- Video description (as seen in metadata)
- Encoded shape (77,768)
- Video metadata as JSON (description, tags, categories, source URLs, etc.) |
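The array shapes above are concrete enough to sanity-check a downloaded sample. A minimal sketch of such a check — the variable names are illustrative, since the card does not specify file names:

```python
import numpy as np

def validate_sample(frame_latents: np.ndarray, prompt_emb: np.ndarray) -> bool:
    """True if the arrays match the documented (120, 4, 64, 64) / (77, 768) shapes."""
    return frame_latents.shape == (120, 4, 64, 64) and prompt_emb.shape == (77, 768)

# A 512x512 frame encoded at a 1/8 spatial factor yields 4x64x64 latents per frame.
latents = np.zeros((120, 4, 64, 64), dtype=np.float32)
prompt = np.zeros((77, 768), dtype=np.float32)
print(validate_sample(latents, prompt))  # True
```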
isp-uv-es/WorldFloodsv2 | isp-uv-es | "2024-01-10T11:38:21Z" | 10,769 | 6 | [
"license:cc-by-nc-4.0",
"modality:geospatial",
"modality:image",
"doi:10.57967/hf/3149",
"region:us",
"remote sensing",
"sentinel2",
"landsat",
"floods"
] | null | "2023-12-13T17:43:27Z" | ---
license: cc-by-nc-4.0
pipeline_tag: image-segmentation
tags:
- remote sensing
- sentinel2
- landsat
- floods
---
# *WorldFloodsv2* dataset
This repository contains the *WorldFloodsv2* dataset released with the publication:
> E. Portalés-Julià, G. Mateo-García, C. Purcell, and L. Gómez-Chova [Global flood extent segmentation in optical satellite images](https://www.nature.com/articles/s41598-023-47595-7). _Scientific Reports 13, 20316_ (2023). DOI: 10.1038/s41598-023-47595-7.
The [*WorldFloodsv2* database](https://www.nature.com/articles/s41598-023-47595-7) contains 509 pairs of Sentinel-2 images and flood segmentation masks, split into train, val, and test sets.
It requires approximately 76GB of hard-disk storage.
<img src="worldfloods_v2.png" alt="licence" width="65%"/>
## Download the dataset
```
huggingface-cli download --cache-dir /path/to/cachedir --local-dir /path/to/localdir/WorldFloodsv2 --repo-type dataset isp-uv-es/WorldFloodsv2
```
## Explore the dataset
The [exploring *WorldFloodsv2*](https://spaceml-org.github.io/ml4floods/content/prep/exploring_worldfloods.html) tutorial in the [ml4floods](https://github.com/spaceml-org/ml4floods) package shows how to
process the dataset and plot the images and masks.
## Licence
The *WorldFloods* database and all pre-trained models are released under a [Creative Commons non-commercial licence](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt)
## Cite
If you find this work useful, please cite:
```
@article{portales-julia_global_2023,
title = {Global flood extent segmentation in optical satellite images},
volume = {13},
issn = {2045-2322},
doi = {10.1038/s41598-023-47595-7},
number = {1},
urldate = {2023-11-30},
journal = {Scientific Reports},
author = {Portalés-Julià, Enrique and Mateo-García, Gonzalo and Purcell, Cormac and Gómez-Chova, Luis},
month = nov,
year = {2023},
pages = {20316},
}
```
|
ylacombe/cml-tts | ylacombe | "2023-11-24T14:48:29Z" | 10,766 | 14 | [
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"language:nl",
"language:fr",
"language:de",
"language:it",
"language:pl",
"language:pt",
"language:es",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.10097",
"region:us"
] | [
"text-to-speech",
"text-to-audio"
] | "2023-11-23T12:01:49Z" | ---
language:
- nl
- fr
- de
- it
- pl
- pt
- es
license: cc-by-4.0
size_categories:
- 1M<n<10M
task_categories:
- text-to-speech
- text-to-audio
pretty_name: CML-TTS
dataset_info:
- config_name: dutch
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 186374683541.98
num_examples: 309785
- name: dev
num_bytes: 2912063172.928
num_examples: 4834
- name: test
num_bytes: 2757891736.78
num_examples: 4570
download_size: 132987704971
dataset_size: 192044638451.68802
- config_name: french
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 64984002840.768
num_examples: 107598
- name: dev
num_bytes: 2257393207.796
num_examples: 3739
- name: test
num_bytes: 2281630546.306
num_examples: 3763
download_size: 48345998335
dataset_size: 69523026594.87
- config_name: german
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 369052038020.872
num_examples: 608296
- name: dev
num_bytes: 3197115278.604
num_examples: 5314
- name: test
num_bytes: 3288183839.092
num_examples: 5466
download_size: 280438261836
dataset_size: 375537337138.568
- config_name: italian
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 30242801015.92
num_examples: 50345
- name: dev
num_bytes: 938644924.81
num_examples: 1765
- name: test
num_bytes: 979116355.51
num_examples: 1835
download_size: 21996805791
dataset_size: 32160562296.239998
- config_name: polish
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 11127461686.356
num_examples: 18719
- name: dev
num_bytes: 356048249
num_examples: 853
- name: test
num_bytes: 367796887
num_examples: 814
download_size: 8114633186
dataset_size: 11851306822.356
- config_name: portuguese
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 20722423371.0
num_examples: 34265
- name: dev
num_bytes: 622824524.224
num_examples: 1134
- name: test
num_bytes: 673141068.9
num_examples: 1297
download_size: 14421097659
dataset_size: 22018388964.124
- config_name: spanish
features:
- name: audio
dtype: audio
- name: wav_filesize
dtype: int64
- name: text
dtype: string
- name: transcript_wav2vec
dtype: string
- name: levenshtein
dtype: float64
- name: duration
dtype: float64
- name: num_words
dtype: int64
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 101377452063.176
num_examples: 168524
- name: dev
num_bytes: 1882729515.184
num_examples: 3148
- name: test
num_bytes: 1851592818.0
num_examples: 3080
download_size: 73687756096
dataset_size: 105111774396.36
configs:
- config_name: dutch
data_files:
- split: train
path: dutch/train-*
- split: dev
path: dutch/dev-*
- split: test
path: dutch/test-*
- config_name: french
data_files:
- split: train
path: french/train-*
- split: dev
path: french/dev-*
- split: test
path: french/test-*
- config_name: german
data_files:
- split: train
path: german/train-*
- split: dev
path: german/dev-*
- split: test
path: german/test-*
- config_name: italian
data_files:
- split: train
path: italian/train-*
- split: dev
path: italian/dev-*
- split: test
path: italian/test-*
- config_name: polish
data_files:
- split: train
path: polish/train-*
- split: dev
path: polish/dev-*
- split: test
path: polish/test-*
- config_name: portuguese
data_files:
- split: train
path: portuguese/train-*
- split: dev
path: portuguese/dev-*
- split: test
path: portuguese/test-*
- config_name: spanish
data_files:
- split: train
path: spanish/train-*
- split: dev
path: spanish/dev-*
- split: test
path: spanish/test-*
---
# Dataset Card for CML-TTS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Data Statistics](#data-statistics)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MultiLingual LibriSpeech ASR corpus](https://www.openslr.org/146/)
- **Repository:** [CML-TTS-Dataset](https://github.com/freds0/CML-TTS-Dataset)
- **Paper:** [CML-TTS A Multilingual Dataset for Speech Synthesis in Low-Resource Languages](https://arxiv.org/abs/2306.10097)
### Dataset Summary
CML-TTS is a recursive acronym for CML-Multi-Lingual-TTS, a Text-to-Speech (TTS) dataset developed at the Center of Excellence in Artificial Intelligence (CEIA) of the Federal University of Goias (UFG).
CML-TTS is a dataset comprising audiobooks sourced from the public domain books of Project Gutenberg, read by volunteers from the LibriVox project. The dataset includes recordings in Dutch, German, French, Italian, Polish, Portuguese, and Spanish, all at a sampling rate of 24kHz.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/146) to make it easier to stream.
### Supported Tasks
- `text-to-speech`, `text-to-audio`: The dataset can also be used to train a model for Text-To-Speech (TTS).
### Languages
The dataset includes recordings in Dutch, German, French, Italian, Polish, Portuguese, and Spanish, all at a sampling rate of 24kHz.
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the German config, simply specify the corresponding language config name (i.e., "german" for German):
```python
from datasets import load_dataset
mls = load_dataset("ylacombe/cml-tts", "german", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
mls = load_dataset("ylacombe/cml-tts", "german", split="train", streaming=True)
print(next(iter(mls)))
```
#### *Bonus*
You can create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
**Local:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
mls = load_dataset("ylacombe/cml-tts", "german", split="train")
batch_sampler = BatchSampler(RandomSampler(mls), batch_size=32, drop_last=False)
dataloader = DataLoader(mls, batch_sampler=batch_sampler)
```
**Streaming:**
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
mls = load_dataset("ylacombe/cml-tts", "german", split="train", streaming=True)
dataloader = DataLoader(mls, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
## Dataset Structure
### Data Instances
A typical data point comprises the audio (under the `audio` field) and its transcription, called `text`. Some additional information about the speaker and the passage containing the transcription is provided.
```
{'audio': {'path': '6892_8912_000729.wav', 'array': array([-1.52587891e-...7344e-05]), 'sampling_rate': 24000}, 'wav_filesize': 601964, 'text': 'Proszę pana, tu pano... zdziwiony', 'transcript_wav2vec': 'proszę pana tu panow... zdziwiony', 'levenshtein': 0.96045197740113, 'duration': 13.648979591836737, 'num_words': 29, 'speaker_id': 6892}
```
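The fields in the instance above are enough to derive simple per-sample statistics. A quick sketch computing the speaking rate, with the numbers copied from the Polish sample shown:

```python
# Derive words per second from the duration/num_words fields of the sample above.
sample = {
    "duration": 13.648979591836737,  # seconds
    "num_words": 29,
    "levenshtein": 0.96045197740113,
}

words_per_second = sample["num_words"] / sample["duration"]
print(round(words_per_second, 2))  # 2.12
```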
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- transcript_wav2vec: the transcription of the audio file using the wav2vec model. Has been used to curate the dataset.
- wav_filesize: The size of the audio waveform file. Has been used to curate the dataset.
- levenshtein: The [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance) between the wav2vec transcription and the original transcription. Has been used to curate the dataset.
- duration: The duration of the audio in seconds.
- num_words: The number of words of the transcription.
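Since the `levenshtein` score was used to curate the corpus, it can also drive additional filtering on your side. A sketch on an in-memory list of samples — the 0.98 threshold is an arbitrary illustrative choice, not a value from the paper:

```python
# Keep only samples whose wav2vec transcription closely matches the reference text.
def filter_by_levenshtein(samples, threshold=0.98):
    return [s for s in samples if s["levenshtein"] >= threshold]

samples = [
    {"text": "a", "levenshtein": 0.99},
    {"text": "b", "levenshtein": 0.96},
    {"text": "c", "levenshtein": 1.00},
]
kept = filter_by_levenshtein(samples)
print(len(kept))  # 2
```

With the `datasets` library, the same predicate can be passed to `dataset.filter(...)` to apply it lazily over a full split.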
### Data Splits
| # Samples | Train | Dev | Test |
|------------|--------|------|------|
| german | 608296 | 5314 | 5466 |
| dutch | 309785 | 4834 | 4570 |
| french | 107598 | 3739 | 3763 |
| spanish | 168524 | 3148 | 3080 |
| italian | 50345 | 1765 | 1835 |
| portuguese | 34265 | 1134 | 1297 |
| polish | 18719 | 853 | 814 |
### Data Statistics
| Language | Duration Train (M) | Duration Train (F) | Duration Test (M) | Duration Test (F) | Duration Dev (M) | Duration Dev (F) | Speakers Train (M) | Speakers Train (F) | Speakers Test (M) | Speakers Test (F) | Speakers Dev (M) | Speakers Dev (F) |
|------------|--------:|--------:|------:|------:|------:|------:|----:|----:|----:|----:|----:|----:|
| Dutch | 482.82 | 162.17 | 2.46 | 1.29 | 2.24 | 1.67 | 8 | 27 | 3 | 3 | 2 | 4 |
| French | 260.08 | 24.04 | 2.48 | 3.55 | 3.31 | 2.72 | 25 | 20 | 8 | 9 | 10 | 8 |
| German | 1128.96 | 436.64 | 3.75 | 5.27 | 4.31 | 5.03 | 78 | 90 | 13 | 17 | 13 | 15 |
| Italian | 73.78 | 57.51 | 1.47 | 0.85 | 0.40 | 1.52 | 23 | 38 | 5 | 5 | 4 | 6 |
| Polish | 30.61 | 8.32 | 0.70 | 0.90 | 0.56 | 0.80 | 4 | 4 | 2 | 2 | 2 | 2 |
| Portuguese | 23.14 | 44.81 | 0.28 | 0.24 | 0.68 | 0.20 | 20 | 10 | 5 | 4 | 6 | 3 |
| Spanish | 279.15 | 164.08 | 2.77 | 2.06 | 3.40 | 2.34 | 35 | 42 | 10 | 8 | 11 | 9 |
| Total | 3,176.13 | | 28.11 | | 29.19 | | 424 | | 94 | | 95 | |

Durations are given in hours. In the *Total* row, each value combines the corresponding male (M) and female (F) columns.
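The per-language figures above can be cross-checked against the reported total. A quick sketch summing the male and female train-duration columns (hours):

```python
# Sum the M/F train-duration columns from the statistics table above.
train_hours = {
    "dutch": (482.82, 162.17),
    "french": (260.08, 24.04),
    "german": (1128.96, 436.64),
    "italian": (73.78, 57.51),
    "polish": (30.61, 8.32),
    "portuguese": (23.14, 44.81),
    "spanish": (279.15, 164.08),
}

total = sum(m + f for m, f in train_hours.values())
print(round(total, 2))  # ~3,176 h, matching the totals row up to rounding
```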
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Public Domain, Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
### Citation Information
```
@misc{oliveira2023cmltts,
title={CML-TTS A Multilingual Dataset for Speech Synthesis in Low-Resource Languages},
author={Frederico S. Oliveira and Edresson Casanova and Arnaldo Cândido Júnior and Anderson S. Soares and Arlindo R. Galvão Filho},
year={2023},
eprint={2306.10097},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
### Contributions
Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset.
|
lerobot/berkeley_cable_routing | lerobot | "2024-11-20T15:19:08Z" | 10,758 | 0 | [
"task_categories:robotics",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2307.08927",
"region:us",
"LeRobot"
] | [
"robotics"
] | "2024-07-24T00:35:17Z" | ---
license: cc-by-4.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** https://sites.google.com/view/cablerouting/home
- **Paper:** https://arxiv.org/abs/2307.08927
- **License:** cc-by-4.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "unknown",
"total_episodes": 1647,
"total_frames": 42328,
"total_tasks": 1,
"total_videos": 6588,
"total_chunks": 2,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:1647"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.top_image": {
"dtype": "video",
"shape": [
128,
128,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 10.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist225_image": {
"dtype": "video",
"shape": [
128,
128,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 10.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist45_image": {
"dtype": "video",
"shape": [
128,
128,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 10.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.image": {
"dtype": "video",
"shape": [
128,
128,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 10.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"language_instruction": {
"dtype": "string",
"shape": [
1
],
"names": null
},
"observation.state": {
"dtype": "float32",
"shape": [
8
],
"names": {
"motors": [
"motor_0",
"motor_1",
"motor_2",
"motor_3",
"motor_4",
"motor_5",
"motor_6",
"motor_7"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"motor_0",
"motor_1",
"motor_2",
"motor_3",
"motor_4",
"motor_5",
"motor_6"
]
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
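For illustration, the `data_path` and `video_path` templates in `meta/info.json` can be resolved with ordinary Python string formatting. Note that the chunk-from-episode-index rule used here (`episode_index // chunks_size`) is an assumption inferred from `chunks_size` and `total_chunks`, not something this card states explicitly.

```python
# Sketch: resolve the templated file paths from meta/info.json for one episode.
def episode_paths(episode_index: int,
                  video_key: str = "observation.images.top_image",
                  chunks_size: int = 1000) -> tuple[str, str]:
    # Assumed mapping: episodes are grouped into chunks of `chunks_size`.
    chunk = episode_index // chunks_size
    data_path = f"data/chunk-{chunk:03d}/episode_{episode_index:06d}.parquet"
    video_path = f"videos/chunk-{chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
    return data_path, video_path

data_p, video_p = episode_paths(1234)
print(data_p)   # data/chunk-001/episode_001234.parquet
print(video_p)  # videos/chunk-001/observation.images.top_image/episode_001234.mp4
```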
## Citation
**BibTeX:**
```bibtex
@article{luo2023multistage,
author = {Jianlan Luo and Charles Xu and Xinyang Geng and Gilbert Feng and Kuan Fang and Liam Tan and Stefan Schaal and Sergey Levine},
title = {Multi-Stage Cable Routing through Hierarchical Imitation Learning},
journal = {arXiv pre-print},
year = {2023},
url = {https://arxiv.org/abs/2307.08927},
}
``` |
alvations/c4p0-x1-en-it | alvations | "2024-03-24T03:48:42Z" | 10,723 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-23T10:05:05Z" | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 6913
num_examples: 5
download_size: 19494
dataset_size: 6913
configs:
- config_name: default
data_files:
- split: train
path: 5eeb99e4b632b370/train-*
---
|
Anthropic/hh-rlhf | Anthropic | "2023-05-26T18:47:34Z" | 10,633 | 1,232 | [
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2204.05862",
"region:us",
"human-feedback"
] | null | "2022-12-08T20:11:33Z" | ---
license: mit
tags:
- human-feedback
---
# Dataset Card for HH-RLHF
## Dataset Summary
This repository provides access to two different kinds of data:
1. Human preference data about helpfulness and harmlessness from [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862). These data are meant to train preference (or reward) models for subsequent RLHF training. These data are *not* meant for supervised training of dialogue agents. Training dialogue agents on these data is likely to lead to harmful models and this should be avoided.
2. Human-generated and annotated red teaming dialogues from [Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned](https://www.anthropic.com/red_teaming.pdf). These data are meant to understand how crowdworkers red team models and what types of red team attacks are successful or not. The data are *not* meant for fine-tuning or preference modeling (use the data above for preference modeling). These data are entire transcripts of conversations that are derived from the harmlessness preference modeling data described above, where only the chosen response is incorporated into the overall transcript. Furthermore, the transcripts are annotated with human and automated measurements of how harmful the overall dialogues are.
**Disclaimer**: The data (especially the harmlessness preference data and the red team data) contain content that may be offensive or upsetting. Topics include, but are not limited to, discriminatory language and discussions of abuse, violence, self-harm, exploitation, and other potentially upsetting subject matter. Please only engage with the data in accordance with your own personal risk tolerance. The data are intended for research purposes, especially research that can make models *less* harmful. The views expressed in the data do not reflect the views of Anthropic or any of its employees. As mentioned above, these data are *not* intended for training dialogue agents as this will likely lead to harmful model behavior.
Each of these datasets are described further below.
## Human preference data about helpfulness and harmlessness (PM Data)
The data are described in the paper: [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862). If you find the data useful, please cite the paper. The data format is very simple -- each line of the jsonl files contains a pair of texts, one "chosen" and one "rejected".
For **helpfulness**, the data are grouped into train/test splits in three tranches: from our base models (context-distilled 52B language models), via rejection sampling (mostly with best-of-16 sampling) against an early preference model, and a dataset sampled during our iterated "online" process.
For **harmlessness**, the data are only collected for our base models, but otherwise formatted in the same way.
Details about the data collection process and crowdworker population can be found in the paper, specifically in section 2 and appendix D.
## Red teaming data (not PM Data)
The data are described in the paper: [Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned](https://www.anthropic.com/red_teaming.pdf). If you find the data useful, please cite the paper. At a high level these data can be examined to understand what successful red team attempts look like.
Details about the data and data collection procedures can be found in the Datasheet in the appendix of the paper.
Each line of the jsonl file contains a dictionary with the following fields:
- `transcript` a text transcript of a conversation between a human adversary (red team member) and an AI assistant
- `min_harmlessness_score_transcript` a real value score of the harmlessness of the AI assistant (lower is more harmful) as obtained from a preference model
- `num_params` number of parameters in the language model powering the AI assistant
- `model_type` type of model powering the AI assistant
- `rating` the red team member's rating of how successful they were at breaking the AI assistant (Likert scale, higher is more successful)
- `task_description` a short text description written by the red team member about how they tried to red team the AI assistant
- `task_description_harmlessness_score` a real value score of the harmlessness of the task description (lower is more harmful) as obtained from a preference model
- `red_team_member_id` an arbitrary identifier of the red team member. one red team member can generate multiple red team attacks
- `is_upworker` a binary indicator that is true if the red team member was from the crowd platform Upwork or false if they were from MTurk
- `tags` a list of up to 6 tags per transcript. tags are short descriptions of the red team attempts generated by crowdworkers who reviewed red team data post-hoc. tags were only provided for a random sample of 1000 red team attempts for two of four model types.
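As a minimal sketch of working with these fields, the snippet below scans a red-team `.jsonl` file and collects the attacks the red team members themselves rated as most successful, sorted so the most harmful transcripts (lowest `min_harmlessness_score_transcript`) come first. The field names come from the list above; the file path and rating threshold are placeholders.

```python
import json

def top_attacks(jsonl_path: str, min_rating: int = 4):
    """Return (rating, harmlessness_score, task_description) tuples for
    highly rated red-team attempts, most harmful transcript first."""
    results = []
    with open(jsonl_path) as f:
        for line in f:
            record = json.loads(line)
            if record["rating"] >= min_rating:
                results.append((record["rating"],
                                record["min_harmlessness_score_transcript"],
                                record["task_description"]))
    # Lower harmlessness score = more harmful, so sort ascending.
    return sorted(results, key=lambda r: r[1])
```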
## Usage
Each of the above datasets is located in a separate sub-directory. To load an individual subset, use the `data_dir` argument of the `load_dataset()` function as follows:
```python
from datasets import load_dataset
# Load all helpfulness/harmless subsets (share the same schema)
dataset = load_dataset("Anthropic/hh-rlhf")
# Load one of the harmless subsets
dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")
# Load the red teaming subset
dataset = load_dataset("Anthropic/hh-rlhf", data_dir="red-team-attempts")
```
## Contact
The original authors host this dataset on GitHub here: https://github.com/anthropics/hh-rlhf
You can submit inquiries to: [email protected] |
deepghs/sankaku_full | deepghs | "2024-12-26T01:20:51Z" | 10,598 | 27 | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"task_categories:text-to-image",
"annotations_creators:no-annotation",
"source_datasets:sankaku",
"language:en",
"language:ja",
"license:other",
"size_categories:10M<n<100M",
"region:us",
"art",
"anime",
"not-for-all-audiences"
] | [
"image-classification",
"zero-shot-image-classification",
"text-to-image"
] | "2024-10-23T06:42:37Z" | ---
license: other
task_categories:
- image-classification
- zero-shot-image-classification
- text-to-image
language:
- en
- ja
tags:
- art
- anime
- not-for-all-audiences
size_categories:
- 10M<n<100M
annotations_creators:
- no-annotation
source_datasets:
- sankaku
---
# Sankaku Full Dataset
This is the full dataset of [chan.sankakucomplex.com](https://chan.sankakucomplex.com/). All of the original images are maintained here.
# How to Painlessly Use This
Use [cheesechaser](https://github.com/deepghs/cheesechaser) to quickly get images from this repository.
Before using this code, you have to **request access to this gated repository**. Then **set your personal HuggingFace token in the `HF_TOKEN` environment variable** to authorize the code for this repository.
```python
from cheesechaser.datapool import SankakuDataPool
pool = SankakuDataPool()
pool.batch_download_to_directory(
# download images #7000000-7000100, any ranges or id lists are okay
resource_ids=range(7000000, 7000100),
# save to directory /data/sankaku
dst_dir='/data/sankaku',
)
```
# Information
## Images
There are 16498615 images in total. The maximum ID of these images is 36864051. Last updated at `2024-12-19 02:45:07 UTC`.
16498615 images in total. Only 50 images shown.
| id | filename | width | height | rating | mimetype | file_size | tags | created_at |
|---------:|:--------------|--------:|---------:|:---------|:-----------|------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------|-------------:|
| 36864051 | 36864051.png | 2048 | 945 | q | image/png | 3754284 | [] | 1731168844 |
| 36864049 | 36864049.png | 1267 | 1043 | q | image/png | 931791 | ['code_geass', 'code_geass:_lost_stories', 'ledo_offen', 'androgynous', 'clothing', 'shirt', 'uniform', 'gloves', 'long_sleeves', 'jacket', 'white_shirt', 'sidelocks', 'black_gloves', 'cape', 'military_uniform', 'hair_intakes', 'blue_jacket', 'gold_trim', 'blue_cape', 'female', 'long_hair', 'male', 'black_hair', 'red_eyes', 'brown_eyes', 'dark_skin', 'upper_body', 'dark-skinned_female', 'dark-skinned_male', 'looking_at_viewer', 'standing', 'parted_lips', 'hands_up', 'adjusting_clothes', 'adjusting_gloves', 'military', 'solo', '1boy', 'male_focus', 'cropped_torso', 'side_cape', 'cg_art', 'simple_background', 'game_cg', 'official_art', 'useless_tags', 'black_background', 'transparent_background', 'tagme'] | 1731168821 |
| 36864048 | 36864048.jpg | 2480 | 3508 | e | image/jpeg | 3041918 | ['naruto', 'naruto_shippuden', 'uzumaki_naruto', 'inuzuka_kiba', 'destor777', 'yaoi', 'bdsm', 'bara', 'foot_fetish', 'clothing', 'nude', 'headwear', 'completely_nude', 'collar', 'headband', 'forehead_protector', 'male', 'blush', 'short_hair', 'nipples', 'penis', 'brown_hair', 'black_hair', 'navel', 'thighs', 'tail', 'feet', 'testicles', 'teeth', 'erection', 'muscle', 'black_eyes', 'stomach', 'abs', 'muscular_male', 'pectorals', 'facial_mark', 'large_pectorals', 'dog_tail', 'circumcised', 'open_mouth', 'looking_at_viewer', 'spread_legs', 'bondage', 'screaming', 'chain', 'leash', 'konohagakure_symbol', 'genital_fluids', 'precum', '1boy', 'multiple_boys', 'solo_focus', '2boys', 'male_focus', 'male_only', 'chain_leash', 'stomping', 'uncensored', 'high_resolution', 'tagme', 'very_high_resolution'] | 1731168820 |
| 36864047 | 36864047.avif | 1280 | 1967 | e | image/avif | 94315 | ['original', 'egami', 'hetero', 'loli', 'age_difference', 'teenage', 'teenage_girl', 'clothing', 'tied_hair', 'nude', 'legwear', 'bangs', 'shirt', 'thighhighs', 'skirt', 'uniform', 'twintails', 'school_uniform', 'black_legwear', 'white_shirt', 'short_sleeves', 'pleated_skirt', 'serafuku', 'black_thighhighs', 'nude_male', 'clothed_female', 'clothed_female_nude_male', 'female', 'breasts', 'male', 'blush', 'short_hair', 'nipples', 'penis', 'brown_hair', 'black_hair', 'ass', 'brown_eyes', 'small_breasts', 'erect_nipples', 'symbol-shaped_pupils', 'heart-shaped_pupils', 'bulge', 'stomach_bulge', 'open_mouth', 'standing', 'clothes_pull', 'clothes_lift', 'doggystyle', 'shirt_lift', 'standing_sex', 'tiptoes', 'sex', 'vaginal', 'taken_from_behind', 'orgasm', 'trembling', 'clothed_sex', 'female_orgasm', 'torso_grab', 'semen', 'sweat', 'overflow', '1girl', '1boy', 'indoors', 'limited_visibility', 'contentious_content', 'heart', 'high_resolution'] | 1731168814 |
| 36864046 | 36864046.jpg | 2894 | 3695 | e | image/jpeg | 2170162 | ['touhou_project', 'cirno', 'kame_(kamepan44231)', 'chibi', 'clothing', 'hair_ornament', 'shirt', 'underwear', 'footwear', 'dress', 'ribbon', 'bow', 'hair_between_eyes', 'shoes', 'white_shirt', 'hair_bow', 'short_sleeves', 'sleeveless', 'collared_shirt', 'puffy_sleeves', 'puffy_short_sleeves', 'sleeveless_dress', 'blue_dress', 'blue_bow', 'blush_stickers', 'blue_footwear', 'bloomers', 'long_dress', 'pinafore_dress', 'female', 'short_hair', 'blue_eyes', 'blue_hair', 'ahoge', 'wings', 'looking_at_viewer', 'closed_mouth', 'restrained', 'fairy', 'rope', 'ice', '1girl', 'solo', ':<', 'fumo_(doll)', 'ice_wings', 'tied_up_(nonsexual)', 'simple_background', 'white_background', 'full_body', 'useless_tags', 'high_resolution', 'tagme', 'very_high_resolution'] | 1731168798 |
| 36864045 | 36864045.jpg | 3070 | 4096 | e | image/jpeg | 437973 | ['minecraft', 'ellie_walls_(odysseyellie)', 'odysseyellie', 'hetero', 'threesome', 'interracial', 'mmf_threesome', 'prostitution', 'blacked', 'clothing', 'legwear', 'headwear', 'hair_ornament', 'bangs', 'thighhighs', 'underwear', 'gloves', 'hairband', 'bottomless', 'bra', 'black_gloves', 'blunt_bangs', 'nopan', 'stockings', 'sports_bra', 'skull_hair_ornament', 'blacked_clothing', 'female', 'male', 'blush', 'penis', 'black_hair', 'ass', 'thighs', 'green_eyes', 'anus', 'red_hair', 'dark_skin', 'multicolored_hair', 'thick_thighs', 'dark-skinned_male', 'large_ass', 'humanoid', 'freckles', 'shortstack', 'looking_at_viewer', 'smile', 'standing', 'double_penetration', 'standing_sex', 'sex', 'vaginal', 'nakadashi', 'looking_back', 'anal', 'penetration', 'looking_at_another', 'embarrassed', 'looking_at_penis', 'looking_at_partner', 'horny', 'goth_girl', 'semen', 'semen_in_anus', '1girl', '1boy', 'multiple_boys', 'solo_focus', '2boys', 'outdoors', 'black_and_red_hair', 'goth', 'happy_face', 'happy_female', 'horny_female', 'male/female/male', 'ponytails', 'spiked_hairband', '3d', 'useless_tags', 'mine-imator', 'commission', 'english_language', 'text', 'english_text', 'high_resolution', 'very_high_resolution'] | 1731168793 |
| 36864044 | 36864044.avif | 1280 | 1967 | e | image/avif | 86235 | ['original', 'egami', 'hetero', 'loli', 'teenage', 'mind_control', 'teenage_girl', 'hypnosis', 'clothing', 'tied_hair', 'nude', 'legwear', 'bangs', 'shirt', 'thighhighs', 'skirt', 'uniform', 'twintails', 'school_uniform', 'black_legwear', 'white_shirt', 'short_sleeves', 'pleated_skirt', 'serafuku', 'black_thighhighs', 'nude_male', 'clothed_female', 'blue_skirt', 'clothed_female_nude_male', 'female', 'breasts', 'long_hair', 'male', 'blush', 'nipples', 'penis', 'brown_hair', 'black_hair', 'ass', 'thighs', 'brown_eyes', 'small_breasts', 'erect_nipples', 'bulge', 'stomach_bulge', 'open_mouth', 'standing', 'clothes_pull', 'clothes_lift', 'shirt_lift', 'standing_sex', 'against_wall', 'tiptoes', 'sex', 'vaginal', 'taken_from_behind', 'trembling', 'clothed_sex', 'torso_grab', 'sweat', '1girl', '1boy', 'indoors', 'limited_visibility', 'contentious_content', 'heart', 'high_resolution'] | 1731168782 |
| 36864043 | 36864043.jpg | 1536 | 2048 | s | image/jpeg | 482685 | ['vocaloid', 'pixiv', 'hatsune_miku', 'sakura_miku', 'aoiyui', 'clothing', 'tied_hair', 'hair_ornament', 'bangs', 'shirt', 'skirt', 'twintails', 'bare_shoulders', 'necktie', 'pleated_skirt', 'detached_sleeves', 'sleeveless', 'tattoo', 'sleeveless_shirt', 'pink_shirt', 'pink_skirt', 'shoulder_tattoo', 'leaf_hair_ornament', 'female', 'long_hair', 'blush', 'pink_hair', 'very_long_hair', 'ahoge', 'upper_body', 'pink_eyes', 'open_mouth', 'looking_at_viewer', 'hand_up', 'outstretched_hand', 'looking_up', 'headphones', '1girl', 'solo', 'cherry_hair_ornament', 'food-themed_hair_ornament', 'headset', 'pink_necktie', 'pink_sleeves', 'sakura_design', 'useless_tags', 'pink_background', 'pink_theme', 'high_resolution', 'tagme'] | 1731168773 |
| 36864041 | 36864041.jpg | 1280 | 1807 | e | image/jpeg | 214630 | ['genshin_impact', 'barbara_(genshin_impact)', 'gweda', 'hetero', 'group_sex', 'gangbang', 'clothing', 'tied_hair', 'legwear', 'headwear', 'bangs', 'twintails', 'hat', 'long_sleeves', 'pantyhose', 'white_legwear', 'detached_sleeves', 'torn_clothes', 'topless', 'detached_collar', 'drill_hair', 'white_headwear', 'torn_legwear', 'torn_pantyhose', 'white_pantyhose', 'female', 'breasts', 'long_hair', 'male', 'blush', 'nipples', 'penis', 'blonde_hair', 'blue_eyes', 'vagina', 'navel', 'tongue', 'medium_breasts', 'erection', 'multiple_penises', 'open_mouth', 'looking_at_viewer', 'spread_legs', 'straddling', 'on_top', 'girl_on_top', 'cowgirl_position', 'sex', 'vaginal', 'tongue_out', 'oral', 'nakadashi', 'fellatio', 'ejaculation', 'facial', 'tekoki', 'female_penetrated', 'double_handjob', 'nun', 'semen', 'semen_in_mouth', '1girl', 'multiple_boys', 'solo_focus', 'group', '3boys', 'male_pov', 'female_on_male', 'uncensored', 'pov', 'high_resolution', 'tagme'] | 1731168769 |
| 36864040 | 36864040.png | 1567 | 2800 | q | image/png | 7273267 | ['genshin_impact', 'lumine_(genshin_impact)', 'lumine', 'kabu_usagi', 'clothing', 'legwear', 'hair_ornament', 'bangs', 'thighhighs', 'underwear', 'pantsu', 'bare_shoulders', 'hair_between_eyes', 'black_legwear', 'sidelocks', 'lingerie', 'black_clothes', 'bra', 'hair_flower', 'black_thighhighs', 'black_underwear', 'wardrobe_malfunction', 'black_panties', 'garter_straps', 'no_shoes', 'underwear_only', 'garter_belt', 'black_bra', 'short_hair_with_long_locks', 'strap_slip', 'female', 'breasts', 'blush', 'large_breasts', 'blonde_hair', 'navel', 'thighs', 'cleavage', 'yellow_eyes', 'clavicle', 'stomach', 'feathers', 'thigh_gap', 'looking_at_viewer', 'sitting', 'closed_mouth', 'bent_knees', 'wariza', 'outstretched_arms', 'raised_eyebrows', 'flower', 'white_flower', 'bed', 'clock', 'sweat', '1girl', 'solo', 'indoors', 'on_bed', 'feather_hair_ornament', 'paid_reward', 'high_resolution', 'very_high_resolution', 'large_filesize'] | 1731168755 |
| 36864039 | 36864039.jpg | 4168 | 5894 | q | image/jpeg | 1617286 | ['original', 'twitter', 'original_character', 'bodyguard_(roborobocop)', 'robocap', 'roborobocop', 'clothing', 'tied_hair', 'shirt', 'underwear', 'ponytail', 'jacket', 'white_shirt', 'short_sleeves', 'necktie', 'neckwear', 'choker', 'pants', 'black_clothes', 'bra', 'belt', 'collared_shirt', 'see-through', 'black_underwear', 'red_neckwear', 'black_jacket', 'crop_top', 'formal', 'suit', 'black_choker', 'dress_shirt', 'black_pants', 'high_ponytail', 'denim', 'blue_shirt', 'black_bra', 'red_necktie', 'jeans', 'tight_clothing', 'shirt_tucked_in', 'brown_belt', 'bra_visible_through_clothes', 'blue_crop_top', 'female', 'breasts', 'long_hair', 'male', 'short_hair', 'large_breasts', 'blonde_hair', 'black_hair', 'thighs', 'cleavage', 'huge_breasts', 'clavicle', 'muscle', 'midriff', 'curvaceous', 'muscular_male', 'voluptuous', 'scar', 'hourglass_figure', 'scar_on_face', 'height_difference', 'platinum_blonde_hair', 'massive_breasts', 'enormous_breasts', 'standing', 'hand_in_pocket', 'holding_smartphone', 'surprised_expression', 'holding', 'holding_object', 'looking_at_another', 'surprised', 'holding_phone', 'looking_at_phone', 'bodyguard', 'window', 'phone', 'cellphone', 'smartphone', 'sweat', 'sweatdrop', '1girl', '1boy', 'artist_website', 'holding_cellphone', 'tall_male', 'tight_jeans', 'tight_pants', 'tight_shirt', 'yena_lee_(roborobocop)', 'artist_name', 'speech_bubble', 'signature', 'username', 'blurry', 'twitter_username', 'useless_tags', 'blurry_background', 'web_address', 'artist_signature', 'english_language', 'text', 'korean_language', 'english_text', '!', 'korean_text', 'high_resolution', 'tagme', 'very_high_resolution'] | 1731168746 |
| 36864038 | 36864038.jpg | 2355 | 4096 | q | image/jpeg | 736606 | ['blue_archive', 'tsurugi_(blue_archive)', 'ankoku_kishi_m', 'clothing', 'bangs', 'dress', 'barefoot', 'short_sleeves', 'alternative_costume', 'white_dress', 'alternative_hairstyle', 'sleepwear', 'nightgown', 'female', 'breasts', 'long_hair', 'blush', 'large_breasts', 'black_hair', 'red_eyes', 'cleavage', 'feet', 'very_long_hair', 'clavicle', 'ahoge', 'wings', 'halo', 'black_wings', 'looking_at_viewer', 'standing', 'closed_mouth', 'hand_on_chest', 'hand_on_own_chest', '1girl', 'solo', 'absurdly_long_hair', 'simple_background', 'white_background', 'full_body', 'useless_tags', 'high_resolution', 'tagme', 'very_high_resolution'] | 1731168736 |
| 36864037 | 36864037.png | 2480 | 3496 | e | image/png | 1757821 | ['hetero', 'anthro', 'furry', 'interspecies', 'mature', 'zoophilia', 'furry_with_non-furry', 'nude', 'collar', 'female', 'breasts', 'male', 'blush', 'nipples', 'large_breasts', 'blue_eyes', 'vagina', 'tongue', 'tail', 'huge_breasts', 'genitals', 'fur', 'mature_female', 'white_fur', 'white_body', 'humanoid', 'open_mouth', 'ahegao', 'doggystyle', 'all_fours', 'sex', 'tongue_out', 'taken_from_behind', 'naughty_face', 'furry_female', 'mammal', 'animal', 'canine', 'feline', 'feral', 'felid', 'canis', 'dog', 'wolf', 'felis', 'domestic_cat', 'animal_humanoid', 'mammal_humanoid', 'sweat', 'bodily_fluids', 'genital_fluids', 'vaginal_juices', '1girl', '1boy', 'duo', 'cat_humanoid', 'felid_humanoid', 'feline_humanoid', 'lion_granny', 'useless_tags', 'limited_visibility', 'extreme_content', 'high_resolution', 'tagme', 'very_high_resolution'] | 1731168721 |
| 36864036 | 36864036.png | 6200 | 4000 | e | image/png | 18128569 | ['tuca_and_bertie', 'netflix', 'draca_(tuca_and_bertie)', 'mr_valentine00', 'anthro', 'public_humiliation', 'nude', 'nude_female', 'public_nudity', 'female', 'breasts', 'ass', 'huge_ass', 'colored_skin', 'large_ass', 'tanned', 'tanlines', 'green_skin', 'green_body', 'from_behind', 'hand_on_hip', 'viewed_from_behind', 'embarrassed', 'walking', 'humiliation', 'embarrassed_nude_female', 'plant', 'flora_fauna', 'elemental_creature', 'window', 'potted_plant', 'smoke', '1girl', 'solo', 'public', 'building', 'elemental_humanoid', 'for_a_head', 'plant_humanoid', 'english_language', 'text', 'english_text', 'high_resolution', 'tagme', 'very_high_resolution', 'large_filesize', 'extremely_large_filesize'] | 1731168720 |
| 36864035 | 36864035.jpg | 4000 | 4000 | q | image/jpeg | 2096852 | ['yu-gi-oh!', "yu-gi-oh!_5d's", 'fudou_yuusei', 'junk_warrior', 'junk_synchron', 'speed_warrior', 'clothing', 'shirt', 'gloves', 'jacket', 'armor', 'black_shirt', 'blue_jacket', 'brown_gloves', 'blue_armor', 'white_armor', 'male', 'blonde_hair', 'blue_eyes', 'black_hair', 'multicolored_hair', 'upper_body', 'two-tone_hair', 'streaked_hair', 'facial_mark', 'open_mouth', 'outstretched_arm', 'duel_monster', '1boy', 'multiple_boys', 'male_focus', '4boys', 'vincent_graphic_art', '1:1_aspect_ratio', 'useless_tags', 'high_resolution', 'tagme', 'very_high_resolution'] | 1731168713 |
| 36864034 | 36864034.jpg | 1500 | 2000 | q | image/jpeg | 1005992 | ['touhou_project', 'hinanawi_tenshi', 'clothing', 'headwear', 'bangs', 'shirt', 'skirt', 'hat', 'bow', 'hair_between_eyes', 'white_shirt', 'short_sleeves', 'neckwear', 'frills', 'collared_shirt', 'bowtie', 'puffy_sleeves', 'red_neckwear', 'red_bow', 'blue_skirt', 'puffy_short_sleeves', 'black_headwear', 'red_bowtie', 'black_hat', 'center_frills', 'female', 'long_hair', 'blush', 'red_eyes', 'blue_hair', 'eyebrows', 'v-shaped_eyebrows', 'middle_finger', 'looking_at_viewer', 'closed_mouth', 'hand_on_hip', 'hand_on_own_hip', 'leaf', 'food', 'fruit', '1girl', 'solo', 'elucipate', 'fruit_hat_ornament', 'peach', 'rainbow_order', 'simple_background', 'white_background', 'cowboy_shot', 'sketch', 'useless_tags', 'high_resolution', 'tagme'] | 1731168689 |
| 36864033 | 36864033.jpg | 2480 | 3508 | e | image/jpeg | 3016480 | ['naruto', 'naruto_shippuden', 'inuzuka_kiba', 'destor777', 'yaoi', 'bdsm', 'bara', 'foot_fetish', 'clothing', 'nude', 'headwear', 'barefoot', 'completely_nude', 'collar', 'headband', 'forehead_protector', 'male', 'blush', 'short_hair', 'nipples', 'penis', 'brown_hair', 'black_hair', 'navel', 'thighs', 'tail', 'feet', 'testicles', 'teeth', 'erection', 'muscle', 'toes', 'black_eyes', 'stomach', 'abs', 'muscular_male', 'pectorals', 'facial_mark', 'large_pectorals', 'dog_tail', 'circumcised', 'looking_at_viewer', 'spread_legs', 'chain', 'leash', 'konohagakure_symbol', '1boy', 'multiple_boys', 'solo_focus', '2boys', 'male_focus', 'male_only', 'chain_leash', 'stomping', 'uncensored', 'high_resolution', 'tagme', 'very_high_resolution'] | 1731168688 |
| 36864032 | 36864032.avif | 2135 | 2880 | e | image/avif | 133404 | ['original', 'egami', 'hetero', 'loli', 'teenage', 'mind_control', 'teenage_girl', 'hypnosis', 'clothing', 'hair_ornament', 'bangs', 'shirt', 'uniform', 'long_sleeves', 'school_uniform', 'bow', 'white_shirt', 'open_clothes', 'neckwear', 'hairclip', 'collared_shirt', 'bowtie', 'open_shirt', 'red_neckwear', 'red_bow', 'red_bowtie', 'female', 'breasts', 'male', 'blush', 'short_hair', 'nipples', 'penis', 'black_hair', 'vagina', 'navel', 'small_breasts', 'dark_skin', 'erect_nipples', 'black_eyes', 'symbol-shaped_pupils', 'dark-skinned_male', 'heart-shaped_pupils', 'bulge', 'stomach_bulge', 'uterus', 'open_mouth', 'looking_at_viewer', 'spread_legs', 'lying', 'on_back', 'hands_up', 'missionary_position', 'sex', 'vaginal', 'ejaculation', 'orgasm', 'female_orgasm', 'internal_cumshot', 'torso_grab', 'bed', 'pillow', 'bed_sheet', 'semen', 'sweat', '1girl', '1boy', 'solo_focus', 'on_bed', 'cross_section', 'limited_visibility', 'contentious_content', 'heart', 'high_resolution', 'very_high_resolution'] | 1731168685 |
| 36864031 | 36864031.jpg | 553 | 700 | e | image/jpeg | 50377 | ['tokyo_afterschool_summoners', 'garouzuki', 'anthro', 'furry', 'bara', 'clothing', 'underwear', 'footwear', 'jewelry', 'traditional_clothes', 'wafuku', 'necklace', 'animal_print', 'sandals', 'fundoshi', 'flip-flops', 'asian_clothing', 'japanese_clothing', 'east_asian_clothing', 'male', 'blush', 'short_hair', 'nipples', 'navel', 'thighs', 'animal_ears', 'tail', 'eyebrows', 'thick_thighs', 'fur', 'muscle', 'muscular_male', 'bulge', 'pectorals', 'plump', 'scar', 'bipedal', 'belly', 'brown_fur', 'thick_eyebrows', 'overweight', 'dog_ears', 'two-tone_fur', 'overweight_male', 'dog_tail', 'scar_on_face', 'overweight_anthro', 'obese', 'obese_male', 'squatting', 'furry_male', 'mammal', 'animal', 'canine', 'canis', 'dog', 'dog_boy', 'dog_tags', 'solo', '1boy', 'male_focus', 'happi', 'happi_(clothing)', 'lifewonders', 'moritaka_(tas)', 'obese_anthro', 'paw_print', 'countershading', 'limited_visibility'] | 1731168674 |
| 36864030 | 36864030.jpg | 661 | 800 | q | image/jpeg | 107018 | ['rockman', 'mega_man_(series)', 'mega_man_legends_(series)', 'roll_caskett', 'tron_bonne_(mega_man)', 'hirotaka_(hrtk990203)', 'clothing', 'headwear', 'gloves', 'jewelry', 'dress', 'hat', 'jacket', 'earrings', 'open_clothes', 'pantyhose', 'hairband', 'short_sleeves', 'shorts', 'belt', 'bodysuit', 'open_jacket', 'covered_navel', 'short_dress', 'blue_jacket', 'pink_dress', 'red_headwear', 'red_jacket', 'brown_gloves', 'cropped_jacket', 'sleeves_rolled_up', 'black_bodysuit', 'cabbie_hat', 'red_shorts', 'red_hat', 'pink_hairband', 'bodysuit_under_clothes', 'female', 'breasts', 'long_hair', 'short_hair', 'blonde_hair', 'brown_hair', 'navel', 'green_eyes', 'medium_hair', 'open_mouth', 'looking_at_viewer', 'smile', 'closed_mouth', 'open-mouth_smile', ':d', 'crossed_arms', 'hands_on_hips', 'hands_on_own_hips', 'multiple_girls', '2girls', 'crotch_plate', 'skull_brooch', 'skull_earrings', 'artist_name', 'cowboy_shot', 'tagme'] | 1731168665 |
| 36864029 | 36864029.jpg | 494 | 620 | e | image/jpeg | 33553 | ['tokyo_afterschool_summoners', 'garouzuki', 'anthro', 'furry', 'bara', 'clothing', 'animal_print', 'male', 'blush', 'short_hair', 'nipples', 'navel', 'thighs', 'animal_ears', 'eyebrows', 'fur', 'muscle', 'stomach', 'muscular_male', 'fangs', 'pectorals', 'plump', 'scar', 'belly', 'brown_fur', 'thick_eyebrows', 'dog_ears', 'scar_on_face', 'open_mouth', 'looking_at_viewer', 'smile', 'standing', 'furry_male', 'mammal', 'animal', 'canine', 'canis', 'dog', 'dog_boy', 'towel', 'solo', '1boy', 'male_focus', 'lifewonders', 'moritaka_(tas)', 'musclegut', 'paw_print', 'countershading', 'limited_visibility', 'tagme'] | 1731168653 |
| 36864028 | 36864028.png | 832 | 1216 | e | image/png | 1153331 | ['chrono_trigger', 'lucca', 'nude', 'bangs', 'eyewear', 'glasses', 'completely_nude', 'blunt_bangs', 'female', 'breasts', 'blush', 'short_hair', 'nipples', 'large_breasts', 'vagina', 'navel', 'thighs', 'eyes_closed', 'pink_hair', 'purple_hair', 'areolae', 'teeth', 'armpits', 'stomach', 'upper_teeth', 'upper_teeth_only', 'open_mouth', 'spread_legs', 'lying', 'on_back', 'arms_up', 'sex', 'vaginal', 'masturbation', 'insertion', 'object_insertion', 'vaginal_insertion', 'female_masturbation', 'female_ejaculation', 'vaginal_object_insertion', 'bed', 'sex_toy', 'pillow', 'window', 'bed_sheet', 'dildo', 'vibrator', 'machine', 'sex_machine', 'sweat', 'genital_fluids', 'vaginal_juices', '1girl', 'solo', 'female_only', 'indoors', 'on_bed', 'fucking_machine', 'novelai', 'high_resolution', 'tagme', 'ai-created'] | 1731168647 |
| 36864027 | 36864027.png | 1500 | 1000 | q | image/png | 469457 | ['original', 'kemomimi-chan_(naga_u)', 'naga_u', 'clothing', 'tied_hair', 'legwear', 'hair_ornament', 'bangs', 'shirt', 'thighhighs', 'skirt', 'long_sleeves', 'hair_between_eyes', 'white_legwear', 'hair_bun', 'sailor_collar', 'white_thighhighs', 'sleeves_past_wrists', 'single_hair_bun', 'ribbon_trim', 'sleeves_past_fingers', 'green_shirt', 'white_sailor_collar', 'purple_skirt', 'female', 'blush', 'blonde_hair', 'animal_ears', 'tail', 'purple_eyes', 'animal_tail', 'fox_ears', 'animal_ear_fluff', 'violet_eyes', 'fox_tail', 'empty_eyes', 'shaded_face', 'looking_at_viewer', 'lying', 'on_stomach', 'fox_girl', '1girl', 'solo', 'hot', 'melting', 'ribbon-trimmed_legwear', 'ribbon-trimmed_thighhighs', 'simple_background', 'white_background', 'full_body', 'shadow', 'tagme', '3:2_aspect_ratio'] | 1731168642 |
| 36864026 | 36864026.avif | 2400 | 2090 | e | image/avif | 169361 | ['original', 'egami', 'hetero', 'loli', 'teenage', 'mind_control', 'teenage_girl', 'clothing', 'bangs', 'shirt', 'skirt', 'underwear', 'uniform', 'pantsu', 'school_uniform', 'white_shirt', 'short_sleeves', 'serafuku', 'no_bra', 'sailor_collar', 'blue_skirt', 'female', 'breasts', 'long_hair', 'male', 'blush', 'nipples', 'penis', 'black_hair', 'vagina', 'navel', 'brown_eyes', 'small_breasts', 'erect_nipples', 'black_eyes', 'uterus', 'open_mouth', 'spread_legs', 'lying', 'closed_mouth', 'on_back', 'clothes_pull', 'clothes_lift', 'shirt_lift', 'sex', 'vaginal', 'nakadashi', 'ejaculation', 'orgasm', 'internal_cumshot', 'table', 'desk', 'school_desk', 'semen', 'sweat', '1girl', '1boy', 'solo_focus', 'indoors', 'classroom', 'cross_section', 'limited_visibility', 'contentious_content', 'high_resolution'] | 1731168639 |
| 36864025 | 36864025.jpg | 1500 | 2307 | q | image/jpeg | 247461 | ['kantai_collection', 'okinami_(kantai_collection)', 'mori_tarou', 'clothing', 'legwear', 'shirt', 'skirt', 'underwear', 'uniform', 'footwear', 'long_sleeves', 'school_uniform', 'bow', 'eyewear', 'shoes', 'glasses', 'white_shirt', 'alternative_costume', 'socks', 'pleated_skirt', 'white_legwear', 'bra', 'serafuku', 'bowtie', 'sailor_collar', 'brown_footwear', 'white_socks', 'slip-on_shoes', 'loafers', 'grey_shirt', 'adapted_costume', 'brown_skirt', 'green_underwear', 'sailor_shirt', 'loose_socks', 'aqua_bow', 'green_bra', 'female', 'short_hair', 'blonde_hair', 'brown_hair', 'black_hair', 'green_eyes', 'pink_hair', 'multicolored_hair', 'looking_at_viewer', 'smile', 'standing', 'holding', 'holding_instrument', 'star_(symbol)', 'instrument', 'guitar', 'electric_guitar', '1girl', 'solo', 'aqua_bowtie', 'blue-framed_eyewear', 'brown_sailor_collar', 'holding_guitar', 'holding_musical_instrument', 'okinami_kai_ni_(kancolle)', 'simple_background', 'white_background', 'full_body', 'useless_tags', 'high_resolution', 'tagme'] | 1731168620 |
| 36864024 | 36864024.avif | 2400 | 2090 | e | image/avif | 162038 | ['original', 'egami', 'hetero', 'teenage', 'mind_control', 'teenage_girl', 'hypnosis', 'clothing', 'bangs', 'shirt', 'skirt', 'uniform', 'school_uniform', 'white_shirt', 'short_sleeves', 'pleated_skirt', 'serafuku', 'sailor_collar', 'blue_skirt', 'female', 'breasts', 'long_hair', 'male', 'blush', 'nipples', 'penis', 'black_hair', 'vagina', 'navel', 'brown_eyes', 'small_breasts', 'erect_nipples', 'black_eyes', 'open_mouth', 'spread_legs', 'lying', 'on_back', 'clothes_pull', 'clothes_lift', 'shirt_lift', 'skirt_pull', 'skirt_lift', 'sex', 'vaginal', 'table', 'desk', 'sweat', '1girl', '1boy', 'solo_focus', 'indoors', 'classroom', 'high_resolution'] | 1731168601 |
| 36864023 | 36864023.png | 1000 | 1200 | e | image/png | 502627 | ['sailor_moon_(series)', 'kino_makoto', 'sailor_jupiter', 'pokemilf', 'hetero', 'mature', 'clothing', 'tied_hair', 'nude', 'headwear', 'skirt', 'jewelry', 'ponytail', 'earrings', 'miniskirt', 'crown', 'nude_male', 'tiara', 'partially_clothed', 'mostly_nude', 'microskirt', 'sailor_senshi_uniform', 'female', 'breasts', 'long_hair', 'male', 'nipples', 'large_breasts', 'penis', 'vagina', 'navel', 'thighs', 'huge_breasts', 'eyebrows', 'eyebrows_visible_through_hair', 'areolae', 'genitals', 'thick_thighs', 'hair', 'midriff', 'wide_hips', 'lips', 'eyelashes', 'mature_female', 'plump', 'voluptuous', 'belly', 'hips', 'fat', 'unaligned_breasts', 'looking_at_viewer', 'spread_legs', 'parted_lips', 'straddling', 'on_top', 'girl_on_top', 'cowgirl_position', 'male_on_bottom', 'sex', 'vaginal', 'penetration', 'male_penetrating', 'bouncing_breasts', 'clothed_sex', 'female_penetrated', 'male_penetrating_female', 'riding', 'magical_girl', 'mammal', 'sweat', 'bodily_fluids', 'genital_fluids', 'vaginal_juices', '1girl', '1boy', 'solo_focus', 'female_focus', 'male_pov', 'unseen_male_face', 'monochrome', 'pov', 'useless_tags', 'motion_lines', 'green_theme', 'alternate_version_available', 'high_resolution', 'tagme'] | 1731168576 |
| 36864022 | 36864022.jpg | 1111 | 1899 | q | image/jpeg | 276951 | ['blue_archive', 'kyouyama_kazusa', 'dainillust', 'clothing', 'headwear', 'hair_ornament', 'bangs', 'skirt', 'uniform', 'hat', 'long_sleeves', 'school_uniform', 'jacket', 'choker', 'pleated_skirt', 'hairclip', 'blunt_bangs', 'hood', 'sailor_collar', 'black_jacket', 'hoodie', 'neckerchief', 'black_choker', 'bob_cut', 'hood_down', 'hooded_jacket', 'black_hoodie', 'pink_hat', 'female', 'blush', 'short_hair', 'black_hair', 'animal_ears', 'pink_hair', 'clavicle', 'multicolored_hair', 'upper_body', 'cat_ears', 'pink_eyes', 'grin', 'two-tone_hair', 'eyelashes', 'halo', 'extra_ears', 'colored_inner_hair', 'tsurime', 'looking_at_viewer', 'smile', 'hands_up', 'holding', 'holding_object', 'happy', 'flower', 'leaf', 'pink_flower', 'blue_flower', 'bouquet', 'hydrangea', '1girl', 'solo', 'cat_choker', 'food-themed_hair_ornament', 'green_sailor_collar', 'holding_bouquet', 'ice_cream_hair_ornament', 'pink_neckerchief', 'simple_background', 'white_background', 'useless_tags', 'high_resolution', 'tagme'] | 1731168576 |
| 36864021 | 36864021.avif | 2135 | 2880 | e | image/avif | 128014 | ['original', 'egami', 'hetero', 'loli', 'teenage', 'mind_control', 'teenage_girl', 'hypnosis', 'clothing', 'hair_ornament', 'bangs', 'shirt', 'uniform', 'long_sleeves', 'school_uniform', 'bow', 'white_shirt', 'open_clothes', 'neckwear', 'bottomless', 'hairclip', 'collared_shirt', 'bowtie', 'open_shirt', 'red_neckwear', 'red_bow', 'red_bowtie', 'female', 'breasts', 'male', 'blush', 'short_hair', 'nipples', 'penis', 'black_hair', 'vagina', 'navel', 'small_breasts', 'erect_nipples', 'black_eyes', 'symbol-shaped_pupils', 'heart-shaped_pupils', 'bulge', 'stomach_bulge', 'open_mouth', 'looking_at_viewer', 'spread_legs', 'lying', 'on_back', 'hands_up', 'missionary_position', 'sex', 'vaginal', 'orgasm', 'female_orgasm', 'torso_grab', 'bed', 'pillow', 'bed_sheet', 'sweat', '1girl', '1boy', 'solo_focus', 'on_bed', 'limited_visibility', 'contentious_content', 'heart', 'high_resolution', 'very_high_resolution'] | 1731168565 |
| 36864020 | 36864020.png | 1000 | 1200 | e | image/png | 513634 | ['sailor_moon_(series)', 'kino_makoto', 'sailor_jupiter', 'pokemilf', 'hetero', 'mature', 'clothing', 'tied_hair', 'nude', 'headwear', 'skirt', 'jewelry', 'ponytail', 'earrings', 'miniskirt', 'crown', 'nude_male', 'tiara', 'partially_clothed', 'mostly_nude', 'microskirt', 'sailor_senshi_uniform', 'female', 'breasts', 'long_hair', 'male', 'nipples', 'large_breasts', 'penis', 'vagina', 'navel', 'thighs', 'huge_breasts', 'eyebrows', 'eyebrows_visible_through_hair', 'areolae', 'genitals', 'pubic_hair', 'thick_thighs', 'hair', 'midriff', 'wide_hips', 'lips', 'female_pubic_hair', 'eyelashes', 'mature_female', 'plump', 'voluptuous', 'belly', 'hips', 'fat', 'hairy_vagina', 'unaligned_breasts', 'looking_at_viewer', 'spread_legs', 'parted_lips', 'straddling', 'on_top', 'girl_on_top', 'cowgirl_position', 'male_on_bottom', 'sex', 'vaginal', 'penetration', 'male_penetrating', 'bouncing_breasts', 'clothed_sex', 'female_penetrated', 'male_penetrating_female', 'riding', 'magical_girl', 'mammal', 'sweat', 'bodily_fluids', 'genital_fluids', 'vaginal_juices', '1girl', '1boy', 'solo_focus', 'female_focus', 'male_pov', 'unseen_male_face', 'monochrome', 'pov', 'useless_tags', 'motion_lines', 'green_theme', 'alternate_version_available', 'high_resolution', 'tagme'] | 1731168553 |
| 36864019 | 36864019.png | 4096 | 4096 | e | image/png | 6795474 | ['original', 'loli', 'clothing', 'legwear', 'bangs', 'shirt', 'skirt', 'uniform', 'footwear', 'school_uniform', 'white_shirt', 'socks', 'pleated_skirt', 'white_legwear', 'bottomless', 'serafuku', 'nopan', 'sailor_collar', 'blue_skirt', 'no_shoes', 'white_socks', 'female', 'long_hair', 'blush', 'black_hair', 'ass', 'vagina', 'thighs', 'brown_eyes', 'feet', 'anus', 'legs', 'black_eyes', 'soles', 'cleft_of_venus', 'open_mouth', 'looking_at_viewer', 'smile', 'spread_legs', 'lying', 'on_back', 'clothes_pull', 'clothes_lift', 'skirt_pull', 'skirt_lift', 'legs_up', 'seductive_look', 'bed', 'pillow', 'window', '1girl', 'solo', 'indoors', 'on_bed', '1:1_aspect_ratio', 'sankaku_ai', 'limited_visibility', 'contentious_content', 'high_resolution', 'tagme', 'very_high_resolution', 'large_filesize', 'ai-created'] | 1731168553 |
| 36864018 | 36864018.jpg | 4093 | 2894 | s | image/jpeg | 2054006 | ['love_live!_(series)', 'link!_like!_love_live!', 'hinoshita_kaho', 'murano_sayaka', 'fujishima_megumi', 'otomune_kozue', 'yugiri_tsuzuri', 'osawa_rurino', 'sunkeun04', 'virtual_youtuber', 'clothing', 'tied_hair', 'hair_ornament', 'bangs', 'uniform', 'dress', 'twintails', 'ponytail', 'long_sleeves', 'ribbon', 'school_uniform', 'open_clothes', 'hair_ribbon', 'neckwear', 'hairclip', 'asymmetrical_hair', 'hair_bun', 'hair_flower', 'blunt_bangs', 'side_ponytail', 'sailor_collar', 'red_neckwear', 'two_side_up', 'parted_bangs', 'neckerchief', 'low_twintails', 'bob_cut', 'straight_hair', 'cardigan', 'blue_ribbon', 'red_neckerchief', 'white_sailor_collar', 'star_hair_ornament', 'sailor_dress', 'pleated_dress', 'single_side_bun', 'brown_dress', 'open_cardigan', 'winter_uniform', 'inverted_bob', 'female', 'long_hair', 'short_hair', 'blonde_hair', 'blue_eyes', 'brown_hair', 'green_eyes', 'blue_hair', 'purple_eyes', 'eyes_closed', 'purple_hair', 'teeth', 'red_hair', 'multicolored_hair', 'one_eye_closed', 'green_hair', 'grey_hair', 'medium_hair', 'pink_eyes', 'grin', 'streaked_hair', 'violet_eyes', 'upper_teeth', 'gradient_hair', 'upper_teeth_only', 'light_brown_hair', 'colored_inner_hair', 'crossed_bangs', 'light_blue_hair', 'dark_blue_hair', 'open_mouth', 'looking_at_viewer', 'smile', 'closed_mouth', 'v', 'arms_behind_back', '^_^', ';d', "hand_on_another's_shoulder", 'behind_another', 'holding', 'holding_object', 'wink', 'flower', 'rose', 'white_flower', 'pink_flower', 'blue_flower', 'bouquet', 'purple_flower', 'pink_rose', 'star_(symbol)', 'photo_(object)', 'tube', 'multiple_girls', '6+girls', 'aqua_neckerchief', 'artistic_error', 'brown_cardigan', 'bunny_ears_prank', 'congratulations', 'daisy', 'graduation', 'hasu_no_sora_school_uniform', 'holding_bouquet', 'kotori_photobomb', 'oogami_sachi', 'prank', 'rabbit_hair_ornament', 'triangle_mouth', 'blurry', 'useless_tags', 'blurry_background', 'selfie', 'blurry_foreground', 'chromatic_aberration', 'film_grain', 'high_resolution', 'very_high_resolution'] | 1731168549 |
| 36864017 | 36864017.png | 1567 | 2800 | q | image/png | 7084962 | ['genshin_impact', 'lumine_(genshin_impact)', 'lumine', 'kabu_usagi', 'clothing', 'legwear', 'hair_ornament', 'bangs', 'thighhighs', 'underwear', 'pantsu', 'bare_shoulders', 'black_legwear', 'sidelocks', 'lingerie', 'black_clothes', 'bra', 'hair_flower', 'black_thighhighs', 'black_underwear', 'wardrobe_malfunction', 'black_panties', 'garter_straps', 'no_shoes', 'underwear_only', 'garter_belt', 'black_bra', 'short_hair_with_long_locks', 'strap_slip', 'female', 'breasts', 'blush', 'large_breasts', 'blonde_hair', 'navel', 'thighs', 'cleavage', 'yellow_eyes', 'clavicle', 'stomach', 'feathers', 'thigh_gap', 'looking_at_viewer', 'sitting', 'closed_mouth', 'bent_knees', 'wariza', 'outstretched_arms', 'raised_eyebrows', 'flower', 'white_flower', 'bed', 'clock', 'sweat', '1girl', 'solo', 'indoors', 'on_bed', 'feather_hair_ornament', 'paid_reward', 'high_resolution', 'very_high_resolution', 'large_filesize'] | 1731168542 |
| 36864016 | 36864016.png | 1980 | 2000 | e | image/png | 942894 | ['the_elder_scrolls', 'the_elder_scrolls_v:_skyrim', 'bethesda_softworks', 'microsoft', 'hetero', 'anthro', 'furry', 'interspecies', 'scalie', 'nude', 'armor', 'shoulder_armor', 'female', 'breasts', 'male', 'large_breasts', 'ass', 'thighs', 'tongue', 'tail', 'teeth', 'genitals', 'thick_thighs', 'claws', 'scar', 'sharp_teeth', 'scales', 'non-mammal_breasts', 'long_tongue', 'looking_at_viewer', 'hand_on_ass', 'hand_on_breast', 'tongue_out', 'groping', 'breast_grab', 'embracing', 'hug', 'ass_grab', 'deep_skin', 'groping_from_behind', 'furry_female', 'dragon', 'monster_girl', 'dragon_girl', '1girl', 'duo', 'argonian', 'cokesero', 'deeja', 'face_in_breasts', 'jaree-ra', 'simple_background', 'monochrome', 'useless_tags', 'limited_visibility', 'high_resolution', 'tagme'] | 1731168528 |
| 36864015 | 36864015.png | 2048 | 945 | s | image/png | 3931977 | ['idolmaster', 'code_geass', 'code_geass:_lost_stories', 'shirley_fenette', 'crossdressing', 'clothing', 'shirt', 'jewelry', 'long_sleeves', 'bow', 'jacket', 'open_clothes', 'sidelocks', 'pants', 'frills', 'striped', 'collared_shirt', 'bowtie', 'open_jacket', 'vest', 'formal', 'suit', 'white_jacket', 'frilled_sleeves', 'brooch', 'green_shirt', 'white_pants', 'brown_bow', 'white_vest', 'female', 'breasts', 'long_hair', 'blush', 'brown_hair', 'green_eyes', 'medium_breasts', 'orange_hair', 'light_blush', 'open_mouth', 'looking_at_viewer', 'smile', 'hands_up', 'hand_on_chest', 'hand_on_own_chest', 'clenched_hand', 'on_one_knee', 'holding', 'holding_object', 'happy', 'groom', 'flower', 'petals', 'rose', 'white_flower', 'red_flower', 'pink_flower', 'yellow_flower', 'bouquet', 'red_rose', 'white_rose', 'pink_rose', 'sparkle', 'buttons', 'gem', 'blue_gemstone', '1girl', 'solo', 'outdoors', 'day', 'stairs', 'fountain', 'brown_bowtie', 'frilled_jacket', 'holding_bouquet', 'red_carpet', 'straight-on', 'white_suit', 'yellow_rose', 'cg_art', 'game_cg', 'official_art', 'useless_tags', 'dutch_angle', 'high_resolution'] | 1731168523 |
| 36864014 | 36864014.avif | 2135 | 2880 | e | image/avif | 119809 | ['original', 'egami', 'hetero', 'loli', 'teenage', 'mind_control', 'teenage_girl', 'hypnosis', 'clothing', 'hair_ornament', 'bangs', 'shirt', 'uniform', 'long_sleeves', 'school_uniform', 'bow', 'white_shirt', 'open_clothes', 'neckwear', 'hairclip', 'collared_shirt', 'bowtie', 'open_shirt', 'red_neckwear', 'red_bow', 'red_bowtie', 'female', 'breasts', 'male', 'blush', 'short_hair', 'nipples', 'penis', 'black_hair', 'vagina', 'navel', 'small_breasts', 'dark_skin', 'erect_nipples', 'black_eyes', 'symbol-shaped_pupils', 'dark-skinned_male', 'heart-shaped_pupils', 'bulge', 'stomach_bulge', 'open_mouth', 'looking_at_viewer', 'spread_legs', 'lying', 'on_back', 'hands_up', 'missionary_position', 'sex', 'vaginal', 'orgasm', 'female_orgasm', 'torso_grab', 'bed', 'pillow', 'bed_sheet', 'sweat', '1girl', '1boy', 'solo_focus', 'on_bed', 'limited_visibility', 'contentious_content', 'heart', 'high_resolution', 'very_high_resolution'] | 1731168513 |
| 36864013 | 36864013.webp | 1280 | 853 | e | image/webp | 79318 | ['hetero', 'clothing', 'tied_hair', 'nude', 'hair_ornament', 'bangs', 'bow', 'braid', 'completely_nude', 'hair_bow', 'pink_bow', 'female', 'breasts', 'long_hair', 'male', 'blush', 'nipples', 'large_breasts', 'penis', 'ass', 'vagina', 'navel', 'thighs', 'red_eyes', 'white_hair', 'anus', 'testicles', 'grey_hair', 'open_mouth', 'spread_legs', 'lying', 'ahegao', 'top-down_bottom-up', 'hands_on_ground', 'sex', 'vaginal', 'nakadashi', 'anal', 'taken_from_behind', 'ejaculation', 'breast_press', 'orgasm', 'trembling', 'female_ejaculation', 'cloud', 'semen', 'sweat', 'water', 'wet', 'overflow', '1girl', '1boy', 'solo_focus', 'outdoors', 'sky', 'day', 'blue_sky', 'beach', 'ocean', 'ai-created', '3:2_aspect_ratio'] | 1731168511 |
| 36864012 | 36864012.jpg | 1280 | 1161 | e | image/jpeg | 361930 | ['warcraft', 'blizzard_entertainment', 'worgen', 'vulpera', 'spelunker_sal', 'anthro', 'furry', 'yaoi', 'size_difference', 'bara', 'nude', 'jewelry', 'earrings', 'completely_nude', 'collar', 'male', 'nipples', 'penis', 'ass', 'thighs', 'animal_ears', 'tongue', 'tail', 'feet', 'testicles', 'teeth', 'erection', 'genitals', 'thick_thighs', 'fur', 'muscle', 'toes', 'large_penis', 'huge_ass', 'abs', 'animal_genitalia', 'muscular_male', 'claws', 'pectorals', 'animal_penis', 'scar', 'sharp_teeth', 'muscular_anthro', 'toe_claws', 'smile', 'all_fours', 'sex', 'holding', 'anal', 'penetration', 'taken_from_behind', 'male_penetrating', 'leash_pull', 'submissive', 'furry_male', 'male_penetrated', 'small_dom_big_sub', 'mammal', 'animal', 'canine', 'canis', 'dog_boy', 'leash', 'semen', 'bodily_fluids', 'genital_fluids', 'precum', 'multiple_boys', 'duo', '2boys', 'male_focus', 'furry_with_furry', 'male_penetrating_male', 'muscular_bottom', 'muscular_sub', 'were', 'werecanid', 'wolf_boy', 'simple_background', 'monochrome', 'white_background', 'limited_visibility'] | 1731168506 |
| 36864011 | 36864011.png | 844 | 1133 | s | image/png | 1046129 | ['code_geass', 'code_geass:_lost_stories', 'shirley_fenette', 'crossdressing', 'clothing', 'shirt', 'jewelry', 'long_sleeves', 'bow', 'jacket', 'open_clothes', 'sidelocks', 'pants', 'frills', 'collared_shirt', 'bowtie', 'open_jacket', 'vest', 'formal', 'suit', 'white_jacket', 'frilled_sleeves', 'brooch', 'green_shirt', 'white_pants', 'brown_bow', 'white_vest', 'female', 'breasts', 'long_hair', 'blush', 'brown_hair', 'green_eyes', 'medium_breasts', 'orange_hair', 'light_blush', 'open_mouth', 'looking_at_viewer', 'smile', 'standing', 'hands_up', 'hand_on_chest', 'hand_on_own_chest', 'clenched_hand', 'holding', 'holding_object', 'happy', 'groom', 'flower', 'rose', 'red_flower', 'bouquet', 'red_rose', 'buttons', 'gem', 'blue_gemstone', '1girl', 'solo', 'brown_bowtie', 'cropped_legs', 'frilled_jacket', 'holding_bouquet', 'straight-on', 'white_suit', 'cg_art', 'simple_background', 'game_cg', 'official_art', 'cowboy_shot', 'useless_tags', 'dutch_angle', 'transparent_background'] | 1731168500 |
| 36864010 | 36864010.jpg | 1280 | 1920 | e | image/jpeg | 268772 | ['hetero', 'clothing', 'tied_hair', 'nude', 'hair_ornament', 'bangs', 'ribbon', 'bow', 'swimsuit', 'braid', 'bikini', 'hair_ribbon', 'hair_bow', 'nude_male', 'clothed_female', 'clothed_female_nude_male', 'swept_bangs', 'sarong', 'female', 'breasts', 'long_hair', 'male', 'blush', 'short_hair', 'nipples', 'large_breasts', 'penis', 'brown_hair', 'vagina', 'navel', 'thighs', 'red_eyes', 'tongue', 'white_hair', 'testicles', 'dark_skin', 'stomach', 'dark-skinned_male', 'faceless', 'clitoris', 'faceless_male', 'open_mouth', 'looking_at_viewer', 'spread_legs', 'standing', 'leg_up', 'standing_on_one_leg', 'standing_sex', 'sex', 'vaginal', 'tongue_out', 'nakadashi', 'anal', 'groping', 'ejaculation', 'breast_grab', 'grabbing', 'orgasm', 'trembling', 'heavy_breathing', 'tree', 'palm_tree', 'cloud', 'semen', 'sweat', 'saliva', 'water', 'wet', 'overflow', 'sand', '1girl', '1boy', 'solo_focus', 'outdoors', 'sky', 'day', 'blue_sky', 'beach', 'ocean', 'sunlight', 'high_resolution', 'ai-created'] | 1731168495 |
| 36864009 | 36864009.jpg | 1483 | 2048 | q | image/jpeg | 275331 | ['sousou_no_frieren', 'frieren', 'hikentai0112', 'clothing', 'tied_hair', 'bangs', 'shirt', 'jewelry', 'twintails', 'jacket', 'earrings', 'striped', 'parted_bangs', 'capelet', 'striped_shirt', 'striped_clothing', 'female', 'long_hair', 'green_eyes', 'eyebrows', 'upper_body', 'pointed_ears', 'thick_eyebrows', 'looking_at_viewer', 'parted_lips', 'elf', '1girl', 'solo', 'cropped_torso', 'dangle_earrings', 'drop_earrings', 'simple_background', 'monochrome', 'white_background', 'greyscale', 'useless_tags', 'spot_color', 'high_resolution', 'tagme'] | 1731168478 |
| 36864008 | 36864008.jpg | 1280 | 1920 | e | image/jpeg | 276801 | ['hetero', 'clothing', 'tied_hair', 'nude', 'hair_ornament', 'bangs', 'skirt', 'bow', 'swimsuit', 'braid', 'bikini', 'hair_bow', 'frills', 'nude_male', 'clothed_female', 'clothed_female_nude_male', 'pink_bow', 'frilled_swimsuit', 'frilled_bikini', 'pink_swimsuit', 'pink_bikini', 'sarong', 'bikini_skirt', 'female', 'breasts', 'long_hair', 'male', 'blush', 'nipples', 'large_breasts', 'penis', 'ass', 'vagina', 'thighs', 'red_eyes', 'tongue', 'white_hair', 'anus', 'dark_skin', 'pubic_hair', 'dark-skinned_male', 'male_pubic_hair', 'open_mouth', 'looking_at_viewer', 'standing', 'clothes_pull', 'from_behind', 'sex', 'vaginal', 'tongue_out', 'nakadashi', 'looking_back', 'anal', 'taken_from_behind', 'trembling', 'heavy_breathing', 'arm_grab', 'tree', 'palm_tree', 'cloud', 'semen', 'sweat', 'genital_fluids', 'vaginal_juices', 'water', 'wet', 'sand', '1girl', '1boy', 'solo_focus', 'outdoors', 'sky', 'day', 'blue_sky', 'beach', 'ocean', 'sunlight', 'high_resolution', 'ai-created'] | 1731168475 |
| 36864007 | 36864007.avif | 3028 | 2880 | e | image/avif | 169423 | ['original', 'egami', 'hetero', 'loli', 'teenage', 'teenage_girl', 'clothing', 'tied_hair', 'nude', 'legwear', 'bangs', 'shirt', 'thighhighs', 'skirt', 'uniform', 'school_uniform', 'black_legwear', 'white_shirt', 'neckwear', 'black_thighhighs', 'sailor_collar', 'red_neckwear', 'neckerchief', 'red_neckerchief', 'female', 'breasts', 'male', 'blush', 'short_hair', 'nipples', 'penis', 'brown_hair', 'black_hair', 'vagina', 'navel', 'thighs', 'brown_eyes', 'small_breasts', 'erect_nipples', 'open_mouth', 'spread_legs', 'standing', 'clothes_lift', 'shirt_lift', 'leg_up', 'standing_sex', 'sex', 'vaginal', 'nakadashi', 'taken_from_behind', 'orgasm', 'trembling', 'female_orgasm', 'semen', 'sweat', '1girl', '1boy', 'indoors', 'classroom', 'limited_visibility', 'contentious_content', 'high_resolution', 'very_high_resolution'] | 1731168464 |
| 36864006 | 36864006.png | 1800 | 2400 | e | image/png | 1644531 | ['slime_rancher', 'hetero', 'anthro', 'hyper', 'imminent_sex', 'nude', 'female', 'breasts', 'male', 'blush', 'penis', 'ass', 'vagina', 'thighs', 'anus', 'testicles', 'erection', 'genitals', 'thick_thighs', 'shiny_skin', 'huge_ass', 'colored_skin', 'large_ass', 'humanoid', 'humanoid_genitalia', 'disembodied', 'disembodied_penis', 'orange_body', 'red_skin', 'orange_skin', 'big_anus', 'lying', 'from_behind', 'viewed_from_behind', 'looking_back', 'eating', 'grass', 'mammal', 'animal', 'bunny', 'lagomorph', 'leporid', 'slime_monster', 'hybrid', 'slime', 'food', 'carrot', 'sweat', 'dripping', '1girl', '1boy', 'solo_focus', 'duo', 'outdoors', 'bunny_slime_(slime_rancher)', 'hotwoodcoldwood', 'slime_(slime_rancher)', 'translucent', 'translucent_body', 'shiny', 'useless_tags', '3:4', '?', 'high_resolution', 'tagme', 'very_high_resolution'] | 1731168462 |
| 36864005 | 36864005.png | 2400 | 3300 | q | image/png | 3678561 | ['she-ra_and_the_princesses_of_power', 'adora', 'scotchwaifus', 'bdsm', 'femsub', 'clothing', 'tied_hair', 'shirt', 'footwear', 'ponytail', 'white_shirt', 'boots', 'pants', 'gagged', 'fully_clothed', 'grey_pants', 'red_boots', 'female', 'breasts', 'blush', 'blonde_hair', 'blue_eyes', 'light_skin', 'light-skinned_female', 'sitting', 'arms_behind_back', 'bound_legs', 'bondage', 'restrained', 'looking_to_the_side', 'looking_away', 'bound_arms', 'tied_up', 'bound_ankles', 'rope_bondage', 'submissive', 'prisoner', 'kidnapped', 'rope', 'gag', 'tape', 'improvised_gag', 'tapegag', 'sweat', '1girl', 'solo', 'female_only', 'female_focus', 'prison', 'prison_cell', 'blonde_female', 'damsel_in_distress', 'female_solo', 'fright_zone', 'mouth_taped', 'sub_only', 'tape_bondage', 'full_body', 'artist_name', 'watermark', 'useless_tags', 'web_address', '2019', '2010s', 'high_resolution', 'very_high_resolution'] | 1731168410 |
| 36864004 | 36864004.avif | 1280 | 1430 | e | image/avif | 71646 | ['original', 'egami', 'hetero', 'loli', 'teenage', 'teenage_girl', 'clothing', 'nude', 'legwear', 'shirt', 'thighhighs', 'skirt', 'underwear', 'uniform', 'school_uniform', 'black_legwear', 'short_sleeves', 'black_clothes', 'black_thighhighs', 'black_skirt', 'nude_male', 'clothed_female', 'clothed_female_nude_male', 'female', 'breasts', 'male', 'blush', 'short_hair', 'nipples', 'penis', 'brown_hair', 'black_hair', 'vagina', 'navel', 'tongue', 'testicles', 'small_breasts', 'dark_skin', 'erect_nipples', 'uterus', 'cervix', 'open_mouth', 'spread_legs', 'clothes_lift', 'straddling', 'on_top', 'head_back', 'sex', 'vaginal', 'tongue_out', 'nakadashi', 'groping', 'ejaculation', 'breast_grab', 'grabbing', 'orgasm', 'trembling', 'clothed_sex', 'nipple_tweak', 'female_orgasm', 'internal_cumshot', 'reverse_upright_straddle', 'semen', 'sweat', 'genital_fluids', 'vaginal_juices', 'overflow', '1girl', '1boy', 'indoors', 'limited_visibility', 'contentious_content', 'heart', 'high_resolution'] | 1731168405 |
| 36864003 | 36864003.avif | 3040 | 2880 | e | image/avif | 200735 | ['original', 'egami', 'hetero', 'teenage', 'mind_control', 'teenage_girl', 'hypnosis', 'clothing', 'tied_hair', 'nude', 'legwear', 'bangs', 'shirt', 'thighhighs', 'skirt', 'underwear', 'uniform', 'twintails', 'pantsu', 'school_uniform', 'black_legwear', 'white_shirt', 'short_sleeves', 'neckwear', 'pleated_skirt', 'pants', 'serafuku', 'belt', 'black_thighhighs', 'sailor_collar', 'nude_male', 'clothed_female', 'clothed_female_nude_male', 'neckerchief', 'female', 'breasts', 'long_hair', 'male', 'blush', 'short_hair', 'nipples', 'penis', 'black_hair', 'feet', 'erect_nipples', 'grey_eyes', 'open_mouth', 'clothes_pull', 'clothes_lift', 'skirt_pull', 'skirt_lift', 'pants_pull', 'sex', 'vaginal', 'taken_from_behind', 'orgasm', 'trembling', 'clothed_sex', 'female_orgasm', 'sweat', '1girl', '1boy', 'indoors', 'high_resolution', 'very_high_resolution'] | 1731168369 |
| 36864002 | 36864002.png | 4967 | 3095 | q | image/png | 2600889 | ['sonic_the_hedgehog_(series)', 'sega', 'sonic_the_hedgehog_(character)', 'miles_prower', 'blaze_the_cat', 'dr._eggman', 'anthro', 'furry', 'clothing', 'tied_hair', 'hair_ornament', 'shirt', 'gloves', 'jewelry', 'ponytail', 'long_sleeves', 'eyewear', 'jacket', 'glasses', 'sleeveless', 'necklace', 'white_gloves', 'sunglasses', 'topwear', 'eyewear_on_head', 'goggles', 'red_shirt', 'one_eye_covered', 'handwear', 'red_jacket', 'goggles_on_head', 'mustache', 'hair_tie', 'purple_shirt', 'tinted_eyewear', 'topknot', 'female', 'long_hair', 'male', 'short_hair', 'blonde_hair', 'blue_eyes', 'black_hair', 'animal_ears', 'green_eyes', 'blue_hair', 'tail', 'purple_eyes', 'purple_hair', 'yellow_eyes', 'teeth', 'multicolored_hair', 'fur', 'hair', 'animal_tail', 'cat_ears', 'black_eyes', 'two-tone_hair', 'hair_over_one_eye', 'claws', 'clenched_teeth', 'white_fur', 'facial_hair', 'fox_ears', 'fox_tail', 'black_fur', 'body_fur', 'glowing_eyes', 'two-tone_fur', 'blue_fur', 'yellow_fur', 'animal_nose', 'wide-eyed', 'bald', 'constricted_pupils', 'open_mouth', 'looking_at_viewer', 'closed_mouth', 'half-closed_eyes', 'narrowed_eyes', 'surprised', 'transformation', 'scared', 'furry_female', 'furry_male', 'mammal', 'animal', 'canine', 'feline', 'felid', 'canis', 'fox', 'wolf', 'felis', 'domestic_cat', 'hedgehog', 'eulipotyphlan', 'cat_girl', 'fox_girl', 'fox_boy', 'gem', 'electricity', 'multiple_girls', '2girls', 'multiple_boys', 'group', '4boys', '5boys', 'accessory', 'blue-tinted_eyewear', 'chris_hayabusa', 'finimun', 'green_glow', 'mid_transformation', 'reaching_towards_viewer', 'simple_background', 'signature', 'gradient', 'gradient_background', 'useless_tags', 'glowing', 'abstract_background', 'alpha_channel', 'limited_visibility', 'high_resolution', 'very_high_resolution', '16:10_aspect_ratio'] | 1731168350 |
| 36864001 | 36864001.jpg | 1396 | 1730 | e | image/jpeg | 213991 | ['anthro', 'clothing', 'tied_hair', 'ponytail', 'female', 'breasts', 'large_breasts', 'thighs', 'thick_thighs', 'open_mouth', 'robot', 'robot_girl', 'bell', '1girl', 'solo', 'jinx_doodle', 'open_hands', 'high_resolution', 'tagme'] | 1731168336 |
| 36864000 | 36864000.jpg | 921 | 1300 | q | image/jpeg | 104589 | ['original', 'nakkar', 'clothing', 'tied_hair', 'legwear', 'hair_ornament', 'bangs', 'skirt', 'uniform', 'footwear', 'dress', 'twintails', 'long_sleeves', 'ribbon', 'school_uniform', 'bow', 'shoes', 'hair_ribbon', 'hair_bow', 'socks', 'white_legwear', 'frills', 'serafuku', 'puffy_sleeves', 'white_dress', 'sailor_collar', 'black_footwear', 'sleeves_past_wrists', 'white_socks', 'frilled_dress', 'pink_bow', 'blue_sailor_collar', 'puffy_long_sleeves', 'pink_ribbon', 'mary_janes', 'sailor_dress', 'grey_legwear', 'female', 'long_hair', 'blush', 'animal_ears', 'tail', 'purple_eyes', 'very_long_hair', 'ahoge', 'grey_hair', 'animal_tail', 'cat_ears', 'cat_tail', 'animal_ear_fluff', 'violet_eyes', 'looking_at_viewer', 'standing', 'closed_mouth', 'hand_up', 'standing_on_one_leg', 'running', 'cat_girl', '1girl', 'solo', ':<', 'escalator', 'grey_socks', 'full_body', 'username', 'blurry', 'watermark', 'twitter_username', 'useless_tags', 'blurry_background', 'motion_blur', 'speed_lines', 'photo_background', 'high_resolution', 'tagme'] | 1731168334 |
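Each row above follows the same schema: post id, filename, width, height, rating (`s`/`q`/`e`), MIME type, file size in bytes, tag list, and a Unix timestamp. A minimal sketch of filtering such rows by rating and required tags — the field names here are illustrative, not necessarily the dataset's canonical column names:

```python
# Minimal sketch: filter post rows by rating and required tags.
# The schema mirrors the metadata table above; field names are
# illustrative, not necessarily the dataset's canonical column names.

def filter_posts(posts, rating=None, required_tags=()):
    """Yield posts matching an optional rating and a set of required tags."""
    required = set(required_tags)
    for post in posts:
        if rating is not None and post["rating"] != rating:
            continue
        if not required.issubset(post["tags"]):
            continue
        yield post

# Two rows abridged from the table above.
posts = [
    {"id": 36864018, "rating": "s",
     "tags": ["love_live!_(series)", "virtual_youtuber", "high_resolution"]},
    {"id": 36864022, "rating": "q",
     "tags": ["blue_archive", "1girl", "solo", "high_resolution"]},
]

print([p["id"] for p in filter_posts(posts, rating="q", required_tags=["solo"])])
# → [36864022]
```

The same predicate style works row-by-row over a streamed metadata file, so the full table never has to be held in memory.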
## Tags
The dataset contains 1,911,041 tags in total.
These are the top 30 of the 237 tags of type `unknown (-1)` — mostly malformed entries: tag fragments split on whitespace and mis-encoded text:
| id | tag | name | name_en | name_ja | type | rating | count | total_count |
|-----:|:-----------------------------------|:-----------------------------------|:----------|:----------|-------:|:---------|--------:|--------------:|
| -1 | night | night | | | -1 | | 26 | 0 |
| -1 | workout | workout | | | -1 | | 22 | 0 |
| -1 | | | | | -1 | | 20 | 0 |
| -1 | ポケモン | ポケモン | | | -1 | | 20 | 0 |
| -1 | fate/stay | fate/stay | | | -1 | | 20 | 0 |
| -1 | x・y | x・y | | | -1 | | 20 | 0 |
| -1 | _la_tour_d'horloge | _la_tour_d'horloge | | | -1 | | 19 | 0 |
| -1 | tokeidai_no_jeanne:_jeanne_Ã | tokeidai_no_jeanne:_jeanne_Ã | | | -1 | | 19 | 0 |
| -1 | ht-mimi | ht-mimi | | | -1 | | 18 | 0 |
| -1 | school | school | | | -1 | | 17 | 0 |
| -1 | プリズマ☆イリヤ | プリズマ☆イリヤ | | | -1 | | 16 | 0 |
| -1 | fate/kaleid | fate/kaleid | | | -1 | | 16 | 0 |
| -1 | liner | liner | | | -1 | | 16 | 0 |
| -1 | ラブライブ! | ラブライブ! | | | -1 | | 14 | 0 |
| -1 | idol | idol | | | -1 | | 14 | 0 |
| -1 | project | project | | | -1 | | 14 | 0 |
| -1 | ãã«ã«ã¼å | ãã«ã«ã¼å | | | -1 | | 11 | 0 |
| -1 | é®®è¡çªé·ï¼ | é®®è¡çªé·ï¼ | | | -1 | | 11 | 0 |
| -1 | 上柚木綾瀬 | 上柚木綾瀬 | | | -1 | | 11 | 0 |
| -1 | loveる | loveる | | | -1 | | 11 | 0 |
| -1 | to | to | | | -1 | | 11 | 0 |
| -1 | ダイワスカーレット(ウマ娘)ウマ娘 | ダイワスカーレット(ウマ娘)ウマ娘 | | | -1 | | 10 | 0 |
| -1 | é®®è¡çªé·ï¼_ãã«ã«ã¼å | é®®è¡çªé·ï¼_ãã«ã«ã¼å | | | -1 | | 9 | 0 |
| -1 | -とらぶる- | -とらぶる- | | | -1 | | 8 | 0 |
| -1 | フェラ | フェラ | | | -1 | | 7 | 0 |
| -1 | in-birth | in-birth | | | -1 | | 6 | 0 |
| -1 | under | under | | | -1 | | 6 | 0 |
| -1 | -イリス- | -イリス- | | | -1 | | 5 | 0 |
| -1 | soft | soft | | | -1 | | 5 | 0 |
| -1 | 328ï¼ | 328ï¼ | | | -1 | | 4 | 0 |
These are the top 30 tags (1124446 tags in total) of type `general (0)`:
| id | tag | name | name_en | name_ja | type | rating | count | total_count |
|-------:|:------------------|:------------------|:------------------|:-------------|-------:|:---------|---------:|--------------:|
| 34240 | female | Female | Female | 女性 | 0 | s | 12356484 | 12356484 |
| 560 | clothing | clothing | clothing | 衣類 | 0 | q | 10628832 | 10628832 |
| 464292 | high_resolution | High resolution | High resolution | 高解像度 | 0 | s | 9576405 | 9576405 |
| 43623 | 1girl | 1girl | 1girl | 女性一人 | 0 | q | 8541226 | 8541226 |
| 6878 | solo | solo | solo | 一人 | 0 | s | 7630765 | 7630765 |
| 163 | breasts | breasts | breasts | 乳 | 0 | q | 6783614 | 6783614 |
| 142 | long_hair | long hair | long hair | ロングヘア | 0 | q | 6554683 | 6554683 |
| 156 | blush | Blush | Blush | 赤面 | 0 | q | 5074226 | 5074226 |
| 2679 | male | male | male | 男性 | 0 | q | 5029960 | 5029960 |
| 97161 | looking_at_viewer | Looking at viewer | Looking at viewer | カメラ目線 | 0 | s | 4703847 | 4703847 |
| 820 | smile | Smile | Smile | 笑顔 | 0 | s | 4348542 | 4348542 |
| 768 | short_hair | short hair | short hair | ショートヘア | 0 | s | 4057546 | 4057546 |
| 1356 | open_mouth | Open mouth | Open mouth | 開口 | 0 | q | 3852067 | 3852067 |
| 77229 | tied_hair | tied hair | tied hair | 結んだ髪 | 0 | q | 3560060 | 3560060 |
| 32 | large_breasts | large breasts | large breasts | 巨乳 | 0 | q | 3219457 | 3219457 |
| 454834 | blonde_hair | Blonde hair | Blonde hair | 金髪の毛 | 0 | q | 3158774 | 3158774 |
| 371 | nipples | nipples | nipples | 乳首 | 0 | e | 3076796 | 3076796 |
| 236202 | female_only | Female only | Female only | 女性のみ | 0 | s | 2964095 | 2964095 |
| 424 | blue_eyes | blue eyes | blue eyes | 青い目 | 0 | q | 2941055 | 2941054 |
| 3722 | hair_ornament | hair ornament | hair ornament | 髪飾り | 0 | s | 2661704 | 2661704 |
| 143919 | legwear | Legwear | Legwear | 靴下類 | 0 | q | 2645090 | 2645087 |
| 176064 | headwear | headwear | headwear | | 0 | s | 2636271 | 2636271 |
| 414 | brown_hair | brown hair | brown hair | 茶髪 | 0 | q | 2585942 | 2585942 |
| 310 | black_hair | Black hair | Black hair | 黒髪 | 0 | q | 2576526 | 2576526 |
| 654 | bangs | Bangs | Bangs | 前髪 | 0 | s | 2492691 | 2492691 |
| 165 | nude | Nude | Nude | 裸 | 0 | e | 2327371 | 2327371 |
| 140037 | 1boy | 1boy | 1boy | 男一人 | 0 | e | 2241747 | 2241747 |
| 127 | penis | penis | penis | ペニス | 0 | e | 2177521 | 2177521 |
| 1123 | shirt | shirt | shirt | シャツ | 0 | s | 2148794 | 2148794 |
| 278 | navel | navel | navel | おへそ | 0 | q | 2142828 | 2142828 |
These are the top 30 tags (466894 tags in total) of type `artist (1)`:
| id | tag | name | name_en | name_ja | type | rating | count | total_count |
|-------:|:----------------------|:----------------------|:----------------------|:----------------------------------|-------:|:---------|--------:|--------------:|
| 853326 | pochincoff | Pochincoff | Pochincoff | ポチンコフ | 1 | e | 13606 | 13606 |
| 6250 | yaegashi_nan | Yaegashi Nan | Yaegashi Nan | 八重樫南 | 1 | q | 8218 | 8218 |
| 2305 | clamp | Clamp | Clamp | CLAMP | 1 | s | 7914 | 7914 |
| 36476 | sakimichan | Sakimichan | Sakimichan | 咲美ちゃん | 1 | q | 7683 | 7683 |
| 96621 | annindoufu_(oicon) | Annindoufu (Oicon) | Annindoufu (Oicon) | 杏仁豆腐(OI☆CON) | 1 | s | 7666 | 7666 |
| 580836 | kagami_(lilith-soft) | Kagami (Lilith-Soft) | Kagami (Lilith-Soft) | 嘉臥深(カガミ)_(リリスソフト) | 1 | e | 7202 | 7202 |
| 870474 | twistedscarlett60 | Twistedscarlett60 | Twistedscarlett60 | | 1 | e | 7121 | 7121 |
| 31565 | drawfag | Drawfag | Drawfag | | 1 | q | 7047 | 7047 |
| 627253 | barazoku | Barazoku | Barazoku | | 1 | e | 6973 | 6973 |
| 2649 | kantoku | Kantoku | Kantoku | カントク | 1 | s | 6699 | 6699 |
| 556881 | boris_(noborhys) | Boris (Noborhys) | Boris (Noborhys) | | 1 | e | 6675 | 6675 |
| 7355 | palcomix | Palcomix | Palcomix | | 1 | e | 6567 | 6567 |
| 57 | tony | Tony | Tony | Tony | 1 | q | 6470 | 6470 |
| 613034 | neocoill | Neocoill | Neocoill | | 1 | e | 6225 | 6225 |
| 841522 | dandon_fuga | Dandon Fuga | Dandon Fuga | | 1 | e | 6131 | 6131 |
| 122866 | ebifurya | Ebifurya | Ebifurya | えびふりゃ | 1 | s | 6015 | 6015 |
| 115960 | huracan | Huracan | Huracan | | 1 | e | 5906 | 5906 |
| 2482 | carnelian | Carnelian | Carnelian | CARNELIAN | 1 | s | 5787 | 5787 |
| 549062 | haruyama_kazunori | Haruyama Kazunori | Haruyama Kazunori | 春山和則 | 1 | q | 5427 | 5427 |
| 900381 | merunyaa | Merunyaa | Merunyaa | | 1 | q | 5233 | 5233 |
| 986 | noizi_ito | Noizi Ito | Noizi Ito | いとうのいぢ | 1 | s | 5151 | 5151 |
| 101820 | personalami | Personalami | Personalami | | 1 | e | 5041 | 5041 |
| 226611 | nel-zel_formula | Nel-Zel Formula | Nel-Zel Formula | | 1 | e | 4970 | 4970 |
| 913753 | aoi_nagisa_(metalder) | Aoi Nagisa (Metalder) | Aoi Nagisa (Metalder) | 葵渚 | 1 | e | 4876 | 4876 |
| 212158 | lm_(legoman) | Lm (Legoman) | Lm (Legoman) | | 1 | e | 4767 | 4767 |
| 39484 | bokuman | Bokuman | Bokuman | ボクマン | 1 | q | 4685 | 4685 |
| 144406 | slugbox | Slugbox | Slugbox | | 1 | q | 4685 | 4685 |
| 212936 | matsunaga_kouyou | Matsunaga Kouyou | Matsunaga Kouyou | 松永紅葉 | 1 | q | 4558 | 4558 |
| 53603 | tsunako | Tsunako | Tsunako | つなこ | 1 | s | 4549 | 4549 |
| 125740 | sparrow_(artist) | Sparrow (artist) | Sparrow (artist) | | 1 | e | 4527 | 4527 |
These are the top 30 tags (8741 tags in total) of type `studio (2)`:
| id | tag | name | name_en | name_ja | type | rating | count | total_count |
|--------:|:-----------------------|:-----------------------|:-----------------------|:-------------------------------|-------:|:---------|--------:|--------------:|
| 4537 | nintendo | Nintendo | Nintendo | 任天堂 | 2 | q | 699202 | 699202 |
| 214348 | game_freak | Game Freak | Game Freak | ゲームフリーク | 2 | q | 173050 | 173050 |
| 1325604 | creatures_inc. | Creatures Inc. | Creatures Inc. | | 2 | q | 161288 | 161288 |
| 161870 | hasbro | Hasbro | Hasbro | | 2 | e | 134553 | 134553 |
| 7314 | disney | disney | disney | ディズニー | 2 | e | 107845 | 107845 |
| 2258 | capcom | Capcom | Capcom | カプコン | 2 | q | 63695 | 63695 |
| 18059 | dc_comics | Dc Comics | Dc Comics | DCコミック | 2 | e | 59567 | 59567 |
| 4874 | sega | Sega | Sega | セガ | 2 | e | 51380 | 51380 |
| 624520 | blizzard_entertainment | Blizzard Entertainment | Blizzard Entertainment | ブリザードエンターテインメント | 2 | e | 39089 | 39089 |
| 22263 | cartoon_network | Cartoon Network | Cartoon Network | | 2 | e | 38163 | 38163 |
| 86536 | marvel_comics | Marvel Comics | Marvel Comics | マーベル・コミック | 2 | q | 34658 | 34658 |
| 69872 | nickelodeon | Nickelodeon | Nickelodeon | ニッケルオデオン | 2 | e | 32588 | 32588 |
| 81970 | deviantart | Deviantart | Deviantart | デヴィアントアート | 2 | s | 29668 | 29668 |
| 11044 | square_enix | Square Enix | Square Enix | スクウェア・エニックス | 2 | q | 27690 | 27690 |
| 155460 | namco_bandai | Namco Bandai | Namco Bandai | | 2 | e | 22512 | 22512 |
| 569386 | shueisha | Shueisha | Shueisha | 集英社 | 2 | e | 20818 | 20818 |
| 17018 | shounen_jump | Shounen Jump | Shounen Jump | 少年ジャンプ | 2 | e | 20383 | 20383 |
| 2467 | snk | Snk | Snk | SNK | 2 | q | 18769 | 18769 |
| 2638 | konami | Konami | Konami | コナミ | 2 | s | 17118 | 17118 |
| 6495 | lilith-soft | Lilith-soft | Lilith-soft | リリスソフト | 2 | e | 14161 | 14161 |
| 18346 | qp:flapper | Qp:Flapper | Qp:Flapper | QP:flapper | 2 | s | 13164 | 13164 |
| 1655458 | project_moon | Project Moon | Project Moon | | 2 | s | 11611 | 11611 |
| 55177 | idea_factory | Idea Factory | Idea Factory | アイディアファクトリー | 2 | s | 9680 | 9680 |
| 149297 | hanna-barbera | Hanna-barbera | Hanna-barbera | | 2 | e | 8865 | 8865 |
| 4172 | atlus | Atlus | Atlus | アトラス | 2 | e | 8841 | 8841 |
| 458136 | warner_brothers | Warner Brothers | Warner Brothers | | 2 | e | 8798 | 8798 |
| 2518 | nitroplus | Nitroplus | Nitroplus | ニトロプラス | 2 | q | 8416 | 8416 |
| 1652511 | scottgames | Scottgames | Scottgames | | 2 | e | 8320 | 8320 |
| 459605 | naruto_pixxx | Naruto Pixxx | Naruto Pixxx | | 2 | e | 8289 | 8289 |
| 221337 | activision | Activision | Activision | | 2 | e | 8265 | 8265 |
These are the top 30 tags (48621 tags in total) of type `copyright (3)`:
| id | tag | name | name_en | name_ja | type | rating | count | total_count |
|--------:|:------------------------------------|:------------------------------------|:------------------------------------|:-------------------------------------|-------:|:---------|--------:|--------------:|
| 1397 | original | Original | Original | オリジナル | 3 | e | 1672354 | 1672354 |
| 83524 | touhou_project | Touhou Project | Touhou Project | 東方Project | 3 | q | 848956 | 848956 |
| 1384 | pokemon | Pokemon | Pokemon | ポケモン | 3 | q | 671784 | 671784 |
| 175211 | fate_(series) | Fate (Series) | Fate (Series) | フェイト・シリーズ | 3 | q | 495365 | 495365 |
| 458437 | kantai_collection | Kantai Collection | Kantai Collection | 艦隊これくしょん | 3 | q | 492851 | 492851 |
| 3735 | pixiv | Pixiv | Pixiv | ピクシブ | 3 | s | 472601 | 472601 |
| 682608 | fate/grand_order | Fate/grand Order | Fate/grand Order | Fate/Grand Order | 3 | q | 382658 | 382658 |
| 91872 | pokemon_(game) | Pokemon (game) | Pokemon (game) | ポケモン(ゲーム) | 3 | q | 326892 | 326892 |
| 1268449 | genshin_impact | Genshin Impact | Genshin Impact | 原神 | 3 | q | 294313 | 294312 |
| 912 | idolmaster | Idolmaster | Idolmaster | アイドルマスター | 3 | e | 275773 | 275773 |
| 571 | vocaloid | Vocaloid | Vocaloid | ボーカロイド | 3 | s | 271455 | 271455 |
| 1129497 | hololive | Hololive | Hololive | ホロライブ | 3 | s | 232651 | 232651 |
| 1425396 | blue_archive | Blue Archive | Blue Archive | ブルーアーカイブ | 3 | q | 193053 | 193053 |
| 1023439 | azur_lane | Azur Lane | Azur Lane | アズールレーン | 3 | q | 176441 | 176441 |
| 882999 | love_live!_(series) | Love Live! (Series) | Love Live! (Series) | ラブライブ!(シリーズ) | 3 | q | 170692 | 170692 |
| 60150 | my_little_pony | My Little Pony | My Little Pony | マイリトルポニー | 3 | e | 169298 | 169298 |
| 701 | naruto | Naruto | Naruto | ナルト | 3 | e | 163842 | 163842 |
| 1031332 | arknights | Arknights | Arknights | アークナイツ | 3 | s | 156971 | 156971 |
| 172796 | idolmaster_cinderella_girls | Idolmaster Cinderella Girls | Idolmaster Cinderella Girls | アイドルマスター シンデレラガールズ | 3 | s | 151896 | 151896 |
| 7479 | fire_emblem | Fire Emblem | Fire Emblem | ファイアーエムブレム | 3 | q | 146170 | 146170 |
| 2240 | final_fantasy | Final Fantasy | Final Fantasy | ファイナルファンタジー | 3 | e | 141470 | 141470 |
| 151035 | my_little_pony:_friendship_is_magic | My Little Pony: Friendship Is Magic | My Little Pony: Friendship Is Magic | マイリトルポニー〜トモダチは魔法〜 | 3 | e | 137685 | 137685 |
| 4047 | precure | Precure | Precure | プリキュア | 3 | q | 130664 | 130664 |
| 24606 | twitter | Twitter | Twitter | ツイッター | 3 | s | 109812 | 109810 |
| 1752 | one_piece | One Piece | One Piece | ワンピース | 3 | q | 109617 | 109617 |
| 779230 | my_hero_academia | My Hero Academia | My Hero Academia | 僕のヒーローアカデミア | 3 | q | 103009 | 103009 |
| 42841 | yu-gi-oh! | Yu-gi-oh! | Yu-gi-oh! | 遊戯王! | 3 | q | 102995 | 102995 |
| 930765 | uma_musume_-_pretty_derby | Uma Musume - Pretty Derby | Uma Musume - Pretty Derby | ウマ娘 プリティーダービー | 3 | s | 98554 | 98554 |
| 330716 | hetalia:_axis_powers | Hetalia: Axis Powers | Hetalia: Axis Powers | Axis powers ヘタリア | 3 | s | 94211 | 94211 |
| 125008 | mahou_shoujo_madoka_magica | Mahou Shoujo Madoka Magica | Mahou Shoujo Madoka Magica | 魔法少女まどか☆マギカ | 3 | q | 94126 | 94126 |
These are the top 30 tags (258367 tags in total) of type `character (4)`:
| id | tag | name | name_en | name_ja | type | rating | count | total_count |
|--------:|:-------------------------------|:-------------------------------|:-------------------------------|:---------------------------|-------:|:---------|--------:|--------------:|
| 460935 | pokemon_character | Pokemon Character | Pokemon Character | ポケモンキャラクター | 4 | q | 262532 | 262532 |
| 3155 | original_character | Original Character | Original Character | オリジナルキャラクター | 4 | e | 197618 | 197618 |
| 138 | hatsune_miku | Hatsune Miku | Hatsune Miku | 初音ミク | 4 | s | 156356 | 156356 |
| 1094140 | gen_1_pokemon | Gen 1 Pokemon | Gen 1 Pokemon | 第一世代のポケモン | 4 | q | 79689 | 79689 |
| 534 | hakurei_reimu | Hakurei Reimu | Hakurei Reimu | 博麗霊夢 | 4 | s | 70055 | 70055 |
| 1127 | kirisame_marisa | Kirisame Marisa | Kirisame Marisa | 霧雨魔理沙 | 4 | s | 60175 | 60175 |
| 85949 | fan_character | Fan Character | Fan Character | | 4 | e | 57906 | 57906 |
| 857685 | artoria_pendragon | Artoria Pendragon | Artoria Pendragon | アルトリア・ペンドラゴン | 4 | s | 54607 | 54607 |
| 1144 | flandre_scarlet | Flandre Scarlet | Flandre Scarlet | フランドール・スカーレット | 4 | s | 49773 | 49773 |
| 536883 | allied_forces_(hetalia) | Allied Forces (Hetalia) | Allied Forces (Hetalia) | 連合国(ヘタリア) | 4 | s | 49382 | 49382 |
| 1095014 | gen_4_pokemon | Gen 4 Pokemon | Gen 4 Pokemon | | 4 | q | 48694 | 48694 |
| 1143 | remilia_scarlet | Remilia Scarlet | Remilia Scarlet | レミリア・スカーレット | 4 | s | 47056 | 47056 |
| 536881 | axis_power_countries_(hetalia) | Axis Power Countries (Hetalia) | Axis Power Countries (Hetalia) | 枢軸国(ヘタリア) | 4 | s | 45399 | 45399 |
| 780805 | remodel_(kantai_collection) | Remodel (Kantai Collection) | Remodel (Kantai Collection) | 改造後(艦これ) | 4 | s | 45240 | 45240 |
| 1094473 | gen_3_pokemon | Gen 3 Pokemon | Gen 3 Pokemon | | 4 | q | 44347 | 44347 |
| 574 | kagamine_rin | Kagamine Rin | Kagamine Rin | 鏡音リン | 4 | s | 43819 | 43819 |
| 6959 | uzumaki_naruto | Uzumaki Naruto | Uzumaki Naruto | うずまきナルト | 4 | q | 41217 | 41217 |
| 1142 | izayoi_sakuya | Izayoi Sakuya | Izayoi Sakuya | 十六夜咲夜 | 4 | s | 40157 | 40157 |
| 220249 | legendary_pokemon | Legendary Pokemon | Legendary Pokemon | 伝説のポケモン | 4 | q | 39295 | 39295 |
| 1192124 | digimon_(species) | Digimon (Species) | Digimon (Species) | | 4 | e | 36493 | 36493 |
| 1094161 | gen_2_pokemon | Gen 2 Pokemon | Gen 2 Pokemon | | 4 | q | 36092 | 36092 |
| 1243 | kagamine_len | Kagamine Len | Kagamine Len | 鏡音レン | 4 | s | 35982 | 35982 |
| 1128 | konpaku_youmu | Konpaku Youmu | Konpaku Youmu | 魂魄妖夢 | 4 | s | 32943 | 32943 |
| 183 | cirno | Cirno | Cirno | チルノ | 4 | s | 32545 | 32545 |
| 1095028 | gen_5_pokemon | Gen 5 Pokemon | Gen 5 Pokemon | | 4 | q | 32295 | 32295 |
| 125006 | akemi_homura | Akemi Homura | Akemi Homura | 暁美ほむら | 4 | s | 31842 | 31842 |
| 1130 | komeiji_koishi | Komeiji Koishi | Komeiji Koishi | 古明地こいし | 4 | s | 31628 | 31628 |
| 1159 | alice_margatroid | Alice Margatroid | Alice Margatroid | アリス・マーガトロイド | 4 | s | 31500 | 31500 |
| 125007 | kaname_madoka | Kaname Madoka | Kaname Madoka | 鹿目まどか | 4 | s | 31451 | 31451 |
| 799 | patchouli_knowledge | Patchouli Knowledge | Patchouli Knowledge | パチュリー・ノーレッジ | 4 | s | 31315 | 31315 |
These are the top 30 tags (304 tags in total) of type `genre (5)`:
| id | tag | name | name_en | name_ja | type | rating | count | total_count |
|-------:|:-----------------|:-----------------|:-----------------|:---------------------------|-------:|:---------|--------:|--------------:|
| 40895 | anthro | Anthro | Anthro | 人外 | 5 | e | 1524129 | 1524129 |
| 192913 | hetero | Hetero | Hetero | 異性愛 | 5 | e | 1285790 | 1285790 |
| 2 | loli | Loli | Loli | ロリ | 5 | e | 809785 | 809785 |
| 4622 | furry | Furry | Furry | ケモノ | 5 | e | 667189 | 667188 |
| 1583 | yaoi | Yaoi | Yaoi | やおい | 5 | e | 444276 | 444276 |
| 971778 | virtual_youtuber | Virtual youtuber | Virtual youtuber | バーチャルYouTuber | 5 | s | 388766 | 388766 |
| 209 | yuri | Yuri | Yuri | 百合 | 5 | q | 386622 | 386622 |
| 284125 | intersex | Intersex | Intersex | インターセックス | 5 | e | 374253 | 374253 |
| 934 | bdsm | BDSM | BDSM | BDSM | 5 | e | 336149 | 336149 |
| 147749 | interspecies | Interspecies | Interspecies | 異種姦 | 5 | e | 299230 | 299230 |
| 7 | chibi | Chibi | Chibi | ちび | 5 | s | 278718 | 278718 |
| 30953 | age_difference | Age difference | Age difference | 年齢差 | 5 | e | 250599 | 250599 |
| 2891 | group_sex | Group sex | Group sex | グループセックス | 5 | e | 224882 | 224882 |
| 13 | shota | Shota | Shota | ショタ | 5 | e | 213052 | 213052 |
| 4868 | mature | Mature | Mature | 成熟した | 5 | e | 184767 | 184767 |
| 388960 | scalie | Scalie | Scalie | | 5 | e | 179492 | 179492 |
| 459708 | violation | Violation | Violation | | 5 | e | 173413 | 173413 |
| 568 | cosplay | Cosplay | Cosplay | コスプレ | 5 | s | 172355 | 172355 |
| 236 | crossover | Crossover | Crossover | クロスオーバー | 5 | s | 161175 | 161175 |
| 1419 | trap | Trap | Trap | 男の娘 | 5 | q | 160800 | 160800 |
| 7188 | interracial | Interracial | Interracial | 異人種間 | 5 | e | 155412 | 155412 |
| 921 | milf | MILF | MILF | MILF | 5 | e | 154152 | 154152 |
| 2767 | androgynous | Androgynous | Androgynous | 中性的 | 5 | q | 154037 | 154037 |
| 1177 | genderswap | Genderswap | Genderswap | 性転換 | 5 | q | 150004 | 150004 |
| 745 | tentacles | Tentacles | Tentacles | 触手 | 5 | e | 138450 | 138450 |
| 125 | crossdressing | Crossdressing | Crossdressing | 女装・男装 | 5 | q | 137427 | 137427 |
| 1092 | threesome | Threesome | Threesome | 3P | 5 | e | 116348 | 116348 |
| 7295 | bara | Bara | Bara | 薔薇(ジャンル) | 5 | e | 103015 | 103015 |
| 3386 | femdom | Femdom | Femdom | フェムドム | 5 | e | 96819 | 96819 |
| 11841 | zoophilia | Zoophilia | Zoophilia | | 5 | e | 95126 | 95126 |
These are the top 30 tags (2006 tags in total) of type `medium (8)`:
| id | tag | name | name_en | name_ja | type | rating | count | total_count |
|--------:|:------------------------|:------------------------|:------------------------|:-----------------------|-------:|:---------|--------:|--------------:|
| 610607 | very_high_resolution | Very high resolution | Very high resolution | 非常に高い解像度 | 8 | s | 3278554 | 3278554 |
| 7470 | simple_background | Simple background | Simple background | シンプル背景 | 8 | s | 3113846 | 3113846 |
| 116956 | white_background | White background | White background | 白い背景 | 8 | s | 1843645 | 1843645 |
| 379 | censored | Censored | Censored | 修正あり | 8 | e | 1100315 | 1100315 |
| 66315 | large_filesize | Large filesize | Large filesize | 大きいファイルサイズ | 8 | s | 1037075 | 1037075 |
| 610654 | 1:1_aspect_ratio | 1:1 aspect ratio | 1:1 aspect ratio | 1:1アスペクト比 | 8 | s | 967672 | 967671 |
| 126842 | full_body | Full body | Full body | 全身 | 8 | s | 909655 | 909654 |
| 486 | uncensored | Uncensored | Uncensored | 無修正 | 8 | e | 857549 | 857549 |
| 110 | monochrome | Monochrome | Monochrome | 白黒 | 8 | s | 783897 | 783897 |
| 610648 | 4:3_aspect_ratio | 4:3 aspect ratio | 4:3 aspect ratio | 4:3アスペクト比 | 8 | e | 708260 | 708260 |
| 13280 | official_art | Official art | Official art | 公式アート | 8 | s | 660664 | 660664 |
| 175652 | cowboy_shot | Cowboy shot | Cowboy shot | カウボーイ・ショット | 8 | s | 575436 | 575436 |
| 159929 | artist_name | Artist name | Artist name | アーティスト名前 | 8 | s | 572668 | 572668 |
| 610649 | 16:9_aspect_ratio | 16:9 aspect ratio | 16:9 aspect ratio | 16:9アスペクト比 | 8 | s | 554346 | 554346 |
| 3007 | comic | Comic | Comic | コミック | 8 | s | 546482 | 546482 |
| 27996 | photoshop_(medium) | Photoshop (medium) | Photoshop (medium) | | 8 | s | 506622 | 506622 |
| 580618 | useless_tags | Useless tags | Useless tags | 役に立たないタグ | 8 | e | 506342 | 506342 |
| 679 | speech_bubble | Speech bubble | Speech bubble | 吹き出し | 8 | q | 410167 | 410167 |
| 1820 | signature | Signature | Signature | 署名 | 8 | s | 404693 | 404693 |
| 768090 | digital_media_(artwork) | Digital media (artwork) | Digital media (artwork) | | 8 | e | 390631 | 390631 |
| 253759 | cg_art | Cg art | Cg art | CGアート | 8 | s | 362765 | 362765 |
| 1198 | 3d | 3d | 3d | 3DCG | 8 | e | 356004 | 356004 |
| 4480 | greyscale | Greyscale | Greyscale | グレースケール | 8 | q | 353593 | 353593 |
| 2034518 | ai-created | AI-created | AI-created | | 8 | s | 341083 | 341083 |
| 191661 | mosaic_censoring | Mosaic censoring | Mosaic censoring | モザイク修正 | 8 | e | 331188 | 331188 |
| 165358 | grey_background | Grey background | Grey background | グレーバックグラウンド | 8 | s | 309907 | 309907 |
| 6138 | pov | Pov | Pov | 一人称視点 | 8 | s | 292638 | 292637 |
| 935844 | username | Username | Username | | 8 | q | 280402 | 280402 |
| 5666 | sketch | Sketch | Sketch | スケッチ | 8 | q | 269032 | 269032 |
| 289 | wallpaper | Wallpaper | Wallpaper | 壁紙 | 8 | s | 268578 | 268578 |
These are the top 30 tags (1425 tags in total) of type `meta (9)`:
| id | tag | name | name_en | name_ja | type | rating | count | total_count |
|--------:|:----------------------------|:----------------------------|:----------------------------|:-------------------------|-------:|:---------|--------:|--------------:|
| 2230072 | limited_visibility | Limited Visibility | Limited Visibility | | 9 | e | 2241239 | 2241239 |
| 1093096 | contentious_content | Contentious Content | Contentious Content | 不和なコンテンツ | 9 | e | 1028468 | 1028468 |
| 56191 | extreme_content | Extreme Content | Extreme Content | 過激なコンテンツ | 9 | e | 911763 | 911762 |
| 316134 | potential_duplicate | Potential duplicate | Potential duplicate | 重複可能性あり | 9 | s | 469106 | 469106 |
| 2678 | translation_request | Translation request | Translation request | 翻訳希望 | 9 | q | 228836 | 228836 |
| 1112475 | third-party_edit | Third-party edit | Third-party edit | コラ・加工 | 9 | s | 176686 | 176686 |
| 98093 | commission | Commission | Commission | 有料イラリク | 9 | s | 133751 | 133751 |
| 976119 | png-to-jpg_conversion | Png-to-jpg conversion | Png-to-jpg conversion | pngからjpgへの変換 | 9 | s | 113581 | 113581 |
| 7672 | artist_request | Artist request | Artist request | 絵師詳細希望 | 9 | q | 79599 | 79599 |
| 320 | translated | Translated | Translated | 翻訳済み | 9 | q | 73221 | 73221 |
| 958415 | bad_twitter_id | Bad twitter id | Bad twitter id | 不正なツイッターid | 9 | s | 67535 | 67535 |
| 165874 | revision | Revision | Revision | 修正版 | 9 | s | 60351 | 60351 |
| 5803 | duplicate | Duplicate | Duplicate | 重複 | 9 | s | 57024 | 57024 |
| 107370 | md5_mismatch | Md5 mismatch | Md5 mismatch | | 9 | s | 55655 | 55655 |
| 313595 | colorization | Colorization | Colorization | 着色 | 9 | q | 45983 | 45983 |
| 1421384 | skeb_commission | Skeb commission | Skeb commission | | 9 | s | 40724 | 40724 |
| 21283 | character_request | Character request | Character request | キャラクター希望 | 9 | q | 35337 | 35337 |
| 10305 | spoilers | Spoilers | Spoilers | ネタバレ | 9 | s | 31955 | 31955 |
| 566512 | borrowed_character | Borrowed character | Borrowed character | | 9 | q | 29418 | 29418 |
| 1484107 | paid_reward_available | Paid reward available | Paid reward available | | 9 | q | 29201 | 29201 |
| 162467 | pleated | Pleated | Pleated | | 9 | s | 26924 | 26924 |
| 77304 | decensored | Decensored | Decensored | 無修正化 | 9 | e | 24165 | 24165 |
| 56184 | upscaled | Upscaled | Upscaled | | 9 | q | 24027 | 24027 |
| 808206 | alternate_version_available | Alternate version available | Alternate version available | | 9 | e | 21774 | 21774 |
| 7059 | color | Color | Color | カラー | 9 | e | 19760 | 19760 |
| 661 | blue | Blue | Blue | 青い | 9 | s | 18900 | 18900 |
| 359 | eyes | Eyes | Eyes | 目 | 9 | q | 18369 | 18369 |
| 1388906 | resolution_mismatch | Resolution mismatch | Resolution mismatch | | 9 | s | 15311 | 15311 |
| 107058 | copyright_request | Copyright request | Copyright request | 著作権希望 | 9 | q | 15033 | 15033 |
| 4199 | rubber | Rubber | Rubber | | 9 | e | 14554 | 14554 |
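Each table above lists the 30 highest-count tags of one tag type. The same ranking can be reproduced from flat tag records with a short pure-Python sketch; the record layout `(tag, type, count)` here is illustrative, not the dataset's actual schema:

```python
from collections import defaultdict

# Illustrative flat records: (tag, tag_type, count).
records = [
    ("female", 0, 12356484),
    ("clothing", 0, 10628832),
    ("pochincoff", 1, 13606),
    ("nintendo", 2, 699202),
    ("original", 3, 1672354),
]

def top_tags_per_type(records, n=30):
    """Group records by tag type and keep the n highest counts per type."""
    by_type = defaultdict(list)
    for tag, tag_type, count in records:
        by_type[tag_type].append((tag, count))
    return {
        t: sorted(rows, key=lambda r: r[1], reverse=True)[:n]
        for t, rows in sorted(by_type.items())
    }

top = top_tags_per_type(records, n=2)
print(top[0])  # [('female', 12356484), ('clothing', 10628832)]
```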
|
TempoFunk/webvid-10M | TempoFunk | "2023-08-19T09:03:19Z" | 10,515 | 62 | [
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:video-classification",
"task_categories:image-classification",
"language:en",
"license:agpl-3.0",
"size_categories:10M<n<100M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-to-video",
"text-to-image",
"video-classification",
"image-classification"
] | "2023-06-16T19:17:16Z" | ---
license: agpl-3.0
task_categories:
- text-to-video
- text-to-image
- video-classification
- image-classification
language:
- en
size_categories:
- 1M<n<10M
--- |
AmazonScience/massive | AmazonScience | "2022-11-16T15:44:51Z" | 10,466 | 63 | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:af-ZA",
"multilinguality:am-ET",
"multilinguality:ar-SA",
"multilinguality:az-AZ",
"multilinguality:bn-BD",
"multilinguality:ca-ES",
"multilinguality:cy-GB",
"multilinguality:da-DK",
"multilinguality:de-DE",
"multilinguality:el-GR",
"multilinguality:en-US",
"multilinguality:es-ES",
"multilinguality:fa-IR",
"multilinguality:fi-FI",
"multilinguality:fr-FR",
"multilinguality:he-IL",
"multilinguality:hi-IN",
"multilinguality:hu-HU",
"multilinguality:hy-AM",
"multilinguality:id-ID",
"multilinguality:is-IS",
"multilinguality:it-IT",
"multilinguality:ja-JP",
"multilinguality:jv-ID",
"multilinguality:ka-GE",
"multilinguality:km-KH",
"multilinguality:kn-IN",
"multilinguality:ko-KR",
"multilinguality:lv-LV",
"multilinguality:ml-IN",
"multilinguality:mn-MN",
"multilinguality:ms-MY",
"multilinguality:my-MM",
"multilinguality:nb-NO",
"multilinguality:nl-NL",
"multilinguality:pl-PL",
"multilinguality:pt-PT",
"multilinguality:ro-RO",
"multilinguality:ru-RU",
"multilinguality:sl-SL",
"multilinguality:sq-AL",
"multilinguality:sv-SE",
"multilinguality:sw-KE",
"multilinguality:ta-IN",
"multilinguality:te-IN",
"multilinguality:th-TH",
"multilinguality:tl-PH",
"multilinguality:tr-TR",
"multilinguality:ur-PK",
"multilinguality:vi-VN",
"multilinguality:zh-CN",
"multilinguality:zh-TW",
"source_datasets:original",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2204.08582",
"region:us",
"natural-language-understanding"
] | [
"text-classification"
] | "2022-04-27T20:48:46Z" | ---
annotations_creators:
- expert-generated
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- ca-ES
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
paperswithcode_id: massive
pretty_name: MASSIVE
language_bcp47:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- ca-ES
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
tags:
- natural-language-understanding
---
# MASSIVE 1.1: A 1M-Example Multilingual Natural Language Understanding Dataset with 52 Typologically-Diverse Languages
## Table of Contents
- [Dataset Card for MASSIVE](#dataset-card-for-massive)

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/alexa/massive
- **Repository:** https://github.com/alexa/massive
- **Paper:** https://arxiv.org/abs/2204.08582
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1697/overview
- **Point of Contact:** [GitHub](https://github.com/alexa/massive/issues)
### Dataset Summary
MASSIVE 1.1 is a parallel dataset of > 1M utterances across 52 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
| Name | Lang | Utt/Lang | Domains | Intents | Slots |
|:-------------------------------------------------------------------------------:|:-------:|:--------------:|:-------:|:--------:|:------:|
| MASSIVE 1.1 | 52 | 19,521 | 18 | 60 | 55 |
| SLURP (Bastianelli et al., 2020) | 1 | 16,521 | 18 | 60 | 55 |
| NLU Evaluation Data (Liu et al., 2019) | 1 | 25,716 | 18 | 54 | 56 |
| Airline Travel Information System (ATIS) (Price, 1990) | 1 | 5,871 | 1 | 26 | 129 |
| ATIS with Hindi and Turkish (Upadhyay et al., 2018) | 3 | 1,315-5,871 | 1 | 26 | 129 |
| MultiATIS++ (Xu et al., 2020) | 9 | 1,422-5,897 | 1 | 21-26 | 99-140 |
| Snips (Coucke et al., 2018) | 1 | 14,484 | - | 7 | 53 |
| Snips with French (Saade et al., 2019) | 2 | 4,818 | 2 | 14-15 | 11-12 |
| Task Oriented Parsing (TOP) (Gupta et al., 2018) | 1 | 44,873 | 2 | 25 | 36 |
| Multilingual Task-Oriented Semantic Parsing (MTOP) (Li et al., 2021) | 6 | 15,195-22,288 | 11 | 104-113 | 72-75 |
| Cross-Lingual Multilingual Task Oriented Dialog (Schuster et al., 2019) | 3 | 5,083-43,323 | 3 | 12 | 11 |
| Microsoft Dialog Challenge (Li et al., 2018) | 1 | 38,276 | 3 | 11 | 29 |
| Fluent Speech Commands (FSC) (Lugosch et al., 2019) | 1 | 30,043 | - | 31 | - |
| Chinese Audio-Textual Spoken Language Understanding (CATSLU) (Zhu et al., 2019) | 1 | 16,258 | 4 | - | 94 |
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `natural-language-understanding` (NLU):
- `intent-classification`
- `multi-class-classification`
- `natural-language-understanding`
### Languages
The MASSIVE 1.1 corpus consists of parallel sentences from 52 languages:
- `Afrikaans - South Africa (af-ZA)`
- `Amharic - Ethiopia (am-ET)`
- `Arabic - Saudi Arabia (ar-SA)`
- `Azeri - Azerbaijan (az-AZ)`
- `Bengali - Bangladesh (bn-BD)`
- `Catalan - Spain (ca-ES)`
- `Chinese - China (zh-CN)`
- `Chinese - Taiwan (zh-TW)`
- `Danish - Denmark (da-DK)`
- `German - Germany (de-DE)`
- `Greek - Greece (el-GR)`
- `English - United States (en-US)`
- `Spanish - Spain (es-ES)`
- `Farsi - Iran (fa-IR)`
- `Finnish - Finland (fi-FI)`
- `French - France (fr-FR)`
- `Hebrew - Israel (he-IL)`
- `Hungarian - Hungary (hu-HU)`
- `Armenian - Armenia (hy-AM)`
- `Indonesian - Indonesia (id-ID)`
- `Icelandic - Iceland (is-IS)`
- `Italian - Italy (it-IT)`
- `Japanese - Japan (ja-JP)`
- `Javanese - Indonesia (jv-ID)`
- `Georgian - Georgia (ka-GE)`
- `Khmer - Cambodia (km-KH)`
- `Korean - Korea (ko-KR)`
- `Latvian - Latvia (lv-LV)`
- `Mongolian - Mongolia (mn-MN)`
- `Malay - Malaysia (ms-MY)`
- `Burmese - Myanmar (my-MM)`
- `Norwegian - Norway (nb-NO)`
- `Dutch - Netherlands (nl-NL)`
- `Polish - Poland (pl-PL)`
- `Portuguese - Portugal (pt-PT)`
- `Romanian - Romania (ro-RO)`
- `Russian - Russia (ru-RU)`
- `Slovenian - Slovenia (sl-SL)`
- `Albanian - Albania (sq-AL)`
- `Swedish - Sweden (sv-SE)`
- `Swahili - Kenya (sw-KE)`
- `Hindi - India (hi-IN)`
- `Kannada - India (kn-IN)`
- `Malayalam - India (ml-IN)`
- `Tamil - India (ta-IN)`
- `Telugu - India (te-IN)`
- `Thai - Thailand (th-TH)`
- `Tagalog - Philippines (tl-PH)`
- `Turkish - Turkey (tr-TR)`
- `Urdu - Pakistan (ur-PK)`
- `Vietnamese - Vietnam (vi-VN)`
- `Welsh - United Kingdom (cy-GB)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("AmazonScience/massive", "en-US", split='train')
print(dataset[0])
```
## Dataset Structure
### Data Instances
```json
{
"id": "0",
"locale": "fr-FR",
"partition": "test",
"scenario": "alarm",
"intent": "alarm_set",
"utt": "réveille-moi à cinq heures du matin cette semaine",
"annot_utt": "réveille-moi à [time : cinq heures du matin] [date : cette semaine]",
"worker_id": "22",
"slot_method": [
{ "slot": "time", "method": "translation" },
{ "slot": "date", "method": "translation" }
],
"judgments": [
{
"worker_id": "22",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
},
{
"worker_id": "8",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
},
{
"worker_id": "0",
"intent_score": 1,
"slots_score": 1,
"grammar_score": 4,
"spelling_score": 2,
"language_identification": "target"
}
]
}
```
### Data Fields
- `id`: maps to the original ID in the [SLURP](https://github.com/pswietojanski/slurp) collection. Mapping back to the SLURP en-US utterance, this utterance served as the basis for this localization.
- `locale`: is the language and country code according to ISO 639-1 and ISO 3166.
- `partition`: is either `train`, `dev`, or `test`, according to the original split in [SLURP](https://github.com/pswietojanski/slurp).
- `scenario`: is the general domain, aka "scenario" in SLURP terminology, of an utterance.
- `intent`: is the specific intent of an utterance within a domain, formatted as `{scenario}_{intent}`.
- `utt`: the raw utterance text without annotations.
- `annot_utt`: the text from `utt` with slot annotations formatted as `[{label} : {entity}]`.
- `worker_id`: the obfuscated MTurk worker ID of the worker who completed the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales.
- `slot_method`: for each slot in the utterance, whether that slot was a `translation` (i.e., the same expression, just in the target language), a `localization` (i.e., a different expression, chosen as more suitable for that locale), or `unchanged` (i.e., the original en-US slot value was copied over without modification).
- `judgments`: each judgment collected for the localized utterance has 6 keys. `worker_id` is the obfuscated MTurk worker ID of the worker who completed the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks; e.g., judgment worker ID 8 in the example above may appear as the localization worker ID for a different fr-FR utterance, in which case it would be the same worker.
```plain
intent_score : "Does the sentence match the intent?"
0: No
1: Yes
2: It is a reasonable interpretation of the goal
slots_score : "Do all these terms match the categories in square brackets?"
0: No
1: Yes
2: There are no words in square brackets (utterance without a slot)
grammar_score : "Read the sentence out loud. Ignore any spelling, punctuation, or capitalization errors. Does it sound natural?"
0: Completely unnatural (nonsensical, cannot be understood at all)
1: Severe errors (the meaning cannot be understood and doesn't sound natural in your language)
2: Some errors (the meaning can be understood but it doesn't sound natural in your language)
3: Good enough (easily understood and sounds almost natural in your language)
4: Perfect (sounds natural in your language)
spelling_score : "Are all words spelled correctly? Ignore any spelling variances that may be due to differences in dialect. Missing spaces should be marked as a spelling error."
0: There are more than 2 spelling errors
1: There are 1-2 spelling errors
2: All words are spelled correctly
language_identification : "The following sentence contains words in the following languages (check all that apply)"
1: target
2: english
3: other
4: target & english
5: target & other
6: english & other
7: target & english & other
```
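The slot annotation format and judgment scores above can be processed programmatically. The sketch below parses `[{label} : {entity}]` annotations with a regular expression and applies one possible cleanliness heuristic over the judgments; the thresholds are illustrative choices, not an official filtering recipe.

```python
# Parse `[{label} : {entity}]` slot annotations and filter examples by
# judgment scores. `annot` mirrors the fr-FR data instance shown above;
# the filtering thresholds are an illustrative heuristic, not official.
import re

SLOT_RE = re.compile(r"\[\s*([^:\]]+?)\s*:\s*([^\]]+?)\s*\]")

def parse_slots(annot_utt: str) -> list[tuple[str, str]]:
    """Return (label, entity) pairs from an annotated utterance."""
    return SLOT_RE.findall(annot_utt)

def is_clean(example: dict, min_grammar: int = 3) -> bool:
    """Keep examples whose judges all confirmed intent and slots and rated
    the grammar at least `min_grammar` (see the score definitions above)."""
    return all(
        j["intent_score"] == 1
        and j["slots_score"] == 1
        and j["grammar_score"] >= min_grammar
        for j in example["judgments"]
    )

annot = "réveille-moi à [time : cinq heures du matin] [date : cette semaine]"
print(parse_slots(annot))
# e.g. usable as: dataset.filter(is_clean)
```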
### Data Splits
|Language|Train|Dev|Test|
|:---:|:---:|:---:|:---:|
|af-ZA|11514|2033|2974|
|am-ET|11514|2033|2974|
|ar-SA|11514|2033|2974|
|az-AZ|11514|2033|2974|
|bn-BD|11514|2033|2974|
|ca-ES|11514|2033|2974|
|cy-GB|11514|2033|2974|
|da-DK|11514|2033|2974|
|de-DE|11514|2033|2974|
|el-GR|11514|2033|2974|
|en-US|11514|2033|2974|
|es-ES|11514|2033|2974|
|fa-IR|11514|2033|2974|
|fi-FI|11514|2033|2974|
|fr-FR|11514|2033|2974|
|he-IL|11514|2033|2974|
|hi-IN|11514|2033|2974|
|hu-HU|11514|2033|2974|
|hy-AM|11514|2033|2974|
|id-ID|11514|2033|2974|
|is-IS|11514|2033|2974|
|it-IT|11514|2033|2974|
|ja-JP|11514|2033|2974|
|jv-ID|11514|2033|2974|
|ka-GE|11514|2033|2974|
|km-KH|11514|2033|2974|
|kn-IN|11514|2033|2974|
|ko-KR|11514|2033|2974|
|lv-LV|11514|2033|2974|
|ml-IN|11514|2033|2974|
|mn-MN|11514|2033|2974|
|ms-MY|11514|2033|2974|
|my-MM|11514|2033|2974|
|nb-NO|11514|2033|2974|
|nl-NL|11514|2033|2974|
|pl-PL|11514|2033|2974|
|pt-PT|11514|2033|2974|
|ro-RO|11514|2033|2974|
|ru-RU|11514|2033|2974|
|sl-SL|11514|2033|2974|
|sq-AL|11514|2033|2974|
|sv-SE|11514|2033|2974|
|sw-KE|11514|2033|2974|
|ta-IN|11514|2033|2974|
|te-IN|11514|2033|2974|
|th-TH|11514|2033|2974|
|tl-PH|11514|2033|2974|
|tr-TR|11514|2033|2974|
|ur-PK|11514|2033|2974|
|vi-VN|11514|2033|2974|
|zh-CN|11514|2033|2974|
|zh-TW|11514|2033|2974|
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__MASSIVE__: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.
__SLURP__: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.
__Hugging Face Upload and Integration__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
Copyright Amazon.com Inc. or its affiliates.
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public licenses.
Notwithstanding, Creative Commons may elect to apply one of its public
licenses to material it publishes and in those instances will be
considered the “Licensor.” The text of the Creative Commons public
licenses is dedicated to the public domain under the CC0 Public Domain
Dedication. Except for the limited purpose of indicating that material
is shared under a Creative Commons public license or as otherwise
permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the public
licenses.
Creative Commons may be contacted at creativecommons.org.
```
### Citation Information
Please cite the following papers when using this dataset.
```latex
@misc{fitzgerald2022massive,
title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages},
author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan},
year={2022},
eprint={2204.08582},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{bastianelli-etal-2020-slurp,
title = "{SLURP}: A Spoken Language Understanding Resource Package",
author = "Bastianelli, Emanuele and
Vanzo, Andrea and
Swietojanski, Pawel and
Rieser, Verena",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.588",
doi = "10.18653/v1/2020.emnlp-main.588",
pages = "7252--7262",
abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp."
}
```
|
HuggingFaceM4/WebSight | HuggingFaceM4 | "2024-03-26T15:37:29Z" | 10,457 | 336 | [
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2403.09029",
"region:us",
"code",
"synthetic"
] | null | "2024-01-04T10:08:47Z" | ---
language:
- en
license: cc-by-4.0
size_categories:
- 1M<n<10M
pretty_name: WebSight
dataset_info:
- config_name: v0.2
features:
- name: image
dtype: image
- name: text
dtype: string
- name: llm_generated_idea
dtype: string
splits:
- name: train
num_bytes: 368943620718.125
num_examples: 1922671
download_size: 144861710051
dataset_size: 368943620718.125
- config_name: v0.1
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 35386660486.65
num_examples: 822987
download_size: 31394170440
dataset_size: 35386660486.65
configs:
- config_name: v0.2
default: true
data_files:
- split: train
path: v0.2/train-*
- config_name: v0.1
data_files:
- split: train
path: data/train-*
tags:
- code
- synthetic
---
# Dataset Card for WebSight
## Dataset Description
WebSight is a large synthetic dataset containing the HTML/CSS code of synthetically generated English websites, each accompanied by a corresponding screenshot.
This dataset serves as a valuable resource for tasks such as generating UI code from a screenshot.
It comes in two versions:
- v0.1: Websites are coded with HTML + CSS. They do not include real images.
- v0.2: Websites are coded with HTML + Tailwind CSS. They do include real images.
Essentially, here are the improvements in version v0.2, compared to version v0.1:
- Websites include real images (related to the context of the website!!)
- Usage of Tailwind CSS instead of traditional CSS
- Contains 2x more examples
- Contains more tables
- Better resolution for the screenshots
- Presence of a column indicating the LLM-generated idea to create the websites
<details>
<summary>Details for WebSight-v0.1 (HTML + CSS)</summary>
## Data Fields
An example of a sample appears as follows:
```
{
'images': PIL.Image,
'text': '<html>\n<style>\n{css}</style>\n{body}\n</html>',
}
```
where `css` is the CSS code, and `body` is the body of the HTML code.
In other words, the CSS code is embedded directly within the HTML code, facilitating the straightforward training of a model.
## Data Splits
There is only one split, `train`, that contains 822,987 image/code pairs.
## Dataset Creation
This dataset was created using [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) to generate random website ideas with the prompt
```
Generate diverse website layout ideas for different companies, each with a unique design element.
Examples include: a car company site with a left column, a webpage footer with a centered logo.
Explore variations in colors, positions, and company fields.
Don't give any explanations or recognition that you have understood the request,
just give the list of 10 ideas, with a line break between each.
```
which were then passed to [Deepseek-Coder-33b-Instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) with the prompt
```
Create a very SIMPLE and SHORT website with the following elements: {idea}
Be creative with the design, size, position of the elements, columns, etc...
Don't give any explanation, just the content of the HTML code `index.html` starting with `<!DOCTYPE html>`,
followed by the CSS code `styles.css` starting with `/* Global Styles */`.
Write real and short sentences for the paragraphs, don't use Lorem ipsum.
When you want to display an image, don't use <img> in the HTML, always display a colored rectangle instead.
```
Following these steps, the HTML and CSS codes were extracted from the outputs of Deepseek-Coder and formatted into the structure `'<html>\n<style>\n{css}</style>\n{body}\n</html>'`.
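This final formatting step can be sketched as follows. The exact extraction code used by the authors is not published, so `format_sample` is a hypothetical helper that only illustrates the target structure.

```python
# Hypothetical sketch of the final formatting step: take the generated
# `index.html` and `styles.css` text and embed the CSS into the HTML
# structure described above. The authors' actual extraction code is not
# published; this only illustrates the target format.
import re

def format_sample(html: str, css: str) -> str:
    # Keep only the <body>...</body> part of the generated page.
    m = re.search(r"<body.*?>(.*)</body>", html, flags=re.S)
    body = f"<body>{m.group(1)}</body>" if m else html
    return f"<html>\n<style>\n{css}</style>\n{body}\n</html>"

html = "<!DOCTYPE html>\n<html><head></head><body><h1>Acme Cars</h1></body></html>"
css = "/* Global Styles */\nh1 { color: navy; }\n"
print(format_sample(html, css))
```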
</details>
<details>
<summary>Details for WebSight-v0.2 (HTML + Tailwind CSS)</summary>
## Data Fields
An example of a sample appears as follows:
```
{
'images': PIL.Image,
'text': '<html>\n<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/tailwind.min.css" rel="stylesheet">\n{body}\n</html>',
}
```
where `body` is the body of the HTML code, containing directly the Tailwind CSS code and facilitating the straightforward training of a model.
## Data Splits
There is only one split, `train`, that contains TO DO images and codes.
## Dataset Creation
TO DO.
For now, the creation of the dataset is documented in the technical report.
</details>
## Terms of Use
By using the dataset, you agree to comply with the original licenses of the source content as well as the dataset license (CC-BY-4.0). Additionally, if you use this dataset to train a Machine Learning model, you agree to disclose your use of the dataset when releasing the model or an ML application using the model.
### Licensing Information
License CC-BY-4.0.
### Citation Information
If you are using this dataset, please cite our [technical report](https://arxiv.org/abs/2403.09029)
```
@misc{laurençon2024unlocking,
title={Unlocking the conversion of Web Screenshots into HTML Code with the WebSight Dataset},
author={Hugo Laurençon and Léo Tronchon and Victor Sanh},
year={2024},
eprint={2403.09029},
archivePrefix={arXiv},
primaryClass={cs.HC}
}
``` |
AnonymousGM/MultiSetTransformerData | AnonymousGM | "2024-09-02T00:56:24Z" | 10,426 | 0 | [
"license:mit",
"region:us"
] | null | "2024-02-19T22:05:51Z" | ---
license: mit
---
## General Description
MultiSetTransformerData is a large dataset designed to train and validate neural Symbolic Regression models. It was created to solve the Multi-Set Symbolic Skeleton Prediction (MSSP) problem described in the paper **"Univariate Skeleton Prediction in Multivariate Systems Using Transformers"**, but it can also be used to train generic SR models.
This dataset consists of artificially generated **univariate symbolic skeletons**, from which mathematical expressions are sampled; these expressions are in turn used to generate data sets.
In this repository, a dataset **Q1** is presented:
* **Q1**: Consists of mathematical expressions that use up to 5 unary and binary operators (e.g., \\(1 + 1 / (\sin(2x) + 3)\\) uses five operators). It allows up to one nested operator (e.g., \\(\sin( \exp(x))\\) is allowed but \\(\sin( \exp(x^2))\\) is not).
## Dataset Structure
In the **Q1** folder, you will find a training set alongside its corresponding validation set.
Then, each folder consists of a collection of HDF5 files, as shown below:
```
├── Q1
│ ├── training
│ │ ├── 0.h5
│ │ ├── 1.h5
│ │ ├── ...
│ ├── validation
│ │ ├── 0.h5
│ │ ├── 1.h5
│ │ ├── ...
```
Each HDF5 file contains 5000 **blocks** and has the following structure:
```
{ "block_1": {
"X": "Support vector, shape (10000, 10)",
"Y": "Response vector, shape (10000, 10)",
"tokenized": "Symbolic skeleton expression tokenized using vocabulary, list",
"exprs": "Symbolic skeleton expression, str",
"sampled_exprs": "Ten mathematical expressions sampled from a common skeleton"
},
"block_2": {
"X": "Support, shape (10000, 10)",
"Y": "Response, shape (10000, 10)",
"tokenized": "Symbolic skeleton expression tokenized using vocabulary, list",
"exprs": "Symbolic skeleton expression, str",
"sampled_exprs": "Ten mathematical expressions sampled from a common skeleton"
},
...
}
```
More specifically, each block corresponds to one univariate symbolic skeleton (i.e., a function without defined constant values); for example, `c + c/(c*sin(c*x_1) + c)`.
From this skeleton, 10 random functions are sampled; for example:
* `-2.284 + 0.48/(-sin(0.787*x_1) - 1.136)`
* `4.462 - 2.545/(3.157*sin(0.422*x_1) - 1.826)`, ...
Then, for the \\(i\\)-th function (where \\(i \in [0, 1, ..., 9]\\)), we sample a **support vector** `X[:, i]` of 10000 elements whose values are drawn from a uniform distribution \\(\mathcal{U}(-10, 10)\\).
The support vector `X[:, i]` is evaluated on the \\(i\\)-th function to obtain the response vector `Y[:, i]`.
In other words, a block contains input-output data generated from 10 **different functions that share the same symbolic skeleton**.
For instance, the following figure shows 10 sets of data generated from the symbolic skeleton `c + c/(c*sin(c*x_1) + c)`:
<p align="center">
<img src="images/data_example.jpg" alt="alt text" width="600">
</p>
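As a hedged sketch of this sampling procedure, the snippet below draws one support vector from \\(\mathcal{U}(-10, 10)\\) and evaluates it on one of the functions listed above (the constants come from the first sampled expression shown earlier; the exact sampling code used to build the dataset may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# One illustrative function sampled from the skeleton c + c/(c*sin(c*x_1) + c)
def f(x):
    return -2.284 + 0.48 / (-np.sin(0.787 * x) - 1.136)

# Support vector X[:, i]: 10000 values drawn from U(-10, 10)
X_i = rng.uniform(-10, 10, size=10000)

# Response vector Y[:, i]: the function evaluated on the support
Y_i = f(X_i)

print(X_i.shape, Y_i.shape)  # (10000,) (10000,)
```

Repeating this for 10 functions sampled from the same skeleton yields the `(10000, 10)` arrays `X` and `Y` stored in each block.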
## Loading Data
Once the data is downloaded, it can be loaded using Python as follows:
```
import os
import glob
import h5py
def open_h5(path):
block = []
with h5py.File(path, "r") as hf:
# Iterate through the groups in the HDF5 file (group names are integers)
for group_name in hf:
group = hf[group_name]
X = group["X"][:]
Y = group["Y"][:]
# Load 'tokenized' as a list of integers
tokenized = list(group["tokenized"])
# Load 'exprs' as a string
exprs = group["exprs"][()].tobytes().decode("utf-8")
# Load 'sampled_exprs' as a list of sympy expressions
sampled_exprs = [expr_str for expr_str in group["sampled_exprs"][:].astype(str)]
block.append([X, Y, tokenized, exprs, sampled_exprs])
return block
train_path = 'data/Q1/training'
train_files = glob.glob(os.path.join(train_path, '*.h5'))
for tfile in train_files:
# Read block
block = open_h5(tfile)
# Do stuff with your data
```
## Vocabulary and Expression Generation
The table below provides the vocabulary used to construct the expressions of this dataset.
<p align="center">
<img src="images/vocabulary.jpg" alt="alt text" width="500">
</p>
We use a method that builds the expression tree recursively in a preorder fashion, which allows us to enforce certain conditions and constraints effectively.
That is, we forbid certain combinations of operators and set a maximum limit on the nesting depth of unary operators within each other.
For example, we avoid embedding the operator \\(\text{log}\\) within the operator \\(\text{exp}\\), or vice versa, since such a composition could lead to direct simplification (e.g., \\(\text{log}\left( \text{exp} (x) \right) = x\\)).
We can also avoid combinations of operators that would generate extremely large values (e.g., \\(\text{exp}\left( \text{exp} (x) \right)\\) and \\(\text{sinh} \left( \text{sinh} (x) \right)\\)).
The table below shows the forbidden operators we considered for some specific parent operators.
<p align="center">
<img src="images/forbidden_ops.jpg" alt="alt text" width="500">
</p>
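A minimal sketch of such a preorder generator is shown below. The operator set, forbidden pairs, and depth limits here are illustrative assumptions for exposition, not the exact configuration used to build the dataset:

```python
import random

# Illustrative (assumed) configuration -- not the dataset's exact settings
BINARY = ["+", "-", "*", "/"]
UNARY = ["sin", "cos", "exp", "log"]
# parent operator -> unary operators disallowed anywhere beneath it
FORBIDDEN = {"exp": {"log", "exp"}, "log": {"exp", "log"},
             "sin": {"sin"}, "cos": {"cos"}}
MAX_UNARY_DEPTH = 2  # allow one level of unary nesting, e.g. sin(exp(x))

def gen(rng, depth=3, unary_depth=0, parent=None):
    """Build a skeleton expression string recursively in preorder."""
    if depth == 0:
        return rng.choice(["c", "x_1"])
    kind = rng.choice(["binary", "unary", "leaf"])
    if kind == "leaf":
        return rng.choice(["c", "x_1"])
    if kind == "unary" and unary_depth < MAX_UNARY_DEPTH:
        # enforce the forbidden-combination constraint
        allowed = [op for op in UNARY
                   if op not in FORBIDDEN.get(parent, set())]
        op = rng.choice(allowed)
        return f"{op}({gen(rng, depth - 1, unary_depth + 1, parent=op)})"
    op = rng.choice(BINARY)
    left = gen(rng, depth - 1, unary_depth, parent=parent)
    right = gen(rng, depth - 1, unary_depth, parent=parent)
    return f"({left} {op} {right})"

rng = random.Random(0)
print(gen(rng))
```

Note this sketch propagates the `parent` constraint through binary nodes as well, which is slightly stricter than only checking the immediate unary parent.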
## Citation
Use this Bibtex to cite this repository
```
@INPROCEEDINGS{MultiSetSR,
author="Morales, Giorgio
and Sheppard, John W.",
editor="Bifet, Albert
and Daniu{\v{s}}is, Povilas
and Davis, Jesse
and Krilavi{\v{c}}ius, Tomas
and Kull, Meelis
and Ntoutsi, Eirini
and Puolam{\"a}ki, Kai
and {\v{Z}}liobait{\.{e}}, Indr{\.{e}}",
title="Univariate Skeleton Prediction in Multivariate Systems Using Transformers",
booktitle="Machine Learning and Knowledge Discovery in Databases. Research Track and Demo Track",
year="2024",
publisher="Springer Nature Switzerland",
address="Cham",
pages="107--125",
isbn="978-3-031-70371-3"
}
```
|
tasksource/mmlu | tasksource | "2023-03-31T20:44:21Z" | 10,407 | 33 | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"task_ids:closed-domain-qa",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"multi-task",
"multitask",
"mmlu",
"hendrycks_test"
] | [
"text-classification",
"multiple-choice",
"question-answering"
] | "2023-02-01T10:20:16Z" | ---
license: apache-2.0
task_categories:
- text-classification
- multiple-choice
- question-answering
task_ids:
- multiple-choice-qa
- open-domain-qa
- closed-domain-qa
language:
- en
tags:
- multi-task
- multitask
- mmlu
- hendrycks_test
pretty_name: mmlu
---
MMLU (`hendrycks_test` on huggingface) without auxiliary train. It is much lighter (7MB vs 162MB) and faster than the original implementation, in which the auxiliary train split is loaded (and duplicated!) by default for all configs, making it quite heavy.
We use this version in [tasksource](https://huggingface.co/tasksource).
Reference to original dataset:
Measuring Massive Multitask Language Understanding - https://github.com/hendrycks/test
```
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
``` |
avalab/Allo-AVA | avalab | "2024-10-15T18:19:45Z" | 10,336 | 1 | [
"language:en",
"license:cc",
"size_categories:n>1T",
"modality:audio",
"modality:text",
"modality:video",
"region:us",
"code"
] | null | "2024-10-15T12:58:23Z" | ---
license: cc
language:
- en
tags:
- code
size_categories:
- n>1T
--- |
lmms-lab/POPE | lmms-lab | "2024-05-23T03:29:23Z" | 10,260 | 6 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-18T15:13:42Z" | ---
dataset_info:
- config_name: Full
features:
- name: id
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: image_source
dtype: string
- name: image
dtype: image
- name: category
dtype: string
splits:
- name: adversarial
num_bytes: 490408158.0
num_examples: 3000
- name: popular
num_bytes: 490397000.0
num_examples: 3000
- name: random
num_bytes: 490394976.0
num_examples: 3000
download_size: 255022914
dataset_size: 1471200134.0
- config_name: default
features:
- name: id
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: image_source
dtype: string
- name: image
dtype: image
- name: category
dtype: string
splits:
- name: test
num_bytes: 1471200135.0
num_examples: 9000
download_size: 255022914
dataset_size: 1471200135.0
configs:
- config_name: Full
data_files:
- split: adversarial
path: Full/adversarial-*
- split: popular
path: Full/popular-*
- split: random
path: Full/random-*
- config_name: default
data_files:
- split: test
path: data/test-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [POPE](https://github.com/RUCAIBox/POPE). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
```
@article{li2023evaluating,
title={Evaluating object hallucination in large vision-language models},
author={Li, Yifan and Du, Yifan and Zhou, Kun and Wang, Jinpeng and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2305.10355},
year={2023}
}
```
|
MahmoodLab/hest | MahmoodLab | "2024-11-02T10:04:44Z" | 10,233 | 30 | [
"task_categories:image-classification",
"task_categories:feature-extraction",
"task_categories:image-segmentation",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:100B<n<1T",
"arxiv:2406.16192",
"region:us",
"spatial-transcriptomics",
"histology",
"pathology"
] | [
"image-classification",
"feature-extraction",
"image-segmentation"
] | "2024-06-13T13:14:17Z" | ---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- spatial-transcriptomics
- histology
- pathology
task_categories:
- image-classification
- feature-extraction
- image-segmentation
size_categories:
- 100B<n<1T
---
# Model Card for HEST-1k
<img src="fig1a.jpg" alt="Description" style="width: 38%;" align="right"/>
#### What is HEST-1k?
- A collection of <b>1,229</b> spatial transcriptomic profiles, each linked and aligned to a Whole Slide Image (with pixel size > 1.15 µm/px) and metadata.
- HEST-1k was assembled from 131 public and internal cohorts encompassing:
- 26 organs
- 2 species (Homo Sapiens and Mus Musculus)
- 367 cancer samples from 25 cancer types.
HEST-1k processing enabled the identification of <b>1.5 million</b> expression/morphology pairs and <b>76 million</b> nuclei
### Updates
- **21.10.24**: HEST has been accepted to NeurIPS 2024 as a Spotlight! We will be in Vancouver from Dec 10th to 15th. Send us a message if you wanna learn more about HEST ([email protected]).
- **23.09.24**: 121 new samples released, including 27 Xenium and 7 Visium HD! We also make the aligned Xenium transcripts + the aligned DAPI segmented cells/nuclei public.
- **30.08.24**: HEST-Benchmark results updated. Includes H-Optimus-0, Virchow 2, Virchow, and GigaPath. New COAD task based on 4 Xenium samples. HuggingFace bench data have been updated.
- **28.08.24**: New set of helpers for batch effect visualization and correction. Tutorial [here](https://github.com/mahmoodlab/HEST/blob/main/tutorials/5-Batch-effect-visualization.ipynb).
## Instructions for Setting Up HuggingFace Account and Token
### 1. Create an Account on HuggingFace
Follow the instructions provided on the [HuggingFace sign-up page](https://huggingface.co/join).
### 2. Accept terms of use of HEST
1. On this page click request access (access will be automatically granted)
2. At this stage, you can already manually inspect the data by navigating in the `Files and version`
### 3. Create a Hugging Face Token
1. **Go to Settings:** Navigate to your profile settings by clicking on your profile picture in the top right corner and selecting `Settings` from the dropdown menu.
2. **Access Tokens:** In the settings menu, find and click on `Access tokens`.
3. **Create New Token:**
- Click on `New token`.
- Set the token name (e.g., `hest`).
- Set the access level to `Write`.
- Click on `Create`.
4. **Copy Token:** After the token is created, copy it to your clipboard. You will need this token for authentication.
### 4. Logging
Run the following
```
pip install datasets
```
```
from huggingface_hub import login
login(token="YOUR HUGGINGFACE TOKEN")
```
## Download the entire HEST-1k dataset:
```python
import datasets
local_dir='hest_data' # hest will be downloaded to this folder
# Note that the full dataset is around 1TB of data
dataset = datasets.load_dataset(
'MahmoodLab/hest',
cache_dir=local_dir,
patterns='*'
)
```
## Download a subset of HEST-1k:
```python
import datasets
local_dir='hest_data' # hest will be downloaded to this folder
ids_to_query = ['TENX96', 'TENX99'] # list of ids to query
list_patterns = [f"*{id}[_.]**" for id in ids_to_query]
dataset = datasets.load_dataset(
'MahmoodLab/hest',
cache_dir=local_dir,
patterns=list_patterns
)
```
#### Query HEST by organ, techonology, oncotree code...
```python
import datasets
import pandas as pd
local_dir='hest_data' # hest will be downloaded to this folder
meta_df = pd.read_csv("hf://datasets/MahmoodLab/hest/HEST_v1_1_0.csv")
# Filter the dataframe by organ, oncotree code...
meta_df = meta_df[meta_df['oncotree_code'] == 'IDC']
meta_df = meta_df[meta_df['organ'] == 'Breast']
ids_to_query = meta_df['id'].values
list_patterns = [f"*{id}[_.]**" for id in ids_to_query]
dataset = datasets.load_dataset(
'MahmoodLab/hest',
cache_dir=local_dir,
patterns=list_patterns
)
```
## Loading the data with the python library `hest`
Once downloaded, you can then easily iterate through the dataset:
```python
from hest import iter_hest
for st in iter_hest('../hest_data', id_list=['TENX95']):
print(st)
```
Please visit the [github repo](https://github.com/mahmoodlab/hest) and the [documentation](https://hest.readthedocs.io/en/latest/) for more information about the `hest` library API.
## Data organization
For each sample:
- `wsis/`: H&E stained Whole Slide Images in pyramidal Generic TIFF (or pyramidal Generic BigTIFF if >4.1GB)
- `st/`: spatial transcriptomics expressions in a scanpy `.h5ad` object
- `metadata/`: metadata
- `spatial_plots/`: overlay of the WSI with the st spots
- `thumbnails/`: downscaled version of the WSI
- `tissue_seg/`: tissue segmentation masks:
- {id}_mask.jpg: downscaled or full resolution greyscale tissue mask
- {id}_mask.pkl: tissue/holes contours in a pickle file
- {id}_vis.jpg: visualization of the tissue mask on the downscaled WSI
- `pixel_size_vis/`: visualization of the pixel size
- `patches/`: 256x256 H&E patches (0.5µm/px) extracted around ST spots in a .h5 object optimized for deep-learning. Each patch is matched to the corresponding ST profile (see `st/`) with a barcode.
- `patches_vis/`: visualization of the mask and patches on a downscaled WSI.
- `cellvit_seg/`: cellvit nuclei segmentation
For each xenium sample:
- `transcripts/`: individual transcripts aligned to H&E for xenium samples; read with pandas.read_parquet; aligned coordinates in pixel are in columns `['he_x', 'he_y']`
- `xenium_seg/`: xenium segmentation on DAPI and aligned to H&E
### How to cite:
```
@article{jaume2024hest,
author = {Jaume, Guillaume and Doucet, Paul and Song, Andrew H. and Lu, Ming Y. and Almagro-Perez, Cristina and Wagner, Sophia J. and Vaidya, Anurag J. and Chen, Richard J. and Williamson, Drew F. K. and Kim, Ahrong and Mahmood, Faisal},
title = {{HEST-1k: A Dataset for Spatial Transcriptomics and Histology Image Analysis}},
journal = {arXiv},
year = {2024},
month = jun,
eprint = {2406.16192},
url = {https://arxiv.org/abs/2406.16192v1}
}
```
### Contact:
- <b>Guillaume Jaume</b> Harvard Medical School, Boston, Mahmood Lab (`[email protected]`)
- <b>Paul Doucet</b> Harvard Medical School, Boston, Mahmood Lab (`[email protected]`)
<i>The dataset is distributed under the Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0 Deed)</i> |
deepghs/gelbooru-webp-4Mpixel | deepghs | "2024-12-26T01:11:40Z" | 10,232 | 4 | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"task_categories:text-to-image",
"annotations_creators:no-annotation",
"source_datasets:gelbooru",
"language:en",
"license:other",
"size_categories:10M<n<100M",
"region:us",
"art",
"anime",
"not-for-all-audiences"
] | [
"image-classification",
"zero-shot-image-classification",
"text-to-image"
] | "2024-06-19T09:17:42Z" | ---
license: other
task_categories:
- image-classification
- zero-shot-image-classification
- text-to-image
language:
- en
tags:
- art
- anime
- not-for-all-audiences
size_categories:
- 10M<n<100M
annotations_creators:
- no-annotation
source_datasets:
- gelbooru
---
# Gelbooru 4M Re-encoded Dataset
This is the re-encoded version of [deepghs/gelbooru_full](https://huggingface.co/datasets/deepghs/gelbooru_full). All resized images are maintained here.
There are 10083605 images in total. The maximum ID of these images is 11191848. Last updated at `2024-12-26 10:10:42 JST`.
# How to Painlessly Use This
Use [cheesechaser](https://github.com/deepghs/cheesechaser) to quickly get images from this repository.
Before using this code, you must **be granted access to this gated repository**. Then **set your personal HuggingFace token in the `HF_TOKEN` environment variable** to authorize the code for this repository.
```python
from cheesechaser.datapool import GelbooruWebpDataPool
pool = GelbooruWebpDataPool()
pool.batch_download_to_directory(
# download images #7000000-7000100, any ranges or id lists are okay
resource_ids=range(7000000, 7000100),
# save to directory /data/gelbooru_webp
dst_dir='/data/gelbooru_webp',
)
```
|
rayliuca/WikidataLabels | rayliuca | "2024-01-11T04:17:57Z" | 10,211 | 1 | [
"task_categories:translation",
"task_categories:text2text-generation",
"language:en",
"language:fr",
"language:de",
"language:ja",
"language:zh",
"language:hi",
"language:ar",
"language:bn",
"language:ru",
"language:es",
"license:cc0-1.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation",
"text2text-generation"
] | "2024-01-01T00:23:08Z" | ---
license: cc0-1.0
dataset_info:
- config_name: aa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13986211
num_examples: 436895
download_size: 9821312
dataset_size: 13986211
- config_name: ab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5012532
num_examples: 159908
download_size: 3013706
dataset_size: 5012532
- config_name: abs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252728
num_examples: 143986
download_size: 2567450
dataset_size: 4252728
- config_name: ace
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19105673
num_examples: 574712
download_size: 13573374
dataset_size: 19105673
- config_name: ady
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4444259
num_examples: 148627
download_size: 2705754
dataset_size: 4444259
- config_name: ady-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4412556
num_examples: 147884
download_size: 2682170
dataset_size: 4412556
- config_name: aeb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4305734
num_examples: 145198
download_size: 2606368
dataset_size: 4305734
- config_name: aeb-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4467930
num_examples: 148796
download_size: 2722169
dataset_size: 4467930
- config_name: aeb-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12770359
num_examples: 404946
download_size: 8886489
dataset_size: 12770359
- config_name: af
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58561042
num_examples: 1643153
download_size: 42539052
dataset_size: 58561042
- config_name: agq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1317
num_examples: 33
download_size: 2906
dataset_size: 1317
- config_name: ak
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14198715
num_examples: 443037
download_size: 9991525
dataset_size: 14198715
- config_name: aln
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13811116
num_examples: 432089
download_size: 9673418
dataset_size: 13811116
- config_name: als
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20691
num_examples: 543
download_size: 17540
dataset_size: 20691
- config_name: alt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 108390
num_examples: 1814
download_size: 59046
dataset_size: 108390
- config_name: am
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5231176
num_examples: 163038
download_size: 3187164
dataset_size: 5231176
- config_name: ami
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21519
num_examples: 686
download_size: 16640
dataset_size: 21519
- config_name: an
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 240345072
num_examples: 5921087
download_size: 164895205
dataset_size: 240345072
- config_name: ang
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14275715
num_examples: 443461
download_size: 10063758
dataset_size: 14275715
- config_name: anp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8558258
num_examples: 241612
download_size: 4381360
dataset_size: 8558258
- config_name: ar
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 291173732
num_examples: 5724064
download_size: 159369497
dataset_size: 291173732
- config_name: arc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4473283
num_examples: 150006
download_size: 2722619
dataset_size: 4473283
- config_name: arn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13879729
num_examples: 433912
download_size: 9715431
dataset_size: 13879729
- config_name: arq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4346991
num_examples: 146004
download_size: 2636972
dataset_size: 4346991
- config_name: ary
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5358568
num_examples: 171568
download_size: 3313402
dataset_size: 5358568
- config_name: arz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 81806333
num_examples: 1669699
download_size: 49423508
dataset_size: 81806333
- config_name: as
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21658610
num_examples: 450074
download_size: 9641626
dataset_size: 21658610
- config_name: ase
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252943
num_examples: 143986
download_size: 2568106
dataset_size: 4252943
- config_name: ast
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1385628786
num_examples: 20696237
download_size: 955908362
dataset_size: 1385628786
- config_name: atj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12996229
num_examples: 411639
download_size: 9057557
dataset_size: 12996229
- config_name: av
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4722934
num_examples: 153781
download_size: 2880103
dataset_size: 4722934
- config_name: avk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13194485
num_examples: 414598
download_size: 9200917
dataset_size: 13194485
- config_name: awa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8599312
num_examples: 242320
download_size: 4411751
dataset_size: 8599312
- config_name: ay
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14269432
num_examples: 443521
download_size: 10029939
dataset_size: 14269432
- config_name: az
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21049248
num_examples: 516732
download_size: 14117527
dataset_size: 21049248
- config_name: azb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30781587
num_examples: 607562
download_size: 16028687
dataset_size: 30781587
- config_name: ba
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11525351
num_examples: 261509
download_size: 6733777
dataset_size: 11525351
- config_name: ban
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13674052
num_examples: 426706
download_size: 9513747
dataset_size: 13674052
- config_name: ban-bali
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 50961
num_examples: 748
download_size: 25817
dataset_size: 50961
- config_name: bar
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 54783034
num_examples: 1566120
download_size: 40389830
dataset_size: 54783034
- config_name: bbc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12820895
num_examples: 406960
download_size: 8917054
dataset_size: 12820895
- config_name: bcc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8017228
num_examples: 241977
download_size: 4344579
dataset_size: 8017228
- config_name: be
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30978832
num_examples: 564184
download_size: 17461174
dataset_size: 30978832
- config_name: be-tarask
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18931909
num_examples: 374396
download_size: 10871239
dataset_size: 18931909
- config_name: bg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 200628708
num_examples: 4383953
download_size: 137745533
dataset_size: 200628708
- config_name: bgn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7999280
num_examples: 241566
download_size: 4331249
dataset_size: 7999280
- config_name: bi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14040026
num_examples: 438382
download_size: 9867032
dataset_size: 14040026
- config_name: bjn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8375348
num_examples: 254558
download_size: 5722334
dataset_size: 8375348
- config_name: bm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18145787
num_examples: 549694
download_size: 13129193
dataset_size: 18145787
- config_name: bn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 815803977
num_examples: 9767284
download_size: 261147329
dataset_size: 815803977
- config_name: bo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11671330
num_examples: 278307
download_size: 5669602
dataset_size: 11671330
- config_name: bpy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15497749
num_examples: 347458
download_size: 6991190
dataset_size: 15497749
- config_name: bqi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8017455
num_examples: 241984
download_size: 4345123
dataset_size: 8017455
- config_name: br
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58304963
num_examples: 1653800
download_size: 42722031
dataset_size: 58304963
- config_name: brh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5328437
num_examples: 171504
download_size: 3376189
dataset_size: 5328437
- config_name: bs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30441466
num_examples: 858190
download_size: 21606575
dataset_size: 30441466
- config_name: btm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252525
num_examples: 143980
download_size: 2567218
dataset_size: 4252525
- config_name: bto
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12841721
num_examples: 407470
download_size: 8934218
dataset_size: 12841721
- config_name: bug
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7595464
num_examples: 235268
download_size: 5129941
dataset_size: 7595464
- config_name: bxr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4713699
num_examples: 153707
download_size: 2869313
dataset_size: 4713699
- config_name: ca
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 408509932
num_examples: 9936886
download_size: 288474980
dataset_size: 408509932
- config_name: cbk-zam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14108232
num_examples: 440345
download_size: 9920793
dataset_size: 14108232
- config_name: cdo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6503254
num_examples: 201362
download_size: 4137841
dataset_size: 6503254
- config_name: ce
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 28093148
num_examples: 607767
download_size: 16367596
dataset_size: 28093148
- config_name: ceb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 332947091
num_examples: 7769402
download_size: 219525737
dataset_size: 332947091
- config_name: ch
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13983906
num_examples: 436785
download_size: 9817385
dataset_size: 13983906
- config_name: cho
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13950786
num_examples: 435869
download_size: 9791296
dataset_size: 13950786
- config_name: chr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5386793
num_examples: 172855
download_size: 3419676
dataset_size: 5386793
- config_name: chy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13994916
num_examples: 437007
download_size: 9830465
dataset_size: 13994916
- config_name: ckb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23343034
num_examples: 511183
download_size: 11459344
dataset_size: 23343034
- config_name: co
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47080480
num_examples: 1346929
download_size: 34551346
dataset_size: 47080480
- config_name: cps
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12849864
num_examples: 407695
download_size: 8941921
dataset_size: 12849864
- config_name: cr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5516556
num_examples: 176667
download_size: 3532952
dataset_size: 5516556
- config_name: crh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10864382
num_examples: 336709
download_size: 7542853
dataset_size: 10864382
- config_name: crh-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4419064
num_examples: 148046
download_size: 2688683
dataset_size: 4419064
- config_name: crh-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14201429
num_examples: 442905
download_size: 9986290
dataset_size: 14201429
- config_name: cs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 140189244
num_examples: 3384048
download_size: 97516751
dataset_size: 140189244
- config_name: csb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20177120
num_examples: 619275
download_size: 14528772
dataset_size: 20177120
- config_name: cv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8047221
num_examples: 215611
download_size: 4857718
dataset_size: 8047221
- config_name: cy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 89241808
num_examples: 2244550
download_size: 62686006
dataset_size: 89241808
- config_name: da
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 130931077
num_examples: 3448894
download_size: 98202417
dataset_size: 130931077
- config_name: dag
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2664957
num_examples: 78534
download_size: 2052615
dataset_size: 2664957
- config_name: de
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 765398522
num_examples: 17531361
download_size: 527642124
dataset_size: 765398522
- config_name: de-at
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53043722
num_examples: 1515373
download_size: 38761571
dataset_size: 53043722
- config_name: de-ch
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53480908
num_examples: 1528137
download_size: 39349412
dataset_size: 53480908
- config_name: de-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4256391
num_examples: 144061
download_size: 2571862
dataset_size: 4256391
- config_name: din
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12819746
num_examples: 406591
download_size: 8922303
dataset_size: 12819746
- config_name: diq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7570161
num_examples: 232674
download_size: 5057742
dataset_size: 7570161
- config_name: dsb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16135830
num_examples: 491423
download_size: 11412316
dataset_size: 16135830
- config_name: dtp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13867373
num_examples: 433733
download_size: 9720699
dataset_size: 13867373
- config_name: dty
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8839082
num_examples: 246026
download_size: 4551845
dataset_size: 8839082
- config_name: dua
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2631
num_examples: 87
download_size: 3877
dataset_size: 2631
- config_name: dv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 81396462
num_examples: 2103276
download_size: 45332104
dataset_size: 81396462
- config_name: dz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8590239
num_examples: 242196
download_size: 4406353
dataset_size: 8590239
- config_name: ee
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14377017
num_examples: 447208
download_size: 10136064
dataset_size: 14377017
- config_name: egl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13068224
num_examples: 413551
download_size: 9121776
dataset_size: 13068224
- config_name: el
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32978562
num_examples: 592016
download_size: 19577876
dataset_size: 32978562
- config_name: eml
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14768563
num_examples: 458847
download_size: 10453636
dataset_size: 14768563
- config_name: en
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6327454281
num_examples: 81801560
download_size: 4224231068
dataset_size: 6327454281
- config_name: en-ca
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 73305274
num_examples: 1909970
download_size: 53060194
dataset_size: 73305274
- config_name: en-gb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 115978412
num_examples: 2520405
download_size: 78924421
dataset_size: 115978412
- config_name: en-us
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14815
num_examples: 332
download_size: 9953
dataset_size: 14815
- config_name: eo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 256196064
num_examples: 6285304
download_size: 177219679
dataset_size: 256196064
- config_name: es
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 730214298
num_examples: 17233968
download_size: 514588069
dataset_size: 730214298
- config_name: es-419
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4355180
num_examples: 146476
download_size: 2659218
dataset_size: 4355180
- config_name: es-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4280933
num_examples: 144717
download_size: 2592085
dataset_size: 4280933
- config_name: et
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 65123623
num_examples: 1820762
download_size: 48197302
dataset_size: 65123623
- config_name: eu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 290282374
num_examples: 7109758
download_size: 197889378
dataset_size: 290282374
- config_name: ext
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 223257222
num_examples: 5359047
download_size: 147078789
dataset_size: 223257222
- config_name: fa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 123727757
num_examples: 2142642
download_size: 65952114
dataset_size: 123727757
- config_name: ff
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14116652
num_examples: 440614
download_size: 9920388
dataset_size: 14116652
- config_name: fi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 286539944
num_examples: 6905698
download_size: 209916638
dataset_size: 286539944
- config_name: fit
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20217258
num_examples: 620391
download_size: 14566702
dataset_size: 20217258
- config_name: fj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14159041
num_examples: 441745
download_size: 9956108
dataset_size: 14159041
- config_name: fkv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4328482
num_examples: 145988
download_size: 2619845
dataset_size: 4328482
- config_name: fo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24474476
num_examples: 731732
download_size: 17876981
dataset_size: 24474476
- config_name: fr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 774128723
num_examples: 17908351
download_size: 534489308
dataset_size: 774128723
- config_name: frc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17896106
num_examples: 547258
download_size: 12953740
dataset_size: 17896106
- config_name: frp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40902510
num_examples: 1191134
download_size: 29778105
dataset_size: 40902510
- config_name: frr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16979214
num_examples: 515350
download_size: 12069637
dataset_size: 16979214
- config_name: fur
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42077410
num_examples: 1221071
download_size: 30714082
dataset_size: 42077410
- config_name: ga
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 471527543
num_examples: 11524282
download_size: 320967189
dataset_size: 471527543
- config_name: gag
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14149375
num_examples: 440732
download_size: 9940551
dataset_size: 14149375
- config_name: gan
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31572161
num_examples: 905186
download_size: 18909564
dataset_size: 31572161
- config_name: gan-hans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31004794
num_examples: 889875
download_size: 18566811
dataset_size: 31004794
- config_name: gan-hant
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4374444
num_examples: 147098
download_size: 2657182
dataset_size: 4374444
- config_name: gcr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4311409
num_examples: 145829
download_size: 2618211
dataset_size: 4311409
- config_name: gd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 49316935
num_examples: 1429457
download_size: 36220978
dataset_size: 49316935
- config_name: gl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 289484839
num_examples: 7052226
download_size: 197315151
dataset_size: 289484839
- config_name: glk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8327018
num_examples: 249115
download_size: 4538325
dataset_size: 8327018
- config_name: gn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14212974
num_examples: 442765
download_size: 10004863
dataset_size: 14212974
- config_name: gom
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4584575
num_examples: 150273
download_size: 2780570
dataset_size: 4584575
- config_name: gom-deva
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8585678
num_examples: 242131
download_size: 4400578
dataset_size: 8585678
- config_name: gom-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12783006
num_examples: 405302
download_size: 8897342
dataset_size: 12783006
- config_name: gor
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14667616
num_examples: 454512
download_size: 10319196
dataset_size: 14667616
- config_name: got
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5432139
num_examples: 172951
download_size: 3435531
dataset_size: 5432139
- config_name: grc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4494817
num_examples: 149631
download_size: 2746170
dataset_size: 4494817
- config_name: gu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23788894
num_examples: 486140
download_size: 10779200
dataset_size: 23788894
- config_name: guc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1419
num_examples: 38
download_size: 3054
dataset_size: 1419
- config_name: guw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 118
num_examples: 4
download_size: 1864
dataset_size: 118
- config_name: gv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20683485
num_examples: 631005
download_size: 14894590
dataset_size: 20683485
- config_name: ha
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14716168
num_examples: 455836
download_size: 10421790
dataset_size: 14716168
- config_name: hak
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6128644
num_examples: 193036
download_size: 3991729
dataset_size: 6128644
- config_name: haw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14158084
num_examples: 441511
download_size: 9952975
dataset_size: 14158084
- config_name: he
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43629050
num_examples: 884809
download_size: 27221301
dataset_size: 43629050
- config_name: hi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 37237187
num_examples: 668964
download_size: 17804873
dataset_size: 37237187
- config_name: hif
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14457954
num_examples: 449009
download_size: 10166264
dataset_size: 14457954
- config_name: hif-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14519845
num_examples: 454037
download_size: 10240704
dataset_size: 14519845
- config_name: hil
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12928914
num_examples: 409962
download_size: 9009705
dataset_size: 12928914
- config_name: ho
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13950504
num_examples: 435857
download_size: 9790849
dataset_size: 13950504
- config_name: hr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 61272623
num_examples: 1720527
download_size: 45307411
dataset_size: 61272623
- config_name: hrx
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12869295
num_examples: 407823
download_size: 8964114
dataset_size: 12869295
- config_name: hsb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23720349
num_examples: 707100
download_size: 17145693
dataset_size: 23720349
- config_name: ht
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16835529
num_examples: 509955
download_size: 11880404
dataset_size: 16835529
- config_name: hu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 85054175
num_examples: 2200589
download_size: 64143342
dataset_size: 85054175
- config_name: hu-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252810
num_examples: 143986
download_size: 2567582
dataset_size: 4252810
- config_name: hy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 39339286
num_examples: 773925
download_size: 22108994
dataset_size: 39339286
- config_name: hyw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5443608
num_examples: 166902
download_size: 3238370
dataset_size: 5443608
- config_name: hz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13948574
num_examples: 435804
download_size: 9788697
dataset_size: 13948574
- config_name: ia
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 229143237
num_examples: 5616433
download_size: 155877454
dataset_size: 229143237
- config_name: id
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 95220928
num_examples: 2512331
download_size: 69525046
dataset_size: 95220928
- config_name: ie
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 225725262
num_examples: 5533032
download_size: 153371930
dataset_size: 225725262
- config_name: ig
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20109388
num_examples: 617044
download_size: 14475407
dataset_size: 20109388
- config_name: ii
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4310418
num_examples: 145332
download_size: 2609723
dataset_size: 4310418
- config_name: ik
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13989609
num_examples: 436958
download_size: 9823174
dataset_size: 13989609
- config_name: ike-cans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4352278
num_examples: 146355
download_size: 2645174
dataset_size: 4352278
- config_name: ike-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13851135
num_examples: 432932
download_size: 9714057
dataset_size: 13851135
- config_name: ilo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15955483
num_examples: 480555
download_size: 11141942
dataset_size: 15955483
- config_name: inh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4634360
num_examples: 152226
download_size: 2831580
dataset_size: 4634360
- config_name: io
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 233656822
num_examples: 5757440
download_size: 159720058
dataset_size: 233656822
- config_name: is
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 51679396
num_examples: 1483610
download_size: 37965494
dataset_size: 51679396
- config_name: it
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 536601426
num_examples: 12631487
download_size: 375025347
dataset_size: 536601426
- config_name: iu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5360588
num_examples: 172215
download_size: 3402239
dataset_size: 5360588
- config_name: ja
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 140641579
num_examples: 2917962
download_size: 92145329
dataset_size: 140641579
- config_name: jam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18849751
num_examples: 571777
download_size: 13684422
dataset_size: 18849751
- config_name: jbo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14301985
num_examples: 446512
download_size: 9994516
dataset_size: 14301985
- config_name: jv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27232302
num_examples: 794181
download_size: 19651565
dataset_size: 27232302
- config_name: ka
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24073345
num_examples: 399546
download_size: 11679979
dataset_size: 24073345
- config_name: kaa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14082184
num_examples: 439411
download_size: 9902820
dataset_size: 14082184
- config_name: kab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18459676
num_examples: 557857
download_size: 13384218
dataset_size: 18459676
- config_name: kbd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4594409
num_examples: 149733
download_size: 2759503
dataset_size: 4594409
- config_name: kbd-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4417661
num_examples: 148017
download_size: 2687531
dataset_size: 4417661
- config_name: kbp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12873178
num_examples: 408039
download_size: 8965474
dataset_size: 12873178
- config_name: kea
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12793700
num_examples: 405901
download_size: 8896866
dataset_size: 12793700
- config_name: kg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40949149
num_examples: 1193499
download_size: 29766747
dataset_size: 40949149
- config_name: khw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4308653
num_examples: 145279
download_size: 2608581
dataset_size: 4308653
- config_name: ki
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14056900
num_examples: 439015
download_size: 9875534
dataset_size: 14056900
- config_name: kj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13881723
num_examples: 433861
download_size: 9733715
dataset_size: 13881723
- config_name: kjp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8504302
num_examples: 240339
download_size: 4341523
dataset_size: 8504302
- config_name: kk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19216115
num_examples: 428880
download_size: 11577682
dataset_size: 19216115
- config_name: kk-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7241749
num_examples: 211731
download_size: 4487032
dataset_size: 7241749
- config_name: kk-kz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4937945
num_examples: 160027
download_size: 3062906
dataset_size: 4937945
- config_name: kk-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22197825
num_examples: 677162
download_size: 16072332
dataset_size: 22197825
- config_name: kk-tr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20060635
num_examples: 616521
download_size: 14438929
dataset_size: 20060635
- config_name: ko
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 60335212
num_examples: 1364440
download_size: 39186630
dataset_size: 60335212
- config_name: ko-kp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4338717
num_examples: 146150
download_size: 2630925
dataset_size: 4338717
- config_name: koi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4737590
num_examples: 155082
download_size: 2894674
dataset_size: 4737590
- config_name: kr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13886057
num_examples: 433990
download_size: 9737602
dataset_size: 13886057
- config_name: krc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4646136
num_examples: 151026
download_size: 2785454
dataset_size: 4646136
- config_name: kri
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12798530
num_examples: 406032
download_size: 8902330
dataset_size: 12798530
- config_name: krj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13850324
num_examples: 433444
download_size: 9703460
dataset_size: 13850324
- config_name: krl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12788020
num_examples: 405729
download_size: 8893337
dataset_size: 12788020
- config_name: ks
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4390604
num_examples: 147033
download_size: 2671069
dataset_size: 4390604
- config_name: ks-deva
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8567518
num_examples: 241832
download_size: 4387687
dataset_size: 8567518
- config_name: ksh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20394712
num_examples: 624523
download_size: 14698860
dataset_size: 20394712
- config_name: ku
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8037777
num_examples: 239515
download_size: 5306097
dataset_size: 8037777
- config_name: ku-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4577826
num_examples: 151290
download_size: 2796159
dataset_size: 4577826
- config_name: ku-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14683841
num_examples: 458802
download_size: 10371977
dataset_size: 14683841
- config_name: kum
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252739
num_examples: 143985
download_size: 2567503
dataset_size: 4252739
- config_name: kv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4946978
num_examples: 158888
download_size: 2997865
dataset_size: 4946978
- config_name: kw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20245535
num_examples: 621432
download_size: 14581378
dataset_size: 20245535
- config_name: ky
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8909613
num_examples: 235165
download_size: 5462115
dataset_size: 8909613
- config_name: la
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 299766395
num_examples: 7085082
download_size: 201477460
dataset_size: 299766395
- config_name: lad
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20336417
num_examples: 622775
download_size: 14653199
dataset_size: 20336417
- config_name: lb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 56473066
num_examples: 1601093
download_size: 41410732
dataset_size: 56473066
- config_name: lbe
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4501470
num_examples: 149898
download_size: 2744786
dataset_size: 4501470
- config_name: lez
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4890798
num_examples: 155936
download_size: 2959653
dataset_size: 4890798
- config_name: lfn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14709210
num_examples: 456719
download_size: 10408539
dataset_size: 14709210
- config_name: lg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13979286
num_examples: 436009
download_size: 9802779
dataset_size: 13979286
- config_name: li
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43476868
num_examples: 1253970
download_size: 31750932
dataset_size: 43476868
- config_name: lij
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42327066
num_examples: 1227346
download_size: 30898971
dataset_size: 42327066
- config_name: liv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12781331
num_examples: 405236
download_size: 8895889
dataset_size: 12781331
- config_name: lki
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8039166
num_examples: 242526
download_size: 4363703
dataset_size: 8039166
- config_name: lld
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 90305
num_examples: 2634
download_size: 69672
dataset_size: 90305
- config_name: lmo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18287638
num_examples: 545398
download_size: 13130119
dataset_size: 18287638
- config_name: ln
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14123637
num_examples: 439731
download_size: 9915851
dataset_size: 14123637
- config_name: lo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9905189
num_examples: 271710
download_size: 5313218
dataset_size: 9905189
- config_name: loz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13695602
num_examples: 428723
download_size: 9581113
dataset_size: 13695602
- config_name: lt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 39902419
num_examples: 1096727
download_size: 29185765
dataset_size: 39902419
- config_name: ltg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13884707
num_examples: 433453
download_size: 9736637
dataset_size: 13884707
- config_name: lus
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13695197
num_examples: 428712
download_size: 9580538
dataset_size: 13695197
- config_name: luz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8459036
num_examples: 253454
download_size: 4687414
dataset_size: 8459036
- config_name: lv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27242119
num_examples: 764753
download_size: 19676667
dataset_size: 27242119
- config_name: lzh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 25067538
num_examples: 685152
download_size: 14998856
dataset_size: 25067538
- config_name: mdf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4634268
num_examples: 152141
download_size: 2820744
dataset_size: 4634268
- config_name: mg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43863002
num_examples: 1271074
download_size: 32016826
dataset_size: 43863002
- config_name: mh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13775721
num_examples: 431162
download_size: 9644397
dataset_size: 13775721
- config_name: mi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20857040
num_examples: 637118
download_size: 15060301
dataset_size: 20857040
- config_name: min
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53044258
num_examples: 1464128
download_size: 38587450
dataset_size: 53044258
- config_name: mk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24087229
num_examples: 449241
download_size: 12217912
dataset_size: 24087229
- config_name: ml
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 189266798
num_examples: 2664923
download_size: 71344031
dataset_size: 189266798
- config_name: mn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9311543
num_examples: 219695
download_size: 5272784
dataset_size: 9311543
- config_name: mni
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8696893
num_examples: 243616
download_size: 4470994
dataset_size: 8696893
- config_name: mnw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8861861
num_examples: 244906
download_size: 4517726
dataset_size: 8861861
- config_name: mo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5377009
num_examples: 172144
download_size: 3405661
dataset_size: 5377009
- config_name: mr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 26855182
num_examples: 526220
download_size: 12358679
dataset_size: 26855182
- config_name: mrh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 68
num_examples: 2
download_size: 1820
dataset_size: 68
- config_name: mrj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5007903
num_examples: 160889
download_size: 3073431
dataset_size: 5007903
- config_name: ms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 64674328
num_examples: 1803714
download_size: 47165217
dataset_size: 64674328
- config_name: ms-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 136496
num_examples: 2961
download_size: 92316
dataset_size: 136496
- config_name: mt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22632686
num_examples: 682867
download_size: 16352572
dataset_size: 22632686
- config_name: mus
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14013416
num_examples: 437688
download_size: 9835239
dataset_size: 14013416
- config_name: mwl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14493299
num_examples: 448926
download_size: 10225888
dataset_size: 14493299
- config_name: my
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16182182
num_examples: 345096
download_size: 7981905
dataset_size: 16182182
- config_name: mzn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17973941
num_examples: 447870
download_size: 9174617
dataset_size: 17973941
- config_name: na
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13992666
num_examples: 436956
download_size: 9823328
dataset_size: 13992666
- config_name: nah
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14490294
num_examples: 449748
download_size: 10192501
dataset_size: 14490294
- config_name: nan-hani
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 191
num_examples: 6
download_size: 1925
dataset_size: 191
- config_name: nap
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42362346
num_examples: 1229161
download_size: 30918265
dataset_size: 42362346
- config_name: nb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 142554768
num_examples: 3688026
download_size: 105549981
dataset_size: 142554768
- config_name: nds
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58766114
num_examples: 1666813
download_size: 43421948
dataset_size: 58766114
- config_name: nds-nl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 44121756
num_examples: 1273149
download_size: 32201410
dataset_size: 44121756
- config_name: ne
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11925386
num_examples: 295006
download_size: 6265232
dataset_size: 11925386
- config_name: new
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16906308
num_examples: 350362
download_size: 7680329
dataset_size: 16906308
- config_name: ng
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13870754
num_examples: 433582
download_size: 9723795
dataset_size: 13870754
- config_name: nia
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20649
num_examples: 515
download_size: 16535
dataset_size: 20649
- config_name: niu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12794247
num_examples: 405902
download_size: 8897260
dataset_size: 12794247
- config_name: nl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5016576732
num_examples: 61931959
download_size: 3380404239
dataset_size: 5016576732
- config_name: nn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 99997815
num_examples: 2708994
download_size: 74736304
dataset_size: 99997815
- config_name: 'no'
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2934
num_examples: 64
download_size: 4108
dataset_size: 2934
- config_name: nod
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4322068
num_examples: 145566
download_size: 2618106
dataset_size: 4322068
- config_name: nov
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14150434
num_examples: 440903
download_size: 9947798
dataset_size: 14150434
- config_name: nqo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8094271
num_examples: 243184
download_size: 4398836
dataset_size: 8094271
- config_name: nrm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 41330956
num_examples: 1203295
download_size: 30084065
dataset_size: 41330956
- config_name: nso
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14178321
num_examples: 443205
download_size: 9959708
dataset_size: 14178321
- config_name: nv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15351770
num_examples: 455188
download_size: 10472240
dataset_size: 15351770
- config_name: ny
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13989813
num_examples: 436764
download_size: 9821588
dataset_size: 13989813
- config_name: nys
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13092059
num_examples: 413241
download_size: 9153100
dataset_size: 13092059
- config_name: oc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 266612548
num_examples: 6569770
download_size: 180156462
dataset_size: 266612548
- config_name: olo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13200388
num_examples: 416935
download_size: 9214968
dataset_size: 13200388
- config_name: om
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5476389
num_examples: 175314
download_size: 3496637
dataset_size: 5476389
- config_name: or
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22798709
num_examples: 470237
download_size: 10322832
dataset_size: 22798709
- config_name: os
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5946062
num_examples: 177054
download_size: 3583703
dataset_size: 5946062
- config_name: ota
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8015024
num_examples: 241903
download_size: 4343478
dataset_size: 8015024
- config_name: pa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20505754
num_examples: 481522
download_size: 10552147
dataset_size: 20505754
- config_name: pam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14527964
num_examples: 451253
download_size: 10242443
dataset_size: 14527964
- config_name: pap
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 54505401
num_examples: 1449881
download_size: 40415776
dataset_size: 54505401
- config_name: pcd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42132826
num_examples: 1221362
download_size: 30766812
dataset_size: 42132826
- config_name: pdc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14435256
num_examples: 448055
download_size: 10178322
dataset_size: 14435256
- config_name: pdt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13994892
num_examples: 437200
download_size: 9819388
dataset_size: 13994892
- config_name: pfl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15461023
num_examples: 474198
download_size: 10893651
dataset_size: 15461023
- config_name: pi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8913354
num_examples: 250251
download_size: 4651392
dataset_size: 8913354
- config_name: pih
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13971081
num_examples: 436214
download_size: 9810653
dataset_size: 13971081
- config_name: pl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 426030491
num_examples: 10025139
download_size: 295767506
dataset_size: 426030491
- config_name: pms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 51268512
num_examples: 1477043
download_size: 37698831
dataset_size: 51268512
- config_name: pnb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16192682
num_examples: 409037
download_size: 9196626
dataset_size: 16192682
- config_name: pnt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4439173
num_examples: 148336
download_size: 2703117
dataset_size: 4439173
- config_name: prg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17940420
num_examples: 544030
download_size: 12958482
dataset_size: 17940420
- config_name: ps
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8860902
num_examples: 259186
download_size: 4916502
dataset_size: 8860902
- config_name: pt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 491184040
num_examples: 11574568
download_size: 340831923
dataset_size: 491184040
- config_name: pt-br
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 318857431
num_examples: 7782980
download_size: 223442911
dataset_size: 318857431
- config_name: pwn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8500
num_examples: 269
download_size: 8738
dataset_size: 8500
- config_name: qu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15254702
num_examples: 468823
download_size: 10750388
dataset_size: 15254702
- config_name: quc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32
num_examples: 1
download_size: 1772
dataset_size: 32
- config_name: qug
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13798264
num_examples: 431733
download_size: 9661685
dataset_size: 13798264
- config_name: rgn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17001688
num_examples: 519871
download_size: 12258201
dataset_size: 17001688
- config_name: rif
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13792951
num_examples: 431588
download_size: 9657698
dataset_size: 13792951
- config_name: rm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 44450577
num_examples: 1284908
download_size: 32519630
dataset_size: 44450577
- config_name: rmc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 159
num_examples: 4
download_size: 1963
dataset_size: 159
- config_name: rmy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5610156
num_examples: 179191
download_size: 3608283
dataset_size: 5610156
- config_name: rn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13935534
num_examples: 435271
download_size: 9779486
dataset_size: 13935534
- config_name: ro
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 247469452
num_examples: 5878366
download_size: 177525205
dataset_size: 247469452
- config_name: roa-tara
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14425120
num_examples: 448972
download_size: 10152875
dataset_size: 14425120
- config_name: ru
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 405103215
num_examples: 7485811
download_size: 257215625
dataset_size: 405103215
- config_name: rue
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4953403
num_examples: 159530
download_size: 3037824
dataset_size: 4953403
- config_name: rup
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14459686
num_examples: 450345
download_size: 10198398
dataset_size: 14459686
- config_name: ruq-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4434290
num_examples: 148404
download_size: 2700920
dataset_size: 4434290
- config_name: ruq-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13783683
num_examples: 430978
download_size: 9656941
dataset_size: 13783683
- config_name: rw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14090196
num_examples: 439172
download_size: 9901257
dataset_size: 14090196
- config_name: rwr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8568706
num_examples: 241841
download_size: 4388475
dataset_size: 8568706
- config_name: ryu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2852
num_examples: 82
download_size: 4237
dataset_size: 2852
- config_name: sa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21404327
num_examples: 455674
download_size: 9692464
dataset_size: 21404327
- config_name: sat
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10810040
num_examples: 284911
download_size: 5750917
dataset_size: 10810040
- config_name: sc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47195572
num_examples: 1348137
download_size: 34521764
dataset_size: 47195572
- config_name: scn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43458983
num_examples: 1259067
download_size: 31775157
dataset_size: 43458983
- config_name: sco
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 56960413
num_examples: 1611092
download_size: 41724559
dataset_size: 56960413
- config_name: sd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14257513
num_examples: 363318
download_size: 7844047
dataset_size: 14257513
- config_name: sdc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13975497
num_examples: 436913
download_size: 9800517
dataset_size: 13975497
- config_name: se
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23962268
num_examples: 711439
download_size: 17409387
dataset_size: 23962268
- config_name: sei
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13827581
num_examples: 432520
download_size: 9684192
dataset_size: 13827581
- config_name: sg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13913524
num_examples: 434751
download_size: 9761739
dataset_size: 13913524
- config_name: sh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30173635
num_examples: 746207
download_size: 20133594
dataset_size: 30173635
- config_name: shi-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13783218
num_examples: 430968
download_size: 9656828
dataset_size: 13783218
- config_name: shi-tfng
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4308577
num_examples: 145279
download_size: 2608525
dataset_size: 4308577
- config_name: shn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10139002
num_examples: 260808
download_size: 4952168
dataset_size: 10139002
- config_name: shy-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4255322
num_examples: 144058
download_size: 2570625
dataset_size: 4255322
- config_name: si
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7405400
num_examples: 189718
download_size: 4270591
dataset_size: 7405400
- config_name: sjd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4300688
num_examples: 145047
download_size: 2604357
dataset_size: 4300688
- config_name: sje
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20970223
num_examples: 637639
download_size: 15120381
dataset_size: 20970223
- config_name: sju
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4315103
num_examples: 145655
download_size: 2620763
dataset_size: 4315103
- config_name: sk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 75586366
num_examples: 2050873
download_size: 54951330
dataset_size: 75586366
- config_name: skr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4274062
num_examples: 144443
download_size: 2585286
dataset_size: 4274062
- config_name: sl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 157883240
num_examples: 4112048
download_size: 118047353
dataset_size: 157883240
- config_name: sli
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13909208
num_examples: 434986
download_size: 9745964
dataset_size: 13909208
- config_name: sm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13984823
num_examples: 436830
download_size: 9817472
dataset_size: 13984823
- config_name: sma
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20653595
num_examples: 630437
download_size: 14902319
dataset_size: 20653595
- config_name: smj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19640206
num_examples: 604326
download_size: 14133964
dataset_size: 19640206
- config_name: smn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10902411
num_examples: 337543
download_size: 7576850
dataset_size: 10902411
- config_name: sms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4462345
num_examples: 149355
download_size: 2741038
dataset_size: 4462345
- config_name: sn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20116601
num_examples: 618231
download_size: 14463728
dataset_size: 20116601
- config_name: sq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 304708913
num_examples: 7311820
download_size: 225592169
dataset_size: 304708913
- config_name: sr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 52787253
num_examples: 1018361
download_size: 31364006
dataset_size: 52787253
- config_name: sr-ec
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9237541
num_examples: 248556
download_size: 5875548
dataset_size: 9237541
- config_name: sr-el
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 48848162
num_examples: 1418824
download_size: 35859120
dataset_size: 48848162
- config_name: srq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12796525
num_examples: 405957
download_size: 8899493
dataset_size: 12796525
- config_name: ss
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13823630
num_examples: 432423
download_size: 9682165
dataset_size: 13823630
- config_name: st
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13938937
num_examples: 435419
download_size: 9785161
dataset_size: 13938937
- config_name: stq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14484394
num_examples: 449885
download_size: 10228446
dataset_size: 14484394
- config_name: su
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20025826
num_examples: 583096
download_size: 14042822
dataset_size: 20025826
- config_name: sv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 339074900
num_examples: 8115455
download_size: 236022796
dataset_size: 339074900
- config_name: sw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 50612064
num_examples: 1465385
download_size: 37096369
dataset_size: 50612064
- config_name: szl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16772062
num_examples: 500107
download_size: 11868254
dataset_size: 16772062
- config_name: szy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4332021
num_examples: 146136
download_size: 2633271
dataset_size: 4332021
- config_name: ta
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31251824
num_examples: 546558
download_size: 15157673
dataset_size: 31251824
- config_name: tay
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4345269
num_examples: 146938
download_size: 2632535
dataset_size: 4345269
- config_name: tcy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8723594
num_examples: 244350
download_size: 4487471
dataset_size: 8723594
- config_name: te
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27587665
num_examples: 569615
download_size: 13669398
dataset_size: 27587665
- config_name: tet
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15092299
num_examples: 466244
download_size: 10702917
dataset_size: 15092299
- config_name: tg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12643125
num_examples: 304625
download_size: 7622522
dataset_size: 12643125
- config_name: tg-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4504034
num_examples: 149533
download_size: 2755000
dataset_size: 4504034
- config_name: tg-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19845835
num_examples: 610020
download_size: 14264492
dataset_size: 19845835
- config_name: th
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32693750
num_examples: 537447
download_size: 15849247
dataset_size: 32693750
- config_name: ti
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4366995
num_examples: 146479
download_size: 2648869
dataset_size: 4366995
- config_name: tk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5797050
num_examples: 184302
download_size: 3728802
dataset_size: 5797050
- config_name: tl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13661554
num_examples: 387377
download_size: 9456413
dataset_size: 13661554
- config_name: tly
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4309748
num_examples: 145312
download_size: 2609307
dataset_size: 4309748
- config_name: tly-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 35
num_examples: 1
download_size: 1793
dataset_size: 35
- config_name: tn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13936132
num_examples: 435219
download_size: 9780279
dataset_size: 13936132
- config_name: to
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13980327
num_examples: 436460
download_size: 9810650
dataset_size: 13980327
- config_name: tpi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14169019
num_examples: 442133
download_size: 9961827
dataset_size: 14169019
- config_name: tr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 72134544
num_examples: 1770267
download_size: 51032484
dataset_size: 72134544
- config_name: tru
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5322844
num_examples: 171327
download_size: 3371105
dataset_size: 5322844
- config_name: trv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 94285
num_examples: 3109
download_size: 65138
dataset_size: 94285
- config_name: ts
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13943481
num_examples: 435408
download_size: 9783789
dataset_size: 13943481
- config_name: tt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24182976
num_examples: 548502
download_size: 14868166
dataset_size: 24182976
- config_name: tt-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4943914
num_examples: 158198
download_size: 3048932
dataset_size: 4943914
- config_name: tt-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13842972
num_examples: 432513
download_size: 9702714
dataset_size: 13842972
- config_name: tum
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13924159
num_examples: 435110
download_size: 9770501
dataset_size: 13924159
- config_name: tw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13830508
num_examples: 432669
download_size: 9688164
dataset_size: 13830508
- config_name: ty
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16816401
num_examples: 507332
download_size: 12098154
dataset_size: 16816401
- config_name: tyv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4583082
num_examples: 149929
download_size: 2779632
dataset_size: 4583082
- config_name: tzm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4253588
num_examples: 144002
download_size: 2569067
dataset_size: 4253588
- config_name: udm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4854947
num_examples: 156300
download_size: 2958444
dataset_size: 4854947
- config_name: ug-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4316690
num_examples: 145443
download_size: 2614962
dataset_size: 4316690
- config_name: ug-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13786474
num_examples: 431056
download_size: 9659723
dataset_size: 13786474
- config_name: uk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 251058352
num_examples: 5108733
download_size: 168140976
dataset_size: 251058352
- config_name: ur
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 57063750
num_examples: 987011
download_size: 28328459
dataset_size: 57063750
- config_name: uz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11731793
num_examples: 344615
download_size: 8102734
dataset_size: 11731793
- config_name: uz-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252574
num_examples: 143981
download_size: 2567325
dataset_size: 4252574
- config_name: ve
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13932174
num_examples: 435216
download_size: 9777266
dataset_size: 13932174
- config_name: vec
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 52081230
num_examples: 1466867
download_size: 37307805
dataset_size: 52081230
- config_name: vep
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6174898
num_examples: 192298
download_size: 3994582
dataset_size: 6174898
- config_name: vi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 246835524
num_examples: 5743737
download_size: 172949263
dataset_size: 246835524
- config_name: vls
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42789297
num_examples: 1239359
download_size: 31228294
dataset_size: 42789297
- config_name: vmf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18352990
num_examples: 555205
download_size: 13289296
dataset_size: 18352990
- config_name: vo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 228352533
num_examples: 5610875
download_size: 155496988
dataset_size: 228352533
- config_name: vot
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5406190
num_examples: 173486
download_size: 3439433
dataset_size: 5406190
- config_name: wa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 49235347
num_examples: 1426584
download_size: 36167816
dataset_size: 49235347
- config_name: war
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 190306474
num_examples: 4449062
download_size: 133786270
dataset_size: 190306474
- config_name: wls
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4033
num_examples: 104
download_size: 5150
dataset_size: 4033
- config_name: wo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40961626
num_examples: 1193626
download_size: 29778666
dataset_size: 40961626
- config_name: wuu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40570130
num_examples: 1127741
download_size: 24209117
dataset_size: 40570130
- config_name: wya
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 28
num_examples: 1
download_size: 1740
dataset_size: 28
- config_name: xal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4475344
num_examples: 149984
download_size: 2722459
dataset_size: 4475344
- config_name: xh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20036194
num_examples: 615514
download_size: 14405310
dataset_size: 20036194
- config_name: xmf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5943645
num_examples: 169507
download_size: 3418593
dataset_size: 5943645
- config_name: xsy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4262789
num_examples: 144305
download_size: 2573349
dataset_size: 4262789
- config_name: yav
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4070
num_examples: 102
download_size: 4718
dataset_size: 4070
- config_name: yi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5495313
num_examples: 170277
download_size: 3373820
dataset_size: 5495313
- config_name: yo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 25424749
num_examples: 724345
download_size: 18086773
dataset_size: 25424749
- config_name: za
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15159230
num_examples: 365892
download_size: 7774767
dataset_size: 15159230
- config_name: zea
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14538518
num_examples: 451577
download_size: 10262897
dataset_size: 14538518
- config_name: zgh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4253917
num_examples: 144006
download_size: 2569373
dataset_size: 4253917
- config_name: zh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 264353677
num_examples: 5424320
download_size: 174420118
dataset_size: 264353677
- config_name: zh-cn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42868611
num_examples: 1158755
download_size: 27243799
dataset_size: 42868611
- config_name: zh-hans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 57233156
num_examples: 1483225
download_size: 36583522
dataset_size: 57233156
- config_name: zh-hant
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53502814
num_examples: 1356560
download_size: 36755083
dataset_size: 53502814
- config_name: zh-hk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15325323
num_examples: 408391
download_size: 10455809
dataset_size: 15325323
- config_name: zh-mo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6568267
num_examples: 180950
download_size: 3547260
dataset_size: 6568267
- config_name: zh-my
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32637498
num_examples: 916876
download_size: 19289581
dataset_size: 32637498
- config_name: zh-sg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 35325327
num_examples: 979652
download_size: 21150070
dataset_size: 35325327
- config_name: zh-tw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17500668
num_examples: 443057
download_size: 11121104
dataset_size: 17500668
- config_name: zh-yue
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1352
num_examples: 30
download_size: 2963
dataset_size: 1352
- config_name: zu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47349379
num_examples: 1380550
download_size: 34649660
dataset_size: 47349379
configs:
- config_name: aa
data_files:
- split: label
path: aa/label-*
- config_name: ab
data_files:
- split: label
path: ab/label-*
- config_name: abs
data_files:
- split: label
path: abs/label-*
- config_name: ace
data_files:
- split: label
path: ace/label-*
- config_name: ady
data_files:
- split: label
path: ady/label-*
- config_name: ady-cyrl
data_files:
- split: label
path: ady-cyrl/label-*
- config_name: aeb
data_files:
- split: label
path: aeb/label-*
- config_name: aeb-arab
data_files:
- split: label
path: aeb-arab/label-*
- config_name: aeb-latn
data_files:
- split: label
path: aeb-latn/label-*
- config_name: af
data_files:
- split: label
path: af/label-*
- config_name: agq
data_files:
- split: label
path: agq/label-*
- config_name: ak
data_files:
- split: label
path: ak/label-*
- config_name: aln
data_files:
- split: label
path: aln/label-*
- config_name: als
data_files:
- split: label
path: als/label-*
- config_name: alt
data_files:
- split: label
path: alt/label-*
- config_name: am
data_files:
- split: label
path: am/label-*
- config_name: ami
data_files:
- split: label
path: ami/label-*
- config_name: an
data_files:
- split: label
path: an/label-*
- config_name: ang
data_files:
- split: label
path: ang/label-*
- config_name: anp
data_files:
- split: label
path: anp/label-*
- config_name: ar
data_files:
- split: label
path: ar/label-*
- config_name: arc
data_files:
- split: label
path: arc/label-*
- config_name: arn
data_files:
- split: label
path: arn/label-*
- config_name: arq
data_files:
- split: label
path: arq/label-*
- config_name: ary
data_files:
- split: label
path: ary/label-*
- config_name: arz
data_files:
- split: label
path: arz/label-*
- config_name: as
data_files:
- split: label
path: as/label-*
- config_name: ase
data_files:
- split: label
path: ase/label-*
- config_name: ast
data_files:
- split: label
path: ast/label-*
- config_name: atj
data_files:
- split: label
path: atj/label-*
- config_name: av
data_files:
- split: label
path: av/label-*
- config_name: avk
data_files:
- split: label
path: avk/label-*
- config_name: awa
data_files:
- split: label
path: awa/label-*
- config_name: ay
data_files:
- split: label
path: ay/label-*
- config_name: az
data_files:
- split: label
path: az/label-*
- config_name: azb
data_files:
- split: label
path: azb/label-*
- config_name: ba
data_files:
- split: label
path: ba/label-*
- config_name: ban
data_files:
- split: label
path: ban/label-*
- config_name: ban-bali
data_files:
- split: label
path: ban-bali/label-*
- config_name: bar
data_files:
- split: label
path: bar/label-*
- config_name: bbc
data_files:
- split: label
path: bbc/label-*
- config_name: bcc
data_files:
- split: label
path: bcc/label-*
- config_name: be
data_files:
- split: label
path: be/label-*
- config_name: be-tarask
data_files:
- split: label
path: be-tarask/label-*
- config_name: bg
data_files:
- split: label
path: bg/label-*
- config_name: bgn
data_files:
- split: label
path: bgn/label-*
- config_name: bi
data_files:
- split: label
path: bi/label-*
- config_name: bjn
data_files:
- split: label
path: bjn/label-*
- config_name: bm
data_files:
- split: label
path: bm/label-*
- config_name: bn
data_files:
- split: label
path: bn/label-*
- config_name: bo
data_files:
- split: label
path: bo/label-*
- config_name: bpy
data_files:
- split: label
path: bpy/label-*
- config_name: bqi
data_files:
- split: label
path: bqi/label-*
- config_name: br
data_files:
- split: label
path: br/label-*
- config_name: brh
data_files:
- split: label
path: brh/label-*
- config_name: bs
data_files:
- split: label
path: bs/label-*
- config_name: btm
data_files:
- split: label
path: btm/label-*
- config_name: bto
data_files:
- split: label
path: bto/label-*
- config_name: bug
data_files:
- split: label
path: bug/label-*
- config_name: bxr
data_files:
- split: label
path: bxr/label-*
- config_name: ca
data_files:
- split: label
path: ca/label-*
- config_name: cbk-zam
data_files:
- split: label
path: cbk-zam/label-*
- config_name: cdo
data_files:
- split: label
path: cdo/label-*
- config_name: ce
data_files:
- split: label
path: ce/label-*
- config_name: ceb
data_files:
- split: label
path: ceb/label-*
- config_name: ch
data_files:
- split: label
path: ch/label-*
- config_name: cho
data_files:
- split: label
path: cho/label-*
- config_name: chr
data_files:
- split: label
path: chr/label-*
- config_name: chy
data_files:
- split: label
path: chy/label-*
- config_name: ckb
data_files:
- split: label
path: ckb/label-*
- config_name: co
data_files:
- split: label
path: co/label-*
- config_name: cps
data_files:
- split: label
path: cps/label-*
- config_name: cr
data_files:
- split: label
path: cr/label-*
- config_name: crh
data_files:
- split: label
path: crh/label-*
- config_name: crh-cyrl
data_files:
- split: label
path: crh-cyrl/label-*
- config_name: crh-latn
data_files:
- split: label
path: crh-latn/label-*
- config_name: cs
data_files:
- split: label
path: cs/label-*
- config_name: csb
data_files:
- split: label
path: csb/label-*
- config_name: cv
data_files:
- split: label
path: cv/label-*
- config_name: cy
data_files:
- split: label
path: cy/label-*
- config_name: da
data_files:
- split: label
path: da/label-*
- config_name: dag
data_files:
- split: label
path: dag/label-*
- config_name: de
data_files:
- split: label
path: de/label-*
- config_name: de-at
data_files:
- split: label
path: de-at/label-*
- config_name: de-ch
data_files:
- split: label
path: de-ch/label-*
- config_name: de-formal
data_files:
- split: label
path: de-formal/label-*
- config_name: din
data_files:
- split: label
path: din/label-*
- config_name: diq
data_files:
- split: label
path: diq/label-*
- config_name: dsb
data_files:
- split: label
path: dsb/label-*
- config_name: dtp
data_files:
- split: label
path: dtp/label-*
- config_name: dty
data_files:
- split: label
path: dty/label-*
- config_name: dua
data_files:
- split: label
path: dua/label-*
- config_name: dv
data_files:
- split: label
path: dv/label-*
- config_name: dz
data_files:
- split: label
path: dz/label-*
- config_name: ee
data_files:
- split: label
path: ee/label-*
- config_name: egl
data_files:
- split: label
path: egl/label-*
- config_name: el
data_files:
- split: label
path: el/label-*
- config_name: eml
data_files:
- split: label
path: eml/label-*
- config_name: en
data_files:
- split: label
path: en/label-*
default: true
- config_name: en-ca
data_files:
- split: label
path: en-ca/label-*
- config_name: en-gb
data_files:
- split: label
path: en-gb/label-*
- config_name: en-us
data_files:
- split: label
path: en-us/label-*
- config_name: eo
data_files:
- split: label
path: eo/label-*
- config_name: es
data_files:
- split: label
path: es/label-*
- config_name: es-419
data_files:
- split: label
path: es-419/label-*
- config_name: es-formal
data_files:
- split: label
path: es-formal/label-*
- config_name: et
data_files:
- split: label
path: et/label-*
- config_name: eu
data_files:
- split: label
path: eu/label-*
- config_name: ext
data_files:
- split: label
path: ext/label-*
- config_name: fa
data_files:
- split: label
path: fa/label-*
- config_name: ff
data_files:
- split: label
path: ff/label-*
- config_name: fi
data_files:
- split: label
path: fi/label-*
- config_name: fit
data_files:
- split: label
path: fit/label-*
- config_name: fj
data_files:
- split: label
path: fj/label-*
- config_name: fkv
data_files:
- split: label
path: fkv/label-*
- config_name: fo
data_files:
- split: label
path: fo/label-*
- config_name: fr
data_files:
- split: label
path: fr/label-*
- config_name: frc
data_files:
- split: label
path: frc/label-*
- config_name: frp
data_files:
- split: label
path: frp/label-*
- config_name: frr
data_files:
- split: label
path: frr/label-*
- config_name: fur
data_files:
- split: label
path: fur/label-*
- config_name: ga
data_files:
- split: label
path: ga/label-*
- config_name: gag
data_files:
- split: label
path: gag/label-*
- config_name: gan
data_files:
- split: label
path: gan/label-*
- config_name: gan-hans
data_files:
- split: label
path: gan-hans/label-*
- config_name: gan-hant
data_files:
- split: label
path: gan-hant/label-*
- config_name: gcr
data_files:
- split: label
path: gcr/label-*
- config_name: gd
data_files:
- split: label
path: gd/label-*
- config_name: gl
data_files:
- split: label
path: gl/label-*
- config_name: glk
data_files:
- split: label
path: glk/label-*
- config_name: gn
data_files:
- split: label
path: gn/label-*
- config_name: gom
data_files:
- split: label
path: gom/label-*
- config_name: gom-deva
data_files:
- split: label
path: gom-deva/label-*
- config_name: gom-latn
data_files:
- split: label
path: gom-latn/label-*
- config_name: gor
data_files:
- split: label
path: gor/label-*
- config_name: got
data_files:
- split: label
path: got/label-*
- config_name: grc
data_files:
- split: label
path: grc/label-*
- config_name: gu
data_files:
- split: label
path: gu/label-*
- config_name: guc
data_files:
- split: label
path: guc/label-*
- config_name: guw
data_files:
- split: label
path: guw/label-*
- config_name: gv
data_files:
- split: label
path: gv/label-*
- config_name: ha
data_files:
- split: label
path: ha/label-*
- config_name: hak
data_files:
- split: label
path: hak/label-*
- config_name: haw
data_files:
- split: label
path: haw/label-*
- config_name: he
data_files:
- split: label
path: he/label-*
- config_name: hi
data_files:
- split: label
path: hi/label-*
- config_name: hif
data_files:
- split: label
path: hif/label-*
- config_name: hif-latn
data_files:
- split: label
path: hif-latn/label-*
- config_name: hil
data_files:
- split: label
path: hil/label-*
- config_name: ho
data_files:
- split: label
path: ho/label-*
- config_name: hr
data_files:
- split: label
path: hr/label-*
- config_name: hrx
data_files:
- split: label
path: hrx/label-*
- config_name: hsb
data_files:
- split: label
path: hsb/label-*
- config_name: ht
data_files:
- split: label
path: ht/label-*
- config_name: hu
data_files:
- split: label
path: hu/label-*
- config_name: hu-formal
data_files:
- split: label
path: hu-formal/label-*
- config_name: hy
data_files:
- split: label
path: hy/label-*
- config_name: hyw
data_files:
- split: label
path: hyw/label-*
- config_name: hz
data_files:
- split: label
path: hz/label-*
- config_name: ia
data_files:
- split: label
path: ia/label-*
- config_name: id
data_files:
- split: label
path: id/label-*
- config_name: ie
data_files:
- split: label
path: ie/label-*
- config_name: ig
data_files:
- split: label
path: ig/label-*
- config_name: ii
data_files:
- split: label
path: ii/label-*
- config_name: ik
data_files:
- split: label
path: ik/label-*
- config_name: ike-cans
data_files:
- split: label
path: ike-cans/label-*
- config_name: ike-latn
data_files:
- split: label
path: ike-latn/label-*
- config_name: ilo
data_files:
- split: label
path: ilo/label-*
- config_name: inh
data_files:
- split: label
path: inh/label-*
- config_name: io
data_files:
- split: label
path: io/label-*
- config_name: is
data_files:
- split: label
path: is/label-*
- config_name: it
data_files:
- split: label
path: it/label-*
- config_name: iu
data_files:
- split: label
path: iu/label-*
- config_name: ja
data_files:
- split: label
path: ja/label-*
- config_name: jam
data_files:
- split: label
path: jam/label-*
- config_name: jbo
data_files:
- split: label
path: jbo/label-*
- config_name: jv
data_files:
- split: label
path: jv/label-*
- config_name: ka
data_files:
- split: label
path: ka/label-*
- config_name: kaa
data_files:
- split: label
path: kaa/label-*
- config_name: kab
data_files:
- split: label
path: kab/label-*
- config_name: kbd
data_files:
- split: label
path: kbd/label-*
- config_name: kbd-cyrl
data_files:
- split: label
path: kbd-cyrl/label-*
- config_name: kbp
data_files:
- split: label
path: kbp/label-*
- config_name: kea
data_files:
- split: label
path: kea/label-*
- config_name: kg
data_files:
- split: label
path: kg/label-*
- config_name: khw
data_files:
- split: label
path: khw/label-*
- config_name: ki
data_files:
- split: label
path: ki/label-*
- config_name: kj
data_files:
- split: label
path: kj/label-*
- config_name: kjp
data_files:
- split: label
path: kjp/label-*
- config_name: kk
data_files:
- split: label
path: kk/label-*
- config_name: kk-arab
data_files:
- split: label
path: kk-arab/label-*
- config_name: kk-kz
data_files:
- split: label
path: kk-kz/label-*
- config_name: kk-latn
data_files:
- split: label
path: kk-latn/label-*
- config_name: kk-tr
data_files:
- split: label
path: kk-tr/label-*
- config_name: ko
data_files:
- split: label
path: ko/label-*
- config_name: ko-kp
data_files:
- split: label
path: ko-kp/label-*
- config_name: koi
data_files:
- split: label
path: koi/label-*
- config_name: kr
data_files:
- split: label
path: kr/label-*
- config_name: krc
data_files:
- split: label
path: krc/label-*
- config_name: kri
data_files:
- split: label
path: kri/label-*
- config_name: krj
data_files:
- split: label
path: krj/label-*
- config_name: krl
data_files:
- split: label
path: krl/label-*
- config_name: ks
data_files:
- split: label
path: ks/label-*
- config_name: ks-deva
data_files:
- split: label
path: ks-deva/label-*
- config_name: ksh
data_files:
- split: label
path: ksh/label-*
- config_name: ku
data_files:
- split: label
path: ku/label-*
- config_name: ku-arab
data_files:
- split: label
path: ku-arab/label-*
- config_name: ku-latn
data_files:
- split: label
path: ku-latn/label-*
- config_name: kum
data_files:
- split: label
path: kum/label-*
- config_name: kv
data_files:
- split: label
path: kv/label-*
- config_name: kw
data_files:
- split: label
path: kw/label-*
- config_name: ky
data_files:
- split: label
path: ky/label-*
- config_name: la
data_files:
- split: label
path: la/label-*
- config_name: lad
data_files:
- split: label
path: lad/label-*
- config_name: lb
data_files:
- split: label
path: lb/label-*
- config_name: lbe
data_files:
- split: label
path: lbe/label-*
- config_name: lez
data_files:
- split: label
path: lez/label-*
- config_name: lfn
data_files:
- split: label
path: lfn/label-*
- config_name: lg
data_files:
- split: label
path: lg/label-*
- config_name: li
data_files:
- split: label
path: li/label-*
- config_name: lij
data_files:
- split: label
path: lij/label-*
- config_name: liv
data_files:
- split: label
path: liv/label-*
- config_name: lki
data_files:
- split: label
path: lki/label-*
- config_name: lld
data_files:
- split: label
path: lld/label-*
- config_name: lmo
data_files:
- split: label
path: lmo/label-*
- config_name: ln
data_files:
- split: label
path: ln/label-*
- config_name: lo
data_files:
- split: label
path: lo/label-*
- config_name: loz
data_files:
- split: label
path: loz/label-*
- config_name: lt
data_files:
- split: label
path: lt/label-*
- config_name: ltg
data_files:
- split: label
path: ltg/label-*
- config_name: lus
data_files:
- split: label
path: lus/label-*
- config_name: luz
data_files:
- split: label
path: luz/label-*
- config_name: lv
data_files:
- split: label
path: lv/label-*
- config_name: lzh
data_files:
- split: label
path: lzh/label-*
- config_name: mdf
data_files:
- split: label
path: mdf/label-*
- config_name: mg
data_files:
- split: label
path: mg/label-*
- config_name: mh
data_files:
- split: label
path: mh/label-*
- config_name: mi
data_files:
- split: label
path: mi/label-*
- config_name: min
data_files:
- split: label
path: min/label-*
- config_name: mk
data_files:
- split: label
path: mk/label-*
- config_name: ml
data_files:
- split: label
path: ml/label-*
- config_name: mn
data_files:
- split: label
path: mn/label-*
- config_name: mni
data_files:
- split: label
path: mni/label-*
- config_name: mnw
data_files:
- split: label
path: mnw/label-*
- config_name: mo
data_files:
- split: label
path: mo/label-*
- config_name: mr
data_files:
- split: label
path: mr/label-*
- config_name: mrh
data_files:
- split: label
path: mrh/label-*
- config_name: mrj
data_files:
- split: label
path: mrj/label-*
- config_name: ms
data_files:
- split: label
path: ms/label-*
- config_name: ms-arab
data_files:
- split: label
path: ms-arab/label-*
- config_name: mt
data_files:
- split: label
path: mt/label-*
- config_name: mus
data_files:
- split: label
path: mus/label-*
- config_name: mwl
data_files:
- split: label
path: mwl/label-*
- config_name: my
data_files:
- split: label
path: my/label-*
- config_name: mzn
data_files:
- split: label
path: mzn/label-*
- config_name: na
data_files:
- split: label
path: na/label-*
- config_name: nah
data_files:
- split: label
path: nah/label-*
- config_name: nan-hani
data_files:
- split: label
path: nan-hani/label-*
- config_name: nap
data_files:
- split: label
path: nap/label-*
- config_name: nb
data_files:
- split: label
path: nb/label-*
- config_name: nds
data_files:
- split: label
path: nds/label-*
- config_name: nds-nl
data_files:
- split: label
path: nds-nl/label-*
- config_name: ne
data_files:
- split: label
path: ne/label-*
- config_name: new
data_files:
- split: label
path: new/label-*
- config_name: ng
data_files:
- split: label
path: ng/label-*
- config_name: nia
data_files:
- split: label
path: nia/label-*
- config_name: niu
data_files:
- split: label
path: niu/label-*
- config_name: nl
data_files:
- split: label
path: nl/label-*
- config_name: nn
data_files:
- split: label
path: nn/label-*
- config_name: 'no'
data_files:
- split: label
path: no/label-*
- config_name: nod
data_files:
- split: label
path: nod/label-*
- config_name: nov
data_files:
- split: label
path: nov/label-*
- config_name: nqo
data_files:
- split: label
path: nqo/label-*
- config_name: nrm
data_files:
- split: label
path: nrm/label-*
- config_name: nso
data_files:
- split: label
path: nso/label-*
- config_name: nv
data_files:
- split: label
path: nv/label-*
- config_name: ny
data_files:
- split: label
path: ny/label-*
- config_name: nys
data_files:
- split: label
path: nys/label-*
- config_name: oc
data_files:
- split: label
path: oc/label-*
- config_name: olo
data_files:
- split: label
path: olo/label-*
- config_name: om
data_files:
- split: label
path: om/label-*
- config_name: or
data_files:
- split: label
path: or/label-*
- config_name: os
data_files:
- split: label
path: os/label-*
- config_name: ota
data_files:
- split: label
path: ota/label-*
- config_name: pa
data_files:
- split: label
path: pa/label-*
- config_name: pam
data_files:
- split: label
path: pam/label-*
- config_name: pap
data_files:
- split: label
path: pap/label-*
- config_name: pcd
data_files:
- split: label
path: pcd/label-*
- config_name: pdc
data_files:
- split: label
path: pdc/label-*
- config_name: pdt
data_files:
- split: label
path: pdt/label-*
- config_name: pfl
data_files:
- split: label
path: pfl/label-*
- config_name: pi
data_files:
- split: label
path: pi/label-*
- config_name: pih
data_files:
- split: label
path: pih/label-*
- config_name: pl
data_files:
- split: label
path: pl/label-*
- config_name: pms
data_files:
- split: label
path: pms/label-*
- config_name: pnb
data_files:
- split: label
path: pnb/label-*
- config_name: pnt
data_files:
- split: label
path: pnt/label-*
- config_name: prg
data_files:
- split: label
path: prg/label-*
- config_name: ps
data_files:
- split: label
path: ps/label-*
- config_name: pt
data_files:
- split: label
path: pt/label-*
- config_name: pt-br
data_files:
- split: label
path: pt-br/label-*
- config_name: pwn
data_files:
- split: label
path: pwn/label-*
- config_name: qu
data_files:
- split: label
path: qu/label-*
- config_name: quc
data_files:
- split: label
path: quc/label-*
- config_name: qug
data_files:
- split: label
path: qug/label-*
- config_name: rgn
data_files:
- split: label
path: rgn/label-*
- config_name: rif
data_files:
- split: label
path: rif/label-*
- config_name: rm
data_files:
- split: label
path: rm/label-*
- config_name: rmc
data_files:
- split: label
path: rmc/label-*
- config_name: rmy
data_files:
- split: label
path: rmy/label-*
- config_name: rn
data_files:
- split: label
path: rn/label-*
- config_name: ro
data_files:
- split: label
path: ro/label-*
- config_name: roa-tara
data_files:
- split: label
path: roa-tara/label-*
- config_name: ru
data_files:
- split: label
path: ru/label-*
- config_name: rue
data_files:
- split: label
path: rue/label-*
- config_name: rup
data_files:
- split: label
path: rup/label-*
- config_name: ruq-cyrl
data_files:
- split: label
path: ruq-cyrl/label-*
- config_name: ruq-latn
data_files:
- split: label
path: ruq-latn/label-*
- config_name: rw
data_files:
- split: label
path: rw/label-*
- config_name: rwr
data_files:
- split: label
path: rwr/label-*
- config_name: ryu
data_files:
- split: label
path: ryu/label-*
- config_name: sa
data_files:
- split: label
path: sa/label-*
- config_name: sat
data_files:
- split: label
path: sat/label-*
- config_name: sc
data_files:
- split: label
path: sc/label-*
- config_name: scn
data_files:
- split: label
path: scn/label-*
- config_name: sco
data_files:
- split: label
path: sco/label-*
- config_name: sd
data_files:
- split: label
path: sd/label-*
- config_name: sdc
data_files:
- split: label
path: sdc/label-*
- config_name: se
data_files:
- split: label
path: se/label-*
- config_name: sei
data_files:
- split: label
path: sei/label-*
- config_name: sg
data_files:
- split: label
path: sg/label-*
- config_name: sh
data_files:
- split: label
path: sh/label-*
- config_name: shi-latn
data_files:
- split: label
path: shi-latn/label-*
- config_name: shi-tfng
data_files:
- split: label
path: shi-tfng/label-*
- config_name: shn
data_files:
- split: label
path: shn/label-*
- config_name: shy-latn
data_files:
- split: label
path: shy-latn/label-*
- config_name: si
data_files:
- split: label
path: si/label-*
- config_name: sjd
data_files:
- split: label
path: sjd/label-*
- config_name: sje
data_files:
- split: label
path: sje/label-*
- config_name: sju
data_files:
- split: label
path: sju/label-*
- config_name: sk
data_files:
- split: label
path: sk/label-*
- config_name: skr
data_files:
- split: label
path: skr/label-*
- config_name: sl
data_files:
- split: label
path: sl/label-*
- config_name: sli
data_files:
- split: label
path: sli/label-*
- config_name: sm
data_files:
- split: label
path: sm/label-*
- config_name: sma
data_files:
- split: label
path: sma/label-*
- config_name: smj
data_files:
- split: label
path: smj/label-*
- config_name: smn
data_files:
- split: label
path: smn/label-*
- config_name: sms
data_files:
- split: label
path: sms/label-*
- config_name: sn
data_files:
- split: label
path: sn/label-*
- config_name: sq
data_files:
- split: label
path: sq/label-*
- config_name: sr
data_files:
- split: label
path: sr/label-*
- config_name: sr-ec
data_files:
- split: label
path: sr-ec/label-*
- config_name: sr-el
data_files:
- split: label
path: sr-el/label-*
- config_name: srq
data_files:
- split: label
path: srq/label-*
- config_name: ss
data_files:
- split: label
path: ss/label-*
- config_name: st
data_files:
- split: label
path: st/label-*
- config_name: stq
data_files:
- split: label
path: stq/label-*
- config_name: su
data_files:
- split: label
path: su/label-*
- config_name: sv
data_files:
- split: label
path: sv/label-*
- config_name: sw
data_files:
- split: label
path: sw/label-*
- config_name: szl
data_files:
- split: label
path: szl/label-*
- config_name: szy
data_files:
- split: label
path: szy/label-*
- config_name: ta
data_files:
- split: label
path: ta/label-*
- config_name: tay
data_files:
- split: label
path: tay/label-*
- config_name: tcy
data_files:
- split: label
path: tcy/label-*
- config_name: te
data_files:
- split: label
path: te/label-*
- config_name: tet
data_files:
- split: label
path: tet/label-*
- config_name: tg
data_files:
- split: label
path: tg/label-*
- config_name: tg-cyrl
data_files:
- split: label
path: tg-cyrl/label-*
- config_name: tg-latn
data_files:
- split: label
path: tg-latn/label-*
- config_name: th
data_files:
- split: label
path: th/label-*
- config_name: ti
data_files:
- split: label
path: ti/label-*
- config_name: tk
data_files:
- split: label
path: tk/label-*
- config_name: tl
data_files:
- split: label
path: tl/label-*
- config_name: tly
data_files:
- split: label
path: tly/label-*
- config_name: tly-cyrl
data_files:
- split: label
path: tly-cyrl/label-*
- config_name: tn
data_files:
- split: label
path: tn/label-*
- config_name: to
data_files:
- split: label
path: to/label-*
- config_name: tpi
data_files:
- split: label
path: tpi/label-*
- config_name: tr
data_files:
- split: label
path: tr/label-*
- config_name: tru
data_files:
- split: label
path: tru/label-*
- config_name: trv
data_files:
- split: label
path: trv/label-*
- config_name: ts
data_files:
- split: label
path: ts/label-*
- config_name: tt
data_files:
- split: label
path: tt/label-*
- config_name: tt-cyrl
data_files:
- split: label
path: tt-cyrl/label-*
- config_name: tt-latn
data_files:
- split: label
path: tt-latn/label-*
- config_name: tum
data_files:
- split: label
path: tum/label-*
- config_name: tw
data_files:
- split: label
path: tw/label-*
- config_name: ty
data_files:
- split: label
path: ty/label-*
- config_name: tyv
data_files:
- split: label
path: tyv/label-*
- config_name: tzm
data_files:
- split: label
path: tzm/label-*
- config_name: udm
data_files:
- split: label
path: udm/label-*
- config_name: ug-arab
data_files:
- split: label
path: ug-arab/label-*
- config_name: ug-latn
data_files:
- split: label
path: ug-latn/label-*
- config_name: uk
data_files:
- split: label
path: uk/label-*
- config_name: ur
data_files:
- split: label
path: ur/label-*
- config_name: uz
data_files:
- split: label
path: uz/label-*
- config_name: uz-cyrl
data_files:
- split: label
path: uz-cyrl/label-*
- config_name: ve
data_files:
- split: label
path: ve/label-*
- config_name: vec
data_files:
- split: label
path: vec/label-*
- config_name: vep
data_files:
- split: label
path: vep/label-*
- config_name: vi
data_files:
- split: label
path: vi/label-*
- config_name: vls
data_files:
- split: label
path: vls/label-*
- config_name: vmf
data_files:
- split: label
path: vmf/label-*
- config_name: vo
data_files:
- split: label
path: vo/label-*
- config_name: vot
data_files:
- split: label
path: vot/label-*
- config_name: wa
data_files:
- split: label
path: wa/label-*
- config_name: war
data_files:
- split: label
path: war/label-*
- config_name: wls
data_files:
- split: label
path: wls/label-*
- config_name: wo
data_files:
- split: label
path: wo/label-*
- config_name: wuu
data_files:
- split: label
path: wuu/label-*
- config_name: wya
data_files:
- split: label
path: wya/label-*
- config_name: xal
data_files:
- split: label
path: xal/label-*
- config_name: xh
data_files:
- split: label
path: xh/label-*
- config_name: xmf
data_files:
- split: label
path: xmf/label-*
- config_name: xsy
data_files:
- split: label
path: xsy/label-*
- config_name: yav
data_files:
- split: label
path: yav/label-*
- config_name: yi
data_files:
- split: label
path: yi/label-*
- config_name: yo
data_files:
- split: label
path: yo/label-*
- config_name: za
data_files:
- split: label
path: za/label-*
- config_name: zea
data_files:
- split: label
path: zea/label-*
- config_name: zgh
data_files:
- split: label
path: zgh/label-*
- config_name: zh
data_files:
- split: label
path: zh/label-*
- config_name: zh-cn
data_files:
- split: label
path: zh-cn/label-*
- config_name: zh-hans
data_files:
- split: label
path: zh-hans/label-*
- config_name: zh-hant
data_files:
- split: label
path: zh-hant/label-*
- config_name: zh-hk
data_files:
- split: label
path: zh-hk/label-*
- config_name: zh-mo
data_files:
- split: label
path: zh-mo/label-*
- config_name: zh-my
data_files:
- split: label
path: zh-my/label-*
- config_name: zh-sg
data_files:
- split: label
path: zh-sg/label-*
- config_name: zh-tw
data_files:
- split: label
path: zh-tw/label-*
- config_name: zh-yue
data_files:
- split: label
path: zh-yue/label-*
- config_name: zu
data_files:
- split: label
path: zu/label-*
task_categories:
- translation
- text2text-generation
language:
- en
- fr
- de
- ja
- zh
- hi
- ar
- bn
- ru
- es
---
# Wikidata Labels
Large parallel corpus for machine translation
- Entity label data extracted from Wikidata (2022-01-03), filtered for item entities only
- Only download the languages you need with `datasets>=2.14.0`
- Similar dataset: https://huggingface.co/datasets/wmt/wikititles (18 Wikipedia title pairs instead of all Wikidata entities)
## Dataset Details
### Dataset Sources
- Wikidata JSON dump (wikidata-20220103-all.json.gz) https://www.wikidata.org/wiki/Wikidata:Database_download
## Uses
You can generate parallel text examples from this dataset like below:
```python
from datasets import load_dataset
import pandas as pd
def parallel_labels(lang_codes: list, how="inner", repo_id="rayliuca/wikidata_entity_label", merge_config={}, datasets_config={}) -> pd.DataFrame:
    out_df = None
    for lc in lang_codes:
        # Download only this language's subset (requires datasets>=2.14.0)
        dataset = load_dataset(repo_id, lc, **datasets_config)
        # Rename the label column to the language code; lastrevid is not needed here
        dataset_df = dataset['label'].to_pandas().rename(columns={"label": lc}).drop(columns=['lastrevid'])
        if out_df is None:
            out_df = dataset_df
        else:
            # Align labels across languages on the shared Wikidata entity id
            out_df = out_df.merge(
                dataset_df,
                on='wikidata_id',
                how=how,
                **merge_config
            )
    return out_df
# Note: the "en" subset is >4GB
parallel_labels(['en', 'fr', 'ja', 'zh']).head()
```
### Output
| | wikidata_id | en | fr | ja | zh |
|---:|:--------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------|:---------------------------------------------|
| 0 | Q109739412 | SARS-CoV-2 Omicron variant | variant Omicron du SARS-CoV-2 | SARSコロナウイルス2-オミクロン株 | 嚴重急性呼吸道症候群冠狀病毒2型Omicron變異株 |
| 1 | Q108460606 | Ulughbegsaurus | Ulughbegsaurus | ウルグベグサウルス | 兀魯伯龍屬 |
| 2 | Q108556886 | AUKUS | AUKUS | AUKUS | AUKUS |
| 3 | Q106496152 | Claude Joseph | Claude Joseph | クロード・ジョゼフ | 克洛德·约瑟夫 |
| 4 | Q105519361 | The World's Finest Assassin Gets Reincarnated in a Different World as an Aristocrat | The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat | 世界最高の暗殺者、異世界貴族に転生する | 世界頂尖的暗殺者轉生為異世界貴族 |
Note: the example table above shows a quirk of the Wikidata labels. The French Wikipedia page [The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat](https://fr.wikipedia.org/wiki/The_World%27s_Finest_Assassin_Gets_Reincarnated_in_Another_World_as_an_Aristocrat) uses English for its title. While this could be disadvantageous for direct translation training, it also provides insight into what native speakers actually call this entity, rather than the literal translation of the title.
## Dataset Structure
Each language has its own subset (aka config), which means you only have to download the languages you need with `datasets>=2.14.0`
Each subset has these fields:
- wikidata_id
- lastrevid
- label
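As a concrete toy illustration of this schema and of how subsets align on `wikidata_id`, here is a plain-Python sketch of the inner join performed by the pandas helper above. The records are invented for illustration, not taken from the actual dump:

```python
# Toy records mimicking the three-field schema of each language subset.
# The wikidata_id values and labels below are illustrative only.
en = [
    {"wikidata_id": "Q1", "lastrevid": 100, "label": "universe"},
    {"wikidata_id": "Q2", "lastrevid": 101, "label": "Earth"},
]
fr = [
    {"wikidata_id": "Q1", "lastrevid": 200, "label": "univers"},
]

def inner_join(a, b, key="wikidata_id"):
    """Pair up records that share the same Wikidata entity id (en/fr sketch)."""
    b_index = {row[key]: row for row in b}
    return [
        {key: row[key], "en": row["label"], "fr": b_index[row[key]]["label"]}
        for row in a if row[key] in b_index
    ]

pairs = inner_join(en, fr)
print(pairs)  # [{'wikidata_id': 'Q1', 'en': 'universe', 'fr': 'univers'}]
```

Entities missing a label in either language are dropped, which is what `how="inner"` does in the pandas version.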
## Dataset Creation
#### Data Collection and Processing
- Filtered for item entities only
- Ignored the descriptions as those texts are not very parallel
## Bias, Risks, and Limitations
- Might be slightly outdated (2022)
- Popular languages have more entries
- Labels are not guaranteed to be literal translations (see examples above) |
CASIA-LM/ChineseWebText2.0 | CASIA-LM | "2024-12-02T08:43:03Z" | 10,204 | 17 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2411.19668",
"region:us"
] | null | "2024-11-15T10:19:36Z" | ---
license: apache-2.0
size_categories:
- n>1T
---
# ChineseWebText 2.0: Large-Scale High-quality Chinese Web Text with Multi-dimensional and fine-grained information
This directory contains the ChineseWebText2.0 dataset and a new tool-chain called MDFG-tool for constructing large-scale and high-quality Chinese datasets with multi-dimensional and fine-grained information. Our ChineseWebText2.0 code is publicly available on GitHub [(here)](https://github.com/CASIA-LM/ChineseWebText-2.0.git).
## ChineseWebText2.0
- ### Dataset Overview
We have released the latest and largest Chinese dataset, ChineseWebText 2.0, which consists of 3.8 TB of data. Each text in the dataset is accompanied by a quality score, single-label and multi-label domain tags, and a toxicity label and score, enabling LLM researchers to select data according to their own quality thresholds.
- ### Data Example
```json
{
"text": "近日,黑龙江省高校校报协会第十四届学术年会暨校报工作交流研讨会在东北农业大学举行。我校10件新闻作品喜获2项一等奖,2项二等奖,6项三等奖……",
"domain":
{
"single_label": "news",
"multi_label": ["news", "education"]
},
"toxicity":
{
"label": 0,
"score": 1.0347155694034882e-05
},
"quality_score": 0.96044921875
}
```
- "text": [string] Text content of data sample.
- "single_label": [string] The highest probability label generated by the domain classification model.
- "multi_label": [list] All labels generated by the domain classification model with probabilities higher than the threshold.
- "label": [int] Toxicity label generated by toxicity classification models.
- "score": [flaot] Toxicity score generated by toxicity classification model, samples with scores exceeding 0.99 were categorised as toxic.
- "quality_score": [float] Quality score generated by the quality evaluation model.
## MDFG-tool
### Introduction
We introduce a new toolchain, MDFG-tool (see Figure 1). We begin with the coarse-grained filtering module, which applies rule-based methods to clean the data, focusing on criteria such as text length and sensitive words to ensure data quality. After cleaning, we evaluate the text quality using a BERT-based model. This process generates a quality score, and by selecting an appropriate threshold, we can extract high-quality text data that meets our needs. Next, we use FastText for both single-label and multi-label classification of the cleaned data. Meanwhile, we conduct toxicity assessment. The FastText model is used to filter out toxic content and assign toxicity scores to each text. This scoring system allows researchers to set thresholds for identifying and selecting harmful texts for further training.
<div align="center">
<img src="./Pictures/structure.png" width="67%" />
<br>
<em>Figure 1: The pipeline of MDFG-tool.</em>
</div>
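The pipeline stages above can be sketched as a chain of annotation passes. Everything below is a stub: the scoring functions stand in for the BERT-based quality model and the FastText domain/toxicity classifiers, which this sketch does not reproduce:

```python
# Stub stages standing in for the real models in the MDFG-tool pipeline.
def rule_filter(text, min_len=10):
    """Coarse-grained filtering: drop overly short texts (one of the rule-based checks)."""
    return len(text) >= min_len

def quality_score(text):      # placeholder for the BERT-based quality scorer
    return 0.9

def domain_labels(text):      # placeholder for the FastText domain classifier
    return {"single_label": "news", "multi_label": ["news"]}

def toxicity(text):           # placeholder for the FastText toxicity model
    return {"label": 0, "score": 1e-5}

def annotate(texts):
    """Apply filtering, then emit records in the dataset's annotation format."""
    return [
        {"text": t, "domain": domain_labels(t),
         "toxicity": toxicity(t), "quality_score": quality_score(t)}
        for t in texts if rule_filter(t)
    ]

records = annotate(["short", "a sufficiently long example sentence"])
print(len(records))  # 1
```

The point of the sketch is the data flow: rule-based filtering first, then independent annotation passes whose outputs are merged into one record per text.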
## Data Analysis
#### Removal Rate for Different Stages
In order to provide a high-level overview of the preparation and preprocessing stages, the figure below shows the processing workflow and the removal rate of each step. It details the removal ratio of data from the previous step and the absolute percentage of the remaining data relative to the originally collected dataset, helping readers track the various processing stages from the raw data to the high-quality dataset.
After collecting raw data from various sources, we initially obtain an original Chinese dataset totaling 6.6 TB. However, due to a significant amount of irrelevant and noisy content in some sources, a manual sampling analysis is performed in the preparation stage. If irrelevant text accounts for more than 50\% of a source, the data from that source is discarded entirely. As a result, a substantial portion of the data is removed during the preparation stage, retaining only 67.68\% of the original dataset. In the preprocessing stage, four rule-based steps are implemented to filter the remaining data. First, the Data Length step removes overly short texts to ensure that each text contains sufficient informational content. Next, the Character Proportion step eliminates texts with a high percentage of noisy characters, such as English, Traditional Chinese characters, or other irrelevant symbols. Finally, the Sensitive Words step and the Deduplication step are employed to remove toxic content and duplicate texts from the dataset. After the preprocessing stage, we produce a high-quality Chinese text dataset totaling 3.8 TB. In the next stage, each text in this high-quality dataset will be enriched with fine-grained annotations, including a quality score, domain labels, a toxicity score and a toxicity label.
<div align="center">
<img src="./Pictures/data_statistics.png" width="100%" />
<br>
<em>Figure 2: The proportion of data removed from the originally collected data in each processing step. The gray bars represent the proportion of data removed in each step relative to the data remaining before that step, while the other colored bars represent the retained data and its proportion relative to the originally collected data.</em>
</div>
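As a quick sanity check, the sizes and retention rates quoted above are mutually consistent; all the numbers below are taken from the text:

```python
# Sizes and retention rate quoted in the text above.
raw_tb = 6.6          # originally collected data, in TB
after_prep = 0.6768   # fraction retained after the preparation stage
final_tb = 3.8        # size after preprocessing, in TB

prep_tb = raw_tb * after_prep          # data surviving preparation
overall_retention = final_tb / raw_tb  # fraction surviving both stages

print(round(prep_tb, 2))                  # 4.47 (TB after preparation)
print(round(overall_retention * 100, 1))  # 57.6 (% of raw data remaining)
```

So roughly 4.5 TB survives preparation, and about 58% of the raw collection survives the full pipeline.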
#### Data Quality Distribution
<div align="center">
<img src="./Pictures/quality-evaluation.png" width="100%" />
<br>
<em>Figure 3: The Data Analysis on Quality Evaluation.</em>
</div>
**Quality Distribution** To investigate the quality distribution, we calculate the data proportions across different quality score ranges in our ChineseWebText 2.0 dataset. Figure 3(a) shows the proportion of data across different quality score intervals. The data is primarily concentrated in the mid-range score intervals within \([0.2, 0.4)\), each contributing approximately 18\%. Additionally, a significant proportion lies within the high-quality interval \([0.9, 1.0)\), reflecting the presence of high-quality content in the dataset. In contrast, the lowest interval \([0.1, 0.2)\) contains only a minimal fraction, indicating a limited amount of poor-quality data. Note that no samples fall in the range \([0, 0.1)\), so this interval has been omitted. This quality distribution provides a valuable reference for LLM researchers, enabling them to select data based on desired quality thresholds.
**Human Acceptance Evaluation**
To validate the consistency between quality evaluation and human judgments, Figure 3(b) displays human acceptance rates across different score intervals, showing a clear positive trend: higher scores correlate with higher acceptance rates.
Specifically, the highest score interval \([0.5, 1.0)\) achieves an acceptance rate exceeding 90\%, while the lowest interval \([0.1, 0.2)\) still maintains an acceptance rate of 80\%. This trend highlights the overall high quality of the data.
In summary, the dataset is primarily concentrated in the mid-quality range, with higher scores strongly correlating to greater human acceptance. This alignment underscores the dataset's potential for high-quality applications, where consistency in human-like quality is essential.
#### Domain Distribution
To investigate the distribution of our dataset across different domains, in this section, we conduct an in-depth analysis of the data distribution across eleven distinct domains: *book*, *dialogue*, *education*, *encyclopedia*, *finance*, *law*, *math*, *medicine*, *news*, *technology*, and *general*. This analysis considers two perspectives: the overall domain distribution and the quality-related domain distribution, providing comprehensive insights into the dataset's composition across different domains.
**Overall Domain Distribution**
Figure 4 presents the sample counts and corresponding proportions across the various domains. The Encyclopedia, General, and News domains dominate the dataset, comprising 33.43\%, 32.63\%, and 28.01\% of the data, respectively. In contrast, the Math domain has the smallest share at 0.55\%, yet it still includes over 8 million samples. The bar chart in Figure 4 provides an intuitive visualization of this distribution. This comprehensive domain distribution enables LLM researchers to select suitable datasets, facilitating the enhancement of the model’s knowledge and capabilities in specific domains.
<div align="center">
<img src="./Pictures/domain-distribution.png" width="100%" />
<br>
<em>Figure 4: Data Distribution Across Different Domains.</em>
</div>
**Quality-Related Domain Distribution**
In order to explore the domain distribution across different quality intervals, we perform an analysis focusing on the quality-related domain distribution. Specifically, we calculate the proportions of the various domains within each quality interval. The table in Figure 5 provides a detailed breakdown of domain proportions across the different quality intervals. From the results, we observe that the distribution of domain data within each quality interval aligns closely with the overall distribution in the dataset. Based on the proportions in Figure 5, researchers can filter domain-specific data within targeted quality intervals, enabling the extraction of higher-quality domain-specific data subsets.
<div align="center">
<img src="./Pictures/domain-distribution-per-quality.png" width="100%" />
<br>
<em>Figure 5: Table of Domain Distribution Across Quality Levels</em>
</div>
#### Data Toxicity Analysis
<div align="center">
<img src="./Pictures/toxicity_distribution.png" width="100%" />
<br>
<em>Figure 6:The Distribution of Toxicity: A threshold of 0.99 was established, and samples with scores exceeding 0.99 were categorised as toxic.</em>
</div>
During the training procedure of LLMs, toxic data introduces harmful knowledge and information, which may lead the model to generate toxic outputs. In this section, we analyze the toxicity distribution within our dataset. Figure 6 depicts the toxicity distribution of the dataset, where a higher toxicity score indicates greater toxicity. It is evident that the majority of the data in our dataset has a toxicity score of 0.0, signifying non-toxic, high-quality data. These non-toxic texts comprise 97.41\% of the dataset.
Additionally, through manual analysis of the toxicity scores, we identify that data with scores above 0.99 are classified as toxic. By applying this empirical threshold, we filter our dataset and obtain a 3.16 GB toxic text subset comprising 1,632,620 samples. In Figure 7, we compare this subset with other publicly available toxicity datasets. In this table, OffensEval 2019, AbusEval, HatEval, RAL-E and ToxiGen are English toxicity datasets, while COLD, ToxiCN, SWSR and CDial-Bias are Chinese toxicity datasets. The OffensEval 2019, AbusEval, and HatEval datasets are derived from Twitter and focus on the analysis of offensive language, abusive language, and hate speech, respectively. The RAL-E dataset, sourced from a banned Reddit community, is a large-scale, unannotated English dataset. In contrast, ToxiGen is a toxicity dataset generated using GPT-3, targeting multiple groups. The COLD, SWSR, CDial-Bias, and ToxiCN datasets are collected from Chinese social media platforms including Zhihu, Weibo, and Tieba, with each dataset focusing on different groups. Compared to these datasets, ours features the largest collection of toxicity data, and each text carries a toxicity score, providing researchers with a valuable resource to better optimize and evaluate LLMs' safety.
<div align="center">
<img src="./Pictures/toxicity-datasets-comparison.png" width="100%" />
<br>
<em>Figure 7: Table of Comparison of Different Toxicity Datasets.</em>
</div>
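The threshold-based filtering described above can be sketched as follows. This is an illustrative sketch only: the sample records and the `toxicity` field name are made up for demonstration and are not the dataset's actual format.

```python
# Sketch of threshold-based toxicity filtering (illustrative records,
# hypothetical field names -- not the actual dataset schema).
TOXICITY_THRESHOLD = 0.99

samples = [
    {"text": "a clean sentence", "toxicity": 0.0},
    {"text": "a borderline sentence", "toxicity": 0.42},
    {"text": "a toxic sentence", "toxicity": 0.995},
]

# Samples strictly above the empirical threshold form the toxic subset.
toxic_subset = [s for s in samples if s["toxicity"] > TOXICITY_THRESHOLD]
clean_subset = [s for s in samples if s["toxicity"] <= TOXICITY_THRESHOLD]

print(len(toxic_subset))  # 1
print(len(clean_subset))  # 2
```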
## Citation
Please cite the paper if you use the data or code in this repo.
```shell
@misc{zhang2024chinesewebtext20largescalehighquality,
title={ChineseWebText 2.0: Large-Scale High-quality Chinese Web Text with Multi-dimensional and fine-grained information},
author={Wanyue Zhang and Ziyong Li and Wen Yang and Chunlin Leng and Yinan Bai and Qianlong Du and Chengqing Zong and Jiajun Zhang},
year={2024},
eprint={2411.19668},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.19668},
}
```
|
bilgeyucel/seven-wonders | bilgeyucel | "2023-03-09T14:25:43Z" | 10,195 | 5 | [
"language:en",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-03-08T18:44:17Z" | ---
language:
- en
size_categories:
- n<1K
--- |
hotpotqa/hotpot_qa | hotpotqa | "2024-01-18T11:05:40Z" | 10,193 | 106 | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"arxiv:1809.09600",
"region:us",
"multi-hop"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: HotpotQA
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: hotpotqa
tags:
- multi-hop
dataset_info:
- config_name: distractor
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: type
dtype: string
- name: level
dtype: string
- name: supporting_facts
sequence:
- name: title
dtype: string
- name: sent_id
dtype: int32
- name: context
sequence:
- name: title
dtype: string
- name: sentences
sequence: string
splits:
- name: train
num_bytes: 552949315
num_examples: 90447
- name: validation
num_bytes: 45716111
num_examples: 7405
download_size: 612746344
dataset_size: 598665426
- config_name: fullwiki
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: type
dtype: string
- name: level
dtype: string
- name: supporting_facts
sequence:
- name: title
dtype: string
- name: sent_id
dtype: int32
- name: context
sequence:
- name: title
dtype: string
- name: sentences
sequence: string
splits:
- name: train
num_bytes: 552949315
num_examples: 90447
- name: validation
num_bytes: 46848601
num_examples: 7405
- name: test
num_bytes: 46000102
num_examples: 7405
download_size: 660094672
dataset_size: 645798018
---
# Dataset Card for "hotpot_qa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://hotpotqa.github.io/](https://hotpotqa.github.io/)
- **Repository:** https://github.com/hotpotqa/hotpot
- **Paper:** [HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering](https://arxiv.org/abs/1809.09600)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.27 GB
- **Size of the generated dataset:** 1.24 GB
- **Total amount of disk used:** 2.52 GB
### Dataset Summary
HotpotQA is a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for reasoning, allowing QA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systems’ ability to extract relevant facts and perform necessary comparison.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### distractor
- **Size of downloaded dataset files:** 612.75 MB
- **Size of the generated dataset:** 598.66 MB
- **Total amount of disk used:** 1.21 GB
An example of 'validation' looks as follows.
```
{
"answer": "This is the answer",
"context": {
"sentences": [["Sent 1"], ["Sent 21", "Sent 22"]],
"title": ["Title1", "Title 2"]
},
"id": "000001",
"level": "medium",
"question": "What is the answer?",
"supporting_facts": {
"sent_id": [0, 1, 3],
"title": ["Title of para 1", "Title of para 2", "Title of para 3"]
},
"type": "comparison"
}
```
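The `supporting_facts` field indexes into `context`: each (`title`, `sent_id`) pair names a paragraph and a sentence position within it. A minimal sketch of resolving these pairs into sentences, using a small self-consistent toy record rather than the illustrative example above:

```python
# Resolve HotpotQA supporting_facts (title, sent_id) pairs against the
# context field to recover the supporting sentences (toy record).
example = {
    "context": {
        "title": ["Title1", "Title 2"],
        "sentences": [["Sent 1"], ["Sent 21", "Sent 22"]],
    },
    "supporting_facts": {
        "title": ["Title 2", "Title1"],
        "sent_id": [1, 0],
    },
}

# Map each paragraph title to its list of sentences.
by_title = dict(zip(example["context"]["title"], example["context"]["sentences"]))

supporting_sentences = [
    by_title[title][sent_id]
    for title, sent_id in zip(
        example["supporting_facts"]["title"],
        example["supporting_facts"]["sent_id"],
    )
]
print(supporting_sentences)  # ['Sent 22', 'Sent 1']
```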
#### fullwiki
- **Size of downloaded dataset files:** 660.10 MB
- **Size of the generated dataset:** 645.80 MB
- **Total amount of disk used:** 1.31 GB
An example of 'train' looks as follows.
```
{
"answer": "This is the answer",
"context": {
"sentences": [["Sent 1"], ["Sent 2"]],
"title": ["Title1", "Title 2"]
},
"id": "000001",
"level": "hard",
"question": "What is the answer?",
"supporting_facts": {
"sent_id": [0, 1, 3],
"title": ["Title of para 1", "Title of para 2", "Title of para 3"]
},
"type": "bridge"
}
```
### Data Fields
The data fields are the same among all splits.
#### distractor
- `id`: a `string` feature.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `type`: a `string` feature.
- `level`: a `string` feature.
- `supporting_facts`: a dictionary feature containing:
- `title`: a `string` feature.
- `sent_id`: a `int32` feature.
- `context`: a dictionary feature containing:
- `title`: a `string` feature.
- `sentences`: a `list` of `string` features.
#### fullwiki
- `id`: a `string` feature.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `type`: a `string` feature.
- `level`: a `string` feature.
- `supporting_facts`: a dictionary feature containing:
- `title`: a `string` feature.
- `sent_id`: a `int32` feature.
- `context`: a dictionary feature containing:
- `title`: a `string` feature.
- `sentences`: a `list` of `string` features.
### Data Splits
#### distractor
| |train|validation|
|----------|----:|---------:|
|distractor|90447| 7405|
#### fullwiki
| |train|validation|test|
|--------|----:|---------:|---:|
|fullwiki|90447| 7405|7405|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
HotpotQA is distributed under a [CC BY-SA 4.0 License](http://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
```
@inproceedings{yang2018hotpotqa,
title={{HotpotQA}: A Dataset for Diverse, Explainable Multi-hop Question Answering},
author={Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William W. and Salakhutdinov, Ruslan and Manning, Christopher D.},
booktitle={Conference on Empirical Methods in Natural Language Processing ({EMNLP})},
year={2018}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova), [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset. |
togethercomputer/RedPajama-Data-1T-Sample | togethercomputer | "2023-07-19T06:59:10Z" | 10,179 | 122 | [
"task_categories:text-generation",
"language:en",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | "2023-04-16T23:12:30Z" | ---
task_categories:
- text-generation
language:
- en
pretty_name: Red Pajama 1T Sample
---
# Dataset Card for Dataset Name
### Dataset Summary
RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset.
This HuggingFace repo contains a 1B-token sample of the RedPajama dataset.
The full dataset has the following token counts and is available for [download](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T):
| Dataset | Token Count |
|---------------|-------------|
| Commoncrawl | 878 Billion |
| C4 | 175 Billion |
| GitHub | 59 Billion |
| Books | 26 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| Total | 1.2 Trillion |
A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
```
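As a sketch of working with records in this shape, the snippet below tallies a few made-up records by their `meta.source` field; the record contents are placeholders, not actual dataset entries.

```python
from collections import Counter

# Placeholder records in the documented {"text": ..., "meta": {...}} shape.
records = [
    {"text": "def foo(): pass", "meta": {"source": "github", "language": "python"}},
    {"text": "An encyclopedia entry.", "meta": {"source": "wikipedia", "language": "en"}},
    {"text": "A web page.", "meta": {"source": "commoncrawl", "language": "en"}},
]

# Count how many records come from each source slice.
counts = Counter(r["meta"]["source"] for r in records)
print(counts["github"])  # 1
```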
## Dataset Creation
This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
### Source Data
#### Commoncrawl
We download five dumps from Commoncrawl, and run the dumps through the official `cc_net` pipeline.
We then deduplicate on the paragraph level, and filter out low quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
#### C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality
files and only keep projects that are distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in preprocessed format, with hyperlinks, comments, and other
formatting boilerplate removed.
#### Gutenberg and Books3
The PG19 subset of the Gutenberg Project and Books3 datasets are downloaded from Huggingface. After downloading, we use
simhash to remove near duplicates.
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester pays bucket. We only keep LaTeX source files and
remove preambles, comments, macros and bibliographies.
#### Stackexchange
The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). Here we only keep the posts from the 28 largest sites,
remove HTML tags, group the posts into question-answer pairs, and order answers by their score.
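The grouping step can be sketched roughly as follows. The post schema here (`type`, `parent`, `score` fields) is an assumption for illustration, not the actual Stack Exchange dump format.

```python
# Sketch: group posts into question-answer pairs and order answers by
# score (hypothetical schema -- field names are assumptions).
posts = [
    {"id": 1, "type": "question", "parent": None, "score": 10, "body": "Q1"},
    {"id": 2, "type": "answer", "parent": 1, "score": 3, "body": "A1a"},
    {"id": 3, "type": "answer", "parent": 1, "score": 7, "body": "A1b"},
]

# First pass: collect questions; second pass: attach answers to parents.
qa_pairs = {}
for p in posts:
    if p["type"] == "question":
        qa_pairs[p["id"]] = {"question": p["body"], "answers": []}
for p in posts:
    if p["type"] == "answer":
        qa_pairs[p["parent"]]["answers"].append(p)

# Order each question's answers by descending score.
for qa in qa_pairs.values():
    qa["answers"].sort(key=lambda a: a["score"], reverse=True)

print(qa_pairs[1]["answers"][0]["body"])  # A1b
```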
<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
--> |
Nerfgun3/bad_prompt | Nerfgun3 | "2022-11-19T23:43:47Z" | 10,139 | 925 | [
"language:en",
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"stable-diffusion",
"text-to-image",
"image-to-image"
] | null | "2022-11-17T20:47:06Z" | ---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/main/bad_prompt_showcase.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
inference: false
---
# Negative Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/bad_prompt/resolve/main/bad_prompt_showcase.jpg"/>
## Idea
The idea behind this embedding was to somehow train the negative prompt as an embedding, thus unifying the basis of the negative prompt into one word or embedding.
Side note: Embedding has proven to be very helpful for the generation of hands! :)
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
**Please put the embedding in the negative prompt to get the right results!**
For special negative tags such as "malformed sword", you still need to add them yourself. The negative embedding is trained on a basic skeleton for the negative prompt, which should provide a high-resolution image as a result.
### Version 1:
Issue: Changing the style too much.
To use it in the negative prompt: ```"bad_prompt"```
Personally, I would recommend using my embeddings with a strength of 0.8, even the negative embeddings, like ```"(bad_prompt:0.8)"```
### Version 2:
With this version I tried to reduce the number of vectors used, as well as the issue with the changing art style. The newer version is still a work in progress, but it's already much better than the first version. It's in the files section!
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
hpprc/emb | hpprc | "2024-09-13T01:51:47Z" | 10,113 | 10 | [
"language:ja",
"license:other",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2409.07737",
"region:us"
] | null | "2024-04-15T14:12:27Z" | ---
language:
- ja
license: other
dataset_info:
- config_name: auto-wiki-nli-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 533673945
num_examples: 198895
download_size: 362814978
dataset_size: 533673945
- config_name: auto-wiki-qa-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5215705706
num_examples: 8215817
download_size: 3385038265
dataset_size: 5215705706
- config_name: auto-wiki-qa-dataset
features:
- name: passage_id
dtype: int64
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 30767957804
num_examples: 2377503
download_size: 21875194075
dataset_size: 30767957804
- config_name: auto-wiki-qa-nemotron-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4202532852
num_examples: 6354725
download_size: 2709124196
dataset_size: 4202532852
- config_name: auto-wiki-qa-nemotron-dataset
features:
- name: passage_id
dtype: int64
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 2034181294
num_examples: 156089
download_size: 1449231482
dataset_size: 2034181294
- config_name: baobab-wiki-retrieval-collection
features:
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3771123469
num_examples: 5140862
download_size: 2463376300
dataset_size: 3771123469
- config_name: baobab-wiki-retrieval-dataset
features:
- name: anc
dtype: string
- name: pos_1st
dtype: string
- name: neg_1st.original
dtype: 'null'
- name: neg_1st.me5-large
dtype: string
- name: sim_1st.me5-large
dtype: float64
- name: neg_1st.bm25
dtype: string
- name: sim_1st.bm25
dtype: float64
- name: pos_ids
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 7837529
num_examples: 838
download_size: 5661379
dataset_size: 7837529
- config_name: jagovfaqs-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 13918890
num_examples: 22794
download_size: 5874592
dataset_size: 13918890
- config_name: jagovfaqs-dataset
features:
- name: anc
dtype: string
- name: pos_1st
dtype: string
- name: neg_1st.original
dtype: 'null'
- name: neg_1st.me5-large
dtype: string
- name: sim_1st.me5-large
dtype: float64
- name: neg_1st.bm25
dtype: string
- name: sim_1st.bm25
dtype: float64
- name: pos_ids
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 205284001
num_examples: 22794
download_size: 93115345
dataset_size: 205284001
- config_name: janli-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 14075833
num_examples: 13496
download_size: 3088881
dataset_size: 14075833
- config_name: jaquad-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4213318372
num_examples: 6364369
download_size: 2716125410
dataset_size: 4213318372
- config_name: jaquad-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 410758435
num_examples: 31748
download_size: 267846825
dataset_size: 410758435
- config_name: jcommonsenseqa-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: 'null'
- name: neg_ids.original
sequence: 'null'
splits:
- name: train
num_bytes: 673948
num_examples: 8939
download_size: 381605
dataset_size: 673948
- config_name: jqara-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4267669475
num_examples: 6433384
download_size: 2751666583
dataset_size: 4267669475
- config_name: jqara-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: int64
- name: neg_ids.original
sequence: int64
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 29789340
num_examples: 2235
download_size: 22310036
dataset_size: 29789340
- config_name: jsnli-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 170593490
num_examples: 144190
download_size: 88629828
dataset_size: 170593490
- config_name: jsquad-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4210493031
num_examples: 6369790
download_size: 2714126867
dataset_size: 4210493031
- config_name: jsquad-dataset
features:
- name: passage_id
dtype: int64
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: int64
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 812736672
num_examples: 62859
download_size: 514718047
dataset_size: 812736672
- config_name: miracl-collection
features:
- name: passage_id
dtype: int64
- name: docid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3026160577.0
num_examples: 6953614
download_size: 1682864613
dataset_size: 3026160577.0
- config_name: miracl-dataset
features:
- name: anc
dtype: string
- name: pos_1st
dtype: string
- name: neg_1st.original
dtype: string
- name: neg_1st.me5-large
dtype: string
- name: sim_1st.me5-large
dtype: float64
- name: neg_1st.bm25
dtype: string
- name: sim_1st.bm25
dtype: float64
- name: pos_ids
sequence: int64
- name: neg_ids.original
sequence: int64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 32393484
num_examples: 3477
download_size: 23431039
dataset_size: 32393484
- config_name: mkqa-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: 'null'
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 129900532
num_examples: 10000
download_size: 88793974
dataset_size: 129900532
- config_name: mkqa-triplet
features:
- name: idx
dtype: string
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
splits:
- name: train
num_bytes: 7640649
num_examples: 10000
download_size: 4121496
dataset_size: 7640649
- config_name: mmarco-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3814117634
num_examples: 8829813
download_size: 2217976936
dataset_size: 3814117634
- config_name: mmarco-dataset
features:
- name: anc
dtype: string
- name: pos_1st
dtype: string
- name: neg_1st.original
dtype: string
- name: neg_1st.me5-large
dtype: string
- name: sim_1st.me5-large
dtype: float64
- name: neg_1st.bm25
dtype: string
- name: sim_1st.bm25
dtype: float64
- name: pos_ids
sequence: int64
- name: neg_ids.original
sequence: int64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 3548801103
num_examples: 391060
download_size: 2624355417
dataset_size: 3548801103
- config_name: mr-tydi-collection
features:
- name: passage_id
dtype: int64
- name: docid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3061941618
num_examples: 7000027
download_size: 1702050239
dataset_size: 3061941618
- config_name: mr-tydi-dataset
features:
- name: anc
dtype: string
- name: pos_1st
dtype: string
- name: neg_1st.original
dtype: string
- name: neg_1st.me5-large
dtype: string
- name: sim_1st.me5-large
dtype: float64
- name: neg_1st.bm25
dtype: string
- name: sim_1st.bm25
dtype: float64
- name: pos_ids
sequence: int64
- name: neg_ids.original
sequence: int64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 35660240
num_examples: 3697
download_size: 25702000
dataset_size: 35660240
- config_name: niilc-qa-dataset
features:
- name: id
dtype: string
- name: anc
dtype: string
- name: answers
sequence: string
splits:
- name: dev
num_bytes: 94339
num_examples: 795
- name: test
num_bytes: 24706
num_examples: 198
download_size: 69487
dataset_size: 119045
- config_name: nu-mnli-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 145358014
num_examples: 77785
download_size: 90397670
dataset_size: 145358014
- config_name: nu-snli-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 133786645
num_examples: 109154
download_size: 68979487
dataset_size: 133786645
- config_name: paws-x-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: neg.orig
sequence: string
splits:
- name: train
num_bytes: 124053741
num_examples: 49401
download_size: 75965630
dataset_size: 124053741
- config_name: qa-collection
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4202542828.0
num_examples: 6354742
download_size: 2284295643
dataset_size: 4202542828.0
- config_name: quiz-no-mori-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: 'null'
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 340206118
num_examples: 25991
download_size: 241017142
dataset_size: 340206118
- config_name: quiz-works-dataset
features:
- name: anc
dtype: string
- name: answers
sequence: string
- name: pos_ids.original
sequence: 'null'
- name: neg_ids.original
sequence: 'null'
- name: pos_ids.me5-large
sequence: int64
- name: pos_sims.me5-large
sequence: float64
- name: pos_ids.bm25
sequence: int64
- name: pos_sims.bm25
sequence: float64
- name: neg_ids.me5-large
sequence: int64
- name: neg_sims.me5-large
sequence: float64
- name: neg_ids.bm25
sequence: int64
- name: neg_sims.bm25
sequence: float64
splits:
- name: train
num_bytes: 248971793
num_examples: 19073
download_size: 176241965
dataset_size: 248971793
- config_name: snow-triplet
features:
- name: anc
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
splits:
- name: train
num_bytes: 63640356
num_examples: 62758
download_size: 35752257
dataset_size: 63640356
configs:
- config_name: auto-wiki-nli-triplet
data_files:
- split: train
path: auto-wiki-nli-triplet/train-*
- config_name: auto-wiki-qa-collection
data_files:
- split: train
path: auto-wiki-qa-collection/train-*
- config_name: auto-wiki-qa-dataset
data_files:
- split: train
path: auto-wiki-qa-dataset/train-*
- config_name: auto-wiki-qa-nemotron-collection
data_files:
- split: train
path: auto-wiki-qa-nemotron-collection/train-*
- config_name: auto-wiki-qa-nemotron-dataset
data_files:
- split: train
path: auto-wiki-qa-nemotron-dataset/train-*
- config_name: baobab-wiki-retrieval-collection
data_files:
- split: train
path: baobab-wiki-retrieval-collection/train-*
- config_name: baobab-wiki-retrieval-dataset
data_files:
- split: train
path: baobab-wiki-retrieval-dataset/train-*
- config_name: jagovfaqs-collection
data_files:
- split: train
path: jagovfaqs-collection/train-*
- config_name: jagovfaqs-dataset
data_files:
- split: train
path: jagovfaqs-dataset/train-*
- config_name: janli-triplet
data_files:
- split: train
path: janli-triplet/train-*
- config_name: jaquad-collection
data_files:
- split: train
path: jaquad-collection/train-*
- config_name: jaquad-dataset
data_files:
- split: train
path: jaquad-dataset/train-*
- config_name: jcommonsenseqa-dataset
data_files:
- split: train
path: jcommonsenseqa-dataset/train-*
- config_name: jqara-collection
data_files:
- split: train
path: jqara-collection/train-*
- config_name: jqara-dataset
data_files:
- split: train
path: jqara-dataset/train-*
- config_name: jsnli-triplet
data_files:
- split: train
path: jsnli-triplet/train-*
- config_name: jsquad-collection
data_files:
- split: train
path: jsquad-collection/train-*
- config_name: jsquad-dataset
data_files:
- split: train
path: jsquad-dataset/train-*
- config_name: miracl-collection
data_files:
- split: train
path: miracl-collection/train-*
- config_name: miracl-dataset
data_files:
- split: train
path: miracl-dataset/train-*
- config_name: mkqa-dataset
data_files:
- split: train
path: mkqa-dataset/train-*
- config_name: mkqa-triplet
data_files:
- split: train
path: mkqa-triplet/train-*
- config_name: mmarco-collection
data_files:
- split: train
path: mmarco-collection/train-*
- config_name: mmarco-dataset
data_files:
- split: train
path: mmarco-dataset/train-*
- config_name: mr-tydi-collection
data_files:
- split: train
path: mr-tydi-collection/train-*
- config_name: mr-tydi-dataset
data_files:
- split: train
path: mr-tydi-dataset/train-*
- config_name: niilc-qa-dataset
data_files:
- split: dev
path: niilc-qa-dataset/dev-*
- split: test
path: niilc-qa-dataset/test-*
- config_name: nu-mnli-triplet
data_files:
- split: train
path: nu-mnli-triplet/train-*
- config_name: nu-snli-triplet
data_files:
- split: train
path: nu-snli-triplet/train-*
- config_name: paws-x-triplet
data_files:
- split: train
path: paws-x-triplet/train-*
- config_name: qa-collection
data_files:
- split: train
path: qa-collection/train-*
- config_name: quiz-no-mori-dataset
data_files:
- split: train
path: quiz-no-mori-dataset/train-*
- config_name: quiz-works-dataset
data_files:
- split: train
path: quiz-works-dataset/train-*
- config_name: snow-triplet
data_files:
- split: train
path: snow-triplet/train-*
---
still WIP
## Dataset Description
- **Paper:** https://arxiv.org/abs/2409.07737
- **Point of Contact:** [Hayato Tsukagoshi](mailto:[email protected])
## Information
|Name|Type|License (basis)|
|-|-|-|
|MMARCO|Retrieval|[Apache 2.0 (?)](https://huggingface.co/datasets/unicamp-dl/mmarco)|
|Mr. TyDi|Retrieval|[Apache 2.0](https://huggingface.co/datasets/castorini/mr-tydi)|
|MIRACL|Retrieval|[Apache 2.0](https://huggingface.co/datasets/miracl/miracl)|
|JaGovFaqs|QA|[CC-BY-4.0](https://huggingface.co/datasets/matsuxr/JaGovFaqs-22k)|
|Auto Wiki QA|QA & Retrieval|[CC-BY-SA-4.0](https://huggingface.co/datasets/cl-nagoya/auto-wiki-qa)|
|Auto Wiki QA Nemotron|QA & Retrieval|[CC-BY-SA-4.0](https://huggingface.co/datasets/hpprc/auto-wiki-qa-nemotron)|
|JCommonsenseQA|QA|[CC-BY-SA-4.0](https://github.com/yahoojapan/JGLUE)|
|JSQuAD|QA & Retrieval|[CC-BY-SA-4.0](https://github.com/yahoojapan/JGLUE)|
|Japanese Wikipedia Human Retrieval|QA & Retrieval|[Apache 2.0](https://huggingface.co/datasets/baobab-trees/wikipedia-human-retrieval-ja)|
|JQaRA (dev, unused)|QA|[CC-BY-SA-4.0](https://huggingface.co/datasets/hotchpotch/JQaRA#:~:text=%E3%81%B0%E5%B9%B8%E3%81%84%E3%81%A7%E3%81%99%E3%80%82-,%E3%83%A9%E3%82%A4%E3%82%BB%E3%83%B3%E3%82%B9,%E3%81%A7%E3%81%82%E3%82%8B%20CC%20BY%2DSA%204.0%20%E3%81%BE%E3%81%9F%E3%81%AF%20GFDL%E3%81%A8%E3%81%97%E3%81%BE%E3%81%99%E3%80%82,-%E8%AC%9D%E8%BE%9E)|
|JaQuAD|QA & Retrieval|[CC-BY-SA-3.0](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD)|
|JSNLI|NLI|[CC-BY-SA-4.0](https://huggingface.co/datasets/shunk031/jsnli)|
|Auto Wiki NLI|NLI|[CC-BY-SA-4.0](https://huggingface.co/datasets/hpprc/auto-wiki-nli-reward)|
|NU-SNLI|NLI|[CC-BY-SA-4.0](https://huggingface.co/datasets/cl-nagoya/nu-snli)|
|NU-MNLI|NLI|[CC-BY-SA-3.0, MIT, Others](https://huggingface.co/datasets/cl-nagoya/nu-mnli)|
|PAWS-X|Paraphrase|[Free (secondary use permitted)](https://github.com/google-research-datasets/paws?tab=License-1-ov-file#readme)|
|SNOW|Paraphrase|[CC-BY-3.0](https://huggingface.co/datasets/SNOW-NLP/snow_simplified_japanese_corpus)|
|MKQA|QA|[CC-BY-3.0](https://huggingface.co/datasets/apple/mkqa)|
|Quiz Works|QA|[Free (secondary use permitted)](https://quiz-works.com/about)|
|Quiz No Mori|QA|[Free (secondary use permitted)](https://quiz-schedule.info/quiz_no_mori/quizforestsecond.html)|
|NIILC QA|QA|[CC-BY-SA](https://mynlp.is.s.u-tokyo.ac.jp/niilc-qa/)| |
mcaleste/sat_multiple_choice_math_may_23 | mcaleste | "2023-10-14T02:23:29Z" | 10,107 | 2 | [
"language:en",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-09-18T21:30:36Z" | ---
language:
- en
size_categories:
- n<1K
---
This is the set of math SAT questions from the May 2023 SAT, taken from here: https://www.mcelroytutoring.com/lower.php?url=44-official-sat-pdfs-and-82-official-act-pdf-practice-tests-free.
Questions that included images were excluded, but all other math questions, including those with tables, were included. |
mlfoundations/dclm-pool-1b-5x | mlfoundations | "2024-06-22T05:50:04Z" | 10,081 | 1 | [
"license:cc-by-4.0",
"region:us"
] | null | "2024-06-12T04:26:45Z" | ---
license: cc-by-4.0
--- |
TIGER-Lab/MMLU-STEM | TIGER-Lab | "2024-06-20T03:37:16Z" | 10,073 | 8 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-15T16:45:00Z" | ---
license: mit
dataset_info:
- config_name: default
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: subject
dtype: string
- name: answer
dtype: int64
splits:
- name: test
num_bytes: 976986
num_examples: 3153
download_size: 487500
dataset_size: 976986
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
This dataset contains the subset of STEM subjects in MMLU, as defined by the original paper.
The included subjects are:
- 'abstract_algebra',
- 'anatomy',
- 'astronomy',
- 'college_biology',
- 'college_chemistry',
- 'college_computer_science',
- 'college_mathematics',
- 'college_physics',
- 'computer_security',
- 'conceptual_physics',
- 'electrical_engineering',
- 'elementary_mathematics',
- 'high_school_biology',
- 'high_school_chemistry',
- 'high_school_computer_science',
- 'high_school_mathematics',
- 'high_school_physics',
- 'high_school_statistics',
- 'machine_learning'
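As a sketch of the schema declared above, the integer `answer` field indexes into the `choices` list. The row values below are hypothetical, for illustration only, not taken from the dataset:

```python
# Hypothetical example row following the card's schema (question/choices/subject/answer)
row = {
    "question": "What is 2 + 2?",
    "choices": ["2", "3", "4", "5"],
    "subject": "elementary_mathematics",
    "answer": 2,  # int64 index into `choices`
}

# Recover the correct answer text from the index
correct = row["choices"][row["answer"]]
print(correct)  # prints "4"
```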
Please cite the original MMLU paper when using this dataset. |
HiTZ/EusExams | HiTZ | "2024-04-01T08:50:18Z" | 10,052 | 2 | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"language:eu",
"language:es",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2403.20266",
"region:us",
"legal"
] | [
"question-answering",
"multiple-choice"
] | "2023-10-11T09:28:48Z" | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
- multiple-choice
language:
- eu
- es
tags:
- legal
pretty_name: EusExams
size_categories:
- 10K<n<100K
configs:
- config_name: eu_opeosakiadmineu
data_files:
- split: test
path: "data/eu/opeosaki/opeosakiadmineu.jsonl"
- config_name: eu_opeosakiauxenfeu
data_files:
- split: test
path: "data/eu/opeosaki/opeosakiauxenfeu.jsonl"
- config_name: eu_opeosakiauxeu
data_files:
- split: test
path: "data/eu/opeosaki/opeosakiauxeu.jsonl"
- config_name: eu_opeosakiceladoreu
data_files:
- split: test
path: "data/eu/opeosaki/opeosakiceladoreu.jsonl"
- config_name: eu_opeosakienfeu
data_files:
- split: test
path: "data/eu/opeosaki/opeosakienfeu.jsonl"
- config_name: eu_opeosakioperarioeu
data_files:
- split: test
path: "data/eu/opeosaki/opeosakioperarioeu.jsonl"
- config_name: eu_opeosakitecnicoeu
data_files:
- split: test
path: "data/eu/opeosaki/opeosakitecnicoeu.jsonl"
- config_name: eu_opeosakivarioseu
data_files:
- split: test
path: "data/eu/opeosaki/opeosakivarioseu.jsonl"
- config_name: eu_opegasteizkoudala
data_files:
- split: test
path: "data/eu/opegasteiz/opegasteizkoudala.jsonl"
- config_name: eu_opeehuadmineu
data_files:
- split: test
path: "data/eu/opeehu/opeehuadmineu.jsonl"
- config_name: eu_opeehuauxeu
data_files:
- split: test
path: "data/eu/opeehu/opeehuauxeu.jsonl"
- config_name: eu_opeehubiblioeu
data_files:
- split: test
path: "data/eu/opeehu/opeehubiblioeu.jsonl"
- config_name: eu_opeehuderechoeu
data_files:
- split: test
path: "data/eu/opeehu/opeehuderechoeu.jsonl"
- config_name: eu_opeehueconomicaseu
data_files:
- split: test
path: "data/eu/opeehu/opeehueconomicaseu.jsonl"
- config_name: eu_opeehuempresarialeseu
data_files:
- split: test
path: "data/eu/opeehu/opeehuempresarialeseu.jsonl"
- config_name: eu_opeehusubalternoeu
data_files:
- split: test
path: "data/eu/opeehu/opeehusubalternoeu.jsonl"
- config_name: eu_opeehutecnicoeu
data_files:
- split: test
path: "data/eu/opeehu/opeehutecnicoeu.jsonl"
- config_name: eu_opeehuteknikarib
data_files:
- split: test
path: "data/eu/opeehu/opeehuteknikarib.jsonl"
- config_name: eu_ejadministrari
data_files:
- split: test
path: "data/eu/ope/ejadministrari.jsonl"
- config_name: eu_ejlaguntza
data_files:
- split: test
path: "data/eu/ope/ejlaguntza.jsonl"
- config_name: eu_ejlaguntzaile
data_files:
- split: test
path: "data/eu/ope/ejlaguntzaile.jsonl"
- config_name: eu_ejteknikari
data_files:
- split: test
path: "data/eu/ope/ejteknikari.jsonl"
- config_name: eu_osakidetza1e
data_files:
- split: test
path: "data/eu/osakidetza/osakidetza1e.jsonl"
- config_name: eu_osakidetza2e
data_files:
- split: test
path: "data/eu/osakidetza/osakidetza2e.jsonl"
- config_name: eu_osakidetza3e
data_files:
- split: test
path: "data/eu/osakidetza/osakidetza3e.jsonl"
- config_name: eu_osakidetza5e
data_files:
- split: test
path: "data/eu/osakidetza/osakidetza5e.jsonl"
- config_name: eu_osakidetza6e
data_files:
- split: test
path: "data/eu/osakidetza/osakidetza6e.jsonl"
- config_name: eu_osakidetza7e
data_files:
- split: test
path: "data/eu/osakidetza/osakidetza7e.jsonl"
- config_name: eu_opebilbaoeu
data_files:
- split: test
path: "data/eu/opebilbao/opebilbaoeu.jsonl"
- config_name: es_opeosakiadmin
data_files:
- split: test
path: "data/es/opeosaki/opeosakiadmin.jsonl"
- config_name: es_opeosakiaux
data_files:
- split: test
path: "data/es/opeosaki/opeosakiaux.jsonl"
- config_name: es_opeosakiauxenf
data_files:
- split: test
path: "data/es/opeosaki/opeosakiauxenf.jsonl"
- config_name: es_opeosakicelador
data_files:
- split: test
path: "data/es/opeosaki/opeosakicelador.jsonl"
- config_name: es_opeosakienf
data_files:
- split: test
path: "data/es/opeosaki/opeosakienf.jsonl"
- config_name: es_opeosakijuridico
data_files:
- split: test
path: "data/es/opeosaki/opeosakijuridico.jsonl"
- config_name: es_opeosakioperario
data_files:
- split: test
path: "data/es/opeosaki/opeosakioperario.jsonl"
- config_name: es_opeosakitecnico
data_files:
- split: test
path: "data/es/opeosaki/opeosakitecnico.jsonl"
- config_name: es_opeosakivarios
data_files:
- split: test
path: "data/es/opeosaki/opeosakivarios.jsonl"
- config_name: es_opeayuntamientovitoria
data_files:
- split: test
path: "data/es/opegasteiz/opeayuntamientovitoria.jsonl"
- config_name: es_opeehuadmin
data_files:
- split: test
path: "data/es/opeehu/opeehuadmin.jsonl"
- config_name: es_opeehuaux
data_files:
- split: test
path: "data/es/opeehu/opeehuaux.jsonl"
- config_name: es_opeehubiblio
data_files:
- split: test
path: "data/es/opeehu/opeehubiblio.jsonl"
- config_name: es_opeehuderecho
data_files:
- split: test
path: "data/es/opeehu/opeehuderecho.jsonl"
- config_name: es_opeehueconomicas
data_files:
- split: test
path: "data/es/opeehu/opeehueconomicas.jsonl"
- config_name: es_opeehuempresariales
data_files:
- split: test
path: "data/es/opeehu/opeehuempresariales.jsonl"
- config_name: es_opeehusubalterno
data_files:
- split: test
path: "data/es/opeehu/opeehusubalterno.jsonl"
- config_name: es_opeehutecnico
data_files:
- split: test
path: "data/es/opeehu/opeehutecnico.jsonl"
- config_name: es_opeehutecnicob
data_files:
- split: test
path: "data/es/opeehu/opeehutecnicob.jsonl"
- config_name: es_ejadministrativo
data_files:
- split: test
path: "data/es/ope/ejadministrativo.jsonl"
- config_name: es_ejauxiliar
data_files:
- split: test
path: "data/es/ope/ejauxiliar.jsonl"
- config_name: es_ejsubalterno
data_files:
- split: test
path: "data/es/ope/ejsubalterno.jsonl"
- config_name: es_ejtecnico
data_files:
- split: test
path: "data/es/ope/ejtecnico.jsonl"
- config_name: es_osakidetza1c
data_files:
- split: test
path: "data/es/osakidetza/osakidetza1c.jsonl"
- config_name: es_osakidetza2c
data_files:
- split: test
path: "data/es/osakidetza/osakidetza2c.jsonl"
- config_name: es_osakidetza3c
data_files:
- split: test
path: "data/es/osakidetza/osakidetza3c.jsonl"
- config_name: es_osakidetza4c
data_files:
- split: test
path: "data/es/osakidetza/osakidetza4c.jsonl"
- config_name: es_osakidetza5c
data_files:
- split: test
path: "data/es/osakidetza/osakidetza5c.jsonl"
- config_name: es_osakidetza6c
data_files:
- split: test
path: "data/es/osakidetza/osakidetza6c.jsonl"
- config_name: es_osakidetza7c
data_files:
- split: test
path: "data/es/osakidetza/osakidetza7c.jsonl"
- config_name: es_osakidetza8c
data_files:
- split: test
path: "data/es/osakidetza/osakidetza8c.jsonl"
- config_name: es_osakidetza9c
data_files:
- split: test
path: "data/es/osakidetza/osakidetza9c.jsonl"
- config_name: es_opebilbao
data_files:
- split: test
path: "data/es/opebilbao/opebilbao.jsonl"
---
# Dataset Card for EusExams
EusExams is a collection of tests designed to prepare individuals for Public Service examinations conducted by several Basque institutions, including the public health system Osakidetza, the Basque Government, the City Councils of Bilbao and Gasteiz, and the University of the Basque Country (UPV/EHU). Within each of these groups, there are different exams for public positions, such as administrative and assistant roles. Each multiple-choice question contains 2 to 4 choices (3.90 on average) and one correct answer. The dataset is mostly parallel with 16k questions in Basque and 18k in Spanish.
- **Curated by:** HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
- **Language(s) (NLP):** Basque (eu), Spanish (es)
- 📒 Blog Post: [Latxa: An Open Language Model and Evaluation Suite for Basque](https://www.hitz.eus/en/node/340)
- 📖 Paper: [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/abs/2403.20266)
- 💻 Code: [hitz-zentroa/latxa](https://github.com/hitz-zentroa/latxa)
- 📧 Contact: [[email protected]](mailto:[email protected])
## Example
Basque Example:
```txt
Galdera: UPV/EHUREN ONDAREA HAU DA:
A. UPV/EHUk jabetzan dituen ondasunak.
B. UPV/EHUk jabetzan dituen ondasun eta eskubideak.
C. UPV/EHUk jabetzan edo titularitatean dituen ondasun eta eskubideak, bai eta etorkizunean eskuratzen edo esleitzen zaizkion gainerako guztiak ere.
D. UPV/EHUk jabetzan dituen ondasunak, bai eta etorkizunean eskuratzen dituen gainerako guztiak ere.
Erantzuna: C
```
English Translation:
```txt
Question: UPV/EHU’S LEGACY IS:
A. The property owned by UPV/EHU.
B. The rights and property owned by the UPV/EHU.
C. The rights and property of the UPV/EHU in ownership, as well as any other property acquired or assigned to it in the future.
D. The property of the UPV/EHU in ownership, as well as any other property acquired or assigned to it in the future.
Answer: C
```
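In the examples above, answers are given as letters (A-D) over 2 to 4 choices. A minimal helper to map such a letter to a 0-based choice index (an illustrative snippet, not part of the dataset's tooling) could look like:

```python
def letter_to_index(letter: str) -> int:
    """Map an answer letter ('A'..'D') to a 0-based choice index."""
    return ord(letter.strip().upper()) - ord("A")

# The example above has answer "C", i.e. the third choice
print(letter_to_index("C"))  # prints 2
```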
## Citation
```bibtex
@misc{etxaniz2024latxa,
title={{L}atxa: An Open Language Model and Evaluation Suite for {B}asque},
author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
year={2024},
eprint={2403.20266},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
fixie-ai/librispeech_asr | fixie-ai | "2024-08-05T18:38:33Z" | 10,037 | 1 | [
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-19T02:34:30Z" | ---
language:
- en
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: continuation
dtype: string
splits:
- name: test
num_bytes: 623948478.48
num_examples: 2620
- name: validation
num_bytes: 622190064.956
num_examples: 2703
- name: train.360
num_bytes: 41953890926.124
num_examples: 104014
- name: train.100
num_bytes: 11606313661.774
num_examples: 28539
download_size: 53886816833
dataset_size: 54806343131.334
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: continuation
dtype: string
splits:
- name: train.500
num_bytes: 57330687390.808
num_examples: 148688
- name: validation
num_bytes: 591511495.496
num_examples: 2864
- name: test
num_bytes: 616939198.113
num_examples: 2939
download_size: 57019309170
dataset_size: 58539138084.417
configs:
- config_name: clean
data_files:
- split: test
path: clean/test-*
- split: validation
path: clean/validation-*
- split: train.360
path: clean/train.360-*
- split: train.100
path: clean/train.100-*
- config_name: other
data_files:
- split: train.500
path: other/train.500-*
- split: validation
path: other/validation-*
- split: test
path: other/test-*
---
|
google-research-datasets/newsgroup | google-research-datasets | "2024-01-18T11:10:22Z" | 9,975 | 9 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"region:us"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: 20 Newsgroups
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: 20-newsgroups
dataset_info:
- config_name: 18828_alt.atheism
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1669511
num_examples: 799
download_size: 14666916
dataset_size: 1669511
- config_name: 18828_comp.graphics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1661199
num_examples: 973
download_size: 14666916
dataset_size: 1661199
- config_name: 18828_comp.os.ms-windows.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2378739
num_examples: 985
download_size: 14666916
dataset_size: 2378739
- config_name: 18828_comp.sys.ibm.pc.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1185187
num_examples: 982
download_size: 14666916
dataset_size: 1185187
- config_name: 18828_comp.sys.mac.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1056264
num_examples: 961
download_size: 14666916
dataset_size: 1056264
- config_name: 18828_comp.windows.x
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1876297
num_examples: 980
download_size: 14666916
dataset_size: 1876297
- config_name: 18828_misc.forsale
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 925124
num_examples: 972
download_size: 14666916
dataset_size: 925124
- config_name: 18828_rec.autos
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1295307
num_examples: 990
download_size: 14666916
dataset_size: 1295307
- config_name: 18828_rec.motorcycles
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1206491
num_examples: 994
download_size: 14666916
dataset_size: 1206491
- config_name: 18828_rec.sport.baseball
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1369551
num_examples: 994
download_size: 14666916
dataset_size: 1369551
- config_name: 18828_rec.sport.hockey
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1758094
num_examples: 999
download_size: 14666916
dataset_size: 1758094
- config_name: 18828_sci.crypt
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2050727
num_examples: 991
download_size: 14666916
dataset_size: 2050727
- config_name: 18828_sci.electronics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1237175
num_examples: 981
download_size: 14666916
dataset_size: 1237175
- config_name: 18828_sci.med
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1886363
num_examples: 990
download_size: 14666916
dataset_size: 1886363
- config_name: 18828_sci.space
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1812803
num_examples: 987
download_size: 14666916
dataset_size: 1812803
- config_name: 18828_soc.religion.christian
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2307486
num_examples: 997
download_size: 14666916
dataset_size: 2307486
- config_name: 18828_talk.politics.guns
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1922992
num_examples: 910
download_size: 14666916
dataset_size: 1922992
- config_name: 18828_talk.politics.mideast
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2910324
num_examples: 940
download_size: 14666916
dataset_size: 2910324
- config_name: 18828_talk.politics.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2102809
num_examples: 775
download_size: 14666916
dataset_size: 2102809
- config_name: 18828_talk.religion.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1374261
num_examples: 628
download_size: 14666916
dataset_size: 1374261
- config_name: 19997_alt.atheism
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2562277
num_examples: 1000
download_size: 17332201
dataset_size: 2562277
- config_name: 19997_comp.graphics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2181673
num_examples: 1000
download_size: 17332201
dataset_size: 2181673
- config_name: 19997_comp.os.ms-windows.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2898760
num_examples: 1000
download_size: 17332201
dataset_size: 2898760
- config_name: 19997_comp.sys.ibm.pc.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1671166
num_examples: 1000
download_size: 17332201
dataset_size: 1671166
- config_name: 19997_comp.sys.mac.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1580881
num_examples: 1000
download_size: 17332201
dataset_size: 1580881
- config_name: 19997_comp.windows.x
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2418273
num_examples: 1000
download_size: 17332201
dataset_size: 2418273
- config_name: 19997_misc.forsale
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1412012
num_examples: 1000
download_size: 17332201
dataset_size: 1412012
- config_name: 19997_rec.autos
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1780502
num_examples: 1000
download_size: 17332201
dataset_size: 1780502
- config_name: 19997_rec.motorcycles
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1677964
num_examples: 1000
download_size: 17332201
dataset_size: 1677964
- config_name: 19997_rec.sport.baseball
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1835432
num_examples: 1000
download_size: 17332201
dataset_size: 1835432
- config_name: 19997_rec.sport.hockey
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2207282
num_examples: 1000
download_size: 17332201
dataset_size: 2207282
- config_name: 19997_sci.crypt
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2607835
num_examples: 1000
download_size: 17332201
dataset_size: 2607835
- config_name: 19997_sci.electronics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1732199
num_examples: 1000
download_size: 17332201
dataset_size: 1732199
- config_name: 19997_sci.med
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2388789
num_examples: 1000
download_size: 17332201
dataset_size: 2388789
- config_name: 19997_sci.space
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2351411
num_examples: 1000
download_size: 17332201
dataset_size: 2351411
- config_name: 19997_soc.religion.christian
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2743018
num_examples: 997
download_size: 17332201
dataset_size: 2743018
- config_name: 19997_talk.politics.guns
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2639343
num_examples: 1000
download_size: 17332201
dataset_size: 2639343
- config_name: 19997_talk.politics.mideast
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3695931
num_examples: 1000
download_size: 17332201
dataset_size: 3695931
- config_name: 19997_talk.politics.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3169183
num_examples: 1000
download_size: 17332201
dataset_size: 3169183
- config_name: 19997_talk.religion.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2658700
num_examples: 1000
download_size: 17332201
dataset_size: 2658700
- config_name: bydate_alt.atheism
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1042224
num_examples: 480
- name: test
num_bytes: 702920
num_examples: 319
download_size: 14464277
dataset_size: 1745144
- config_name: bydate_comp.graphics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 911665
num_examples: 584
- name: test
num_bytes: 849632
num_examples: 389
download_size: 14464277
dataset_size: 1761297
- config_name: bydate_comp.os.ms-windows.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1770988
num_examples: 591
- name: test
num_bytes: 706676
num_examples: 394
download_size: 14464277
dataset_size: 2477664
- config_name: bydate_comp.sys.ibm.pc.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 800446
num_examples: 590
- name: test
num_bytes: 485310
num_examples: 392
download_size: 14464277
dataset_size: 1285756
- config_name: bydate_comp.sys.mac.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 696311
num_examples: 578
- name: test
num_bytes: 468791
num_examples: 385
download_size: 14464277
dataset_size: 1165102
- config_name: bydate_comp.windows.x
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1243463
num_examples: 593
- name: test
num_bytes: 795366
num_examples: 395
download_size: 14464277
dataset_size: 2038829
- config_name: bydate_misc.forsale
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 611210
num_examples: 585
- name: test
num_bytes: 415902
num_examples: 390
download_size: 14464277
dataset_size: 1027112
- config_name: bydate_rec.autos
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 860646
num_examples: 594
- name: test
num_bytes: 535378
num_examples: 396
download_size: 14464277
dataset_size: 1396024
- config_name: bydate_rec.motorcycles
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 811151
num_examples: 598
- name: test
num_bytes: 497735
num_examples: 398
download_size: 14464277
dataset_size: 1308886
- config_name: bydate_rec.sport.baseball
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 850740
num_examples: 597
- name: test
num_bytes: 618609
num_examples: 397
download_size: 14464277
dataset_size: 1469349
- config_name: bydate_rec.sport.hockey
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1189652
num_examples: 600
- name: test
num_bytes: 666358
num_examples: 399
download_size: 14464277
dataset_size: 1856010
- config_name: bydate_sci.crypt
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1502448
num_examples: 595
- name: test
num_bytes: 657727
num_examples: 396
download_size: 14464277
dataset_size: 2160175
- config_name: bydate_sci.electronics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 814856
num_examples: 591
- name: test
num_bytes: 523095
num_examples: 393
download_size: 14464277
dataset_size: 1337951
- config_name: bydate_sci.med
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1195201
num_examples: 594
- name: test
num_bytes: 791826
num_examples: 396
download_size: 14464277
dataset_size: 1987027
- config_name: bydate_sci.space
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1197965
num_examples: 593
- name: test
num_bytes: 721771
num_examples: 394
download_size: 14464277
dataset_size: 1919736
- config_name: bydate_soc.religion.christian
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1358047
num_examples: 599
- name: test
num_bytes: 1003668
num_examples: 398
download_size: 14464277
dataset_size: 2361715
- config_name: bydate_talk.politics.guns
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1313019
num_examples: 546
- name: test
num_bytes: 701477
num_examples: 364
download_size: 14464277
dataset_size: 2014496
- config_name: bydate_talk.politics.mideast
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1765833
num_examples: 564
- name: test
num_bytes: 1236435
num_examples: 376
download_size: 14464277
dataset_size: 3002268
- config_name: bydate_talk.politics.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1328057
num_examples: 465
- name: test
num_bytes: 853395
num_examples: 310
download_size: 14464277
dataset_size: 2181452
- config_name: bydate_talk.religion.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 835761
num_examples: 377
- name: test
num_bytes: 598452
num_examples: 251
download_size: 14464277
dataset_size: 1434213
---
# Dataset Card for "newsgroup"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://qwone.com/~jason/20Newsgroups/](http://qwone.com/~jason/20Newsgroups/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [NewsWeeder: Learning to Filter Netnews](https://doi.org/10.1016/B978-1-55860-377-6.50048-7)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 929.27 MB
- **Size of the generated dataset:** 124.41 MB
- **Total amount of disk used:** 1.05 GB
### Dataset Summary
The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across
20 different newsgroups. To the best of my knowledge, it was originally collected by Ken Lang, probably for his Newsweeder:
Learning to filter netnews paper, though he does not explicitly mention this collection. The 20 newsgroups collection has become
a popular data set for experiments in text applications of machine learning techniques, such as text classification and text clustering.
The 18828 version does not include cross-posts and includes only the "From" and "Subject" headers.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### 18828_alt.atheism
- **Size of downloaded dataset files:** 14.67 MB
- **Size of the generated dataset:** 1.67 MB
- **Total amount of disk used:** 16.34 MB
An example of 'train' looks as follows.
```
```
#### 18828_comp.graphics
- **Size of downloaded dataset files:** 14.67 MB
- **Size of the generated dataset:** 1.66 MB
- **Total amount of disk used:** 16.33 MB
An example of 'train' looks as follows.
```
```
#### 18828_comp.os.ms-windows.misc
- **Size of downloaded dataset files:** 14.67 MB
- **Size of the generated dataset:** 2.38 MB
- **Total amount of disk used:** 17.05 MB
An example of 'train' looks as follows.
```
```
#### 18828_comp.sys.ibm.pc.hardware
- **Size of downloaded dataset files:** 14.67 MB
- **Size of the generated dataset:** 1.18 MB
- **Total amount of disk used:** 15.85 MB
An example of 'train' looks as follows.
```
```
#### 18828_comp.sys.mac.hardware
- **Size of downloaded dataset files:** 14.67 MB
- **Size of the generated dataset:** 1.06 MB
- **Total amount of disk used:** 15.73 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### 18828_alt.atheism
- `text`: a `string` feature.
#### 18828_comp.graphics
- `text`: a `string` feature.
#### 18828_comp.os.ms-windows.misc
- `text`: a `string` feature.
#### 18828_comp.sys.ibm.pc.hardware
- `text`: a `string` feature.
#### 18828_comp.sys.mac.hardware
- `text`: a `string` feature.
### Data Splits
| name |train|
|------------------------------|----:|
|18828_alt.atheism | 799|
|18828_comp.graphics | 973|
|18828_comp.os.ms-windows.misc | 985|
|18828_comp.sys.ibm.pc.hardware| 982|
|18828_comp.sys.mac.hardware | 961|
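The per-config train sizes in the table above can be tallied in a couple of lines; a minimal sketch (the counts are copied from the table, not fetched from the Hub):

```python
# Train-split sizes for each listed config, taken from the table above.
train_sizes = {
    "18828_alt.atheism": 799,
    "18828_comp.graphics": 973,
    "18828_comp.os.ms-windows.misc": 985,
    "18828_comp.sys.ibm.pc.hardware": 982,
    "18828_comp.sys.mac.hardware": 961,
}

# Total number of training documents across these five configs.
total = sum(train_sizes.values())
print(total)  # → 4700
```

Note these five configs cover only part of the full 20-newsgroup collection, so the total here is well below the ~18,828 documents of the complete 18828 version.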
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@incollection{LANG1995331,
title = {NewsWeeder: Learning to Filter Netnews},
editor = {Armand Prieditis and Stuart Russell},
booktitle = {Machine Learning Proceedings 1995},
publisher = {Morgan Kaufmann},
address = {San Francisco (CA)},
pages = {331-339},
year = {1995},
isbn = {978-1-55860-377-6},
doi = {https://doi.org/10.1016/B978-1-55860-377-6.50048-7},
url = {https://www.sciencedirect.com/science/article/pii/B9781558603776500487},
author = {Ken Lang},
}
```
### Contributions
Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
nguha/legalbench | nguha | "2024-09-30T04:35:09Z" | 9,927 | 97 | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:other",
"size_categories:10K<n<100K",
"arxiv:2308.11462",
"arxiv:2110.01799",
"arxiv:2103.06268",
"arxiv:2301.00876",
"arxiv:1911.00841",
"arxiv:2105.07903",
"region:us",
"legal",
"law",
"finance"
] | [
"text-classification",
"question-answering",
"text-generation"
] | "2023-03-16T23:03:42Z" | ---
language:
- en
license: other
size_categories:
- 10K<n<100K
task_categories:
- text-classification
- question-answering
- text-generation
tags:
- legal
- law
- finance
dataset_info:
- config_name: abercrombie
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 307
num_examples: 5
- name: test
num_bytes: 6240
num_examples: 95
download_size: 19558988
dataset_size: 6547
- config_name: canada_tax_court_outcomes
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2975
num_examples: 6
- name: test
num_bytes: 157411
num_examples: 244
download_size: 19558988
dataset_size: 160386
- config_name: citation_prediction_classification
features:
- name: answer
dtype: string
- name: citation
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 660
num_examples: 2
- name: test
num_bytes: 26112
num_examples: 108
download_size: 19558988
dataset_size: 26772
- config_name: citation_prediction_open
features:
- name: answer
dtype: string
- name: circuit
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 555
num_examples: 2
- name: test
num_bytes: 13460
num_examples: 53
download_size: 19558988
dataset_size: 14015
- config_name: consumer_contracts_qa
features:
- name: answer
dtype: string
- name: contract
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 9941
num_examples: 4
- name: test
num_bytes: 1221320
num_examples: 396
download_size: 19558988
dataset_size: 1231261
- config_name: contract_nli_confidentiality_of_agreement
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4070
num_examples: 8
- name: test
num_bytes: 43818
num_examples: 82
download_size: 19558988
dataset_size: 47888
- config_name: contract_nli_explicit_identification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3615
num_examples: 8
- name: test
num_bytes: 62133
num_examples: 109
download_size: 19558988
dataset_size: 65748
- config_name: contract_nli_inclusion_of_verbally_conveyed_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3817
num_examples: 8
- name: test
num_bytes: 81933
num_examples: 139
download_size: 19558988
dataset_size: 85750
- config_name: contract_nli_limited_use
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4855
num_examples: 8
- name: test
num_bytes: 98534
num_examples: 208
download_size: 19558988
dataset_size: 103389
- config_name: contract_nli_no_licensing
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2591
num_examples: 8
- name: test
num_bytes: 78173
num_examples: 162
download_size: 19558988
dataset_size: 80764
- config_name: contract_nli_notice_on_compelled_disclosure
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3907
num_examples: 8
- name: test
num_bytes: 80470
num_examples: 142
download_size: 19558988
dataset_size: 84377
- config_name: contract_nli_permissible_acquirement_of_similar_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2736
num_examples: 8
- name: test
num_bytes: 87469
num_examples: 178
download_size: 19558988
dataset_size: 90205
- config_name: contract_nli_permissible_copy
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3480
num_examples: 8
- name: test
num_bytes: 39015
num_examples: 87
download_size: 19558988
dataset_size: 42495
- config_name: contract_nli_permissible_development_of_similar_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3921
num_examples: 8
- name: test
num_bytes: 62603
num_examples: 136
download_size: 19558988
dataset_size: 66524
- config_name: contract_nli_permissible_post-agreement_possession
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4608
num_examples: 8
- name: test
num_bytes: 65932
num_examples: 111
download_size: 19558988
dataset_size: 70540
- config_name: contract_nli_return_of_confidential_information
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3499
num_examples: 8
- name: test
num_bytes: 35672
num_examples: 66
download_size: 19558988
dataset_size: 39171
- config_name: contract_nli_sharing_with_employees
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3173
num_examples: 8
- name: test
num_bytes: 104240
num_examples: 170
download_size: 19558988
dataset_size: 107413
- config_name: contract_nli_sharing_with_third-parties
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3249
num_examples: 8
- name: test
num_bytes: 104822
num_examples: 180
download_size: 19558988
dataset_size: 108071
- config_name: contract_nli_survival_of_obligations
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2272
num_examples: 8
- name: test
num_bytes: 75450
num_examples: 157
download_size: 19558988
dataset_size: 77722
- config_name: contract_qa
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2408
num_examples: 8
- name: test
num_bytes: 26370
num_examples: 80
download_size: 19558988
dataset_size: 28778
- config_name: corporate_lobbying
features:
- name: answer
dtype: string
- name: bill_summary
dtype: string
- name: bill_title
dtype: string
- name: company_description
dtype: string
- name: company_name
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 54334
num_examples: 10
- name: test
num_bytes: 2974813
num_examples: 490
download_size: 19558988
dataset_size: 3029147
- config_name: cuad_affiliate_license-licensee
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4067
num_examples: 6
- name: test
num_bytes: 115798
num_examples: 198
download_size: 19558988
dataset_size: 119865
- config_name: cuad_affiliate_license-licensor
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4247
num_examples: 6
- name: test
num_bytes: 64931
num_examples: 88
download_size: 19558988
dataset_size: 69178
- config_name: cuad_anti-assignment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2070
num_examples: 6
- name: test
num_bytes: 513026
num_examples: 1172
download_size: 19558988
dataset_size: 515096
- config_name: cuad_audit_rights
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2555
num_examples: 6
- name: test
num_bytes: 526977
num_examples: 1216
download_size: 19558988
dataset_size: 529532
- config_name: cuad_cap_on_liability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2621
num_examples: 6
- name: test
num_bytes: 587220
num_examples: 1246
download_size: 19558988
dataset_size: 589841
- config_name: cuad_change_of_control
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2231
num_examples: 6
- name: test
num_bytes: 203823
num_examples: 416
download_size: 19558988
dataset_size: 206054
- config_name: cuad_competitive_restriction_exception
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2774
num_examples: 6
- name: test
num_bytes: 115844
num_examples: 220
download_size: 19558988
dataset_size: 118618
- config_name: cuad_covenant_not_to_sue
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2581
num_examples: 6
- name: test
num_bytes: 153799
num_examples: 308
download_size: 19558988
dataset_size: 156380
- config_name: cuad_effective_date
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2080
num_examples: 6
- name: test
num_bytes: 87802
num_examples: 236
download_size: 19558988
dataset_size: 89882
- config_name: cuad_exclusivity
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1897
num_examples: 6
- name: test
num_bytes: 355097
num_examples: 762
download_size: 19558988
dataset_size: 356994
- config_name: cuad_expiration_date
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1638
num_examples: 6
- name: test
num_bytes: 354232
num_examples: 876
download_size: 19558988
dataset_size: 355870
- config_name: cuad_governing_law
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2420
num_examples: 6
- name: test
num_bytes: 337322
num_examples: 876
download_size: 19558988
dataset_size: 339742
- config_name: cuad_insurance
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2537
num_examples: 6
- name: test
num_bytes: 475827
num_examples: 1030
download_size: 19558988
dataset_size: 478364
- config_name: cuad_ip_ownership_assignment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4756
num_examples: 6
- name: test
num_bytes: 294749
num_examples: 576
download_size: 19558988
dataset_size: 299505
- config_name: cuad_irrevocable_or_perpetual_license
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 5328
num_examples: 6
- name: test
num_bytes: 160279
num_examples: 280
download_size: 19558988
dataset_size: 165607
- config_name: cuad_joint_ip_ownership
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 5011
num_examples: 6
- name: test
num_bytes: 90592
num_examples: 192
download_size: 19558988
dataset_size: 95603
- config_name: cuad_license_grant
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3690
num_examples: 6
- name: test
num_bytes: 709331
num_examples: 1396
download_size: 19558988
dataset_size: 713021
- config_name: cuad_liquidated_damages
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3579
num_examples: 6
- name: test
num_bytes: 97839
num_examples: 220
download_size: 19558988
dataset_size: 101418
- config_name: cuad_minimum_commitment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2874
num_examples: 6
- name: test
num_bytes: 354078
num_examples: 772
download_size: 19558988
dataset_size: 356952
- config_name: cuad_most_favored_nation
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2103
num_examples: 6
- name: test
num_bytes: 32800
num_examples: 64
download_size: 19558988
dataset_size: 34903
- config_name: cuad_no-solicit_of_customers
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3310
num_examples: 6
- name: test
num_bytes: 40828
num_examples: 84
download_size: 19558988
dataset_size: 44138
- config_name: cuad_no-solicit_of_employees
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3619
num_examples: 6
- name: test
num_bytes: 72661
num_examples: 142
download_size: 19558988
dataset_size: 76280
- config_name: cuad_non-compete
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3675
num_examples: 6
- name: test
num_bytes: 211272
num_examples: 442
download_size: 19558988
dataset_size: 214947
- config_name: cuad_non-disparagement
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2168
num_examples: 6
- name: test
num_bytes: 49850
num_examples: 100
download_size: 19558988
dataset_size: 52018
- config_name: cuad_non-transferable_license
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3643
num_examples: 6
- name: test
num_bytes: 269505
num_examples: 542
download_size: 19558988
dataset_size: 273148
- config_name: cuad_notice_period_to_terminate_renewal
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 4166
num_examples: 6
- name: test
num_bytes: 100014
num_examples: 222
download_size: 19558988
dataset_size: 104180
- config_name: cuad_post-termination_services
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 3349
num_examples: 6
- name: test
num_bytes: 419477
num_examples: 808
download_size: 19558988
dataset_size: 422826
- config_name: cuad_price_restrictions
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2945
num_examples: 6
- name: test
num_bytes: 19430
num_examples: 46
download_size: 19558988
dataset_size: 22375
- config_name: cuad_renewal_term
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2163
num_examples: 6
- name: test
num_bytes: 168528
num_examples: 386
download_size: 19558988
dataset_size: 170691
- config_name: cuad_revenue-profit_sharing
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2581
num_examples: 6
- name: test
num_bytes: 363594
num_examples: 774
download_size: 19558988
dataset_size: 366175
- config_name: cuad_rofr-rofo-rofn
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2817
num_examples: 6
- name: test
num_bytes: 338243
num_examples: 690
download_size: 19558988
dataset_size: 341060
- config_name: cuad_source_code_escrow
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2696
num_examples: 6
- name: test
num_bytes: 58125
num_examples: 118
download_size: 19558988
dataset_size: 60821
- config_name: cuad_termination_for_convenience
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1506
num_examples: 6
- name: test
num_bytes: 181164
num_examples: 430
download_size: 19558988
dataset_size: 182670
- config_name: cuad_third_party_beneficiary
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2378
num_examples: 6
- name: test
num_bytes: 24106
num_examples: 68
download_size: 19558988
dataset_size: 26484
- config_name: cuad_uncapped_liability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2570
num_examples: 6
- name: test
num_bytes: 158009
num_examples: 294
download_size: 19558988
dataset_size: 160579
- config_name: cuad_unlimited-all-you-can-eat-license
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 2414
num_examples: 6
- name: test
num_bytes: 22347
num_examples: 48
download_size: 19558988
dataset_size: 24761
- config_name: cuad_volume_restriction
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1397
num_examples: 6
- name: test
num_bytes: 129456
num_examples: 322
download_size: 19558988
dataset_size: 130853
- config_name: cuad_warranty_duration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
- name: document_name
dtype: string
splits:
- name: train
num_bytes: 1815
num_examples: 6
- name: test
num_bytes: 142580
num_examples: 320
download_size: 19558988
dataset_size: 144395
- config_name: definition_classification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1826
num_examples: 8
- name: test
num_bytes: 371743
num_examples: 1337
download_size: 19558988
dataset_size: 373569
- config_name: definition_extraction
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2731
num_examples: 8
- name: test
num_bytes: 254689
num_examples: 687
download_size: 19558988
dataset_size: 257420
- config_name: diversity_1
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 803
num_examples: 6
- name: test
num_bytes: 41135
num_examples: 300
download_size: 19558988
dataset_size: 41938
- config_name: diversity_2
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1041
num_examples: 6
- name: test
num_bytes: 53537
num_examples: 300
download_size: 19558988
dataset_size: 54578
- config_name: diversity_3
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 992
num_examples: 6
- name: test
num_bytes: 50744
num_examples: 300
download_size: 19558988
dataset_size: 51736
- config_name: diversity_4
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1070
num_examples: 6
- name: test
num_bytes: 53464
num_examples: 300
download_size: 19558988
dataset_size: 54534
- config_name: diversity_5
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1232
num_examples: 6
- name: test
num_bytes: 62550
num_examples: 300
download_size: 19558988
dataset_size: 63782
- config_name: diversity_6
features:
- name: aic_is_met
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: parties_are_diverse
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2016
num_examples: 6
- name: test
num_bytes: 100411
num_examples: 300
download_size: 19558988
dataset_size: 102427
- config_name: function_of_decision_section
features:
- name: Citation
dtype: string
- name: Paragraph
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 1547
num_examples: 7
- name: test
num_bytes: 210419
num_examples: 367
download_size: 19558988
dataset_size: 211966
- config_name: hearsay
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: slice
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 788
num_examples: 5
- name: test
num_bytes: 17150
num_examples: 94
download_size: 19558988
dataset_size: 17938
- config_name: insurance_policy_interpretation
features:
- name: answer
dtype: string
- name: claim
dtype: string
- name: index
dtype: string
- name: policy
dtype: string
splits:
- name: train
num_bytes: 3119
num_examples: 5
- name: test
num_bytes: 70764
num_examples: 133
download_size: 19558988
dataset_size: 73883
- config_name: international_citizenship_questions
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 832
num_examples: 4
- name: test
num_bytes: 2089107
num_examples: 9306
download_size: 19558988
dataset_size: 2089939
- config_name: jcrew_blocker
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7352
num_examples: 6
- name: test
num_bytes: 59879
num_examples: 54
download_size: 19558988
dataset_size: 67231
- config_name: learned_hands_benefits
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8267
num_examples: 6
- name: test
num_bytes: 87512
num_examples: 66
download_size: 19558988
dataset_size: 95779
- config_name: learned_hands_business
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6075
num_examples: 6
- name: test
num_bytes: 202116
num_examples: 174
download_size: 19558988
dataset_size: 208191
- config_name: learned_hands_consumer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6355
num_examples: 6
- name: test
num_bytes: 795463
num_examples: 614
download_size: 19558988
dataset_size: 801818
- config_name: learned_hands_courts
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10693
num_examples: 6
- name: test
num_bytes: 228204
num_examples: 192
download_size: 19558988
dataset_size: 238897
- config_name: learned_hands_crime
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7322
num_examples: 6
- name: test
num_bytes: 846597
num_examples: 688
download_size: 19558988
dataset_size: 853919
- config_name: learned_hands_divorce
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10651
num_examples: 6
- name: test
num_bytes: 189279
num_examples: 150
download_size: 19558988
dataset_size: 199930
- config_name: learned_hands_domestic_violence
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11170
num_examples: 6
- name: test
num_bytes: 239797
num_examples: 174
download_size: 19558988
dataset_size: 250967
- config_name: learned_hands_education
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6992
num_examples: 6
- name: test
num_bytes: 79184
num_examples: 56
download_size: 19558988
dataset_size: 86176
- config_name: learned_hands_employment
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11223
num_examples: 6
- name: test
num_bytes: 909220
num_examples: 710
download_size: 19558988
dataset_size: 920443
- config_name: learned_hands_estates
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5970
num_examples: 6
- name: test
num_bytes: 216836
num_examples: 178
download_size: 19558988
dataset_size: 222806
- config_name: learned_hands_family
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8714
num_examples: 6
- name: test
num_bytes: 3073508
num_examples: 2265
download_size: 19558988
dataset_size: 3082222
- config_name: learned_hands_health
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6155
num_examples: 6
- name: test
num_bytes: 336934
num_examples: 226
download_size: 19558988
dataset_size: 343089
- config_name: learned_hands_housing
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9726
num_examples: 6
- name: test
num_bytes: 6028612
num_examples: 4494
download_size: 19558988
dataset_size: 6038338
- config_name: learned_hands_immigration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3955
num_examples: 6
- name: test
num_bytes: 165352
num_examples: 134
download_size: 19558988
dataset_size: 169307
- config_name: learned_hands_torts
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4484
num_examples: 6
- name: test
num_bytes: 615649
num_examples: 432
download_size: 19558988
dataset_size: 620133
- config_name: learned_hands_traffic
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6250
num_examples: 6
- name: test
num_bytes: 667539
num_examples: 556
download_size: 19558988
dataset_size: 673789
- config_name: legal_reasoning_causality
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4688
num_examples: 4
- name: test
num_bytes: 87007
num_examples: 55
download_size: 19558988
dataset_size: 91695
- config_name: maud_ability_to_consummate_concept_is_subject_to_mae_carveouts
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5322
num_examples: 1
- name: test
num_bytes: 304051
num_examples: 69
download_size: 19558988
dataset_size: 309373
- config_name: maud_accuracy_of_fundamental_target_rws_bringdown_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 271
num_examples: 1
- name: test
num_bytes: 148869
num_examples: 175
download_size: 19558988
dataset_size: 149140
- config_name: maud_accuracy_of_target_capitalization_rw_(outstanding_shares)_bringdown_standard_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1493
num_examples: 1
- name: test
num_bytes: 152224
num_examples: 181
download_size: 19558988
dataset_size: 153717
- config_name: maud_accuracy_of_target_general_rw_bringdown_timing_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1000
num_examples: 1
- name: test
num_bytes: 152717
num_examples: 181
download_size: 19558988
dataset_size: 153717
- config_name: maud_additional_matching_rights_period_for_modifications_(cor)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2170
num_examples: 1
- name: test
num_bytes: 312632
num_examples: 158
download_size: 19558988
dataset_size: 314802
- config_name: maud_application_of_buyer_consent_requirement_(negative_interim_covenant)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 558
num_examples: 1
- name: test
num_bytes: 96990
num_examples: 180
download_size: 19558988
dataset_size: 97548
- config_name: maud_buyer_consent_requirement_(ordinary_course)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2620
num_examples: 1
- name: test
num_bytes: 138668
num_examples: 181
download_size: 19558988
dataset_size: 141288
- config_name: maud_change_in_law__subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6000
num_examples: 1
- name: test
num_bytes: 448666
num_examples: 99
download_size: 19558988
dataset_size: 454666
- config_name: maud_changes_in_gaap_or_other_accounting_principles__subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5998
num_examples: 1
- name: test
num_bytes: 444442
num_examples: 98
download_size: 19558988
dataset_size: 450440
- config_name: maud_cor_permitted_in_response_to_intervening_event
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2631
num_examples: 1
- name: test
num_bytes: 195447
num_examples: 100
download_size: 19558988
dataset_size: 198078
- config_name: maud_cor_permitted_with_board_fiduciary_determination_only
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3970
num_examples: 1
- name: test
num_bytes: 194108
num_examples: 100
download_size: 19558988
dataset_size: 198078
- config_name: maud_cor_standard_(intervening_event)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 727
num_examples: 1
- name: test
num_bytes: 175140
num_examples: 84
download_size: 19558988
dataset_size: 175867
- config_name: maud_cor_standard_(superior_offer)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1173
num_examples: 1
- name: test
num_bytes: 196905
num_examples: 100
download_size: 19558988
dataset_size: 198078
- config_name: maud_definition_contains_knowledge_requirement_-_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1899
num_examples: 1
- name: test
num_bytes: 231405
num_examples: 147
download_size: 19558988
dataset_size: 233304
- config_name: maud_definition_includes_asset_deals
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 614
num_examples: 1
- name: test
num_bytes: 289644
num_examples: 146
download_size: 19558988
dataset_size: 290258
- config_name: maud_definition_includes_stock_deals
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 683
num_examples: 1
- name: test
num_bytes: 292466
num_examples: 148
download_size: 19558988
dataset_size: 293149
- config_name: maud_fiduciary_exception__board_determination_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1594
num_examples: 1
- name: test
num_bytes: 288180
num_examples: 179
download_size: 19558988
dataset_size: 289774
- config_name: maud_fiduciary_exception_board_determination_trigger_(no_shop)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3538
num_examples: 1
- name: test
num_bytes: 286236
num_examples: 179
download_size: 19558988
dataset_size: 289774
- config_name: maud_financial_point_of_view_is_the_sole_consideration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3290
num_examples: 1
- name: test
num_bytes: 217048
num_examples: 112
download_size: 19558988
dataset_size: 220338
- config_name: maud_fls_(mae)_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4669
num_examples: 1
- name: test
num_bytes: 349856
num_examples: 77
download_size: 19558988
dataset_size: 354525
- config_name: maud_general_economic_and_financial_conditions_subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5998
num_examples: 1
- name: test
num_bytes: 445306
num_examples: 98
download_size: 19558988
dataset_size: 451304
- config_name: maud_includes_consistent_with_past_practice
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1127
num_examples: 1
- name: test
num_bytes: 140161
num_examples: 181
download_size: 19558988
dataset_size: 141288
- config_name: maud_initial_matching_rights_period_(cor)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3041
num_examples: 1
- name: test
num_bytes: 311761
num_examples: 158
download_size: 19558988
dataset_size: 314802
- config_name: maud_initial_matching_rights_period_(ftr)
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1850
num_examples: 1
- name: test
num_bytes: 279202
num_examples: 132
download_size: 19558988
dataset_size: 281052
- config_name: maud_intervening_event_-_required_to_occur_after_signing_-_answer
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3055
num_examples: 1
- name: test
num_bytes: 230249
num_examples: 147
download_size: 19558988
dataset_size: 233304
- config_name: maud_knowledge_definition
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 240
num_examples: 1
- name: test
num_bytes: 359730
num_examples: 167
download_size: 19558988
dataset_size: 359970
- config_name: maud_liability_standard_for_no-shop_breach_by_target_non-do_representatives
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 154
num_examples: 1
- name: test
num_bytes: 40946
num_examples: 156
download_size: 19558988
dataset_size: 41100
- config_name: maud_ordinary_course_efforts_standard
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1037
num_examples: 1
- name: test
num_bytes: 140251
num_examples: 181
download_size: 19558988
dataset_size: 141288
- config_name: maud_pandemic_or_other_public_health_event__subject_to_disproportionate_impact_modifier
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3728
num_examples: 1
- name: test
num_bytes: 447053
num_examples: 98
download_size: 19558988
dataset_size: 450781
- config_name: maud_pandemic_or_other_public_health_event_specific_reference_to_pandemic-related_governmental_responses_or_measures
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3728
num_examples: 1
- name: test
num_bytes: 447053
num_examples: 98
download_size: 19558988
dataset_size: 450781
- config_name: maud_relational_language_(mae)_applies_to
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4948
num_examples: 1
- name: test
num_bytes: 409477
num_examples: 90
download_size: 19558988
dataset_size: 414425
- config_name: maud_specific_performance
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 771
num_examples: 1
- name: test
num_bytes: 107392
num_examples: 178
download_size: 19558988
dataset_size: 108163
- config_name: maud_tail_period_length
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 406
num_examples: 1
- name: test
num_bytes: 108632
num_examples: 179
download_size: 19558988
dataset_size: 109038
- config_name: maud_type_of_consideration
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 258
num_examples: 1
- name: test
num_bytes: 139270
num_examples: 172
download_size: 19558988
dataset_size: 139528
- config_name: nys_judicial_ethics
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: year
dtype: string
splits:
- name: train
num_bytes: 1697
num_examples: 8
- name: test
num_bytes: 53974
num_examples: 292
download_size: 19558988
dataset_size: 55671
- config_name: opp115_data_retention
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1791
num_examples: 8
- name: test
num_bytes: 18620
num_examples: 88
download_size: 19558988
dataset_size: 20411
- config_name: opp115_data_security
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2123
num_examples: 8
- name: test
num_bytes: 352667
num_examples: 1334
download_size: 19558988
dataset_size: 354790
- config_name: opp115_do_not_track
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2507
num_examples: 8
- name: test
num_bytes: 26363
num_examples: 110
download_size: 19558988
dataset_size: 28870
- config_name: opp115_first_party_collection_use
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2227
num_examples: 8
- name: test
num_bytes: 463566
num_examples: 2086
download_size: 19558988
dataset_size: 465793
- config_name: opp115_international_and_specific_audiences
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1643
num_examples: 8
- name: test
num_bytes: 338196
num_examples: 980
download_size: 19558988
dataset_size: 339839
- config_name: opp115_policy_change
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1201
num_examples: 8
- name: test
num_bytes: 94060
num_examples: 431
download_size: 19558988
dataset_size: 95261
- config_name: opp115_third_party_sharing_collection
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1217
num_examples: 8
- name: test
num_bytes: 383909
num_examples: 1590
download_size: 19558988
dataset_size: 385126
- config_name: opp115_user_access,_edit_and_deletion
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1251
num_examples: 8
- name: test
num_bytes: 108969
num_examples: 462
download_size: 19558988
dataset_size: 110220
- config_name: opp115_user_choice_control
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1695
num_examples: 8
- name: test
num_bytes: 353113
num_examples: 1546
download_size: 19558988
dataset_size: 354808
- config_name: oral_argument_question_purpose
features:
- name: Docket No.
dtype: string
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 2415
num_examples: 7
- name: test
num_bytes: 95262
num_examples: 312
download_size: 19558988
dataset_size: 97677
- config_name: overruling
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 629
num_examples: 6
- name: test
num_bytes: 443484
num_examples: 2394
download_size: 19558988
dataset_size: 444113
- config_name: personal_jurisdiction
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: slice
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1660
num_examples: 4
- name: test
num_bytes: 21089
num_examples: 50
download_size: 19558988
dataset_size: 22749
- config_name: privacy_policy_entailment
features:
- name: answer
dtype: string
- name: description
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6282
num_examples: 8
- name: test
num_bytes: 3174950
num_examples: 4335
download_size: 19558988
dataset_size: 3181232
- config_name: privacy_policy_qa
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2231
num_examples: 8
- name: test
num_bytes: 2817986
num_examples: 10923
download_size: 19558988
dataset_size: 2820217
- config_name: proa
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1057
num_examples: 5
- name: test
num_bytes: 25475
num_examples: 95
download_size: 19558988
dataset_size: 26532
- config_name: rule_qa
features:
- name: answer
dtype: string
- name: doctrine
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 12665
num_examples: 50
download_size: 19558988
dataset_size: 12665
- config_name: sara_entailment
features:
- name: answer
dtype: string
- name: case id
dtype: string
- name: description
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: statute
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2528
num_examples: 4
- name: test
num_bytes: 225560
num_examples: 272
download_size: 19558988
dataset_size: 228088
- config_name: sara_numeric
features:
- name: answer
dtype: string
- name: case id
dtype: string
- name: description
dtype: string
- name: index
dtype: string
- name: question
dtype: string
- name: statute
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 238363
num_examples: 4
- name: test
num_bytes: 5725392
num_examples: 96
download_size: 19558988
dataset_size: 5963755
- config_name: scalr
features:
- name: answer
dtype: string
- name: choice_0
dtype: string
- name: choice_1
dtype: string
- name: choice_2
dtype: string
- name: choice_3
dtype: string
- name: choice_4
dtype: string
- name: index
dtype: string
- name: question
dtype: string
splits:
- name: test
num_bytes: 1026740
num_examples: 571
download_size: 19558988
dataset_size: 1026740
- config_name: ssla_company_defendants
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5847
num_examples: 3
- name: test
num_bytes: 2313039
num_examples: 1228
download_size: 19558988
dataset_size: 2318886
- config_name: ssla_individual_defendants
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5962
num_examples: 3
- name: test
num_bytes: 2002620
num_examples: 1012
download_size: 19558988
dataset_size: 2008582
- config_name: ssla_plaintiff
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5831
num_examples: 3
- name: test
num_bytes: 1926518
num_examples: 1033
download_size: 19558988
dataset_size: 1932349
- config_name: successor_liability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: issue
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1734
num_examples: 3
- name: test
num_bytes: 26490
num_examples: 47
download_size: 19558988
dataset_size: 28224
- config_name: supply_chain_disclosure_best_practice_accountability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18987
num_examples: 8
- name: test
num_bytes: 1347025
num_examples: 379
download_size: 19558988
dataset_size: 1366012
- config_name: supply_chain_disclosure_best_practice_audits
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 23879
num_examples: 8
- name: test
num_bytes: 1342065
num_examples: 379
download_size: 19558988
dataset_size: 1365944
- config_name: supply_chain_disclosure_best_practice_certification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 22058
num_examples: 8
- name: test
num_bytes: 1338516
num_examples: 378
download_size: 19558988
dataset_size: 1360574
- config_name: supply_chain_disclosure_best_practice_training
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24071
num_examples: 8
- name: test
num_bytes: 1341885
num_examples: 379
download_size: 19558988
dataset_size: 1365956
- config_name: supply_chain_disclosure_best_practice_verification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 27158
num_examples: 8
- name: test
num_bytes: 1338739
num_examples: 379
download_size: 19558988
dataset_size: 1365897
- config_name: supply_chain_disclosure_disclosed_accountability
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18902
num_examples: 8
- name: test
num_bytes: 1344444
num_examples: 378
download_size: 19558988
dataset_size: 1363346
- config_name: supply_chain_disclosure_disclosed_audits
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24404
num_examples: 8
- name: test
num_bytes: 1341624
num_examples: 379
download_size: 19558988
dataset_size: 1366028
- config_name: supply_chain_disclosure_disclosed_certification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17987
num_examples: 8
- name: test
num_bytes: 1342646
num_examples: 378
download_size: 19558988
dataset_size: 1360633
- config_name: supply_chain_disclosure_disclosed_training
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 27093
num_examples: 8
- name: test
num_bytes: 1338919
num_examples: 379
download_size: 19558988
dataset_size: 1366012
- config_name: supply_chain_disclosure_disclosed_verification
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 25387
num_examples: 8
- name: test
num_bytes: 1340578
num_examples: 379
download_size: 19558988
dataset_size: 1365965
- config_name: telemarketing_sales_rule
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1230
num_examples: 4
- name: test
num_bytes: 17140
num_examples: 47
download_size: 19558988
dataset_size: 18370
- config_name: textualism_tool_dictionaries
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4842
num_examples: 4
- name: test
num_bytes: 102644
num_examples: 107
download_size: 19558988
dataset_size: 107486
- config_name: textualism_tool_plain
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3338
num_examples: 4
- name: test
num_bytes: 167428
num_examples: 165
download_size: 19558988
dataset_size: 170766
- config_name: ucc_v_common_law
features:
- name: answer
dtype: string
- name: contract
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 904
num_examples: 6
- name: test
num_bytes: 12694
num_examples: 94
download_size: 19558988
dataset_size: 13598
- config_name: unfair_tos
features:
- name: answer
dtype: string
- name: index
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3308
num_examples: 9
- name: test
num_bytes: 787108
num_examples: 3813
download_size: 19558988
dataset_size: 790416
---
# Dataset Card for Dataset Name
- **Homepage: https://hazyresearch.stanford.edu/legalbench/**
- **Repository: https://github.com/HazyResearch/legalbench/**
- **Paper: https://arxiv.org/abs/2308.11462**
## Dataset Description
### Dataset Summary
The LegalBench project is an ongoing open science effort to collaboratively curate tasks for evaluating legal reasoning in English large language models (LLMs). The benchmark currently consists of 162 tasks gathered from 40 contributors.
Note: Because LegalBench is intended to test zero and few-shot reasoning, the available "train" splits are small. However, if you are interested in finetuning models or studying model performance in a more traditional train/test regime, you can combine and re-partition train and test data.
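The re-partitioning suggested above amounts to pooling the tiny train split with the test split and drawing a fresh split. A minimal sketch of that logic in plain Python (in practice the rows would come from the `datasets` library, e.g. `load_dataset(...)`; the instances below are hypothetical placeholders mirroring a task schema):

```python
import random

def repartition(train_rows, test_rows, train_fraction=0.8, seed=0):
    """Pool the few-shot train split with the test split and draw a new split."""
    pooled = list(train_rows) + list(test_rows)
    rng = random.Random(seed)
    rng.shuffle(pooled)
    cut = int(len(pooled) * train_fraction)
    return pooled[:cut], pooled[cut:]

# Hypothetical instances mirroring the `abercrombie` task schema.
train = [{"text": "example A", "label": "generic", "idx": 0}]
test = [{"text": f"example {i}", "label": "generic", "idx": i} for i in range(1, 10)]

new_train, new_test = repartition(train, test)
print(len(new_train), len(new_test))  # 8 2
```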
If you have questions about the project or would like to get involved, please see the website for more information.
### Supported Tasks and Leaderboards
LegalBench tasks span multiple types (binary classification, multi-class classification, extraction, generation, entailment), multiple types of text (statutes, judicial opinions, contracts, etc.), and multiple areas of law (evidence, contracts, civil procedure, etc.). For more information on tasks, we recommend visiting the website, where you can search through task descriptions, or the Github repository, which contains more granular task descriptions. We also recommend reading the paper, which provides more background on task significance and construction process.
### Languages
All LegalBench tasks are in English.
## Dataset Structure
### Data Instances
Detailed descriptions of the instances for each task can be found on the Github. An example of an instance, for the `abercrombie` task, is provided below:
```
{
    "text": "The mark \"Ivory\" for a product made of elephant tusks.",
    "label": "generic",
    "idx": 0
}
```
A substantial number of LegalBench tasks are binary classification tasks, which require the LLM to determine if a piece of text has some legal attribute. Because these are framed as Yes/No questions, the label space is "Yes" or "No".
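Because the train splits are small, few-shot evaluation of these Yes/No tasks typically means concatenating the handful of train instances into a single prompt ahead of the query. A hedged sketch of that assembly step — the template and field names here are illustrative only, not the official LegalBench prompt templates (those live in the Github repository):

```python
def build_few_shot_prompt(train_rows, query_text, question):
    """Assemble a simple few-shot prompt from the tiny train split."""
    parts = []
    for row in train_rows:
        parts.append(f"{question}\nText: {row['text']}\nAnswer: {row['answer']}")
    # The query instance is appended with an empty answer slot for the LLM to fill.
    parts.append(f"{question}\nText: {query_text}\nAnswer:")
    return "\n\n".join(parts)

# Hypothetical demonstrations; real rows carry an `answer` field as in the task configs.
demos = [
    {"text": "You may not disclose...", "answer": "Yes"},
    {"text": "This Agreement is governed by...", "answer": "No"},
]
prompt = build_few_shot_prompt(
    demos, "Each party shall...", "Does the clause restrict disclosure?"
)
print(prompt.count("Answer:"))  # 3
```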
### Data Fields
Detailed descriptions of the instances for each task can be found on the Github.
### Data Splits
Each task (except for `rule_qa` and `scalr`) has both a training and evaluation split. Following [RAFT](https://huggingface.co/datasets/ought/raft), train splits consist of only a few labeled instances, reflecting the few-shot nature of most LLM evaluation.
## Dataset Creation
### Curation Rationale
LegalBench was created to enable researchers to better benchmark the legal reasoning capabilities of LLMs.
### Source Data
#### Initial Data Collection and Normalization
Broadly, LegalBench tasks are drawn from three sources. The first source of tasks are existing available datasets and corpora. Most of these were originally released for non-LLM evaluation settings. In creating tasks for LegalBench from these sources, we often significantly reformatted data and restructured the prediction objective. For instance, the original [CUAD dataset](https://github.com/TheAtticusProject/cuad) contains annotations on long documents and is intended for evaluating extraction with span-prediction models. We restructure this corpus to generate a binary classification task for each type of contractual clause. While the original corpus emphasized the long-document aspects of contracts, our restructured tasks emphasize whether LLMs can identify the distinguishing features of different types of clauses. The second source of tasks are datasets that were previously constructed by legal professionals but never released. This primarily includes datasets hand-coded by legal scholars as part of prior empirical legal projects. The last category of tasks are those that were developed specifically for LegalBench, by the authors of this paper. Overall, tasks are drawn from 36 distinct corpora. Please see the Appendix of the paper for more details.
#### Who are the source language producers?
LegalBench data was created by humans. Demographic information for these individuals is not available.
### Annotations
#### Annotation process
Please see the paper for more information on the annotation process used in the creation of each task.
#### Who are the annotators?
Please see the paper for more information on the identity of annotators for each task.
### Personal and Sensitive Information
Data in this benchmark has either been synthetically generated, or derived from an already public source (e.g., contracts from the EDGAR database).
Several tasks have been derived from the LearnedHands corpus, which consists of public posts on /r/LegalAdvice. Some posts may discuss sensitive issues.
## Considerations for Using the Data
### Social Impact of Dataset
Please see the original paper for a discussion of social impact.
### Discussion of Biases
Please see the original paper for a discussion of biases.
### Other Known Limitations
LegalBench primarily contains tasks corresponding to American law.
## Additional Information
### Dataset Curators
Please see the website for a full list of participants in the LegalBench project.
### Licensing Information
LegalBench tasks are subject to different licenses. Please see the paper for a description of the licenses.
### Citation Information
If you intend to reference LegalBench broadly, please use the citation below. If you are working with a particular task, please use the citation below in addition to the task specific citation (which can be found on the task page on the website or Github).
```
@misc{guha2023legalbench,
title={LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},
author={Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
year={2023},
eprint={2308.11462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{koreeda2021contractnli,
title={ContractNLI: A dataset for document-level natural language inference for contracts},
author={Koreeda, Yuta and Manning, Christopher D},
journal={arXiv preprint arXiv:2110.01799},
year={2021}
}
@article{hendrycks2021cuad,
title={Cuad: An expert-annotated nlp dataset for legal contract review},
author={Hendrycks, Dan and Burns, Collin and Chen, Anya and Ball, Spencer},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
@article{wang2023maud,
title={MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding},
author={Wang, Steven H and Scardigli, Antoine and Tang, Leonard and Chen, Wei and Levkin, Dimitry and Chen, Anya and Ball, Spencer and Woodside, Thomas and Zhang, Oliver and Hendrycks, Dan},
journal={arXiv preprint arXiv:2301.00876},
year={2023}
}
@inproceedings{wilson2016creation,
title={The creation and analysis of a website privacy policy corpus},
author={Wilson, Shomir and Schaub, Florian and Dara, Aswarth Abhilash and Liu, Frederick and Cherivirala, Sushain and Leon, Pedro Giovanni and Andersen, Mads Schaarup and Zimmeck, Sebastian and Sathyendra, Kanthashree Mysore and Russell, N Cameron and others},
booktitle={Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={1330--1340},
year={2016}
}
@inproceedings{zheng2021does,
title={When does pretraining help? assessing self-supervised learning for law and the casehold dataset of 53,000+ legal holdings},
author={Zheng, Lucia and Guha, Neel and Anderson, Brandon R and Henderson, Peter and Ho, Daniel E},
booktitle={Proceedings of the eighteenth international conference on artificial intelligence and law},
pages={159--168},
year={2021}
}
@article{zimmeck2019maps,
title={Maps: Scaling privacy compliance analysis to a million apps},
author={Zimmeck, Sebastian and Story, Peter and Smullen, Daniel and Ravichander, Abhilasha and Wang, Ziqi and Reidenberg, Joel R and Russell, N Cameron and Sadeh, Norman},
journal={Proc. Priv. Enhancing Tech.},
volume={2019},
pages={66},
year={2019}
}
@article{ravichander2019question,
title={Question answering for privacy policies: Combining computational and legal perspectives},
author={Ravichander, Abhilasha and Black, Alan W and Wilson, Shomir and Norton, Thomas and Sadeh, Norman},
journal={arXiv preprint arXiv:1911.00841},
year={2019}
}
@article{holzenberger2021factoring,
title={Factoring statutory reasoning as language understanding challenges},
author={Holzenberger, Nils and Van Durme, Benjamin},
journal={arXiv preprint arXiv:2105.07903},
year={2021}
}
@article{lippi2019claudette,
title={CLAUDETTE: an automated detector of potentially unfair clauses in online terms of service},
author={Lippi, Marco and Pa{\l}ka, Przemys{\l}aw and Contissa, Giuseppe and Lagioia, Francesca and Micklitz, Hans-Wolfgang and Sartor, Giovanni and Torroni, Paolo},
journal={Artificial Intelligence and Law},
volume={27},
pages={117--139},
year={2019},
publisher={Springer}
}
``` |
JonasGeiping/the_pile_WordPiecex32768_2efdb9d060d1ae95faf952ec1a50f020 | JonasGeiping | "2023-06-13T16:25:54Z" | 9,918 | 1 | [
"size_categories:10M<n<100M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2212.14034",
"arxiv:2101.00027",
"arxiv:2201.07311",
"region:us"
] | null | "2023-06-08T17:30:55Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 43860000000
num_examples: 85000000
download_size: 24001057282
dataset_size: 43860000000
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license: other
multilinguality:
- monolingual
pretty_name: pretokenized, filtered, sorted subset of the Pile
size_categories:
- 10B<n<100B
source_datasets:
- the-pile
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: the-pile-cramming
---
# Dataset Card for "the_pile_WordPiecex32768_2efdb9d060d1ae95faf952ec1a50f020"
## Dataset Description
- **Repository:** https://github.com/JonasGeiping/cramming
- **Paper:** https://arxiv.org/abs/2212.14034
- **Raw Data Source Paper:** [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
- **Raw Data Source Datasheet:** [Datasheet for the Pile](https://arxiv.org/abs/2201.07311)
### Dataset Summary
This is a preprocessed, tokenized dataset for the cramming project.
Use it only with the tokenizer uploaded here.
This version is `2efdb9d060d1ae95faf952ec1a50f020`, which corresponds to a specific dataset construction setup, described below.
The raw data source is the Pile, an 825 GiB diverse, open-source language modelling dataset that consists of 22 smaller, high-quality
datasets combined together.
### Languages
This dataset is in English (`EN`).
### Data Splits
This preprocessed subset contains only a train split.
## Dataset Creation
The configuration to create this dataset with the cramming project code (https://github.com/JonasGeiping/cramming) is
```
# This is a slice of the pile
name: the_pile
defaults:
- sources:
- the_pile
#
# Preprocessing
normalizer:
force_lowercase: True
strip_accents: True
force_english_keyboard: True
whitespace_escape: False
tokenizer: WordPiece
vocab_size: 32768
# Dataset Formation
seq_length: 128
include_cls_token_in_corpus: False
include_sep_token_in_corpus: True
use_type_ids: False
max_entries_in_raw_dataset: 16e6
max_seq_in_tokenized_dataset: 85e6
# Data Cleaning:
named_entity_simplification: False
remove_whitespaces: False
remove_trash: True
trash_cutoff: 0.25
deduplicate_entries: False
deduplication_threshold: 75
# Data Order:
ordering: sentence-length-curriculum
```
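As a rough illustration of the dataset-formation settings above (`seq_length: 128`, `include_sep_token_in_corpus: True`, `include_cls_token_in_corpus: False`), the sketch below packs tokenized documents into fixed-length sequences. This is a hypothetical helper for intuition only, not the cramming project's actual implementation; the `SEP_ID` value is an assumed placeholder.

```python
# Hypothetical sketch of corpus formation; not the cramming project's code.
SEQ_LENGTH = 128
SEP_ID = 102  # assumed SEP token id, for illustration only

def pack_documents(token_docs):
    """Concatenate tokenized documents into one stream (appending SEP after
    each document, storing no CLS) and cut it into fixed-length sequences."""
    stream = []
    for doc in token_docs:
        stream.extend(doc)
        stream.append(SEP_ID)  # include_sep_token_in_corpus: True
    # drop the trailing remainder that does not fill a full sequence
    n_full = len(stream) // SEQ_LENGTH
    return [stream[i * SEQ_LENGTH:(i + 1) * SEQ_LENGTH] for i in range(n_full)]

docs = [[5] * 200, [7] * 60, [9] * 30]
packed = pack_documents(docs)
print(len(packed), all(len(s) == SEQ_LENGTH for s in packed))  # → 2 True
```

Each entry of the published dataset is one such 128-id `input_ids` sequence; a CLS token, if needed, is prepended at training time rather than stored in the corpus.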
## Considerations for Using the Data
Limitations and bias:
This training data was further filtered and sorted beyond the normal preprocessing.
These modifications were not tested for unintended consequences.
## Additional Information
### Dataset Curators
This dataset is a filtered, sorted and preprocessed subset of the Pile, made by Jonas Geiping. The original dataset was primarily curated by Leo Gao and Stella Biderman, with assistance from other authors of the Pile paper.
### Licensing Information
Please refer to the specific license depending on the subset you use at https://huggingface.co/datasets/EleutherAI/pile
### Citation Information
Filtered version for the cramming project:
```
@article{geiping_cramming_2022,
title = {Cramming: {{Training}} a {{Language Model}} on a {{Single GPU}} in {{One Day}}},
shorttitle = {Cramming},
author = {Geiping, Jonas and Goldstein, Tom},
year = {2022},
month = dec,
eprint = {2212.14034},
primaryclass = {cs},
publisher = {{arXiv}},
doi = {10.48550/arXiv.2212.14034},
url = {http://arxiv.org/abs/2212.14034},
urldate = {2023-01-10},
archiveprefix = {arxiv},
keywords = {Computer Science - Computation and Language,Computer Science - Machine Learning},
journal = {arxiv:2212.14034[cs]}
}
```
Original Data Curation:
```
@article{gao2020pile,
title={The {P}ile: An 800{GB} dataset of diverse text for language modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
@article{biderman2022datasheet,
title={Datasheet for the pile},
author={Biderman, Stella and Bicheno, Kieran and Gao, Leo},
journal={arXiv preprint arXiv:2201.07311},
year={2022}
}
``` |
CohereForAI/Global-MMLU | CohereForAI | "2024-12-12T13:49:50Z" | 9,895 | 98 | [
"language:en",
"language:ar",
"language:bn",
"language:es",
"language:fr",
"language:hi",
"language:ru",
"language:de",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:pt",
"language:zh",
"language:yo",
"language:nl",
"language:ro",
"language:uk",
"language:vi",
"language:tr",
"language:pl",
"language:fa",
"language:cs",
"language:he",
"language:el",
"language:ms",
"language:fil",
"language:te",
"language:si",
"language:ne",
"language:ky",
"language:sv",
"language:lt",
"language:sr",
"language:mg",
"language:so",
"language:ha",
"language:am",
"language:sn",
"language:ig",
"language:ny",
"language:sw",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"arxiv:2412.03304",
"region:us",
"argilla"
] | null | "2024-12-01T22:45:59Z" | ---
dataset_info:
- config_name: am
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 209505
num_examples: 285
- name: test
num_bytes: 12085768
num_examples: 14042
download_size: 10260448
dataset_size: 12295273
- config_name: ar
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 202343
num_examples: 285
- name: test
num_bytes: 11621977
num_examples: 14042
download_size: 9817049
dataset_size: 11824320
- config_name: bn
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 301875
num_examples: 285
- name: test
num_bytes: 18061158
num_examples: 14042
download_size: 12524784
dataset_size: 18363033
- config_name: cs
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 149807
num_examples: 285
- name: test
num_bytes: 8607308
num_examples: 14042
download_size: 8640151
dataset_size: 8757115
- config_name: de
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 162406
num_examples: 285
- name: test
num_bytes: 9575360
num_examples: 14042
download_size: 9187953
dataset_size: 9737766
- config_name: el
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 254308
num_examples: 285
- name: test
num_bytes: 14502137
num_examples: 14042
download_size: 12288940
dataset_size: 14756445
- config_name: en
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 146364
num_examples: 285
- name: test
num_bytes: 8440632
num_examples: 14042
download_size: 7912429
dataset_size: 8586996
- config_name: es
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 160633
num_examples: 285
- name: test
num_bytes: 9399724
num_examples: 14042
download_size: 8752720
dataset_size: 9560357
- config_name: fa
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 202609
num_examples: 285
- name: test
num_bytes: 11611890
num_examples: 14042
download_size: 9564082
dataset_size: 11814499
- config_name: fil
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 165182
num_examples: 285
- name: test
num_bytes: 9510179
num_examples: 14042
download_size: 8564879
dataset_size: 9675361
- config_name: fr
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 166173
num_examples: 285
- name: test
num_bytes: 9858873
num_examples: 14042
download_size: 9202595
dataset_size: 10025046
- config_name: ha
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 147406
num_examples: 285
- name: test
num_bytes: 8445707
num_examples: 14042
download_size: 7665529
dataset_size: 8593113
- config_name: he
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 178912
num_examples: 285
- name: test
num_bytes: 10248592
num_examples: 14042
download_size: 8818618
dataset_size: 10427504
- config_name: hi
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 308254
num_examples: 285
- name: test
num_bytes: 17970478
num_examples: 14042
download_size: 12407854
dataset_size: 18278732
- config_name: id
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 154692
num_examples: 285
- name: test
num_bytes: 8886643
num_examples: 14042
download_size: 7793365
dataset_size: 9041335
- config_name: ig
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 157376
num_examples: 285
- name: test
num_bytes: 9221405
num_examples: 14042
download_size: 7644102
dataset_size: 9378781
- config_name: it
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 157547
num_examples: 285
- name: test
num_bytes: 9374481
num_examples: 14042
download_size: 8873034
dataset_size: 9532028
- config_name: ja
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 167646
num_examples: 285
- name: test
num_bytes: 9830716
num_examples: 14042
download_size: 8826164
dataset_size: 9998362
- config_name: ko
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 160572
num_examples: 285
- name: test
num_bytes: 9454859
num_examples: 14042
download_size: 8640457
dataset_size: 9615431
- config_name: ky
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 235001
num_examples: 285
- name: test
num_bytes: 13483934
num_examples: 14042
download_size: 11148813
dataset_size: 13718935
- config_name: lt
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 148917
num_examples: 285
- name: test
num_bytes: 8504949
num_examples: 14042
download_size: 8416467
dataset_size: 8653866
- config_name: mg
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 161992
num_examples: 285
- name: test
num_bytes: 9337415
num_examples: 14042
download_size: 8011427
dataset_size: 9499407
- config_name: ms
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 152549
num_examples: 285
- name: test
num_bytes: 8823844
num_examples: 14042
download_size: 7783581
dataset_size: 8976393
- config_name: ne
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 294790
num_examples: 285
- name: test
num_bytes: 16972110
num_examples: 14042
download_size: 11895818
dataset_size: 17266900
- config_name: nl
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 158122
num_examples: 285
- name: test
num_bytes: 9099176
num_examples: 14042
download_size: 8565959
dataset_size: 9257298
- config_name: ny
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 151315
num_examples: 285
- name: test
num_bytes: 8686819
num_examples: 14042
download_size: 7822699
dataset_size: 8838134
- config_name: pl
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 157290
num_examples: 285
- name: test
num_bytes: 8980730
num_examples: 14042
download_size: 8981270
dataset_size: 9138020
- config_name: pt
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 154592
num_examples: 285
- name: test
num_bytes: 8983299
num_examples: 14042
download_size: 8517588
dataset_size: 9137891
- config_name: ro
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 158311
num_examples: 285
- name: test
num_bytes: 9163189
num_examples: 14042
download_size: 8773232
dataset_size: 9321500
- config_name: ru
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 246059
num_examples: 285
- name: test
num_bytes: 14059847
num_examples: 14042
download_size: 11904365
dataset_size: 14305906
- config_name: si
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 297843
num_examples: 285
- name: test
num_bytes: 17374939
num_examples: 14042
download_size: 12790098
dataset_size: 17672782
- config_name: sn
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 147355
num_examples: 285
- name: test
num_bytes: 8507368
num_examples: 14042
download_size: 7962672
dataset_size: 8654723
- config_name: so
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 156282
num_examples: 285
- name: test
num_bytes: 9033243
num_examples: 14042
download_size: 8706693
dataset_size: 9189525
- config_name: sr
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 221580
num_examples: 285
- name: test
num_bytes: 12695546
num_examples: 14042
download_size: 10748391
dataset_size: 12917126
- config_name: sv
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 147893
num_examples: 285
- name: test
num_bytes: 8549708
num_examples: 14042
download_size: 8181997
dataset_size: 8697601
- config_name: sw
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 147069
num_examples: 285
- name: test
num_bytes: 8653210
num_examples: 14042
download_size: 7932986
dataset_size: 8800279
- config_name: te
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 315724
num_examples: 285
- name: test
num_bytes: 18170058
num_examples: 14042
download_size: 12631358
dataset_size: 18485782
- config_name: tr
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 153426
num_examples: 285
- name: test
num_bytes: 8833244
num_examples: 14042
download_size: 8351339
dataset_size: 8986670
- config_name: uk
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 229888
num_examples: 285
- name: test
num_bytes: 13233771
num_examples: 14042
download_size: 11347842
dataset_size: 13463659
- config_name: vi
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 185712
num_examples: 285
- name: test
num_bytes: 10604332
num_examples: 14042
download_size: 8971266
dataset_size: 10790044
- config_name: yo
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 153810
num_examples: 285
- name: test
num_bytes: 10694916
num_examples: 14042
download_size: 9303668
dataset_size: 10848726
- config_name: zh
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 127577
num_examples: 285
- name: test
num_bytes: 7393764
num_examples: 14042
download_size: 7322261
dataset_size: 7521341
configs:
- config_name: am
data_files:
- split: test
path: am/test-*
- split: dev
path: am/dev-*
- config_name: ar
data_files:
- split: test
path: ar/test-*
- split: dev
path: ar/dev-*
- config_name: bn
data_files:
- split: test
path: bn/test-*
- split: dev
path: bn/dev-*
- config_name: cs
data_files:
- split: test
path: cs/test-*
- split: dev
path: cs/dev-*
- config_name: de
data_files:
- split: test
path: de/test-*
- split: dev
path: de/dev-*
- config_name: el
data_files:
- split: test
path: el/test-*
- split: dev
path: el/dev-*
- config_name: en
data_files:
- split: test
path: en/test-*
- split: dev
path: en/dev-*
- config_name: es
data_files:
- split: test
path: es/test-*
- split: dev
path: es/dev-*
- config_name: fa
data_files:
- split: test
path: fa/test-*
- split: dev
path: fa/dev-*
- config_name: fil
data_files:
- split: test
path: fil/test-*
- split: dev
path: fil/dev-*
- config_name: fr
data_files:
- split: test
path: fr/test-*
- split: dev
path: fr/dev-*
- config_name: ha
data_files:
- split: test
path: ha/test-*
- split: dev
path: ha/dev-*
- config_name: he
data_files:
- split: test
path: he/test-*
- split: dev
path: he/dev-*
- config_name: hi
data_files:
- split: test
path: hi/test-*
- split: dev
path: hi/dev-*
- config_name: id
data_files:
- split: test
path: id/test-*
- split: dev
path: id/dev-*
- config_name: ig
data_files:
- split: test
path: ig/test-*
- split: dev
path: ig/dev-*
- config_name: it
data_files:
- split: test
path: it/test-*
- split: dev
path: it/dev-*
- config_name: ja
data_files:
- split: test
path: ja/test-*
- split: dev
path: ja/dev-*
- config_name: ko
data_files:
- split: test
path: ko/test-*
- split: dev
path: ko/dev-*
- config_name: ky
data_files:
- split: test
path: ky/test-*
- split: dev
path: ky/dev-*
- config_name: lt
data_files:
- split: test
path: lt/test-*
- split: dev
path: lt/dev-*
- config_name: mg
data_files:
- split: test
path: mg/test-*
- split: dev
path: mg/dev-*
- config_name: ms
data_files:
- split: test
path: ms/test-*
- split: dev
path: ms/dev-*
- config_name: ne
data_files:
- split: test
path: ne/test-*
- split: dev
path: ne/dev-*
- config_name: nl
data_files:
- split: test
path: nl/test-*
- split: dev
path: nl/dev-*
- config_name: ny
data_files:
- split: test
path: ny/test-*
- split: dev
path: ny/dev-*
- config_name: pl
data_files:
- split: test
path: pl/test-*
- split: dev
path: pl/dev-*
- config_name: pt
data_files:
- split: test
path: pt/test-*
- split: dev
path: pt/dev-*
- config_name: ro
data_files:
- split: test
path: ro/test-*
- split: dev
path: ro/dev-*
- config_name: ru
data_files:
- split: test
path: ru/test-*
- split: dev
path: ru/dev-*
- config_name: si
data_files:
- split: test
path: si/test-*
- split: dev
path: si/dev-*
- config_name: sn
data_files:
- split: test
path: sn/test-*
- split: dev
path: sn/dev-*
- config_name: so
data_files:
- split: test
path: so/test-*
- split: dev
path: so/dev-*
- config_name: sr
data_files:
- split: test
path: sr/test-*
- split: dev
path: sr/dev-*
- config_name: sv
data_files:
- split: test
path: sv/test-*
- split: dev
path: sv/dev-*
- config_name: sw
data_files:
- split: test
path: sw/test-*
- split: dev
path: sw/dev-*
- config_name: te
data_files:
- split: test
path: te/test-*
- split: dev
path: te/dev-*
- config_name: tr
data_files:
- split: test
path: tr/test-*
- split: dev
path: tr/dev-*
- config_name: uk
data_files:
- split: test
path: uk/test-*
- split: dev
path: uk/dev-*
- config_name: vi
data_files:
- split: test
path: vi/test-*
- split: dev
path: vi/dev-*
- config_name: yo
data_files:
- split: test
path: yo/test-*
- split: dev
path: yo/dev-*
- config_name: zh
data_files:
- split: test
path: zh/test-*
- split: dev
path: zh/dev-*
tags:
- argilla
language:
- en
- ar
- bn
- es
- fr
- hi
- ru
- de
- id
- it
- ja
- ko
- pt
- zh
- yo
- nl
- ro
- uk
- vi
- tr
- pl
- fa
- cs
- he
- el
- ms
- fil
- te
- si
- ne
- ky
- sv
- lt
- sr
- mg
- so
- ha
- am
- sn
- ig
- ny
- sw
---
![GlobalMMLU Header](https://huggingface.co/datasets/CohereForAI/Global-MMLU/resolve/main/global_mmlu.jpg)
# Dataset Summary
[Global-MMLU](https://arxiv.org/abs/2412.03304) 🌍 is a multilingual evaluation set spanning 42 languages, including English. This dataset combines machine translations of [MMLU](https://huggingface.co/datasets/cais/mmlu) questions with professional translations and crowd-sourced post-edits.
It also includes cultural sensitivity annotations for a subset of the questions (2850 questions per language) and classifies them as *Culturally Sensitive* (CS) 🗽 or *Culturally Agnostic* (CA) ⚖️. These annotations were collected as part of an open science initiative led by Cohere For AI in collaboration with many external collaborators from both industry and academia.
- **Curated by:** Professional annotators and contributors of [Cohere For AI Community](https://cohere.com/research).
- **Language(s):** 42 languages.
- **License:** [Apache 2.0](https://opensource.org/license/apache-2-0)
**Note:** We also provide a "lite" version of Global MMLU called ["Global-MMLU-Lite"](https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite). This dataset is more balanced, containing 200 samples each for the CS and CA subsets per language, and it provides coverage for 15 languages with human translations.
### **Global-MMLU Dataset Family:**
| Name | Explanation |
|------|--------------|
| [Global-MMLU](https://huggingface.co/datasets/CohereForAI/Global-MMLU) | Full Global-MMLU set with translations for all 14K samples including CS and CA subsets|
| [Global-MMLU-Lite](https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite) | Lite version of Global-MMLU with human translated samples in 15 languages and containing 200 samples each for CS and CA subsets per language.|
## Load with Datasets
To load this dataset with `datasets`, you'll first need to install it using `pip install datasets` and then use the following code:
```python
from datasets import load_dataset
# load HF dataset
global_mmlu = load_dataset("CohereForAI/Global-MMLU", 'en')
# can also be used as pandas dataframe
global_mmlu.set_format("pandas")
global_mmlu_test = global_mmlu['test'][:]
global_mmlu_dev = global_mmlu['dev'][:]
```
<details>
<summary> additional details </summary>
The columns corresponding to annotations collected from our cultural bias study (i.e. 'required_knowledge', 'time_sensitive', 'reference', 'culture', 'region', 'country') contain a list of values representing annotations from different annotators.
However, to avoid conversion issues with the HF dataset format, these columns are provided as strings in the final dataset.
You can convert these columns back to list of values for easier manipulation as follows:
```python
import ast
# convert string values to list
global_mmlu_df['required_knowledge'] = global_mmlu_df['required_knowledge'].apply(ast.literal_eval)
```
</details>
<br>
## Data Fields
The data fields are the same among all splits. Brief description of each field is provided below.
<details>
<summary> data field description </summary>
- `sample_id`: A unique identifier for the question.
- `subject`: The main topic the question falls under.
- `subject_category`: The high-level category the subject falls under i.e. STEM/Humanities/Social Sciences/Medical/Business/Other.
- `question`: translated question from MMLU
- `option_a`: one of the possible option choices
- `option_b`: one of the possible option choices
- `option_c`: one of the possible option choices
- `option_d`: one of the possible option choices
- `answer`: the correct answer (A/B/C/D)
- `required_knowledge`: annotator votes for knowledge needed to answer the question correctly. Possible values include: "cultural", "regional", "dialect" or "none"
- `time_sensitive`: annotator votes indicating if the question's answer is time-dependent. Possible values include: Yes/No
- `reference`: annotations for which part of the question contains cultural/regional/dialect references. The different items in the list are annotations from different annotators.
- `culture`: annotations for which culture does the question belong to. The different items in the list correspond to annotations from different annotators.
- `region`: Geographic region the question is relevant to. Each item in the list correspond to annotations from different annotators.
- `country`: Specific country the question pertains to. Each item in the list correspond to annotations from different annotators.
- `cultural_sensitivity_label`: Label to indicate if question is culturally sensitive (CS) or culturally agnostic (CA) based on annotator votes.
- `is_annotated`: True/False flag to indicate if sample contains any annotations from our cultural bias study.
</details>
<br>
## Data Splits
The following are the splits of the data:
| Split | No. of instances | Language Coverage |
|-------|------------------|-------------------|
| test | 589,764 | 42 |
| dev | 11,970 | 42 |
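As a quick sanity check, the split sizes are simply the per-language instance counts multiplied across the 42 language configs:

```python
# Split totals = per-language instances x number of languages (42).
languages = 42
assert 14_042 * languages == 589_764  # test split
assert 285 * languages == 11_970      # dev split
print("split totals consistent")
```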
## Data Instances
An example from `test` set looks as follows:
```json
{'sample_id': 'world_religions/test/170',
'subject': 'world_religions',
'subject_category': 'Humanities',
'question': ' The numen of Augustus referred to which of the following characteristics?',
'option_a': 'Divine power',
'option_b': 'Sexual virility',
'option_c': 'Military acumen',
'option_d': 'Philosophical intellect',
'answer': 'A',
'required_knowledge': "['none', 'cultural', 'cultural', 'cultural']",
'time_sensitive': "['No', 'No', 'No', 'No']",
'reference': "['-', '-', {'end': 22, 'label': 'Cultural', 'score': None, 'start': 5}, {'end': 22, 'label': 'Cultural', 'score': None, 'start': 5}]",
'culture': "['Western Culture', 'Western Culture', 'Western Culture']",
'region': "['North America', 'Europe']",
'country': "['Italy']",
'cultural_sensitivity_label': 'CS',
'is_annotated': True,
}
```
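Following the conversion tip above, the stringified annotation fields in a record like this can be parsed back into per-annotator vote lists and aggregated. The majority-vote aggregation below is only an illustration, not necessarily the exact rule used to derive `cultural_sensitivity_label`:

```python
import ast
from collections import Counter

# Annotation columns are stored as stringified Python lists
# (values copied from the sample record above).
record = {
    'required_knowledge': "['none', 'cultural', 'cultural', 'cultural']",
    'culture': "['Western Culture', 'Western Culture', 'Western Culture']",
}

# Parse each string column back into a list of per-annotator votes.
votes = {k: ast.literal_eval(v) for k, v in record.items()}

# Aggregate with a simple majority vote (illustrative only).
majority = Counter(votes['required_knowledge']).most_common(1)[0][0]
print(majority)  # 'cultural'
```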
## Statistics
### Annotation Types
The following is the breakdown of CS🗽, CA⚖️ and MA📝 samples in the final dataset.
| Type of Annotation | Instances per language | No. of languages | Total instances |
|--------------------|------------------------|------------------|-----------------|
| Culturally Sensitive 🗽 | 792 | 42 | 33,264 |
| Culturally Agnostic ⚖️ | 2,058 | 42 | 86,436 |
| MMLU Annotated 📝 | 2,850 | 42 | 119,700 |
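The totals in this table follow directly from the per-language counts multiplied by the 42 languages, which can be checked in a couple of lines:

```python
# Sanity-check the annotation-type totals against the per-language counts.
languages = 42
cs_per_lang, ca_per_lang = 792, 2058
annotated_per_lang = cs_per_lang + ca_per_lang  # every annotated sample is CS or CA

assert annotated_per_lang == 2850
assert cs_per_lang * languages == 33_264
assert ca_per_lang * languages == 86_436
assert annotated_per_lang * languages == 119_700
print("annotation totals consistent")
```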
### Languages
The dataset covers 42 languages: 20 high-resource, 9 mid-resource, and 13 low-resource languages. Details about the languages included in the dataset are provided below.
<details>
<summary> Languages Info </summary>
| ISO Code | Language | Resources |
|----------|----------|-----------|
| `am` | Amharic | Low |
| `ar` | Arabic (Standard)| High |
| `bn` | Bengali | Mid |
| `cs` | Czech | High |
| `de` | German | High |
| `el` | Greek | Mid |
| `en` | English | High |
| `fil` | Filipino | Mid |
| `fr` | French | High |
| `ha` | Hausa | Low |
| `he` | Hebrew | Mid |
| `hi` | Hindi | High |
| `ig` | Igbo | Low |
| `id` | Indonesian | Mid |
| `it` | Italian | High |
| `ja` | Japanese | High |
| `ky` | Kyrgyz | Low |
| `ko` | Korean | Mid |
| `lt` | Lithuanian | Mid |
| `mg` | Malagasy | Low |
| `ms` | Malay | Mid |
| `ne` | Nepali | Low |
| `nl` | Dutch | High |
| `ny` | Chichewa | Low |
| `fa` | Persian | High |
| `pl` | Polish | High |
| `pt` | Portuguese | High |
| `ro` | Romanian | High |
| `ru` | Russian | High |
| `si` | Sinhala | Low |
| `sn` | Shona | Low |
| `so` | Somali | Low |
| `es` | Spanish | High |
| `sr` | Serbian | High |
| `sw` | Swahili | Low |
| `sv` | Swedish | High |
| `te` | Telugu | Low |
| `tr` | Turkish | High |
| `uk` | Ukrainian | Mid |
| `vi` | Vietnamese | High |
| `yo` | Yorùbá | Low |
| `zh` | Chinese (Simplified) | High |
</details>
<br>
# Known Limitations
A brief overview of limitations of this dataset is provided below.
<details>
<summary> show limitations </summary>
- **Language and dialect coverage:** Global-MMLU focuses on 42 languages. However, this is still only a tiny fraction of the world's linguistic diversity. Future work is needed to continue to improve evaluations beyond these 42 languages and take into account how technology serves different dialects.
- **Uneven distribution of contributions:** The dataset contains translation post-edits from community volunteers, with a 'long tail' of volunteers making only one or two contributions. Similarly, there is a huge gap between languages with the highest number of contributions and ones with the lowest number of contributions.
- **Toxic or offensive speech:** Our annotation process did not focus on flagging toxic, harmful, or offensive speech, so it is possible that Global-MMLU contains some data that could be considered harmful. We believe this risk is relatively low given the nature of the original MMLU and its focus on examination material.
- **Region Category Assignment:** For the annotation of geographically sensitive questions, we classified regions into six geographic regions (Africa, Asia, Europe, North America, Oceania, and South America). However, based on subsequent discussions, we recommend that future work adopt the taxonomy proposed by the World Bank, which is more granular and includes separate designations for Central America and Sub-Saharan Africa.
- **Identifying cultural sensitivity does not guarantee cultural inclusion:** While Global-MMLU highlights important limitations in current datasets by identifying gaps in non-Western cultural representation, future work must prioritize the integration of diverse, culturally grounded knowledge to achieve true inclusivity and fairness in multilingual AI evaluation.
</details>
<br>
# Additional Information
## Provenance
- **Methods Used:** Professional annotations as well as crowd-sourced through volunteer annotations.
- **Methodology Details:** We collected cultural bias annotations as well as post-edits of translations for different MMLU questions.
- [Cultural Sensitivity Annotation Platform](https://huggingface.co/spaces/CohereForAI/MMLU-evaluation)
- [Translation Quality Annotation Platform](https://huggingface.co/spaces/CohereForAI/review-mmlu-translations)
- Dates of Collection: May 2024 - Aug 2024
## Dataset Version and Maintenance
- **Maintenance Status:** Actively Maintained
- **Version Details:**
- *Current version:* 1.0
- *Last Update:* 12/2024
- *First Release:* 12/2024
## Authorship
- **Publishing Organization:** [Cohere For AI](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
## Licensing Information
This dataset can be used for any purpose, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
## Additional Details
For any additional details, please check our paper, [Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation](https://arxiv.org/abs/2412.03304).
## Citation Information
```bibtex
@misc{singh2024globalmmluunderstandingaddressing,
title={Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation},
author={Shivalika Singh and Angelika Romanou and Clémentine Fourrier and David I. Adelani and Jian Gang Ngui and Daniel Vila-Suero and Peerat Limkonchotiwat and Kelly Marchisio and Wei Qi Leong and Yosephine Susanto and Raymond Ng and Shayne Longpre and Wei-Yin Ko and Madeline Smith and Antoine Bosselut and Alice Oh and Andre F. T. Martins and Leshem Choshen and Daphne Ippolito and Enzo Ferrante and Marzieh Fadaee and Beyza Ermis and Sara Hooker},
year={2024},
eprint={2412.03304},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.03304},
}
``` |
fixie-ai/peoples_speech | fixie-ai | "2024-08-11T17:26:01Z" | 9,864 | 2 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-05T18:35:01Z" | ---
dataset_info:
- config_name: clean
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 2511523987.692
num_examples: 18622
- name: test
num_bytes: 4259695510.794
num_examples: 34898
- name: train
num_bytes: 401646320552.671
num_examples: 1501271
download_size: 398922548670
dataset_size: 408417540051
- config_name: dirty_sa
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 144432442623.054
num_examples: 548014
- name: validation
num_bytes: 2511524241.692
num_examples: 18622
- name: test
num_bytes: 4259695588.794
num_examples: 34898
download_size: 149491764186
dataset_size: 151203662453.53998
configs:
- config_name: clean
data_files:
- split: validation
path: clean/validation-*
- split: test
path: clean/test-*
- split: train
path: data/train-*
- config_name: dirty_sa
data_files:
- split: train
path: dirty_sa/train-*
- split: validation
path: dirty_sa/validation-*
- split: test
path: dirty_sa/test-*
---
|
dsfsi/vukuzenzele-sentence-aligned | dsfsi | "2023-11-27T11:28:54Z" | 9,818 | 1 | [
"task_categories:sentence-similarity",
"task_categories:translation",
"language:eng",
"language:afr",
"language:nbl",
"language:xho",
"language:zul",
"language:sot",
"language:nso",
"language:tsn",
"language:ssw",
"language:ven",
"language:tso",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"modality:tabular",
"modality:text",
"arxiv:2303.03750",
"region:us",
"multilingual",
"government"
] | [
"sentence-similarity",
"translation"
] | "2023-07-03T15:38:24Z" | ---
language:
- eng
- afr
- nbl
- xho
- zul
- sot
- nso
- tsn
- ssw
- ven
- tso
license: cc-by-4.0
task_categories:
- sentence-similarity
- translation
pretty_name: The Vuk'uzenzele South African Multilingual Corpus
tags:
- multilingual
- government
arxiv: 2303.03750
configs:
- config_name: afr-eng
data_files:
- split: train
path: afr-eng/train-*
- split: test
path: afr-eng/test-*
- split: eval
path: afr-eng/eval-*
- config_name: afr-nbl
data_files:
- split: train
path: afr-nbl/train-*
- split: test
path: afr-nbl/test-*
- split: eval
path: afr-nbl/eval-*
- config_name: afr-nso
data_files:
- split: train
path: afr-nso/train-*
- split: test
path: afr-nso/test-*
- split: eval
path: afr-nso/eval-*
- config_name: afr-sot
data_files:
- split: train
path: afr-sot/train-*
- split: test
path: afr-sot/test-*
- split: eval
path: afr-sot/eval-*
- config_name: afr-ssw
data_files:
- split: train
path: afr-ssw/train-*
- split: test
path: afr-ssw/test-*
- split: eval
path: afr-ssw/eval-*
- config_name: afr-tsn
data_files:
- split: train
path: afr-tsn/train-*
- split: test
path: afr-tsn/test-*
- split: eval
path: afr-tsn/eval-*
- config_name: afr-tso
data_files:
- split: train
path: afr-tso/train-*
- split: test
path: afr-tso/test-*
- split: eval
path: afr-tso/eval-*
- config_name: afr-ven
data_files:
- split: train
path: afr-ven/train-*
- split: test
path: afr-ven/test-*
- split: eval
path: afr-ven/eval-*
- config_name: afr-xho
data_files:
- split: train
path: afr-xho/train-*
- split: test
path: afr-xho/test-*
- split: eval
path: afr-xho/eval-*
- config_name: afr-zul
data_files:
- split: train
path: afr-zul/train-*
- split: test
path: afr-zul/test-*
- split: eval
path: afr-zul/eval-*
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: eng-nbl
data_files:
- split: train
path: eng-nbl/train-*
- split: test
path: eng-nbl/test-*
- split: eval
path: eng-nbl/eval-*
- config_name: eng-nso
data_files:
- split: train
path: eng-nso/train-*
- split: test
path: eng-nso/test-*
- split: eval
path: eng-nso/eval-*
- config_name: eng-sot
data_files:
- split: train
path: eng-sot/train-*
- split: test
path: eng-sot/test-*
- split: eval
path: eng-sot/eval-*
- config_name: eng-ssw
data_files:
- split: train
path: eng-ssw/train-*
- split: test
path: eng-ssw/test-*
- split: eval
path: eng-ssw/eval-*
- config_name: eng-tsn
data_files:
- split: train
path: eng-tsn/train-*
- split: test
path: eng-tsn/test-*
- split: eval
path: eng-tsn/eval-*
- config_name: eng-tso
data_files:
- split: train
path: eng-tso/train-*
- split: test
path: eng-tso/test-*
- split: eval
path: eng-tso/eval-*
- config_name: eng-ven
data_files:
- split: train
path: eng-ven/train-*
- split: test
path: eng-ven/test-*
- split: eval
path: eng-ven/eval-*
- config_name: eng-xho
data_files:
- split: train
path: eng-xho/train-*
- split: test
path: eng-xho/test-*
- split: eval
path: eng-xho/eval-*
- config_name: eng-zul
data_files:
- split: train
path: eng-zul/train-*
- split: test
path: eng-zul/test-*
- split: eval
path: eng-zul/eval-*
- config_name: nbl-nso
data_files:
- split: train
path: nbl-nso/train-*
- split: test
path: nbl-nso/test-*
- split: eval
path: nbl-nso/eval-*
- config_name: nbl-sot
data_files:
- split: train
path: nbl-sot/train-*
- split: test
path: nbl-sot/test-*
- split: eval
path: nbl-sot/eval-*
- config_name: nbl-ssw
data_files:
- split: train
path: nbl-ssw/train-*
- split: test
path: nbl-ssw/test-*
- split: eval
path: nbl-ssw/eval-*
- config_name: nbl-tsn
data_files:
- split: train
path: nbl-tsn/train-*
- split: test
path: nbl-tsn/test-*
- split: eval
path: nbl-tsn/eval-*
- config_name: nbl-tso
data_files:
- split: train
path: nbl-tso/train-*
- split: test
path: nbl-tso/test-*
- split: eval
path: nbl-tso/eval-*
- config_name: nbl-ven
data_files:
- split: train
path: nbl-ven/train-*
- split: test
path: nbl-ven/test-*
- split: eval
path: nbl-ven/eval-*
- config_name: nbl-xho
data_files:
- split: train
path: nbl-xho/train-*
- split: test
path: nbl-xho/test-*
- split: eval
path: nbl-xho/eval-*
- config_name: nbl-zul
data_files:
- split: train
path: nbl-zul/train-*
- split: test
path: nbl-zul/test-*
- split: eval
path: nbl-zul/eval-*
- config_name: nso-sot
data_files:
- split: train
path: nso-sot/train-*
- split: test
path: nso-sot/test-*
- split: eval
path: nso-sot/eval-*
- config_name: nso-ssw
data_files:
- split: train
path: nso-ssw/train-*
- split: test
path: nso-ssw/test-*
- split: eval
path: nso-ssw/eval-*
- config_name: nso-tsn
data_files:
- split: train
path: nso-tsn/train-*
- split: test
path: nso-tsn/test-*
- split: eval
path: nso-tsn/eval-*
- config_name: nso-tso
data_files:
- split: train
path: nso-tso/train-*
- split: test
path: nso-tso/test-*
- split: eval
path: nso-tso/eval-*
- config_name: nso-ven
data_files:
- split: train
path: nso-ven/train-*
- split: test
path: nso-ven/test-*
- split: eval
path: nso-ven/eval-*
- config_name: nso-xho
data_files:
- split: train
path: nso-xho/train-*
- split: test
path: nso-xho/test-*
- split: eval
path: nso-xho/eval-*
- config_name: nso-zul
data_files:
- split: train
path: nso-zul/train-*
- split: test
path: nso-zul/test-*
- split: eval
path: nso-zul/eval-*
- config_name: sot-ssw
data_files:
- split: train
path: sot-ssw/train-*
- split: test
path: sot-ssw/test-*
- split: eval
path: sot-ssw/eval-*
- config_name: sot-tsn
data_files:
- split: train
path: sot-tsn/train-*
- split: test
path: sot-tsn/test-*
- split: eval
path: sot-tsn/eval-*
- config_name: sot-tso
data_files:
- split: train
path: sot-tso/train-*
- split: test
path: sot-tso/test-*
- split: eval
path: sot-tso/eval-*
- config_name: sot-ven
data_files:
- split: train
path: sot-ven/train-*
- split: test
path: sot-ven/test-*
- split: eval
path: sot-ven/eval-*
- config_name: sot-xho
data_files:
- split: train
path: sot-xho/train-*
- split: test
path: sot-xho/test-*
- split: eval
path: sot-xho/eval-*
- config_name: sot-zul
data_files:
- split: train
path: sot-zul/train-*
- split: test
path: sot-zul/test-*
- split: eval
path: sot-zul/eval-*
- config_name: ssw-tsn
data_files:
- split: train
path: ssw-tsn/train-*
- split: test
path: ssw-tsn/test-*
- split: eval
path: ssw-tsn/eval-*
- config_name: ssw-tso
data_files:
- split: train
path: ssw-tso/train-*
- split: test
path: ssw-tso/test-*
- split: eval
path: ssw-tso/eval-*
- config_name: ssw-ven
data_files:
- split: train
path: ssw-ven/train-*
- split: test
path: ssw-ven/test-*
- split: eval
path: ssw-ven/eval-*
- config_name: ssw-xho
data_files:
- split: train
path: ssw-xho/train-*
- split: test
path: ssw-xho/test-*
- split: eval
path: ssw-xho/eval-*
- config_name: ssw-zul
data_files:
- split: train
path: ssw-zul/train-*
- split: test
path: ssw-zul/test-*
- split: eval
path: ssw-zul/eval-*
- config_name: tsn-tso
data_files:
- split: train
path: tsn-tso/train-*
- split: test
path: tsn-tso/test-*
- split: eval
path: tsn-tso/eval-*
- config_name: tsn-ven
data_files:
- split: train
path: tsn-ven/train-*
- split: test
path: tsn-ven/test-*
- split: eval
path: tsn-ven/eval-*
- config_name: tsn-xho
data_files:
- split: train
path: tsn-xho/train-*
- split: test
path: tsn-xho/test-*
- split: eval
path: tsn-xho/eval-*
- config_name: tsn-zul
data_files:
- split: train
path: tsn-zul/train-*
- split: test
path: tsn-zul/test-*
- split: eval
path: tsn-zul/eval-*
- config_name: tso-ven
data_files:
- split: train
path: tso-ven/train-*
- split: test
path: tso-ven/test-*
- split: eval
path: tso-ven/eval-*
- config_name: tso-xho
data_files:
- split: train
path: tso-xho/train-*
- split: test
path: tso-xho/test-*
- split: eval
path: tso-xho/eval-*
- config_name: tso-zul
data_files:
- split: train
path: tso-zul/train-*
- split: test
path: tso-zul/test-*
- split: eval
path: tso-zul/eval-*
- config_name: ven-xho
data_files:
- split: train
path: ven-xho/train-*
- split: test
path: ven-xho/test-*
- split: eval
path: ven-xho/eval-*
- config_name: ven-zul
data_files:
- split: train
path: ven-zul/train-*
- split: test
path: ven-zul/test-*
- split: eval
path: ven-zul/eval-*
- config_name: xho-zul
data_files:
- split: train
path: xho-zul/train-*
- split: test
path: xho-zul/test-*
- split: eval
path: xho-zul/eval-*
dataset_info:
- config_name: afr-eng
features:
- name: afr
dtype: string
- name: eng
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 793530
num_examples: 2660
- name: test
num_bytes: 171644
num_examples: 570
- name: eval
num_bytes: 172132
num_examples: 571
download_size: 757198
dataset_size: 1137306
- config_name: afr-nbl
features:
- name: afr
dtype: string
- name: nbl
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 281328
num_examples: 723
- name: test
num_bytes: 57947
num_examples: 155
- name: eval
num_bytes: 59996
num_examples: 155
download_size: 279950
dataset_size: 399271
- config_name: afr-nso
features:
- name: afr
dtype: string
- name: nso
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 980475
num_examples: 2937
- name: test
num_bytes: 203451
num_examples: 630
- name: eval
num_bytes: 214623
num_examples: 630
download_size: 892392
dataset_size: 1398549
- config_name: afr-sot
features:
- name: afr
dtype: string
- name: sot
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 971841
num_examples: 2968
- name: test
num_bytes: 211374
num_examples: 636
- name: eval
num_bytes: 209697
num_examples: 636
download_size: 901006
dataset_size: 1392912
- config_name: afr-ssw
features:
- name: afr
dtype: string
- name: ssw
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 975744
num_examples: 3042
- name: test
num_bytes: 209151
num_examples: 652
- name: eval
num_bytes: 208877
num_examples: 653
download_size: 927666
dataset_size: 1393772
- config_name: afr-tsn
features:
- name: afr
dtype: string
- name: tsn
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1010506
num_examples: 2830
- name: test
num_bytes: 218153
num_examples: 607
- name: eval
num_bytes: 214373
num_examples: 607
download_size: 913596
dataset_size: 1443032
- config_name: afr-tso
features:
- name: afr
dtype: string
- name: tso
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 980025
num_examples: 2952
- name: test
num_bytes: 213355
num_examples: 633
- name: eval
num_bytes: 211642
num_examples: 633
download_size: 902666
dataset_size: 1405022
- config_name: afr-ven
features:
- name: afr
dtype: string
- name: ven
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 352031
num_examples: 830
- name: test
num_bytes: 72702
num_examples: 178
- name: eval
num_bytes: 75243
num_examples: 178
download_size: 323825
dataset_size: 499976
- config_name: afr-xho
features:
- name: afr
dtype: string
- name: xho
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 986062
num_examples: 3098
- name: test
num_bytes: 205229
num_examples: 664
- name: eval
num_bytes: 210379
num_examples: 665
download_size: 944334
dataset_size: 1401670
- config_name: afr-zul
features:
- name: afr
dtype: string
- name: zul
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 982102
num_examples: 3078
- name: test
num_bytes: 208473
num_examples: 660
- name: eval
num_bytes: 201824
num_examples: 660
download_size: 932565
dataset_size: 1392399
- config_name: default
features:
- name: nbl
dtype: string
- name: nso
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 128131
num_examples: 315
- name: test
num_bytes: 31826
num_examples: 79
download_size: 113394
dataset_size: 159957
- config_name: eng-nbl
features:
- name: eng
dtype: string
- name: nbl
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 78419
num_examples: 207
- name: test
num_bytes: 16930
num_examples: 45
- name: eval
num_bytes: 15202
num_examples: 45
download_size: 89654
dataset_size: 110551
- config_name: eng-nso
features:
- name: eng
dtype: string
- name: nso
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 848347
num_examples: 2616
- name: test
num_bytes: 183267
num_examples: 561
- name: eval
num_bytes: 181802
num_examples: 561
download_size: 770909
dataset_size: 1213416
- config_name: eng-sot
features:
- name: eng
dtype: string
- name: sot
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 843173
num_examples: 2671
- name: test
num_bytes: 181709
num_examples: 573
- name: eval
num_bytes: 180602
num_examples: 573
download_size: 776145
dataset_size: 1205484
- config_name: eng-ssw
features:
- name: eng
dtype: string
- name: ssw
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 830354
num_examples: 2662
- name: test
num_bytes: 175688
num_examples: 571
- name: eval
num_bytes: 176734
num_examples: 571
download_size: 777951
dataset_size: 1182776
- config_name: eng-tsn
features:
- name: eng
dtype: string
- name: tsn
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 880306
num_examples: 2517
- name: test
num_bytes: 190843
num_examples: 539
- name: eval
num_bytes: 187728
num_examples: 540
download_size: 786563
dataset_size: 1258877
- config_name: eng-tso
features:
- name: eng
dtype: string
- name: tso
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 849914
num_examples: 2623
- name: test
num_bytes: 181181
num_examples: 562
- name: eval
num_bytes: 176362
num_examples: 563
download_size: 773662
dataset_size: 1207457
- config_name: eng-ven
features:
- name: eng
dtype: string
- name: ven
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 113647
num_examples: 279
- name: test
num_bytes: 26195
num_examples: 60
- name: eval
num_bytes: 26121
num_examples: 60
download_size: 119271
dataset_size: 165963
- config_name: eng-xho
features:
- name: eng
dtype: string
- name: xho
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 827201
num_examples: 2662
- name: test
num_bytes: 175023
num_examples: 571
- name: eval
num_bytes: 176047
num_examples: 571
download_size: 784961
dataset_size: 1178271
- config_name: eng-zul
features:
- name: eng
dtype: string
- name: zul
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 815837
num_examples: 2646
- name: test
num_bytes: 168769
num_examples: 567
- name: eval
num_bytes: 177547
num_examples: 567
download_size: 767836
dataset_size: 1162153
- config_name: nbl-nso
features:
- name: nbl
dtype: string
- name: nso
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 109438
num_examples: 275
- name: test
num_bytes: 24000
num_examples: 59
- name: eval
num_bytes: 26519
num_examples: 60
download_size: 118816
dataset_size: 159957
- config_name: nbl-sot
features:
- name: nbl
dtype: string
- name: sot
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 149593
num_examples: 365
- name: test
num_bytes: 30656
num_examples: 78
- name: eval
num_bytes: 32211
num_examples: 79
download_size: 152576
dataset_size: 212460
- config_name: nbl-ssw
features:
- name: nbl
dtype: string
- name: ssw
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 146544
num_examples: 387
- name: test
num_bytes: 33410
num_examples: 83
- name: eval
num_bytes: 32858
num_examples: 84
download_size: 157314
dataset_size: 212812
- config_name: nbl-tsn
features:
- name: nbl
dtype: string
- name: tsn
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 131856
num_examples: 302
- name: test
num_bytes: 31961
num_examples: 65
- name: eval
num_bytes: 29676
num_examples: 65
download_size: 139315
dataset_size: 193493
- config_name: nbl-tso
features:
- name: nbl
dtype: string
- name: tso
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 127433
num_examples: 296
- name: test
num_bytes: 24654
num_examples: 63
- name: eval
num_bytes: 23290
num_examples: 64
download_size: 127532
dataset_size: 175377
- config_name: nbl-ven
features:
- name: nbl
dtype: string
- name: ven
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 976911
num_examples: 2660
- name: test
num_bytes: 211536
num_examples: 570
- name: eval
num_bytes: 207694
num_examples: 570
download_size: 885066
dataset_size: 1396141
- config_name: nbl-xho
features:
- name: nbl
dtype: string
- name: xho
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 149020
num_examples: 403
- name: test
num_bytes: 33319
num_examples: 87
- name: eval
num_bytes: 31809
num_examples: 87
download_size: 160427
dataset_size: 214148
- config_name: nbl-zul
features:
- name: nbl
dtype: string
- name: zul
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 111905
num_examples: 289
- name: test
num_bytes: 25799
num_examples: 62
- name: eval
num_bytes: 22660
num_examples: 63
download_size: 124588
dataset_size: 160364
- config_name: nso-sot
features:
- name: nso
dtype: string
- name: sot
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1057851
num_examples: 3052
- name: test
num_bytes: 226420
num_examples: 654
- name: eval
num_bytes: 232934
num_examples: 655
download_size: 945243
dataset_size: 1517205
- config_name: nso-ssw
features:
- name: nso
dtype: string
- name: ssw
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1020738
num_examples: 2968
- name: test
num_bytes: 219932
num_examples: 636
- name: eval
num_bytes: 218161
num_examples: 637
download_size: 922981
dataset_size: 1458831
- config_name: nso-tsn
features:
- name: nso
dtype: string
- name: tsn
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1092885
num_examples: 2918
- name: test
num_bytes: 238439
num_examples: 625
- name: eval
num_bytes: 234644
num_examples: 626
download_size: 952272
dataset_size: 1565968
- config_name: nso-tso
features:
- name: nso
dtype: string
- name: tso
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1045959
num_examples: 2956
- name: test
num_bytes: 228677
num_examples: 634
- name: eval
num_bytes: 226626
num_examples: 634
download_size: 925262
dataset_size: 1501262
- config_name: nso-ven
features:
- name: nso
dtype: string
- name: ven
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 155499
num_examples: 343
- name: test
num_bytes: 35576
num_examples: 73
- name: eval
num_bytes: 31381
num_examples: 74
download_size: 152424
dataset_size: 222456
- config_name: nso-xho
features:
- name: nso
dtype: string
- name: xho
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1022670
num_examples: 2959
- name: test
num_bytes: 214850
num_examples: 634
- name: eval
num_bytes: 212932
num_examples: 635
download_size: 929486
dataset_size: 1450452
- config_name: nso-zul
features:
- name: nso
dtype: string
- name: zul
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1014985
num_examples: 2998
- name: test
num_bytes: 223825
num_examples: 643
- name: eval
num_bytes: 219173
num_examples: 643
download_size: 926742
dataset_size: 1457983
- config_name: sot-ssw
features:
- name: sot
dtype: string
- name: ssw
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1036642
num_examples: 3075
- name: test
num_bytes: 217328
num_examples: 659
- name: eval
num_bytes: 222863
num_examples: 660
download_size: 950426
dataset_size: 1476833
- config_name: sot-tsn
features:
- name: sot
dtype: string
- name: tsn
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1118346
num_examples: 3019
- name: test
num_bytes: 237826
num_examples: 647
- name: eval
num_bytes: 235279
num_examples: 647
download_size: 981019
dataset_size: 1591451
- config_name: sot-tso
features:
- name: sot
dtype: string
- name: tso
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1057631
num_examples: 3027
- name: test
num_bytes: 226229
num_examples: 649
- name: eval
num_bytes: 222671
num_examples: 649
download_size: 943068
dataset_size: 1506531
- config_name: sot-ven
features:
- name: sot
dtype: string
- name: ven
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 201166
num_examples: 461
- name: test
num_bytes: 44845
num_examples: 99
- name: eval
num_bytes: 42607
num_examples: 99
download_size: 191947
dataset_size: 288618
- config_name: sot-xho
features:
- name: sot
dtype: string
- name: xho
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1042930
num_examples: 3098
- name: test
num_bytes: 217327
num_examples: 664
- name: eval
num_bytes: 223313
num_examples: 665
download_size: 964792
dataset_size: 1483570
- config_name: sot-zul
features:
- name: sot
dtype: string
- name: zul
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1017097
num_examples: 3079
- name: test
num_bytes: 223761
num_examples: 660
- name: eval
num_bytes: 227514
num_examples: 660
download_size: 949761
dataset_size: 1468372
- config_name: ssw-tsn
features:
- name: ssw
dtype: string
- name: tsn
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1124400
num_examples: 3110
- name: test
num_bytes: 238160
num_examples: 666
- name: eval
num_bytes: 246176
num_examples: 667
download_size: 1012570
dataset_size: 1608736
- config_name: ssw-tso
features:
- name: ssw
dtype: string
- name: tso
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1069540
num_examples: 3142
- name: test
num_bytes: 237608
num_examples: 673
- name: eval
num_bytes: 231657
num_examples: 674
download_size: 980833
dataset_size: 1538805
- config_name: ssw-ven
features:
- name: ssw
dtype: string
- name: ven
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 170825
num_examples: 401
- name: test
num_bytes: 34774
num_examples: 86
- name: eval
num_bytes: 39434
num_examples: 87
download_size: 170522
dataset_size: 245033
- config_name: ssw-xho
features:
- name: ssw
dtype: string
- name: xho
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1047037
num_examples: 3193
- name: test
num_bytes: 227505
num_examples: 684
- name: eval
num_bytes: 219981
num_examples: 685
download_size: 992683
dataset_size: 1494523
- config_name: ssw-zul
features:
- name: ssw
dtype: string
- name: zul
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1054772
num_examples: 3255
- name: test
num_bytes: 231524
num_examples: 698
- name: eval
num_bytes: 223701
num_examples: 698
download_size: 997182
dataset_size: 1509997
- config_name: tsn-tso
features:
- name: tsn
dtype: string
- name: tso
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1141163
num_examples: 3023
- name: test
num_bytes: 244100
num_examples: 648
- name: eval
num_bytes: 242886
num_examples: 648
download_size: 998631
dataset_size: 1628149
- config_name: tsn-ven
features:
- name: tsn
dtype: string
- name: ven
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 175353
num_examples: 361
- name: test
num_bytes: 39141
num_examples: 77
- name: eval
num_bytes: 37453
num_examples: 78
download_size: 165408
dataset_size: 251947
- config_name: tsn-xho
features:
- name: tsn
dtype: string
- name: xho
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1095514
num_examples: 2989
- name: test
num_bytes: 231544
num_examples: 640
- name: eval
num_bytes: 227856
num_examples: 641
download_size: 986295
dataset_size: 1554914
- config_name: tsn-zul
features:
- name: tsn
dtype: string
- name: zul
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1090114
num_examples: 3038
- name: test
num_bytes: 232488
num_examples: 651
- name: eval
num_bytes: 240758
num_examples: 651
download_size: 989654
dataset_size: 1563360
- config_name: tso-ven
features:
- name: tso
dtype: string
- name: ven
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 171428
num_examples: 375
- name: test
num_bytes: 33029
num_examples: 80
- name: eval
num_bytes: 38079
num_examples: 81
download_size: 163896
dataset_size: 242536
- config_name: tso-xho
features:
- name: tso
dtype: string
- name: xho
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1049926
num_examples: 3066
- name: test
num_bytes: 224708
num_examples: 657
- name: eval
num_bytes: 221699
num_examples: 657
download_size: 967978
dataset_size: 1496333
- config_name: tso-zul
features:
- name: tso
dtype: string
- name: zul
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1059314
num_examples: 3106
- name: test
num_bytes: 224935
num_examples: 666
- name: eval
num_bytes: 225248
num_examples: 666
download_size: 970505
dataset_size: 1509497
- config_name: ven-xho
features:
- name: ven
dtype: string
- name: xho
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 164305
num_examples: 401
- name: test
num_bytes: 36290
num_examples: 86
- name: eval
num_bytes: 35520
num_examples: 87
download_size: 165177
dataset_size: 236115
- config_name: ven-zul
features:
- name: ven
dtype: string
- name: zul
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 138178
num_examples: 336
- name: test
num_bytes: 32949
num_examples: 72
- name: eval
num_bytes: 30697
num_examples: 72
download_size: 143542
dataset_size: 201824
- config_name: xho-zul
features:
- name: xho
dtype: string
- name: zul
dtype: string
- name: score
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1062980
num_examples: 3276
- name: test
num_bytes: 226001
num_examples: 702
- name: eval
num_bytes: 225893
num_examples: 703
download_size: 1011124
dataset_size: 1514874
---
# The Vuk'uzenzele South African Multilingual Corpus
Github: [https://github.com/dsfsi/vukuzenzele-nlp/](https://github.com/dsfsi/vukuzenzele-nlp/)
Zenodo: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7598539.svg)](https://doi.org/10.5281/zenodo.7598539)
Arxiv Preprint: [![arXiv](https://img.shields.io/badge/arXiv-2303.03750-b31b1b.svg)](https://arxiv.org/abs/2303.03750)
Give Feedback 📑: [DSFSI Resource Feedback Form](https://docs.google.com/forms/d/e/1FAIpQLSf7S36dyAUPx2egmXbFpnTBuzoRulhL5Elu-N1eoMhaO7v10w/formResponse)
# About
The dataset was obtained from the South African government magazine Vuk'uzenzele, created by the [Government Communication and Information System (GCIS)](https://www.gcis.gov.za/).
The original raw PDFs were obtained from the [Vuk'uzenzele website](https://www.vukuzenzele.gov.za/).
The datasets contain government magazine editions in 11 languages, namely:
| Language | Code | Language | Code |
|------------|-------|------------|-------|
| English | (eng) | Sepedi | (nso) |
| Afrikaans | (afr) | Setswana | (tsn) |
| isiNdebele | (nbl) | Siswati | (ssw) |
| isiXhosa | (xho) | Tshivenda | (ven) |
| isiZulu | (zul) | Xitsonga | (tso) |
| Sesotho | (sot) | | |
## Available pairings
The alignment direction is bidirectional, i.e. xho-zul is the same pairing as zul-xho:
afr-eng; afr-nbl; afr-nso; afr-sot; afr-ssw; afr-tsn; afr-tso; afr-ven; afr-xho; afr-zul
eng-nbl; eng-nso; eng-sot; eng-ssw; eng-tsn; eng-tso; eng-ven; eng-xho; eng-zul
nbl-nso; nbl-sot; nbl-ssw; nbl-tsn; nbl-tso; nbl-ven; nbl-xho; nbl-zul
nso-sot; nso-ssw; nso-tsn; nso-tso; nso-ven; nso-xho; nso-zul
sot-ssw; sot-tsn; sot-tso; sot-ven; sot-xho; sot-zul
ssw-tsn; ssw-tso; ssw-ven; ssw-xho; ssw-zul
tsn-tso; tsn-ven; tsn-xho; tsn-zul
tso-ven; tso-xho; tso-zul
ven-xho; ven-zul
xho-zul
# Disclaimer
This dataset contains machine-readable data extracted from PDF documents, from https://www.vukuzenzele.gov.za/, provided by the Government Communication Information System (GCIS). While efforts were made to ensure the accuracy and completeness of this data, there may be errors or discrepancies between the original publications and this dataset. No warranties, guarantees or representations are given in relation to the information contained in the dataset. The members of the Data Science for Societal Impact Research Group bear no responsibility and/or liability for any such errors or discrepancies in this dataset. The Government Communication Information System (GCIS) bears no responsibility and/or liability for any such errors or discrepancies in this dataset. It is recommended that users verify all information contained herein before making decisions based upon this information.
# Datasets
The datasets consist of pairwise sentence-aligned data. There are 55 distinct datasets of paired sentences.
The data is obtained by comparing [LASER](https://github.com/facebookresearch/LASER) embeddings of sentence tokens between two languages. If the similarity is high, the sentences are deemed semantic equivalents of one another and the pair is retained.
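The scoring step above can be sketched as follows. This is a minimal illustration of cosine similarity between two sentence embeddings, not the actual alignment pipeline; the toy 3-dimensional vectors stand in for real LASER embeddings, and the threshold is illustrative.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors,
    the score used to decide whether a sentence pair is aligned."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy vectors standing in for real LASER sentence embeddings.
src_vec = [0.2, 0.7, 0.1]
tgt_vec = [0.25, 0.65, 0.12]

score = cosine_similarity(src_vec, tgt_vec)
# A pair is kept only if its score clears the chosen alignment threshold.
is_aligned = score >= 0.8
```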
Naming convention:
The naming structure of the pairwise_sentence_aligned folder is `aligned-{src_lang_code}-{tgt_lang_code}.csv`.
For example, `aligned-afr-zul.csv` is the aligned sentences between Afrikaans and isiZulu.
The data is in .csv format and the columns are `src_text`,`tgt_text`,`cosine_score` where:
- `src_text` is the source sentence
- `tgt_text` is the target sentence
- `cosine_score` is the cosine similarity score obtained by comparing the sentence embeddings, it ranges from 0 to 1
**Note:** The notion of source (src) and target (tgt) are only necessary for distinction between the languages used in the aligned pair, as the sentence semantics should be bidirectional. (hallo <-> sawubona)
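A file in this layout can be consumed with the standard library alone. The sketch below parses the described `src_text`,`tgt_text`,`cosine_score` columns from an in-memory sample and keeps only high-confidence pairs; the sample rows and the 0.8 threshold are illustrative, not part of the dataset specification.

```python
import csv
import io

# Tiny in-memory sample mimicking the aligned-{src_lang_code}-{tgt_lang_code}.csv layout.
sample = """src_text,tgt_text,cosine_score
hallo,sawubona,0.93
goeie more,sawubona ekuseni,0.81
tafel,isihlalo,0.42
"""

def high_confidence_pairs(csv_text, threshold=0.8):
    """Return (src_text, tgt_text) pairs whose cosine_score meets the threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        (row["src_text"], row["tgt_text"])
        for row in reader
        if float(row["cosine_score"]) >= threshold
    ]

pairs = high_confidence_pairs(sample)
# -> [('hallo', 'sawubona'), ('goeie more', 'sawubona ekuseni')]
```

For a real file, replace the in-memory string with `open("aligned-afr-zul.csv", newline="", encoding="utf-8")`.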
# Citation
Vukosi Marivate, Andani Madodonga, Daniel Njini, Richard Lastrucci, Isheanesu Dzingirai, Jenalea Rajab. **The Vuk'uzenzele South African Multilingual Corpus**, 2023
```bibtex
@dataset{marivate_vukosi_2023_7598540,
  author    = {Marivate, Vukosi and
               Njini, Daniel and
               Madodonga, Andani and
               Lastrucci, Richard and
               Dzingirai, Isheanesu and
               Rajab, Jenalea},
  title     = {The Vuk'uzenzele South African Multilingual Corpus},
  month     = feb,
  year      = 2023,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.7598539},
  url       = {https://doi.org/10.5281/zenodo.7598539}
}
```
### Licence
* Licence for Data - [CC 4.0 BY](LICENSE.md)
|
Davlan/sib200 | Davlan | "2024-02-19T16:34:23Z" | 9,798 | 8 | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"source_datasets:original",
"language:ace",
"language:acm",
"language:acq",
"language:aeb",
"language:af",
"language:ajp",
"language:ak",
"language:als",
"language:am",
"language:apc",
"language:ar",
"language:ars",
"language:ary",
"language:arz",
"language:as",
"language:ast",
"language:awa",
"language:ayr",
"language:azb",
"language:azj",
"language:ba",
"language:bm",
"language:ban",
"language:be",
"language:bem",
"language:bn",
"language:bho",
"language:bjn",
"language:bo",
"language:bs",
"language:bug",
"language:bg",
"language:ca",
"language:ceb",
"language:cs",
"language:cjk",
"language:ckb",
"language:crh",
"language:cy",
"language:da",
"language:de",
"language:dik",
"language:dyu",
"language:dz",
"language:el",
"language:en",
"language:eo",
"language:et",
"language:eu",
"language:ee",
"language:fo",
"language:fj",
"language:fi",
"language:fon",
"language:fr",
"language:fur",
"language:fuv",
"language:gaz",
"language:gd",
"language:ga",
"language:gl",
"language:gn",
"language:gu",
"language:ht",
"language:ha",
"language:he",
"language:hi",
"language:hne",
"language:hr",
"language:hu",
"language:hy",
"language:ig",
"language:ilo",
"language:id",
"language:is",
"language:it",
"language:jv",
"language:ja",
"language:kab",
"language:kac",
"language:kam",
"language:kn",
"language:ks",
"language:ka",
"language:kk",
"language:kbp",
"language:kea",
"language:khk",
"language:km",
"language:ki",
"language:rw",
"language:ky",
"language:kmb",
"language:kmr",
"language:knc",
"language:kg",
"language:ko",
"language:lo",
"language:lij",
"language:li",
"language:ln",
"language:lt",
"language:lmo",
"language:ltg",
"language:lb",
"language:lua",
"language:lg",
"language:luo",
"language:lus",
"language:lvs",
"language:mag",
"language:mai",
"language:ml",
"language:mar",
"language:min",
"language:mk",
"language:mt",
"language:mni",
"language:mos",
"language:mi",
"language:my",
"language:nl",
"language:nn",
"language:nb",
"language:npi",
"language:nqo",
"language:nso",
"language:nus",
"language:ny",
"language:oc",
"language:ory",
"language:pag",
"language:pa",
"language:pap",
"language:pbt",
"language:pes",
"language:plt",
"language:pl",
"language:pt",
"language:prs",
"language:quy",
"language:ro",
"language:rn",
"language:ru",
"language:sg",
"language:sa",
"language:sat",
"language:scn",
"language:shn",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:sd",
"language:so",
"language:st",
"language:es",
"language:sc",
"language:sr",
"language:ss",
"language:su",
"language:sv",
"language:swh",
"language:szl",
"language:ta",
"language:taq",
"language:tt",
"language:te",
"language:tg",
"language:tl",
"language:th",
"language:ti",
"language:tpi",
"language:tn",
"language:ts",
"language:tk",
"language:tum",
"language:tr",
"language:tw",
"language:tzm",
"language:ug",
"language:uk",
"language:umb",
"language:ur",
"language:uzn",
"language:vec",
"language:vi",
"language:war",
"language:wo",
"language:xh",
"language:ydd",
"language:yo",
"language:yue",
"language:zh",
"language:zsm",
"language:zu",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"modality:text",
"arxiv:2309.07445",
"region:us",
"news-topic",
"sib-200",
"sib200"
] | [
"text-classification"
] | "2024-01-27T14:04:12Z" | ---
annotations_creators:
- found
language_creators:
- expert-generated
language:
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- fuv
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kk
- kbp
- kea
- khk
- km
- ki
- rw
- ky
- kmb
- kmr
- knc
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lvs
- mag
- mai
- ml
- mar
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nn
- nb
- npi
- nqo
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- pes
- plt
- pl
- pt
- prs
- quy
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- swh
- szl
- ta
- taq
- tt
- te
- tg
- tl
- th
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uzn
- vec
- vi
- war
- wo
- xh
- ydd
- yo
- yue
- zh
- zsm
- zu
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
pretty_name: sib200
language_details: ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab,
aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng,
ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl,
bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn,
bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn,
dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn,
est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn,
fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr,
hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn,
ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn,
kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn,
kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn,
kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn,
lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn,
mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn,
mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn,
nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya,
pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn,
ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr,
sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn,
spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn,
szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi,
taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn,
twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn,
vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans,
zho_Hant, zul_Latn
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- news-topic
- sib-200
- sib200
task_categories:
- text-classification
task_ids:
- topic-classification
configs:
- config_name: ace_Arab
data_files:
- split: train
path: data/ace_Arab/train.tsv
- split: validation
path: data/ace_Arab/dev.tsv
- split: test
path: data/ace_Arab/test.tsv
- config_name: ace_Latn
data_files:
- split: train
path: data/ace_Latn/train.tsv
- split: validation
path: data/ace_Latn/dev.tsv
- split: test
path: data/ace_Latn/test.tsv
- config_name: acm_Arab
data_files:
- split: train
path: data/acm_Arab/train.tsv
- split: validation
path: data/acm_Arab/dev.tsv
- split: test
path: data/acm_Arab/test.tsv
- config_name: acq_Arab
data_files:
- split: train
path: data/acq_Arab/train.tsv
- split: validation
path: data/acq_Arab/dev.tsv
- split: test
path: data/acq_Arab/test.tsv
- config_name: aeb_Arab
data_files:
- split: train
path: data/aeb_Arab/train.tsv
- split: validation
path: data/aeb_Arab/dev.tsv
- split: test
path: data/aeb_Arab/test.tsv
- config_name: afr_Latn
data_files:
- split: train
path: data/afr_Latn/train.tsv
- split: validation
path: data/afr_Latn/dev.tsv
- split: test
path: data/afr_Latn/test.tsv
- config_name: ajp_Arab
data_files:
- split: train
path: data/ajp_Arab/train.tsv
- split: validation
path: data/ajp_Arab/dev.tsv
- split: test
path: data/ajp_Arab/test.tsv
- config_name: aka_Latn
data_files:
- split: train
path: data/aka_Latn/train.tsv
- split: validation
path: data/aka_Latn/dev.tsv
- split: test
path: data/aka_Latn/test.tsv
- config_name: als_Latn
data_files:
- split: train
path: data/als_Latn/train.tsv
- split: validation
path: data/als_Latn/dev.tsv
- split: test
path: data/als_Latn/test.tsv
- config_name: amh_Ethi
data_files:
- split: train
path: data/amh_Ethi/train.tsv
- split: validation
path: data/amh_Ethi/dev.tsv
- split: test
path: data/amh_Ethi/test.tsv
- config_name: apc_Arab
data_files:
- split: train
path: data/apc_Arab/train.tsv
- split: validation
path: data/apc_Arab/dev.tsv
- split: test
path: data/apc_Arab/test.tsv
- config_name: arb_Arab
data_files:
- split: train
path: data/arb_Arab/train.tsv
- split: validation
path: data/arb_Arab/dev.tsv
- split: test
path: data/arb_Arab/test.tsv
- config_name: arb_Latn
data_files:
- split: train
path: data/arb_Latn/train.tsv
- split: validation
path: data/arb_Latn/dev.tsv
- split: test
path: data/arb_Latn/test.tsv
- config_name: ars_Arab
data_files:
- split: train
path: data/ars_Arab/train.tsv
- split: validation
path: data/ars_Arab/dev.tsv
- split: test
path: data/ars_Arab/test.tsv
- config_name: ary_Arab
data_files:
- split: train
path: data/ary_Arab/train.tsv
- split: validation
path: data/ary_Arab/dev.tsv
- split: test
path: data/ary_Arab/test.tsv
- config_name: arz_Arab
data_files:
- split: train
path: data/arz_Arab/train.tsv
- split: validation
path: data/arz_Arab/dev.tsv
- split: test
path: data/arz_Arab/test.tsv
- config_name: asm_Beng
data_files:
- split: train
path: data/asm_Beng/train.tsv
- split: validation
path: data/asm_Beng/dev.tsv
- split: test
path: data/asm_Beng/test.tsv
- config_name: ast_Latn
data_files:
- split: train
path: data/ast_Latn/train.tsv
- split: validation
path: data/ast_Latn/dev.tsv
- split: test
path: data/ast_Latn/test.tsv
- config_name: awa_Deva
data_files:
- split: train
path: data/awa_Deva/train.tsv
- split: validation
path: data/awa_Deva/dev.tsv
- split: test
path: data/awa_Deva/test.tsv
- config_name: ayr_Latn
data_files:
- split: train
path: data/ayr_Latn/train.tsv
- split: validation
path: data/ayr_Latn/dev.tsv
- split: test
path: data/ayr_Latn/test.tsv
- config_name: azb_Arab
data_files:
- split: train
path: data/azb_Arab/train.tsv
- split: validation
path: data/azb_Arab/dev.tsv
- split: test
path: data/azb_Arab/test.tsv
- config_name: azj_Latn
data_files:
- split: train
path: data/azj_Latn/train.tsv
- split: validation
path: data/azj_Latn/dev.tsv
- split: test
path: data/azj_Latn/test.tsv
- config_name: bak_Cyrl
data_files:
- split: train
path: data/bak_Cyrl/train.tsv
- split: validation
path: data/bak_Cyrl/dev.tsv
- split: test
path: data/bak_Cyrl/test.tsv
- config_name: bam_Latn
data_files:
- split: train
path: data/bam_Latn/train.tsv
- split: validation
path: data/bam_Latn/dev.tsv
- split: test
path: data/bam_Latn/test.tsv
- config_name: ban_Latn
data_files:
- split: train
path: data/ban_Latn/train.tsv
- split: validation
path: data/ban_Latn/dev.tsv
- split: test
path: data/ban_Latn/test.tsv
- config_name: bel_Cyrl
data_files:
- split: train
path: data/bel_Cyrl/train.tsv
- split: validation
path: data/bel_Cyrl/dev.tsv
- split: test
path: data/bel_Cyrl/test.tsv
- config_name: bem_Latn
data_files:
- split: train
path: data/bem_Latn/train.tsv
- split: validation
path: data/bem_Latn/dev.tsv
- split: test
path: data/bem_Latn/test.tsv
- config_name: ben_Beng
data_files:
- split: train
path: data/ben_Beng/train.tsv
- split: validation
path: data/ben_Beng/dev.tsv
- split: test
path: data/ben_Beng/test.tsv
- config_name: bho_Deva
data_files:
- split: train
path: data/bho_Deva/train.tsv
- split: validation
path: data/bho_Deva/dev.tsv
- split: test
path: data/bho_Deva/test.tsv
- config_name: bjn_Arab
data_files:
- split: train
path: data/bjn_Arab/train.tsv
- split: validation
path: data/bjn_Arab/dev.tsv
- split: test
path: data/bjn_Arab/test.tsv
- config_name: bjn_Latn
data_files:
- split: train
path: data/bjn_Latn/train.tsv
- split: validation
path: data/bjn_Latn/dev.tsv
- split: test
path: data/bjn_Latn/test.tsv
- config_name: bod_Tibt
data_files:
- split: train
path: data/bod_Tibt/train.tsv
- split: validation
path: data/bod_Tibt/dev.tsv
- split: test
path: data/bod_Tibt/test.tsv
- config_name: bos_Latn
data_files:
- split: train
path: data/bos_Latn/train.tsv
- split: validation
path: data/bos_Latn/dev.tsv
- split: test
path: data/bos_Latn/test.tsv
- config_name: bug_Latn
data_files:
- split: train
path: data/bug_Latn/train.tsv
- split: validation
path: data/bug_Latn/dev.tsv
- split: test
path: data/bug_Latn/test.tsv
- config_name: bul_Cyrl
data_files:
- split: train
path: data/bul_Cyrl/train.tsv
- split: validation
path: data/bul_Cyrl/dev.tsv
- split: test
path: data/bul_Cyrl/test.tsv
- config_name: cat_Latn
data_files:
- split: train
path: data/cat_Latn/train.tsv
- split: validation
path: data/cat_Latn/dev.tsv
- split: test
path: data/cat_Latn/test.tsv
- config_name: ceb_Latn
data_files:
- split: train
path: data/ceb_Latn/train.tsv
- split: validation
path: data/ceb_Latn/dev.tsv
- split: test
path: data/ceb_Latn/test.tsv
- config_name: ces_Latn
data_files:
- split: train
path: data/ces_Latn/train.tsv
- split: validation
path: data/ces_Latn/dev.tsv
- split: test
path: data/ces_Latn/test.tsv
- config_name: cjk_Latn
data_files:
- split: train
path: data/cjk_Latn/train.tsv
- split: validation
path: data/cjk_Latn/dev.tsv
- split: test
path: data/cjk_Latn/test.tsv
- config_name: ckb_Arab
data_files:
- split: train
path: data/ckb_Arab/train.tsv
- split: validation
path: data/ckb_Arab/dev.tsv
- split: test
path: data/ckb_Arab/test.tsv
- config_name: crh_Latn
data_files:
- split: train
path: data/crh_Latn/train.tsv
- split: validation
path: data/crh_Latn/dev.tsv
- split: test
path: data/crh_Latn/test.tsv
- config_name: cym_Latn
data_files:
- split: train
path: data/cym_Latn/train.tsv
- split: validation
path: data/cym_Latn/dev.tsv
- split: test
path: data/cym_Latn/test.tsv
- config_name: dan_Latn
data_files:
- split: train
path: data/dan_Latn/train.tsv
- split: validation
path: data/dan_Latn/dev.tsv
- split: test
path: data/dan_Latn/test.tsv
- config_name: deu_Latn
data_files:
- split: train
path: data/deu_Latn/train.tsv
- split: validation
path: data/deu_Latn/dev.tsv
- split: test
path: data/deu_Latn/test.tsv
- config_name: dik_Latn
data_files:
- split: train
path: data/dik_Latn/train.tsv
- split: validation
path: data/dik_Latn/dev.tsv
- split: test
path: data/dik_Latn/test.tsv
- config_name: dyu_Latn
data_files:
- split: train
path: data/dyu_Latn/train.tsv
- split: validation
path: data/dyu_Latn/dev.tsv
- split: test
path: data/dyu_Latn/test.tsv
- config_name: dzo_Tibt
data_files:
- split: train
path: data/dzo_Tibt/train.tsv
- split: validation
path: data/dzo_Tibt/dev.tsv
- split: test
path: data/dzo_Tibt/test.tsv
- config_name: ell_Grek
data_files:
- split: train
path: data/ell_Grek/train.tsv
- split: validation
path: data/ell_Grek/dev.tsv
- split: test
path: data/ell_Grek/test.tsv
- config_name: eng_Latn
data_files:
- split: train
path: data/eng_Latn/train.tsv
- split: validation
path: data/eng_Latn/dev.tsv
- split: test
path: data/eng_Latn/test.tsv
- config_name: epo_Latn
data_files:
- split: train
path: data/epo_Latn/train.tsv
- split: validation
path: data/epo_Latn/dev.tsv
- split: test
path: data/epo_Latn/test.tsv
- config_name: est_Latn
data_files:
- split: train
path: data/est_Latn/train.tsv
- split: validation
path: data/est_Latn/dev.tsv
- split: test
path: data/est_Latn/test.tsv
- config_name: eus_Latn
data_files:
- split: train
path: data/eus_Latn/train.tsv
- split: validation
path: data/eus_Latn/dev.tsv
- split: test
path: data/eus_Latn/test.tsv
- config_name: ewe_Latn
data_files:
- split: train
path: data/ewe_Latn/train.tsv
- split: validation
path: data/ewe_Latn/dev.tsv
- split: test
path: data/ewe_Latn/test.tsv
- config_name: fao_Latn
data_files:
- split: train
path: data/fao_Latn/train.tsv
- split: validation
path: data/fao_Latn/dev.tsv
- split: test
path: data/fao_Latn/test.tsv
- config_name: fij_Latn
data_files:
- split: train
path: data/fij_Latn/train.tsv
- split: validation
path: data/fij_Latn/dev.tsv
- split: test
path: data/fij_Latn/test.tsv
- config_name: fin_Latn
data_files:
- split: train
path: data/fin_Latn/train.tsv
- split: validation
path: data/fin_Latn/dev.tsv
- split: test
path: data/fin_Latn/test.tsv
- config_name: fon_Latn
data_files:
- split: train
path: data/fon_Latn/train.tsv
- split: validation
path: data/fon_Latn/dev.tsv
- split: test
path: data/fon_Latn/test.tsv
- config_name: fra_Latn
data_files:
- split: train
path: data/fra_Latn/train.tsv
- split: validation
path: data/fra_Latn/dev.tsv
- split: test
path: data/fra_Latn/test.tsv
- config_name: fur_Latn
data_files:
- split: train
path: data/fur_Latn/train.tsv
- split: validation
path: data/fur_Latn/dev.tsv
- split: test
path: data/fur_Latn/test.tsv
- config_name: fuv_Latn
data_files:
- split: train
path: data/fuv_Latn/train.tsv
- split: validation
path: data/fuv_Latn/dev.tsv
- split: test
path: data/fuv_Latn/test.tsv
- config_name: gaz_Latn
data_files:
- split: train
path: data/gaz_Latn/train.tsv
- split: validation
path: data/gaz_Latn/dev.tsv
- split: test
path: data/gaz_Latn/test.tsv
- config_name: gla_Latn
data_files:
- split: train
path: data/gla_Latn/train.tsv
- split: validation
path: data/gla_Latn/dev.tsv
- split: test
path: data/gla_Latn/test.tsv
- config_name: gle_Latn
data_files:
- split: train
path: data/gle_Latn/train.tsv
- split: validation
path: data/gle_Latn/dev.tsv
- split: test
path: data/gle_Latn/test.tsv
- config_name: glg_Latn
data_files:
- split: train
path: data/glg_Latn/train.tsv
- split: validation
path: data/glg_Latn/dev.tsv
- split: test
path: data/glg_Latn/test.tsv
- config_name: grn_Latn
data_files:
- split: train
path: data/grn_Latn/train.tsv
- split: validation
path: data/grn_Latn/dev.tsv
- split: test
path: data/grn_Latn/test.tsv
- config_name: guj_Gujr
data_files:
- split: train
path: data/guj_Gujr/train.tsv
- split: validation
path: data/guj_Gujr/dev.tsv
- split: test
path: data/guj_Gujr/test.tsv
- config_name: hat_Latn
data_files:
- split: train
path: data/hat_Latn/train.tsv
- split: validation
path: data/hat_Latn/dev.tsv
- split: test
path: data/hat_Latn/test.tsv
- config_name: hau_Latn
data_files:
- split: train
path: data/hau_Latn/train.tsv
- split: validation
path: data/hau_Latn/dev.tsv
- split: test
path: data/hau_Latn/test.tsv
- config_name: heb_Hebr
data_files:
- split: train
path: data/heb_Hebr/train.tsv
- split: validation
path: data/heb_Hebr/dev.tsv
- split: test
path: data/heb_Hebr/test.tsv
- config_name: hin_Deva
data_files:
- split: train
path: data/hin_Deva/train.tsv
- split: validation
path: data/hin_Deva/dev.tsv
- split: test
path: data/hin_Deva/test.tsv
- config_name: hne_Deva
data_files:
- split: train
path: data/hne_Deva/train.tsv
- split: validation
path: data/hne_Deva/dev.tsv
- split: test
path: data/hne_Deva/test.tsv
- config_name: hrv_Latn
data_files:
- split: train
path: data/hrv_Latn/train.tsv
- split: validation
path: data/hrv_Latn/dev.tsv
- split: test
path: data/hrv_Latn/test.tsv
- config_name: hun_Latn
data_files:
- split: train
path: data/hun_Latn/train.tsv
- split: validation
path: data/hun_Latn/dev.tsv
- split: test
path: data/hun_Latn/test.tsv
- config_name: hye_Armn
data_files:
- split: train
path: data/hye_Armn/train.tsv
- split: validation
path: data/hye_Armn/dev.tsv
- split: test
path: data/hye_Armn/test.tsv
- config_name: ibo_Latn
data_files:
- split: train
path: data/ibo_Latn/train.tsv
- split: validation
path: data/ibo_Latn/dev.tsv
- split: test
path: data/ibo_Latn/test.tsv
- config_name: ilo_Latn
data_files:
- split: train
path: data/ilo_Latn/train.tsv
- split: validation
path: data/ilo_Latn/dev.tsv
- split: test
path: data/ilo_Latn/test.tsv
- config_name: ind_Latn
data_files:
- split: train
path: data/ind_Latn/train.tsv
- split: validation
path: data/ind_Latn/dev.tsv
- split: test
path: data/ind_Latn/test.tsv
- config_name: isl_Latn
data_files:
- split: train
path: data/isl_Latn/train.tsv
- split: validation
path: data/isl_Latn/dev.tsv
- split: test
path: data/isl_Latn/test.tsv
- config_name: ita_Latn
data_files:
- split: train
path: data/ita_Latn/train.tsv
- split: validation
path: data/ita_Latn/dev.tsv
- split: test
path: data/ita_Latn/test.tsv
- config_name: jav_Latn
data_files:
- split: train
path: data/jav_Latn/train.tsv
- split: validation
path: data/jav_Latn/dev.tsv
- split: test
path: data/jav_Latn/test.tsv
- config_name: jpn_Jpan
data_files:
- split: train
path: data/jpn_Jpan/train.tsv
- split: validation
path: data/jpn_Jpan/dev.tsv
- split: test
path: data/jpn_Jpan/test.tsv
- config_name: kab_Latn
data_files:
- split: train
path: data/kab_Latn/train.tsv
- split: validation
path: data/kab_Latn/dev.tsv
- split: test
path: data/kab_Latn/test.tsv
- config_name: kac_Latn
data_files:
- split: train
path: data/kac_Latn/train.tsv
- split: validation
path: data/kac_Latn/dev.tsv
- split: test
path: data/kac_Latn/test.tsv
- config_name: kam_Latn
data_files:
- split: train
path: data/kam_Latn/train.tsv
- split: validation
path: data/kam_Latn/dev.tsv
- split: test
path: data/kam_Latn/test.tsv
- config_name: kan_Knda
data_files:
- split: train
path: data/kan_Knda/train.tsv
- split: validation
path: data/kan_Knda/dev.tsv
- split: test
path: data/kan_Knda/test.tsv
- config_name: kas_Arab
data_files:
- split: train
path: data/kas_Arab/train.tsv
- split: validation
path: data/kas_Arab/dev.tsv
- split: test
path: data/kas_Arab/test.tsv
- config_name: kas_Deva
data_files:
- split: train
path: data/kas_Deva/train.tsv
- split: validation
path: data/kas_Deva/dev.tsv
- split: test
path: data/kas_Deva/test.tsv
- config_name: kat_Geor
data_files:
- split: train
path: data/kat_Geor/train.tsv
- split: validation
path: data/kat_Geor/dev.tsv
- split: test
path: data/kat_Geor/test.tsv
- config_name: kaz_Cyrl
data_files:
- split: train
path: data/kaz_Cyrl/train.tsv
- split: validation
path: data/kaz_Cyrl/dev.tsv
- split: test
path: data/kaz_Cyrl/test.tsv
- config_name: kbp_Latn
data_files:
- split: train
path: data/kbp_Latn/train.tsv
- split: validation
path: data/kbp_Latn/dev.tsv
- split: test
path: data/kbp_Latn/test.tsv
- config_name: kea_Latn
data_files:
- split: train
path: data/kea_Latn/train.tsv
- split: validation
path: data/kea_Latn/dev.tsv
- split: test
path: data/kea_Latn/test.tsv
- config_name: khk_Cyrl
data_files:
- split: train
path: data/khk_Cyrl/train.tsv
- split: validation
path: data/khk_Cyrl/dev.tsv
- split: test
path: data/khk_Cyrl/test.tsv
- config_name: khm_Khmr
data_files:
- split: train
path: data/khm_Khmr/train.tsv
- split: validation
path: data/khm_Khmr/dev.tsv
- split: test
path: data/khm_Khmr/test.tsv
- config_name: kik_Latn
data_files:
- split: train
path: data/kik_Latn/train.tsv
- split: validation
path: data/kik_Latn/dev.tsv
- split: test
path: data/kik_Latn/test.tsv
- config_name: kin_Latn
data_files:
- split: train
path: data/kin_Latn/train.tsv
- split: validation
path: data/kin_Latn/dev.tsv
- split: test
path: data/kin_Latn/test.tsv
- config_name: kir_Cyrl
data_files:
- split: train
path: data/kir_Cyrl/train.tsv
- split: validation
path: data/kir_Cyrl/dev.tsv
- split: test
path: data/kir_Cyrl/test.tsv
- config_name: kmb_Latn
data_files:
- split: train
path: data/kmb_Latn/train.tsv
- split: validation
path: data/kmb_Latn/dev.tsv
- split: test
path: data/kmb_Latn/test.tsv
- config_name: kmr_Latn
data_files:
- split: train
path: data/kmr_Latn/train.tsv
- split: validation
path: data/kmr_Latn/dev.tsv
- split: test
path: data/kmr_Latn/test.tsv
- config_name: knc_Arab
data_files:
- split: train
path: data/knc_Arab/train.tsv
- split: validation
path: data/knc_Arab/dev.tsv
- split: test
path: data/knc_Arab/test.tsv
- config_name: knc_Latn
data_files:
- split: train
path: data/knc_Latn/train.tsv
- split: validation
path: data/knc_Latn/dev.tsv
- split: test
path: data/knc_Latn/test.tsv
- config_name: kon_Latn
data_files:
- split: train
path: data/kon_Latn/train.tsv
- split: validation
path: data/kon_Latn/dev.tsv
- split: test
path: data/kon_Latn/test.tsv
- config_name: kor_Hang
data_files:
- split: train
path: data/kor_Hang/train.tsv
- split: validation
path: data/kor_Hang/dev.tsv
- split: test
path: data/kor_Hang/test.tsv
- config_name: lao_Laoo
data_files:
- split: train
path: data/lao_Laoo/train.tsv
- split: validation
path: data/lao_Laoo/dev.tsv
- split: test
path: data/lao_Laoo/test.tsv
- config_name: lij_Latn
data_files:
- split: train
path: data/lij_Latn/train.tsv
- split: validation
path: data/lij_Latn/dev.tsv
- split: test
path: data/lij_Latn/test.tsv
- config_name: lim_Latn
data_files:
- split: train
path: data/lim_Latn/train.tsv
- split: validation
path: data/lim_Latn/dev.tsv
- split: test
path: data/lim_Latn/test.tsv
- config_name: lin_Latn
data_files:
- split: train
path: data/lin_Latn/train.tsv
- split: validation
path: data/lin_Latn/dev.tsv
- split: test
path: data/lin_Latn/test.tsv
- config_name: lit_Latn
data_files:
- split: train
path: data/lit_Latn/train.tsv
- split: validation
path: data/lit_Latn/dev.tsv
- split: test
path: data/lit_Latn/test.tsv
- config_name: lmo_Latn
data_files:
- split: train
path: data/lmo_Latn/train.tsv
- split: validation
path: data/lmo_Latn/dev.tsv
- split: test
path: data/lmo_Latn/test.tsv
- config_name: ltg_Latn
data_files:
- split: train
path: data/ltg_Latn/train.tsv
- split: validation
path: data/ltg_Latn/dev.tsv
- split: test
path: data/ltg_Latn/test.tsv
- config_name: ltz_Latn
data_files:
- split: train
path: data/ltz_Latn/train.tsv
- split: validation
path: data/ltz_Latn/dev.tsv
- split: test
path: data/ltz_Latn/test.tsv
- config_name: lua_Latn
data_files:
- split: train
path: data/lua_Latn/train.tsv
- split: validation
path: data/lua_Latn/dev.tsv
- split: test
path: data/lua_Latn/test.tsv
- config_name: lug_Latn
data_files:
- split: train
path: data/lug_Latn/train.tsv
- split: validation
path: data/lug_Latn/dev.tsv
- split: test
path: data/lug_Latn/test.tsv
- config_name: luo_Latn
data_files:
- split: train
path: data/luo_Latn/train.tsv
- split: validation
path: data/luo_Latn/dev.tsv
- split: test
path: data/luo_Latn/test.tsv
- config_name: lus_Latn
data_files:
- split: train
path: data/lus_Latn/train.tsv
- split: validation
path: data/lus_Latn/dev.tsv
- split: test
path: data/lus_Latn/test.tsv
- config_name: lvs_Latn
data_files:
- split: train
path: data/lvs_Latn/train.tsv
- split: validation
path: data/lvs_Latn/dev.tsv
- split: test
path: data/lvs_Latn/test.tsv
- config_name: mag_Deva
data_files:
- split: train
path: data/mag_Deva/train.tsv
- split: validation
path: data/mag_Deva/dev.tsv
- split: test
path: data/mag_Deva/test.tsv
- config_name: mai_Deva
data_files:
- split: train
path: data/mai_Deva/train.tsv
- split: validation
path: data/mai_Deva/dev.tsv
- split: test
path: data/mai_Deva/test.tsv
- config_name: mal_Mlym
data_files:
- split: train
path: data/mal_Mlym/train.tsv
- split: validation
path: data/mal_Mlym/dev.tsv
- split: test
path: data/mal_Mlym/test.tsv
- config_name: mar_Deva
data_files:
- split: train
path: data/mar_Deva/train.tsv
- split: validation
path: data/mar_Deva/dev.tsv
- split: test
path: data/mar_Deva/test.tsv
- config_name: min_Arab
data_files:
- split: train
path: data/min_Arab/train.tsv
- split: validation
path: data/min_Arab/dev.tsv
- split: test
path: data/min_Arab/test.tsv
- config_name: min_Latn
data_files:
- split: train
path: data/min_Latn/train.tsv
- split: validation
path: data/min_Latn/dev.tsv
- split: test
path: data/min_Latn/test.tsv
- config_name: mkd_Cyrl
data_files:
- split: train
path: data/mkd_Cyrl/train.tsv
- split: validation
path: data/mkd_Cyrl/dev.tsv
- split: test
path: data/mkd_Cyrl/test.tsv
- config_name: mlt_Latn
data_files:
- split: train
path: data/mlt_Latn/train.tsv
- split: validation
path: data/mlt_Latn/dev.tsv
- split: test
path: data/mlt_Latn/test.tsv
- config_name: mni_Beng
data_files:
- split: train
path: data/mni_Beng/train.tsv
- split: validation
path: data/mni_Beng/dev.tsv
- split: test
path: data/mni_Beng/test.tsv
- config_name: mos_Latn
data_files:
- split: train
path: data/mos_Latn/train.tsv
- split: validation
path: data/mos_Latn/dev.tsv
- split: test
path: data/mos_Latn/test.tsv
- config_name: mri_Latn
data_files:
- split: train
path: data/mri_Latn/train.tsv
- split: validation
path: data/mri_Latn/dev.tsv
- split: test
path: data/mri_Latn/test.tsv
- config_name: mya_Mymr
data_files:
- split: train
path: data/mya_Mymr/train.tsv
- split: validation
path: data/mya_Mymr/dev.tsv
- split: test
path: data/mya_Mymr/test.tsv
- config_name: nld_Latn
data_files:
- split: train
path: data/nld_Latn/train.tsv
- split: validation
path: data/nld_Latn/dev.tsv
- split: test
path: data/nld_Latn/test.tsv
- config_name: nno_Latn
data_files:
- split: train
path: data/nno_Latn/train.tsv
- split: validation
path: data/nno_Latn/dev.tsv
- split: test
path: data/nno_Latn/test.tsv
- config_name: nob_Latn
data_files:
- split: train
path: data/nob_Latn/train.tsv
- split: validation
path: data/nob_Latn/dev.tsv
- split: test
path: data/nob_Latn/test.tsv
- config_name: npi_Deva
data_files:
- split: train
path: data/npi_Deva/train.tsv
- split: validation
path: data/npi_Deva/dev.tsv
- split: test
path: data/npi_Deva/test.tsv
- config_name: nqo_Nkoo
data_files:
- split: train
path: data/nqo_Nkoo/train.tsv
- split: validation
path: data/nqo_Nkoo/dev.tsv
- split: test
path: data/nqo_Nkoo/test.tsv
- config_name: nso_Latn
data_files:
- split: train
path: data/nso_Latn/train.tsv
- split: validation
path: data/nso_Latn/dev.tsv
- split: test
path: data/nso_Latn/test.tsv
- config_name: nus_Latn
data_files:
- split: train
path: data/nus_Latn/train.tsv
- split: validation
path: data/nus_Latn/dev.tsv
- split: test
path: data/nus_Latn/test.tsv
- config_name: nya_Latn
data_files:
- split: train
path: data/nya_Latn/train.tsv
- split: validation
path: data/nya_Latn/dev.tsv
- split: test
path: data/nya_Latn/test.tsv
- config_name: oci_Latn
data_files:
- split: train
path: data/oci_Latn/train.tsv
- split: validation
path: data/oci_Latn/dev.tsv
- split: test
path: data/oci_Latn/test.tsv
- config_name: ory_Orya
data_files:
- split: train
path: data/ory_Orya/train.tsv
- split: validation
path: data/ory_Orya/dev.tsv
- split: test
path: data/ory_Orya/test.tsv
- config_name: pag_Latn
data_files:
- split: train
path: data/pag_Latn/train.tsv
- split: validation
path: data/pag_Latn/dev.tsv
- split: test
path: data/pag_Latn/test.tsv
- config_name: pan_Guru
data_files:
- split: train
path: data/pan_Guru/train.tsv
- split: validation
path: data/pan_Guru/dev.tsv
- split: test
path: data/pan_Guru/test.tsv
- config_name: pap_Latn
data_files:
- split: train
path: data/pap_Latn/train.tsv
- split: validation
path: data/pap_Latn/dev.tsv
- split: test
path: data/pap_Latn/test.tsv
- config_name: pbt_Arab
data_files:
- split: train
path: data/pbt_Arab/train.tsv
- split: validation
path: data/pbt_Arab/dev.tsv
- split: test
path: data/pbt_Arab/test.tsv
- config_name: pes_Arab
data_files:
- split: train
path: data/pes_Arab/train.tsv
- split: validation
path: data/pes_Arab/dev.tsv
- split: test
path: data/pes_Arab/test.tsv
- config_name: plt_Latn
data_files:
- split: train
path: data/plt_Latn/train.tsv
- split: validation
path: data/plt_Latn/dev.tsv
- split: test
path: data/plt_Latn/test.tsv
- config_name: pol_Latn
data_files:
- split: train
path: data/pol_Latn/train.tsv
- split: validation
path: data/pol_Latn/dev.tsv
- split: test
path: data/pol_Latn/test.tsv
- config_name: por_Latn
data_files:
- split: train
path: data/por_Latn/train.tsv
- split: validation
path: data/por_Latn/dev.tsv
- split: test
path: data/por_Latn/test.tsv
- config_name: prs_Arab
data_files:
- split: train
path: data/prs_Arab/train.tsv
- split: validation
path: data/prs_Arab/dev.tsv
- split: test
path: data/prs_Arab/test.tsv
- config_name: quy_Latn
data_files:
- split: train
path: data/quy_Latn/train.tsv
- split: validation
path: data/quy_Latn/dev.tsv
- split: test
path: data/quy_Latn/test.tsv
- config_name: ron_Latn
data_files:
- split: train
path: data/ron_Latn/train.tsv
- split: validation
path: data/ron_Latn/dev.tsv
- split: test
path: data/ron_Latn/test.tsv
- config_name: run_Latn
data_files:
- split: train
path: data/run_Latn/train.tsv
- split: validation
path: data/run_Latn/dev.tsv
- split: test
path: data/run_Latn/test.tsv
- config_name: rus_Cyrl
data_files:
- split: train
path: data/rus_Cyrl/train.tsv
- split: validation
path: data/rus_Cyrl/dev.tsv
- split: test
path: data/rus_Cyrl/test.tsv
- config_name: sag_Latn
data_files:
- split: train
path: data/sag_Latn/train.tsv
- split: validation
path: data/sag_Latn/dev.tsv
- split: test
path: data/sag_Latn/test.tsv
- config_name: san_Deva
data_files:
- split: train
path: data/san_Deva/train.tsv
- split: validation
path: data/san_Deva/dev.tsv
- split: test
path: data/san_Deva/test.tsv
- config_name: sat_Olck
data_files:
- split: train
path: data/sat_Olck/train.tsv
- split: validation
path: data/sat_Olck/dev.tsv
- split: test
path: data/sat_Olck/test.tsv
- config_name: scn_Latn
data_files:
- split: train
path: data/scn_Latn/train.tsv
- split: validation
path: data/scn_Latn/dev.tsv
- split: test
path: data/scn_Latn/test.tsv
- config_name: shn_Mymr
data_files:
- split: train
path: data/shn_Mymr/train.tsv
- split: validation
path: data/shn_Mymr/dev.tsv
- split: test
path: data/shn_Mymr/test.tsv
- config_name: sin_Sinh
data_files:
- split: train
path: data/sin_Sinh/train.tsv
- split: validation
path: data/sin_Sinh/dev.tsv
- split: test
path: data/sin_Sinh/test.tsv
- config_name: slk_Latn
data_files:
- split: train
path: data/slk_Latn/train.tsv
- split: validation
path: data/slk_Latn/dev.tsv
- split: test
path: data/slk_Latn/test.tsv
- config_name: slv_Latn
data_files:
- split: train
path: data/slv_Latn/train.tsv
- split: validation
path: data/slv_Latn/dev.tsv
- split: test
path: data/slv_Latn/test.tsv
- config_name: smo_Latn
data_files:
- split: train
path: data/smo_Latn/train.tsv
- split: validation
path: data/smo_Latn/dev.tsv
- split: test
path: data/smo_Latn/test.tsv
- config_name: sna_Latn
data_files:
- split: train
path: data/sna_Latn/train.tsv
- split: validation
path: data/sna_Latn/dev.tsv
- split: test
path: data/sna_Latn/test.tsv
- config_name: snd_Arab
data_files:
- split: train
path: data/snd_Arab/train.tsv
- split: validation
path: data/snd_Arab/dev.tsv
- split: test
path: data/snd_Arab/test.tsv
- config_name: som_Latn
data_files:
- split: train
path: data/som_Latn/train.tsv
- split: validation
path: data/som_Latn/dev.tsv
- split: test
path: data/som_Latn/test.tsv
- config_name: sot_Latn
data_files:
- split: train
path: data/sot_Latn/train.tsv
- split: validation
path: data/sot_Latn/dev.tsv
- split: test
path: data/sot_Latn/test.tsv
- config_name: spa_Latn
data_files:
- split: train
path: data/spa_Latn/train.tsv
- split: validation
path: data/spa_Latn/dev.tsv
- split: test
path: data/spa_Latn/test.tsv
- config_name: srd_Latn
data_files:
- split: train
path: data/srd_Latn/train.tsv
- split: validation
path: data/srd_Latn/dev.tsv
- split: test
path: data/srd_Latn/test.tsv
- config_name: srp_Cyrl
data_files:
- split: train
path: data/srp_Cyrl/train.tsv
- split: validation
path: data/srp_Cyrl/dev.tsv
- split: test
path: data/srp_Cyrl/test.tsv
- config_name: ssw_Latn
data_files:
- split: train
path: data/ssw_Latn/train.tsv
- split: validation
path: data/ssw_Latn/dev.tsv
- split: test
path: data/ssw_Latn/test.tsv
- config_name: sun_Latn
data_files:
- split: train
path: data/sun_Latn/train.tsv
- split: validation
path: data/sun_Latn/dev.tsv
- split: test
path: data/sun_Latn/test.tsv
- config_name: swe_Latn
data_files:
- split: train
path: data/swe_Latn/train.tsv
- split: validation
path: data/swe_Latn/dev.tsv
- split: test
path: data/swe_Latn/test.tsv
- config_name: swh_Latn
data_files:
- split: train
path: data/swh_Latn/train.tsv
- split: validation
path: data/swh_Latn/dev.tsv
- split: test
path: data/swh_Latn/test.tsv
- config_name: szl_Latn
data_files:
- split: train
path: data/szl_Latn/train.tsv
- split: validation
path: data/szl_Latn/dev.tsv
- split: test
path: data/szl_Latn/test.tsv
- config_name: tam_Taml
data_files:
- split: train
path: data/tam_Taml/train.tsv
- split: validation
path: data/tam_Taml/dev.tsv
- split: test
path: data/tam_Taml/test.tsv
- config_name: taq_Latn
data_files:
- split: train
path: data/taq_Latn/train.tsv
- split: validation
path: data/taq_Latn/dev.tsv
- split: test
path: data/taq_Latn/test.tsv
- config_name: taq_Tfng
data_files:
- split: train
path: data/taq_Tfng/train.tsv
- split: validation
path: data/taq_Tfng/dev.tsv
- split: test
path: data/taq_Tfng/test.tsv
- config_name: tat_Cyrl
data_files:
- split: train
path: data/tat_Cyrl/train.tsv
- split: validation
path: data/tat_Cyrl/dev.tsv
- split: test
path: data/tat_Cyrl/test.tsv
- config_name: tel_Telu
data_files:
- split: train
path: data/tel_Telu/train.tsv
- split: validation
path: data/tel_Telu/dev.tsv
- split: test
path: data/tel_Telu/test.tsv
- config_name: tgk_Cyrl
data_files:
- split: train
path: data/tgk_Cyrl/train.tsv
- split: validation
path: data/tgk_Cyrl/dev.tsv
- split: test
path: data/tgk_Cyrl/test.tsv
- config_name: tgl_Latn
data_files:
- split: train
path: data/tgl_Latn/train.tsv
- split: validation
path: data/tgl_Latn/dev.tsv
- split: test
path: data/tgl_Latn/test.tsv
- config_name: tha_Thai
data_files:
- split: train
path: data/tha_Thai/train.tsv
- split: validation
path: data/tha_Thai/dev.tsv
- split: test
path: data/tha_Thai/test.tsv
- config_name: tir_Ethi
data_files:
- split: train
path: data/tir_Ethi/train.tsv
- split: validation
path: data/tir_Ethi/dev.tsv
- split: test
path: data/tir_Ethi/test.tsv
- config_name: tpi_Latn
data_files:
- split: train
path: data/tpi_Latn/train.tsv
- split: validation
path: data/tpi_Latn/dev.tsv
- split: test
path: data/tpi_Latn/test.tsv
- config_name: tsn_Latn
data_files:
- split: train
path: data/tsn_Latn/train.tsv
- split: validation
path: data/tsn_Latn/dev.tsv
- split: test
path: data/tsn_Latn/test.tsv
- config_name: tso_Latn
data_files:
- split: train
path: data/tso_Latn/train.tsv
- split: validation
path: data/tso_Latn/dev.tsv
- split: test
path: data/tso_Latn/test.tsv
- config_name: tuk_Latn
data_files:
- split: train
path: data/tuk_Latn/train.tsv
- split: validation
path: data/tuk_Latn/dev.tsv
- split: test
path: data/tuk_Latn/test.tsv
- config_name: tum_Latn
data_files:
- split: train
path: data/tum_Latn/train.tsv
- split: validation
path: data/tum_Latn/dev.tsv
- split: test
path: data/tum_Latn/test.tsv
- config_name: tur_Latn
data_files:
- split: train
path: data/tur_Latn/train.tsv
- split: validation
path: data/tur_Latn/dev.tsv
- split: test
path: data/tur_Latn/test.tsv
- config_name: twi_Latn
data_files:
- split: train
path: data/twi_Latn/train.tsv
- split: validation
path: data/twi_Latn/dev.tsv
- split: test
path: data/twi_Latn/test.tsv
- config_name: tzm_Tfng
data_files:
- split: train
path: data/tzm_Tfng/train.tsv
- split: validation
path: data/tzm_Tfng/dev.tsv
- split: test
path: data/tzm_Tfng/test.tsv
- config_name: uig_Arab
data_files:
- split: train
path: data/uig_Arab/train.tsv
- split: validation
path: data/uig_Arab/dev.tsv
- split: test
path: data/uig_Arab/test.tsv
- config_name: ukr_Cyrl
data_files:
- split: train
path: data/ukr_Cyrl/train.tsv
- split: validation
path: data/ukr_Cyrl/dev.tsv
- split: test
path: data/ukr_Cyrl/test.tsv
- config_name: umb_Latn
data_files:
- split: train
path: data/umb_Latn/train.tsv
- split: validation
path: data/umb_Latn/dev.tsv
- split: test
path: data/umb_Latn/test.tsv
- config_name: urd_Arab
data_files:
- split: train
path: data/urd_Arab/train.tsv
- split: validation
path: data/urd_Arab/dev.tsv
- split: test
path: data/urd_Arab/test.tsv
- config_name: uzn_Latn
data_files:
- split: train
path: data/uzn_Latn/train.tsv
- split: validation
path: data/uzn_Latn/dev.tsv
- split: test
path: data/uzn_Latn/test.tsv
- config_name: vec_Latn
data_files:
- split: train
path: data/vec_Latn/train.tsv
- split: validation
path: data/vec_Latn/dev.tsv
- split: test
path: data/vec_Latn/test.tsv
- config_name: vie_Latn
data_files:
- split: train
path: data/vie_Latn/train.tsv
- split: validation
path: data/vie_Latn/dev.tsv
- split: test
path: data/vie_Latn/test.tsv
- config_name: war_Latn
data_files:
- split: train
path: data/war_Latn/train.tsv
- split: validation
path: data/war_Latn/dev.tsv
- split: test
path: data/war_Latn/test.tsv
- config_name: wol_Latn
data_files:
- split: train
path: data/wol_Latn/train.tsv
- split: validation
path: data/wol_Latn/dev.tsv
- split: test
path: data/wol_Latn/test.tsv
- config_name: xho_Latn
data_files:
- split: train
path: data/xho_Latn/train.tsv
- split: validation
path: data/xho_Latn/dev.tsv
- split: test
path: data/xho_Latn/test.tsv
- config_name: ydd_Hebr
data_files:
- split: train
path: data/ydd_Hebr/train.tsv
- split: validation
path: data/ydd_Hebr/dev.tsv
- split: test
path: data/ydd_Hebr/test.tsv
- config_name: yor_Latn
data_files:
- split: train
path: data/yor_Latn/train.tsv
- split: validation
path: data/yor_Latn/dev.tsv
- split: test
path: data/yor_Latn/test.tsv
- config_name: yue_Hant
data_files:
- split: train
path: data/yue_Hant/train.tsv
- split: validation
path: data/yue_Hant/dev.tsv
- split: test
path: data/yue_Hant/test.tsv
- config_name: zho_Hans
data_files:
- split: train
path: data/zho_Hans/train.tsv
- split: validation
path: data/zho_Hans/dev.tsv
- split: test
path: data/zho_Hans/test.tsv
- config_name: zho_Hant
data_files:
- split: train
path: data/zho_Hant/train.tsv
- split: validation
path: data/zho_Hant/dev.tsv
- split: test
path: data/zho_Hant/test.tsv
- config_name: zsm_Latn
data_files:
- split: train
path: data/zsm_Latn/train.tsv
- split: validation
path: data/zsm_Latn/dev.tsv
- split: test
path: data/zsm_Latn/test.tsv
- config_name: zul_Latn
data_files:
- split: train
path: data/zul_Latn/train.tsv
- split: validation
path: data/zul_Latn/dev.tsv
- split: test
path: data/zul_Latn/test.tsv
---
# Dataset Card for SIB-200
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://github.com/dadelani/sib-200)
- **Repository:** [github](https://github.com/dadelani/sib-200)
- **Paper:** [paper](https://arxiv.org/abs/2309.07445)
- **Point of Contact:** [email protected]
### Dataset Summary
SIB-200 is the largest publicly available topic classification dataset based on Flores-200 covering 205 languages and dialects.
The train/validation/test sets are available for all 205 languages.
### Supported Tasks and Leaderboards
- `topic classification`: categorize Wikipedia sentences into topics, e.g. science/technology, sports or politics.
### Languages
There are 205 languages available, corresponding to the config names listed above.
## Dataset Structure
### Data Instances
The examples look like this for English:
```
from datasets import load_dataset
data = load_dataset('Davlan/sib200', 'eng_Latn')
# Please, specify the language code
# A data point example is below:
{
'label': 0,
'index_id': 1523,
'text': 'Mutation adds new genetic variation, and selection removes it from the pool of expressed variation.'
}
```
### Data Fields
- `label`: topic id
- `index_id`: sentence id in flores-200
- `text`: text
The topics correspond to this list:
```
"science/technology", "travel", "politics", "sports", "health", "entertainment", "geography"
```
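If the integer `label` indexes this list in the order shown (an assumption worth verifying against the dataset's `features`), a minimal helper to recover topic names could look like:

```python
# Hypothetical helper, not part of the dataset release: map SIB-200
# integer labels to topic names, assuming labels follow the list order above.
TOPICS = [
    "science/technology", "travel", "politics", "sports",
    "health", "entertainment", "geography",
]

def label_to_topic(label: int) -> str:
    return TOPICS[label]

print(label_to_topic(0))  # science/technology
```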
### Data Splits
For all languages, there are three splits.
The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes :
| Language | train | validation | test |
|-----------------|------:|-----------:|-----:|
| English | 701 | 99 | 204 |
## Dataset Creation
### Curation Rationale
The dataset was created to provide new resources for 205 languages, many of which are under-served in natural language processing.
[More Information Needed]
### Source Data
The data is sourced from the news domain; details can be found here ****
#### Initial Data Collection and Normalization
The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.
#### Who are the source language producers?
The source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.
### Annotations
#### Annotation process
Details can be found here **
#### Who are the annotators?
Annotators were recruited from [Masakhane](https://www.masakhane.io/)
### Personal and Sensitive Information
The data is sourced from newspapers and only contains mentions of public figures or individuals.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
### Licensing Information
The licensing status of the data is CC 4.0 Commercial
### Citation Information
The [BibTex](http://www.bibtex.org/)-formatted reference for the dataset:
```
@misc{adelani2023sib200,
title={SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects},
author={David Ifeoluwa Adelani and Hannah Liu and Xiaoyu Shen and Nikita Vassilyev and Jesujoba O. Alabi and Yanke Mao and Haonan Gao and Annie En-Shiun Lee},
year={2023},
eprint={2309.07445},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset. |
bigscience/xP3mt | bigscience | "2023-05-30T15:50:57Z" | 9,778 | 23 | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"language:gu",
"language:hi",
"language:id",
"language:ig",
"language:ki",
"language:kn",
"language:lg",
"language:ln",
"language:ml",
"language:mr",
"language:ne",
"language:nso",
"language:ny",
"language:or",
"language:pa",
"language:pt",
"language:rn",
"language:rw",
"language:sn",
"language:st",
"language:sw",
"language:ta",
"language:te",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:ur",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:zu",
"license:apache-2.0",
"size_categories:10M<n<100M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2211.01786",
"region:us"
] | [
"other"
] | "2022-09-28T12:36:00Z" | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Oración 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\Oración 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nPregunta: ¿La oración 1 parafrasea la oración 2? ¿Si o no?",
"targets": "Sí"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
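As a rough sketch of how a record could be flattened into a single training string for a decoder-only model (the whitespace separator and end-of-sequence marker below are illustrative assumptions, not the exact recipe used to train BLOOMZ):

```python
# Illustrative only: the separator and `eos` marker are assumptions,
# not the exact preprocessing used for BLOOMZ/mT0.
def to_training_text(example: dict, eos: str = "</s>") -> str:
    return example["inputs"] + " " + example["targets"] + eos

record = {"inputs": "Question: Does sentence 1 paraphrase sentence 2? Yes or no?",
          "targets": "Yes"}
print(to_training_text(record))
```

In practice, instruction-tuning setups often compute the loss only on the `targets` span rather than on the whole concatenated string.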
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. We machine-translated prompts for monolingual datasets; thus languages with only crosslingual datasets (e.g. Translation) do not have non-English prompts. Languages without non-English prompts are equivalent to [xP3](https://huggingface.co/datasets/bigscience/xP3).
|Language|Kilobytes|%|Samples|%|Non-English prompts|
|--------|------:|-:|---:|-:|-:|
|tw|106288|0.11|265071|0.33| |
|bm|107056|0.11|265180|0.33| |
|ak|108096|0.11|265071|0.33| |
|ca|110608|0.11|271191|0.34| |
|eu|113008|0.12|281199|0.35| |
|fon|113072|0.12|265063|0.33| |
|st|114080|0.12|265063|0.33| |
|ki|115040|0.12|265180|0.33| |
|tum|116032|0.12|265063|0.33| |
|wo|122560|0.13|365063|0.46| |
|ln|126304|0.13|365060|0.46| |
|as|156256|0.16|265063|0.33| |
|or|161472|0.17|265063|0.33| |
|kn|165456|0.17|265063|0.33| |
|ml|175040|0.18|265864|0.33| |
|rn|192992|0.2|318189|0.4| |
|nso|229712|0.24|915051|1.14| |
|tn|235536|0.24|915054|1.14| |
|lg|235936|0.24|915021|1.14| |
|rw|249360|0.26|915043|1.14| |
|ts|250256|0.26|915044|1.14| |
|sn|252496|0.26|865056|1.08| |
|xh|254672|0.26|915058|1.14| |
|zu|263712|0.27|915061|1.14| |
|ny|272128|0.28|915063|1.14| |
|ig|325440|0.33|950097|1.19|✅|
|yo|339664|0.35|913021|1.14|✅|
|ne|398144|0.41|315754|0.39|✅|
|pa|529632|0.55|339210|0.42|✅|
|sw|561392|0.58|1114439|1.39|✅|
|gu|566576|0.58|347499|0.43|✅|
|mr|674000|0.69|417269|0.52|✅|
|bn|854864|0.88|428725|0.54|✅|
|ta|943440|0.97|410633|0.51|✅|
|te|1384016|1.42|573354|0.72|✅|
|ur|1944416|2.0|855756|1.07|✅|
|vi|3113184|3.2|1667306|2.08|✅|
|code|4330752|4.46|2707724|3.38| |
|hi|4469712|4.6|1543441|1.93|✅|
|id|4538768|4.67|2582272|3.22|✅|
|zh|4604112|4.74|3571636|4.46|✅|
|ar|4703968|4.84|2148970|2.68|✅|
|fr|5558912|5.72|5055942|6.31|✅|
|pt|6130016|6.31|3562772|4.45|✅|
|es|7579424|7.8|5151349|6.43|✅|
|en|39252528|40.4|32740750|40.87| |
|total|97150128|100.0|80100816|100.0|✅|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI & HumanEval)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
HuggingFaceTB/finemath | HuggingFaceTB | "2024-12-23T11:19:16Z" | 9,764 | 163 | [
"license:odc-by",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/3847",
"region:us"
] | null | "2024-11-25T15:23:13Z" | ---
license: odc-by
dataset_info:
- config_name: finemath-3plus
features:
- name: url
dtype: string
- name: fetch_time
dtype: int64
- name: content_mime_type
dtype: string
- name: warc_filename
dtype: string
- name: warc_record_offset
dtype: int32
- name: warc_record_length
dtype: int32
- name: text
dtype: string
- name: token_count
dtype: int32
- name: char_count
dtype: int32
- name: metadata
dtype: string
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: crawl
dtype: string
- name: snapshot_type
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
splits:
- name: train
num_bytes: 137764105388.93857
num_examples: 21405610
download_size: 65039196945
dataset_size: 137764105388.93857
- config_name: finemath-4plus
features:
- name: url
dtype: string
- name: fetch_time
dtype: int64
- name: content_mime_type
dtype: string
- name: warc_filename
dtype: string
- name: warc_record_offset
dtype: int32
- name: warc_record_length
dtype: int32
- name: text
dtype: string
- name: token_count
dtype: int32
- name: char_count
dtype: int32
- name: metadata
dtype: string
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: crawl
dtype: string
- name: snapshot_type
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
splits:
- name: train
num_bytes: 39101488149.09091
num_examples: 6699493
download_size: 18365184633
dataset_size: 39101488149.09091
- config_name: infiwebmath-3plus
features:
- name: url
dtype: string
- name: metadata
dtype: string
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: token_count
dtype: int64
- name: char_count
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 96485696853.10182
num_examples: 13882669
download_size: 46808660851
dataset_size: 96485696853.10182
- config_name: infiwebmath-4plus
features:
- name: url
dtype: string
- name: metadata
dtype: string
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: token_count
dtype: int64
- name: char_count
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 40002719500.1551
num_examples: 6296212
download_size: 19234328998
dataset_size: 40002719500.1551
configs:
- config_name: finemath-3plus
data_files:
- split: train
path: finemath-3plus/train-*
- config_name: finemath-4plus
data_files:
- split: train
path: finemath-4plus/train-*
- config_name: infiwebmath-3plus
data_files:
- split: train
path: infiwebmath-3plus/train-*
- config_name: infiwebmath-4plus
data_files:
- split: train
path: infiwebmath-4plus/train-*
---
# 📐 FineMath
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/0GAdY8wZx6bGtUzqX4Lvi.png)
## What is it?
📐 FineMath consists of **34B tokens** (FineMath-3+) and **54B tokens** (FineMath-3+ with InfiMM-WebMath-3+) of mathematical educational content filtered from CommonCrawl. To curate this dataset, we trained a mathematical content [classifier](https://huggingface.co/HuggingFaceTB/finemath-classifier) using annotations generated by LLama-3.1-70B-Instruct. We used the classifier to retain only the most educational mathematics content, focusing on clear explanations and step-by-step problem solving rather than advanced academic papers.
The [Dataset Curation](#dataset-curation) section details the process for creating the dataset.
<img src="assets/train_curves.png" width="800"/>
## What is being released?
The dataset is released in two versions:
- **FineMath-3+**: 34B tokens, 21.4M documents containing mathematical reasoning and problem solving, formatted with Markdown and LaTeX.
- **FineMath-4+** (a subset of FineMath-3+): 9.6B tokens, 6.7M documents of higher quality with detailed explanations. Models trained on this dataset perform better on GSM8k and MATH.
<!-- (the image looks kinda meh) <img src="assets/stats.png" width="512"/> -->
We also release a filtered English text-only portion of the **[InfiMM-WebMath-40B](https://huggingface.co/datasets/Infi-MM/InfiMM-WebMath-40B)** dataset, classified using the same approach as FineMath:
- **InfiMM-WebMath-3+**: 20.5B tokens, 13.9M documents.
- **InfiMM-WebMath-4+** (a subset of InfiMM-WebMath-3+): 8.5B tokens, 6.3M documents.
## How to load the dataset
Use one of the available configs: `finemath-3plus`, `finemath-4plus`, `infiwebmath-3plus`, or `infiwebmath-4plus`.
```python
from datasets import load_dataset
# Load the high-quality subset
data = load_dataset("HuggingFaceTB/finemath", "finemath-4plus", split="train", num_proc=8)
# Or load the larger subset
data = load_dataset("HuggingFaceTB/finemath", "finemath-3plus", split="train", num_proc=8)
```
## Dataset curation
Recent language models like DeepSeekMath and MathStral have demonstrated strong mathematical capabilities, trained on specialized datasets that aren't publicly available. We developed a pipeline to identify and extract high-quality mathematical content from CommonCrawl, with several iterations of refinement to improve quality.
### Phase 1: Initial content extraction and classification
We began by re-extracting pages from CommonCrawl WARCs using URLs from the FineWeb dataset, collecting both the latest and largest versions of each page to capture the evolution of pages across the years.
Unlike FineWeb, which uses Trafilatura, we employed Resiliparse for text extraction as it better preserves forum discussions and QA answers that often contain crucial reasoning steps and solutions.
For initial quality assessment, we used [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) to generate annotations on a 3-point scale:
1. Contains general mathematical content
2. Shows logical reasoning in mathematical context
3. Contains clear step-by-step solutions at appropriate level
A `multilingual-e5-small`-based classifier finetuned on these annotations was used to score the initial corpus.
However, this first version performed below the OpenWebMath baseline, leading to several important refinements.
### Phase 2: Recalling more candidate pages
Analysis revealed that FineWeb's C4 filter removes pages containing '{' characters, inadvertently filtering out content with LaTeX notation. To address this and expand coverage, we:
1. Identified promising website domains by selecting those where at least 10% of pages received a classifier score ≥ 2
2. Added URLs from OpenWebMath and InfiMM-WebMath datasets
3. Recovered URLs of pages filtered by FineWeb's '{' rule from its rejection logs
4. Re-extracted all content from scratch using the [OpenWebMath pipeline](https://github.com/keirp/OpenWebMath), which properly handles mathematical notation across various HTML markup formats and standardizes them to LaTeX
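Step 1 of this recall stage can be sketched in a few lines (a simplified reconstruction, not the actual pipeline code; `pages` is a hypothetical iterable of `(url, score)` pairs):

```python
from collections import defaultdict
from urllib.parse import urlparse

def promising_domains(pages, min_share=0.10, min_score=2):
    """Return domains where at least `min_share` of pages score >= `min_score`."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for url, score in pages:
        domain = urlparse(url).netloc
        totals[domain] += 1
        if score >= min_score:
            hits[domain] += 1
    return {d for d in totals if hits[d] / totals[d] >= min_share}
```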
### Phase 3: Refined quality assessment
The expanded corpus underwent a more fine-grained quality evaluation:
Once again, we used Llama-3.1-70B-Instruct to score a sample of newly extracted pages on a 5-point scale (full prompt available [here](assets/prompt.txt)).
We finetuned a new [classifier](https://huggingface.co/HuggingFaceTB/finemath-classifier) on these annotations and scored the entire corpus.
After retaining only pages with a score of 3 or higher, and deduplicating the samples using simple single-band MinHash-LSH, we obtained FineMath-3+ with 34B tokens.
The same classifier was applied to InfiMM-WebMath's text content, which focuses more on reasoning than on advanced mathematics.
Both datasets were additionally filtered using FineWeb's language classification pipeline to remove non-English content.
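The single-band MinHash-LSH deduplication mentioned above can be sketched as follows (a toy reconstruction with assumed shingling and signature size; with a single band spanning the whole signature, documents that share the full signature are treated as duplicates):

```python
import hashlib

def minhash_signature(text, num_perm=16, shingle_size=5):
    """MinHash over character shingles (assumes len(text) >= shingle_size)."""
    shingles = {text[i:i + shingle_size]
                for i in range(len(text) - shingle_size + 1)}
    sig = []
    for seed in range(num_perm):
        # Seeding the hash emulates `num_perm` independent hash functions.
        sig.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles
        ))
    return tuple(sig)

def dedup_single_band(docs):
    """Keep one document per full-signature match (a single LSH band)."""
    seen, kept = set(), []
    for doc in docs:
        key = minhash_signature(doc)
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept
```

The real pipeline's parameters (shingle granularity, signature length) are not documented here, so the values above are placeholders.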
### Decontamination
Following Qwen2.5-Math's approach, we removed samples with 13-gram overlaps against test sets from GSM8k, MATH, MMLU and ARC. Decontamination logs are available at [HuggingFaceTB/finemath_contamination_report](https://huggingface.co/datasets/HuggingFaceTB/finemath_contamination_report).
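The overlap check itself can be sketched as (simplified; the actual tokenization and text normalization may differ):

```python
def ngrams(tokens, n=13):
    """Set of word n-grams in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(sample_text, benchmark_ngrams, n=13):
    """Flag a sample if any word n-gram also appears in a benchmark test set."""
    return not ngrams(sample_text.split(), n).isdisjoint(benchmark_ngrams)
```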
## Results and Performance
<img src="assets/eval_bar.png" width="600"/>
Our evaluations show several key findings:
1. FineMath-3+ outperforms the base InfiWebMath on GSM8k and MATH benchmarks
2. FineMath-4+ demonstrates superior performance compared to both FineMath-3+ and InfiWebMath-4+ on GSM8k and MATH
3. Combining the datasets (50% FineMath-3+ with 50% InfiWebMath-3+) yields approximately 50B tokens while matching the performance of FineMath-3+
4. Deduplicating the pages repeated between FineMath and InfiWebMath reduces performance compared to a non-deduplicated combination
## Dataset Schema
```python
{
'url': string, # Source page URL
'fetch_time': int64, # Crawler timestamp
'content_mime_type': string, # MIME type
'warc_filename': string, # Common Crawl WARC source file
'warc_record_offset': int32, # WARC record offset, in bytes
'warc_record_length': int32, # WARC record size, in bytes
'text': string, # Page content
'token_count': int32, # Number of Llama tokens
'char_count': int32, # Character count
'metadata': string, # Additional OpenWebMath metadata
'score': float64, # Raw quality score
'int_score': int64, # Integer quality score
'crawl': string, # Common Crawl crawl identifier
'snapshot_type': string, # Whether the page is the latest or the largest for this URL
'language': string, # Document language
'language_score': float64 # LangID probability
}
```
## Considerations for Using the Data
### Social Impact of Dataset
With the release of this dataset, we aim to make high-quality mathematical educational content more accessible to the machine learning community. While multiple language models have demonstrated strong mathematical capabilities, the datasets used to train these capabilities are often not publicly available. By releasing FineMath, we hope to:
- Make the dataset creation process more transparent
- Reduce the barrier to entry for training models with strong mathematical capabilities
- Provide a benchmark for mathematical content quality filtering
### Discussion of Biases
The dataset may have certain inherent biases:
- Focus on English language content
- Emphasis on popular educational approaches to mathematics
- Bias towards certain types of mathematical notation and formatting
### Other Known Limitations
- The dataset is limited to English language content
- The filtering criteria may not capture advanced mathematical content (e.g. advanced research subjects)
- Some mathematical notation (e.g. image-based) may not be preserved
- Long-form content may have varying quality even within high-scoring documents
## Licensing Information
The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0** [license](https://opendatacommons.org/licenses/by/1-0/). The use of this dataset is also subject to [CommonCrawl's Terms of Use](https://commoncrawl.org/terms-of-use).
## Future work
There are several avenues for future work:
- Expand language coverage beyond English
- Improve mathematical notation extraction and preservation
- Develop more sophisticated quality metrics
- Create specialized subsets for different educational levels
### Citation Information
```
@misc{lozhkov2024finemath,
author = { Lozhkov, Anton and Ben Allal, Loubna and Bakouch, Elie and von Werra, Leandro and Wolf, Thomas },
title = { FineMath: the Finest Collection of Mathematical Content },
year = 2024,
url = { https://huggingface.co/datasets/HuggingFaceTB/finemath },
doi = { 10.57967/hf/3847 },
publisher = { Hugging Face }
}
``` |
cimec/lambada | cimec | "2024-01-04T14:16:25Z" | 9,757 | 57 | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|bookcorpus",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"long-range-dependency"
] | [
"text2text-generation"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|bookcorpus
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: lambada
pretty_name: LAMBADA
tags:
- long-range-dependency
dataset_info:
config_name: plain_text
features:
- name: text
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 978174122
num_examples: 2662
- name: test
num_bytes: 1791823
num_examples: 5153
- name: validation
num_bytes: 1703482
num_examples: 4869
download_size: 552427340
dataset_size: 981669427
configs:
- config_name: plain_text
data_files:
- split: train
path: plain_text/train-*
- split: test
path: plain_text/test-*
- split: validation
path: plain_text/validation-*
default: true
---
# Dataset Card for LAMBADA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LAMBADA homepage](https://zenodo.org/record/2630551#.X8UP76pKiIa)
- **Paper:** [The LAMBADA dataset: Word prediction requiring a broad discourse context∗](https://www.aclweb.org/anthology/P16-1144.pdf)
- **Data:** https://doi.org/10.5281/zenodo.2630551
### Dataset Summary
The LAMBADA dataset evaluates the capabilities of computational models
for text understanding by means of a word prediction task.
LAMBADA is a collection of narrative passages sharing the characteristic
that human subjects are able to guess their last word if
they are exposed to the whole passage, but not if they
only see the last sentence preceding the target word.
To succeed on LAMBADA, computational models cannot
simply rely on local context, but must be able to
keep track of information in the broader discourse.
The LAMBADA dataset is extracted from BookCorpus and
consists of 10'022 passages, divided into 4'869 development
and 5'153 test passages. The training data for language
models to be tested on LAMBADA include the full text
of 2'662 novels (disjoint from those in dev+test),
comprising 203 million words.
### Supported Tasks and Leaderboards
Long-range dependency modeling, evaluated as prediction of the last word of a passage.
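Performance on this task is usually reported as exact-match accuracy on the predicted last word. A minimal scoring sketch (the prediction and reference lists below are illustrative, not part of the dataset):

```python
def last_word_accuracy(predictions, references):
    """Exact-match accuracy of predicted target words.

    `predictions` and `references` are parallel lists of strings,
    one predicted / one gold last word per passage.
    """
    if not references:
        return 0.0
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

print(last_word_accuracy(["bob", "room"], ["bob", "door"]))  # -> 0.5
```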
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A data point is a text sequence (passage) including the context, the target sentence (the last one) and the target word. For each passage in the dev and the test splits, the word to be guessed is the last one.
The training data include the full text of 2'662 novels (disjoint from
those in dev+test), comprising more than 200M words. It consists of text from the same domain as the dev+test passages, but not filtered in any way.
Each training instance has a `category` field indicating which sub-category the book was extracted from. This field is not given for the dev and test splits.
An example looks like this:
```
{"category": "Mystery",
"text": "bob could have been called in at this point , but he was n't miffed at his exclusion at all . he was relieved at not being brought into this initial discussion with central command . `` let 's go make some grub , '' said bob as he turned to danny . danny did n't keep his stoic expression , but with a look of irritation got up and left the room with bob",
}
```
### Data Fields
- `category`: the sub-category of books the book was extracted from. Only available for the training split.
- `text`: the text (concatenation of context, target sentence and target word). The word to be guessed is the last one.
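Because the word to be guessed is always the last whitespace-separated token of `text`, a passage can be split into context and target with plain string operations. A minimal sketch (not an official utility of this dataset):

```python
def split_passage(text: str):
    """Split a LAMBADA-style passage into (context, target_word).

    The word to be guessed is the last whitespace-separated token
    of the passage.
    """
    context, _, target = text.rstrip().rpartition(" ")
    return context, target

passage = "with a look of irritation got up and left the room with bob"
context, target = split_passage(passage)
print(target)  # -> bob
```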
### Data Splits
- train: 2'662 novels
- dev: 4'869 passages
- test: 5'153 passages
## Dataset Creation
### Curation Rationale
The dataset aims at evaluating the ability of language models to hold long-term contextual memories. Instances are extracted from books because they display long-term dependencies. In particular, the data are curated such that the target words are easy to guess by human subjects when they can look at the whole passage they come from, but nearly impossible if only the last sentence is considered.
### Source Data
#### Initial Data Collection and Normalization
The corpus was deduplicated, and potentially offensive material was filtered out with a stop-word list.
#### Who are the source language producers?
The passages are extracted from novels from [Book Corpus](https://github.com/huggingface/datasets/tree/master/datasets/bookcorpus).
### Annotations
#### Annotation process
The authors required two consecutive subjects (paid crowdworkers) to exactly match the missing word based on the whole passage (comprising the context and the target sentence), and made sure that no subject (out of ten) was able to provide it based on local context only, even when given 3 guesses.
#### Who are the annotators?
The text is self-annotated but was curated by asking (paid) crowdworkers to guess the last word.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) (Creative Commons Attribution 4.0 International) license.
### Citation Information
```
@InProceedings{paperno-EtAl:2016:P16-1,
author = {Paperno, Denis and Kruszewski, Germ\'{a}n and Lazaridou,
Angeliki and Pham, Ngoc Quan and Bernardi, Raffaella and Pezzelle,
Sandro and Baroni, Marco and Boleda, Gemma and Fernandez, Raquel},
title = {The {LAMBADA} dataset: Word prediction requiring a broad
discourse context},
booktitle = {Proceedings of the 54th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers)},
month = {August},
year = {2016},
address = {Berlin, Germany},
publisher = {Association for Computational Linguistics},
pages = {1525--1534},
url = {http://www.aclweb.org/anthology/P16-1144}
}
```
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
HuggingFaceGECLM/REDDIT_comments | HuggingFaceGECLM | "2023-03-17T07:52:51Z" | 9,752 | 11 | [
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2001.08435",
"region:us",
"reddit",
"social-media"
] | [
"text-generation"
] | "2023-03-15T14:14:58Z" | ---
dataset_info:
features:
- name: archived
dtype: string
- name: author
dtype: string
- name: author_fullname
dtype: string
- name: body
dtype: string
- name: comment_type
dtype: string
- name: controversiality
dtype: string
- name: created_utc
dtype: string
- name: edited
dtype: string
- name: gilded
dtype: string
- name: id
dtype: string
- name: link_id
dtype: string
- name: locked
dtype: string
- name: name
dtype: string
- name: parent_id
dtype: string
- name: permalink
dtype: string
- name: retrieved_on
dtype: string
- name: score
dtype: string
- name: subreddit_id
dtype: string
- name: subreddit_name_prefixed
dtype: string
- name: subreddit_type
dtype: string
- name: total_awards_received
dtype: string
splits:
- name: programming
num_bytes: 3466623746
num_examples: 7503347
- name: tifu
num_bytes: 4761338653
num_examples: 12738669
- name: explainlikeimfive
num_bytes: 8451732573
num_examples: 16392814
- name: WritingPrompts
num_bytes: 4651591771
num_examples: 4436210
- name: changemyview
num_bytes: 8603031915
num_examples: 11600073
- name: LifeProTips
num_bytes: 5272994396
num_examples: 12829459
- name: todayilearned
num_bytes: 22655655241
num_examples: 60199778
- name: science
num_bytes: 7069809765
num_examples: 18112884
- name: askscience
num_bytes: 3144754665
num_examples: 6286702
- name: ifyoulikeblank
num_bytes: 547200329
num_examples: 1332211
- name: Foodforthought
num_bytes: 308377128
num_examples: 567900
- name: IWantToLearn
num_bytes: 408331672
num_examples: 745543
- name: bestof
num_bytes: 2003718831
num_examples: 4347522
- name: IAmA
num_bytes: 9380094090
num_examples: 25778822
- name: socialskills
num_bytes: 1000014402
num_examples: 1842733
- name: relationship_advice
num_bytes: 22298879735
num_examples: 38937398
- name: philosophy
num_bytes: 1494947876
num_examples: 2391695
- name: YouShouldKnow
num_bytes: 1165617658
num_examples: 2639265
- name: history
num_bytes: 1457852402
num_examples: 2962043
- name: books
num_bytes: 4562689426
num_examples: 10187495
- name: Showerthoughts
num_bytes: 13259109532
num_examples: 34123213
- name: personalfinance
num_bytes: 9484869588
num_examples: 18361314
- name: buildapc
num_bytes: 9801044390
num_examples: 21761801
- name: EatCheapAndHealthy
num_bytes: 853462012
num_examples: 1821897
- name: boardgames
num_bytes: 3131627378
num_examples: 6328926
- name: malefashionadvice
num_bytes: 2928017882
num_examples: 7712258
- name: femalefashionadvice
num_bytes: 1619784736
num_examples: 3262969
- name: scifi
num_bytes: 888152056
num_examples: 2193741
- name: Fantasy
num_bytes: 2285934538
num_examples: 4566639
- name: Games
num_bytes: 10396813188
num_examples: 23373965
- name: bodyweightfitness
num_bytes: 794549854
num_examples: 1613634
- name: SkincareAddiction
num_bytes: 3421122597
num_examples: 5660550
- name: podcasts
num_bytes: 464773126
num_examples: 943266
- name: suggestmeabook
num_bytes: 1842944304
num_examples: 3492937
- name: AskHistorians
num_bytes: 2244587909
num_examples: 2714353
- name: gaming
num_bytes: 28374513722
num_examples: 85729253
- name: DIY
num_bytes: 2113533684
num_examples: 4489265
- name: sports
num_bytes: 2230129132
num_examples: 6470079
- name: space
num_bytes: 3081499208
num_examples: 7896182
- name: gadgets
num_bytes: 1683252868
num_examples: 4104833
- name: Documentaries
num_bytes: 1852644771
num_examples: 4051474
- name: GetMotivated
num_bytes: 1211761267
num_examples: 3221980
- name: UpliftingNews
num_bytes: 2003149025
num_examples: 4741948
- name: technology
num_bytes: 10826871436
num_examples: 25404699
- name: Fitness
num_bytes: 6191132755
num_examples: 14319856
- name: travel
num_bytes: 1740556350
num_examples: 3806755
- name: lifehacks
num_bytes: 626791812
num_examples: 1799437
- name: Damnthatsinteresting
num_bytes: 6376694618
num_examples: 15643554
- name: gardening
num_bytes: 1825313940
num_examples: 4568468
- name: mildlyinteresting
num_bytes: 9079894206
num_examples: 26436769
download_size: 109177016105
dataset_size: 255339788158
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: Reddit comments
size_categories:
- 10B<n<100B
source_datasets: []
tags:
- reddit
- social-media
task_categories:
- text-generation
task_ids:
- dialogue-modeling
- language-modeling
---
# Dataset Card for "REDDIT_comments"
## Dataset Description
- **Homepage:**
- **Paper: https://arxiv.org/abs/2001.08435**
### Dataset Summary
Comments from 50 high-quality subreddits, extracted from the Reddit PushShift data dumps (from 2006 to Jan 2023).
### Supported Tasks
These comments can be used for text generation and language modeling, as well as dialogue modeling.
## Dataset Structure
### Data Splits
Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming"
## Dataset Creation
### Curation Rationale
All information fields have been cast to string, as their format changes over time from one dump to the next. A reduced set of keys has been kept: "archived", "author", "author_fullname", "body", "comment_type", "controversiality", "created_utc", "edited", "gilded", "id", "link_id", "locked", "name", "parent_id", "permalink", "retrieved_on", "score", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_type", "total_awards_received".
### Source Data
The [Reddit PushShift data dumps](https://files.pushshift.io/reddit/) are part of a data collection effort which crawls Reddit at regular intervals, to extract and keep all its data.
#### Initial Data Collection and Normalization
See the paper.
#### Who are the source language producers?
Redditors are mostly young (65% below 30), male (70%), and American (50% of the site).
### Personal and Sensitive Information
The data contains Redditor's usernames associated to their content.
## Considerations for Using the Data
This dataset should be anonymized before any processing.
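A lightweight way to do this (a sketch of one possible approach, not a procedure prescribed by the dataset authors) is to replace the identifying fields with a salted hash, which hides usernames while still allowing comments to be grouped by author:

```python
import hashlib

SALT = "replace-with-your-own-secret"  # assumption: pick a private salt

def anonymize_comment(comment: dict) -> dict:
    """Return a copy of a comment with identifying fields hashed.

    Hashing (rather than deleting) the username keeps the ability to
    group comments by author without exposing who the author is.
    """
    anon = dict(comment)
    for field in ("author", "author_fullname"):
        value = anon.get(field)
        if value:
            digest = hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()
            anon[field] = digest[:16]
    return anon

comment = {"author": "some_redditor", "body": "example text"}
print(anonymize_comment(comment)["author"] != "some_redditor")  # -> True
```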
Though the selected subreddits are considered to be of higher quality, they can still reflect the expressions of bias and toxicity found elsewhere on the internet.
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. |
google-research-datasets/paws-x | google-research-datasets | "2024-01-04T16:17:17Z" | 9,751 | 39 | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:multi-input-text-classification",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"source_datasets:extended|other-paws",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:ja",
"language:ko",
"language:zh",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1908.11828",
"region:us",
"paraphrase-identification"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
- machine-generated
language:
- de
- en
- es
- fr
- ja
- ko
- zh
license:
- other
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-paws
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring
- multi-input-text-classification
paperswithcode_id: paws-x
pretty_name: 'PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification'
tags:
- paraphrase-identification
dataset_info:
- config_name: de
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 12801784
num_examples: 49401
- name: test
num_bytes: 524206
num_examples: 2000
- name: validation
num_bytes: 514001
num_examples: 2000
download_size: 9601920
dataset_size: 13839991
- config_name: en
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 12215913
num_examples: 49401
- name: test
num_bytes: 494726
num_examples: 2000
- name: validation
num_bytes: 492279
num_examples: 2000
download_size: 9045005
dataset_size: 13202918
- config_name: es
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 12808446
num_examples: 49401
- name: test
num_bytes: 519103
num_examples: 2000
- name: validation
num_bytes: 513880
num_examples: 2000
download_size: 9538815
dataset_size: 13841429
- config_name: fr
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 13295557
num_examples: 49401
- name: test
num_bytes: 535093
num_examples: 2000
- name: validation
num_bytes: 533023
num_examples: 2000
download_size: 9785410
dataset_size: 14363673
- config_name: ja
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 15041592
num_examples: 49401
- name: test
num_bytes: 668628
num_examples: 2000
- name: validation
num_bytes: 661770
num_examples: 2000
download_size: 10435711
dataset_size: 16371990
- config_name: ko
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 13934181
num_examples: 49401
- name: test
num_bytes: 562292
num_examples: 2000
- name: validation
num_bytes: 554867
num_examples: 2000
download_size: 10263972
dataset_size: 15051340
- config_name: zh
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 10815459
num_examples: 49401
- name: test
num_bytes: 474636
num_examples: 2000
- name: validation
num_bytes: 473110
num_examples: 2000
download_size: 9178953
dataset_size: 11763205
configs:
- config_name: de
data_files:
- split: train
path: de/train-*
- split: test
path: de/test-*
- split: validation
path: de/validation-*
- config_name: en
data_files:
- split: train
path: en/train-*
- split: test
path: en/test-*
- split: validation
path: en/validation-*
- config_name: es
data_files:
- split: train
path: es/train-*
- split: test
path: es/test-*
- split: validation
path: es/validation-*
- config_name: fr
data_files:
- split: train
path: fr/train-*
- split: test
path: fr/test-*
- split: validation
path: fr/validation-*
- config_name: ja
data_files:
- split: train
path: ja/train-*
- split: test
path: ja/test-*
- split: validation
path: ja/validation-*
- config_name: ko
data_files:
- split: train
path: ko/train-*
- split: test
path: ko/test-*
- split: validation
path: ko/validation-*
- config_name: zh
data_files:
- split: train
path: zh/train-*
- split: test
path: zh/test-*
- split: validation
path: zh/validation-*
---
# Dataset Card for PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx)
- **Repository:** [PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx)
- **Paper:** [PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification](https://arxiv.org/abs/1908.11828)
- **Point of Contact:** [Yinfei Yang]([email protected])
### Dataset Summary
This dataset contains 23,659 **human** translated PAWS evaluation pairs and
296,406 **machine** translated training pairs in six typologically distinct
languages: French, Spanish, German, Chinese, Japanese, and Korean. All
translated pairs are sourced from examples in
[PAWS-Wiki](https://github.com/google-research-datasets/paws#paws-wiki).
For further details, see the accompanying paper:
[PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase
Identification](https://arxiv.org/abs/1908.11828)
### Supported Tasks and Leaderboards
It has mainly been used for paraphrase identification in English and six other languages, namely French, Spanish, German, Chinese, Japanese, and Korean.
### Languages
The dataset is in English, French, Spanish, German, Chinese, Japanese, and Korean
## Dataset Structure
### Data Instances
For en:
```
id : 1
sentence1 : In Paris , in October 1560 , he secretly met the English ambassador , Nicolas Throckmorton , asking him for a passport to return to England through Scotland .
sentence2 : In October 1560 , he secretly met with the English ambassador , Nicolas Throckmorton , in Paris , and asked him for a passport to return to Scotland through England .
label : 0
```
For fr:
```
id : 1
sentence1 : À Paris, en octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, lui demandant un passeport pour retourner en Angleterre en passant par l'Écosse.
sentence2 : En octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, à Paris, et lui demanda un passeport pour retourner en Écosse par l'Angleterre.
label : 0
```
### Data Fields
All files are in tsv format with four columns:
Column Name | Data
:---------- | :--------------------------------------------------------
id | An ID that matches the ID of the source pair in PAWS-Wiki
sentence1 | The first sentence
sentence2 | The second sentence
label | Label for each pair
The source text of each translation can be retrieved by looking up the ID in the
corresponding file in PAWS-Wiki.
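Given the four-column layout above, tsv content can be parsed into records with the Python standard library. A minimal sketch (the sample line is illustrative; `QUOTE_NONE` keeps quotation marks inside sentences verbatim):

```python
import csv
import io

FIELDS = ("id", "sentence1", "sentence2", "label")

def parse_pawsx_tsv(tsv_text: str, has_header: bool = True):
    """Parse PAWS-X-style tsv content into a list of field dicts."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t",
                        quoting=csv.QUOTE_NONE)
    rows = [row for row in reader if row]
    if has_header:
        rows = rows[1:]
    return [dict(zip(FIELDS, row)) for row in rows]

sample = (
    "id\tsentence1\tsentence2\tlabel\n"
    "1\tIn Paris , he secretly met the ambassador .\t"
    "He secretly met the ambassador in Paris .\t0\n"
)
for record in parse_pawsx_tsv(sample):
    print(record["id"], record["label"])  # -> 1 0
```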
### Data Splits
The numbers of examples for each of the seven languages are shown below:
Language | Train | Dev | Test
:------- | ------: | -----: | -----:
en | 49,401 | 2,000 | 2,000
fr | 49,401 | 2,000 | 2,000
es | 49,401 | 2,000 | 2,000
de | 49,401 | 2,000 | 2,000
zh | 49,401 | 2,000 | 2,000
ja | 49,401 | 2,000 | 2,000
ko | 49,401 | 2,000 | 2,000
> **Caveat**: please note that the dev and test sets of PAWS-X are both sourced
> from the dev set of PAWS-Wiki. As a consequence, the same `sentence 1` may
> appear in both the dev and test sets. Nevertheless our data split guarantees
> that there is no overlap on sentence pairs (`sentence 1` + `sentence 2`)
> between dev and test.
## Dataset Creation
### Curation Rationale
Most existing work on adversarial data generation focuses on English. For example, PAWS (Paraphrase Adversaries from Word Scrambling) (Zhang et al., 2019) consists of challenging English paraphrase identification pairs from Wikipedia and Quora. The authors remedy this gap with PAWS-X, a new dataset of 23,659 human-translated PAWS evaluation pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean. They provide baseline numbers for three models with different capacities to capture non-local context and sentence structure, using different multilingual training and evaluation regimes. Multilingual BERT (Devlin et al., 2019) fine-tuned on PAWS English plus machine-translated data performs best, with accuracy ranging from 83.1 to 90.8 across the non-English languages and an average accuracy gain of 23% over the next best model. PAWS-X shows the effectiveness of deep multilingual pre-training while also leaving considerable headroom as a new challenge to drive multilingual research that better captures structure and contextual information.
### Source Data
PAWS (Paraphrase Adversaries from Word Scrambling)
#### Initial Data Collection and Normalization
All translated pairs are sourced from examples in [PAWS-Wiki](https://github.com/google-research-datasets/paws#paws-wiki)
#### Who are the source language producers?
This dataset contains 23,659 human translated PAWS evaluation pairs and 296,406 machine translated training pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The paper mentions the translate team, especially Mengmeng Niu, for the help with the annotations.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
### Citation Information
```
@InProceedings{pawsx2019emnlp,
title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}},
author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},
booktitle = {Proc. of EMNLP},
year = {2019}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@gowtham1997](https://github.com/gowtham1997) for adding this dataset. |
alvations/c4p0-x1-en-engb | alvations | "2024-03-24T03:52:17Z" | 9,710 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-23T10:17:27Z" | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 5583
num_examples: 5
download_size: 17399
dataset_size: 5583
configs:
- config_name: default
data_files:
- split: train
path: 5eeb99e4b632b370/train-*
---
|
asahi417/seamless-align-enA-jaA.speaker-embedding.hubert-xl | asahi417 | "2024-06-14T01:51:16Z" | 9,701 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-12T06:48:50Z" | ---
dataset_info:
- config_name: subset_1
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10876520133
num_examples: 2073
download_size: 10908762452
dataset_size: 10876520133
- config_name: subset_10
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9674569297
num_examples: 1961
download_size: 9700306271
dataset_size: 9674569297
- config_name: subset_100
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9422313471
num_examples: 1757
download_size: 9447085440
dataset_size: 9422313471
- config_name: subset_101
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9998168326
num_examples: 1873
download_size: 10027347383
dataset_size: 9998168326
- config_name: subset_102
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10287499716
num_examples: 1868
download_size: 10317718412
dataset_size: 10287499716
- config_name: subset_103
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10324121806
num_examples: 1879
download_size: 10354352259
dataset_size: 10324121806
- config_name: subset_104
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10263173609
num_examples: 1901
download_size: 10293587612
dataset_size: 10263173609
- config_name: subset_105
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10125643360
num_examples: 1875
download_size: 10152113436
dataset_size: 10125643360
- config_name: subset_106
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10184641498
num_examples: 1880
download_size: 10213159494
dataset_size: 10184641498
- config_name: subset_107
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9945312725
num_examples: 1854
download_size: 9974410300
dataset_size: 9945312725
- config_name: subset_108
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10122729548
num_examples: 1834
download_size: 10152878773
dataset_size: 10122729548
- config_name: subset_109
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9646581786
num_examples: 1770
download_size: 9675397019
dataset_size: 9646581786
- config_name: subset_11
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8736765067
num_examples: 1779
download_size: 8761578004
dataset_size: 8736765067
- config_name: subset_110
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10410535331
num_examples: 1908
download_size: 10439335513
dataset_size: 10410535331
- config_name: subset_111
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10188356145
num_examples: 1877
download_size: 10218696271
dataset_size: 10188356145
- config_name: subset_112
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10485541758
num_examples: 1924
download_size: 10513113708
dataset_size: 10485541758
- config_name: subset_113
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10386567011
num_examples: 1930
download_size: 10417054414
dataset_size: 10386567011
- config_name: subset_114
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10591092324
num_examples: 1940
download_size: 10619534397
dataset_size: 10591092324
- config_name: subset_115
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10341488955
num_examples: 1902
download_size: 10371862024
dataset_size: 10341488955
- config_name: subset_116
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10529719750
num_examples: 1910
download_size: 10558882034
dataset_size: 10529719750
- config_name: subset_117
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10297417332
num_examples: 1901
download_size: 10327810400
dataset_size: 10297417332
- config_name: subset_118
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10533890733
num_examples: 1911
download_size: 10565451687
dataset_size: 10533890733
- config_name: subset_119
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10245323889
num_examples: 1867
download_size: 10275576648
dataset_size: 10245323889
- config_name: subset_12
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9731662335
num_examples: 1916
download_size: 9759429233
dataset_size: 9731662335
- config_name: subset_120
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9712138541
num_examples: 1774
download_size: 9737568085
dataset_size: 9712138541
- config_name: subset_121
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10474507472
num_examples: 1895
download_size: 10504742139
dataset_size: 10474507472
- config_name: subset_122
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10118201359
num_examples: 1851
download_size: 10145835390
dataset_size: 10118201359
- config_name: subset_123
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10668996219
num_examples: 1923
download_size: 10699951985
dataset_size: 10668996219
- config_name: subset_124
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10403769859
num_examples: 1886
download_size: 10429558449
dataset_size: 10403769859
- config_name: subset_125
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10522465607
num_examples: 1928
download_size: 10554133951
dataset_size: 10522465607
- config_name: subset_126
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10240079911
num_examples: 1903
download_size: 10269077911
dataset_size: 10240079911
- config_name: subset_127
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10500522515
num_examples: 1902
download_size: 10532042696
dataset_size: 10500522515
- config_name: subset_128
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10252894005
num_examples: 1890
download_size: 10281784120
dataset_size: 10252894005
- config_name: subset_129
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9663474139
num_examples: 1752
download_size: 9690866335
dataset_size: 9663474139
- config_name: subset_13
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8794624469
num_examples: 1769
download_size: 8820465273
dataset_size: 8794624469
- config_name: subset_130
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10080425471
num_examples: 1830
download_size: 10110566138
dataset_size: 10080425471
- config_name: subset_131
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10401094794
num_examples: 1882
download_size: 10429416473
dataset_size: 10401094794
- config_name: subset_132
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10457446364
num_examples: 1918
download_size: 10485865817
dataset_size: 10457446364
- config_name: subset_133
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10171821729
num_examples: 1886
download_size: 10202198422
dataset_size: 10171821729
- config_name: subset_134
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10462308565
num_examples: 1912
download_size: 10492670895
dataset_size: 10462308565
- config_name: subset_135
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10390714049
num_examples: 1888
download_size: 10420979533
dataset_size: 10390714049
- config_name: subset_136
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10277513585
num_examples: 1875
download_size: 10307787783
dataset_size: 10277513585
- config_name: subset_137
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10243012984
num_examples: 1866
download_size: 10273259941
dataset_size: 10243012984
- config_name: subset_138
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10196538053
num_examples: 1863
download_size: 10226765925
dataset_size: 10196538053
- config_name: subset_139
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10173792264
num_examples: 1859
download_size: 10204033126
dataset_size: 10173792264
- config_name: subset_14
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8664933141
num_examples: 1734
download_size: 8691667368
dataset_size: 8664933141
- config_name: subset_140
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9810713416
num_examples: 1766
download_size: 9838770886
dataset_size: 9810713416
- config_name: subset_141
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10401318825
num_examples: 1865
download_size: 10431447945
dataset_size: 10401318825
- config_name: subset_142
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10331604042
num_examples: 1893
download_size: 10361931781
dataset_size: 10331604042
- config_name: subset_143
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10355333367
num_examples: 1894
download_size: 10385663684
dataset_size: 10355333367
- config_name: subset_144
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7742492099
num_examples: 1381
download_size: 7765334663
dataset_size: 7742492099
- config_name: subset_15
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9579730430
num_examples: 1914
download_size: 9608255090
dataset_size: 9579730430
- config_name: subset_16
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9104014026
num_examples: 1862
download_size: 9132206904
dataset_size: 9104014026
- config_name: subset_17
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9482806827
num_examples: 1875
download_size: 9511062893
dataset_size: 9482806827
- config_name: subset_18
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9841358654
num_examples: 1937
download_size: 9870990138
dataset_size: 9841358654
- config_name: subset_19
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9511847926
num_examples: 1917
download_size: 9541482048
dataset_size: 9511847926
- config_name: subset_2
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9905653849
num_examples: 1929
download_size: 9935188764
dataset_size: 9905653849
- config_name: subset_20
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9407924858
num_examples: 1877
download_size: 9436227201
dataset_size: 9407924858
- config_name: subset_21
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9098451395
num_examples: 1761
download_size: 9126314203
dataset_size: 9098451395
- config_name: subset_22
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9250656777
num_examples: 1850
download_size: 9277818541
dataset_size: 9250656777
- config_name: subset_23
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9057374661
num_examples: 1790
download_size: 9085397757
dataset_size: 9057374661
- config_name: subset_24
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8914837743
num_examples: 1758
download_size: 8941549659
dataset_size: 8914837743
- config_name: subset_25
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9914198268
num_examples: 1898
download_size: 9941362425
dataset_size: 9914198268
- config_name: subset_26
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9711988405
num_examples: 1943
download_size: 9741716068
dataset_size: 9711988405
- config_name: subset_27
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9604785951
num_examples: 1903
download_size: 9634373630
dataset_size: 9604785951
- config_name: subset_28
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9737938983
num_examples: 1912
download_size: 9767484883
dataset_size: 9737938983
- config_name: subset_29
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9784631511
num_examples: 1945
download_size: 9811517276
dataset_size: 9784631511
- config_name: subset_3
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9540786756
num_examples: 1899
download_size: 9570365681
dataset_size: 9540786756
- config_name: subset_30
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9809407640
num_examples: 1902
download_size: 9838834622
dataset_size: 9809407640
- config_name: subset_31
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9352779642
num_examples: 1805
download_size: 9380734578
dataset_size: 9352779642
- config_name: subset_32
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9299494462
num_examples: 1797
download_size: 9326535503
dataset_size: 9299494462
- config_name: subset_33
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9079133566
num_examples: 1757
download_size: 9106984613
dataset_size: 9079133566
- config_name: subset_34
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9712189291
num_examples: 1893
download_size: 9739807425
dataset_size: 9712189291
- config_name: subset_35
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9839600386
num_examples: 1928
download_size: 9869138790
dataset_size: 9839600386
- config_name: subset_36
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9606360855
num_examples: 1863
download_size: 9635729919
dataset_size: 9606360855
- config_name: subset_37
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9642330669
num_examples: 1855
download_size: 9670222801
dataset_size: 9642330669
- config_name: subset_38
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9834140258
num_examples: 1890
download_size: 9863506751
dataset_size: 9834140258
- config_name: subset_39
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9725112494
num_examples: 1899
download_size: 9754592391
dataset_size: 9725112494
- config_name: subset_4
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8976885342
num_examples: 1835
download_size: 9002538999
dataset_size: 8976885342
- config_name: subset_40
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10000473788
num_examples: 1931
download_size: 10029772079
dataset_size: 10000473788
- config_name: subset_41
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9334915645
num_examples: 1784
download_size: 9362744529
dataset_size: 9334915645
- config_name: subset_42
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9333912380
num_examples: 1797
download_size: 9361822687
dataset_size: 9333912380
- config_name: subset_43
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9166016124
num_examples: 1757
download_size: 9189912083
dataset_size: 9166016124
- config_name: subset_44
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9649379352
num_examples: 1831
download_size: 9678549281
dataset_size: 9649379352
- config_name: subset_45
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9696258598
num_examples: 1891
download_size: 9725722759
dataset_size: 9696258598
- config_name: subset_46
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9959981112
num_examples: 1897
download_size: 9989307992
dataset_size: 9959981112
- config_name: subset_47
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10007747026
num_examples: 1897
download_size: 10038312177
dataset_size: 10007747026
- config_name: subset_48
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10007788444
num_examples: 1902
download_size: 10038354205
dataset_size: 10007788444
- config_name: subset_49
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9917511575
num_examples: 1875
download_size: 9941157796
dataset_size: 9917511575
- config_name: subset_5
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9820733471
num_examples: 1987
download_size: 9850269724
dataset_size: 9820733471
- config_name: subset_50
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10134544844
num_examples: 1951
download_size: 10165322891
dataset_size: 10134544844
- config_name: subset_51
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9386655137
num_examples: 1752
download_size: 9414301949
dataset_size: 9386655137
- config_name: subset_52
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9446951589
num_examples: 1780
download_size: 9474700273
dataset_size: 9446951589
- config_name: subset_53
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9777957117
num_examples: 1846
download_size: 9807128019
dataset_size: 9777957117
- config_name: subset_54
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9102655130
num_examples: 1723
download_size: 9129263935
dataset_size: 9102655130
- config_name: subset_55
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9780858901
num_examples: 1866
download_size: 9810124485
dataset_size: 9780858901
- config_name: subset_56
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10086702415
num_examples: 1893
download_size: 10117190006
dataset_size: 10086702415
- config_name: subset_57
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10109788587
num_examples: 1924
download_size: 10137705027
dataset_size: 10109788587
- config_name: subset_58
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9853462061
num_examples: 1881
download_size: 9882384601
dataset_size: 9853462061
- config_name: subset_59
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9883115206
num_examples: 1887
download_size: 9912433454
dataset_size: 9883115206
- config_name: subset_6
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9008892954
num_examples: 1810
download_size: 9037072334
dataset_size: 9008892954
- config_name: subset_60
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10053484869
num_examples: 1909
download_size: 10084064374
dataset_size: 10053484869
- config_name: subset_61
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9219503176
num_examples: 1728
download_size: 9246364553
dataset_size: 9219503176
- config_name: subset_62
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9375723434
num_examples: 1787
download_size: 9401019242
dataset_size: 9375723434
- config_name: subset_63
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9567750688
num_examples: 1790
download_size: 9596745445
dataset_size: 9567750688
- config_name: subset_64
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9639191254
num_examples: 1812
download_size: 9668262467
dataset_size: 9639191254
- config_name: subset_65
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10010824960
num_examples: 1877
download_size: 10041256612
dataset_size: 10010824960
- config_name: subset_66
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10044736643
num_examples: 1890
download_size: 10075237919
dataset_size: 10044736643
- config_name: subset_67
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9996933459
num_examples: 1873
download_size: 10026116707
dataset_size: 9996933459
- config_name: subset_68
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10095822332
num_examples: 1883
download_size: 10126245840
dataset_size: 10095822332
- config_name: subset_69
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10268303934
num_examples: 1916
download_size: 10298810059
dataset_size: 10268303934
- config_name: subset_7
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9097360602
num_examples: 1832
download_size: 9122322351
dataset_size: 9097360602
- config_name: subset_70
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10120152697
num_examples: 1903
download_size: 10150083596
dataset_size: 10120152697
- config_name: subset_71
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9284035527
num_examples: 1736
download_size: 9311653969
dataset_size: 9284035527
- config_name: subset_72
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10250178033
num_examples: 1887
download_size: 10280517538
dataset_size: 10250178033
- config_name: subset_73
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9222731957
num_examples: 1736
download_size: 9249882756
dataset_size: 9222731957
- config_name: subset_74
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9818296986
num_examples: 1829
download_size: 9847340045
dataset_size: 9818296986
- config_name: subset_75
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10009491493
num_examples: 1862
download_size: 10039851706
dataset_size: 10009491493
- config_name: subset_76
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10168022034
num_examples: 1914
download_size: 10196509154
dataset_size: 10168022034
- config_name: subset_77
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10076591228
num_examples: 1874
download_size: 10106985644
dataset_size: 10076591228
- config_name: subset_78
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10027538901
num_examples: 1871
download_size: 10057947606
dataset_size: 10027538901
- config_name: subset_79
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10094618912
num_examples: 1891
download_size: 10125094585
dataset_size: 10094618912
- config_name: subset_8
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9933535024
num_examples: 2009
download_size: 9963487174
dataset_size: 9933535024
- config_name: subset_80
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9926663871
num_examples: 1885
download_size: 9955941714
dataset_size: 9926663871
- config_name: subset_81
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10328392928
num_examples: 1913
download_size: 10358834523
dataset_size: 10328392928
- config_name: subset_82
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10325523231
num_examples: 1910
download_size: 10355953098
dataset_size: 10325523231
- config_name: subset_83
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10139356186
num_examples: 1887
download_size: 10169781380
dataset_size: 10139356186
- config_name: subset_84
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10033769422
num_examples: 1867
download_size: 10064131883
dataset_size: 10033769422
- config_name: subset_85
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10014868133
num_examples: 1881
download_size: 10045337903
dataset_size: 10014868133
- config_name: subset_86
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10002691939
num_examples: 1862
download_size: 10029974377
dataset_size: 10002691939
- config_name: subset_87
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10137271516
num_examples: 1897
download_size: 10166450449
dataset_size: 10137271516
- config_name: subset_88
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10210468115
num_examples: 1900
download_size: 10240900582
dataset_size: 10210468115
- config_name: subset_89
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10209211677
num_examples: 1886
download_size: 10239579862
dataset_size: 10209211677
- config_name: subset_9
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9715767539
num_examples: 1977
download_size: 9745666870
dataset_size: 9715767539
- config_name: subset_90
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10196598544
num_examples: 1913
download_size: 10227130013
dataset_size: 10196598544
- config_name: subset_91
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10267980939
num_examples: 1913
download_size: 10298448201
dataset_size: 10267980939
- config_name: subset_92
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10023876490
num_examples: 1886
download_size: 10054355442
dataset_size: 10023876490
- config_name: subset_93
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10193300693
num_examples: 1875
download_size: 10223629532
dataset_size: 10193300693
- config_name: subset_94
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10231732730
num_examples: 1900
download_size: 10262173245
dataset_size: 10231732730
- config_name: subset_95
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10013150375
num_examples: 1867
download_size: 10043533360
dataset_size: 10013150375
- config_name: subset_96
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10309586781
num_examples: 1900
download_size: 10339981006
dataset_size: 10309586781
- config_name: subset_97
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10319142937
num_examples: 1899
download_size: 10349514733
dataset_size: 10319142937
- config_name: subset_98
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10316858013
num_examples: 1904
download_size: 10347258013
dataset_size: 10316858013
- config_name: subset_99
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10377469541
num_examples: 1901
download_size: 10407826150
dataset_size: 10377469541
configs:
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
asahi417/seamless-align-enA-zhA.speaker-embedding.xlsr-2b | asahi417 | "2024-06-17T08:52:20Z" | 9,690 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-14T10:18:38Z" | ---
dataset_info:
- config_name: subset_1
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14209259131
num_examples: 1962
download_size: 14256120203
dataset_size: 14209259131
- config_name: subset_10
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13574781625
num_examples: 2031
download_size: 13621966757
dataset_size: 13574781625
- config_name: subset_100
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13223964877
num_examples: 1891
download_size: 13269307182
dataset_size: 13223964877
- config_name: subset_101
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13038203739
num_examples: 1885
download_size: 13083404216
dataset_size: 13038203739
- config_name: subset_102
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12731679458
num_examples: 1863
download_size: 12775688644
dataset_size: 12731679458
- config_name: subset_103
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12967209285
num_examples: 1861
download_size: 13011071076
dataset_size: 12967209285
- config_name: subset_104
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12798692606
num_examples: 1875
download_size: 12842795816
dataset_size: 12798692606
- config_name: subset_105
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13127114114
num_examples: 1871
download_size: 13172271401
dataset_size: 13127114114
- config_name: subset_106
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12426801586
num_examples: 1865
download_size: 12469421998
dataset_size: 12426801586
- config_name: subset_107
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12484775174
num_examples: 1838
download_size: 12527398592
dataset_size: 12484775174
- config_name: subset_108
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13018346253
num_examples: 1860
download_size: 13063301347
dataset_size: 13018346253
- config_name: subset_109
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12986696298
num_examples: 1866
download_size: 13030608940
dataset_size: 12986696298
- config_name: subset_11
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13038519979
num_examples: 1994
download_size: 13084550040
dataset_size: 13038519979
- config_name: subset_110
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12537003686
num_examples: 1843
download_size: 12580875152
dataset_size: 12537003686
- config_name: subset_111
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12920543044
num_examples: 1845
download_size: 12964231904
dataset_size: 12920543044
- config_name: subset_112
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12666264009
num_examples: 1844
download_size: 12709732284
dataset_size: 12666264009
- config_name: subset_113
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12572103874
num_examples: 1839
download_size: 12615926245
dataset_size: 12572103874
- config_name: subset_114
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12918422777
num_examples: 1851
download_size: 12960836861
dataset_size: 12918422777
- config_name: subset_115
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12832082885
num_examples: 1821
download_size: 12875679807
dataset_size: 12832082885
- config_name: subset_116
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12805128711
num_examples: 1837
download_size: 12848847004
dataset_size: 12805128711
- config_name: subset_117
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12914312061
num_examples: 1854
download_size: 12957416120
dataset_size: 12914312061
- config_name: subset_118
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12536340519
num_examples: 1814
download_size: 12579845649
dataset_size: 12536340519
- config_name: subset_119
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12663849536
num_examples: 1823
download_size: 12706307005
dataset_size: 12663849536
- config_name: subset_12
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13345506634
num_examples: 2034
download_size: 13391815198
dataset_size: 13345506634
- config_name: subset_120
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12540967981
num_examples: 1835
download_size: 12584734529
dataset_size: 12540967981
- config_name: subset_121
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12881361209
num_examples: 1832
download_size: 12925049223
dataset_size: 12881361209
- config_name: subset_122
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12693240892
num_examples: 1824
download_size: 12736912334
dataset_size: 12693240892
- config_name: subset_123
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12762978970
num_examples: 1800
download_size: 12804981254
dataset_size: 12762978970
- config_name: subset_124
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13001669496
num_examples: 1830
download_size: 13045674231
dataset_size: 13001669496
- config_name: subset_125
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12993926322
num_examples: 1858
download_size: 13037516573
dataset_size: 12993926322
- config_name: subset_126
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13297930843
num_examples: 1888
download_size: 13343186712
dataset_size: 13297930843
- config_name: subset_127
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12751048394
num_examples: 1833
download_size: 12794759892
dataset_size: 12751048394
- config_name: subset_128
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13141818374
num_examples: 1835
download_size: 13185797931
dataset_size: 13141818374
- config_name: subset_129
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13304123905
num_examples: 1885
download_size: 13349335826
dataset_size: 13304123905
- config_name: subset_13
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13296351131
num_examples: 2021
download_size: 13342470698
dataset_size: 13296351131
- config_name: subset_130
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12913357124
num_examples: 1828
download_size: 12956967608
dataset_size: 12913357124
- config_name: subset_131
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12861389006
num_examples: 1813
download_size: 12904807434
dataset_size: 12861389006
- config_name: subset_132
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13234681867
num_examples: 1864
download_size: 13279044682
dataset_size: 13234681867
- config_name: subset_133
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12962105674
num_examples: 1844
download_size: 13005284694
dataset_size: 12962105674
- config_name: subset_134
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12743586667
num_examples: 1826
download_size: 12787256702
dataset_size: 12743586667
- config_name: subset_135
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13166439361
num_examples: 1853
download_size: 13211052569
dataset_size: 13166439361
- config_name: subset_136
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13099690718
num_examples: 1881
download_size: 13144846389
dataset_size: 13099690718
- config_name: subset_137
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12741551269
num_examples: 1837
download_size: 12785267794
dataset_size: 12741551269
- config_name: subset_138
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13205037463
num_examples: 1869
download_size: 13250209887
dataset_size: 13205037463
- config_name: subset_139
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12997304962
num_examples: 1830
download_size: 13040947110
dataset_size: 12997304962
- config_name: subset_14
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13147826343
num_examples: 2012
download_size: 13193978395
dataset_size: 13147826343
- config_name: subset_140
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12925120270
num_examples: 1815
download_size: 12968597687
dataset_size: 12925120270
- config_name: subset_141
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12720671970
num_examples: 1814
download_size: 12764228297
dataset_size: 12720671970
- config_name: subset_142
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13068068628
num_examples: 1851
download_size: 13111428243
dataset_size: 13068068628
- config_name: subset_143
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12707538206
num_examples: 1792
download_size: 12750667442
dataset_size: 12707538206
- config_name: subset_144
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13032830257
num_examples: 1856
download_size: 13077677610
dataset_size: 13032830257
- config_name: subset_145
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13125760032
num_examples: 1850
download_size: 13169372344
dataset_size: 13125760032
- config_name: subset_146
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13267575660
num_examples: 1847
download_size: 13312015745
dataset_size: 13267575660
- config_name: subset_147
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13027789155
num_examples: 1851
download_size: 13072382865
dataset_size: 13027789155
- config_name: subset_148
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13035403722
num_examples: 1848
download_size: 13079919602
dataset_size: 13035403722
- config_name: subset_149
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12973733702
num_examples: 1869
download_size: 13017673308
dataset_size: 12973733702
- config_name: subset_15
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13088705529
num_examples: 2010
download_size: 13134907161
dataset_size: 13088705529
- config_name: subset_150
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13483674365
num_examples: 1866
download_size: 13528056417
dataset_size: 13483674365
- config_name: subset_151
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13181178047
num_examples: 1862
download_size: 13225576965
dataset_size: 13181178047
- config_name: subset_152
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12941487991
num_examples: 1825
download_size: 12985061862
dataset_size: 12941487991
- config_name: subset_153
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13368213999
num_examples: 1859
download_size: 13412797003
dataset_size: 13368213999
- config_name: subset_154
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13297990214
num_examples: 1862
download_size: 13342836529
dataset_size: 13297990214
- config_name: subset_155
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13054204743
num_examples: 1827
download_size: 13097552879
dataset_size: 13054204743
- config_name: subset_156
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13062749902
num_examples: 1834
download_size: 13106716484
dataset_size: 13062749902
- config_name: subset_157
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13387093767
num_examples: 1861
download_size: 13431728772
dataset_size: 13387093767
- config_name: subset_158
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12880058650
num_examples: 1783
download_size: 12922917433
dataset_size: 12880058650
- config_name: subset_159
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11881220327
num_examples: 1654
download_size: 11920259054
dataset_size: 11881220327
- config_name: subset_16
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12462625538
num_examples: 1974
download_size: 12506159941
dataset_size: 12462625538
- config_name: subset_160
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13238783195
num_examples: 1841
download_size: 13282707085
dataset_size: 13238783195
- config_name: subset_161
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13020419071
num_examples: 1838
download_size: 13064546764
dataset_size: 13020419071
- config_name: subset_162
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12977261277
num_examples: 1783
download_size: 13020115761
dataset_size: 12977261277
- config_name: subset_163
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13154440690
num_examples: 1805
download_size: 13196753466
dataset_size: 13154440690
- config_name: subset_164
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13221774010
num_examples: 1785
download_size: 13265348083
dataset_size: 13221774010
- config_name: subset_165
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12890453538
num_examples: 1764
download_size: 12932702421
dataset_size: 12890453538
- config_name: subset_166
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13238731446
num_examples: 1865
download_size: 13283761551
dataset_size: 13238731446
- config_name: subset_167
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13116757486
num_examples: 1794
download_size: 13160391058
dataset_size: 13116757486
- config_name: subset_168
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13507948525
num_examples: 1871
download_size: 13553344138
dataset_size: 13507948525
- config_name: subset_169
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13281798625
num_examples: 1845
download_size: 13326150167
dataset_size: 13281798625
- config_name: subset_17
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12530147436
num_examples: 2005
download_size: 12574908317
dataset_size: 12530147436
- config_name: subset_170
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13529725455
num_examples: 1877
download_size: 13575090508
dataset_size: 13529725455
- config_name: subset_171
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13437938918
num_examples: 1863
download_size: 13482733557
dataset_size: 13437938918
- config_name: subset_172
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13501430052
num_examples: 1841
download_size: 13546564951
dataset_size: 13501430052
- config_name: subset_173
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13150686556
num_examples: 1846
download_size: 13195014459
dataset_size: 13150686556
- config_name: subset_174
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13175889059
num_examples: 1833
download_size: 13218163076
dataset_size: 13175889059
- config_name: subset_175
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13102902135
num_examples: 1781
download_size: 13146424916
dataset_size: 13102902135
- config_name: subset_176
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13113639029
num_examples: 1809
download_size: 13156636320
dataset_size: 13113639029
- config_name: subset_177
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12925639837
num_examples: 1779
download_size: 12968379518
dataset_size: 12925639837
- config_name: subset_178
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13408612490
num_examples: 1810
download_size: 13450580598
dataset_size: 13408612490
- config_name: subset_179
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13385760100
num_examples: 1808
download_size: 13429460828
dataset_size: 13385760100
- config_name: subset_18
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12199898588
num_examples: 1933
download_size: 12243282500
dataset_size: 12199898588
- config_name: subset_180
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13530721722
num_examples: 1844
download_size: 13575900699
dataset_size: 13530721722
- config_name: subset_181
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13329843658
num_examples: 1808
download_size: 13373550290
dataset_size: 13329843658
- config_name: subset_182
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13421206325
num_examples: 1807
download_size: 13463752732
dataset_size: 13421206325
- config_name: subset_183
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13293489170
num_examples: 1790
download_size: 13336547592
dataset_size: 13293489170
- config_name: subset_184
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13170304190
num_examples: 1756
download_size: 13212477470
dataset_size: 13170304190
- config_name: subset_185
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13090231354
num_examples: 1782
download_size: 13133751213
dataset_size: 13090231354
- config_name: subset_186
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13202410818
num_examples: 1791
download_size: 13246025198
dataset_size: 13202410818
- config_name: subset_187
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13117901283
num_examples: 1780
download_size: 13161452953
dataset_size: 13117901283
- config_name: subset_188
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13040538960
num_examples: 1783
download_size: 13083376359
dataset_size: 13040538960
- config_name: subset_189
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13439617718
num_examples: 1814
download_size: 13483108790
dataset_size: 13439617718
- config_name: subset_19
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12564662092
num_examples: 1983
download_size: 12608495037
dataset_size: 12564662092
- config_name: subset_190
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13250790159
num_examples: 1802
download_size: 13294475053
dataset_size: 13250790159
- config_name: subset_191
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13576629500
num_examples: 1794
download_size: 13621403290
dataset_size: 13576629500
- config_name: subset_192
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13534614314
num_examples: 1805
download_size: 13579424803
dataset_size: 13534614314
- config_name: subset_193
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13205308518
num_examples: 1801
download_size: 13249015229
dataset_size: 13205308518
- config_name: subset_194
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13173748624
num_examples: 1781
download_size: 13217312510
dataset_size: 13173748624
- config_name: subset_195
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13262230923
num_examples: 1793
download_size: 13305822959
dataset_size: 13262230923
- config_name: subset_196
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13442030054
num_examples: 1808
download_size: 13485707566
dataset_size: 13442030054
- config_name: subset_197
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13279014406
num_examples: 1813
download_size: 13322778547
dataset_size: 13279014406
- config_name: subset_198
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13372535801
num_examples: 1806
download_size: 13416177685
dataset_size: 13372535801
- config_name: subset_199
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13260187374
num_examples: 1798
download_size: 13303786873
dataset_size: 13260187374
- config_name: subset_2
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14824893136
num_examples: 2052
download_size: 14872062544
dataset_size: 14824893136
- config_name: subset_20
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12469186480
num_examples: 1959
download_size: 12512664696
dataset_size: 12469186480
- config_name: subset_200
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13405377823
num_examples: 1814
download_size: 13449064998
dataset_size: 13405377823
- config_name: subset_201
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12857648840
num_examples: 1743
download_size: 12899723972
dataset_size: 12857648840
- config_name: subset_202
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13875371606
num_examples: 1837
download_size: 13920369985
dataset_size: 13875371606
- config_name: subset_203
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13399467921
num_examples: 1802
download_size: 13443037100
dataset_size: 13399467921
- config_name: subset_204
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13199244957
num_examples: 1790
download_size: 13242850615
dataset_size: 13199244957
- config_name: subset_205
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13522066316
num_examples: 1805
download_size: 13566902309
dataset_size: 13522066316
- config_name: subset_206
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13358642027
num_examples: 1793
download_size: 13402228636
dataset_size: 13358642027
- config_name: subset_207
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13117565020
num_examples: 1803
download_size: 13160834483
dataset_size: 13117565020
- config_name: subset_208
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13382606765
num_examples: 1772
download_size: 13425953489
dataset_size: 13382606765
- config_name: subset_209
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12974477742
num_examples: 1760
download_size: 13016667136
dataset_size: 12974477742
- config_name: subset_21
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12085360980
num_examples: 1934
download_size: 12128838861
dataset_size: 12085360980
- config_name: subset_210
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13476138379
num_examples: 1796
download_size: 13519239424
dataset_size: 13476138379
- config_name: subset_211
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13419976632
num_examples: 1804
download_size: 13463566063
dataset_size: 13419976632
- config_name: subset_212
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13480363370
num_examples: 1782
download_size: 13523820554
dataset_size: 13480363370
- config_name: subset_213
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13318991720
num_examples: 1782
download_size: 13362474010
dataset_size: 13318991720
- config_name: subset_214
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13172359156
num_examples: 1797
download_size: 13214875172
dataset_size: 13172359156
- config_name: subset_215
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13280189404
num_examples: 1805
download_size: 13323825654
dataset_size: 13280189404
- config_name: subset_216
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13228612682
num_examples: 1784
download_size: 13272174030
dataset_size: 13228612682
- config_name: subset_217
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13301788292
num_examples: 1795
download_size: 13344755106
dataset_size: 13301788292
- config_name: subset_218
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13444033873
num_examples: 1787
download_size: 13486618579
dataset_size: 13444033873
- config_name: subset_219
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13385006271
num_examples: 1798
download_size: 13428624694
dataset_size: 13385006271
- config_name: subset_22
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12068647069
num_examples: 1919
download_size: 12112052242
dataset_size: 12068647069
- config_name: subset_220
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13387979539
num_examples: 1793
download_size: 13431556808
dataset_size: 13387979539
- config_name: subset_221
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13184366878
num_examples: 1768
download_size: 13227780561
dataset_size: 13184366878
- config_name: subset_222
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13019967994
num_examples: 1771
download_size: 13063453259
dataset_size: 13019967994
- config_name: subset_223
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13009363028
num_examples: 1758
download_size: 13052746979
dataset_size: 13009363028
- config_name: subset_224
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13653747955
num_examples: 1816
download_size: 13698603852
dataset_size: 13653747955
- config_name: subset_225
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13118240446
num_examples: 1793
download_size: 13161929983
dataset_size: 13118240446
- config_name: subset_226
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13687749995
num_examples: 1820
download_size: 13732610680
dataset_size: 13687749995
- config_name: subset_227
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13607917615
num_examples: 1780
download_size: 13652544829
dataset_size: 13607917615
- config_name: subset_228
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13342866303
num_examples: 1787
download_size: 13386381015
dataset_size: 13342866303
- config_name: subset_229
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13310021752
num_examples: 1749
download_size: 13353269100
dataset_size: 13310021752
- config_name: subset_23
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12351336675
num_examples: 1955
download_size: 12394781249
dataset_size: 12351336675
- config_name: subset_230
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13841783584
num_examples: 1822
download_size: 13886657543
dataset_size: 13841783584
- config_name: subset_231
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13716496527
num_examples: 1829
download_size: 13761486198
dataset_size: 13716496527
- config_name: subset_232
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13468291883
num_examples: 1783
download_size: 13510236937
dataset_size: 13468291883
- config_name: subset_233
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13380265372
num_examples: 1800
download_size: 13423849283
dataset_size: 13380265372
- config_name: subset_234
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13713921272
num_examples: 1815
download_size: 13758828731
dataset_size: 13713921272
- config_name: subset_235
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13514220908
num_examples: 1804
download_size: 13559019032
dataset_size: 13514220908
- config_name: subset_236
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13494111624
num_examples: 1801
download_size: 13537700105
dataset_size: 13494111624
- config_name: subset_237
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13544863571
num_examples: 1790
download_size: 13589626744
dataset_size: 13544863571
- config_name: subset_238
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13554393425
num_examples: 1811
download_size: 13597849508
dataset_size: 13554393425
- config_name: subset_239
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13546762388
num_examples: 1801
download_size: 13591616587
dataset_size: 13546762388
- config_name: subset_24
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12065934022
num_examples: 1912
download_size: 12109149622
dataset_size: 12065934022
- config_name: subset_240
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13196715201
num_examples: 1770
download_size: 13240155032
dataset_size: 13196715201
- config_name: subset_241
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13659450783
num_examples: 1832
download_size: 13704113396
dataset_size: 13659450783
- config_name: subset_242
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13246292263
num_examples: 1769
download_size: 13289711363
dataset_size: 13246292263
- config_name: subset_243
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13566094535
num_examples: 1791
download_size: 13610689089
dataset_size: 13566094535
- config_name: subset_244
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13087093994
num_examples: 1754
download_size: 13130462985
dataset_size: 13087093994
- config_name: subset_245
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13606181830
num_examples: 1790
download_size: 13650688469
dataset_size: 13606181830
- config_name: subset_246
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13477021726
num_examples: 1793
download_size: 13520536644
dataset_size: 13477021726
- config_name: subset_247
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13530919547
num_examples: 1820
download_size: 13575873477
dataset_size: 13530919547
- config_name: subset_248
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13076084830
num_examples: 1779
download_size: 13118512234
dataset_size: 13076084830
- config_name: subset_249
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13529657489
num_examples: 1797
download_size: 13574453907
dataset_size: 13529657489
- config_name: subset_25
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12305898739
num_examples: 1925
download_size: 12349158665
dataset_size: 12305898739
- config_name: subset_250
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13381525728
num_examples: 1802
download_size: 13425146039
dataset_size: 13381525728
- config_name: subset_251
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13822740954
num_examples: 1801
download_size: 13866366309
dataset_size: 13822740954
- config_name: subset_252
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13673963248
num_examples: 1800
download_size: 13718685655
dataset_size: 13673963248
- config_name: subset_253
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13835058700
num_examples: 1805
download_size: 13879166806
dataset_size: 13835058700
- config_name: subset_254
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13383712853
num_examples: 1767
download_size: 13426540477
dataset_size: 13383712853
- config_name: subset_255
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13560805607
num_examples: 1763
download_size: 13604948210
dataset_size: 13560805607
- config_name: subset_256
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13237669958
num_examples: 1758
download_size: 13281016253
dataset_size: 13237669958
- config_name: subset_257
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13923624660
num_examples: 1807
download_size: 13968391517
dataset_size: 13923624660
- config_name: subset_258
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11361216368
num_examples: 1503
download_size: 11398068945
dataset_size: 11361216368
- config_name: subset_26
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11342909197
num_examples: 1832
download_size: 11383158933
dataset_size: 11342909197
- config_name: subset_27
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11602861316
num_examples: 1862
download_size: 11643598489
dataset_size: 11602861316
- config_name: subset_28
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11423759992
num_examples: 1829
download_size: 11464108953
dataset_size: 11423759992
- config_name: subset_29
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11317928199
num_examples: 1828
download_size: 11358232335
dataset_size: 11317928199
- config_name: subset_3
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15009073368
num_examples: 2081
download_size: 15059069139
dataset_size: 15009073368
- config_name: subset_30
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11340440138
num_examples: 1801
download_size: 11380580568
dataset_size: 11340440138
- config_name: subset_31
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11585306838
num_examples: 1904
download_size: 11626002198
dataset_size: 11585306838
- config_name: subset_32
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11839140118
num_examples: 1904
download_size: 11881221624
dataset_size: 11839140118
- config_name: subset_33
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11667822890
num_examples: 1874
download_size: 11709572131
dataset_size: 11667822890
- config_name: subset_34
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12102711256
num_examples: 1932
download_size: 12146166646
dataset_size: 12102711256
- config_name: subset_35
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12121565378
num_examples: 1902
download_size: 12164750724
dataset_size: 12121565378
- config_name: subset_36
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11813142889
num_examples: 1877
download_size: 11853807475
dataset_size: 11813142889
- config_name: subset_37
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12105640911
num_examples: 1862
download_size: 12148512935
dataset_size: 12105640911
- config_name: subset_38
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12055811433
num_examples: 1878
download_size: 12098864002
dataset_size: 12055811433
- config_name: subset_39
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12027703389
num_examples: 1879
download_size: 12070810328
dataset_size: 12027703389
- config_name: subset_4
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14871815890
num_examples: 2102
download_size: 14921781959
dataset_size: 14871815890
- config_name: subset_40
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12297307550
num_examples: 1919
download_size: 12340628479
dataset_size: 12297307550
- config_name: subset_41
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11673113363
num_examples: 1828
download_size: 11714539382
dataset_size: 11673113363
- config_name: subset_42
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12114525371
num_examples: 1884
download_size: 12157548017
dataset_size: 12114525371
- config_name: subset_43
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12072393153
num_examples: 1874
download_size: 12115455581
dataset_size: 12072393153
- config_name: subset_44
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12231430525
num_examples: 1894
download_size: 12274252942
dataset_size: 12231430525
- config_name: subset_45
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12138951543
num_examples: 1869
download_size: 12181833715
dataset_size: 12138951543
- config_name: subset_46
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12338384571
num_examples: 1899
download_size: 12381559009
dataset_size: 12338384571
- config_name: subset_47
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12595976439
num_examples: 1913
download_size: 12639701571
dataset_size: 12595976439
- config_name: subset_48
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12291667679
num_examples: 1922
download_size: 12335068918
dataset_size: 12291667679
- config_name: subset_49
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12076363347
num_examples: 1844
download_size: 12118856817
dataset_size: 12076363347
- config_name: subset_5
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14463875379
num_examples: 2045
download_size: 14511930615
dataset_size: 14463875379
- config_name: subset_50
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12201692647
num_examples: 1884
download_size: 12244065455
dataset_size: 12201692647
- config_name: subset_51
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12205237779
num_examples: 1918
download_size: 12247882809
dataset_size: 12205237779
- config_name: subset_52
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12189874310
num_examples: 1880
download_size: 12232943844
dataset_size: 12189874310
- config_name: subset_53
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12566145830
num_examples: 1897
download_size: 12609016174
dataset_size: 12566145830
- config_name: subset_54
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12446537009
num_examples: 1901
download_size: 12489664423
dataset_size: 12446537009
- config_name: subset_55
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12447865477
num_examples: 1890
download_size: 12490882006
dataset_size: 12447865477
- config_name: subset_56
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11886184932
num_examples: 1851
download_size: 11927798488
dataset_size: 11886184932
- config_name: subset_57
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12488545723
num_examples: 1904
download_size: 12531536020
dataset_size: 12488545723
- config_name: subset_58
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12062402025
num_examples: 1850
download_size: 12105197651
dataset_size: 12062402025
- config_name: subset_59
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12373068180
num_examples: 1875
download_size: 12415995259
dataset_size: 12373068180
- config_name: subset_6
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14501738118
num_examples: 2090
download_size: 14551152342
dataset_size: 14501738118
- config_name: subset_60
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12180776572
num_examples: 1881
download_size: 12223818393
dataset_size: 12180776572
- config_name: subset_61
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12048086660
num_examples: 1849
download_size: 12090892833
dataset_size: 12048086660
- config_name: subset_62
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12657808103
num_examples: 1895
download_size: 12701743869
dataset_size: 12657808103
- config_name: subset_63
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12392654565
num_examples: 1874
download_size: 12434138173
dataset_size: 12392654565
- config_name: subset_64
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12350385561
num_examples: 1882
download_size: 12393397192
dataset_size: 12350385561
- config_name: subset_65
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12373129329
num_examples: 1871
download_size: 12410450046
dataset_size: 12373129329
- config_name: subset_66
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12278015011
num_examples: 1850
download_size: 12320703540
dataset_size: 12278015011
- config_name: subset_67
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12371002905
num_examples: 1896
download_size: 12414173841
dataset_size: 12371002905
- config_name: subset_68
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12305454575
num_examples: 1850
download_size: 12348208355
dataset_size: 12305454575
- config_name: subset_69
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12411747691
num_examples: 1854
download_size: 12454448363
dataset_size: 12411747691
- config_name: subset_7
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14508087261
num_examples: 2116
download_size: 14558140635
dataset_size: 14508087261
- config_name: subset_70
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12350275460
num_examples: 1851
download_size: 12393029167
dataset_size: 12350275460
- config_name: subset_71
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12537513558
num_examples: 1876
download_size: 12581645796
dataset_size: 12537513558
- config_name: subset_72
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12490553756
num_examples: 1841
download_size: 12533178091
dataset_size: 12490553756
- config_name: subset_73
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12518388540
num_examples: 1882
download_size: 12561275853
dataset_size: 12518388540
- config_name: subset_74
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12497263818
num_examples: 1865
download_size: 12540088306
dataset_size: 12497263818
- config_name: subset_75
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12305999986
num_examples: 1848
download_size: 12348789656
dataset_size: 12305999986
- config_name: subset_76
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12750830510
num_examples: 1894
download_size: 12795055702
dataset_size: 12750830510
- config_name: subset_77
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12671852813
num_examples: 1875
download_size: 12715112660
dataset_size: 12671852813
- config_name: subset_78
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12871126714
num_examples: 1933
download_size: 12914706808
dataset_size: 12871126714
- config_name: subset_79
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12749824472
num_examples: 1900
download_size: 12793397015
dataset_size: 12749824472
- config_name: subset_8
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14210430391
num_examples: 2095
download_size: 14259459141
dataset_size: 14210430391
- config_name: subset_80
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12490567475
num_examples: 1821
download_size: 12533032118
dataset_size: 12490567475
- config_name: subset_81
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12666304824
num_examples: 1873
download_size: 12710378751
dataset_size: 12666304824
- config_name: subset_82
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12789616733
num_examples: 1862
download_size: 12833584641
dataset_size: 12789616733
- config_name: subset_83
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12338377273
num_examples: 1809
download_size: 12380811191
dataset_size: 12338377273
- config_name: subset_84
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12554741918
num_examples: 1792
download_size: 12597991658
dataset_size: 12554741918
- config_name: subset_85
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12701297528
num_examples: 1870
download_size: 12745293256
dataset_size: 12701297528
- config_name: subset_86
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12337303789
num_examples: 1838
download_size: 12378681021
dataset_size: 12337303789
- config_name: subset_87
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12515934904
num_examples: 1852
download_size: 12559866703
dataset_size: 12515934904
- config_name: subset_88
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12486460328
num_examples: 1867
download_size: 12529352258
dataset_size: 12486460328
- config_name: subset_89
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13005033357
num_examples: 1903
download_size: 13050512457
dataset_size: 13005033357
- config_name: subset_9
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13964425360
num_examples: 2073
download_size: 14011911721
dataset_size: 13964425360
- config_name: subset_90
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12508609901
num_examples: 1826
download_size: 12552399045
dataset_size: 12508609901
- config_name: subset_91
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12766878985
num_examples: 1852
download_size: 12810779213
dataset_size: 12766878985
- config_name: subset_92
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12484630250
num_examples: 1852
download_size: 12527348681
dataset_size: 12484630250
- config_name: subset_93
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12560208384
num_examples: 1830
download_size: 12602554131
dataset_size: 12560208384
- config_name: subset_94
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12563291097
num_examples: 1848
download_size: 12607156154
dataset_size: 12563291097
- config_name: subset_95
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12839362555
num_examples: 1857
download_size: 12882825153
dataset_size: 12839362555
- config_name: subset_96
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12907967612
num_examples: 1885
download_size: 12952072820
dataset_size: 12907967612
- config_name: subset_97
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12893205307
num_examples: 1869
download_size: 12935785916
dataset_size: 12893205307
- config_name: subset_98
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12554140420
num_examples: 1860
download_size: 12598098176
dataset_size: 12554140420
- config_name: subset_99
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13502104593
num_examples: 1915
download_size: 13548188642
dataset_size: 13502104593
configs:
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_145
data_files:
- split: train
path: subset_145/train-*
- config_name: subset_146
data_files:
- split: train
path: subset_146/train-*
- config_name: subset_147
data_files:
- split: train
path: subset_147/train-*
- config_name: subset_148
data_files:
- split: train
path: subset_148/train-*
- config_name: subset_149
data_files:
- split: train
path: subset_149/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_150
data_files:
- split: train
path: subset_150/train-*
- config_name: subset_151
data_files:
- split: train
path: subset_151/train-*
- config_name: subset_152
data_files:
- split: train
path: subset_152/train-*
- config_name: subset_153
data_files:
- split: train
path: subset_153/train-*
- config_name: subset_154
data_files:
- split: train
path: subset_154/train-*
- config_name: subset_155
data_files:
- split: train
path: subset_155/train-*
- config_name: subset_156
data_files:
- split: train
path: subset_156/train-*
- config_name: subset_157
data_files:
- split: train
path: subset_157/train-*
- config_name: subset_158
data_files:
- split: train
path: subset_158/train-*
- config_name: subset_159
data_files:
- split: train
path: subset_159/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_160
data_files:
- split: train
path: subset_160/train-*
- config_name: subset_161
data_files:
- split: train
path: subset_161/train-*
- config_name: subset_162
data_files:
- split: train
path: subset_162/train-*
- config_name: subset_163
data_files:
- split: train
path: subset_163/train-*
- config_name: subset_164
data_files:
- split: train
path: subset_164/train-*
- config_name: subset_165
data_files:
- split: train
path: subset_165/train-*
- config_name: subset_166
data_files:
- split: train
path: subset_166/train-*
- config_name: subset_167
data_files:
- split: train
path: subset_167/train-*
- config_name: subset_168
data_files:
- split: train
path: subset_168/train-*
- config_name: subset_169
data_files:
- split: train
path: subset_169/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_170
data_files:
- split: train
path: subset_170/train-*
- config_name: subset_171
data_files:
- split: train
path: subset_171/train-*
- config_name: subset_172
data_files:
- split: train
path: subset_172/train-*
- config_name: subset_173
data_files:
- split: train
path: subset_173/train-*
- config_name: subset_174
data_files:
- split: train
path: subset_174/train-*
- config_name: subset_175
data_files:
- split: train
path: subset_175/train-*
- config_name: subset_176
data_files:
- split: train
path: subset_176/train-*
- config_name: subset_177
data_files:
- split: train
path: subset_177/train-*
- config_name: subset_178
data_files:
- split: train
path: subset_178/train-*
- config_name: subset_179
data_files:
- split: train
path: subset_179/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_180
data_files:
- split: train
path: subset_180/train-*
- config_name: subset_181
data_files:
- split: train
path: subset_181/train-*
- config_name: subset_182
data_files:
- split: train
path: subset_182/train-*
- config_name: subset_183
data_files:
- split: train
path: subset_183/train-*
- config_name: subset_184
data_files:
- split: train
path: subset_184/train-*
- config_name: subset_185
data_files:
- split: train
path: subset_185/train-*
- config_name: subset_186
data_files:
- split: train
path: subset_186/train-*
- config_name: subset_187
data_files:
- split: train
path: subset_187/train-*
- config_name: subset_188
data_files:
- split: train
path: subset_188/train-*
- config_name: subset_189
data_files:
- split: train
path: subset_189/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_190
data_files:
- split: train
path: subset_190/train-*
- config_name: subset_191
data_files:
- split: train
path: subset_191/train-*
- config_name: subset_192
data_files:
- split: train
path: subset_192/train-*
- config_name: subset_193
data_files:
- split: train
path: subset_193/train-*
- config_name: subset_194
data_files:
- split: train
path: subset_194/train-*
- config_name: subset_195
data_files:
- split: train
path: subset_195/train-*
- config_name: subset_196
data_files:
- split: train
path: subset_196/train-*
- config_name: subset_197
data_files:
- split: train
path: subset_197/train-*
- config_name: subset_198
data_files:
- split: train
path: subset_198/train-*
- config_name: subset_199
data_files:
- split: train
path: subset_199/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_200
data_files:
- split: train
path: subset_200/train-*
- config_name: subset_201
data_files:
- split: train
path: subset_201/train-*
- config_name: subset_202
data_files:
- split: train
path: subset_202/train-*
- config_name: subset_203
data_files:
- split: train
path: subset_203/train-*
- config_name: subset_204
data_files:
- split: train
path: subset_204/train-*
- config_name: subset_205
data_files:
- split: train
path: subset_205/train-*
- config_name: subset_206
data_files:
- split: train
path: subset_206/train-*
- config_name: subset_207
data_files:
- split: train
path: subset_207/train-*
- config_name: subset_208
data_files:
- split: train
path: subset_208/train-*
- config_name: subset_209
data_files:
- split: train
path: subset_209/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_210
data_files:
- split: train
path: subset_210/train-*
- config_name: subset_211
data_files:
- split: train
path: subset_211/train-*
- config_name: subset_212
data_files:
- split: train
path: subset_212/train-*
- config_name: subset_213
data_files:
- split: train
path: subset_213/train-*
- config_name: subset_214
data_files:
- split: train
path: subset_214/train-*
- config_name: subset_215
data_files:
- split: train
path: subset_215/train-*
- config_name: subset_216
data_files:
- split: train
path: subset_216/train-*
- config_name: subset_217
data_files:
- split: train
path: subset_217/train-*
- config_name: subset_218
data_files:
- split: train
path: subset_218/train-*
- config_name: subset_219
data_files:
- split: train
path: subset_219/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_220
data_files:
- split: train
path: subset_220/train-*
- config_name: subset_221
data_files:
- split: train
path: subset_221/train-*
- config_name: subset_222
data_files:
- split: train
path: subset_222/train-*
- config_name: subset_223
data_files:
- split: train
path: subset_223/train-*
- config_name: subset_224
data_files:
- split: train
path: subset_224/train-*
- config_name: subset_225
data_files:
- split: train
path: subset_225/train-*
- config_name: subset_226
data_files:
- split: train
path: subset_226/train-*
- config_name: subset_227
data_files:
- split: train
path: subset_227/train-*
- config_name: subset_228
data_files:
- split: train
path: subset_228/train-*
- config_name: subset_229
data_files:
- split: train
path: subset_229/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_230
data_files:
- split: train
path: subset_230/train-*
- config_name: subset_231
data_files:
- split: train
path: subset_231/train-*
- config_name: subset_232
data_files:
- split: train
path: subset_232/train-*
- config_name: subset_233
data_files:
- split: train
path: subset_233/train-*
- config_name: subset_234
data_files:
- split: train
path: subset_234/train-*
- config_name: subset_235
data_files:
- split: train
path: subset_235/train-*
- config_name: subset_236
data_files:
- split: train
path: subset_236/train-*
- config_name: subset_237
data_files:
- split: train
path: subset_237/train-*
- config_name: subset_238
data_files:
- split: train
path: subset_238/train-*
- config_name: subset_239
data_files:
- split: train
path: subset_239/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_240
data_files:
- split: train
path: subset_240/train-*
- config_name: subset_241
data_files:
- split: train
path: subset_241/train-*
- config_name: subset_242
data_files:
- split: train
path: subset_242/train-*
- config_name: subset_243
data_files:
- split: train
path: subset_243/train-*
- config_name: subset_244
data_files:
- split: train
path: subset_244/train-*
- config_name: subset_245
data_files:
- split: train
path: subset_245/train-*
- config_name: subset_246
data_files:
- split: train
path: subset_246/train-*
- config_name: subset_247
data_files:
- split: train
path: subset_247/train-*
- config_name: subset_248
data_files:
- split: train
path: subset_248/train-*
- config_name: subset_249
data_files:
- split: train
path: subset_249/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_250
data_files:
- split: train
path: subset_250/train-*
- config_name: subset_251
data_files:
- split: train
path: subset_251/train-*
- config_name: subset_252
data_files:
- split: train
path: subset_252/train-*
- config_name: subset_253
data_files:
- split: train
path: subset_253/train-*
- config_name: subset_254
data_files:
- split: train
path: subset_254/train-*
- config_name: subset_255
data_files:
- split: train
path: subset_255/train-*
- config_name: subset_256
data_files:
- split: train
path: subset_256/train-*
- config_name: subset_257
data_files:
- split: train
path: subset_257/train-*
- config_name: subset_258
data_files:
- split: train
path: subset_258/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
wis-k/instruction-following-eval | wis-k | "2023-12-05T08:38:18Z" | 9,673 | 4 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-12-05T08:35:26Z" | ---
license: apache-2.0
---
|
uoft-cs/cifar100 | uoft-cs | "2024-01-04T06:57:47Z" | 9,659 | 42 | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|other-80-Million-Tiny-Images",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-80-Million-Tiny-Images
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-100
pretty_name: Cifar100
dataset_info:
config_name: cifar100
features:
- name: img
dtype: image
- name: fine_label
dtype:
class_label:
names:
'0': apple
'1': aquarium_fish
'2': baby
'3': bear
'4': beaver
'5': bed
'6': bee
'7': beetle
'8': bicycle
'9': bottle
'10': bowl
'11': boy
'12': bridge
'13': bus
'14': butterfly
'15': camel
'16': can
'17': castle
'18': caterpillar
'19': cattle
'20': chair
'21': chimpanzee
'22': clock
'23': cloud
'24': cockroach
'25': couch
'26': cra
'27': crocodile
'28': cup
'29': dinosaur
'30': dolphin
'31': elephant
'32': flatfish
'33': forest
'34': fox
'35': girl
'36': hamster
'37': house
'38': kangaroo
'39': keyboard
'40': lamp
'41': lawn_mower
'42': leopard
'43': lion
'44': lizard
'45': lobster
'46': man
'47': maple_tree
'48': motorcycle
'49': mountain
'50': mouse
'51': mushroom
'52': oak_tree
'53': orange
'54': orchid
'55': otter
'56': palm_tree
'57': pear
'58': pickup_truck
'59': pine_tree
'60': plain
'61': plate
'62': poppy
'63': porcupine
'64': possum
'65': rabbit
'66': raccoon
'67': ray
'68': road
'69': rocket
'70': rose
'71': sea
'72': seal
'73': shark
'74': shrew
'75': skunk
'76': skyscraper
'77': snail
'78': snake
'79': spider
'80': squirrel
'81': streetcar
'82': sunflower
'83': sweet_pepper
'84': table
'85': tank
'86': telephone
'87': television
'88': tiger
'89': tractor
'90': train
'91': trout
'92': tulip
'93': turtle
'94': wardrobe
'95': whale
'96': willow_tree
'97': wolf
'98': woman
'99': worm
- name: coarse_label
dtype:
class_label:
names:
'0': aquatic_mammals
'1': fish
'2': flowers
'3': food_containers
'4': fruit_and_vegetables
'5': household_electrical_devices
'6': household_furniture
'7': insects
'8': large_carnivores
'9': large_man-made_outdoor_things
'10': large_natural_outdoor_scenes
'11': large_omnivores_and_herbivores
'12': medium_mammals
'13': non-insect_invertebrates
'14': people
'15': reptiles
'16': small_mammals
'17': trees
'18': vehicles_1
'19': vehicles_2
splits:
- name: train
num_bytes: 112545106.0
num_examples: 50000
- name: test
num_bytes: 22564261.0
num_examples: 10000
download_size: 142291368
dataset_size: 135109367.0
configs:
- config_name: cifar100
data_files:
- split: train
path: cifar100/train-*
- split: test
path: cifar100/test-*
default: true
---
# Dataset Card for CIFAR-100
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIFAR Datasets](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Repository:**
- **Paper:** [Paper](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images
per class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses.
There are two labels per image - fine label (actual class) and coarse label (superclass).
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cifar-100).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x2767F58E080>, 'fine_label': 19,
'coarse_label': 11
}
```
### Data Fields
- `img`: A `PIL.Image.Image` object containing the 32x32 image. Note that when accessing the image column: `dataset[0]["img"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"img"` column, *i.e.* `dataset[0]["img"]` should **always** be preferred over `dataset["img"][0]`.
- `fine_label`: an `int` classification label with the following mapping:
`0`: apple
`1`: aquarium_fish
`2`: baby
`3`: bear
`4`: beaver
`5`: bed
`6`: bee
`7`: beetle
`8`: bicycle
`9`: bottle
`10`: bowl
`11`: boy
`12`: bridge
`13`: bus
`14`: butterfly
`15`: camel
`16`: can
`17`: castle
`18`: caterpillar
`19`: cattle
`20`: chair
`21`: chimpanzee
`22`: clock
`23`: cloud
`24`: cockroach
`25`: couch
`26`: cra
`27`: crocodile
`28`: cup
`29`: dinosaur
`30`: dolphin
`31`: elephant
`32`: flatfish
`33`: forest
`34`: fox
`35`: girl
`36`: hamster
`37`: house
`38`: kangaroo
`39`: keyboard
`40`: lamp
`41`: lawn_mower
`42`: leopard
`43`: lion
`44`: lizard
`45`: lobster
`46`: man
`47`: maple_tree
`48`: motorcycle
`49`: mountain
`50`: mouse
`51`: mushroom
`52`: oak_tree
`53`: orange
`54`: orchid
`55`: otter
`56`: palm_tree
`57`: pear
`58`: pickup_truck
`59`: pine_tree
`60`: plain
`61`: plate
`62`: poppy
`63`: porcupine
`64`: possum
`65`: rabbit
`66`: raccoon
`67`: ray
`68`: road
`69`: rocket
`70`: rose
`71`: sea
`72`: seal
`73`: shark
`74`: shrew
`75`: skunk
`76`: skyscraper
`77`: snail
`78`: snake
`79`: spider
`80`: squirrel
`81`: streetcar
`82`: sunflower
`83`: sweet_pepper
`84`: table
`85`: tank
`86`: telephone
`87`: television
`88`: tiger
`89`: tractor
`90`: train
`91`: trout
`92`: tulip
`93`: turtle
`94`: wardrobe
`95`: whale
`96`: willow_tree
`97`: wolf
`98`: woman
`99`: worm
- `coarse_label`: an `int` coarse classification label with following mapping:
`0`: aquatic_mammals
`1`: fish
`2`: flowers
`3`: food_containers
`4`: fruit_and_vegetables
`5`: household_electrical_devices
`6`: household_furniture
`7`: insects
`8`: large_carnivores
`9`: large_man-made_outdoor_things
`10`: large_natural_outdoor_scenes
`11`: large_omnivores_and_herbivores
`12`: medium_mammals
`13`: non-insect_invertebrates
`14`: people
`15`: reptiles
`16`: small_mammals
`17`: trees
`18`: vehicles_1
`19`: vehicles_2
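To illustrate how the two label sets relate, the sketch below maps a few fine label ids to their coarse (superclass) ids. The mapping dict is an illustrative subset written out by hand from the lists above, not an artifact shipped with the dataset; the full mapping covers all 100 fine classes.

```python
# Illustrative subset of the fine-label -> coarse-label (superclass) mapping,
# taken from the label lists above. The full mapping covers all 100 classes.
FINE_TO_COARSE = {
    19: 11,  # cattle    -> large_omnivores_and_herbivores
    30: 0,   # dolphin   -> aquatic_mammals
    73: 1,   # shark     -> fish
    82: 2,   # sunflower -> flowers
}

def coarse_label(fine: int) -> int:
    """Return the coarse (superclass) id for a fine label id."""
    return FINE_TO_COARSE[fine]

# Matches the sample instance above: fine_label 19 pairs with coarse_label 11.
assert coarse_label(19) == 11
```

This mirrors the training sample shown earlier, where `fine_label` 19 (cattle) carries `coarse_label` 11 (large_omnivores_and_herbivores).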
### Data Splits
| name |train|test|
|----------|----:|---------:|
|cifar100|50000| 10000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
lmms-lab/MMMU | lmms-lab | "2024-03-08T05:09:42Z" | 9,576 | 4 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-15T06:32:16Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 57719107.0
num_examples: 150
- name: validation
num_bytes: 347519954.0
num_examples: 900
- name: test
num_bytes: 3271046267.0
num_examples: 10500
download_size: 3377778136
dataset_size: 3676285328.0
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
This is a merged version of [MMMU/MMMU](https://huggingface.co/datasets/MMMU/MMMU) with all subsets concatenated.
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [MMMU](https://github.com/MMMU-Benchmark/MMMU). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
```
@article{yue2023mmmu,
title={Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi},
author={Yue, Xiang and Ni, Yuansheng and Zhang, Kai and Zheng, Tianyu and Liu, Ruoqi and Zhang, Ge and Stevens, Samuel and Jiang, Dongfu and Ren, Weiming and Sun, Yuxuan and others},
journal={arXiv preprint arXiv:2311.16502},
year={2023}
}
``` |
OpenDriveLab/OpenScene | OpenDriveLab | "2024-04-22T18:51:08Z" | 9,529 | 4 | [
"license:cc-by-nc-sa-4.0",
"size_categories:n<1K",
"modality:text",
"region:us"
] | null | "2024-03-02T04:33:04Z" | ---
license: cc-by-nc-sa-4.0
--- |
nvidia/OpenMathInstruct-2 | nvidia | "2024-11-25T20:07:28Z" | 9,526 | 135 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2410.01560",
"region:us",
"math",
"nvidia"
] | [
"question-answering",
"text-generation"
] | "2024-09-28T16:37:52Z" | ---
language:
- en
license: cc-by-4.0
size_categories:
- 10M<n<100M
task_categories:
- question-answering
- text-generation
pretty_name: OpenMathInstruct-2
dataset_info:
features:
- name: problem
dtype: string
- name: generated_solution
dtype: string
- name: expected_answer
dtype: string
- name: problem_source
dtype: string
splits:
- name: train_1M
num_bytes: 1350383003
num_examples: 1000000
- name: train_2M
num_bytes: 2760009675
num_examples: 2000000
- name: train_5M
num_bytes: 6546496157
num_examples: 5000000
- name: train
num_bytes: 15558412976
num_examples: 13972791
download_size: 20208929853
dataset_size: 26215301811
tags:
- math
- nvidia
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: train_1M
path: data/train_1M-*
- split: train_2M
path: data/train_2M-*
- split: train_5M
path: data/train_5M-*
---
# OpenMathInstruct-2
OpenMathInstruct-2 is a math instruction tuning dataset with 14M problem-solution pairs
generated using the [Llama3.1-405B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct) model.
The training set problems of [GSM8K](https://github.com/openai/grade-school-math)
and [MATH](https://github.com/hendrycks/math) are used for constructing the dataset in the following ways:
- *Solution augmentation*: Generating chain-of-thought solutions for training set problems in GSM8K and MATH.
- *Problem-Solution augmentation*: Generating new problems, followed by solutions for these new problems.
<p>
<img src="SFT Data Diagram 1.jpg" width="75%" title="Composition of OpenMathInstruct-2">
</p>
OpenMathInstruct-2 dataset contains the following fields:
- **problem**: Original problem from either the GSM8K or MATH training set or augmented problem from these training sets.
- **generated_solution**: Synthetically generated solution.
- **expected_answer**: For problems in the training set, it is the ground-truth answer provided in the datasets. **For augmented problems, it is the majority-voting answer.**
- **problem_source**: Whether the problem is taken directly from GSM8K or MATH or is an augmented version derived from either dataset.
<p>
<img src="scaling_plot.jpg" width="40%" title="Scaling Curve">
</p>
We also release the 1M, 2M, and 5M *fair-downsampled* versions of the entire training set, corresponding to points in the above scaling plot.
These splits are referred to as **train_1M**, **train_2M**, and **train_5M**.
To use these subsets, just specify one of these subsets as split while downloading the data:
```python
from datasets import load_dataset
# Download only the 1M training split
dataset = load_dataset('nvidia/OpenMathInstruct-2', split='train_1M', streaming=True)
```
To download the entire training set and to convert it into the jsonl format, use the following code snippet.
This might take 20-30 minutes (or more depending on your network connection) and will use ~20Gb of RAM.
```python
import json
from datasets import load_dataset
from tqdm import tqdm
dataset = load_dataset('nvidia/OpenMathInstruct-2', split='train')
print("Converting dataset to jsonl format")
output_file = "openmathinstruct2.jsonl"
with open(output_file, 'w', encoding='utf-8') as f:
for item in tqdm(dataset):
f.write(json.dumps(item, ensure_ascii=False) + '\n')
print(f"Conversion complete. Output saved as {output_file}")
```
Apart from the dataset, we also release the [contamination explorer](https://huggingface.co/spaces/nvidia/OpenMathInstruct-2-explorer) for looking at problems
in the OpenMathInstruct-2 dataset that are similar to the [GSM8K](https://huggingface.co/datasets/openai/gsm8k), [MATH](https://github.com/hendrycks/math),
[AMC 2023](https://github.com/QwenLM/Qwen2.5-Math/tree/main/evaluation/data/amc23), [AIME 2024](https://artofproblemsolving.com/wiki/index.php/2024_AIME_I),
and [Omni-MATH](https://huggingface.co/datasets/KbsdJames/Omni-MATH) test set problems.
See our [paper](https://arxiv.org/abs/2410.01560) to learn more details!
### Note
The released dataset doesn't filter out extremely long questions. After the dataset release, we found that 564 questions (roughly 0.1%) were longer than 1024 Llama tokens.
We experimented with removing these questions and didn't see a performance drop (in fact, we observed a minor bump). Dropping these questions helps with memory as well.
So we recommend filtering out extremely long questions. We have updated the data preparation commands in our [Github documentation](https://nvidia.github.io/NeMo-Skills/openmathinstruct2/dataset/#converting-to-sft-format).
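A minimal sketch of such a length filter is below. Whitespace word count stands in for the Llama tokenizer here purely for illustration; for exact counts, replace `count_tokens` with the real tokenizer, e.g. `len(tokenizer.encode(text))`.

```python
# Sketch of a question-length filter. Whitespace tokenization is a stand-in
# for the Llama tokenizer; swap in the real tokenizer for exact counts.
MAX_TOKENS = 1024

def count_tokens(text: str) -> int:
    return len(text.split())

def keep(example: dict) -> bool:
    return count_tokens(example["problem"]) <= MAX_TOKENS

samples = [
    {"problem": "What is 2 + 2?"},
    {"problem": "word " * 2000},  # far past the cutoff
]
filtered = [ex for ex in samples if keep(ex)]
print(len(filtered))  # 1
```

With the `datasets` library, the same predicate can be applied to the full training split via `dataset.filter(keep)`.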
## OpenMath2 models
To demonstrate the quality of this dataset, we release a series of OpenMath2 models trained on this data.
| Model | GSM8K | MATH | AMC 2023 | AIME 2024 | Omni-MATH |
|:---|:---:|:---:|:---:|:---:|:---:|
| Llama3.1-8B-Instruct | 84.5 | 51.9 | 9/40 | 2/30 | 12.7 |
| OpenMath2-Llama3.1-8B ([nemo](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B-nemo) \| [HF](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B)) | 91.7 | 67.8 | 16/40 | 3/30 | 22.0 |
| + majority@256 | 94.1 | 76.1 | 23/40 | 3/30 | 24.6 |
| Llama3.1-70B-Instruct | 95.8 | 67.9 | 19/40 | 6/30 | 19.0 |
| OpenMath2-Llama3.1-70B ([nemo](https://huggingface.co/nvidia/OpenMath2-Llama3.1-70B-nemo) \| [HF](https://huggingface.co/nvidia/OpenMath2-Llama3.1-70B)) | 94.9 | 71.9 | 20/40 | 4/30 | 23.1 |
| + majority@256 | 96.0 | 79.6 | 24/40 | 6/30 | 27.6 |
The pipeline we used to produce the data and models is fully open-sourced!
- [Code](https://github.com/NVIDIA/NeMo-Skills)
- [Models](https://huggingface.co/collections/nvidia/openmath-2-66fb142317d86400783d2c7b)
- [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-2)
## Reproducing our results
We provide [all instructions](https://nvidia.github.io/NeMo-Skills/openmathinstruct2/)
to fully reproduce our results, including data generation.
## Citation
If you find our work useful, please consider citing us!
```bibtex
@article{toshniwal2024openmath2,
title = {OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data},
author = {Shubham Toshniwal and Wei Du and Ivan Moshkov and Branislav Kisacanin and Alexan Ayrapetyan and Igor Gitman},
year = {2024},
journal = {arXiv preprint arXiv:2410.01560}
}
``` |
CogComp/trec | CogComp | "2024-01-18T11:17:19Z" | 9,421 | 41 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:1K<n<10K",
"region:us"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: trecqa
pretty_name: Text Retrieval Conference Question Answering
dataset_info:
features:
- name: text
dtype: string
- name: coarse_label
dtype:
class_label:
names:
'0': ABBR
'1': ENTY
'2': DESC
'3': HUM
'4': LOC
'5': NUM
- name: fine_label
dtype:
class_label:
names:
'0': ABBR:abb
'1': ABBR:exp
'2': ENTY:animal
'3': ENTY:body
'4': ENTY:color
'5': ENTY:cremat
'6': ENTY:currency
'7': ENTY:dismed
'8': ENTY:event
'9': ENTY:food
'10': ENTY:instru
'11': ENTY:lang
'12': ENTY:letter
'13': ENTY:other
'14': ENTY:plant
'15': ENTY:product
'16': ENTY:religion
'17': ENTY:sport
'18': ENTY:substance
'19': ENTY:symbol
'20': ENTY:techmeth
'21': ENTY:termeq
'22': ENTY:veh
'23': ENTY:word
'24': DESC:def
'25': DESC:desc
'26': DESC:manner
'27': DESC:reason
'28': HUM:gr
'29': HUM:ind
'30': HUM:title
'31': HUM:desc
'32': LOC:city
'33': LOC:country
'34': LOC:mount
'35': LOC:other
'36': LOC:state
'37': NUM:code
'38': NUM:count
'39': NUM:date
'40': NUM:dist
'41': NUM:money
'42': NUM:ord
'43': NUM:other
'44': NUM:period
'45': NUM:perc
'46': NUM:speed
'47': NUM:temp
'48': NUM:volsize
'49': NUM:weight
splits:
- name: train
num_bytes: 385090
num_examples: 5452
- name: test
num_bytes: 27983
num_examples: 500
download_size: 359212
dataset_size: 413073
---
# Dataset Card for "trec"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://cogcomp.seas.upenn.edu/Data/QA/QC/](https://cogcomp.seas.upenn.edu/Data/QA/QC/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.41 MB
- **Total amount of disk used:** 0.78 MB
### Dataset Summary
The Text REtrieval Conference (TREC) Question Classification dataset contains 5,500 labeled questions in the training set and another 500 in the test set.
The dataset has 6 coarse class labels and 50 fine class labels. The average sentence length is 10 words, with a vocabulary size of 8,700.
Data are collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and also 500 questions from TREC 10, which serve as the test set. These questions were manually labeled.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in this dataset is English (`en`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.41 MB
- **Total amount of disk used:** 0.78 MB
An example of 'train' looks as follows.
```
{
'text': 'How did serfdom develop in and then leave Russia ?',
'coarse_label': 2,
'fine_label': 26
}
```
### Data Fields
The data fields are the same among all splits.
- `text` (`str`): Text of the question.
- `coarse_label` (`ClassLabel`): Coarse class label. Possible values are:
- 'ABBR' (0): Abbreviation.
- 'ENTY' (1): Entity.
- 'DESC' (2): Description and abstract concept.
- 'HUM' (3): Human being.
- 'LOC' (4): Location.
- 'NUM' (5): Numeric value.
- `fine_label` (`ClassLabel`): Fine class label. Possible values are:
- ABBREVIATION:
- 'ABBR:abb' (0): Abbreviation.
- 'ABBR:exp' (1): Expression abbreviated.
- ENTITY:
- 'ENTY:animal' (2): Animal.
- 'ENTY:body' (3): Organ of body.
- 'ENTY:color' (4): Color.
- 'ENTY:cremat' (5): Invention, book and other creative piece.
- 'ENTY:currency' (6): Currency name.
- 'ENTY:dismed' (7): Disease and medicine.
- 'ENTY:event' (8): Event.
- 'ENTY:food' (9): Food.
- 'ENTY:instru' (10): Musical instrument.
- 'ENTY:lang' (11): Language.
- 'ENTY:letter' (12): Letter like a-z.
- 'ENTY:other' (13): Other entity.
- 'ENTY:plant' (14): Plant.
- 'ENTY:product' (15): Product.
- 'ENTY:religion' (16): Religion.
- 'ENTY:sport' (17): Sport.
- 'ENTY:substance' (18): Element and substance.
- 'ENTY:symbol' (19): Symbols and sign.
- 'ENTY:techmeth' (20): Techniques and method.
- 'ENTY:termeq' (21): Equivalent term.
- 'ENTY:veh' (22): Vehicle.
- 'ENTY:word' (23): Word with a special property.
- DESCRIPTION:
- 'DESC:def' (24): Definition of something.
- 'DESC:desc' (25): Description of something.
- 'DESC:manner' (26): Manner of an action.
- 'DESC:reason' (27): Reason.
- HUMAN:
  - 'HUM:gr' (28): Group or organization of persons.
- 'HUM:ind' (29): Individual.
- 'HUM:title' (30): Title of a person.
- 'HUM:desc' (31): Description of a person.
- LOCATION:
- 'LOC:city' (32): City.
- 'LOC:country' (33): Country.
- 'LOC:mount' (34): Mountain.
- 'LOC:other' (35): Other location.
- 'LOC:state' (36): State.
- NUMERIC:
- 'NUM:code' (37): Postcode or other code.
- 'NUM:count' (38): Number of something.
- 'NUM:date' (39): Date.
- 'NUM:dist' (40): Distance, linear measure.
- 'NUM:money' (41): Price.
- 'NUM:ord' (42): Order, rank.
- 'NUM:other' (43): Other number.
  - 'NUM:period' (44): Lasting time of something.
- 'NUM:perc' (45): Percent, fraction.
- 'NUM:speed' (46): Speed.
- 'NUM:temp' (47): Temperature.
- 'NUM:volsize' (48): Size, area and volume.
- 'NUM:weight' (49): Weight.
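Since every fine label name is prefixed with its coarse category, the coarse class can be recovered from the fine label name by splitting on the colon. A small sketch (in `datasets`, the `ClassLabel` feature's `int2str` converts ids to these names):

```python
# Recover the coarse category id from a fine label name: it is the part
# before the colon. Order matches the coarse_label ClassLabel above.
COARSE_NAMES = ["ABBR", "ENTY", "DESC", "HUM", "LOC", "NUM"]

def fine_to_coarse(fine_name: str) -> int:
    """Map a fine label name like 'DESC:manner' to its coarse label id."""
    return COARSE_NAMES.index(fine_name.split(":")[0])

# Matches the sample instance above: fine_label 26 is 'DESC:manner',
# and its coarse_label is 2 (DESC).
assert fine_to_coarse("DESC:manner") == 2
```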
### Data Splits
| name | train | test |
|---------|------:|-----:|
| default | 5452 | 500 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{li-roth-2002-learning,
title = "Learning Question Classifiers",
author = "Li, Xin and
Roth, Dan",
booktitle = "{COLING} 2002: The 19th International Conference on Computational Linguistics",
year = "2002",
url = "https://www.aclweb.org/anthology/C02-1150",
}
@inproceedings{hovy-etal-2001-toward,
title = "Toward Semantics-Based Answer Pinpointing",
author = "Hovy, Eduard and
Gerber, Laurie and
Hermjakob, Ulf and
Lin, Chin-Yew and
Ravichandran, Deepak",
booktitle = "Proceedings of the First International Conference on Human Language Technology Research",
year = "2001",
url = "https://www.aclweb.org/anthology/H01-1069",
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
sordonia/narrativeqa_sanitized | sordonia | "2024-09-28T05:02:43Z" | 9,407 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-28T05:02:28Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: questions
sequence: string
- name: answers
sequence:
sequence: string
- name: document_id
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 508515599
num_examples: 1572
download_size: 288012667
dataset_size: 508515599
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
WenhaoWang/TIP-I2V | WenhaoWang | "2024-11-27T05:39:28Z" | 9,403 | 10 | [
"task_categories:image-to-video",
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:image-to-image",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2411.04709",
"region:us",
"prompt",
"image-to-video",
"text-to-video",
"visual-generation",
"video-generation"
] | [
"image-to-video",
"text-to-video",
"text-to-image",
"image-to-image"
] | "2024-10-10T17:03:53Z" | ---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 1M<n<10M
task_categories:
- image-to-video
- text-to-video
- text-to-image
- image-to-image
dataset_info:
features:
- name: UUID
dtype: string
- name: Text_Prompt
dtype: string
- name: Image_Prompt
dtype: image
- name: Subject
dtype: string
- name: Timestamp
dtype: string
- name: Text_NSFW
dtype: float32
- name: Image_NSFW
dtype: string
splits:
- name: Full
num_bytes: 13440652664.125
num_examples: 1701935
- name: Subset
num_bytes: 790710630
num_examples: 100000
- name: Eval
num_bytes: 78258893
num_examples: 10000
download_size: 27500759907
dataset_size: 27750274851.25
configs:
- config_name: default
data_files:
- split: Full
path: data/Full-*
- split: Subset
path: data/Subset-*
- split: Eval
path: data/Eval-*
tags:
- prompt
- image-to-video
- text-to-video
- visual-generation
- video-generation
pretty_name: TIP-I2V
---
# Summary
This is the dataset proposed in our paper [**TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation**](https://arxiv.org/abs/2411.04709).
TIP-I2V is the first dataset comprising over 1.70 million unique user-provided text and image prompts. Besides the prompts, TIP-I2V also includes videos generated by five state-of-the-art image-to-video models (Pika, Stable Video Diffusion, Open-Sora, I2VGen-XL, and CogVideoX-5B). TIP-I2V contributes to the development of better and safer image-to-video models.
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/TIP-I2V/resolve/main/assets/teasor.png" width="1000">
</p>
# Datapoint
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/TIP-I2V/resolve/main/assets/datapoint.png" width="1000">
</p>
# Statistics
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/TIP-I2V/resolve/main/assets/stat.png" width="1000">
</p>
# Download
For users in mainland China, try setting `export HF_ENDPOINT=https://hf-mirror.com` to successfully download the datasets.
## Download the text and (compressed) image prompts with related information
```python
# Full (text and compressed image) prompts: ~13.4G
from datasets import load_dataset
ds = load_dataset("WenhaoWang/TIP-I2V", split='Full', streaming=True)
# Convert to Pandas format (it may be slow)
import pandas as pd
df = pd.DataFrame(ds)
```
```python
# 100k subset (text and compressed image) prompts: ~0.8G
from datasets import load_dataset
ds = load_dataset("WenhaoWang/TIP-I2V", split='Subset', streaming=True)
# Convert to Pandas format (it may be slow)
import pandas as pd
df = pd.DataFrame(ds)
```
```python
# 10k TIP-Eval (text and compressed image) prompts: ~0.08G
from datasets import load_dataset
ds = load_dataset("WenhaoWang/TIP-I2V", split='Eval', streaming=True)
# Convert to Pandas format (it may be slow)
import pandas as pd
df = pd.DataFrame(ds)
```
## Download the embeddings for text and image prompts
```python
# Embeddings for full text prompts (~21G) and image prompts (~3.5G)
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Full_Text_Embedding.parquet", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Full_Image_Embedding.parquet", repo_type="dataset")
```
```python
# Embeddings for 100k subset text prompts (~1.2G) and image prompts (~0.2G)
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Subset_Text_Embedding.parquet", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Subset_Image_Embedding.parquet", repo_type="dataset")
```
```python
# Embeddings for 10k TIP-Eval text prompts (~0.1G) and image prompts (~0.02G)
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Eval_Text_Embedding.parquet", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Eval_Image_Embedding.parquet", repo_type="dataset")
```
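Once downloaded, an embedding parquet can be queried for nearest neighbors. The sketch below uses a small in-memory stand-in for one of the parquet files; the `uuid`/`embedding` column names are assumptions, not the guaranteed schema:

```python
import numpy as np
import pandas as pd

# Stand-in for a downloaded embedding parquet (e.g. Eval_Text_Embedding.parquet);
# the column names here are assumptions, not the guaranteed schema.
df = pd.DataFrame({
    "uuid": ["a", "b"],
    "embedding": [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
})

def cosine_sim(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

query = np.array([0.9, 0.1])
sims = {row.uuid: cosine_sim(query, row.embedding) for row in df.itertuples()}
best = max(sims, key=sims.get)
print(best)  # the prompt whose embedding is closest to the query
```

The same loop works on the real files after reading them with `pd.read_parquet`.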
## Download uncompressed image prompts
```python
# Full uncompressed image prompts: ~1T
from huggingface_hub import hf_hub_download
for i in range(1,52):
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="image_prompt_tar/image_prompt_%d.tar"%i, repo_type="dataset")
```
```python
# 100k subset uncompressed image prompts: ~69.6G
from huggingface_hub import hf_hub_download
for i in range(1,3):
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="sub_image_prompt_tar/sub_image_prompt_%d.tar"%i, repo_type="dataset")
```
```python
# 10k TIP-Eval uncompressed image prompts: ~6.5G
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_image_prompt_tar/eval_image_prompt.tar", repo_type="dataset")
```
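The downloaded archives are plain tar files, so they can be unpacked with the standard library. The directory layout inside each archive is not documented here, so treat it as an assumption:

```python
import tarfile
from pathlib import Path

def extract_archive(tar_path: str, out_dir: str) -> list[str]:
    """Extract a downloaded .tar (e.g. eval_image_prompt.tar) into out_dir
    and return the extracted file names. The internal layout is an assumption."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with tarfile.open(tar_path) as tar:
        tar.extractall(out)  # on Python 3.12+, add filter="data" for safer extraction
    return sorted(p.name for p in out.rglob("*") if p.is_file())
```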
## Download generated videos
```python
# Full videos generated by Pika: ~1T
from huggingface_hub import hf_hub_download
for i in range(1,52):
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="pika_videos_tar/pika_videos_%d.tar"%i, repo_type="dataset")
```
```python
# 100k subset videos generated by Pika (~57.6G), Stable Video Diffusion (~38.9G), Open-Sora (~47.2G), I2VGen-XL (~54.4G), and CogVideoX-5B (~36.7G)
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/pika_videos_subset_1.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/pika_videos_subset_2.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/svd_videos_subset.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/opensora_videos_subset.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/i2vgenxl_videos_subset_1.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/i2vgenxl_videos_subset_2.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/cog_videos_subset.tar", repo_type="dataset")
```
```python
# 10k TIP-Eval videos generated by Pika (~5.8G), Stable Video Diffusion (~3.9G), Open-Sora (~4.7G), I2VGen-XL (~5.4G), and CogVideoX-5B (~3.6G)
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/pika_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/svd_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/opensora_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/i2vgenxl_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/cog_videos_eval.tar", repo_type="dataset")
```
# Comparison with VidProM and DiffusionDB
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/TIP-I2V/resolve/main/assets/table.png" width="1000">
</p>
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/TIP-I2V/resolve/main/assets/comparison.png" width="1000">
</p>
Click the [WizMap (TIP-I2V VS VidProM)](https://poloclub.github.io/wizmap/?dataURL=https%3A%2F%2Fhuggingface.co%2Fdatasets%2FWenhaoWang%2FTIP-I2V%2Fresolve%2Fmain%2Ftip-i2v-visualize%2Fdata_tip-i2v_vidprom.ndjson&gridURL=https%3A%2F%2Fhuggingface.co%2Fdatasets%2FWenhaoWang%2FTIP-I2V%2Fresolve%2Fmain%2Ftip-i2v-visualize%2Fgrid_tip-i2v_vidprom.json) and [WizMap (TIP-I2V VS DiffusionDB)](https://poloclub.github.io/wizmap/?dataURL=https%3A%2F%2Fhuggingface.co%2Fdatasets%2FWenhaoWang%2FTIP-I2V%2Fresolve%2Fmain%2Ftip-i2v-visualize%2Fdata_tip-i2v_diffusiondb.ndjson&gridURL=https%3A%2F%2Fhuggingface.co%2Fdatasets%2FWenhaoWang%2FTIP-I2V%2Fresolve%2Fmain%2Ftip-i2v-visualize%2Fgrid_tip-i2v_diffusiondb.json)
(allow about 5 seconds for loading) for an interactive visualization of our 1.70 million prompts.
# License
The prompts and videos in our TIP-I2V are licensed under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/deed.en).
# Curators
TIP-I2V is created by [Wenhao Wang](https://wangwenhao0716.github.io/) and Professor [Yi Yang](https://scholar.google.com/citations?user=RMSuNFwAAAAJ&hl=zh-CN).
# Citation
```
@article{wang2024tipi2v,
  title={TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation},
  author={Wang, Wenhao and Yang, Yi},
  journal={arXiv preprint arXiv:2411.04709},
  year={2024}
}
```
# Contact
If you have any questions, feel free to contact Wenhao Wang ([email protected]). |
lmms-lab/DocVQA | lmms-lab | "2024-04-18T05:14:35Z" | 9,396 | 27 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2007.00398",
"region:us"
] | null | "2024-01-22T16:29:32Z" | ---
license: apache-2.0
dataset_info:
- config_name: DocVQA
features:
- name: questionId
dtype: string
- name: question
dtype: string
- name: question_types
sequence: string
- name: image
dtype: image
- name: docId
dtype: int64
- name: ucsf_document_id
dtype: string
- name: ucsf_document_page_no
dtype: string
- name: answers
sequence: string
- name: data_split
dtype: string
splits:
# - name: train
# num_bytes: 5659006943.631
# num_examples: 39463
- name: validation
num_bytes: 2532447207.066
num_examples: 5349
- name: test
num_bytes: 2500408525.732
num_examples: 5188
download_size: 9555791945
dataset_size: 10691862676.428999
- config_name: InfographicVQA
features:
- name: questionId
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: answer_type
sequence: string
- name: image
dtype: image
- name: image_url
dtype: string
- name: operation/reasoning
sequence: string
- name: ocr
dtype: string
- name: data_split
dtype: string
splits:
# - name: train
# num_bytes: 11559694546.32
# num_examples: 23946
- name: validation
num_bytes: 1863177404.253
num_examples: 2801
- name: test
num_bytes: 1851304047.712
num_examples: 3288
download_size: 2544892079
dataset_size: 15274175998.285
configs:
- config_name: DocVQA
data_files:
# - split: train
# path: DocVQA/train-*
- split: validation
path: DocVQA/validation-*
- split: test
path: DocVQA/test-*
- config_name: InfographicVQA
data_files:
# - split: train
# path: InfographicVQA/train-*
- split: validation
path: InfographicVQA/validation-*
- split: test
path: InfographicVQA/test-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [DocVQA](https://arxiv.org/abs/2007.00398). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
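DocVQA answers are conventionally scored with ANLS (Average Normalized Levenshtein Similarity). A minimal sketch of the per-question score, assuming the usual threshold of 0.5:

```python
def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def anls(prediction: str, gold_answers: list[str], tau: float = 0.5) -> float:
    # Best normalized similarity over the gold answers; scores below 1 - tau drop to 0.
    best = 0.0
    for gold in gold_answers:
        p, g = prediction.strip().lower(), gold.strip().lower()
        nl = levenshtein(p, g) / max(len(p), len(g), 1)
        best = max(best, 0.0 if nl > tau else 1.0 - nl)
    return best
```

The dataset-level metric averages this per-question score over all questions.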
```
@article{mathew2020docvqa,
  title={DocVQA: A Dataset for VQA on Document Images},
author={Mathew, Minesh and Karatzas, Dimosthenis and Manmatha, R and Jawahar, CV},
journal={arXiv preprint arXiv:2007.00398},
year={2020}
}
```
|
lmms-lab/Video-MME | lmms-lab | "2024-07-04T08:14:20Z" | 9,384 | 30 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-07T12:06:37Z" | ---
dataset_info:
config_name: videomme
features:
- name: video_id
dtype: string
- name: duration
dtype: string
- name: domain
dtype: string
- name: sub_category
dtype: string
- name: url
dtype: string
- name: videoID
dtype: string
- name: question_id
dtype: string
- name: task_type
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1003241.0
num_examples: 2700
download_size: 405167
dataset_size: 1003241.0
configs:
- config_name: videomme
data_files:
- split: test
path: videomme/test-*
---
|
faur-ai/fulg | faur-ai | "2024-08-15T10:58:58Z" | 9,363 | 8 | [
"task_categories:text-generation",
"language:ro",
"license:odc-by",
"size_categories:100B<n<1T",
"arxiv:2407.13657",
"region:us",
"language-modeling",
"casual-lm",
"llm"
] | [
"text-generation"
] | "2024-07-16T20:17:27Z" | ---
license: odc-by
viewer: true
task_categories:
- text-generation
language:
- ro
tags:
- language-modeling
- casual-lm
- llm
pretty_name: FuLG
size_categories:
- 100B<n<1T
---
# ❄️FuLG
The FuLG dataset is a comprehensive Romanian language corpus comprising 150 billion tokens, carefully
extracted from Common Crawl. This extensive dataset is the result of rigorous filtering and deduplication
processes applied to 95 Common Crawl snapshots. The compressed dataset is 289 GB.
For more details, check the [arXiv preprint](https://arxiv.org/abs/2407.13657).
### How do I download this?
##### Using 🤗 Datasets
```python
from datasets import load_dataset
# Full dataset
dataset = load_dataset("faur-ai/fulg")
# To load the data from a specific CC snapshot
dataset = load_dataset("faur-ai/fulg", data_dir='2018-05')
```
##### Using Git
```bash
git clone https://huggingface.co/datasets/faur-ai/fulg
```
### Data Fields
Each record has the following fields:
- `url`: url of the source as a string
- `date_download`: date of crawl
- `digest`: hash of content
- `length`: length of content
- `nlines`: number of lines
- `source_domain`: domain of document
- `title`: title of document
- `raw_content`: text content as a string
- `cc_segment`: source CommonCrawl segment
- `original_nlines`: original number of lines before processing
- `original_length`: original length before processing
- `language`: language (ro)
- `language_score`: score for language
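These fields make it straightforward to filter records while streaming. The thresholds below are illustrative, and the sample record is a stand-in rather than real FuLG data:

```python
def keep(record: dict, min_score: float = 0.95, min_length: int = 200) -> bool:
    # Keep confidently-Romanian documents above a minimum length.
    return (record["language"] == "ro"
            and record["language_score"] >= min_score
            and record["length"] >= min_length)

sample = {"language": "ro", "language_score": 0.98, "length": 1024}
print(keep(sample))  # True
```

With 🤗 Datasets this predicate would typically be applied via `dataset.filter(keep)`.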
### Licensing Information
We are releasing this dataset under the terms of
[ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset,
you are also bound by any license agreements and terms of use of the original data
sources.
## Bibtex
If you use our dataset, please cite us at:
```bibtex
@misc{fulg150bromaniancorpus,
title={FuLG: 150B Romanian Corpus for Language Model Pretraining},
author={Vlad-Andrei Bădoiu and Mihai-Valentin Dumitru and Alexandru M. Gherghescu and Alexandru Agache and Costin Raiciu},
year={2024},
eprint={2407.13657},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.13657},
}
```
|
ontocord/VALID | ontocord | "2024-12-07T17:23:29Z" | 9,352 | 10 | [
"license:cc-by-sa-4.0",
"region:us"
] | null | "2024-12-05T00:27:35Z" | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: video_id
dtype: string
- name: chunk_idx
dtype: int64
- name: chunk_text
dtype: string
- name: video_metadata
dtype: string
- name: video_language
dtype: string
- name: chunk_media
dtype: string
splits:
- name: shard_10339
num_bytes: 1997009
num_examples: 631
- name: shard_10400
num_bytes: 2638827
num_examples: 722
- name: shard_10324
num_bytes: 1700655
num_examples: 515
- name: shard_10418
num_bytes: 3034319
num_examples: 947
- name: shard_1045
num_bytes: 2042334
num_examples: 648
- name: shard_10428
num_bytes: 2314345
num_examples: 706
- name: shard_10435
num_bytes: 2300183
num_examples: 677
- name: shard_10424
num_bytes: 1839226
num_examples: 552
- name: shard_10442
num_bytes: 1543285
num_examples: 419
- name: shard_10411
num_bytes: 2005599
num_examples: 604
- name: shard_10344
num_bytes: 1796239
num_examples: 589
- name: shard_10439
num_bytes: 1780546
num_examples: 567
- name: shard_10351
num_bytes: 2156111
num_examples: 677
- name: shard_10446
num_bytes: 2117151
num_examples: 525
- name: shard_10457
num_bytes: 1851306
num_examples: 555
- name: shard_10464
num_bytes: 1316832
num_examples: 440
- name: shard_10405
num_bytes: 1820556
num_examples: 613
- name: shard_10471
num_bytes: 2397197
num_examples: 682
- name: shard_10456
num_bytes: 1279577
num_examples: 430
- name: shard_1035
num_bytes: 2102014
num_examples: 687
- name: shard_10430
num_bytes: 2293697
num_examples: 686
- name: shard_10469
num_bytes: 2521584
num_examples: 743
- name: shard_10360
num_bytes: 2329044
num_examples: 680
- name: shard_10443
num_bytes: 2222280
num_examples: 641
- name: shard_10453
num_bytes: 3277011
num_examples: 931
- name: shard_10481
num_bytes: 2163505
num_examples: 709
- name: shard_10482
num_bytes: 1885620
num_examples: 503
- name: shard_10365
num_bytes: 1789825
num_examples: 453
- name: shard_10475
num_bytes: 2290432
num_examples: 635
- name: shard_10444
num_bytes: 1915386
num_examples: 550
- name: shard_10493
num_bytes: 2240928
num_examples: 752
- name: shard_10433
num_bytes: 1728758
num_examples: 554
- name: shard_10486
num_bytes: 1946726
num_examples: 564
- name: shard_1037
num_bytes: 1622214
num_examples: 464
- name: shard_1049
num_bytes: 2142677
num_examples: 691
- name: shard_10507
num_bytes: 1404701
num_examples: 444
- name: shard_10479
num_bytes: 2668644
num_examples: 706
- name: shard_10543
num_bytes: 1567113
num_examples: 498
- name: shard_10494
num_bytes: 2572169
num_examples: 834
- name: shard_10506
num_bytes: 2352799
num_examples: 689
- name: shard_10497
num_bytes: 2130672
num_examples: 640
- name: shard_10503
num_bytes: 2821589
num_examples: 657
- name: shard_10488
num_bytes: 2610372
num_examples: 824
- name: shard_1050
num_bytes: 2380295
num_examples: 610
- name: shard_10379
num_bytes: 2121338
num_examples: 596
- name: shard_10258
num_bytes: 2899614
num_examples: 881
- name: shard_10521
num_bytes: 1751228
num_examples: 578
- name: shard_10477
num_bytes: 1987455
num_examples: 610
- name: shard_10510
num_bytes: 1809438
num_examples: 536
- name: shard_10518
num_bytes: 1554268
num_examples: 534
- name: shard_10514
num_bytes: 2398872
num_examples: 659
- name: shard_10366
num_bytes: 2686341
num_examples: 715
- name: shard_10462
num_bytes: 3202984
num_examples: 912
- name: shard_10512
num_bytes: 2058849
num_examples: 697
- name: shard_10558
num_bytes: 2065125
num_examples: 572
- name: shard_10383
num_bytes: 2580580
num_examples: 859
- name: shard_10550
num_bytes: 2617491
num_examples: 643
- name: shard_10536
num_bytes: 2352902
num_examples: 649
- name: shard_10529
num_bytes: 1970611
num_examples: 633
- name: shard_10565
num_bytes: 1569669
num_examples: 522
- name: shard_10538
num_bytes: 2012923
num_examples: 564
- name: shard_10532
num_bytes: 1839647
num_examples: 594
- name: shard_10531
num_bytes: 2125990
num_examples: 618
- name: shard_10382
num_bytes: 1770026
num_examples: 493
- name: shard_1058
num_bytes: 1707150
num_examples: 491
- name: shard_10525
num_bytes: 3210740
num_examples: 892
- name: shard_10594
num_bytes: 1369358
num_examples: 458
- name: shard_10572
num_bytes: 1859423
num_examples: 489
- name: shard_1054
num_bytes: 2011157
num_examples: 601
- name: shard_10396
num_bytes: 3458836
num_examples: 956
- name: shard_10608
num_bytes: 2063015
num_examples: 625
- name: shard_10554
num_bytes: 2017977
num_examples: 529
- name: shard_10600
num_bytes: 1895994
num_examples: 568
- name: shard_10509
num_bytes: 1324378
num_examples: 402
- name: shard_10399
num_bytes: 2104822
num_examples: 713
- name: shard_10409
num_bytes: 1595466
num_examples: 476
- name: shard_10563
num_bytes: 2209694
num_examples: 792
- name: shard_10583
num_bytes: 2328975
num_examples: 681
- name: shard_10397
num_bytes: 1736501
num_examples: 585
- name: shard_10595
num_bytes: 2393314
num_examples: 705
- name: shard_10414
num_bytes: 1946475
num_examples: 625
- name: shard_10622
num_bytes: 2213391
num_examples: 641
- name: shard_10590
num_bytes: 2321541
num_examples: 778
- name: shard_1042
num_bytes: 1894737
num_examples: 584
- name: shard_10613
num_bytes: 2204566
num_examples: 656
- name: shard_1062
num_bytes: 2548349
num_examples: 795
- name: shard_10607
num_bytes: 2501284
num_examples: 706
- name: shard_10587
num_bytes: 3077388
num_examples: 876
- name: shard_10589
num_bytes: 1711464
num_examples: 519
- name: shard_10637
num_bytes: 2165818
num_examples: 687
- name: shard_10659
num_bytes: 1797910
num_examples: 617
- name: shard_10626
num_bytes: 1543683
num_examples: 469
- name: shard_10552
num_bytes: 1997256
num_examples: 581
- name: shard_10436
num_bytes: 1948404
num_examples: 648
- name: shard_10645
num_bytes: 1932871
num_examples: 599
- name: shard_10604
num_bytes: 2224582
num_examples: 580
- name: shard_10632
num_bytes: 3291451
num_examples: 839
- name: shard_10611
num_bytes: 2496793
num_examples: 744
- name: shard_10673
num_bytes: 2019733
num_examples: 571
- name: shard_10651
num_bytes: 2494834
num_examples: 814
- name: shard_1063
num_bytes: 2285316
num_examples: 567
- name: shard_10670
num_bytes: 1678940
num_examples: 520
- name: shard_10633
num_bytes: 1144822
num_examples: 317
- name: shard_10639
num_bytes: 1980963
num_examples: 591
- name: shard_10574
num_bytes: 2322077
num_examples: 650
- name: shard_10658
num_bytes: 2610634
num_examples: 804
- name: shard_10664
num_bytes: 2138512
num_examples: 664
- name: shard_10640
num_bytes: 2138491
num_examples: 632
- name: shard_10648
num_bytes: 2334731
num_examples: 637
- name: shard_10701
num_bytes: 2025707
num_examples: 644
- name: shard_10677
num_bytes: 1719218
num_examples: 559
- name: shard_10688
num_bytes: 1998554
num_examples: 607
- name: shard_10680
num_bytes: 1800096
num_examples: 546
- name: shard_10596
num_bytes: 1541386
num_examples: 471
- name: shard_10441
num_bytes: 1993580
num_examples: 597
- name: shard_10683
num_bytes: 2145354
num_examples: 638
- name: shard_10684
num_bytes: 2049918
num_examples: 600
- name: shard_10696
num_bytes: 2556611
num_examples: 704
- name: shard_10733
num_bytes: 1506424
num_examples: 562
- name: shard_10716
num_bytes: 1373410
num_examples: 382
- name: shard_10447
num_bytes: 1789843
num_examples: 552
- name: shard_10727
num_bytes: 1704350
num_examples: 542
- name: shard_1074
num_bytes: 2202555
num_examples: 627
- name: shard_10662
num_bytes: 2446389
num_examples: 678
- name: shard_10714
num_bytes: 2864249
num_examples: 997
- name: shard_10655
num_bytes: 2225408
num_examples: 664
- name: shard_10767
num_bytes: 1883617
num_examples: 587
- name: shard_10745
num_bytes: 1815089
num_examples: 506
- name: shard_1076
num_bytes: 1881592
num_examples: 567
- name: shard_10746
num_bytes: 2077697
num_examples: 569
- name: shard_10752
num_bytes: 1633548
num_examples: 480
- name: shard_10774
num_bytes: 1967064
num_examples: 525
- name: shard_10796
num_bytes: 3216389
num_examples: 1149
- name: shard_10741
num_bytes: 1741749
num_examples: 495
- name: shard_10771
num_bytes: 1431999
num_examples: 465
- name: shard_1081
num_bytes: 1902619
num_examples: 593
- name: shard_10691
num_bytes: 1615444
num_examples: 509
- name: shard_10781
num_bytes: 1758513
num_examples: 521
- name: shard_1072
num_bytes: 1666222
num_examples: 508
- name: shard_10789
num_bytes: 2290621
num_examples: 663
- name: shard_10824
num_bytes: 2303055
num_examples: 755
- name: shard_10720
num_bytes: 1521373
num_examples: 439
- name: shard_10699
num_bytes: 2219222
num_examples: 687
- name: shard_10809
num_bytes: 2491367
num_examples: 670
- name: shard_10868
num_bytes: 1502120
num_examples: 457
- name: shard_10860
num_bytes: 1598902
num_examples: 444
- name: shard_10756
num_bytes: 1741295
num_examples: 519
- name: shard_10875
num_bytes: 2376365
num_examples: 689
- name: shard_10628
num_bytes: 1590374
num_examples: 486
- name: shard_10759
num_bytes: 1968808
num_examples: 578
- name: shard_10784
num_bytes: 1980905
num_examples: 566
- name: shard_10712
num_bytes: 2114475
num_examples: 769
- name: shard_10734
num_bytes: 2503133
num_examples: 805
- name: shard_10846
num_bytes: 1390416
num_examples: 440
- name: shard_10705
num_bytes: 2401860
num_examples: 915
- name: shard_10831
num_bytes: 1444555
num_examples: 430
- name: shard_10778
num_bytes: 1798802
num_examples: 508
- name: shard_10882
num_bytes: 3060842
num_examples: 861
- name: shard_1089
num_bytes: 2117693
num_examples: 617
- name: shard_10897
num_bytes: 1800552
num_examples: 579
- name: shard_10853
num_bytes: 2401097
num_examples: 838
- name: shard_10463
num_bytes: 1953012
num_examples: 575
- name: shard_10815
num_bytes: 2002715
num_examples: 568
- name: shard_10763
num_bytes: 2020642
num_examples: 530
- name: shard_10797
num_bytes: 2748982
num_examples: 842
- name: shard_10918
num_bytes: 2450845
num_examples: 727
- name: shard_10910
num_bytes: 1797180
num_examples: 527
- name: shard_1065
num_bytes: 1683705
num_examples: 476
- name: shard_10474
num_bytes: 2879834
num_examples: 885
- name: shard_10932
num_bytes: 1565556
num_examples: 551
- name: shard_10770
num_bytes: 2959467
num_examples: 837
- name: shard_10840
num_bytes: 1508428
num_examples: 488
- name: shard_10821
num_bytes: 2731613
num_examples: 757
- name: shard_10925
num_bytes: 1745133
num_examples: 528
- name: shard_10660
num_bytes: 1743349
num_examples: 544
- name: shard_10947
num_bytes: 1676536
num_examples: 484
- name: shard_10866
num_bytes: 2890471
num_examples: 812
- name: shard_10872
num_bytes: 1577472
num_examples: 524
- name: shard_10879
num_bytes: 1872624
num_examples: 615
- name: shard_1094
num_bytes: 2318756
num_examples: 639
- name: shard_10983
num_bytes: 2303128
num_examples: 722
- name: shard_10929
num_bytes: 1707341
num_examples: 528
- name: shard_10857
num_bytes: 1610649
num_examples: 512
- name: shard_10485
num_bytes: 1242974
num_examples: 395
- name: shard_10969
num_bytes: 2438237
num_examples: 609
- name: shard_10976
num_bytes: 2679099
num_examples: 753
- name: shard_10990
num_bytes: 2114418
num_examples: 651
- name: shard_10885
num_bytes: 1644987
num_examples: 492
- name: shard_1085
num_bytes: 1778292
num_examples: 588
- name: shard_1048
num_bytes: 2360317
num_examples: 744
- name: shard_10922
num_bytes: 1937906
num_examples: 641
- name: shard_1091
num_bytes: 2080799
num_examples: 701
- name: shard_108
num_bytes: 1849042
num_examples: 553
- name: shard_10813
num_bytes: 2399853
num_examples: 736
- name: shard_10935
num_bytes: 3427617
num_examples: 848
- name: shard_10864
num_bytes: 2676369
num_examples: 731
- name: shard_11010
num_bytes: 1311128
num_examples: 452
- name: shard_10998
num_bytes: 1936638
num_examples: 516
- name: shard_11003
num_bytes: 3035444
num_examples: 850
- name: shard_10490
num_bytes: 2059200
num_examples: 636
- name: shard_10886
num_bytes: 1592390
num_examples: 462
- name: shard_10948
num_bytes: 2410338
num_examples: 766
- name: shard_11032
num_bytes: 2392294
num_examples: 661
- name: shard_1090
num_bytes: 1729394
num_examples: 518
- name: shard_10973
num_bytes: 1874095
num_examples: 543
- name: shard_10893
num_bytes: 2438998
num_examples: 752
- name: shard_10671
num_bytes: 2373850
num_examples: 581
- name: shard_10496
num_bytes: 2672925
num_examples: 772
- name: shard_10871
num_bytes: 2666803
num_examples: 744
- name: shard_10960
num_bytes: 2586483
num_examples: 789
- name: shard_10941
num_bytes: 2750219
num_examples: 937
- name: shard_11047
num_bytes: 1940080
num_examples: 545
- name: shard_11025
num_bytes: 1900405
num_examples: 569
- name: shard_10967
num_bytes: 4183264
num_examples: 996
- name: shard_10206
num_bytes: 3714862
num_examples: 891
- name: shard_11018
num_bytes: 3019471
num_examples: 871
- name: shard_10914
num_bytes: 1980877
num_examples: 552
- name: shard_10992
num_bytes: 1580255
num_examples: 484
- name: shard_10606
num_bytes: 2584262
num_examples: 640
- name: shard_10999
num_bytes: 1950388
num_examples: 551
- name: shard_10921
num_bytes: 2477762
num_examples: 743
- name: shard_10315
num_bytes: 2911312
num_examples: 743
- name: shard_10419
num_bytes: 2586408
num_examples: 667
- name: shard_11054
num_bytes: 2011864
num_examples: 594
- name: shard_10289
num_bytes: 3470407
num_examples: 963
- name: shard_10835
num_bytes: 3026775
num_examples: 806
- name: shard_10298
num_bytes: 2823620
num_examples: 791
- name: shard_10455
num_bytes: 3275368
num_examples: 750
- name: shard_10311
num_bytes: 4072154
num_examples: 1148
- name: shard_11090
num_bytes: 3270282
num_examples: 858
- name: shard_11035
num_bytes: 1447049
num_examples: 458
- name: shard_11022
num_bytes: 2185853
num_examples: 652
- name: shard_11111
num_bytes: 2068341
num_examples: 615
- name: shard_11119
num_bytes: 1970512
num_examples: 580
- name: shard_11126
num_bytes: 1703981
num_examples: 523
download_size: 314533805
dataset_size: 506056044
configs:
- config_name: default
data_files:
- split: train
path: data/*.parquet
- split: shard_10339
path: data/shard_10339-*
- split: shard_10400
path: data/shard_10400-*
- split: shard_10424
path: data/shard_10424-*
- split: shard_10324
path: data/shard_10324-*
- split: shard_10428
path: data/shard_10428-*
- split: shard_10258
path: data/shard_10258-*
- split: shard_10396
path: data/shard_10396-*
- split: shard_10411
path: data/shard_10411-*
- split: shard_10418
path: data/shard_10418-*
- split: shard_10206
path: data/shard_10206-*
- split: shard_10442
path: data/shard_10442-*
- split: shard_1045
path: data/shard_1045-*
- split: shard_10289
path: data/shard_10289-*
- split: shard_10298
path: data/shard_10298-*
- split: shard_10344
path: data/shard_10344-*
- split: shard_10435
path: data/shard_10435-*
- split: shard_10311
path: data/shard_10311-*
- split: shard_10405
path: data/shard_10405-*
- split: shard_10464
path: data/shard_10464-*
- split: shard_10457
path: data/shard_10457-*
- split: shard_10439
path: data/shard_10439-*
- split: shard_10351
path: data/shard_10351-*
- split: shard_10446
path: data/shard_10446-*
- split: shard_10315
path: data/shard_10315-*
- split: shard_10471
path: data/shard_10471-*
- split: shard_1035
path: data/shard_1035-*
- split: shard_10456
path: data/shard_10456-*
- split: shard_10486
path: data/shard_10486-*
- split: shard_10430
path: data/shard_10430-*
- split: shard_10469
path: data/shard_10469-*
- split: shard_10360
path: data/shard_10360-*
- split: shard_10443
path: data/shard_10443-*
- split: shard_10453
path: data/shard_10453-*
- split: shard_10462
path: data/shard_10462-*
- split: shard_10481
path: data/shard_10481-*
- split: shard_10482
path: data/shard_10482-*
- split: shard_10365
path: data/shard_10365-*
- split: shard_10475
path: data/shard_10475-*
- split: shard_10444
path: data/shard_10444-*
- split: shard_10493
path: data/shard_10493-*
- split: shard_10433
path: data/shard_10433-*
- split: shard_1037
path: data/shard_1037-*
- split: shard_1049
path: data/shard_1049-*
- split: shard_10507
path: data/shard_10507-*
- split: shard_10521
path: data/shard_10521-*
- split: shard_10479
path: data/shard_10479-*
- split: shard_10543
path: data/shard_10543-*
- split: shard_10494
path: data/shard_10494-*
- split: shard_10565
path: data/shard_10565-*
- split: shard_10558
path: data/shard_10558-*
- split: shard_10506
path: data/shard_10506-*
- split: shard_10497
path: data/shard_10497-*
- split: shard_10503
path: data/shard_10503-*
- split: shard_10488
path: data/shard_10488-*
- split: shard_1050
path: data/shard_1050-*
- split: shard_10379
path: data/shard_10379-*
- split: shard_10366
path: data/shard_10366-*
- split: shard_10512
path: data/shard_10512-*
- split: shard_10529
path: data/shard_10529-*
- split: shard_10477
path: data/shard_10477-*
- split: shard_10510
path: data/shard_10510-*
- split: shard_10518
path: data/shard_10518-*
- split: shard_10514
path: data/shard_10514-*
- split: shard_10383
path: data/shard_10383-*
- split: shard_10550
path: data/shard_10550-*
- split: shard_10525
path: data/shard_10525-*
- split: shard_10536
path: data/shard_10536-*
- split: shard_10531
path: data/shard_10531-*
- split: shard_10538
path: data/shard_10538-*
- split: shard_10532
path: data/shard_10532-*
- split: shard_10382
path: data/shard_10382-*
- split: shard_10509
path: data/shard_10509-*
- split: shard_10572
path: data/shard_10572-*
- split: shard_1058
path: data/shard_1058-*
- split: shard_10455
path: data/shard_10455-*
- split: shard_10594
path: data/shard_10594-*
- split: shard_10587
path: data/shard_10587-*
- split: shard_1054
path: data/shard_1054-*
- split: shard_10608
path: data/shard_10608-*
- split: shard_10554
path: data/shard_10554-*
- split: shard_10563
path: data/shard_10563-*
- split: shard_10600
path: data/shard_10600-*
- split: shard_10399
path: data/shard_10399-*
- split: shard_10409
path: data/shard_10409-*
- split: shard_10583
path: data/shard_10583-*
- split: shard_10397
path: data/shard_10397-*
- split: shard_10595
path: data/shard_10595-*
- split: shard_10414
path: data/shard_10414-*
- split: shard_10589
path: data/shard_10589-*
- split: shard_10622
path: data/shard_10622-*
- split: shard_1042
path: data/shard_1042-*
- split: shard_10613
path: data/shard_10613-*
- split: shard_1062
path: data/shard_1062-*
- split: shard_10626
path: data/shard_10626-*
- split: shard_10590
path: data/shard_10590-*
- split: shard_10659
path: data/shard_10659-*
- split: shard_10651
path: data/shard_10651-*
- split: shard_10607
path: data/shard_10607-*
- split: shard_10637
path: data/shard_10637-*
- split: shard_10552
path: data/shard_10552-*
- split: shard_10436
path: data/shard_10436-*
- split: shard_10645
path: data/shard_10645-*
- split: shard_10604
path: data/shard_10604-*
- split: shard_10632
path: data/shard_10632-*
- split: shard_10611
path: data/shard_10611-*
- split: shard_1063
path: data/shard_1063-*
- split: shard_10673
path: data/shard_10673-*
- split: shard_10419
path: data/shard_10419-*
- split: shard_10633
path: data/shard_10633-*
- split: shard_10670
path: data/shard_10670-*
- split: shard_10680
path: data/shard_10680-*
- split: shard_10639
path: data/shard_10639-*
- split: shard_10574
path: data/shard_10574-*
- split: shard_10658
path: data/shard_10658-*
- split: shard_10664
path: data/shard_10664-*
- split: shard_10640
path: data/shard_10640-*
- split: shard_10648
path: data/shard_10648-*
- split: shard_10688
path: data/shard_10688-*
- split: shard_10701
path: data/shard_10701-*
- split: shard_10677
path: data/shard_10677-*
- split: shard_10596
path: data/shard_10596-*
- split: shard_10441
path: data/shard_10441-*
- split: shard_10683
path: data/shard_10683-*
- split: shard_10684
path: data/shard_10684-*
- split: shard_10696
path: data/shard_10696-*
- split: shard_10606
path: data/shard_10606-*
- split: shard_10733
path: data/shard_10733-*
- split: shard_10691
path: data/shard_10691-*
- split: shard_10699
path: data/shard_10699-*
- split: shard_10716
path: data/shard_10716-*
- split: shard_10752
path: data/shard_10752-*
- split: shard_10447
path: data/shard_10447-*
- split: shard_1076
path: data/shard_1076-*
- split: shard_10727
path: data/shard_10727-*
- split: shard_10746
path: data/shard_10746-*
- split: shard_1074
path: data/shard_1074-*
- split: shard_10662
path: data/shard_10662-*
- split: shard_10714
path: data/shard_10714-*
- split: shard_10655
path: data/shard_10655-*
- split: shard_10767
path: data/shard_10767-*
- split: shard_10720
path: data/shard_10720-*
- split: shard_10745
path: data/shard_10745-*
- split: shard_1072
path: data/shard_1072-*
- split: shard_10774
path: data/shard_10774-*
- split: shard_10781
path: data/shard_10781-*
- split: shard_10789
path: data/shard_10789-*
- split: shard_10796
path: data/shard_10796-*
- split: shard_10778
path: data/shard_10778-*
- split: shard_10734
path: data/shard_10734-*
- split: shard_10741
path: data/shard_10741-*
- split: shard_10771
path: data/shard_10771-*
- split: shard_1081
path: data/shard_1081-*
- split: shard_10831
path: data/shard_10831-*
- split: shard_10705
path: data/shard_10705-*
- split: shard_10784
path: data/shard_10784-*
- split: shard_10846
path: data/shard_10846-*
- split: shard_10824
path: data/shard_10824-*
- split: shard_10809
path: data/shard_10809-*
- split: shard_10756
path: data/shard_10756-*
- split: shard_10853
path: data/shard_10853-*
- split: shard_10797
path: data/shard_10797-*
- split: shard_10868
path: data/shard_10868-*
- split: shard_10815
path: data/shard_10815-*
- split: shard_10860
path: data/shard_10860-*
- split: shard_10763
path: data/shard_10763-*
- split: shard_10875
path: data/shard_10875-*
- split: shard_10759
path: data/shard_10759-*
- split: shard_10628
path: data/shard_10628-*
- split: shard_10712
path: data/shard_10712-*
- split: shard_10882
path: data/shard_10882-*
- split: shard_1089
path: data/shard_1089-*
- split: shard_10897
path: data/shard_10897-*
- split: shard_10463
path: data/shard_10463-*
- split: shard_10918
path: data/shard_10918-*
- split: shard_10840
path: data/shard_10840-*
- split: shard_10932
path: data/shard_10932-*
- split: shard_10910
path: data/shard_10910-*
- split: shard_1065
path: data/shard_1065-*
- split: shard_10474
path: data/shard_10474-*
- split: shard_10770
path: data/shard_10770-*
- split: shard_10821
path: data/shard_10821-*
- split: shard_10925
path: data/shard_10925-*
- split: shard_1094
path: data/shard_1094-*
- split: shard_10660
path: data/shard_10660-*
- split: shard_10947
path: data/shard_10947-*
- split: shard_10866
path: data/shard_10866-*
- split: shard_10872
path: data/shard_10872-*
- split: shard_10879
path: data/shard_10879-*
- split: shard_10671
path: data/shard_10671-*
- split: shard_1091
path: data/shard_1091-*
- split: shard_10983
path: data/shard_10983-*
- split: shard_10929
path: data/shard_10929-*
- split: shard_10857
path: data/shard_10857-*
- split: shard_10485
path: data/shard_10485-*
- split: shard_10969
path: data/shard_10969-*
- split: shard_10976
path: data/shard_10976-*
- split: shard_10935
path: data/shard_10935-*
- split: shard_10948
path: data/shard_10948-*
- split: shard_10990
path: data/shard_10990-*
- split: shard_10998
path: data/shard_10998-*
- split: shard_10885
path: data/shard_10885-*
- split: shard_1085
path: data/shard_1085-*
- split: shard_10922
path: data/shard_10922-*
- split: shard_1048
path: data/shard_1048-*
- split: shard_108
path: data/shard_108-*
- split: shard_10813
path: data/shard_10813-*
- split: shard_10864
path: data/shard_10864-*
- split: shard_11010
path: data/shard_11010-*
- split: shard_11003
path: data/shard_11003-*
- split: shard_11018
path: data/shard_11018-*
- split: shard_11025
path: data/shard_11025-*
- split: shard_10490
path: data/shard_10490-*
- split: shard_10886
path: data/shard_10886-*
- split: shard_11032
path: data/shard_11032-*
- split: shard_1090
path: data/shard_1090-*
- split: shard_10973
path: data/shard_10973-*
- split: shard_10893
path: data/shard_10893-*
- split: shard_10496
path: data/shard_10496-*
- split: shard_10871
path: data/shard_10871-*
- split: shard_10960
path: data/shard_10960-*
- split: shard_10941
path: data/shard_10941-*
- split: shard_11047
path: data/shard_11047-*
- split: shard_10967
path: data/shard_10967-*
- split: shard_10835
path: data/shard_10835-*
- split: shard_10914
path: data/shard_10914-*
- split: shard_10992
path: data/shard_10992-*
- split: shard_10999
path: data/shard_10999-*
- split: shard_10921
path: data/shard_10921-*
- split: shard_11054
path: data/shard_11054-*
- split: shard_11090
path: data/shard_11090-*
- split: shard_11035
path: data/shard_11035-*
- split: shard_11022
path: data/shard_11022-*
- split: shard_11111
path: data/shard_11111-*
- split: shard_11119
path: data/shard_11119-*
- split: shard_11126
path: data/shard_11126-*
---
![VALID Dataset](https://huggingface.co/datasets/ontocord/VALID/resolve/main/banner1-1.webp)
# VALID (Video-Audio Large Interleaved Dataset)
## Overview
The **VALID (Video-Audio Large Interleaved Dataset)** is a multimodal dataset comprising approximately 720,000 [Creative Commons licensed](https://creativecommons.org/share-your-work/cclicenses/) videos crawled from YouTube, and processed into audio-video-text data records for machine learning research. The dataset provides a unique opportunity for training models to understand relationships between modalities such as video frames, audio clips, and multilingual textual data, making it suitable for applications like multimodal representation learning.
- **Please note the current version is a PREVIEW version. We are still in the process of uploading. Please be patient.**
## Features
- Audio-Video-Text Format:
A combination of:
```
<video>
<caption><image> the caption </caption>
<caption><image> the caption </caption>
<caption><image> the caption </caption>
</video>
<transcript> <audio> multi-lingual transcript </transcript>
English text
```
- The non-text multimodal portion begins each data item and can include multiple media. Some snippets may have more than one audio clip and more than one video; others may contain only images/videos, or only audio paired with English text. Each video contains multiple frames stored as images, with a text caption for each frame. Standalone images can also be interleaved.
Although each audio/video snippet is no more than 10 seconds long, a data record may span more than 10 seconds (e.g., if a data item has two 10-second videos, the corresponding English text covers roughly 20 seconds of video).
This format is intended to teach a model to associate multiple modalities with each other and to understand multiple audio-video elements in an interleaved fashion.
- Data Components:
- **Images**: PNG format, phashed to ensure variability, with 0–10 images per audio snippet. Each image includes a caption created with Florence-2.
- **Audio**: OGG format, multilingual, ~10 seconds per snippet, with shorter sound or music snippets (1–3 seconds) to minimize copyright issues. Each audio snippet is transcribed either with Whisper (for non-English) or with the original YouTube ASR (for English).
- **Text**: Not including the captions and transcripts, the “text” portion is a concatenation of YouTube's original English transcripts associated with the above media, around 1–40 words per data record.
- Dataset Size:
- **About 7,000,000 records.**
- **About 15,000,000 images, each captioned with Florence-2.**
- **About 30,000,000 audio snippets, about half of which are transcribed with Whisper-large and half with YouTube ASR.**
- **Divided into about 12K shards of about 600 records, each in a parquet file and a corresponding .tar.gz file for the media.**
- **About 14TB in total.**
## File Organization
- Each data entry follows the `<video><image(s)><audio><text>` structure as described above.
- Metadata includes alignment between modalities, and implicit ordering of audio/visual elements.
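The interleaved markup can be consumed with ordinary string tools. A minimal, illustrative sketch (the sample record and the parsing approach below are our own; VALID does not ship an official parser):

```python
import re

# A made-up record following the <video>/<caption>/<transcript> layout above.
record = (
    "<video>"
    "<caption><image> a dog runs on grass </caption>"
    "<caption><image> the dog catches a ball </caption>"
    "</video>"
    "<transcript> <audio> le chien court </transcript>"
    "A dog is playing fetch."
)

# Per-frame captions inside the <video> block.
captions = [c.strip() for c in re.findall(r"<caption><image>(.*?)</caption>", record)]

# Multilingual audio transcripts.
transcripts = [t.strip() for t in re.findall(r"<transcript>\s*<audio>(.*?)</transcript>", record)]

# Whatever trails the last closing tag is the aligned English text.
english_text = record.rsplit("</transcript>", 1)[-1].strip()

print(captions)      # ['a dog runs on grass', 'the dog catches a ball']
print(transcripts)   # ['le chien court']
print(english_text)  # A dog is playing fetch.
```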
## Multimodal Details
- **Audio-Video Alignment**: Snippets allow learning temporal relationships between audio and visual elements.
- **Text Annotations**: Text descriptions, including captions and YouTube ASR English translations, provide linguistic alignment.
## Preprocessing
- **Phashing for Images**: Ensures that images within the dataset are dynamic and non-static.
- **Audio Snippet Lengths**: Music and sound effects are clipped to 1–3 seconds to minimize copyright concerns under fair use principles.
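The phashing step can be pictured with a toy difference hash: each bit records whether a pixel is brighter than its right-hand neighbour, so near-identical frames collide while changed frames do not. This is only a sketch of the idea; the actual hashing used to build VALID is not specified here.

```python
def dhash(pixels, hash_size=4):
    """Difference hash of a tiny grayscale image given as a list of rows."""
    bits = []
    for row in pixels[:hash_size]:
        for x in range(hash_size):
            bits.append(1 if row[x] > row[x + 1] else 0)
    # Pack the bits into a single integer fingerprint.
    return sum(bit << i for i, bit in enumerate(bits))

frame_a = [[10, 20, 30, 25, 5]] * 4   # a frame
frame_b = [[10, 20, 30, 25, 5]] * 4   # identical frame -> static, would be dropped
frame_c = [[90, 10, 80, 15, 70]] * 4  # changed frame -> dynamic, kept

print(dhash(frame_a) == dhash(frame_b))  # True
print(dhash(frame_a) == dhash(frame_c))  # False
```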
------
## Licenses
All videos in VALID are CC-BY, as declared by their original uploaders on YouTube. We publish the audio snippets of these videos and selected image frames here under these rights and under the principles of fair use. However, we cannot guarantee that the original uploaders had the rights to share the content.
This dataset has only been lightly filtered for safety by removing data records with high proportions of child-related words AND high proportions of sexual or violence-related words. Moreover, we disclaim all warranties, whether express or implied, and all liabilities with respect to infringement, fitness for a particular purpose, or otherwise.
## Intended Uses
- **Primary Use Case**: Training models for multimodal understanding, such as contrastive multimodal learning (e.g., CLIP, CLAP).
- **Not Recommended For**: Generation tasks, as the dataset's quality may not meet generative model requirements.
## Dataset Limitations
- **Quality**: Images and audio are sourced from YouTube and may vary in resolution and clarity.
- **Rights Uncertainty**: While videos are marked as CC-BY by their third-party authors, the original rights may not be verifiable.
- **Biases**: The dataset's multilingual audio paired with English-only text may introduce linguistic biases. The large variety of videos may introduce bias.
## Ethical Considerations
The dataset was built under the principles of fair use and CC-BY licensing. Its creation strives to align with the spirit of the EU AI Act, emphasizing transparency and safety in AI model development. Users must exercise caution and adhere to copyright and licensing rules when using VALID.
------
## Policy for Managing Video Deletion Requests
Our goal is to establish a clear process for removing videos from our dataset when requested by users or required by external factors, while balancing the rights of content owners, compliance with CC-BY licenses, and the community's ability to utilize the dataset for training and research purposes.
- **1. Respecting Content Owners' Rights:**
All videos in the dataset are under the CC-BY license. As such, proper attribution will always be maintained as required by the license.
If a content owner requests the removal of a video from the dataset, we will balance this request with the community's ability to train on the data, considering the original intent of the CC-BY license.
- **2. Deletion Request Process:**
- Content owners or users can request the removal of a video by FIRST requesting that it be removed from YouTube: [Here](https://support.google.com/youtube/answer/2807622?) and [Here](https://support.google.com/youtube/answer/2801895?hl=en).
- Then the owners or users should verify that it has been removed from YouTube and report this fact to us [Here](https://forms.gle/f4zYzZpJU78SBPho9).
- Requests must demonstrate that the video is no longer publicly available on YouTube.
- We will remove the videos confirmed to be deleted in the next release of this dataset.
- **3. Verification and Balancing Interests:**
All deletion requests will be verified by checking YouTube to ensure the video is no longer available.
We may also remove a video in our sole discretion. Decisions on video removal will take into account:
- The rights and wishes of content owners, including their ability to remove their videos from public availability.
- The community's need for robust datasets for training and research.
- The spirit of the CC-BY license, which permits redistribution and use with proper attribution.
- **4. Responsibilities for Derivative Datasets:**
Users creating derivative datasets must ensure compliance by deleting videos listed in `delete_these_videos.json`.
- **5. Proactive Deletion:**
Videos may be removed proactively under the following circumstances:
- Requests from the hosting provider (e.g., Hugging Face).
- Legal requirements or enforcement actions.
- Internal decisions.
- **6. Community Considerations:**
- The community is encouraged to respect the balance between individual content owners’ wishes and the public benefit derived from open access datasets.
- Efforts will be made to keep the dataset robust while honoring legitimate requests for content removal.
- **7. Updates:**
Users are encouraged to check `delete_these_videos.json` from time to time to ensure their copy of the dataset is up to date.
------
## Related Materials:
- If you are looking for CC-BY YouTube transcripts of videos, check out PleIAs’ [YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons).
- Hugging Face has also created an excellent CC-BY YouTube video dataset: [FineVideo](https://huggingface.co/datasets/HuggingFaceFV/finevideo).
- LAION is also building a dataset [Here](https://huggingface.co/datasets/laion/laion-audio-preview) which includes YouTube audio snippets paired with Gemini-generated captions.
## Acknowledgement and Thanks
This dataset was built by Ontocord.AI in cooperation with Grass and LAION.AI. It was created as part of our SafeLLM/Aurora-M2 project in order to build safe multimodal models that comply with the EU AI Act. This dataset was built on a subset of the Grass Video Repository, a massive video dataset of Creative Commons videos. We deeply thank Hugging Face and the open source community for their support.
## About the Contributors:
- [**Grass**](https://www.getgrass.io/) is committed to making the public web accessible again. Through its network of millions of globally distributed nodes, it is capable of collecting petabyte-scale datasets for a variety of use cases, including training AI models. The network is run exclusively by users who have downloaded an application to their devices, allowing them to contribute their unused internet bandwidth to the network. On X: @getgrass_io
- [**LAION**](https://www.laion.ai), is a non-profit organization, that provides datasets, tools and models to liberate machine learning research. By doing so, we encourage open public education and a more environment-friendly use of resources by reusing existing datasets and models.
- [**Ontocord**](https://www.ontocord.ai/ ) is dedicated to making legally compliant AI. Our mission is to make our AGI future lawful and accessible to everyone.
- [**Alignment Lab AI**](https://x.com/alignment_lab): Our mission is to build a future leveraging AI as a force for good and as a tool that enhances human lives. We believe everyone deserves to harness the power of personal intelligence.
- And many others ...
## Citation
```
@misc{Huu2024VALID,
title = {VALID (Video-Audio Large Interleaved Dataset)},
author = {Huu Nguyen and Ken Tsui and Andrej Radonjic and Christoph Schuhmann},
year = {2024},
url = {https://huggingface.co/datasets/ontocord/VALID},
}
```
|
Tiger14n/RVC-GUI | Tiger14n | "2023-06-01T04:39:43Z" | 9,277 | 8 | [
"license:mit",
"region:us"
] | null | "2023-04-29T02:07:32Z" | ---
license: mit
---
|
fsicoli/common_voice_16_0 | fsicoli | "2023-12-22T19:58:33Z" | 9,255 | 2 | [
"task_categories:automatic-speech-recognition",
"language:ab",
"language:af",
"language:am",
"language:ar",
"language:as",
"language:ast",
"language:az",
"language:ba",
"language:bas",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:ckb",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:dyu",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:gl",
"language:gn",
"language:ha",
"language:he",
"language:hi",
"language:hsb",
"language:hu",
"language:ia",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kab",
"language:kk",
"language:kmr",
"language:ko",
"language:ky",
"language:lg",
"language:lo",
"language:lt",
"language:lv",
"language:mdf",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:mt",
"language:myv",
"language:nl",
"language:oc",
"language:or",
"language:pl",
"language:ps",
"language:pt",
"language:quy",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sat",
"language:sc",
"language:sk",
"language:skr",
"language:sl",
"language:sq",
"language:sr",
"language:sw",
"language:ta",
"language:th",
"language:ti",
"language:tig",
"language:tk",
"language:tok",
"language:tr",
"language:tt",
"language:tw",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vot",
"language:yue",
"language:zgh",
"language:zh",
"language:yo",
"license:cc0-1.0",
"size_categories:100B<n<1T",
"region:us",
"mozilla",
"foundation"
] | [
"automatic-speech-recognition"
] | "2023-12-19T17:26:21Z" | ---
license: cc0-1.0
language:
- ab
- af
- am
- ar
- as
- ast
- az
- ba
- bas
- be
- bg
- bn
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- dyu
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- gn
- ha
- he
- hi
- hsb
- hu
- ia
- id
- ig
- is
- it
- ja
- ka
- kab
- kk
- kmr
- ko
- ky
- lg
- lo
- lt
- lv
- mdf
- mhr
- mk
- ml
- mn
- mr
- mrj
- mt
- myv
- nl
- oc
- or
- pl
- ps
- pt
- quy
- ro
- ru
- rw
- sah
- sat
- sc
- sk
- skr
- sl
- sq
- sr
- sw
- ta
- th
- ti
- tig
- tk
- tok
- tr
- tt
- tw
- ug
- uk
- ur
- uz
- vi
- vot
- yue
- zgh
- zh
- yo
task_categories:
- automatic-speech-recognition
pretty_name: Common Voice Corpus 16.0
size_categories:
- 100B<n<1T
tags:
- mozilla
- foundation
---
# Dataset Card for Common Voice Corpus 16.0
This dataset is an unofficial version of the Mozilla Common Voice Corpus 16. It was downloaded and converted from the project's website https://commonvoice.mozilla.org/.
## Languages
```
Abkhaz, Albanian, Amharic, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Pashto, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamazight, Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba
```
## How to use
The datasets library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the load_dataset function.
For example, to download the Portuguese config, simply specify the corresponding language config name (i.e., "pt" for Portuguese):
```
from datasets import load_dataset
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a streaming=True argument to the load_dataset function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```
from datasets import load_dataset
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train", streaming=True)
print(next(iter(cv_16)))
```
Bonus: create a PyTorch dataloader directly with your own datasets (local/streamed).
### Local
```
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_16), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_16, batch_sampler=batch_sampler)
```
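What the `RandomSampler`/`BatchSampler` pair does can be sketched without PyTorch: shuffle the example indices, then chunk them. A pure-Python illustration (this helper is our own, not part of the `datasets` or `torch` APIs):

```python
import random

def batch_indices(n_examples, batch_size, drop_last=False, seed=0):
    """Shuffle indices, then chunk them -- what RandomSampler + BatchSampler do."""
    rng = random.Random(seed)
    indices = list(range(n_examples))
    rng.shuffle(indices)
    batches = [indices[i:i + batch_size] for i in range(0, n_examples, batch_size)]
    if drop_last and batches and len(batches[-1]) < batch_size:
        batches.pop()  # drop the ragged final batch
    return batches

batches = batch_indices(n_examples=100, batch_size=32)
print(len(batches))      # 4 batches: 32 + 32 + 32 + 4
print(len(batches[-1]))  # 4, kept because drop_last=False
```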
### Streaming
```
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train", streaming=True)
dataloader = DataLoader(cv_16, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to hf.co/blog/audio-datasets.
### Dataset Structure
#### Data Instances
A typical data point comprises the path to the audio file and its sentence. Additional fields include accent, age, client_id, up_votes, down_votes, gender, locale and segment.
### Licensing Information
Public Domain, CC-0
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
---
|
HAERAE-HUB/KMMLU-HARD | HAERAE-HUB | "2024-03-09T23:46:06Z" | 9,204 | 8 | [
"task_categories:question-answering",
"language:ko",
"license:cc-by-nd-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2402.11548",
"region:us",
"haerae",
"mmlu"
] | [
"question-answering"
] | "2024-01-12T05:49:07Z" | ---
configs:
- config_name: maritime_engineering
data_files:
- split: dev
path: data/maritime_engineering-dev.csv
- split: test
path: data/maritime_engineering-hard-test.csv
- config_name: materials_engineering
data_files:
- split: dev
path: data/materials_engineering-dev.csv
- split: test
path: data/materials_engineering-hard-test.csv
- config_name: railway_and_automotive_engineering
data_files:
- split: dev
path: data/railway_and_automotive_engineering-dev.csv
- split: test
path: data/railway_and_automotive_engineering-hard-test.csv
- config_name: biology
data_files:
- split: dev
path: data/biology-dev.csv
- split: test
path: data/biology-hard-test.csv
- config_name: public_safety
data_files:
- split: dev
path: data/public_safety-dev.csv
- split: test
path: data/public_safety-hard-test.csv
- config_name: criminal_law
data_files:
- split: dev
path: data/criminal_law-dev.csv
- split: test
path: data/criminal_law-hard-test.csv
- config_name: information_technology
data_files:
- split: dev
path: data/information_technology-dev.csv
- split: test
path: data/information_technology-hard-test.csv
- config_name: geomatics
data_files:
- split: dev
path: data/geomatics-dev.csv
- split: test
path: data/geomatics-hard-test.csv
- config_name: management
data_files:
- split: dev
path: data/management-dev.csv
- split: test
path: data/management-hard-test.csv
- config_name: math
data_files:
- split: dev
path: data/math-dev.csv
- split: test
path: data/math-hard-test.csv
- config_name: accounting
data_files:
- split: dev
path: data/accounting-dev.csv
- split: test
path: data/accounting-hard-test.csv
- config_name: chemistry
data_files:
- split: dev
path: data/chemistry-dev.csv
- split: test
path: data/chemistry-hard-test.csv
- config_name: nondestructive_testing
data_files:
- split: dev
path: data/nondestructive_testing-dev.csv
- split: test
path: data/nondestructive_testing-hard-test.csv
- config_name: computer_science
data_files:
- split: dev
path: data/computer_science-dev.csv
- split: test
path: data/computer_science-hard-test.csv
- config_name: ecology
data_files:
- split: dev
path: data/ecology-dev.csv
- split: test
path: data/ecology-hard-test.csv
- config_name: health
data_files:
- split: dev
path: data/health-dev.csv
- split: test
path: data/health-hard-test.csv
- config_name: political_science_and_sociology
data_files:
- split: dev
path: data/political_science_and_sociology-dev.csv
- split: test
path: data/political_science_and_sociology-hard-test.csv
- config_name: patent
data_files:
- split: dev
path: data/patent-dev.csv
- split: test
path: data/patent-hard-test.csv
- config_name: electrical_engineering
data_files:
- split: dev
path: data/electrical_engineering-dev.csv
- split: test
path: data/electrical_engineering-hard-test.csv
- config_name: electronics_engineering
data_files:
- split: dev
path: data/electronics_engineering-dev.csv
- split: test
path: data/electronics_engineering-hard-test.csv
- config_name: korean_history
data_files:
- split: dev
path: data/korean_history-dev.csv
- split: test
path: data/korean_history-hard-test.csv
- config_name: gas_technology_and_engineering
data_files:
- split: dev
path: data/gas_technology_and_engineering-dev.csv
- split: test
path: data/gas_technology_and_engineering-hard-test.csv
- config_name: machine_design_and_manufacturing
data_files:
- split: dev
path: data/machine_design_and_manufacturing-dev.csv
- split: test
path: data/machine_design_and_manufacturing-hard-test.csv
- config_name: chemical_engineering
data_files:
- split: dev
path: data/chemical_engineering-dev.csv
- split: test
path: data/chemical_engineering-hard-test.csv
- config_name: telecommunications_and_wireless_technology
data_files:
- split: dev
path: data/telecommunications_and_wireless_technology-dev.csv
- split: test
path: data/telecommunications_and_wireless_technology-hard-test.csv
- config_name: food_processing
data_files:
- split: dev
path: data/food_processing-dev.csv
- split: test
path: data/food_processing-hard-test.csv
- config_name: social_welfare
data_files:
- split: dev
path: data/social_welfare-dev.csv
- split: test
path: data/social_welfare-hard-test.csv
- config_name: real_estate
data_files:
- split: dev
path: data/real_estate-dev.csv
- split: test
path: data/real_estate-hard-test.csv
- config_name: marketing
data_files:
- split: dev
path: data/marketing-dev.csv
- split: test
path: data/marketing-hard-test.csv
- config_name: mechanical_engineering
data_files:
- split: dev
path: data/mechanical_engineering-dev.csv
- split: test
path: data/mechanical_engineering-hard-test.csv
- config_name: fashion
data_files:
- split: dev
path: data/fashion-dev.csv
- split: test
path: data/fashion-hard-test.csv
- config_name: psychology
data_files:
- split: dev
path: data/psychology-dev.csv
- split: test
path: data/psychology-hard-test.csv
- config_name: taxation
data_files:
- split: dev
path: data/taxation-dev.csv
- split: test
path: data/taxation-hard-test.csv
- config_name: environmental_science
data_files:
- split: dev
path: data/environmental_science-dev.csv
- split: test
path: data/environmental_science-hard-test.csv
- config_name: refrigerating_machinery
data_files:
- split: dev
path: data/refrigerating_machinery-dev.csv
- split: test
path: data/refrigerating_machinery-hard-test.csv
- config_name: education
data_files:
- split: dev
path: data/education-dev.csv
- split: test
path: data/education-hard-test.csv
- config_name: industrial_engineer
data_files:
- split: dev
path: data/industrial_engineer-dev.csv
- split: test
path: data/industrial_engineer-hard-test.csv
- config_name: civil_engineering
data_files:
- split: dev
path: data/civil_engineering-dev.csv
- split: test
path: data/civil_engineering-hard-test.csv
- config_name: energy_management
data_files:
- split: dev
path: data/energy_management-dev.csv
- split: test
path: data/energy_management-hard-test.csv
- config_name: law
data_files:
- split: dev
path: data/law-dev.csv
- split: test
path: data/law-hard-test.csv
- config_name: agricultural_sciences
data_files:
- split: dev
path: data/agricultural_sciences-dev.csv
- split: test
path: data/agricultural_sciences-hard-test.csv
- config_name: interior_architecture_and_design
data_files:
- split: dev
path: data/interior_architecture_and_design-dev.csv
- split: test
path: data/interior_architecture_and_design-hard-test.csv
- config_name: aviation_engineering_and_maintenance
data_files:
- split: dev
path: data/aviation_engineering_and_maintenance-dev.csv
- split: test
path: data/aviation_engineering_and_maintenance-hard-test.csv
- config_name: construction
data_files:
- split: dev
path: data/construction-dev.csv
- split: test
path: data/construction-hard-test.csv
- config_name: economics
data_files:
- split: dev
path: data/economics-dev.csv
- split: test
path: data/economics-hard-test.csv
license: cc-by-nd-4.0
task_categories:
- question-answering
language:
- ko
tags:
- haerae
- mmlu
size_categories:
- 100K<n<1M
---
### KMMLU (Korean-MMLU)
We propose KMMLU, a new Korean benchmark with 35,030 expert-level multiple-choice questions across 45 subjects ranging from humanities to STEM.
Unlike previous Korean benchmarks that are translated from existing English benchmarks, KMMLU is collected from original Korean exams, capturing linguistic and cultural aspects of the Korean language.
We test 26 publicly available and proprietary LLMs, identifying significant room for improvement.
The best publicly available model achieves 50.54% on KMMLU, far below the average human performance of 62.6%.
This model was primarily trained for English and Chinese, not Korean.
Current LLMs tailored to Korean, such as Polyglot-Ko, perform far worse. Surprisingly, even the most capable proprietary LLMs, e.g., GPT-4 and HyperCLOVA X, achieve 59.95% and 53.40%, respectively.
This suggests that further work is needed to improve Korean LLMs, and KMMLU offers the right tool to track this progress.
We make our dataset publicly available on the Hugging Face Hub and integrate the benchmark into EleutherAI's Language Model Evaluation Harness.
Link to Paper: [KMMLU: Measuring Massive Multitask Language Understanding in Korean](https://arxiv.org/abs/2402.11548)
### KMMLU Statistics
| Category | # Questions |
|------------------------------|-------------|
| **Prerequisites** | |
| None | 59,909 |
| 1 Prerequisite Test | 12,316 |
| 2 Prerequisite Tests | 776 |
| 2+ Years of Experience | 65,135 |
| 4+ Years of Experience | 98,678 |
| 9+ Years of Experience | 6,963 |
| **Question Type** | |
| Positive | 207,030 |
| Negation | 36,777 |
| **Split** | |
| Train | 208,522 |
| Validation | 225 |
| Test | 35,030 |
| **Total** | 243,777 |
### Categories
To reimplement the categories in the paper, refer to the following:
```
supercategories = {
"accounting": "HUMSS",
"agricultural_sciences": "Other",
"aviation_engineering_and_maintenance": "Applied Science",
"biology": "STEM",
"chemical_engineering": "STEM",
"chemistry": "STEM",
"civil_engineering": "STEM",
"computer_science": "STEM",
"construction": "Other",
"criminal_law": "HUMSS",
"ecology": "STEM",
"economics": "HUMSS",
"education": "HUMSS",
"electrical_engineering": "STEM",
"electronics_engineering": "Applied Science",
"energy_management": "Applied Science",
"environmental_science": "Applied Science",
"fashion": "Other",
"food_processing": "Other",
"gas_technology_and_engineering": "Applied Science",
"geomatics": "Applied Science",
"health": "Other",
"industrial_engineer": "Applied Science",
"information_technology": "STEM",
"interior_architecture_and_design": "Other",
"law": "HUMSS",
"machine_design_and_manufacturing": "Applied Science",
"management": "HUMSS",
"maritime_engineering": "Applied Science",
"marketing": "Other",
"materials_engineering": "STEM",
"mechanical_engineering": "STEM",
"nondestructive_testing": "Applied Science",
"patent": "Other",
"political_science_and_sociology": "HUMSS",
"psychology": "HUMSS",
"public_safety": "Other",
"railway_and_automotive_engineering": "Applied Science",
"real_estate": "Other",
"refrigerating_machinery": "Other",
"social_welfare": "HUMSS",
"taxation": "HUMSS",
"telecommunications_and_wireless_technology": "Applied Science",
"korean_history": "HUMSS",
"math": "STEM"
}
```
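To aggregate per-subject results into the four supercategories, the mapping can simply be inverted. A small sketch over an illustrative subset of the dictionary above:

```python
from collections import defaultdict

# Illustrative subset of the full supercategories mapping above.
supercategories = {
    "accounting": "HUMSS",
    "biology": "STEM",
    "chemistry": "STEM",
    "construction": "Other",
    "electronics_engineering": "Applied Science",
    "math": "STEM",
}

subjects_by_supercategory = defaultdict(list)
for subject, supercategory in supercategories.items():
    subjects_by_supercategory[supercategory].append(subject)

print(sorted(subjects_by_supercategory["STEM"]))  # ['biology', 'chemistry', 'math']
```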
### Point of Contact
For any questions, contact us via the following email:
```
[email protected]
``` |
benediktkol/DDOS | benediktkol | "2024-04-26T20:34:02Z" | 9,171 | 4 | [
"task_categories:image-segmentation",
"task_categories:depth-estimation",
"task_ids:semantic-segmentation",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"arxiv:2312.12494",
"region:us",
"drones",
"uav",
"aerial",
"vision",
"wires",
"cables",
"outdoor",
"segmentation",
"semantic segmentation",
"depth",
"weather",
"flying",
"computer vision",
"cv"
] | [
"image-segmentation",
"depth-estimation"
] | "2023-12-18T18:18:42Z" | ---
license: cc-by-nc-4.0
pretty_name: DDOS
task_categories:
- image-segmentation
- depth-estimation
task_ids:
- semantic-segmentation
tags:
- drones
- uav
- aerial
- vision
- wires
- cables
- outdoor
- segmentation
- semantic segmentation
- depth
- weather
- flying
- computer vision
- cv
size_categories:
- 10K<n<100K
---
# DDOS: The Drone Depth and Obstacle Segmentation Dataset
The Drone Depth and Obstacle Segmentation (DDOS) dataset comprises synthetic aerial images captured by drones, along with corresponding depth maps and pixel-wise semantic segmentation masks. DDOS is purpose-built to support research and development in computer vision, focusing on tasks such as depth estimation and obstacle segmentation from aerial imagery. Emphasizing the detection of thin structures like wires and effective navigation in diverse weather conditions, DDOS serves as a valuable resource for advancing algorithms in autonomous drone technology.
- **Paper:** [DDOS: The Drone Depth and Obstacle Segmentation Dataset](https://arxiv.org/abs/2312.12494)
---------
## Data Structure
DDOS is organised as follows:
- Data Splits:
- Train: Contains 300 flights with a total of 30k images for training.
- Validation: Contains 20 flights with a total of 2k images for validation during model development.
- Test: Contains 20 flights with a total of 2k images for the final evaluation of the trained model.
- Environments:
- Neighbourhood: Contains data captured in urban and residential environments.
- Park: Contains data captured in park and natural environments.
- Flights:
- Each flight is represented by a unique flight ID and is contained within the corresponding environment directory.
- Data for Each Flight:
- Image: Contains RGB images captured by the drone camera.
- Depth: Contains depth maps representing the distance of objects from the camera. These maps are saved as uint16 PNG images, where pixel values range from 0 to 65535, representing distances from 0 to 100 meters linearly.
- Segmentation: Contains pixel-wise segmentation masks for semantic segmentation. Classes, as well as their corresponding mappings, are mentioned below.
- Flow: Contains optical flow data representing the apparent motion of objects between consecutive frames.
- Surface Normal: Contains surface normal maps representing the orientation of object surfaces.
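Given the linear depth encoding described above, raw depth PNGs can be converted to metres with a few lines of NumPy. This is a sketch, not an official utility; the 100 m maximum range is taken from the description above:

```python
import numpy as np

# Depth maps are uint16 PNGs where 0..65535 maps linearly to 0..100 metres.
MAX_DEPTH_M = 100.0

def decode_depth(depth_uint16: np.ndarray) -> np.ndarray:
    """Convert a raw uint16 depth map to metres."""
    return depth_uint16.astype(np.float32) / 65535.0 * MAX_DEPTH_M

# A pixel value of 65535 corresponds to the maximum range of 100 m.
raw = np.array([[0, 32768, 65535]], dtype=np.uint16)
print(decode_depth(raw))  # values in metres
```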
Overview of file structure:
```
data/
├── train/
│ ├── neighbourhood/
│ │ ├── 0/
│ │ │ ├── depth/
│ │ │ │ ├── 0.png
│ │ │ │ ├── ...
│ │ │ │ └── 99.png
│ │ │ ├── flow/
│ │ │ │ ├── 0.png
│ │ │ │ ├── ...
│ │ │ │ └── 99.png
│ │ │ ├── image/
│ │ │ │ ├── 0.png
│ │ │ │ ├── ...
│ │ │ │ └── 99.png
│ │ │ ├── segmentation/
│ │ │ │ ├── 0.png
│ │ │ │ ├── ...
│ │ │ │ └── 99.png
│ │ │ ├── surfacenormals/
│ │ │ │ ├── 0.png
│ │ │ │ ├── ...
│ │ │ │ └── 99.png
│ │ │ ├── metadata.csv
│ │ │ └── weather.csv
│ │ ├── ...
│ │ └── 249/
│ │ └── ...
│ └── park/
│ ├── 0/
│ │ ├── depth/
│ │ │ └── ...
│ │ ├── flow/
│ │ │ └── ...
│ │ ├── image/
│ │ │ └── ...
│ │ ├── segmentation/
│ │ │ └── ...
│ │ ├── surfacenormals/
│ │ │ └── ...
│ │ ├── metadata.csv
│ │ └── weather.csv
│ ├── ...
│ └── 49/
│ └── ...
├── validation/
│ └── ...
└── test/
└── ...
```
---------
## Additional Information
**Class Mapping:** The segmentation masks use the following class labels for obstacle segmentation:
```python
CLASS_MAPPING = {
'ultra_thin': 255,
'thin_structures': 240,
'small_mesh': 220,
'large_mesh': 200,
'trees': 180,
'buildings': 160,
'vehicles': 140,
'animals': 100,
'other': 80
}
```
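Because the mask values are sparse intensity levels rather than contiguous indices, a small lookup table can remap them before training. The sketch below uses the mapping above; the index order and the use of 0 as a background index are assumptions for illustration, not part of the dataset:

```python
import numpy as np

CLASS_MAPPING = {
    'ultra_thin': 255, 'thin_structures': 240, 'small_mesh': 220,
    'large_mesh': 200, 'trees': 180, 'buildings': 160,
    'vehicles': 140, 'animals': 100, 'other': 80,
}

# Build a 256-entry lookup table: raw pixel value -> contiguous class index.
# Unlisted pixel values fall back to index 0 (treated as background here).
lut = np.zeros(256, dtype=np.int64)
for idx, value in enumerate(sorted(CLASS_MAPPING.values())):
    lut[value] = idx + 1  # 0 is reserved for background in this sketch

def remap_mask(mask: np.ndarray) -> np.ndarray:
    """Convert a raw uint8 segmentation mask to contiguous class indices."""
    return lut[mask]
```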
**Metadata:** The dataset contains metadata, such as coordinates, pose, acceleration, weather conditions and camera parameters, which provide valuable contextual information about each flight.
---------
## Dataset Usage
- **Data Loading:**
To load and use the DDOS dataset in your projects, you can refer to the official PyTorch data loading tutorial: [PyTorch Data Loading Tutorial](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html).
This tutorial will guide you through the process of loading data, creating data loaders, and preparing the dataset for training or evaluation using PyTorch.
- **Respect the Data Splits:**
Please ensure that the testing data is not used for validation. Mixing these datasets could lead to inaccurate assessments of model performance. Maintaining separate datasets for testing and validation helps ensure reliable evaluation and accurate reporting of results.
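As a starting point, the directory layout above can be indexed with a small, framework-free helper that pairs each RGB frame with its depth and segmentation files; such an index plugs directly into a PyTorch `Dataset`. This is an illustrative sketch (function names and the choice of modalities are assumptions), not an official loader:

```python
from pathlib import Path

def index_flight(flight_dir: Path) -> list[dict]:
    """Pair each RGB frame in one flight with its depth and segmentation files."""
    samples = []
    for image_path in sorted((flight_dir / 'image').glob('*.png')):
        samples.append({
            'image': image_path,
            'depth': flight_dir / 'depth' / image_path.name,
            'segmentation': flight_dir / 'segmentation' / image_path.name,
        })
    return samples

def index_split(split_dir: Path) -> list[dict]:
    """Index every flight under a split (e.g. data/train/) across both environments."""
    samples = []
    for env in ('neighbourhood', 'park'):
        for flight_dir in sorted((split_dir / env).iterdir()):
            if flight_dir.is_dir():
                samples.extend(index_flight(flight_dir))
    return samples
```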
---------
## License
DDOS is openly licensed under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).
---------
## Citation
If you use DDOS in your research or projects, please cite our paper:
```
@article{kolbeinsson2023ddos,
title={{DDOS}: The Drone Depth and Obstacle Segmentation Dataset},
author={Benedikt Kolbeinsson and Krystian Mikolajczyk},
journal={arXiv preprint arXiv:2312.12494},
year={2023}
}
``` |
LEAP/ClimSim_low-res | LEAP | "2023-09-29T20:31:55Z" | 9,099 | 11 | [
"license:cc-by-4.0",
"arxiv:2306.08754",
"doi:10.57967/hf/0740",
"region:us"
] | null | "2023-05-03T04:51:53Z" | ---
license: cc-by-4.0
---
Corresponding GitHub repo can be found here:
https://github.com/leap-stc/ClimSim
Read more: https://arxiv.org/abs/2306.08754. |