---
language:
  - de
  - en
  - fr
dataset_info:
  - config_name: deen
    features:
      - name: input
        dtype: string
      - name: output
        dtype: string
    splits:
      - name: train
        num_bytes: 1827320440
        num_examples: 4508785
      - name: test
        num_bytes: 1089638
        num_examples: 3003
    download_size: 867523154
    dataset_size: 1828410078
  - config_name: fren
    features:
      - name: input
        dtype: string
      - name: output
        dtype: string
    splits:
      - name: train
        num_bytes: 18999540612
        num_examples: 40836715
      - name: test
        num_bytes: 1151161
        num_examples: 3003
    download_size: 8233454255
    dataset_size: 19000691773
configs:
  - config_name: deen
    data_files:
      - split: train
        path: deen/train-*
      - split: test
        path: deen/test-*
  - config_name: fren
    data_files:
      - split: train
        path: fren/train-*
      - split: test
        path: fren/test-*
---

Dataset Card for wmt14

This is a preprocessed version of the wmt14 dataset, prepared for benchmarks in LM-Polygraph.

Dataset Details

Dataset Description

Uses

Direct Use

This dataset should be used for running benchmarks in LM-Polygraph.

Out-of-Scope Use

This dataset should not be used for further dataset preprocessing.

Dataset Structure

This dataset contains the "continuation" subset, which corresponds to the main dataset used in LM-Polygraph. It may also contain other subsets, which correspond to the instruct methods used in LM-Polygraph.

Each subset contains two splits: train and test. Each split has two string columns: "input", the processed input for LM-Polygraph, and "output", the corresponding processed output.

Dataset Creation

Curation Rationale

This dataset was created in order to separate dataset-creation code from benchmarking code.

Source Data

Data Collection and Processing

Data is collected from https://huggingface.co/datasets/wmt14 and processed with the build_dataset.py script in the repository.
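As a rough, illustrative sketch only (the authoritative logic lives in build_dataset.py; the function name and prompt-free mapping here are assumptions), the preprocessing amounts to flattening wmt14 translation records into the "input"/"output" columns:

```python
# Illustrative sketch of the kind of mapping build_dataset.py performs.
# wmt14 records on the Hub store a pair under a "translation" key,
# e.g. {"translation": {"de": ..., "en": ...}}.
def to_polygraph_example(record, src="de", tgt="en"):
    """Flatten a wmt14 translation record into input/output columns."""
    pair = record["translation"]
    return {"input": pair[src], "output": pair[tgt]}

example = {"translation": {"de": "Guten Morgen.", "en": "Good morning."}}
print(to_polygraph_example(example))
# {'input': 'Guten Morgen.', 'output': 'Good morning.'}
```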

Who are the source data producers?

The people who created https://huggingface.co/datasets/wmt14.

Bias, Risks, and Limitations

This dataset carries the same biases, risks, and limitations as its source dataset, https://huggingface.co/datasets/wmt14.

Recommendations

Users should be made aware of the risks, biases, and limitations of this dataset.