---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: source
      dtype: string
    - name: input_ids
      sequence: int32
  splits:
    - name: val
      num_bytes: 481445317
      num_examples: 29016
  download_size: 240052852
  dataset_size: 481445317
configs:
  - config_name: default
    data_files:
      - split: val
        path: data/val-*
license: apache-2.0
task_categories:
  - text2text-generation
language:
  - en
pretty_name: Pretokenized Paloma Dataset
size_categories:
  - 10M<n<100M
---

# The Pretokenized Paloma Benchmark Dataset

This is a compact, pre-tokenized evaluation dataset designed to complement the pretokenized-dolma training set. Built from the Paloma corpus (Allen Institute for AI), it contains no data overlap with Dolma, making it well suited for evaluating models trained on that corpus.

## Overview

Features:

- Pre-tokenized with the same tokenizer as pretokenized-dolma: `allenai/OLMo-7B-0724-hf`
- Sequence length: 2048 tokens
- Well suited to perplexity evaluation of models trained on pretokenized-dolma (see the sketch after this list)
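
Because every sequence is already tokenized to a fixed 2048-token length, perplexity evaluation reduces to summing next-token losses over `input_ids`. Below is a minimal sketch, assuming a `transformers`-compatible causal LM; the checkpoint name is a placeholder for whatever model you trained:

```python
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM

# Placeholder checkpoint: substitute the causal LM you trained on pretokenized-dolma.
model = AutoModelForCausalLM.from_pretrained("your-org/your-model")
model.eval()

val = load_dataset("pico-lm/pretokenized-paloma", split="val", streaming=True)

nll_sum, n_tokens = 0.0, 0
for example in val.take(8):  # a few sequences, for illustration
    input_ids = torch.tensor(example["input_ids"]).unsqueeze(0)  # shape (1, 2048)
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean next-token
        # cross-entropy over the sequence.
        loss = model(input_ids=input_ids, labels=input_ids).loss
    n = input_ids.size(1) - 1  # number of tokens actually predicted
    nll_sum += loss.item() * n
    n_tokens += n

print(f"perplexity: {math.exp(nll_sum / n_tokens):.2f}")
```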

We release the exact scripts we use to create this dataset in our `pico-lm/pico-dataset` GitHub repo.
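
For reference only, here is a minimal sketch of the kind of pre-tokenization such a script performs. The chunking policy below (truncating each document to one fixed-length sequence) is an assumption for illustration; the released scripts define the actual behavior:

```python
from transformers import AutoTokenizer

SEQ_LEN = 2048
# Same tokenizer used for pretokenized-dolma.
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0724-hf")

def pretokenize(example):
    # Assumed policy: tokenize the raw text and keep at most SEQ_LEN ids.
    ids = tokenizer(example["text"], truncation=True, max_length=SEQ_LEN)["input_ids"]
    return {"input_ids": ids}
```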

## Usage

```python
from datasets import load_dataset

# Stream the dataset instead of downloading the full ~240 MB parquet up front.
dataset = load_dataset("pico-lm/pretokenized-paloma", streaming=True)
```
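
Each record carries the raw `text`, its originating Paloma `source`, and the pre-tokenized `input_ids`. For example, to peek at the first streamed record:

```python
example = next(iter(dataset["val"]))
print(example["source"])          # which Paloma subset the text came from
print(example["text"][:200])      # the raw text
print(len(example["input_ids"]))  # 2048 token ids per sequence
```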