language:
- en
pretty_name: Pretokenized Paloma
---

# The Pretokenized Paloma Benchmark Dataset

This dataset is a compact, pre-tokenized evaluation set designed to complement the [pretokenized-dolma](https://huggingface.co/datasets/pico-lm/pretokenized-dolma) training set. Built from the [Paloma corpus](https://github.com/allenai/OLMo-Eval/blob/main/paloma/README.md) (Allen Institute for AI), this benchmark is constructed to contain no data overlap with Dolma, making it well suited for evaluating models trained on that corpus.

### Overview

Features:
- Pre-tokenized with the same tokenizer as pretokenized-dolma: [allenai/OLMo-7B-0724-hf](https://huggingface.co/allenai/OLMo-7B-0724-hf)
- Sequence length: 2048 tokens
- Ideal for perplexity evaluation of models trained on pretokenized-dolma (see the sketches under Usage below)

We release the exact scripts we use to create this dataset in our [pico-lm/pico-dataset](https://github.com/pico-lm/pico-dataset) GitHub repo.

### Usage

```python
from datasets import load_dataset
dataset = load_dataset("pico-lm/pretokenized-paloma", streaming=True)
```
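
To sanity-check what a row looks like, the sketch below decodes the start of one sequence back to text with the same OLMo tokenizer listed in the Overview. It assumes the pre-tokenized column is named `input_ids` and simply picks whichever split the loader exposes; adjust both to match the actual schema shown in the dataset viewer.

```python
# Inspection sketch: assumes the tokenized column is named "input_ids";
# check the dataset schema if it differs.
from datasets import load_dataset
from transformers import AutoTokenizer

splits = load_dataset("pico-lm/pretokenized-paloma", streaming=True)
dataset = splits[list(splits.keys())[0]]  # use whichever split is exposed

# The tokenizer the dataset was built with.
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0724-hf")

example = next(iter(dataset))
token_ids = example["input_ids"]
print(len(token_ids))                    # expected sequence length: 2048
print(tokenizer.decode(token_ids[:64]))  # preview the first few tokens as text
```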
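
For the perplexity use case mentioned in the Overview, a minimal evaluation loop could look roughly like the following. The model name is a placeholder for whatever causal LM you trained on pretokenized-dolma, and the `input_ids` column name is again an assumption; this is a sketch, not the exact evaluation script from pico-dataset.

```python
# Perplexity sketch: replace "your-org/your-model" with a real checkpoint;
# assumes each row stores a fixed-length "input_ids" sequence.
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("your-org/your-model")  # placeholder
model.eval()

splits = load_dataset("pico-lm/pretokenized-paloma", streaming=True)
dataset = splits[list(splits.keys())[0]]

total_nll, total_tokens = 0.0, 0
with torch.no_grad():
    for i, example in enumerate(dataset):
        input_ids = torch.tensor(example["input_ids"]).unsqueeze(0)
        # With labels=input_ids the model shifts internally and returns the
        # mean next-token cross-entropy for this sequence.
        loss = model(input_ids=input_ids, labels=input_ids).loss
        n_predicted = input_ids.shape[1] - 1
        total_nll += loss.item() * n_predicted
        total_tokens += n_predicted
        if i >= 99:  # cap the number of sequences for a quick estimate
            break

print(f"perplexity: {math.exp(total_nll / total_tokens):.2f}")
```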