Maurice Weber committed · Commit bd77c17
Parent(s): 1a99058
add raw doc+token counts

README.md CHANGED
RedPajama-V2 is an open dataset for training large language models. The dataset includes over 100B text
documents coming from 84 CommonCrawl snapshots, processed using
the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Of these, 30B documents in the corpus
additionally come with quality signals. We also provide the ids of duplicated documents, which can be
used to create a dataset with 20B deduplicated documents.

Check out our [blog post](https://together.ai/blog/redpajama-data-v2) for more details on the build process, dataset
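The duplicate ids mentioned above can be applied as a straightforward filter. A minimal sketch, assuming an in-memory document list and a `duplicate_ids` set; the document structure and id format here are illustrative, not the dataset's actual schema:

```python
# Sketch: build a deduplicated corpus by dropping documents whose id
# appears in a set of duplicate ids (normally loaded from the released
# duplicate-id files; hardcoded toy data here).
documents = [
    {"doc_id": "2023-06/0000/en_head/0", "text": "first doc"},
    {"doc_id": "2023-06/0000/en_head/1", "text": "copied doc"},
    {"doc_id": "2022-49/0001/en_head/7", "text": "third doc"},
]

duplicate_ids = {"2023-06/0000/en_head/1"}

deduped = [doc for doc in documents if doc["doc_id"] not in duplicate_ids]
print(len(deduped))  # 2
```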
```python
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")
```

To download the dataset for a specific combination of `{partition} x {snapshot_id} x {language}` (e.g., English and
German data from the `head_middle` partition of the 2023-06 and 2022-49 dumps), you can run the following command,
which downloads the raw (i.e., not deduplicated) part of the dataset.
_Note that this will download the entire dumps and requires ~1TB of disk space per dump._

```python
from datasets import load_dataset
# ...
```
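The `{partition} x {snapshot_id} x {language}` download space is a simple cross-product of the three dimensions. A quick sketch using the values from the example above (the slash-joined identifiers are illustrative, not the dataset's actual file layout):

```python
from itertools import product

# Dimensions from the example: head_middle partition,
# two dumps, English and German.
partitions = ["head_middle"]
snapshots = ["2023-06", "2022-49"]
languages = ["en", "de"]

# One illustrative identifier per downloadable combination.
combos = [f"{p}/{s}/{l}" for p, s, l in product(partitions, snapshots, languages)]
print(len(combos))  # 1 * 2 * 2 = 4
```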
```bash
# ...
done <"$listings_file"
```

In addition, for the `head_middle` partition, you can also download the quality signals, minhash signatures, and
duplicate ids using the following commands:

```bash
# ...
```
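The download commands iterate over a listings file, one shard per line. A minimal sketch of such a loop; the `BASE_URL`, the listing format, and the per-component file suffixes below are placeholders, not the dataset's actual endpoints:

```shell
# Sketch of a listings-driven download loop. BASE_URL and the file-name
# patterns are hypothetical -- substitute the real values from the README.
BASE_URL="https://example.com/redpajama-v2"
listings_file="listings.txt"

# A listing names one shard, e.g. {snapshot}/{shard}/{language}_{partition}.
printf '%s\n' "2023-06/0000/en_head" "2022-49/0001/de_middle" > "$listings_file"

: > urls.txt
while read -r line; do
  # Documents plus the extra head_middle components per shard.
  echo "${BASE_URL}/documents/${line}.json.gz"               >> urls.txt
  echo "${BASE_URL}/quality_signals/${line}.signals.json.gz" >> urls.txt
  echo "${BASE_URL}/duplicates/${line}.duplicates.parquet"   >> urls.txt
done <"$listings_file"

cat urls.txt
```

In practice the echoed URLs would be passed to `wget` or `curl` instead of written to a file.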
| minhash_signature_0.9 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 0.9. The signature is based on 128 hash functions and grouped into 5 bands and 25 rows for LSH. | Deduplication |
| minhash_signature_1.0 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 1.0. The signature is based on 128 hash functions and grouped into 1 band and 128 rows for LSH. | Deduplication |

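Banded minhash turns near-duplicate detection into hash-bucket lookups: the signature is split into bands, and two documents become duplicate candidates if any band matches exactly. A toy sketch of that mechanism, using salted SHA-1 in place of real permutation hashes and 4 even bands rather than the groupings described above:

```python
import hashlib

def minhash_signature(tokens, num_hashes=128):
    """Toy minhash: for each salted hash function, keep the minimum
    hash value over the document's token set."""
    return [
        min(int(hashlib.sha1(f"{seed}:{t}".encode()).hexdigest(), 16) for t in tokens)
        for seed in range(num_hashes)
    ]

def band(signature, num_bands):
    """Split a signature into equally sized bands; documents whose
    signatures collide on any one band are duplicate candidates."""
    rows = len(signature) // num_bands
    return [tuple(signature[i * rows:(i + 1) * rows]) for i in range(num_bands)]

a = minhash_signature({"the", "quick", "brown", "fox"})
b = minhash_signature({"the", "quick", "brown", "fox"})
c = minhash_signature({"entirely", "different", "tokens"})

# Identical token sets give identical signatures, so every band collides.
shared = sum(x == y for x, y in zip(band(a, 4), band(b, 4)))
print(shared)  # 4
```

More bands with fewer rows (e.g. 5x25 at similarity 0.9) make a collision likelier at lower similarity; a single 128-row band (similarity 1.0) only fires on an exact signature match.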
#### Raw Document and Token Counts (`head_middle`)

|       | # Documents (total) | Estimated Token count (total) |
|-------|---------------------|-------------------------------|
| en    | 24.5B               | 37.0T                         |
| de    | 2.7B                | 4.1T                          |
| fr    | 2.2B                | 3.7T                          |
| es    | 2.3B                | 3.9T                          |
| it    | 1.2B                | 1.9T                          |
| Total | 32.9B               | 50.6T                         |

#### Deduplicated Document and Token Counts (`head_middle`)

|       | # Documents (deduped) | Estimated Token count (deduped) |
|-------|-----------------------|---------------------------------|
| en    | 14.5B                 | 20.5T                           |
| de    | 1.9B                  | 3.0T                            |
| fr    | 1.6B                  | 2.7T                            |
| es    | 1.8B                  | 2.8T                            |
| it    | 0.9B                  | 1.5T                            |
| Total | 20.8B                 | 30.4T                           |

### Languages
