# WebOrganizer/Corpus-200B

[[Paper](https://arxiv.org/abs/2502.10341)] [[Website](https://weborganizer.allenai.org)] [[GitHub](https://github.com/CodeCreator/WebOrganizer)]

This dataset is a pre-processed version of the `1b-1x` CommonCrawl pool from DataComp-LM, cleaned with
(1) [RefinedWeb filters](https://github.com/mlfoundations/dclm/blob/main/baselines/baselines_configs/dclm_baseline_refinedweb.yaml) and
(2) [BFF deduplication](https://github.com/mlfoundations/dclm/tree/main/dedup/bff).

We provide the resulting 200B-token corpus annotated with two quality scores, WebOrganizer domains, and k-means scores.

__Download the dataset by cloning the repository with Git LFS instead of HuggingFace's `load_dataset()`.__
+
|
18 |
The dataset has the following folder structure:
|
19 |
```bash
|
20 |
Corpus-200B/
|
|
|
22 |
- CC_shard_00000000_processed.jsonl.zst
|
23 |
- CC_shard_00000001_processed.jsonl.zst
|
24 |
- ...
|
25 |
+
tokens/ # number of tokens per document (GPT-NeoX tokenizer)
|
26 |
- CC_shard_00000000_processed.npy
|
27 |
- CC_shard_00000001_processed.npy
|
28 |
- ...
|
|
|
46 |
- ...
|
47 |
```
We also include statistics about the presence and co-occurrence of domains in the `domain_statistics/` folder, computed with the `domain_statistics.py` script.
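As a rough illustration only (the contents of `domain_statistics.py` are not shown here, so the function name and input format below are hypothetical), presence and co-occurrence counts over per-document domain labels could be tallied like this:

```python
from collections import Counter
from itertools import combinations


def domain_statistics(domain_lists):
    """Tally how often each domain appears and how often pairs co-occur.

    `domain_lists` is an iterable of per-document domain label lists
    (hypothetical input format), e.g. [["science", "tutorial"], ["science"]].
    """
    presence = Counter()
    cooccurrence = Counter()
    for labels in domain_lists:
        unique = sorted(set(labels))          # count each domain once per document
        presence.update(unique)
        cooccurrence.update(combinations(unique, 2))  # unordered label pairs
    return presence, cooccurrence
```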
## Citation

If you make use of this pre-processed corpus in your work, please cite:
```bibtex
@article{wettig2025organize,
  title={Organize the Web: Constructing Domains Enhances Pre-Training Data Curation},