Maurice Weber committed
Commit: 8b3730f
Parent(s): 167a6e2

add minhash to table
README.md CHANGED
````diff
@@ -78,7 +78,8 @@ A full set of scripts to recreate the dataset, including the quality signals, ca
 found [here](https://github.com/togethercomputer/RedPajama-Data).
 
 ### Applying Filtering Rules
-
+
+You can use the quality signals to filter the raw RedPajama-V2 dataset for a given set of rules. For example, consider
 the following set of rules used in Gopher:
 
 ```python
````
````diff
@@ -98,7 +99,7 @@ def gopher_rules_pass(sample) -> bool:
 
     # rule 2: symbol to word ratio below 0.1
     symbol_word_ratio = signals["rps_doc_symbol_to_word_ratio"][0][2]
-    if
+    if symbol_word_ratio > 0.1:
         return False
 
     # rule 3: 90% of lines need to start without a bullet point
````
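The `[0][2]` indexing above reflects how the quality signals are laid out: each signal is a list of `(start, end, score)` spans over the document, and document-level signals carry a single span covering the whole text. Below is a minimal sketch of unpacking one signal; the toy record and the assumption that `quality_signals` arrives as a JSON string are illustrative rather than guaranteed by this README.

```python
import json

# Toy record mimicking the layout of a raw RedPajama-V2 sample; whether
# "quality_signals" arrives as a JSON string or an already-parsed dict is
# an assumption that may vary between dataset versions.
sample = {
    "quality_signals": json.dumps(
        {"rps_doc_symbol_to_word_ratio": [[0, 1024, 0.04]]}
    )
}

signals = json.loads(sample["quality_signals"])

# Each signal is a list of (start, end, score) spans; document-level
# signals have one span covering the full document, hence the [0][2].
start, end, score = signals["rps_doc_symbol_to_word_ratio"][0]
print(f"symbol-to-word ratio over chars [{start}:{end}]: {score}")
```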
````diff
@@ -114,8 +115,7 @@ def gopher_rules_pass(sample) -> bool:
         return False
 
     # rule 5: ...
-
-
+
     return True
 ```
 
````
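Only fragments of `gopher_rules_pass` are visible in the hunks above. For orientation, here is a self-contained sketch of what a complete set of Gopher-style checks could look like. The thresholds follow the heuristics in the [Gopher](https://arxiv.org/abs/2112.11446) paper; apart from `rps_doc_symbol_to_word_ratio`, the signal names are assumptions based on the dataset's `rps_*`/`ccnet_*` naming scheme and may not match the README's actual code.

```python
import json

def gopher_rules_pass(sample) -> bool:
    """Sketch of Gopher-style quality filtering over RedPajama-V2 signals."""
    signals = json.loads(sample["quality_signals"])

    # rule 1: word count between 50 and 100,000
    word_count = signals["rps_doc_word_count"][0][2]
    if word_count < 50 or word_count > 100_000:
        return False

    # rule 2: symbol-to-word ratio below 0.1
    if signals["rps_doc_symbol_to_word_ratio"][0][2] > 0.1:
        return False

    # rule 3: 90% of lines need to start without a bullet point
    n_lines = signals["ccnet_nlines"][0][2]
    n_bullet = sum(span[2] for span in signals["rps_lines_start_with_bulletpoint"])
    if n_lines > 0 and n_bullet / n_lines > 0.9:
        return False

    # rule 4: mean word length between 3 and 10 characters
    mean_word_length = signals["rps_doc_mean_word_length"][0][2]
    if mean_word_length < 3 or mean_word_length > 10:
        return False

    # rule 5: at least 80% of words contain an alphabetic character
    if signals["rps_doc_frac_no_alph_words"][0][2] > 0.2:
        return False

    return True
```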
````diff
@@ -123,21 +123,21 @@ Filtering the RedPajama-V2 dataset with this set of rules is then as easy as:
 
 ```python
 ds_iterator = load_dataset(
-    "togethercomputer/RedPajama-Data-V2",
-    snapshots=["2023-14"],
-    languages=["en"],
-    name="default",
+    "togethercomputer/RedPajama-Data-V2",
+    snapshots=["2023-14"],
+    languages=["en"],
+    name="default",
     streaming=True
 )
 
 filtered_dataset = []
 
 for sample in ds_iterator["train"]:
-
-
-
-
-
+
+    if not gopher_rules_pass(sample):
+        continue
+
+    filtered_dataset.append(sample)
 ```
 
 ### Dataset Summary
````
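Because the loader above runs with `streaming=True`, collecting every passing document in an in-memory list can become impractical for a full snapshot. A common alternative, sketched below under the same assumptions as the previous snippets, is to write passing samples to disk as they stream by; the output path is illustrative, and `gopher_rules_pass` is the filter sketched earlier.

```python
import json
from datasets import load_dataset

ds_iterator = load_dataset(
    "togethercomputer/RedPajama-Data-V2",
    snapshots=["2023-14"],
    languages=["en"],
    name="default",
    streaming=True,
)

# Spill passing documents to disk instead of accumulating them in memory;
# "filtered.jsonl" is an illustrative output path.
with open("filtered.jsonl", "w", encoding="utf-8") as fout:
    for sample in ds_iterator["train"]:
        if not gopher_rules_pass(sample):
            continue
        fout.write(json.dumps(sample) + "\n")
```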
````diff
@@ -191,6 +191,10 @@ RedPajama-V2 is an open dataset for training large language models and includes o
 | rps_doc_frac_chars_top_4gram | The fraction of characters in the top word 4gram. | Repetitiveness | [RefinedWeb](https://arxiv.org/abs/2306.01116), [Gopher](https://arxiv.org/abs/2112.11446) |
 | rps_doc_ldnoobw_words | The number of sequences of words that are contained in the List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words blocklist. The blocklist is obtained from the [LDNOOBW](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) repo. | toxicity | [C4](https://arxiv.org/abs/1910.10683) |
 | rps_doc_ut1_blacklist | A categorical id corresponding to the list of categories of the domain of the document. Categories are obtained from the UT1 blacklist. The list is obtained from [UT-Capitole](https://dsi.ut-capitole.fr/blacklists/). | toxicity | [RefinedWeb](https://arxiv.org/abs/2306.01116) |
+| minhash_signature_0.7 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 0.7 | Deduplication |
+| minhash_signature_0.8 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 0.8 | Deduplication |
+| minhash_signature_0.9 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 0.9 | Deduplication |
+| minhash_signature_1.0 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 1.0 | Deduplication |
 
 #### Document and Token Counts for the Annotated and deduplicated `head_middle` part of the dataset
 
````
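The new `minhash_signature_*` rows describe banded minhash signatures: the per-document minhash is split into bands sized so that two documents whose Jaccard similarity exceeds the stated threshold are likely to collide in at least one band. The sketch below shows how such signatures could drive fuzzy deduplication via locality-sensitive hashing; it assumes each signature is a list of one hash value per band, which this table does not specify.

```python
from collections import defaultdict

def find_duplicate_candidates(signatures: dict[str, list[int]]) -> set[tuple[str, str]]:
    """LSH-style candidate generation: documents that share any
    (band index, band hash) bucket are flagged as likely fuzzy duplicates.

    `signatures` maps a document id to its banded minhash signature,
    assumed here to be one hash value per band.
    """
    buckets: dict[tuple[int, int], list[str]] = defaultdict(list)
    for doc_id, bands in signatures.items():
        for band_idx, band_hash in enumerate(bands):
            buckets[(band_idx, band_hash)].append(doc_id)

    candidates: set[tuple[str, str]] = set()
    for bucket in buckets.values():
        for i in range(len(bucket)):
            for j in range(i + 1, len(bucket)):
                candidates.add((bucket[i], bucket[j]))
    return candidates

# Toy usage: doc_a and doc_b collide in band 1 and are flagged.
sigs = {
    "doc_a": [11, 42, 7],
    "doc_b": [13, 42, 9],
    "doc_c": [21, 33, 5],
}
print(find_duplicate_candidates(sigs))  # {('doc_a', 'doc_b')}
```

Candidate pairs would typically still be verified, or merged via union-find, before dropping documents, since banding only bounds the collision probability rather than guaranteeing the similarity threshold.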