Update README.md
README.md
@@ -23,7 +23,8 @@ This dataset combines [wikipedia20220301.en](https://huggingface.co/datasets/wik
 and splits the data into smaller chunks, of size ~820 chars
 (such that each item will be at least ~128 tokens for the average tokenizer).
 The order of the items in this dataset has been shuffled,
-meaning
+meaning you don't have to use `dataset.shuffle`,
+which is slower to iterate over.
 The logic only splits on spaces, so the chunks are likely to be slightly larger than 820 chars.
 The dataset has been normalized into lower case, with accents and non-english characters removed.
 Items with less than 200 chars or more than 1000 chars have been removed.
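
For reference, the preprocessing the README describes (space-only splits targeting ~820 chars, lowercase/ASCII normalization, and the 200-1000 char length filter) could be sketched roughly as follows. This is an illustration of the described behaviour, not the dataset's actual build script; `raw_text`, the function names, and the loop structure are placeholders:

```python
import unicodedata

def normalize(text: str) -> str:
    # Lowercase, then drop accents and non-English (non-ASCII) characters.
    text = unicodedata.normalize("NFKD", text)
    return text.encode("ascii", "ignore").decode("ascii").lower()

def chunk(text: str, target: int = 820):
    # Split only on spaces, so each chunk ends up slightly over `target` chars.
    buf, size = [], 0
    for word in text.split(" "):
        buf.append(word)
        size += len(word) + 1
        if size >= target:
            yield " ".join(buf)
            buf, size = [], 0
    if buf:
        yield " ".join(buf)

raw_text = "some wikipedia article text " * 200  # placeholder input
items = [c for c in chunk(normalize(raw_text)) if 200 <= len(c) <= 1000]
```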
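The added note refers to the Hugging Face `datasets` API: because the rows are already stored in shuffled order, plain iteration is enough, and calling `shuffle()` would only add an indices mapping or shuffle buffer that slows iteration down. A minimal usage sketch, where the repo id `user/wikipedia-chunks` and the `text` column are placeholders rather than names confirmed by this commit:

```python
from datasets import load_dataset

# Placeholder repo id; substitute the actual dataset repository.
ds = load_dataset("user/wikipedia-chunks", split="train", streaming=True)

# Rows are pre-shuffled on disk, so plain iteration already yields a
# random order; ds.shuffle(buffer_size=...) would only slow this down.
for i, example in enumerate(ds):
    print(len(example["text"]))  # chunk lengths cluster around ~820 chars
    if i == 4:
        break
```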