Update README.md
README.md
CHANGED
@@ -24,4 +24,7 @@ and splits the data into smaller chunks, of size ~820 chars
 The logic only splits on spaces, so the chunks are likely to be slightly larger than 820 chars.
 The dataset has been normalized into lower case, with accents and non-english characters removed.
 Items with less than 200 chars or more than 1000 chars have been removed.
-The data has not been shuffled (you
+The data has not been shuffled (you can either use `dataset.shuffle(...)`,
+or download the shuffled version [here](https://huggingface.co/datasets/sradc/chunked-shuffled-wikipedia20220301en-bookcorpusopen),
+which will be faster to iterate over).
+
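For context, a minimal sketch of the two options mentioned in the added lines, using the Hugging Face `datasets` library. The `train` split name and the use of streaming are assumptions for illustration, not taken from this commit; only the shuffled dataset id comes from the link above.

```python
from datasets import load_dataset

# Option 1: load the pre-shuffled dataset referenced in the README
# (streaming avoids downloading the full corpus up front; split name assumed).
shuffled = load_dataset(
    "sradc/chunked-shuffled-wikipedia20220301en-bookcorpusopen",
    split="train",
    streaming=True,
)

# Option 2: shuffle an unshuffled dataset yourself, as the README suggests,
# e.g. dataset = dataset.shuffle(seed=42)

# Peek at a few chunks from the streamed, pre-shuffled version.
for example in shuffled.take(3):
    print(example)
```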