---
# Dataset Card for "wikipedia20220301en-bookcorpusopen-chunked-shuffled"

```
num_examples: 33.5 million
download_size: 15.3 GB
dataset_size: 26.1 GB
```
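
As a back-of-the-envelope check (not a figure from the card itself), the quoted sizes are mutually consistent: 26.1 GB over 33.5 million examples is roughly 780 bytes per example, which fits chunks of a few hundred to 1000 characters.

```python
# Sanity-check the dataset statistics quoted above.
num_examples = 33_500_000
dataset_size_bytes = 26.1e9

avg_bytes_per_example = dataset_size_bytes / num_examples
print(f"~{avg_bytes_per_example:.0f} bytes per example")
```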

This dataset combines [wikipedia20220301.en](https://huggingface.co/datasets/wikipedia) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen),
and splits the data into smaller chunks of ~820 characters
(so that each item is at least ~128 tokens for the average tokenizer).
The splitting logic only breaks on spaces, so chunks are likely to be slightly larger than 820 characters.
The text has been normalized to lower case, with accents and non-English characters removed.
Items with fewer than 200 characters or more than 1000 characters have been removed.
The chunks have been shuffled.
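
The card does not include the preprocessing code, but the normalization, space-only chunking, and length filtering it describes can be sketched roughly as follows (the function names and the exact accumulation logic are assumptions, not the authors' implementation):

```python
import unicodedata


def normalize(text: str) -> str:
    """Lower-case and strip accents / non-English characters via an ASCII fold."""
    folded = unicodedata.normalize("NFKD", text.lower())
    return folded.encode("ascii", "ignore").decode("ascii")


def chunk_on_spaces(text: str, target: int = 820) -> list[str]:
    """Greedily pack whole words until a chunk reaches `target` characters.

    Splitting only on spaces means each chunk ends up slightly larger than
    `target`, matching the behaviour described in the card.
    """
    chunks: list[str] = []
    current: list[str] = []
    length = 0
    for word in text.split(" "):
        length += len(word) + (1 if current else 0)  # +1 for the joining space
        current.append(word)
        if length >= target:
            chunks.append(" ".join(current))
            current, length = [], 0
    if current:
        chunks.append(" ".join(current))
    return chunks


def keep(chunk: str) -> bool:
    """Apply the card's length filter: drop items outside 200-1000 characters."""
    return 200 <= len(chunk) <= 1000
```

Applying `normalize`, then `chunk_on_spaces`, then filtering with `keep` (and shuffling the result) approximates the pipeline the card describes.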