---
language:
- en
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 26076989556
    num_examples: 33536113
  download_size: 17380043798
  dataset_size: 26076989556
---
# Dataset Card for "wikipedia20220301en-bookcorpusopen-chunked-shuffled"
```
num_examples: 33.5 million
download_size: 17.4 GB
dataset_size: 26.1 GB
```
This dataset combines [wikipedia20220301.en](https://huggingface.co/datasets/wikipedia) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen),
and splits the data into smaller chunks of ~820 characters
(so that each item is at least ~128 tokens for a typical tokenizer).
The items in this dataset have been pre-shuffled,
so there is no need to call `dataset.shuffle`,
which makes iteration slower.
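
For example, the dataset can be streamed and consumed in order. This is a minimal sketch, assuming the Hugging Face `datasets` library; the repository id below is a placeholder based on this card's title, so substitute the actual namespace/name on the Hub.

```python
from datasets import load_dataset

# Placeholder repository id based on the card title; replace with the
# actual <namespace>/<name> of this dataset on the Hub.
dataset = load_dataset(
    "<namespace>/wikipedia20220301en-bookcorpusopen-chunked-shuffled",
    split="train",
    streaming=True,  # avoids downloading the full ~17 GB up front
)

# The items are pre-shuffled, so the stream can be consumed in order
# without calling dataset.shuffle().
for i, example in enumerate(dataset):
    print(example["text"][:80])
    if i == 2:
        break
```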
The chunking logic splits only on spaces, so chunks are likely to be slightly longer than 820 characters.
The text has been normalized to lower case, with accents and non-English characters removed.
Items with fewer than 200 characters or more than 1000 characters have been removed.
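
The preprocessing code itself is not reproduced on this card; the sketch below shows one way the three steps described above (space-boundary chunking, lowercasing with accent removal, and length filtering) could be implemented, with the thresholds taken from this description.

```python
import unicodedata

CHUNK_CHARS = 820   # target chunk size; actual chunks run slightly longer
MIN_CHARS = 200     # items shorter than this are dropped
MAX_CHARS = 1000    # items longer than this are dropped


def normalize(text: str) -> str:
    # Lowercase, strip accents, and keep only ASCII characters.
    text = unicodedata.normalize("NFKD", text.lower())
    return text.encode("ascii", "ignore").decode("ascii")


def chunk(text: str):
    # Split only on spaces, so chunks end on word boundaries and
    # tend to overshoot CHUNK_CHARS slightly.
    current, length = [], 0
    for word in text.split(" "):
        current.append(word)
        length += len(word) + 1
        if length >= CHUNK_CHARS:
            yield " ".join(current)
            current, length = [], 0
    if current:
        yield " ".join(current)


def process(text: str):
    # Normalize, chunk, then drop items outside the 200-1000 character range.
    for piece in chunk(normalize(text)):
        if MIN_CHARS <= len(piece) <= MAX_CHARS:
            yield piece
```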
This dataset is processed for convenience, at the expense of losing some percentage of the tokens to truncation
(assuming the training minibatches are truncated to 128 tokens).
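
To make that trade-off concrete, the snippet below (assuming the `transformers` library and an arbitrary choice of tokenizer) counts how many tokens of a ~900-character chunk survive truncation to 128 tokens.

```python
from transformers import AutoTokenizer

# "bert-base-uncased" is an arbitrary example tokenizer, not necessarily
# the one used for any particular training run.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

chunk = "the quick brown fox jumps over the lazy dog " * 20  # ~900 chars

full = tokenizer(chunk)["input_ids"]
truncated = tokenizer(chunk, truncation=True, max_length=128)["input_ids"]
print(f"kept {len(truncated)} of {len(full)} tokens")
```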