---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 26076989556
num_examples: 33536113
download_size: 15221565467
dataset_size: 26076989556
---
# Dataset Card for "chunked-wikipedia20220301en-bookcorpusopen"
This dataset combines [wikipedia20220301.en](https://huggingface.co/datasets/wikipedia) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen), and splits the data into chunks of ~820 chars each (so that each item is at least ~128 tokens for the average tokenizer).
The splitting logic only breaks on spaces, so chunks are likely to be slightly larger than 820 chars. The text has been normalized to lower case, with accents and non-English characters removed.
Items with fewer than 200 chars or more than 1000 chars have been removed.
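
The exact processing script is not included in this card, but a minimal sketch of the normalization and chunking described above might look like the following (the function names and the ASCII-only filter are assumptions, not the original implementation):

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Lower-case, strip accents, and drop non-English characters (assumed ASCII-only)."""
    text = text.lower()
    # Decompose accented characters, then drop the combining marks.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    # Keep only ASCII characters as a stand-in for "non-English" removal.
    text = text.encode("ascii", "ignore").decode("ascii")
    return re.sub(r"\s+", " ", text).strip()

def chunk(text: str, target: int = 820) -> list[str]:
    """Split on spaces, closing a chunk once it reaches ~`target` chars,
    so chunks end up slightly larger than the target."""
    chunks, current, length = [], [], 0
    for word in text.split(" "):
        current.append(word)
        length += len(word) + 1
        if length >= target:
            chunks.append(" ".join(current))
            current, length = [], 0
    if current:
        chunks.append(" ".join(current))
    # Drop items outside the 200-1000 char window, as described above.
    return [c for c in chunks if 200 <= len(c) <= 1000]
```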