---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: is_filtered_out
    dtype: bool
  splits:
  - name: train
    num_bytes: 19005209693
    num_examples: 29451949
  download_size: 12244813118
  dataset_size: 19005209693
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for "wikipedia-bookscorpus-en-preprocessed"

## Dataset Summary
A preprocessed and normalized combination of the English Wikipedia and BookCorpus datasets, optimized for BERT pretraining. The text is chunked into segments of ~820 characters (roughly 128 tokens) to fit typical transformer input lengths.
## Dataset Details
- Number of Examples: 29.4 million
- Download Size: 12.2 GB
- Dataset Size: 19.0 GB
**Features:**

```python
{
    'text': string,          # the preprocessed text chunk
    'is_filtered_out': bool  # filtering flag for data quality
}
```
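A quick way to inspect a single record without downloading the full dataset is to stream the train split; only the dataset name and the two field names above are taken from this card, the rest is an illustrative sketch:

```python
from datasets import load_dataset

# Stream the train split so nothing is downloaded up front
ds = load_dataset(
    "shahrukhx01/wikipedia-bookscorpus-en-preprocessed",
    split="train",
    streaming=True,
)

# Peek at one example and its two fields
example = next(iter(ds))
print(example["text"][:200])       # preprocessed text chunk
print(example["is_filtered_out"])  # quality-filtering flag
```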
## Processing Pipeline
**Language Filtering:**
- Retains only English-language samples
- Uses `langdetect` for language detection
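A minimal sketch of such a filter, assuming `langdetect` is installed (`pip install langdetect`); the helper name, thresholds, and error handling are illustrative and may differ from the actual pipeline:

```python
from langdetect import detect, LangDetectException

def is_english(text: str) -> bool:
    """Return True if langdetect classifies the text as English."""
    try:
        return detect(text) == "en"
    except LangDetectException:
        # Very short or symbol-only strings can fail detection
        return False

samples = ["The quick brown fox jumps over the lazy dog.", "Der schnelle braune Fuchs."]
english_only = [s for s in samples if is_english(s)]
```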
**Text Chunking:**
- Chunks of ~820 characters (targeting ~128 tokens)
- Preserves sentence boundaries where possible
- Splits on sentence endings (`.`, `!`, `?`), falling back to spaces
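The chunking step could look roughly like the sketch below; the 820-character target comes from this card, while the function name and the exact fallback rules are assumptions:

```python
def chunk_text(text: str, target_len: int = 820) -> list[str]:
    """Split text into ~target_len chunks, preferring sentence boundaries,
    then spaces, then a hard cut."""
    chunks = []
    while len(text) > target_len:
        window = text[:target_len]
        # Prefer the last sentence ending inside the window, else the last space
        cut = max(window.rfind("."), window.rfind("!"), window.rfind("?"))
        if cut == -1:
            cut = window.rfind(" ")
        if cut == -1:
            cut = target_len - 1  # no natural break point; hard cut
        chunks.append(text[: cut + 1].strip())
        text = text[cut + 1:]
    if text.strip():
        chunks.append(text.strip())
    return chunks
```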
**Normalization:**
- Converts text to lowercase
- Removes accents and non-English characters
- Filters out chunks shorter than 200 characters
- Removes special characters
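A hedged sketch of these normalization rules using only the standard library; the regular expression and the exact set of characters kept are assumptions, while the 200-character cutoff comes from this card:

```python
import re
import unicodedata

MIN_CHUNK_LEN = 200  # chunks shorter than this are dropped

def normalize_chunk(chunk: str) -> str | None:
    """Lowercase, strip accents and special characters;
    return None if the result is shorter than MIN_CHUNK_LEN."""
    chunk = chunk.lower()
    # Decompose accented characters, then drop anything outside ASCII
    chunk = unicodedata.normalize("NFKD", chunk)
    chunk = chunk.encode("ascii", "ignore").decode("ascii")
    # Keep letters, digits, whitespace and basic sentence punctuation
    chunk = re.sub(r"[^a-z0-9\s.,!?'-]", " ", chunk)
    chunk = re.sub(r"\s+", " ", chunk).strip()
    return chunk if len(chunk) >= MIN_CHUNK_LEN else None
```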
**Data Organization:**
- Pre-shuffled for efficient training
- Distributed across multiple JSONL files
- No additional `dataset.shuffle()` call is needed during training
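One way this shuffled, sharded layout could be produced is sketched below; the shard count, file naming, and helper name are purely illustrative assumptions:

```python
import json
import os
import random

def write_shards(records: list[dict], num_shards: int = 64, seed: int = 42) -> None:
    """Shuffle the records once, then write contiguous slices to JSONL shards."""
    os.makedirs("data", exist_ok=True)
    random.Random(seed).shuffle(records)
    shard_size = -(-len(records) // num_shards)  # ceiling division
    for i in range(num_shards):
        shard = records[i * shard_size : (i + 1) * shard_size]
        if not shard:
            break
        with open(f"data/train-{i:05d}.jsonl", "w", encoding="utf-8") as f:
            for record in shard:
                f.write(json.dumps(record) + "\n")
```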
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("shahrukhx01/wikipedia-bookscorpus-en-preprocessed")
```
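Building on that, the `is_filtered_out` flag can be used to keep only chunks that passed the quality filter, and the text can then be tokenized for masked-language-model pretraining; the BERT checkpoint and sequence length below are illustrative choices, not part of this dataset:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("shahrukhx01/wikipedia-bookscorpus-en-preprocessed", split="train")

# Drop chunks that were flagged by the quality filter
dataset = dataset.filter(lambda example: not example["is_filtered_out"])

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # ~820 characters per chunk roughly corresponds to the 128-token target
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text", "is_filtered_out"])
```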
## Preprocessing Details
For detailed information about the preprocessing pipeline, see the preprocessing documentation.
## Limitations
- Some tokens may be lost due to chunk truncation
- Very long sentences might be split
- Some contextual information across chunk boundaries is lost
## Citation
If you use this dataset, please cite:
```bibtex
@misc{wikipedia-bookscorpus-en-preprocessed,
  author       = {Shahrukh Khan},
  title        = {Preprocessed Wikipedia and BookCorpus Dataset for Language Model Training},
  year         = {2024},
  publisher    = {GitHub \& Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/shahrukhx01/wikipedia-bookscorpus-en-preprocessed}}
}
```