shahrukhx01 committed
Commit ac7cb8a · verified · 1 Parent(s): 89e994e

update dataset readme

Files changed (1)
  1. README.md +74 -0
README.md CHANGED
@@ -17,3 +17,77 @@ configs:
  - split: train
    path: data/train-*
---

# Dataset Card for "wikipedia-bookscorpus-en-preprocessed"

## Dataset Summary

A preprocessed and normalized combination of the English Wikipedia and BookCorpus datasets, optimized for BERT pretraining. The text is chunked into segments of roughly 820 characters so each chunk fits the input length of typical transformer architectures.

## Dataset Details

- **Number of Examples:** 29.4 million
- **Download Size:** 12.2 GB
- **Dataset Size:** 19.0 GB

### Features

```python
{
    'text': string,            # The preprocessed text chunk
    'is_filtered_out': bool,   # Filtering flag for data quality
}
```
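
The `is_filtered_out` flag makes it easy to drop low-quality chunks before training. A minimal sketch using the standard `datasets` filter API, assuming (as the name suggests) that `True` marks chunks that failed the quality filters:

```python
from datasets import load_dataset

# Load the train split of this dataset
dataset = load_dataset("shahrukhx01/wikipedia-bookscorpus-en-preprocessed", split="train")

# Keep only rows that passed the quality filters
# (assumption: is_filtered_out == True means the chunk should be dropped)
clean = dataset.filter(lambda example: not example["is_filtered_out"])

print(f"{len(clean)} of {len(dataset)} chunks kept")
```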

## Processing Pipeline

The corpus goes through four stages; an illustrative sketch of the chunking and normalization steps follows the list.

1. **Language Filtering:**
   - Retains only English-language samples
   - Uses langdetect for language detection

2. **Text Chunking:**
   - Chunks of ~820 characters (targeting ~128 tokens)
   - Preserves sentence boundaries where possible
   - Splits on sentence endings (., !, ?) or on spaces as a fallback

3. **Normalization:**
   - Converts text to lowercase
   - Removes accents and non-English characters
   - Filters out chunks shorter than 200 characters
   - Removes special characters

4. **Data Organization:**
   - Shuffled for efficient training
   - Distributed across multiple JSONL files
   - No additional `dataset.shuffle()` is needed during training
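
The sketch below is illustrative only, not the actual pipeline code: the function names and exact regular expressions are assumptions, and only the stated targets (~820-character chunks split on sentence endings or spaces, lowercasing, accent and special-character removal, a 200-character minimum) come from this card.

```python
import re
import unicodedata

CHUNK_SIZE = 820      # ~128 tokens for typical BERT tokenizers (per this card)
MIN_CHUNK_LEN = 200   # chunks shorter than this are filtered out

def chunk_text(text: str, chunk_size: int = CHUNK_SIZE) -> list[str]:
    """Greedily cut text into ~chunk_size pieces, preferring sentence endings, then spaces."""
    chunks = []
    while len(text) > chunk_size:
        window = text[:chunk_size]
        # Prefer the last sentence ending inside the window, else the last space
        cut = max(window.rfind("."), window.rfind("!"), window.rfind("?"))
        if cut == -1:
            cut = window.rfind(" ")
        if cut == -1:
            cut = chunk_size - 1
        chunks.append(text[: cut + 1].strip())
        text = text[cut + 1 :]
    if text.strip():
        chunks.append(text.strip())
    return chunks

def normalize(chunk: str) -> str:
    """Lowercase, strip accents, and drop characters outside a basic English set."""
    chunk = chunk.lower()
    chunk = unicodedata.normalize("NFKD", chunk).encode("ascii", "ignore").decode("ascii")
    chunk = re.sub(r"[^a-z0-9\s.,!?'\"-]", " ", chunk)  # remove remaining special characters
    return re.sub(r"\s+", " ", chunk).strip()

def is_filtered_out(chunk: str) -> bool:
    """Mirror of the card's quality flag: too-short chunks are filtered out."""
    return len(chunk) < MIN_CHUNK_LEN
```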

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("shahrukhx01/wikipedia-bookscorpus-en-preprocessed")
```
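
Because the full dataset is around 19 GB on disk, streaming can be preferable when you do not want to download everything up front. A sketch using the standard `datasets` streaming mode (nothing here is specific to this dataset beyond its name):

```python
from datasets import load_dataset

# Stream examples instead of downloading the full ~12 GB up front
stream = load_dataset(
    "shahrukhx01/wikipedia-bookscorpus-en-preprocessed",
    split="train",
    streaming=True,
)

# Iterate lazily; each example has 'text' and 'is_filtered_out'
for example in stream.take(3):
    print(example["text"][:80], example["is_filtered_out"])
```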

## Preprocessing Details

For detailed information about the preprocessing pipeline, see the [preprocessing documentation](https://github.com/shahrukhx01/minions/tree/main/scripts/data/bert_pretraining_data/README.md).

## Limitations

- Some tokens may be lost due to chunk truncation
- Very long sentences might be split
- Some contextual information across chunk boundaries is lost
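
Whether an ~820-character chunk actually fits a 128-token budget depends on the tokenizer, so it can be worth spot-checking. An illustrative check using `bert-base-uncased` from `transformers` (the tokenizer choice and sample size are assumptions; any BERT-style tokenizer works the same way):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed tokenizer
dataset = load_dataset(
    "shahrukhx01/wikipedia-bookscorpus-en-preprocessed", split="train", streaming=True
)

# Count how many of the first 1,000 chunks would be truncated at 128 tokens
over_budget = 0
for example in dataset.take(1000):
    n_tokens = len(tokenizer(example["text"])["input_ids"])  # includes [CLS]/[SEP]
    if n_tokens > 128:
        over_budget += 1

print(f"{over_budget}/1000 chunks exceed 128 tokens and would lose trailing tokens")
```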

## Citation

If you use this dataset, please cite:

```bibtex
@misc{wikipedia-bookscorpus-en-preprocessed,
  author       = {Shahrukh Khan},
  title        = {Preprocessed Wikipedia and BookCorpus Dataset for Language Model Training},
  year         = {2024},
  publisher    = {GitHub \& Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/shahrukhx01/wikipedia-bookscorpus-en-preprocessed}}
}
```