---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    struct:
    - name: dump
      dtype: string
    - name: url
      dtype: string
    - name: date
      dtype: timestamp[ms]
    - name: file_path
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: token_count
      dtype: int64
    - name: score
      dtype: float64
    - name: int_score
      dtype: int64
  splits:
  - name: train
    num_bytes: 223818106668.24948
    num_examples: 43368339
  - name: test
    num_bytes: 22387828.750528496
    num_examples: 4338
  download_size: 129908705334
  dataset_size: 223840494497.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: odc-by
task_categories:
- text-generation
language:
- en
size_categories:
- 10M<n<100M
---

We keep only documents whose quality `score` is above 3.0 to make a higher-quality dataset; there are **45B GPT2 tokens** in this dataset.

## Acknowledgement

We appreciate the efforts of the HuggingFaceTB team in releasing these high-quality datasets and supporting the LLM community.
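The score-based filter described above can be sketched in plain Python (a minimal illustration only: the record layout mirrors this dataset's features, but the helper name, sample data, and threshold handling are assumptions, not the original filtering pipeline):

```python
# Minimal sketch of score-based filtering, assuming each record carries a
# float quality score under metadata["score"], as in this dataset's schema.

SCORE_THRESHOLD = 3.0  # documents at or below this score are dropped

def keep_high_quality(records):
    """Keep only records whose metadata quality score exceeds the threshold."""
    return [r for r in records if r["metadata"]["score"] > SCORE_THRESHOLD]

# Hypothetical sample records for illustration.
sample = [
    {"text": "Photosynthesis converts light energy into chemical energy.",
     "id": "a", "metadata": {"score": 3.6, "int_score": 4}},
    {"text": "click here to win a prize!!!",
     "id": "b", "metadata": {"score": 1.2, "int_score": 1}},
]

filtered = keep_high_quality(sample)
print([r["id"] for r in filtered])  # → ['a']
```

The same criterion can be applied to the full dataset with `datasets.Dataset.filter` after loading it, keeping only rows that satisfy the score predicate.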