---
language:
- en
license: mit
task_categories:
- text-generation
dataset_info:
  features:
  - name: pageid
    dtype: int64
  - name: title
    dtype: string
  - name: revid
    dtype: int64
  - name: description
    dtype: string
  - name: categories
    sequence: string
  - name: markdown
    dtype: string
  - name: sections
    sequence: string
  - name: token
    dtype: int64
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 159522657
    num_examples: 18241
  - name: validate
    num_bytes: 28217967
    num_examples: 3220
  - name: test
    num_bytes: 10035346
    num_examples: 1130
  download_size: 114465310
  dataset_size: 197775970
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validate
    path: data/validate-*
  - split: test
    path: data/test-*
---
- Modified version of the GoodWiki dataset from https://huggingface.co/datasets/euirim/goodwiki
- Added a `sections` column (markdown headings) and a `token` column (token count computed with tiktoken)
- Contains only articles with fewer than 3,000 tokens
- Split into train, validate, and test sets (80/15/5)
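The filter and split described above can be sketched as follows. This is a minimal illustration using toy records rather than the actual preprocessing script; the field names mirror the dataset schema, but the token values here are synthetic (the real ones were computed with tiktoken).

```python
import random

# Toy stand-in records; the real rows also carry pageid, title, revid,
# description, categories, markdown, and sections fields.
records = [
    {"pageid": i, "token": random.Random(i).randint(100, 6000)}
    for i in range(1000)
]

# Keep only articles with fewer than 3,000 tokens, as the card describes.
kept = [r for r in records if r["token"] < 3000]

# Shuffle deterministically, then take an 80/15/5 train/validate/test split.
rng = random.Random(0)
rng.shuffle(kept)
n = len(kept)
train = kept[: int(n * 0.80)]
validate = kept[int(n * 0.80) : int(n * 0.95)]
test = kept[int(n * 0.95) :]
```

Shuffling before slicing avoids any ordering bias (e.g. by page id) leaking into the split boundaries.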