---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: date
    dtype: string
  - name: file_path
    dtype: string
  - name: language
    dtype: string
  - name: language_score
    dtype: float64
  - name: token_count
    dtype: int64
  - name: metadata
    struct:
    - name: fineweb_dump
      dtype: string
    - name: fineweb_id
      dtype: string
    - name: section
      dtype: string
    - name: url
      dtype: string
  splits:
  - name: train
    num_bytes: 5122834071
    num_examples: 1487373
  download_size: 3119485747
  dataset_size: 5122834071
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: odc-by
language:
- en
tags:
- fineweb
size_categories:
- 1M<n<10M
---
This is a subset of the [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) dataset trimmed down to approximately one billion tokens. No special frills. We sampled from FineWeb's 10-billion-token subset to create this one.
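
If you want to peek at the data, here is a minimal sketch using the 🤗 `datasets` library. The repo id `user/fineweb-1BT` is a placeholder for this dataset's actual path on the Hub; streaming avoids pulling the full ~3 GB download just to inspect a few examples.

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual path on the Hub.
ds = load_dataset("user/fineweb-1BT", split="train", streaming=True)

# Inspect one record: flat fields plus the nested metadata struct.
example = next(iter(ds))
print(example["text"][:200])
print(example["token_count"], example["language_score"])
print(example["metadata"]["fineweb_dump"], example["metadata"]["url"])
```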