---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: dump
      dtype: string
    - name: url
      dtype: string
    - name: date
      dtype: string
    - name: file_path
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: token_count
      dtype: int64
  splits:
    - name: train
      num_bytes: 51073574.99634302
      num_examples: 14874
  download_size: 30919243
  dataset_size: 51073574.99634302
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: odc-by
task_categories:
  - text-generation
language:
  - en
size_categories:
  - 10K<n<100K
---
# FineWeb 10MT
Roughly 10 million tokens of English FineWeb data, obtained by taking the 10BT sample of FineWeb (`sample-10BT`), shuffling it, sharding it into 1,000 shards, and keeping the first shard.
## Code
To reproduce FineWeb 10MT, simply use the following:
```python
from datasets import load_dataset

# Take the first of 1,000 shards of the shuffled 10BT FineWeb sample (~10M tokens).
fineweb = load_dataset("HuggingFaceFW/fineweb", "sample-10BT", split="train").shuffle().shard(num_shards=1000, index=0)

# Push the subset to the Hub ("token" is a placeholder for a write-access token).
token = "token"
fineweb.push_to_hub("OxxoCodes/fineweb-10MT", token=token)
```
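
Once pushed, the subset can be loaded back like any other Hub dataset. As a rough sanity check (a sketch, not part of the original recipe), you can sum the precomputed `token_count` column to confirm the subset is around 10 million tokens:

```python
from datasets import load_dataset

# Load the published subset and check its size.
fineweb_10mt = load_dataset("OxxoCodes/fineweb-10MT", split="train")

print(fineweb_10mt.num_rows)             # 14874 examples
print(sum(fineweb_10mt["token_count"]))  # roughly 10 million tokens
```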