🤏 Smol-Data Collection
Tried and tested mixes for strong pretraining. Inspired by https://huggingface.co/blog/codelion/optimal-dataset-mixing (14 items)
A globally shuffled version of HuggingFaceFW/finepdfs_100BT.
Part of the Smol-Data collection — tried and tested mixes for strong pretraining.
This dataset contains the same ~100B tokens as finepdfs_100BT but with all documents globally shuffled (seed=42). Use this version when you need randomized document ordering for pretraining.
The unshuffled dataset was loaded into memory, shuffled with dataset.shuffle(seed=42), and re-uploaded in 100 shards. See the smol_data.py script for details.
from datasets import load_dataset

# Stream the shuffled dataset without downloading it in full
ds = load_dataset("HuggingFaceFW/finepdfs_100BT-shuffled", split="train", streaming=True)
for sample in ds:
    print(sample["text"][:200])
    break
@misc{niklaus2026smoldata,
title={SmolData},
author={Joel Niklaus and Hynek Kydl{\'\i}{\v{c}}ek},
year={2026},
publisher={Hugging Face},
journal={Hugging Face repository},
howpublished={\url{https://huggingface.co/collections/HuggingFaceFW/smol-data}}
}