# Danbooru 2024 tags only in 1k tar
A dedicated dataset aligned with deepghs/danbooru2024-webp-4Mpixel: the same 1,000-tar layout, containing only the tags.
## How to use / why I created this: my speedrun to build the dataset

### How to build the "dataset" quickly
Get at least 4 TB of storage and around 75 GB of RAM. Always create a dedicated venv / conda environment for each task.
(Optional) Download this directly: metadata.parquet
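If you only need the metadata, a minimal sketch with huggingface_hub (the repo id below is a placeholder; substitute this dataset's actual id):

```python
# Minimal sketch: fetch only metadata.parquet from the Hub.
# REPO_ID is a placeholder -- substitute this dataset's actual repo id.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="REPO_ID",
    filename="metadata.parquet",
    repo_type="dataset",
)
print(path)  # local path of the cached file
```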
Download all 1,000 tar files of webp images via dl-booru2024-hfhub.py.
Rerun that script against this repo (another 1,000 tar files, holding the tags).
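dl-booru2024-hfhub.py itself is not reproduced here, but its core loop is presumably close to this sketch (assuming every file ending in `.tar` in the repo is one of the 1,000 shards):

```python
# Hedged sketch of the bulk tar download; the real dl-booru2024-hfhub.py may differ.
from huggingface_hub import hf_hub_download, list_repo_files

REPO_ID = "deepghs/danbooru2024-webp-4Mpixel"  # rerun with this repo's id for the tag tars

tar_names = [f for f in list_repo_files(REPO_ID, repo_type="dataset") if f.endswith(".tar")]
for name in tar_names:
    hf_hub_download(repo_id=REPO_ID, filename=name, repo_type="dataset",
                    local_dir="H:/danbooru2024-webp-4Mpixel")
```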
(Optional) Otherwise, build this dataset yourself via metadata-booru2024-tags-parallel.py.
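The gist of that build step, as a sketch: read the parquet, write one tag file per image. The column names below are assumptions, so verify them against the parquet schema first:

```python
# Hedged sketch: dump one .txt tag file per image id.
# "id" and "tag_string" are assumed column names -- check the schema first.
import os
import pandas as pd

df = pd.read_parquet("metadata.parquet", columns=["id", "tag_string"])
os.makedirs("tags", exist_ok=True)
for row in df.itertuples(index=False):
    with open(f"tags/{row.id}.txt", "w", encoding="utf-8") as f:
        # booru tags come space-separated; training captions usually want commas
        f.write(row.tag_string.replace(" ", ", "))
```

The real script additionally parallelizes this and repacks the results into the 1,000-tar layout.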
Run extract-booru2024-parallel.py to extract all tars into a single directory.
```
> python extract-booru2024-parallel.py
100%|██████████████████████████████████████| 1000/1000 [6:48:15<00:00, 24.50s/it]
Extracted: 1000 iters
Delta: 0 files
```
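What that extraction script boils down to, roughly (a process-pool sketch; adjust the paths):

```python
# Hedged sketch of parallel tar extraction into one flat directory.
import glob
import tarfile
from concurrent.futures import ProcessPoolExecutor

from tqdm import tqdm

DEST = "kohyas_finetune"

def extract(path: str) -> None:
    with tarfile.open(path) as tar:
        tar.extractall(DEST)

if __name__ == "__main__":
    tars = sorted(glob.glob("*.tar"))
    with ProcessPoolExecutor() as pool:
        # consume the iterator so tqdm actually advances
        list(tqdm(pool.map(extract, tars), total=len(tars)))
```

The `if __name__ == "__main__"` guard matters on Windows (the transcript below is PowerShell), where ProcessPoolExecutor re-imports the module in each worker.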
Sanity-check the extracted file count:

```
PS H:\danbooru2024-webp-4Mpixel> node
Welcome to Node.js v20.15.0.
Type ".help" for more information.
> const fs = require('fs');
> console.log(fs.readdirSync("./kohyas_finetune").length);
16010020
```
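If you would rather stay in Python for the check, os.scandir streams the entries instead of materializing one 16-million-item list:

```python
# Count the extracted files lazily; avoids building a giant list in memory.
import os

count = sum(1 for _ in os.scandir("./kohyas_finetune"))
print(count)  # expect 16010020 (webp images plus their tag files)
```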
- (Done?) Finally, instead of the official guide (a bit messy), follow this Reddit post to create the metadata JSON file (with ARB) and start finetuning.
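For orientation, kohya's finetune metadata is a JSON object keyed by image path. A minimal sketch that assembles it straight from the extracted .txt files, assuming the `{image_key: {"caption": ...}}` layout (verify against your kohya-ss version and the linked post, especially for the ARB/latent-caching steps that follow):

```python
# Hedged sketch of a kohya-style metadata JSON; the exact schema (and the ARB
# bucketing step) should be taken from the Reddit post / kohya-ss scripts.
import glob
import json
import os

meta = {}
for txt in glob.glob("kohyas_finetune/*.txt"):
    key = os.path.splitext(txt)[0]  # image key: path without extension
    with open(txt, encoding="utf-8") as f:
        meta[key] = {"caption": f.read().strip()}

with open("meta_cap.json", "w", encoding="utf-8") as f:
    json.dump(meta, f)
```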