ForAug/ForNet
This is the ForNet dataset from the paper ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation.
Updates
- [19.03.2025] We release the code to download and use ForNet on GitHub :computer:
- [19.03.2025] We release the patch files of ForNet on Hugging Face :hugs:
- [12.03.2025] We release the preprint of ForAug on arXiv :spiral_notepad:
Using ForAug/ForNet
Preliminaries
To be able to download ForNet, you will need the ImageNet dataset in the usual folder layout at `<in_path>`:

```
<in_path>
|--- train
|    |--- n01440764
|    |    |--- n01440764_10026.JPEG
|    |    |--- n01440764_10027.JPEG
|    |    |--- n01440764_10029.JPEG
|    |    `--- ...
|    |--- n01693334
|    `--- ...
`--- val
     |--- n01440764
     |    |--- ILSVRC2012_val_00000293.JPEG
     |    |--- ILSVRC2012_val_00002138.JPEG
     |    |--- ILSVRC2012_val_00003014.JPEG
     |    `--- ...
     |--- n01693334
     `--- ...
```
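As a quick sanity check before downloading, the expected layout can be verified with a small helper. This is a hypothetical sketch, not part of the ForAug repository; it only counts the WordNet-ID class folders per split:

```python
import os

def check_imagenet_layout(in_path):
    """Return (num_train_classes, num_val_classes) for an ImageNet-style
    folder, raising if a split directory is missing."""
    counts = {}
    for split in ("train", "val"):
        split_dir = os.path.join(in_path, split)
        if not os.path.isdir(split_dir):
            raise FileNotFoundError(f"missing split directory: {split_dir}")
        # Each class is a WordNet-ID subfolder such as n01440764
        counts[split] = sum(
            1
            for d in os.listdir(split_dir)
            if d.startswith("n") and os.path.isdir(os.path.join(split_dir, d))
        )
    return counts["train"], counts["val"]
```

For the full ImageNet-1k dataset, both counts should be 1000.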
Downloading ForNet
To download and prepare the already-segmented ForNet dataset at `<data_path>`, follow these steps:
1. Clone the git repository and install the requirements:

```shell
git clone https://github.com/tobna/ForAug
cd ForAug
pip install -r prep-requirements.txt
```
2. Download the diff files:

```shell
./download_diff_files.sh <data_path>
```

This script will download all dataset files to `<data_path>`.
3. Apply the diffs to ImageNet:

```shell
python apply_patch.py -p <data_path> -in <in_path> -o <data_path>
```

This will apply the diffs to ImageNet and store the results in the `<data_path>` folder. It will also delete the already-processed patch files (the ones downloaded in step 2). To keep the patch files, add the `--keep` flag.
Optional: Zip the files without compression
On a large cluster, where the dataset has to be sent over the network (i.e. it is stored on a different server than the one used for processing), it is often easier to handle a few large files than many small ones. If you want this, you can zip up the files (without compression) by running:

```shell
./zip_up.sh <data_path>
```
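The idea behind zipping without compression can be sketched with Python's standard `zipfile` module. This is a hypothetical re-implementation for illustration, not the actual `zip_up.sh` logic: `ZIP_STORED` writes members uncompressed, so creating the archive is fast and individual files can later be read cheaply:

```python
import os
import zipfile

def zip_uncompressed(src_dir, out_zip):
    """Bundle every file under src_dir into a single uncompressed archive."""
    with zipfile.ZipFile(out_zip, "w", compression=zipfile.ZIP_STORED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                path = os.path.join(root, name)
                # Store paths relative to src_dir inside the archive
                zf.write(path, arcname=os.path.relpath(path, src_dir))
```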
Creating ForNet from Scratch
Coming soon
Using ForNet
To use ForAug/ForNet, you need to have it available in folder or zip form (see Downloading ForNet) at `<data_path>`.
Additionally, you need to install the (standard) requirements from `requirements.txt`:

```shell
pip install -r requirements.txt
```
Then, just do:

```python
from fornet import ForNet

data_path = ...  # path to the prepared dataset

dataset = ForNet(
    data_path,
    train=True,
    transform=None,
    background_combination="all",
)
```
For information on all possible parameters, run:

```python
from fornet import ForNet

help(ForNet.__init__)
```
Citation
```bibtex
@misc{nauen2025foraug,
  title={ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation},
  author={Tobias Christian Nauen and Brian Moser and Federico Raue and Stanislav Frolov and Andreas Dengel},
  year={2025},
  eprint={2503.09399},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
}
```
ToDos
- release code to download and create ForNet
- release code to use ForNet for training and evaluation
- integrate ForNet into Hugging Face Datasets