Current sample model

https://civitai.com/models/508420

The above is an SDXL model, and not very good. A better one is underway.

Overview

This is my attempt at creating a truly open source SDXL model that people might be interested in using, and perhaps copying the spirit of to create other open source models. I'm including EVERYTHING I used to create my onegirl200 model:

  • The images
  • The captions
  • The OneTrainer json preset file
  • The specific method I used to get here

I've been playing around with the thousands of images I've filtered so far from danbooru, at https://huggingface.co/datasets/ppbrown/danbooru-cleaned. The images here are a strict subset of those images. I also used their tagging ALMOST as-is; I only added one tag: "anime".
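
If you want to pull the images, captions, and preset file down locally, something like the following should work. This is just a minimal sketch using huggingface_hub; the repo id and target directory below are placeholders, so substitute this dataset's actual path on the Hub.

```python
# Minimal sketch: download the whole dataset repo (images, captions,
# OneTrainer preset) to a local folder.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="user/this-dataset",   # placeholder -- use this dataset's real repo id
    repo_type="dataset",
    local_dir="onegirl200",        # any local target directory
)
print("Downloaded to", local_dir)
```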

See [METHODOLOGY-adamw.md] for a detailed description of what I personally did to coax a model out of this dataset.

I also plan to try other training methods.

Memory usage tips

I am using an RTX 4090 card, which has 24 GB of VRAM, so I optimize for the best quality, and then the fastest speed, that I can fit on my card. Currently, that means bf16 SDXL or Cascade model finetunes, using "Default" attention and no gradient checkpointing.

You can save memory, at the sacrifice of speed, by enabling gradient checkpointing. You can save more memory, at the sacrifice of a little quality, by switching to xformers attention. Using those adjustments, you can run adafactor/adafactor finetunes on a 16 GB card.
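
For reference, here is roughly how the same knobs look in a plain diffusers/PyTorch script. This is only a sketch of the trade-offs, not my actual training setup (that lives in the OneTrainer json preset included here), and it assumes you are loading the stock SDXL base UNet from the Hub.

```python
# Rough illustration of the memory/speed/quality trade-offs described above,
# expressed with diffusers. Not the OneTrainer-based setup used for this dataset.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    subfolder="unet",
    torch_dtype=torch.bfloat16,  # bf16 finetune: best quality/speed that fits in 24 GB
)

# Trade speed for memory: recompute activations during the backward pass.
unet.enable_gradient_checkpointing()

# Trade a little quality for further memory savings: memory-efficient attention.
# Leave this off ("Default" attention) if you have the VRAM to spare.
unet.enable_xformers_memory_efficient_attention()
```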
