| repo | github_id | github_node_id | number | html_url | api_url | title | body | state | state_reason | locked | comments_count | labels | assignees | created_at | updated_at | closed_at | author_association | milestone_title | snapshot_id | extracted_at | author_login | author_id | author_node_id | author_type | author_site_admin |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 2,628,557,302 | I_kwDOHa8MBc6crJn2 | 9,833 | https://github.com/huggingface/diffusers/issues/9833 | https://api.github.com/repos/huggingface/diffusers/issues/9833 | SD3.5-large. Why is it OK when calling with a single thread, but not with multiple threads? | ### Describe the bug
First, I created a SD3.5-large service:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
import uuid
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel, DDIMScheduler, DDPMParallelScheduler
from diffusers import StableDiffusion3Pipeline
import torch
from transf... | closed | completed | true | 1 | [
"bug"
] | [] | 2024-11-01T08:00:04Z | 2024-11-02T02:14:50Z | 2024-11-02T02:14:50Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | EvanSong77 | 48,975,863 | MDQ6VXNlcjQ4OTc1ODYz | User | false |
huggingface/diffusers | 2,629,094,046 | I_kwDOHa8MBc6ctMqe | 9,835 | https://github.com/huggingface/diffusers/issues/9835 | https://api.github.com/repos/huggingface/diffusers/issues/9835 | unused parameters lead to error when training contrlnet_sd3 | ### Discussed in https://github.com/huggingface/diffusers/discussions/9834
<div type='discussions-op-text'>
<sup>Originally posted by **Zheng-Fang-CH** November 1, 2024</sup>

Is there someone mee... | closed | completed | false | 6 | [] | [] | 2024-11-01T13:57:03Z | 2024-11-17T07:33:25Z | 2024-11-17T07:33:25Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Daryu-Fan | 57,171,029 | MDQ6VXNlcjU3MTcxMDI5 | User | false |
huggingface/diffusers | 2,629,147,951 | I_kwDOHa8MBc6ctZ0v | 9,836 | https://github.com/huggingface/diffusers/issues/9836 | https://api.github.com/repos/huggingface/diffusers/issues/9836 | [Feature] Can we record layer_id for DiT model? | **Is your feature request related to a problem? Please describe.**
Some layerwise algorithm may be based on layer-id.
just need some simple modification for transformer2Dmodel and its inner module like attention part, batch_norm part. just pass the layer_id as an extra parameter.
| closed | completed | false | 9 | [
"stale"
] | [] | 2024-11-01T14:26:31Z | 2025-01-27T01:31:21Z | 2025-01-27T01:31:21Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | foreverpiano | 29,483,524 | MDQ6VXNlcjI5NDgzNTI0 | User | false |
huggingface/diffusers | 2,629,158,762 | I_kwDOHa8MBc6ctcdq | 9,837 | https://github.com/huggingface/diffusers/issues/9837 | https://api.github.com/repos/huggingface/diffusers/issues/9837 | [Feature] Is it possible to customize latents.shape / prepare_latent for context parallel case? | **Is your feature request related to a problem? Please describe.**
One may need to extend the code to context parallel case and the latent sequence length needs to get divided.
Instead of copying all the code of pipeline.py, the minimum modification is just adding few lines about dividing the latent shape and all_gat... | closed | completed | false | 3 | [
"stale"
] | [] | 2024-11-01T14:32:05Z | 2024-12-01T15:07:36Z | 2024-12-01T15:07:36Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | foreverpiano | 29,483,524 | MDQ6VXNlcjI5NDgzNTI0 | User | false |
huggingface/diffusers | 1,423,152,504 | I_kwDOHa8MBc5U05V4 | 984 | https://github.com/huggingface/diffusers/issues/984 | https://api.github.com/repos/huggingface/diffusers/issues/984 | `F.interpolate(hidden_states, scale_factor=2.0, mode="nearest")` breaks for large bsz | ### Describe the bug
Thanks to the amazing work done in the [memory efficient PR](https://github.com/huggingface/diffusers/pull/532), I can now run Stable Diffusion in fp16, on TITAN RTX (24Go VRAM) until a batch size of 31 with no issue.
```python
pipe = StableDiffusionPipeline.from_pretrained(
"CompVis/stab... | closed | completed | false | 3 | [
"bug"
] | [] | 2022-10-25T21:41:47Z | 2022-10-28T09:25:23Z | 2022-10-28T09:25:23Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | NouamaneTazi | 29,777,165 | MDQ6VXNlcjI5Nzc3MTY1 | User | false |
huggingface/diffusers | 2,630,226,361 | I_kwDOHa8MBc6cxhG5 | 9,841 | https://github.com/huggingface/diffusers/issues/9841 | https://api.github.com/repos/huggingface/diffusers/issues/9841 | [TypeError] in DreamBooth SDXL LoRA training when `use_dora` parameter is False | ### Describe the bug
When running the DreamBooth SDXL training script with LoRA, it throws a TypeError even when `use_dora=False` (default). This happens because the `use_dora` parameter is always being passed to LoraConfig, regardless of whether DoRA is being used or not. I plan to submit a PR to fix this by conditio... | closed | completed | false | 1 | [
"bug"
] | [] | 2024-11-02T05:25:19Z | 2024-11-08T23:09:26Z | 2024-11-08T23:09:25Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | adhiiisetiawan | 51,025,603 | MDQ6VXNlcjUxMDI1NjAz | User | false |
huggingface/diffusers | 2,630,400,936 | I_kwDOHa8MBc6cyLuo | 9,844 | https://github.com/huggingface/diffusers/issues/9844 | https://api.github.com/repos/huggingface/diffusers/issues/9844 | NAN values produced by SDXL VAE encoder | ### Describe the bug
I'd like to use the SDXL VAE to encode my image, but only got NAN values. I have set the input and the vae to full precision (torch.float32), but problem still exists.
### Reproduction
```
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import DPMSolverMultistepSch... | closed | completed | false | 4 | [
"bug"
] | [] | 2024-11-02T11:24:49Z | 2024-11-03T06:53:23Z | 2024-11-03T06:53:23Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | YihanHu-2022 | 58,056,486 | MDQ6VXNlcjU4MDU2NDg2 | User | false |
huggingface/diffusers | 2,630,480,509 | I_kwDOHa8MBc6cyfJ9 | 9,846 | https://github.com/huggingface/diffusers/issues/9846 | https://api.github.com/repos/huggingface/diffusers/issues/9846 | FluxControlNetModel got not from_single_file()? It's really necessary. | ### Describe the bug
Flux is the most popular model now, but it's huge. If want to works with Union FluxControlNet together, 24GB VRAM is totally OOM. So, we have to use less VRAM option like fp8. But official ControlNet provider didn't provide fp8, all single providers are providing single file, like Kijai. So, F... | closed | completed | false | 8 | [
"bug"
] | [
"DN6"
] | 2024-11-02T14:12:33Z | 2024-12-03T01:40:02Z | 2024-12-03T01:38:44Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | riflemanl | 23,468,948 | MDQ6VXNlcjIzNDY4OTQ4 | User | false |
huggingface/diffusers | 2,630,665,811 | I_kwDOHa8MBc6czMZT | 9,847 | https://github.com/huggingface/diffusers/issues/9847 | https://api.github.com/repos/huggingface/diffusers/issues/9847 | Merge Lora weights into base model | I have finetuned the stable diffusion model and would like to merge the lora weights into the model itself. Currently I think in PEFT this is supported using `merge_and_unload` function but I seem to not find this option in diffusers. So is there any way to get a base model but with finetuned weights and If i am not wr... | closed | completed | false | 1 | [] | [] | 2024-11-02T18:00:28Z | 2024-11-03T03:03:45Z | 2024-11-03T03:03:45Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yaswanth19 | 82,788,246 | MDQ6VXNlcjgyNzg4MjQ2 | User | false |
huggingface/diffusers | 1,423,261,976 | I_kwDOHa8MBc5U1UEY | 985 | https://github.com/huggingface/diffusers/issues/985 | https://api.github.com/repos/huggingface/diffusers/issues/985 | Possibly incorrect image normalization step in examples/dreambooth/train_dreambooth.py | ### Describe the bug
I was studying the `train_dreambooth.py` script ([link](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py)) and noticed that the `DreamBoothDataset` performs an image normalization step as one of the transforms (line 260), setting mean to 0.5 and standard d... | closed | not_planned | false | 2 | [
"bug"
] | [] | 2022-10-26T00:26:12Z | 2022-10-26T14:04:51Z | 2022-10-26T00:46:33Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | john-sungjin | 12,533,291 | MDQ6VXNlcjEyNTMzMjkx | User | false |
huggingface/diffusers | 2,630,910,513 | I_kwDOHa8MBc6c0IIx | 9,850 | https://github.com/huggingface/diffusers/issues/9850 | https://api.github.com/repos/huggingface/diffusers/issues/9850 | make gradient checkpointing with frozen model possible | ### Describe the bug
https://github.com/huggingface/diffusers/blob/89e4d6219805975bd7d253a267e1951badc9f1c0/src/diffusers/models/unets/unet_2d_blocks.py#L862
hi, the clause i highlighted in the link above prevents a model from using gradient checkpointing in eval mode. this is particularly useful for e.g. LORAs.
... | closed | completed | false | 10 | [
"bug"
] | [] | 2024-11-03T01:58:45Z | 2024-11-08T19:04:53Z | 2024-11-08T19:04:53Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | MikeTkachuk | 61,463,055 | MDQ6VXNlcjYxNDYzMDU1 | User | false |
huggingface/diffusers | 2,631,970,680 | I_kwDOHa8MBc6c4K94 | 9,856 | https://github.com/huggingface/diffusers/issues/9856 | https://api.github.com/repos/huggingface/diffusers/issues/9856 | ConnectionError: Tried to launch distributed communication on port 29401, but another process is utilizing it. Please specify a different port (such as using the --main_process_port flag or specifying a different main_process_port in your config file) and rerun your script. To automatically use the next open port (on a... | ### Describe the bug
ConnectionError: Tried to launch distributed communication on port 29401, but another process is utilizing it. Please specify a different port (such as using the --main_process_port flag or specifying a different main_process_port in your config file) and rerun your script. To automatically use th... | open | null | false | 7 | [
"bug",
"stale"
] | [] | 2024-11-04T06:40:10Z | 2024-12-04T15:03:04Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | qinchangchang | 95,670,613 | U_kgDOBbPRVQ | User | false |
huggingface/diffusers | 2,632,155,896 | I_kwDOHa8MBc6c44L4 | 9,857 | https://github.com/huggingface/diffusers/issues/9857 | https://api.github.com/repos/huggingface/diffusers/issues/9857 | FLUX train controlnet failed: embedding tensor size not match | ### Describe the bug
trying to train flux controlnet, reference to 'train_controlnet_flux.py' and 'readme_flux.txt'
### Reproduction
use the dataset 'fusing/fill50k', and the parameters mentioned in 'readme_flux.txt'
### Logs
```shell
Traceback (most recent call last):
File "/path_to_conda/projects/f... | closed | completed | false | 9 | [
"bug"
] | [] | 2024-11-04T08:27:38Z | 2024-11-05T05:56:47Z | 2024-11-05T05:56:47Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yapengyu | 75,412,865 | MDQ6VXNlcjc1NDEyODY1 | User | false |
huggingface/diffusers | 2,632,669,934 | I_kwDOHa8MBc6c61ru | 9,858 | https://github.com/huggingface/diffusers/issues/9858 | https://api.github.com/repos/huggingface/diffusers/issues/9858 | KeyError: 'train' | ### Describe the bug
Traceback (most recent call last):
File "/mnt/s1-gaowenbin-data-image-text2image-sdb/project/diffusers/examples/dreambooth/train_dreambooth_flux.py", line 1812, in <module>
main(args)
File "/mnt/s1-gaowenbin-data-image-text2image-sdb/project/diffusers/examples/dreambooth/train_dreamboot... | closed | completed | false | 7 | [
"bug"
] | [] | 2024-11-04T12:19:28Z | 2024-11-15T10:50:43Z | 2024-11-06T08:06:08Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | godwenbin | 52,455,335 | MDQ6VXNlcjUyNDU1MzM1 | User | false |
huggingface/diffusers | 1,423,364,789 | I_kwDOHa8MBc5U1tK1 | 986 | https://github.com/huggingface/diffusers/issues/986 | https://api.github.com/repos/huggingface/diffusers/issues/986 | Why not call train method in "Training a diffusers model" demo of butterfly ? | in [Training a diffusers model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) demo it provide a demo of train a UNet2DModel in notebook, in the function train_loop not call model.train() before train loop. But in many demos of projects always call model.train... | closed | completed | false | 0 | [] | [] | 2022-10-26T03:05:34Z | 2022-10-26T03:07:55Z | 2022-10-26T03:07:55Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | svjack | 27,874,014 | MDQ6VXNlcjI3ODc0MDE0 | User | false |
huggingface/diffusers | 2,633,114,694 | I_kwDOHa8MBc6c8iRG | 9,861 | https://github.com/huggingface/diffusers/issues/9861 | https://api.github.com/repos/huggingface/diffusers/issues/9861 | Flux training seems not to update the transformer model | ### Describe the bug
When I loaded the checkpoint of the transformer saved using the training script train_dreambooth_flux.py, I found it exactly the same as the pretrained flux-dev model. So I suspect that the model is not updating the parameters. Meanwhile, I notice that the optimizer.bin in the checkpoint save di... | open | null | false | 7 | [
"bug",
"stale"
] | [] | 2024-11-04T15:23:47Z | 2025-01-01T15:03:48Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | weixiong-ur | 34,222,007 | MDQ6VXNlcjM0MjIyMDA3 | User | false |
huggingface/diffusers | 2,634,763,677 | I_kwDOHa8MBc6dC02d | 9,865 | https://github.com/huggingface/diffusers/issues/9865 | https://api.github.com/repos/huggingface/diffusers/issues/9865 | Please update your tutorial | Have you learned software design? Why you delete or change some API but you keep them in your tutorial? Why do not you keep old API alive and have a new name? You bring unnecessary brother to study and application. | closed | completed | false | 3 | [] | [] | 2024-11-05T08:27:34Z | 2024-11-17T07:21:46Z | 2024-11-17T07:21:46Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | seetoclick | 96,339,820 | U_kgDOBb4HbA | User | false |
huggingface/diffusers | 2,634,821,886 | I_kwDOHa8MBc6dDDD- | 9,866 | https://github.com/huggingface/diffusers/issues/9866 | https://api.github.com/repos/huggingface/diffusers/issues/9866 | Flux controlnet can't be trained, do this script really work? | ### Describe the bug
run with one num processes, the code broke down and returns:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by ... | closed | completed | false | 4 | [
"bug",
"stale"
] | [] | 2024-11-05T08:51:57Z | 2024-12-05T15:19:12Z | 2024-12-05T15:19:11Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | liuyu19970607 | 45,445,049 | MDQ6VXNlcjQ1NDQ1MDQ5 | User | false |
huggingface/diffusers | 2,635,661,103 | I_kwDOHa8MBc6dGP8v | 9,867 | https://github.com/huggingface/diffusers/issues/9867 | https://api.github.com/repos/huggingface/diffusers/issues/9867 | FluxInpaintPipeline overrides pixels outside the mask | ### Describe the bug
When inpainting (with diffusers==0.31.0 and torch==2.4.1) using `FluxInpaintPipeline`, I get some pixels outside the mask (and pretty far away from the mask border) that are overrided whereas the mask at their indices was black.
With either flux schnell or dev.
Is this expected?
I could p... | closed | completed | false | 19 | [
"bug",
"stale"
] | [] | 2024-11-05T14:43:03Z | 2025-02-21T17:41:34Z | 2025-02-21T17:41:32Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Clement-Lelievre | 70,368,164 | MDQ6VXNlcjcwMzY4MTY0 | User | false |
huggingface/diffusers | 1,423,435,161 | I_kwDOHa8MBc5U1-WZ | 987 | https://github.com/huggingface/diffusers/issues/987 | https://api.github.com/repos/huggingface/diffusers/issues/987 | Speech to image pipeline, Unexpected output, green image | ### Describe the bug
Resuting image is greenish
### Reproduction
```
import torch
import matplotlib.pyplot as plt
from datasets import load_dataset
from diffusers import DiffusionPipeline
from transformers import (
WhisperForConditionalGeneration,
WhisperProcessor,
)
device = "cuda" if torch.c... | closed | completed | false | 6 | [
"bug"
] | [] | 2022-10-26T04:52:10Z | 2022-10-27T06:39:00Z | 2022-10-27T06:39:00Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | darwinharianto | 44,696,192 | MDQ6VXNlcjQ0Njk2MTky | User | false |
huggingface/diffusers | 2,636,954,636 | I_kwDOHa8MBc6dLLwM | 9,872 | https://github.com/huggingface/diffusers/issues/9872 | https://api.github.com/repos/huggingface/diffusers/issues/9872 | model_index.json Not Found | ### Describe the bug
I just ran these commands and encountered the following error. I'm mentioning this in case there's an issue with the installation source, so it might be resolved in the next stable release.
### Reproduction
```
!pip install --upgrade pip -qqq
!pip install git+https://github.com/huggingface/acc... | closed | completed | false | 1 | [
"bug"
] | [] | 2024-11-06T03:41:00Z | 2024-11-06T05:56:01Z | 2024-11-06T05:56:01Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | HashingTag | 24,617,729 | MDQ6VXNlcjI0NjE3NzI5 | User | false |
huggingface/diffusers | 2,637,177,836 | I_kwDOHa8MBc6dMCPs | 9,873 | https://github.com/huggingface/diffusers/issues/9873 | https://api.github.com/repos/huggingface/diffusers/issues/9873 | Add OmniGen: A Unified Image Generation Model Pipeline | ### Model/Pipeline/Scheduler description
Adding support for OmniGen, a unified image generation model that can handle multiple tasks including text-to-image, image editing, subject-driven generation, and various computer vision tasks within a single framework.
Key features,
- Unified architecture handling mul... | closed | completed | false | 12 | [
"stale",
"contributions-welcome"
] | [] | 2024-11-06T06:39:27Z | 2025-02-11T20:46:39Z | 2025-02-11T20:46:39Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ighoshsubho | 93,722,719 | U_kgDOBZYYXw | User | false |
huggingface/diffusers | 2,637,896,649 | I_kwDOHa8MBc6dOxvJ | 9,876 | https://github.com/huggingface/diffusers/issues/9876 | https://api.github.com/repos/huggingface/diffusers/issues/9876 | Why isn’t VRAM being released after training LoRA? | ### Describe the bug
When I use train_dreambooth_lora_sdxl.py, the VRAM is not released after training. How can I fix this?
### Reproduction
Not used.
### Logs
_No response_
### System Info
- 🤗 Diffusers version: 0.31.0.dev0
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.17
- Running on G... | open | null | false | 14 | [
"bug",
"stale"
] | [] | 2024-11-06T11:58:59Z | 2024-12-13T15:03:25Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | hjw-0909 | 56,182,057 | MDQ6VXNlcjU2MTgyMDU3 | User | false |
huggingface/diffusers | 1,423,478,686 | I_kwDOHa8MBc5U2I-e | 988 | https://github.com/huggingface/diffusers/issues/988 | https://api.github.com/repos/huggingface/diffusers/issues/988 | Training diffusers examples using torch FSDP | ### Describe the bug
I am able to run the current diffuser examples as specified in the readme steps
https://github.com/huggingface/diffusers/tree/main/examples/text_to_image
https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation
Would like to know if the examples can be tr... | closed | completed | false | 2 | [
"bug",
"stale"
] | [] | 2022-10-26T05:57:39Z | 2022-12-03T15:03:12Z | 2022-12-03T15:03:12Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | shrinath-suresh | 63,862,647 | MDQ6VXNlcjYzODYyNjQ3 | User | false |
huggingface/diffusers | 2,642,624,357 | I_kwDOHa8MBc6dgz9l | 9,886 | https://github.com/huggingface/diffusers/issues/9886 | https://api.github.com/repos/huggingface/diffusers/issues/9886 | PAG for StableDiffusionControlNetImg2ImgPipeline | PAG for StableDiffusionControlNetImg2ImgPipeline is missing | open | null | false | 6 | [
"stale",
"contributions-welcome"
] | [] | 2024-11-08T02:01:04Z | 2024-12-08T15:02:54Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | crapthings | 1,147,704 | MDQ6VXNlcjExNDc3MDQ= | User | false |
huggingface/diffusers | 2,642,721,627 | I_kwDOHa8MBc6dhLtb | 9,887 | https://github.com/huggingface/diffusers/issues/9887 | https://api.github.com/repos/huggingface/diffusers/issues/9887 | lack of support of loading lora weights in PixArtAlphaPipeline | pipe = PixArtAlphaPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.load_lora_weights("xxx")
When I want to load lora in PixArtAlphaPipeline, it throws this error:
AttributeError: 'PixArtAlphaPipeline' object has no attribute 'load_lora_weights'
May be we can add the lora support in this pipeline... | open | null | false | 8 | [
"contributions-welcome",
"lora"
] | [] | 2024-11-08T03:23:19Z | 2025-04-08T02:48:21Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | DaaadShot | 88,389,446 | MDQ6VXNlcjg4Mzg5NDQ2 | User | false |
huggingface/diffusers | 2,643,065,132 | I_kwDOHa8MBc6difks | 9,889 | https://github.com/huggingface/diffusers/issues/9889 | https://api.github.com/repos/huggingface/diffusers/issues/9889 | Segmentation fault (core dumped) when sdxl-turbo inference with torch 2.2.1+cu118 | ### Describe the bug
When sdxl-turbo inferencing, I encountered Segmentation fault (core dumped) after loading the model. And this will happen with torch 2.2.1+cu118(xformers0.0.25+cu118), and it will not happen with torch 2.0.1+cu118(xformers0.0.20+cu118). However, I need to run under torch 2.2.1, so could anyone hel... | open | null | false | 4 | [
"bug",
"stale"
] | [] | 2024-11-08T06:55:01Z | 2024-12-08T15:02:48Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rickyxie2004 | 127,063,398 | U_kgDOB5LVZg | User | false |
huggingface/diffusers | 1,423,643,650 | I_kwDOHa8MBc5U2xQC | 989 | https://github.com/huggingface/diffusers/issues/989 | https://api.github.com/repos/huggingface/diffusers/issues/989 | RuntimeError: 'weight' must be 2-D | ### Describe the bug
When I run the example of text_to_image.py, I got the problem shown in logs. I'm pretty sure I have it configured and running as the reademe.md requires.
### Reproduction
https://github.com/huggingface/diffusers/tree/main/examples/text_to_image/train_text_to_image.py
export MODEL_NAM... | closed | completed | false | 8 | [
"bug",
"stale"
] | [
"patil-suraj"
] | 2022-10-26T08:35:28Z | 2022-12-28T15:03:38Z | 2022-12-28T15:03:38Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | young-chao | 34,190,033 | MDQ6VXNlcjM0MTkwMDMz | User | false |
huggingface/diffusers | 2,643,463,086 | I_kwDOHa8MBc6dkAuu | 9,890 | https://github.com/huggingface/diffusers/issues/9890 | https://api.github.com/repos/huggingface/diffusers/issues/9890 | UNet2DConditionModel to onnx with torch.onnx faild | ### Describe the bug
I want to convert a unet to onnx using the way as [example](https://github.com/huggingface/diffusers/blob/main/scripts/convert_stable_diffusion_checkpoint_to_onnx.py), i can get unet result as ret,but when run into torch.onnx.export , an error reported, the code is here.
### Reproduction
impor... | open | null | false | 4 | [
"bug",
"stale"
] | [] | 2024-11-08T09:33:56Z | 2025-01-02T15:03:58Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | jesenzhang | 7,984,556 | MDQ6VXNlcjc5ODQ1NTY= | User | false |
huggingface/diffusers | 2,644,977,030 | I_kwDOHa8MBc6dpyWG | 9,894 | https://github.com/huggingface/diffusers/issues/9894 | https://api.github.com/repos/huggingface/diffusers/issues/9894 | Integration of DC-AE (Deep Compression Autoencoder) | **Is your feature request related to a problem? Please describe.**
No.
**Describe the solution you'd like.**
As the paper stated, DCAE can achieve awesome results using lower latent dimensions.
`Deep Compression Autoencoder for Efficient High-Resolution Diffusion Models`
https://github.com/mit-han-lab/efficien... | closed | completed | false | 5 | [] | [] | 2024-11-08T19:30:10Z | 2024-12-10T18:51:32Z | 2024-12-09T15:04:19Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | DefinitlyEvil | 8,604,063 | MDQ6VXNlcjg2MDQwNjM= | User | false |
huggingface/diffusers | 2,645,244,105 | I_kwDOHa8MBc6dqzjJ | 9,895 | https://github.com/huggingface/diffusers/issues/9895 | https://api.github.com/repos/huggingface/diffusers/issues/9895 | since commit [#5588725e8e] ,FluxPipeline inference yelds ERROR: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! | flux pipeline inference fails
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
when enable_sequential_cpu_offload() is used
i cannot test in other memory management settings because my 3090 wont allow it to run
it fails at #5588725e8e7be497839432e532... | closed | completed | false | 6 | [
"bug"
] | [] | 2024-11-08T21:55:02Z | 2024-11-09T23:04:15Z | 2024-11-09T23:03:03Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | paparico | 1,750,595 | MDQ6VXNlcjE3NTA1OTU= | User | false |
huggingface/diffusers | 2,646,922,169 | I_kwDOHa8MBc6dxNO5 | 9,899 | https://github.com/huggingface/diffusers/issues/9899 | https://api.github.com/repos/huggingface/diffusers/issues/9899 | about convert the transform(not unet) to cpkt | **Is your feature request related to a problem? Please describe.**
the newest stable diffusion model(like Stable Diffusion 3.5) don't have unet, they use transformer, however the script we have, like convert_diffusers_to_origin_sdxl.py only can convert unet model, i hope the script can convert transformer model
**D... | closed | completed | false | 1 | [] | [] | 2024-11-10T07:00:58Z | 2024-11-10T19:24:31Z | 2024-11-10T19:24:31Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | lmh12138 | 58,023,375 | MDQ6VXNlcjU4MDIzMzc1 | User | false |
huggingface/diffusers | 1,423,704,252 | I_kwDOHa8MBc5U3AC8 | 990 | https://github.com/huggingface/diffusers/issues/990 | https://api.github.com/repos/huggingface/diffusers/issues/990 | Implement `add_noise` in iPNDMScheduler | This method is missing from the recently-added IPNDMScheduler. It's not required for inference, but I think we should add it for consistency with all the others.
TODO:
- [ ] revert 56210ad when this is done. | closed | completed | false | 3 | [
"stale"
] | [] | 2022-10-26T09:17:57Z | 2022-11-30T13:57:23Z | 2022-11-30T13:57:22Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | pcuenca | 1,177,582 | MDQ6VXNlcjExNzc1ODI= | User | false |
huggingface/diffusers | 2,647,103,562 | I_kwDOHa8MBc6dx5hK | 9,900 | https://github.com/huggingface/diffusers/issues/9900 | https://api.github.com/repos/huggingface/diffusers/issues/9900 | Potential bug in repaint? | https://github.com/huggingface/diffusers/blob/dac623b59f52c58383a39207d5147aa34e0047cd/src/diffusers/schedulers/scheduling_repaint.py#L322
According to line5 of algorithm 1 in the paper, the second part in line 322 should remove the `**0.5`?
thanks! | closed | completed | false | 3 | [] | [] | 2024-11-10T10:41:26Z | 2024-12-16T19:38:22Z | 2024-12-16T19:38:14Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | jingweiz | 9,096,283 | MDQ6VXNlcjkwOTYyODM= | User | false |
huggingface/diffusers | 2,647,602,430 | I_kwDOHa8MBc6dzzT- | 9,901 | https://github.com/huggingface/diffusers/issues/9901 | https://api.github.com/repos/huggingface/diffusers/issues/9901 | 'tuple' object has no attribute 'shape' with processor=AttnProcessor() | ### Describe the bug
I'm working on modifying the attention, but when I set (processor=AttnProcessor()), it goes wrong combined with ip_adaptor:
hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape
AttributeError: 'tuple' object has no attribute 'shape'
### Reproductio... | closed | completed | false | 2 | [
"bug"
] | [] | 2024-11-10T20:24:30Z | 2024-11-17T23:42:12Z | 2024-11-17T23:42:12Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | massyzs | 45,664,796 | MDQ6VXNlcjQ1NjY0Nzk2 | User | false |
huggingface/diffusers | 2,647,706,071 | I_kwDOHa8MBc6d0MnX | 9,902 | https://github.com/huggingface/diffusers/issues/9902 | https://api.github.com/repos/huggingface/diffusers/issues/9902 | Conda Version got stuck at 0.30.3 | ### Describe the bug
Latest package can not be installed via conda. Here is the stuck PR: https://github.com/conda-forge/diffusers-feedstock/pull/71
### Reproduction
`conda install diffusers` installs 0.30.3
### Logs
_No response_
### System Info
Ubuntu 22.04, python 3.10
### Who can help?
_No response_ | closed | completed | true | 3 | [
"bug"
] | [] | 2024-11-10T22:12:56Z | 2024-11-12T16:02:54Z | 2024-11-12T16:02:54Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | lordsoffallen | 20,232,088 | MDQ6VXNlcjIwMjMyMDg4 | User | false |
huggingface/diffusers | 2,648,547,502 | I_kwDOHa8MBc6d3aCu | 9,904 | https://github.com/huggingface/diffusers/issues/9904 | https://api.github.com/repos/huggingface/diffusers/issues/9904 | FluxPipeline silently rounds the generated image shape | ### Describe the bug
When prompting the FluxPipeline class to generate an image with shape `(1920, 1080)`, the output image shape is rounded to `(1920, 1072)` which to me seems like the nearest multiple of 16 instead of 8.
As the FluxPipeline class accepts input sizes divisible by 8 I would expect them to remain co... | closed | completed | false | 7 | [
"bug"
] | [] | 2024-11-11T08:23:44Z | 2024-12-12T06:25:32Z | 2024-12-12T06:25:32Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | albertochimentiinbibo | 134,281,772 | U_kgDOCAD6LA | User | false |
huggingface/diffusers | 2,650,282,588 | I_kwDOHa8MBc6d-Bpc | 9,906 | https://github.com/huggingface/diffusers/issues/9906 | https://api.github.com/repos/huggingface/diffusers/issues/9906 | Stable Diffusion and SDXL Callbacks are fundamentally broken for prompt_embeds | ### Describe the bug
I'm working on implementing scheduled prompting in SDNext and realized that the callback in SDXL pipelines is fundamentally not functional. This also applies to Stable Diffusion Pipelines and likely others.
https://github.com/huggingface/diffusers/blob/dac623b59f52c58383a39207d5147aa34e0047cd/... | closed | completed | false | 15 | [
"bug"
] | [] | 2024-11-11T20:03:06Z | 2024-12-12T20:58:51Z | 2024-12-12T20:58:51Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | AI-Casanova | 54,461,896 | MDQ6VXNlcjU0NDYxODk2 | User | false |
huggingface/diffusers | 2,650,744,418 | I_kwDOHa8MBc6d_yZi | 9,907 | https://github.com/huggingface/diffusers/issues/9907 | https://api.github.com/repos/huggingface/diffusers/issues/9907 | Maximum sequence length warning when training Flux controlnet | ### Describe the bug
When training flux controlnet with the examples/controlnet/train_controlnet_flux.py script, I frequently get the following maximum sequence length CLIP truncation warning. Initially, I am wondering why there is no maximum_sequence_length training argument like there is for training base flux. Ad... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-11-12T01:07:21Z | 2024-11-23T00:28:48Z | 2024-11-23T00:28:47Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | neuron-party | 96,799,331 | U_kgDOBcUKYw | User | false |
huggingface/diffusers | 2,651,271,252 | I_kwDOHa8MBc6eBzBU | 9,908 | https://github.com/huggingface/diffusers/issues/9908 | https://api.github.com/repos/huggingface/diffusers/issues/9908 | Add Additional AttentionProcessor Types to Enhance Functionality | **What API design would you like to have changed or added to the library? Why?**
The `AttentionProcessor` type defined in `diffusers.models.attention_processor.py` does not include all AttnProcessor types. For example, in Stable Diffusion 3, the `SD3Transformer2DModel` uses `JointAttnProcessor2_0`. However, attempting... | closed | completed | false | 0 | [] | [] | 2024-11-12T07:18:27Z | 2024-11-18T07:18:13Z | 2024-11-18T07:18:13Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Prgckwb | 55,102,558 | MDQ6VXNlcjU1MTAyNTU4 | User | false |
huggingface/diffusers | 2,651,421,926 | I_kwDOHa8MBc6eCXzm | 9,910 | https://github.com/huggingface/diffusers/issues/9910 | https://api.github.com/repos/huggingface/diffusers/issues/9910 | add prompt_embeds to StableDiffusionLatentUpscalePipeline please!!! | null | closed | completed | false | 7 | [
"stale"
] | [] | 2024-11-12T08:28:49Z | 2025-01-12T05:52:40Z | 2025-01-12T05:52:40Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sh4wn | 1,224,229 | MDQ6VXNlcjEyMjQyMjk= | User | false |
huggingface/diffusers | 2,651,429,533 | I_kwDOHa8MBc6eCZqd | 9,911 | https://github.com/huggingface/diffusers/issues/9911 | https://api.github.com/repos/huggingface/diffusers/issues/9911 | multi controlnet error for flux when using 2 controlnet with different layer length | ### Describe the bug
in flux multicontrolnet when i using 2 controlnet(https://huggingface.co/promeai/FLUX.1-controlnet-lineart-promeai and https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Canny/blob/main/config.json)
the lineart controlnet has 4 double layers and the canny controlnet has 5 double layers, we ... | open | null | false | 6 | [
"bug",
"wip"
] | [] | 2024-11-12T08:32:40Z | 2025-02-05T08:47:44Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | PromeAIpro | 178,361,217 | U_kgDOCqGTgQ | User | false |
huggingface/diffusers | 2,651,796,352 | I_kwDOHa8MBc6eDzOA | 9,913 | https://github.com/huggingface/diffusers/issues/9913 | https://api.github.com/repos/huggingface/diffusers/issues/9913 | Why set_lora_device doesn't work | ### Describe the bug
When I load serverl loras with set_lora_device(), the GPU memory continues to grow, cames from 20G to 25G, this function doesn't work
### Reproduction
for key in lora_list:
weight_name = key + ".safetensors"
pipe.load_lora_weights(lora_path, weight_name=weight_name, adapter_name=key, l... | open | null | false | 8 | [
"bug",
"stale",
"needs-code-example"
] | [] | 2024-11-12T10:44:54Z | 2025-02-05T15:04:33Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | West2022 | 103,084,901 | U_kgDOBiTzZQ | User | false |
huggingface/diffusers | 2,652,106,426 | I_kwDOHa8MBc6eE-66 | 9,914 | https://github.com/huggingface/diffusers/issues/9914 | https://api.github.com/repos/huggingface/diffusers/issues/9914 | [LoRA Flux Xlabs] Error loading trained LoRA with Xlabs on Diffusers (Fix proposal) | ### Describe the bug
Proposal to update the following script for Xlab Flux LoRA conversion due to a mismatch between keys in the state dictionary.
`src/diffusers/loaders/lora_conversion_utils.py`
When mapping single_blocks layers, if the model trained in Flux contains single_blocks, these keys are not updated and ... | closed | completed | false | 2 | [
"bug"
] | [] | 2024-11-12T12:56:50Z | 2024-11-12T15:45:18Z | 2024-11-12T15:45:18Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | raulmosa | 55,974,614 | MDQ6VXNlcjU1OTc0NjE0 | User | false |
huggingface/diffusers | 1,423,838,344 | I_kwDOHa8MBc5U3gyI | 992 | https://github.com/huggingface/diffusers/issues/992 | https://api.github.com/repos/huggingface/diffusers/issues/992 | Strength argument for StableDiffusionInpaintPipeline.__call__() in diffusers==0.6.0 | In previous inpainting pipeline(now StableDiffusionInpaintPipelineLegacy ) has the `strength` argument for set an offset, an independant argument from `num_inference_steps` . Is there any reason to remove that feature? Intuitively it could be different, small num_inference_steps vs large num_inference_steps x low stren... | closed | completed | false | 2 | [] | [] | 2022-10-26T11:03:51Z | 2023-05-19T10:16:53Z | 2022-10-27T03:37:54Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | mrgomdev | 18,096,269 | MDQ6VXNlcjE4MDk2MjY5 | User | false |
huggingface/diffusers | 2,656,598,908 | I_kwDOHa8MBc6eWHt8 | 9,923 | https://github.com/huggingface/diffusers/issues/9923 | https://api.github.com/repos/huggingface/diffusers/issues/9923 | [Community] Add MagicTailor Personalization Training Script | ### Model/Pipeline/Scheduler description
Recent advancements in fine-tuning techniques for text-to-image (T2I) personalization still struggle to distill visual concepts from reference images when there are both image-wide and spatially localized concepts present in each reference image.
The techniques in this paper... | open | null | false | 1 | [
"stale",
"contributions-welcome"
] | [] | 2024-11-13T19:43:43Z | 2024-12-14T15:03:07Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | clarencechen | 8,482,341 | MDQ6VXNlcjg0ODIzNDE= | User | false |
huggingface/diffusers | 2,657,163,583 | I_kwDOHa8MBc6eYRk_ | 9,924 | https://github.com/huggingface/diffusers/issues/9924 | https://api.github.com/repos/huggingface/diffusers/issues/9924 | Can we get more schedulers for flow based models such as SD3, SD3.5, and flux | It seems advanced schedulers such as DDIM and DPM++ 2M do not work with flow based models such as SD3, SD3.5, and flux.
However, I only see 2 flow based schedulers in diffusers codebase:
FlowMatchEulerDiscreteScheduler, and
FlowMatchHeunDiscreteScheduler
I tried to use DPMSolverMultistepScheduler, but it do... | open | null | false | 40 | [
"wip",
"scheduler"
] | [] | 2024-11-14T00:07:56Z | 2025-01-14T18:31:12Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | linjiapro | 4,970,879 | MDQ6VXNlcjQ5NzA4Nzk= | User | false |
huggingface/diffusers | 2,657,492,299 | I_kwDOHa8MBc6eZh1L | 9,926 | https://github.com/huggingface/diffusers/issues/9926 | https://api.github.com/repos/huggingface/diffusers/issues/9926 | gguf quantize and speed up support | **Is your feature request related to a problem? Please describe.**
GGUF is becoming the mainstream method for large model compression and accelerated inference. Transformers currently supports the loading of T5's GGUF format, but inference does not support acceleration.
**Describe the solution you'd like.**
If mod... | closed | completed | false | 6 | [
"stale"
] | [] | 2024-11-14T03:57:02Z | 2025-01-13T15:07:09Z | 2025-01-13T15:07:08Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chuck-ma | 74,402,255 | MDQ6VXNlcjc0NDAyMjU1 | User | false |
huggingface/diffusers | 2,657,866,846 | I_kwDOHa8MBc6ea9Re | 9,927 | https://github.com/huggingface/diffusers/issues/9927 | https://api.github.com/repos/huggingface/diffusers/issues/9927 | HeaderTooLarge when train controlnet with sdv3 | ### Describe the bug
Hello, I tried diffuser to train controlnet with sdv3 but it didn't start training and send `safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge` feedback. I don't know how to handle it.
### Reproduction
Follow the README_v3 guide.
### Logs
```shell
(diffusers) [... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-11-14T07:28:03Z | 2024-11-21T13:02:05Z | 2024-11-21T13:02:05Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Viola-Siemens | 31,766,783 | MDQ6VXNlcjMxNzY2Nzgz | User | false |
huggingface/diffusers | 1,423,924,622 | I_kwDOHa8MBc5U312O | 993 | https://github.com/huggingface/diffusers/issues/993 | https://api.github.com/repos/huggingface/diffusers/issues/993 | Data type mismatch when using stable diffusion in fp16 | ### Describe the bug
When run following code to try stable diffusion v1.5,
```Python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
"local_project_path/stable-diffusion-v1-5",
torch_dtype=torch.float16, revision="fp16"
)
pipe = pipe.to("cuda")
... | closed | completed | false | 12 | [
"bug",
"stale"
] | [] | 2022-10-26T12:15:30Z | 2023-03-06T18:28:31Z | 2022-12-06T12:20:37Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ParadoxZW | 32,508,168 | MDQ6VXNlcjMyNTA4MTY4 | User | false |
huggingface/diffusers | 2,660,504,884 | I_kwDOHa8MBc6elBU0 | 9,930 | https://github.com/huggingface/diffusers/issues/9930 | https://api.github.com/repos/huggingface/diffusers/issues/9930 | [PAG] - Adaptive Scale bug | ### Describe the bug
I am looking for the purpose of the PAG adaptive scale? Because I was passing a value in it, for example 5.0, and passing 3.0 in the PAG scale, according to the implemented code we will have a negative number and the scale will return 0 and the PAG will not be applied and I did not find an expla... | open | null | false | 1 | [
"bug",
"stale"
] | [] | 2024-11-15T02:00:19Z | 2024-12-15T15:03:05Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | elismasilva | 40,075,615 | MDQ6VXNlcjQwMDc1NjE1 | User | false |
huggingface/diffusers | 2,660,546,729 | I_kwDOHa8MBc6elLip | 9,933 | https://github.com/huggingface/diffusers/issues/9933 | https://api.github.com/repos/huggingface/diffusers/issues/9933 | StableDiffusion3Img2ImgPipeline.__call__() is missing width and height parameters | ### Describe the bug
The docstring for the `StableDiffusion3Img2ImgPipeline.__call__()` function includes `width` and `height` parameters, but the function itself does not include these parameters.
Is this a typo or is width and height supposed to be handled by the function?
Source file:
`diffusers/src/diffusers/... | closed | completed | false | 11 | [
"bug"
] | [] | 2024-11-15T02:46:46Z | 2024-11-20T03:00:36Z | 2024-11-20T00:53:05Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chie2727 | 72,862,115 | MDQ6VXNlcjcyODYyMTE1 | User | false |
huggingface/diffusers | 2,663,298,531 | I_kwDOHa8MBc6evrXj | 9,936 | https://github.com/huggingface/diffusers/issues/9936 | https://api.github.com/repos/huggingface/diffusers/issues/9936 | nccl timeout on train_controlnet_flux.py when doing multigpu training | ### Describe the bug
Running train_controlnet_flux.py with multiple gpus results in a NCCL timeout error after N iterations of train_dataset.map(). This error can be partially solved by initializing Accelerator with a greater timeout argument in the following way:
```
from accelerate import InitProcessGroupKwargs
f... | closed | completed | false | 9 | [
"bug",
"stale"
] | [] | 2024-11-15T22:08:19Z | 2025-01-12T05:49:17Z | 2025-01-12T05:49:17Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | neuron-party | 96,799,331 | U_kgDOBcUKYw | User | false |
huggingface/diffusers | 1,423,969,011 | I_kwDOHa8MBc5U4Arz | 994 | https://github.com/huggingface/diffusers/issues/994 | https://api.github.com/repos/huggingface/diffusers/issues/994 | Question about CLIP-guided SD | In cond_fn of examples/community/clip_guided_stable_diffusion.py, we have
```python
# "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5)
fac = torch.sqrt(beta_prod_t)
sample = pred_original_sample ... | closed | completed | false | 12 | [
"stale"
] | [
"patil-suraj"
] | 2022-10-26T12:50:52Z | 2023-05-11T18:11:00Z | 2023-01-11T15:05:36Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | KevinGoodman | 43,786,278 | MDQ6VXNlcjQzNzg2Mjc4 | User | false |
huggingface/diffusers | 2,665,403,888 | I_kwDOHa8MBc6e3tXw | 9,941 | https://github.com/huggingface/diffusers/issues/9941 | https://api.github.com/repos/huggingface/diffusers/issues/9941 | Error running stable diffusion in colab. | ### Describe the bug
I have been using a notebook that I found on a youtube video, so that I could use Stable Diffusion to generate images in colab. and it was working for months. but 5 days ago the same code started generating errors and I can no longer use it. Can someone help me?
This is the error I get...
A... | closed | completed | false | 6 | [
"bug",
"stale"
] | [] | 2024-11-17T06:55:59Z | 2025-01-12T05:48:50Z | 2025-01-12T05:48:49Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Mikeskates | 188,498,638 | U_kgDOCzxCzg | User | false |
huggingface/diffusers | 2,665,654,096 | I_kwDOHa8MBc6e4qdQ | 9,942 | https://github.com/huggingface/diffusers/issues/9942 | https://api.github.com/repos/huggingface/diffusers/issues/9942 | Unable to install pip install diffusers>=0.32.0dev | ### Describe the bug
I am installing the following version
pip install diffusers>=0.32.0dev
However it does nothing
```
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>pip install diffusers>=0.32.0dev
(c:\aitools\CogVideo\cv_venv) C:\aitools\CogVideo>
```
I even uninstalled the previous version
```... | closed | completed | false | 0 | [
"bug"
] | [] | 2024-11-17T10:26:19Z | 2024-11-17T12:27:23Z | 2024-11-17T12:27:23Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,665,703,051 | I_kwDOHa8MBc6e42aL | 9,944 | https://github.com/huggingface/diffusers/issues/9944 | https://api.github.com/repos/huggingface/diffusers/issues/9944 | Loading size mismatching for SD3.5 Medium | ### Describe the bug
We have two pieces of code:
1. naive pipeline loading for sd3.5:
```
import torch
from diffusers import StableDiffusion3Pipeline
torch.manual_seed(42)
torch.cuda.manual_seed_all(42)
pipe = StableDiffusion3Pipeline.from_pretrained("/path/to/stable-diffusion-3.5-medium", torch_dtype=t... | closed | completed | false | 1 | [
"bug"
] | [] | 2024-11-17T10:47:15Z | 2024-11-17T11:39:48Z | 2024-11-17T11:39:47Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | pprp | 29,230,784 | MDQ6VXNlcjI5MjMwNzg0 | User | false |
huggingface/diffusers | 2,666,992,390 | I_kwDOHa8MBc6e9xMG | 9,946 | https://github.com/huggingface/diffusers/issues/9946 | https://api.github.com/repos/huggingface/diffusers/issues/9946 | xformers-enable_xformers_memory_efficient_attention | ### Describe the bug
The error is :
python3.10/site-packages/diffusers/models/transformers/transformer_flux.py", line 180, in forward
attn_output, context_attn_output = self.attn(
ValueError: not enough values to unpack (expected 2, got 1)...
diffusers==0.32.0.dev0
torch==2.5.1
xformers==0.0.28.post3
transf... | closed | completed | false | 18 | [
"bug"
] | [] | 2024-11-18T03:27:35Z | 2024-11-19T07:08:22Z | 2024-11-19T06:15:04Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | algorithmconquer | 10,041,695 | MDQ6VXNlcjEwMDQxNjk1 | User | false |
huggingface/diffusers | 2,668,237,675 | I_kwDOHa8MBc6fChNr | 9,948 | https://github.com/huggingface/diffusers/issues/9948 | https://api.github.com/repos/huggingface/diffusers/issues/9948 | [Tests] add fast GPU workflow to the PR CI | As discussed with @DN6, we are considering to add a workflow that would run [fast GPU tests](https://github.com/huggingface/diffusers/blob/345907f32de71c8ca67f3d9d00e37127192da543/.github/workflows/push_tests.yml#L1) on PRs that affect the core functionality of the library.
We might make it a must in order for thos... | closed | completed | false | 2 | [
"stale"
] | [
"DN6",
"sayakpaul"
] | 2024-11-18T11:29:02Z | 2025-02-20T17:37:02Z | 2025-02-20T17:37:02Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 2,668,256,191 | I_kwDOHa8MBc6fClu_ | 9,949 | https://github.com/huggingface/diffusers/issues/9949 | https://api.github.com/repos/huggingface/diffusers/issues/9949 | [Experimental] expose dynamic upcasting of layers as experimental APIs | Functionalities like https://github.com/huggingface/diffusers/pull/9177 are immensely helpful to load a checkpoint in say, `torch.float8_e5m2`, perform computation in say, `torch.float16`, and then keep the result in `torch.float8_e5m2` again.
Even though this feature isn't immediately compatible with `torch.compil... | closed | completed | false | 11 | [
"stale"
] | [
"DN6",
"a-r-r-o-w"
] | 2024-11-18T11:32:41Z | 2025-01-22T14:19:38Z | 2025-01-22T14:19:38Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 1,424,240,414 | I_kwDOHa8MBc5U5C8e | 995 | https://github.com/huggingface/diffusers/issues/995 | https://api.github.com/repos/huggingface/diffusers/issues/995 | Inpainting pipeline with larger resolution? | Hello,
Recent inpainting pipeline with explicit mask input is amazing, but it has some incompatible points with the original pipeline.
I think the most important part is resolution, in that it only supports 512 size.
In my opinion, theoretically new inpainting pipeline can be extended to larger resolution just like the ... | closed | completed | false | 2 | [] | [] | 2022-10-26T15:16:42Z | 2022-10-26T18:51:41Z | 2022-10-26T18:51:41Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | juno-hwang | 52,326,857 | MDQ6VXNlcjUyMzI2ODU3 | User | false |
huggingface/diffusers | 2,668,498,198 | I_kwDOHa8MBc6fDg0W | 9,950 | https://github.com/huggingface/diffusers/issues/9950 | https://api.github.com/repos/huggingface/diffusers/issues/9950 | Improve SD35 LoRA support to cover most popular LoRA formats | SD3.x pipeline does implement `SD3LoraLoaderMixin` and as such `load_lora_weights` on SD3.x does "work".
However, attempting to load any of the most popular LoRAs results in silent failure:
load is successful without any warnings, but loads ZERO keys.
Looking at implementation at: <https://github.com/hugging... | open | null | false | 9 | [] | [
"sayakpaul"
] | 2024-11-18T13:15:22Z | 2025-03-10T03:58:54Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 2,669,219,445 | I_kwDOHa8MBc6fGQ51 | 9,951 | https://github.com/huggingface/diffusers/issues/9951 | https://api.github.com/repos/huggingface/diffusers/issues/9951 | All schedulers have broken beta and exponential sigma methods | ### Describe the bug
following schedulers implement beta and exponential sigma methods:
```log
scheduling_deis_multistep.py
scheduling_dpmsolver_multistep.py
scheduling_dpmsolver_multistep_inverse.py
scheduling_dpmsolver_sde.py
scheduling_dpmsolver_singlestep.py
scheduling_euler_discrete.py
scheduling_heun_d... | closed | completed | false | 2 | [
"bug"
] | [] | 2024-11-18T16:53:33Z | 2024-11-20T11:20:36Z | 2024-11-20T11:20:36Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 2,669,594,167 | I_kwDOHa8MBc6fHsY3 | 9,953 | https://github.com/huggingface/diffusers/issues/9953 | https://api.github.com/repos/huggingface/diffusers/issues/9953 | Moving a pipeline that has a quantized component, to cuda, causes an error | ### Describe the bug
After trying out the new quantization method added to the diffusers library, I encountered a bug. I could not move the pipeline to cuda as I got this error
```
Traceback (most recent call last):
File "/workspace/test.py", line 12, in <module>
pipe.to("cuda")
File "/usr/local/lib/p... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-11-18T19:12:55Z | 2024-11-20T13:39:01Z | 2024-11-20T12:31:10Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Leommm-byte | 110,293,491 | U_kgDOBpLx8w | User | false |
huggingface/diffusers | 2,669,948,365 | I_kwDOHa8MBc6fJC3N | 9,957 | https://github.com/huggingface/diffusers/issues/9957 | https://api.github.com/repos/huggingface/diffusers/issues/9957 | BDIA-DDIM Scheduler | ### Model/Pipeline/Scheduler description
The BDIA-DDIM scheduler was first applied in stable diffusion in the ECCV 2024 paper ["Exact Diffusion Inversion via Bi-directional Integration Approximation"](https://arxiv.org/abs/2307.10829) by Guoqiang Zhang, J. P. Lewis, and W. Bastiaan Kleijn. Below are results from the i... | open | null | false | 15 | [
"stale"
] | [] | 2024-11-18T21:26:25Z | 2024-12-30T15:03:24Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Jdh235 | 146,073,856 | U_kgDOCLTpAA | User | false |
huggingface/diffusers | 2,670,317,333 | I_kwDOHa8MBc6fKc8V | 9,958 | https://github.com/huggingface/diffusers/issues/9958 | https://api.github.com/repos/huggingface/diffusers/issues/9958 | AutoencoderKLTemporalDecoder support for tiling | **Is your feature request related to a problem? Please describe.**
I would like to use the tiling feature from AutoEncoderKL in the AutoencoderKLTemporalDecoder.
**Describe the solution you'd like.**
Implement tiling with the AutoencoderKLTemporalDecoder
**Describe alternatives you've considered.**
Manually ti... | open | null | false | 1 | [
"enhancement"
] | [
"a-r-r-o-w"
] | 2024-11-18T23:36:33Z | 2024-11-20T00:43:59Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | lucienfostier | 770,272 | MDQ6VXNlcjc3MDI3Mg== | User | false |
huggingface/diffusers | 2,670,491,628 | I_kwDOHa8MBc6fLHfs | 9,959 | https://github.com/huggingface/diffusers/issues/9959 | https://api.github.com/repos/huggingface/diffusers/issues/9959 | xlab-flux's lora load error in single_blocks | ### Describe the bug

### Reproduction
```
import re
def _convert_xlabs_flux_lora_to_diffusers(old_state_dict):
new_state_dict = {}
orig_keys = list(old_state_dict.keys())
def handle_qkv(sds_sd, ait_sd, ... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-11-19T02:05:41Z | 2024-11-21T03:52:12Z | 2024-11-20T22:54:26Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zhaowendao30 | 49,309,820 | MDQ6VXNlcjQ5MzA5ODIw | User | false |
huggingface/diffusers | 2,671,552,606 | I_kwDOHa8MBc6fPKhe | 9,962 | https://github.com/huggingface/diffusers/issues/9962 | https://api.github.com/repos/huggingface/diffusers/issues/9962 | got an unexpected keyword argument 'use_cuda_graph' | ### Describe the bug
Cannot import /home/ubuntu/16T/lsm/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper module for custom nodes: Failed to import diffusers.models.autoencoders.autoencoder_kl_cogvideox because of the following error (look up to see its traceback):
autotune() got an unexpected keyword argument 'use_cuda_... | closed | completed | false | 7 | [
"bug"
] | [] | 2024-11-19T09:57:22Z | 2025-06-27T04:53:55Z | 2024-11-20T01:54:54Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | pyoliu | 114,060,912 | U_kgDOBsxucA | User | false |
huggingface/diffusers | 2,673,787,635 | I_kwDOHa8MBc6fXsLz | 9,966 | https://github.com/huggingface/diffusers/issues/9966 | https://api.github.com/repos/huggingface/diffusers/issues/9966 | Add support for SD 3.5 IP-Adapters | First IP-Adapter for SD 3.5 just released at <https://huggingface.co/InstantX/SD3.5-Large-IP-Adapter>
with code for the modified pipeline available in the same location.
Ask is to integrate support for SD 3.5 IP-Adapter into standard t2i/i2i/inpaint pipelines
@yiyixuxu @sayakpaul @DN6 @asomoza
| closed | completed | false | 9 | [
"New pipeline/model",
"contributions-welcome"
] | [] | 2024-11-19T22:44:28Z | 2025-02-12T16:24:03Z | 2025-02-12T16:24:03Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 2,674,103,239 | I_kwDOHa8MBc6fY5PH | 9,967 | https://github.com/huggingface/diffusers/issues/9967 | https://api.github.com/repos/huggingface/diffusers/issues/9967 | Tried to run diffusers/stable_diffusion.ipynb on Mac M3 and ran into errors | ### Describe the bug
I tried to run diffusers/stable_diffusion.ipynb on Mac M3 and it failed in several parts, including:
* nvidia-smi call doesn't work
* wasn't able to download the torch models due to some weird issues like `ImportError: cannot import name 'DIFFUSERS_SLOW_IMPORT' from 'diffusers.utils'` due to c... | closed | completed | false | 9 | [
"bug",
"stale"
] | [] | 2024-11-20T02:24:47Z | 2024-12-30T21:59:40Z | 2024-12-30T21:59:40Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chrismattmann | 395,887 | MDQ6VXNlcjM5NTg4Nw== | User | false |
huggingface/diffusers | 2,674,696,687 | I_kwDOHa8MBc6fbKHv | 9,970 | https://github.com/huggingface/diffusers/issues/9970 | https://api.github.com/repos/huggingface/diffusers/issues/9970 | Training DreamBooth SDXL LoRA script failed | ### Describe the bug
I used the default script for training a SDXL with Lora with dog dataset, but it output the following error:
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type ... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-11-20T07:03:24Z | 2024-11-20T09:14:26Z | 2024-11-20T09:14:06Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zyf2316 | 87,761,159 | MDQ6VXNlcjg3NzYxMTU5 | User | false |
huggingface/diffusers | 2,674,735,002 | I_kwDOHa8MBc6fbTea | 9,971 | https://github.com/huggingface/diffusers/issues/9971 | https://api.github.com/repos/huggingface/diffusers/issues/9971 | DEISMultistepScheduler not working on FLUX | ### Describe the bug
DEISMultistepScheduler not working on FLUX
### Reproduction
import torch
from diffusers import FluxPipeline
from diffusers.schedulers import DEISMultistepScheduler
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.scheduler = DEISMul... | open | null | false | 4 | [
"bug",
"stale"
] | [] | 2024-11-20T07:24:41Z | 2024-12-20T15:03:23Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | PromeAIpro | 178,361,217 | U_kgDOCqGTgQ | User | false |
huggingface/diffusers | 2,675,063,774 | I_kwDOHa8MBc6fcjve | 9,972 | https://github.com/huggingface/diffusers/issues/9972 | https://api.github.com/repos/huggingface/diffusers/issues/9972 | CogX fails on MacOS requesting a 10TB buffer. | ### Describe the bug
Tried to run the THUDM/CogVideoX1.5-5B model using Diffusers from git (20th Nov, approx 8:30am GMT)
The script failed with
```
hidden_states = F.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Invalid buffer size: 10973.48 GB
```
While th... | open | null | false | 9 | [
"bug",
"stale"
] | [] | 2024-11-20T09:04:39Z | 2025-01-12T15:03:31Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Vargol | 62,868 | MDQ6VXNlcjYyODY4 | User | false |
huggingface/diffusers | 2,675,102,371 | I_kwDOHa8MBc6fctKj | 9,973 | https://github.com/huggingface/diffusers/issues/9973 | https://api.github.com/repos/huggingface/diffusers/issues/9973 | "ValueError: Attempting to unscale FP16 gradients" for training dreambooth lora sdxl script | ### Describe the bug
when I was training the dreambooth lora sdxl script on the dog dataset, it output the following errors:
ValueError: Attempting to unscale FP16 gradients.
### Reproduction
export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="lora-trained-xl"
exp... | closed | completed | false | 1 | [
"bug"
] | [] | 2024-11-20T09:17:35Z | 2024-11-20T09:33:50Z | 2024-11-20T09:33:50Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zyf2316 | 87,761,159 | MDQ6VXNlcjg3NzYxMTU5 | User | false |
huggingface/diffusers | 2,675,696,723 | I_kwDOHa8MBc6fe-RT | 9,974 | https://github.com/huggingface/diffusers/issues/9974 | https://api.github.com/repos/huggingface/diffusers/issues/9974 | add expected parameters to controlnet_sd3 | ### Describe the bug
the transformer model introduced in SD3 expects the below parameters (transformer_sd3.py). there are two missing parameters that remain undefined in the SD3ControlNetModel class (controlnet_sd3.py) - dual_attention_layers and qk_norm.
```
@register_to_config
def __init__(
self,
... | closed | completed | false | 4 | [
"bug"
] | [] | 2024-11-20T12:21:01Z | 2024-11-24T13:28:48Z | 2024-11-24T13:28:48Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sarahahtee | 81,594,044 | MDQ6VXNlcjgxNTk0MDQ0 | User | false |
huggingface/diffusers | 2,675,933,827 | I_kwDOHa8MBc6ff4KD | 9,976 | https://github.com/huggingface/diffusers/issues/9976 | https://api.github.com/repos/huggingface/diffusers/issues/9976 | ControlNet broken from_single_file | ### Describe the bug
controlnet loader from_single_file was originally added via #4084
and method `ControlNet.from_single_file()` works for non-converted controlnets.
but for controlnets in safetensors format that contain already converted state_dict, it errors out.
it's not reasonable to expect from user to k... | closed | completed | false | 7 | [
"bug"
] | [
"DN6"
] | 2024-11-20T13:46:14Z | 2024-11-22T12:22:53Z | 2024-11-22T12:22:53Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 2,677,860,934 | I_kwDOHa8MBc6fnOpG | 9,979 | https://github.com/huggingface/diffusers/issues/9979 | https://api.github.com/repos/huggingface/diffusers/issues/9979 | flux img2img controlnet channels error | ### Describe the bug
When I use flux's img2img controlnet for inference, a channel error occurs.
### Reproduction
```python
import numpy as np
import torch
import cv2
from PIL import Image
from diffusers.utils import load_image
from diffusers import FluxControlNetImg2ImgPipeline, FluxControlNetPipeline
fr... | closed | completed | false | 10 | [
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | [] | 2024-11-21T03:39:12Z | 2025-04-23T20:43:51Z | 2025-04-23T20:43:51Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | wen020 | 54,690,997 | MDQ6VXNlcjU0NjkwOTk3 | User | false |
huggingface/diffusers | 2,678,676,465 | I_kwDOHa8MBc6fqVvx | 9,983 | https://github.com/huggingface/diffusers/issues/9983 | https://api.github.com/repos/huggingface/diffusers/issues/9983 | Using StableDiffusionControlNetImg2ImgPipeline Enable_vae_tiling(), seemingly fixed the patch is 512 x 512, where should I set the relevant parameters | ```
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "a beautiful landscape photograph"
pipe.enable_vae_tiling()
``` | closed | completed | false | 6 | [] | [
"a-r-r-o-w"
] | 2024-11-21T09:21:24Z | 2024-12-02T08:32:52Z | 2024-12-02T08:32:52Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | reaper19991110 | 54,790,092 | MDQ6VXNlcjU0NzkwMDky | User | false |
huggingface/diffusers | 2,681,646,230 | I_kwDOHa8MBc6f1qyW | 9,990 | https://github.com/huggingface/diffusers/issues/9990 | https://api.github.com/repos/huggingface/diffusers/issues/9990 | How to diagnose problems in training custom inpaint model | ### Discussed in https://github.com/huggingface/diffusers/discussions/9989
<div type='discussions-op-text'>
<sup>Originally posted by **Marquess98** November 22, 2024</sup>
What I want to do is to perform image inpainting when the input is a set of multimodal images, using sdxl as the pre trained model. But the... | closed | completed | true | 2 | [] | [] | 2024-11-22T03:16:50Z | 2024-11-23T13:37:53Z | 2024-11-23T13:37:53Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Marquess98 | 68,719,779 | MDQ6VXNlcjY4NzE5Nzc5 | User | false |
huggingface/diffusers | 2,683,658,713 | I_kwDOHa8MBc6f9WHZ | 9,995 | https://github.com/huggingface/diffusers/issues/9995 | https://api.github.com/repos/huggingface/diffusers/issues/9995 | Support Lightricks LTX-Video | ## Lightricks LTX-Video
_LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real-time. It produces 24 FPS videos at a 768x512 resolution faster than they can be watched. Trained on a large-scale dataset of diverse videos, the model generates high-resolution videos wi... | closed | completed | false | 4 | [
"New pipeline/model",
"Good second issue",
"contributions-welcome"
] | [
"DN6",
"a-r-r-o-w"
] | 2024-11-22T15:46:52Z | 2024-12-17T21:38:52Z | 2024-12-17T21:38:52Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | hlky | 106,811,348 | U_kgDOBl3P1A | User | false |
huggingface/diffusers | 2,683,890,697 | I_kwDOHa8MBc6f-OwJ | 9,996 | https://github.com/huggingface/diffusers/issues/9996 | https://api.github.com/repos/huggingface/diffusers/issues/9996 | Flux.1 cannot load standard transformer in nf4 | ### Describe the bug
loading different flux transformer models is fine except for nf4.
it works for 1% of the fine-tunes provided on Hugging Face, but it doesn't work for 99% of the standard fine-tunes available on CivitAI.
example of such model: <https://civitai.com/models/118111?modelVersionId=1009051>
*note* i'm using `... | open | null | false | 16 | [
"bug",
"wip"
] | [
"DN6"
] | 2024-11-22T16:55:11Z | 2024-12-28T19:56:54Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 2,683,938,632 | I_kwDOHa8MBc6f-adI | 9,997 | https://github.com/huggingface/diffusers/issues/9997 | https://api.github.com/repos/huggingface/diffusers/issues/9997 | Support new loras from Flux Authors | I have tried this code:
```python
import torch
from diffusers import AutoPipelineForImage2Image
pipeline = AutoPipelineForImage2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipeline.unload_lora_weights()
pipeline.load_lora_weights("black-forest-labs/FLUX.1-Redux-dev")
pipel... | closed | completed | false | 4 | [
"stale"
] | [] | 2024-11-22T17:09:35Z | 2025-01-12T05:47:21Z | 2025-01-12T05:47:21Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | MohamedAliRashad | 26,205,298 | MDQ6VXNlcjI2MjA1Mjk4 | User | false |
huggingface/diffusers | 2,684,406,561 | I_kwDOHa8MBc6gAMsh | 9,998 | https://github.com/huggingface/diffusers/issues/9998 | https://api.github.com/repos/huggingface/diffusers/issues/9998 | EMA training for PEFT LoRAs | **Is your feature request related to a problem? Please describe.**
EMAModel in Diffusers is not plumbed for interacting well with PEFT LoRAs, which leaves users to implement their own.
The idea has been thrown around that LoRA does not benefit from EMA, and research papers have suggested as much. However, after curiosity ... | open | null | false | 6 | [
"enhancement"
] | [] | 2024-11-22T19:56:27Z | 2025-07-08T13:28:12Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | bghira | 59,658,056 | MDQ6VXNlcjU5NjU4MDU2 | User | false |