first commit
- README.md +68 -0
- condition_adapter_0/condition_adapter.pt +3 -0
- condition_adapter_0/config.json +9 -0
- condition_config.json +11 -0
- feature_extractor/preprocessor_config.json +28 -0
- scheduler/scheduler_config.json +15 -0
- text_encoder_0/config.json +25 -0
- text_encoder_0/merges.txt +0 -0
- text_encoder_0/pytorch_model.bin +3 -0
- text_encoder_0/special_tokens_map.json +24 -0
- text_encoder_0/tokenizer_config.json +33 -0
- text_encoder_0/vocab.json +0 -0
- unet/config.json +71 -0
- unet/diffusion_pytorch_model.bin +3 -0
- vae/config.json +30 -0
- vae/diffusion_pytorch_model.bin +3 -0
- vocoder/config.json +37 -0
- vocoder/vocoder.pt +3 -0
README.md
CHANGED
@@ -1,3 +1,71 @@
 ---
 license: cc-by-nc-sa-4.0
+datasets:
+- AudioCaps+others
+language:
+- en
+tags:
+- audio
 ---
+# Auffusion
+
+**Auffusion** is a latent diffusion model (LDM) for text-to-audio (TTA) generation. **Auffusion** can generate realistic audio, including human sounds, animal sounds, natural and artificial sounds, and sound effects, from textual prompts. We introduce Auffusion, a TTA system that adapts text-to-image (T2I) model frameworks to the TTA task, effectively leveraging their inherent generative strengths and precise cross-modal alignment. Our objective and subjective evaluations demonstrate that Auffusion surpasses previous TTA approaches while using limited data and computational resources. We release our model, inference code, and pre-trained checkpoints for the research community.
+
+📣 We are releasing **Auffusion-Full-no-adapter**, which was pre-trained on all datasets described in the paper and created for easy use in audio manipulation.
+📣 We are releasing **Auffusion-Full**, which was pre-trained on all datasets described in the paper.
+📣 We are releasing **Auffusion**, which was pre-trained on **AudioCaps**.
+
+## Auffusion Model Family
+
+| Model Name                | Model Path                                                                                                               |
+|---------------------------|--------------------------------------------------------------------------------------------------------------------------|
+| Auffusion                 | [https://huggingface.co/auffusion/auffusion](https://huggingface.co/auffusion/auffusion)                                 |
+| Auffusion-Full            | [https://huggingface.co/auffusion/auffusion-full](https://huggingface.co/auffusion/auffusion-full)                       |
+| Auffusion-Full-no-adapter | [https://huggingface.co/auffusion/auffusion-full-no-adapter](https://huggingface.co/auffusion/auffusion-full-no-adapter) |
+
+
+## Code
+
+Our code is released here: [https://github.com/happylittlecat2333/Auffusion](https://github.com/happylittlecat2333/Auffusion)
+
+We have uploaded several **Auffusion**-generated samples here: [https://auffusion.github.io](https://auffusion.github.io)
+
+Please follow the instructions in the repository for installation, usage, and experiments.
+
+
+## Quickstart Guide
+
+First, clone the repository and install the requirements:
+
+```bash
+git clone https://github.com/happylittlecat2333/Auffusion/
+cd Auffusion
+pip install -r requirements.txt
+```
+
+Download the **Auffusion** model and generate audio from a text prompt:
+
+```python
+import IPython, torch
+import soundfile as sf
+from auffusion_pipeline import AuffusionPipeline
+
+pipeline = AuffusionPipeline.from_pretrained("auffusion/auffusion")
+
+prompt = "Birds singing sweetly in a blooming garden"
+output = pipeline(prompt=prompt)
+audio = output.audios[0]
+sf.write(f"{prompt}.wav", audio, samplerate=16000)
+IPython.display.Audio(data=audio, rate=16000)
+```
+
+The Auffusion model is downloaded automatically from the Hugging Face Hub and saved in the local cache; subsequent runs load it directly from the cache.
+
+By default, the pipeline samples from the latent diffusion model with 100 inference steps and a guidance scale of 7.5. You can vary these parameters for different results:
+
+```python
+prompt = "Rolling thunder with lightning strikes"
+output = pipeline(prompt=prompt, num_inference_steps=100, guidance_scale=7.5)
+audio = output.audios[0]
+IPython.display.Audio(data=audio, rate=16000)
+```
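As a usage note on the quickstart above: the pipeline object can be reused across prompts. The sketch below batches several prompts using only the calls already shown in the README (`pipeline(prompt=...)`, `output.audios[0]`, `sf.write`); the prompt list and output file names are illustrative, not from the model card.

```python
# Batch-generation sketch: reuses only the pipeline calls shown in the README.
# Prompts and output names are illustrative.
import soundfile as sf
from auffusion_pipeline import AuffusionPipeline

pipeline = AuffusionPipeline.from_pretrained("auffusion/auffusion")

prompts = [
    "A dog barking in the distance",
    "Waves crashing against a rocky shore",
]
for i, prompt in enumerate(prompts):
    output = pipeline(prompt=prompt, num_inference_steps=100, guidance_scale=7.5)
    audio = output.audios[0]
    sf.write(f"sample_{i}.wav", audio, samplerate=16000)  # 16 kHz, per the vocoder config
```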
condition_adapter_0/condition_adapter.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d85709e250d7c262e38729fa9c2417df9ae039281baf0564563f0146267f8ff8
+size 2370267
condition_adapter_0/config.json
ADDED
@@ -0,0 +1,9 @@
+{
+  "text_encoder_name": "text_encoder_0",
+  "condition_adapter_name": "condition_adapter_0",
+  "condition_type": "clip-vit-large-patch14_text",
+  "pretrained_model_name_or_path": "openai/clip-vit-large-patch14",
+  "condition_max_length": 77,
+  "condition_dim": 768,
+  "cross_attention_dim": 768
+}
condition_config.json
ADDED
@@ -0,0 +1,11 @@
+[
+  {
+    "text_encoder_name": "text_encoder_0",
+    "condition_adapter_name": "condition_adapter_0",
+    "condition_type": "clip-vit-large-patch14_text",
+    "pretrained_model_name_or_path": "openai/clip-vit-large-patch14",
+    "condition_max_length": 77,
+    "condition_dim": 768,
+    "cross_attention_dim": 768
+  }
+]
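`condition_config.json` wraps the same fields as `condition_adapter_0/config.json` in a list, suggesting the pipeline can carry multiple condition adapters. A minimal sketch, assuming a local snapshot of this repo (e.g. via `huggingface_hub.snapshot_download`), that reads the list and checks each adapter against the UNet config:

```python
# Minimal sketch: read condition_config.json from a local snapshot of this repo
# and sanity-check each adapter's cross_attention_dim against the UNet's (768).
import json
from pathlib import Path

from huggingface_hub import snapshot_download

repo = Path(snapshot_download("auffusion/auffusion"))

conditions = json.loads((repo / "condition_config.json").read_text())
unet_cfg = json.loads((repo / "unet" / "config.json").read_text())

for cond in conditions:
    assert cond["cross_attention_dim"] == unet_cfg["cross_attention_dim"]
    print(cond["condition_adapter_name"], cond["condition_type"], cond["condition_dim"])
```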
feature_extractor/preprocessor_config.json
ADDED
@@ -0,0 +1,28 @@
+{
+  "crop_size": {
+    "height": 224,
+    "width": 224
+  },
+  "do_center_crop": true,
+  "do_convert_rgb": true,
+  "do_normalize": true,
+  "do_rescale": true,
+  "do_resize": true,
+  "feature_extractor_type": "CLIPFeatureExtractor",
+  "image_mean": [
+    0.48145466,
+    0.4578275,
+    0.40821073
+  ],
+  "image_processor_type": "CLIPImageProcessor",
+  "image_std": [
+    0.26862954,
+    0.26130258,
+    0.27577711
+  ],
+  "resample": 3,
+  "rescale_factor": 0.00392156862745098,
+  "size": {
+    "shortest_edge": 224
+  }
+}
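This is the stock CLIP image preprocessor configuration; its role here is presumably preprocessing spectrogram images for CLIP-based conditioning, though this commit does not state that. A hedged sketch of loading it via standard `transformers` subfolder loading:

```python
# Sketch: load the image processor defined by feature_extractor/preprocessor_config.json.
# Subfolder loading is standard transformers behavior; how the Auffusion pipeline
# uses this processor at inference time is not specified in this commit.
from transformers import CLIPImageProcessor

processor = CLIPImageProcessor.from_pretrained(
    "auffusion/auffusion", subfolder="feature_extractor"
)
print(processor.crop_size)   # {'height': 224, 'width': 224}
print(processor.image_mean)  # standard CLIP normalization mean
```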
scheduler/scheduler_config.json
ADDED
@@ -0,0 +1,15 @@
+{
+  "_class_name": "PNDMScheduler",
+  "_diffusers_version": "0.18.2",
+  "beta_end": 0.012,
+  "beta_schedule": "scaled_linear",
+  "beta_start": 0.00085,
+  "clip_sample": false,
+  "num_train_timesteps": 1000,
+  "prediction_type": "epsilon",
+  "set_alpha_to_one": false,
+  "skip_prk_steps": true,
+  "steps_offset": 1,
+  "timestep_spacing": "leading",
+  "trained_betas": null
+}
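The scheduler is a stock diffusers `PNDMScheduler`, so it can be instantiated from this config directly; a minimal sketch (100 steps matches the README default):

```python
# Sketch: load the PNDM scheduler from the scheduler/ subfolder with diffusers.
from diffusers import PNDMScheduler

scheduler = PNDMScheduler.from_pretrained("auffusion/auffusion", subfolder="scheduler")
scheduler.set_timesteps(num_inference_steps=100)  # README default step count
print(scheduler.timesteps[:5])  # first few denoising timesteps
```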
text_encoder_0/config.json
ADDED
@@ -0,0 +1,25 @@
+{
+  "_name_or_path": "openai/clip-vit-large-patch14",
+  "architectures": [
+    "CLIPTextModel"
+  ],
+  "attention_dropout": 0.0,
+  "bos_token_id": 0,
+  "dropout": 0.0,
+  "eos_token_id": 2,
+  "hidden_act": "quick_gelu",
+  "hidden_size": 768,
+  "initializer_factor": 1.0,
+  "initializer_range": 0.02,
+  "intermediate_size": 3072,
+  "layer_norm_eps": 1e-05,
+  "max_position_embeddings": 77,
+  "model_type": "clip_text_model",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 12,
+  "pad_token_id": 1,
+  "projection_dim": 768,
+  "torch_dtype": "float32",
+  "transformers_version": "4.31.0.dev0",
+  "vocab_size": 49408
+}
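The text encoder is an unmodified CLIP text model, so it loads with plain `transformers`; note the non-default subfolder name `text_encoder_0`. A minimal sketch:

```python
# Sketch: load the CLIP text encoder from the text_encoder_0/ subfolder.
from transformers import CLIPTextModel

text_encoder = CLIPTextModel.from_pretrained(
    "auffusion/auffusion", subfolder="text_encoder_0"
)
print(text_encoder.config.hidden_size)  # 768, matching the UNet's cross_attention_dim
```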
text_encoder_0/merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
text_encoder_0/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a536a73223d0aad893fd29e3b6095ac9b71ba6dafcb799c88f0abb4f173931f
+size 492306077
text_encoder_0/special_tokens_map.json
ADDED
@@ -0,0 +1,24 @@
+{
+  "bos_token": {
+    "content": "<|startoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  },
+  "eos_token": {
+    "content": "<|endoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  },
+  "pad_token": "<|endoftext|>",
+  "unk_token": {
+    "content": "<|endoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  }
+}
text_encoder_0/tokenizer_config.json
ADDED
@@ -0,0 +1,33 @@
+{
+  "add_prefix_space": false,
+  "bos_token": {
+    "__type": "AddedToken",
+    "content": "<|startoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  },
+  "clean_up_tokenization_spaces": true,
+  "do_lower_case": true,
+  "eos_token": {
+    "__type": "AddedToken",
+    "content": "<|endoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  },
+  "errors": "replace",
+  "model_max_length": 77,
+  "pad_token": "<|endoftext|>",
+  "tokenizer_class": "CLIPTokenizer",
+  "unk_token": {
+    "__type": "AddedToken",
+    "content": "<|endoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  }
+}
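Together with `special_tokens_map.json` above, this defines a standard `CLIPTokenizer` with a 77-token `model_max_length`. A sketch of tokenizing a prompt the way the CLIP text encoder expects:

```python
# Sketch: tokenize a prompt, padding to the 77-token model_max_length
# declared in tokenizer_config.json.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("auffusion/auffusion", subfolder="text_encoder_0")
tokens = tokenizer(
    "Birds singing sweetly in a blooming garden",
    padding="max_length",
    max_length=tokenizer.model_max_length,  # 77
    truncation=True,
    return_tensors="pt",
)
print(tokens.input_ids.shape)  # torch.Size([1, 77])
```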
text_encoder_0/vocab.json
ADDED
The diff for this file is too large to render.
See raw diff
unet/config.json
ADDED
@@ -0,0 +1,71 @@
+{
+  "_class_name": "UNet2DConditionModel",
+  "_diffusers_version": "0.18.2",
+  "act_fn": "silu",
+  "addition_embed_type": null,
+  "addition_embed_type_num_heads": 64,
+  "addition_time_embed_dim": null,
+  "attention_head_dim": 8,
+  "block_out_channels": [
+    320,
+    640,
+    1280,
+    1280
+  ],
+  "center_input_sample": false,
+  "class_embed_type": null,
+  "class_embeddings_concat": false,
+  "conv_in_kernel": 3,
+  "conv_out_kernel": 3,
+  "cross_attention_dim": 768,
+  "cross_attention_norm": null,
+  "decay": 0.9999,
+  "down_block_types": [
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "DownBlock2D"
+  ],
+  "downsample_padding": 1,
+  "dual_cross_attention": false,
+  "encoder_hid_dim": null,
+  "encoder_hid_dim_type": null,
+  "flip_sin_to_cos": true,
+  "freq_shift": 0,
+  "in_channels": 4,
+  "inv_gamma": 1.0,
+  "layers_per_block": 2,
+  "mid_block_only_cross_attention": null,
+  "mid_block_scale_factor": 1,
+  "mid_block_type": "UNetMidBlock2DCrossAttn",
+  "min_decay": 0.0,
+  "norm_eps": 1e-05,
+  "norm_num_groups": 32,
+  "num_attention_heads": null,
+  "num_class_embeds": null,
+  "only_cross_attention": false,
+  "optimization_step": 100000,
+  "out_channels": 4,
+  "power": 0.6666666666666666,
+  "projection_class_embeddings_input_dim": null,
+  "resnet_out_scale_factor": 1.0,
+  "resnet_skip_time_act": false,
+  "resnet_time_scale_shift": "default",
+  "sample_size": 64,
+  "time_cond_proj_dim": null,
+  "time_embedding_act_fn": null,
+  "time_embedding_dim": null,
+  "time_embedding_type": "positional",
+  "timestep_post_act": null,
+  "transformer_layers_per_block": 1,
+  "up_block_types": [
+    "UpBlock2D",
+    "CrossAttnUpBlock2D",
+    "CrossAttnUpBlock2D",
+    "CrossAttnUpBlock2D"
+  ],
+  "upcast_attention": false,
+  "update_after_step": 0,
+  "use_ema_warmup": false,
+  "use_linear_projection": false
+}
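The UNet is a stock diffusers `UNet2DConditionModel` operating on 4-channel 64x64 latents with 768-dimensional cross-attention conditioning. A minimal sketch of one denoising call, with shapes taken from this config and random tensors standing in for real latents and text embeddings:

```python
# Sketch: one denoising call with shapes implied by unet/config.json.
# The conditioning tensor is a random placeholder, not real text embeddings.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("auffusion/auffusion", subfolder="unet")

latents = torch.randn(1, unet.config.in_channels, unet.config.sample_size, unet.config.sample_size)
text_states = torch.randn(1, 77, unet.config.cross_attention_dim)

with torch.no_grad():
    noise_pred = unet(latents, timestep=999, encoder_hidden_states=text_states).sample
print(noise_pred.shape)  # torch.Size([1, 4, 64, 64])
```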
unet/diffusion_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67de0577883b7ee3211dac1b579e5191e71b336a12e1a758a94c0bd5576a8a1a
+size 3438366373
vae/config.json
ADDED
@@ -0,0 +1,30 @@
+{
+  "_class_name": "AutoencoderKL",
+  "_diffusers_version": "0.18.2",
+  "act_fn": "silu",
+  "block_out_channels": [
+    128,
+    256,
+    512,
+    512
+  ],
+  "down_block_types": [
+    "DownEncoderBlock2D",
+    "DownEncoderBlock2D",
+    "DownEncoderBlock2D",
+    "DownEncoderBlock2D"
+  ],
+  "in_channels": 3,
+  "latent_channels": 4,
+  "layers_per_block": 2,
+  "norm_num_groups": 32,
+  "out_channels": 3,
+  "sample_size": 512,
+  "scaling_factor": 0.18215,
+  "up_block_types": [
+    "UpDecoderBlock2D",
+    "UpDecoderBlock2D",
+    "UpDecoderBlock2D",
+    "UpDecoderBlock2D"
+  ]
+}
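The VAE is a standard `AutoencoderKL` with the usual latent-diffusion `scaling_factor` of 0.18215. A hedged sketch of decoding latents into the spectrogram-like image that the vocoder then converts to audio (random latents for illustration):

```python
# Sketch: decode latents with the VAE, applying the config's scaling_factor
# (standard latent-diffusion convention). Random latents stand in for real ones.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("auffusion/auffusion", subfolder="vae")

latents = torch.randn(1, vae.config.latent_channels, 64, 64)
with torch.no_grad():
    image = vae.decode(latents / vae.config.scaling_factor).sample
print(image.shape)  # torch.Size([1, 3, 512, 512]), 8x spatial upsampling
```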
vae/diffusion_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c02e0c7c263a4c0630ca3a72380ff55b9e38e0ab41d64dff7bf620a58342bc75
+size 334712113
vocoder/config.json
ADDED
@@ -0,0 +1,37 @@
+{
+  "resblock": "1",
+  "num_gpus": 0,
+  "batch_size": 16,
+  "learning_rate": 0.0002,
+  "adam_b1": 0.8,
+  "adam_b2": 0.99,
+  "lr_decay": 0.999,
+  "seed": 1234,
+
+  "upsample_rates": [5, 4, 4, 2],
+  "upsample_kernel_sizes": [11, 8, 8, 4],
+  "upsample_initial_channel": 512,
+  "resblock_kernel_sizes": [3, 7, 11],
+  "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
+
+  "segment_size": 5120,
+  "num_mels": 256,
+  "num_freq": 2049,
+  "n_fft": 2048,
+  "hop_size": 160,
+  "win_size": 1024,
+
+  "sampling_rate": 16000,
+
+  "fmin": 0,
+  "fmax": null,
+  "fmax_for_loss": null,
+
+  "num_workers": 4,
+
+  "dist_config": {
+    "dist_backend": "nccl",
+    "dist_url": "tcp://localhost:54321",
+    "world_size": 1
+  }
+}
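The vocoder config appears to follow the HiFi-GAN layout; the checkpoint itself (`vocoder.pt`) is loaded by the repo's own code, so no loading example is attempted here. The spectrogram geometry the config implies can be checked with plain arithmetic:

```python
# Plain-arithmetic sanity checks on vocoder/config.json (no repo code needed).
import math

sampling_rate = 16000
hop_size = 160
upsample_rates = [5, 4, 4, 2]

# The generator's total upsampling factor equals hop_size: each mel frame
# expands into hop_size waveform samples.
assert math.prod(upsample_rates) == hop_size  # 5*4*4*2 == 160

print(sampling_rate / hop_size)         # 100.0 mel frames per second
print(1000 * hop_size / sampling_rate)  # 10.0 ms of audio per frame
```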
vocoder/vocoder.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d71bb8ec87c21902175cb0f918e739789ec085663ed0f8ebd2f20adba4d5b5af
+size 54808625