Update README.md
README.md
CHANGED
@@ -20,6 +20,9 @@ the music samples, both in the time domain and in the latent space. Using beat-s
 encourages the model to interpolate between the training samples, but stay within the domain of the training data. The
 result is generated music that is more diverse while staying faithful to the corresponding style.
 
+This work is licensed under a
+[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
+
 ## Model Sources
 
 - [**🧨 Diffusers Pipeline**](https://huggingface.co/docs/diffusers/api/pipelines/musicldm)