Philpax#0001: nah, it converts the riffusion spectrogram to audio and vice versa
nullerror#1387: it does audio to img and vice versa
nullerror#1387: ah dang same mind
Philpax#0001: it doesn't convert arbitrary images
Philpax#0001: for that you'd need something to map the image to the audio space
cravinadventure#7884: ahg i've been looking for a program to convert images to audio
nullerror#1387: this does that
nullerror#1387: spectrograms tho
nullerror#1387: if u have fl studio u can use harmor to convert images to audio
Philpax#0001: mm I'll keep that as an option, wanna avoid python to minimise the dependencies though
nullerror#1387: look up "harmor images to audio tutorial"
cravinadventure#7884: i wanna use this as input https://cdn.discordapp.com/attachments/1053081177772261386/1054985256882667601/black_white_optical_illusion_display.jpeg
nullerror#1387: yeah harmor can do that
nullerror#1387: thats what ur looking for
cravinadventure#7884: SICKKKK
nullerror#1387: sent u a dm
cravinadventure#7884: thanks 🙂
nullerror#1387: with a tut
DeadfoxX#0666: Will there be a way to create longer Songs and to add lyrics later?
JL#1976: Pinned a message.
matteo101man#6162: anyone have a way to do a batch file2img
matteo101man#6162: also this 4gb model appears to be broken (https://www.reddit.com/r/riffusion/comments/znbo75/how_to_load_model_to_automatic1111/) what was the process of converting the model into a smaller ckpt
Edenoide#0166: https://huggingface.co/ckpt/riffusion-model-v1/tree/main at least the second model works fine for training! (4.27GB)
matteo101man#6162: oh it disabled it automatically cause of pickles nvm
matteo101man#6162: i was going to try to dreambooth a single song and see what happens
Edenoide#0166: For training I've been using the fast dreambooth colab https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb
matteo101man#6162: i have a 3090
Edenoide#0166: lol ok nevermind
Edenoide#0166: I've got only basic knowledge of it but I've been using this model for generating images with AUTOMATIC1111 locally
Edenoide#0166: not for training of course with my humble 3070
Edenoide#0166: The painful process of installing Riffusion Inference Server on Windows 10 (PART III)
Hi again. First of all, I'm not a programmer, so my Python knowledge is very basic. I've installed a clean version of Anaconda and then git cloned the riffusion inference server repository:
conda create --name riffusion-inference python=3.9
conda activate riffusion-inference
Then I've installed all the things it requires:
conda install -c "nvidia/label/cuda-11.7.0" cuda-toolkit
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
conda install -c conda-forge ffmpeg
conda install soundfile
conda install torchaudio
pip install PySoundFile
Edenoide#0166: I've run python -m pip install -r requirements.txt & python -m riffusion.server --port 3013 --host 127.0.0.1
Edenoide#0166: but I'm stuck here:
Edenoide#0166: https://cdn.discordapp.com/attachments/1053081177772261386/1055062037434208266/INFERENCE-ERROR2.PNG
Edenoide#0166: there's always a 404 error when opening the site. Any ideas?
Edenoide#0166: Thank you in advance!
matteo101man#6162: 🤷‍♂️
Edenoide#0166: My Windows experience with riffusion is pain. So easy when running the colabs I've found. I assume it's optimized for Linux?
matteo101man#6162: I'm not very knowledgeable either I just run it locally but it took hours to get all the conda stuff working
Edenoide#0166: are you using windows too?
matteo101man#6162: yes
Edenoide#0166: at least there's hope then
noop_noob#0479: May I know what dataset was riffusion trained on? Or at least, what kind of text labels were used?
denny#1553: You have to interface with a post request
denny#1553: @Edenoide you'll either have to use curl or a language with a rest api like nodejs with fetch/axios and send a post request to 127.0.0.1:3013/run_inference
denny#1553: It'll return data in a base 64 encoded url that you'll have to convert to binary data. You can do that with node through fs.writefilesync -- but If you run the front-end code it should handle all that stuff.
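The decode step denny describes can be sketched in Python. Note the endpoint path and the idea that the response carries audio as a base64 data URI come from this conversation, not from any documented API spec:

```python
import base64

def data_uri_to_bytes(uri: str) -> bytes:
    """Decode a base64 data URI (e.g. 'data:audio/mpeg;base64,...') to raw bytes."""
    header, _, payload = uri.partition(",")
    if "base64" not in header:
        raise ValueError("expected a base64-encoded data URI")
    return base64.b64decode(payload)

# After POSTing to http://127.0.0.1:3013/run_inference you would do
# something like (field name "audio" is an assumption):
#   audio_bytes = data_uri_to_bytes(response_json["audio"])
#   open("out.mp3", "wb").write(audio_bytes)

# Round-trip demo with dummy bytes:
fake = "data:audio/mpeg;base64," + base64.b64encode(b"RIFFdemo").decode()
assert data_uri_to_bytes(fake) == b"RIFFdemo"
```

As denny notes, the front-end app handles all of this for you; a snippet like this is only needed if you talk to the server directly.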
denny#1553: But you have it running, gj!
denny#1553: https://github.com/riffusion/riffusion-app install that to interface with the server
Edenoide#0166: Thanks @denny ! So I need to install riffusion-app and then it's going to connect with the running server. My fault, I thought the inference server was the only thing needed for generating real time infinite AI songs
Edenoide#0166: The thing I want to achieve is running the real time song with my custom model
Edenoide#0166: I'm training an electronic cumbia model that's starting to generate funny results
Edenoide#0166: https://cdn.discordapp.com/attachments/1053081177772261386/1055118609770364998/haunted_cumbias.mp3
Edenoide#0166: It's working on Windows 10!! Thank you guys https://cdn.discordapp.com/attachments/1053081177772261386/1055123493173329940/image.png
0nion_man_LV#6572: anyone else run into this issue while trying to selfhost? https://cdn.discordapp.com/attachments/1053081177772261386/1055140047998947338/image.png
Edenoide#0166: As a real dummy I wrote my installation process for Windows from zero:
Edenoide#0166: https://cdn.discordapp.com/attachments/1053081177772261386/1055140652565938276/RIFFUSION_APP_WINDOWS_INSTALLATION_FOR_DUMMIES.txt
Edenoide#0166: No more pain
0nion_man_LV#6572: was gonna say i'm trying to just copy paste your commands as i don't even want to be bothered at this point.
0nion_man_LV#6572: Thanks mate!
Edenoide#0166: If it really works from zero for you or need to be improved I'm gonna post it on reddit
Edenoide#0166: *hit enter instead of type lol
Edenoide#0166: English is my third rusty language...
0nion_man_LV#6572: i'm doing it from the part of installing conda packages since i already cloned all repos earlier.
Edenoide#0166: OK! Good luck
0nion_man_LV#6572: the "soundfile" command didn't find any package so i had to use the command conda page suggested:
`https://anaconda.org/bricew/soundfile`
`conda install -c bricew soundfile`
as well as PySoundFile didn't work so their solution is the following:
`https://anaconda.org/conda-forge/pysoundfile`
`conda install -c conda-forge pysoundfile`
doomsboygaming#2550: Doing a start from zero, will update you if it works.
I'll see if I run into those issues. If I do I'll also Inform you
0nion_man_LV#6572: i'd say it's safe to be posted around. It got me to the entry step of where i got last night by myself but couldn't reproduce with a fresh install today.
0nion_man_LV#6572: my issue is running out of vram but there should be some workaround. No way this can't work if i burnt my gpu for 2 weeks straight generating SD images all day long.
doomsboygaming#2550: I'd say adjusting the max VRAM usage and maybe some other optimizations
doomsboygaming#2550: lmao
doomsboygaming#2550: or buy a better gpu
0nion_man_LV#6572: yeah i didn't really look into it, i just went to sleep after it failed lol
0nion_man_LV#6572: give money :^)
doomsboygaming#2550: college student here, broke as all get out
doomsboygaming#2550: But yeah, installing from a fresh env
0nion_man_LV#6572: SD worked perfectly fine, be it cpu or gpu generating the image. I refuse to believe that generating a black and white spectrogram would be any more demanding.
doomsboygaming#2550: I think it may be the sound portion
doomsboygaming#2550: Synths do take ram, and vram if you set it to use that
doomsboygaming#2550: if your system is overclocked I've heard that it can create some issues
0nion_man_LV#6572: i thought sound would be covered by cpu
doomsboygaming#2550: It could
0nion_man_LV#6572: nah i'm running all out-of-the-box clocks
doomsboygaming#2550: My Synths within FLStudio do use Vram
doomsboygaming#2550: Any cooling issues?
doomsboygaming#2550: I run mine in the basement where its freezing
doomsboygaming#2550: LMAO
doomsboygaming#2550: Man the Cuda Toolkits take ages
doomsboygaming#2550: I forgot how many packages were in there
doomsboygaming#2550: Thank goodness i have fast download and SSD
0nion_man_LV#6572: only minor cpu issues but it's bigtime bottlenecked by gpu anyways.
doomsboygaming#2550: Yeah the soundfile and that has issues just as stated by the other person
doomsboygaming#2550: ```conda install -c bricew soundfile
conda install torchaudio
conda install -c conda-forge pysoundfile```
doomsboygaming#2550: @0nion_man_LV Sorry for the tag, but for some reason my conda terminal is not finding npm as a command smh
0nion_man_LV#6572: you need nodejs https://nodejs.org/en/
doomsboygaming#2550: Ah thats why
doomsboygaming#2550: time to install it smh
doomsboygaming#2550: I had to fresh install windows 10
doomsboygaming#2550: cause pissy PC updated to win 11 without my permission
doomsboygaming#2550: and did not let me downgrade
doomsboygaming#2550: Well shoot
0nion_man_LV#6572: lol
doomsboygaming#2550: just needed to restart the terminal
doomsboygaming#2550: Might be nice to add "you need to have node.js installed"
doomsboygaming#2550: with the link
doomsboygaming#2550: cause my dumb brain forgot thats what npm uses
doomsboygaming#2550: delicious download speeds https://cdn.discordapp.com/attachments/1053081177772261386/1055155190988283934/image.png
Edenoide#0166: ah! yes I installed node a week ago! I'm changing some lines then
doomsboygaming#2550: it works
Edenoide#0166: https://cdn.discordapp.com/attachments/1053081177772261386/1055157551412563968/RIFFUSION_APP_WINDOWS_INSTALLATION_FOR_DUMMIES.txt
doomsboygaming#2550: Got it all setup and running based on your guide
Edenoide#0166: improved
Edenoide#0166: great!!!!
doomsboygaming#2550: Did you add the lines for the replacements for the packages?
Edenoide#0166: yes!
doomsboygaming#2550: kk
doomsboygaming#2550: Now how does one train LMAO
0nion_man_LV#6572: there's plenty of guides online
doomsboygaming#2550: kk
0nion_man_LV#6572: have fun managing your storage if you wanna train anything actually worth your time
doomsboygaming#2550: I have 8 tb
Edenoide#0166: I'm using the fast dreambooth colab: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb
doomsboygaming#2550: and 16tb of external
Edenoide#0166: you need to be familiar with colab but it's not hard
doomsboygaming#2550: collab is not hard
Edenoide#0166: haha it's not
doomsboygaming#2550: I'm thinking of doing a electro swing dataset
Edenoide#0166: I'm going to create a tutorial
doomsboygaming#2550: kk
Edenoide#0166: the thing is it works on 5.12 seconds loops
0nion_man_LV#6572: tryna figure out how one does this 🥺
0nion_man_LV#6572: server files are kinda abstract to me 🥺
doomsboygaming#2550: God damn, this thing makes someone decent vocals
Edenoide#0166: so it means if you want clean loops with no cuts or weird rhythm jumps it only works at 94 bpm
Edenoide#0166: or half/doubles
Edenoide#0166: electroswing is about 128 beats per minute
Edenoide#0166: even 135
doomsboygaming#2550: Yeah, I'll need to do some manual edits
Edenoide#0166: audacity works great with this part
Edenoide#0166: it's going to be slow electroswing or gabber-electroswing haha
doomsboygaming#2550: I have FL studio
Edenoide#0166: yeah you can adapt it there too
Edenoide#0166: it's about 94.2 bpm
doomsboygaming#2550: Yeah
doomsboygaming#2550: This is still very bare bones
Edenoide#0166: for generating the spectrograms you need RIFFUSION MANIPULATION.
Edenoide#0166: (turning your audios into images for training)
doomsboygaming#2550: Yeah, audio to spectrogram
Edenoide#0166: This colab works great https://discord.com/channels/1053034685590143047/1053081177772261386/1054726766129844224
doomsboygaming#2550: Did i just hear the AI make a "person" say Gangnam Style?
doomsboygaming#2550: I did
Edenoide#0166: lol what
doomsboygaming#2550: lmao
doomsboygaming#2550: I wonder what would be outputted from just 1 song of data
doomsboygaming#2550: Probs Copyright Infringment
0nion_man_LV#6572: is there any other way of lowering vram requirements? I've been cucked by my 4gb lol
0nion_man_LV#6572: set batch size to lowest allowed size of 21 and it still fills it up in seconds and doesn't start.
doomsboygaming#2550: Code optimizations
doomsboygaming#2550: https://cdn.discordapp.com/attachments/1053081177772261386/1055162986941132871/image.png
doomsboygaming#2550: Or maybe find a alt to use CPU instead
0nion_man_LV#6572: i tried getting pytorch for cpu but the inference is built with cuda in mind
doomsboygaming#2550: I see
doomsboygaming#2550: Yeah 4gb of Vram is kinda bad
0nion_man_LV#6572: i wonder what are the minimum requirements in that case
hulla#5846: hello, i just came into this discord channel and saw what you said. hmm, is it possible to use a cluster of more than one computer?
doomsboygaming#2550: I don't know if that has support yet
hulla#5846: can some linux distros do it? is this software linux compatible?
doomsboygaming#2550: I'm sure it can be run on linux in some hacky way
a_robot_kicker#7014: Hey so in my coding the other day I wasn't able to get 5000ms cleanly onto the spectrogram and had to resize it from 502 px to 512px. Are yall saying that the expected clip duration is actually 5120ms?
a_robot_kicker#7014: I did some algebra and found a rounding error in the current code for 5s clips, so that would totally explain it
a_robot_kicker#7014: In which case the code is just straight wrong when it defines clip duration as 5000ms
db0798#7460: Yes, the duration is not exactly 5s, it's a bit longer
a_robot_kicker#7014: Wtf ok. Bug in code then
db0798#7460: The riffusion-manipulation GitHub page says the duration is 5119 ms
a_robot_kicker#7014: I'm talking about riffusion-inference
db0798#7460: Oh okay
a_robot_kicker#7014: Which defines clip duration as 5000ms and proceeds from there, but using those numbers ends up producing a 502px spectrogram 🤔
a_robot_kicker#7014: I wonder if it's doing something like that to make space to loop the clips smoothly or something
db0798#7460: With riffusion-manipulation scripts you would also get a spectrogram that has one side that's too short if you used exactly 5000 ms as the duration
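The numbers line up if each spectrogram column covers about 10 ms (riffusion's STFT hop of 441 samples at 44.1 kHz works out to 100 columns per second; that hop value is an assumption here, and the extra frame or two of STFT padding is ignored):

```python
# Assumed: hop_length 441 at 44100 Hz => 100 spectrogram columns per second,
# i.e. ~10 ms of audio per pixel of width.
MS_PER_COLUMN = 10

def columns(duration_ms: int) -> int:
    """Approximate spectrogram width in pixels for a clip of this length."""
    return duration_ms // MS_PER_COLUMN

assert columns(5000) == 500   # ~500 px wide, has to be resized/padded to 512
assert columns(5120) == 512   # exactly 512 px, the model's native width
```

That would explain the ~502 px spectrogram a_robot_kicker saw from a 5000 ms clip: the code's stated 5 s duration is a rounding of the true 5.12 s.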
db0798#7460: I wrote this snippet for my own use. It's a wrapper around chavinlo's file2img.py from https://github.com/chavinlo/riffusion-manipulation https://cdn.discordapp.com/attachments/1053081177772261386/1055201132131131392/file2img_batch.py
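A batch wrapper in that spirit could shell out to file2img.py for every audio file in a folder. The `-i`/`-o` flags here are assumptions; check the riffusion-manipulation README for the script's actual arguments:

```python
import subprocess
from pathlib import Path

def batch_commands(in_dir: str, out_dir: str) -> list[list[str]]:
    """Build one file2img.py invocation per .wav file (flags are assumed)."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cmds = []
    for wav in sorted(Path(in_dir).glob("*.wav")):
        png = Path(out_dir) / (wav.stem + ".png")
        cmds.append(["python", "file2img.py", "-i", str(wav), "-o", str(png)])
    return cmds

# To actually run the conversions:
# for cmd in batch_commands("songs", "spectrograms"):
#     subprocess.run(cmd, check=True)
```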
sperzieb00n#3903: kickstarter has just changed their policy on allowing generative AI fundraisers, now they assume it's always a copyright violation and anti-artist and won't allow anything AI related
Nikuson#6709: I've added bulk audio conversion from folder to spectrograms for my colab with riffusion manipulation
https://colab.research.google.com/drive/1-REue4KpDhOMDI-v6gRytMpANoMUqFvi?usp=sharing
Nikuson#6709: I also made a small colab that splits a long record into segments of 5 seconds for riffusion manipulation.
https://colab.research.google.com/drive/1g9wgBMYrnGtXgnh66jcvwGYNiatHQ9-q?usp=sharing
hayk#0058: Pure shilling but if we get 88 more points on this hackernews post it'll get into the top 10 of the year, so upvote if you can 🙂 https://news.ycombinator.com/item?id=33999162
MentalPistol#9423: I just wanna say I love you all
cravinadventure#7884: correct! I've noticed my loops have been between 93 and 94 BPM
Edenoide#0166: 51200 ms
Edenoide#0166: one ms x pixel I guess
Edenoide#0166: *5120
matteo101man#6162: Can you train models at irregular resolutions like 512x2048?
Edenoide#0166: I think so but the original model and the Riffusion app work with 512x512 chunks
Edenoide#0166: training 4 bars of a song per image would be great
monasterydreams#4709: I don't know if this plausible. But I was thinking about trying to create a plugin for Ableton that used Riffusion.
Mainly so I could chop them up and play with samples on my keyboard.
I was thinking about extending MAX with node js to create the plugin.
What would I need in the Riffusion side. Can I create a inference server, and send requests to it and it will send audio back to me?
I could then have a interface in ableton to type some prompts make a call with node JS to the inference server and receive audio.
Then I could record and sample as I like as a very simple use case.
Does this make sense based on how I understand riffusion can work. Or am I way off?
denny#1553: you got the basic principle down. You can use node to convert the base64 encoded URI into an mp3/wav file (I use wav with some code modifications). Yeah if you can create a plugin that can send a POST request out in ableton this is possible. If you wanted to play a sound first you'd have to script a way to convert the sound to img using the file2img script or find if ableton can export it directly. Been doing something similar with VCV rack 😄
denny#1553: super fun!
denny#1553: problem with vcv rack is I don't know how to update the file input dynamically
denny#1553: so I can't feed back samples on the fly... yet...
monasterydreams#4709: oh sick, well I now have a project to work on over the holidays. Interesting well I will keep you posted on how it goes. I have a friend that wants to collab on it. Do you have a soundcloud just curious
denny#1553: I do but I barely post anything
denny#1553: maybe soon I'll post more stuff
monasterydreams#4709: kk well if you don't want to share in the chat you can dm me if ya want. Either way is fine
denny#1553: https://soundcloud.com/user-284684988 yeah again nothing to show 😛
denny#1553: mostly do live music stuff on twitch anyway
monasterydreams#4709: gotcha, just always curious.
Interested to see if I can get something working without banging my head against the wall, but I think it's inevitable
denny#1553: you got this!
MentalPistol#9423: how do you make this leave drums out
gorb#1295: you mean separating drums from a mix?
gorb#1295: i use demucs for that
MentalPistol#9423: good looking out
Meatfucker#1381: Heads up, the anti ai-art campaigners are out for blood recently. Got kickstarter to kick all AI stuff off it. Non-zero chance they discover and come for you once they realize what this does.
Meatfucker#1381: They are trying to get patreon and others to follow suit
Meatfucker#1381: Im guessing theres not really funding things to worry about with this project thankfully, but wanted to give the heads up in case you start catching flak from assholes.
IgnizHerz#2097: its inevitable in anything. Music is particularly plagued by all sorts of copyright type issues, so its expected with the territory.
Meatfucker#1381: Yep, thats why I figure its only a matter of time
Meatfucker#1381: I was also considering that though, theres pretty well established guidelines on music clip usage
Meatfucker#1381: there may be ways to play within a strict interpretation of copyright rules regardless of angry people complaining
db0798#7460: It seems to me that most anti-AI campaigners don't understand how AI works and think it is just literally copying and pasting content
IgnizHerz#2097: which is why its important we teach people how it works, why it works, and etc
Meatfucker#1381: Yeah, theres a lot of myth and misunderstanding behind the process. Education and information is important
Meatfucker#1381: Big thing I advocate for but sometimes slip on myself is always calling it machine learning rather than AI
IgnizHerz#2097: It's also important we understand why said people react. Pointing fingers at each other will never solve anything. But thats my two cents.
Meatfucker#1381: framing it as AI gives it a mystical quality it simply doesnt have
Meatfucker#1381: Yeah, there are valid concerns from every context, mixed in along with misunderstandings and misconceptions
Meatfucker#1381: Many artists see overfitting and go "copying" rather than seeing the poor training that made such a model. The distinction between the tools and the things produced with them is being lost
Meatfucker#1381: Many ml art enthusiasts lump artists concerns as inconsequential, without considering the massive amount of pedestrian art assets that do get created and will likely be facing a crunch from these tools
Meatfucker#1381: Big profit oriented game studios are absolutely going to just fire a bunch of texture artists and such in the name of making more money
Meatfucker#1381: but thats more a function of capitalism than ml software
db0798#7460: Yes, there are also lots of misconceptions around what AI means. Most people don't understand the difference between narrow AI and artificial general intelligence, and think that anything that is called an AI is meant to function as artificial general intelligence. And when it doesn't actually work as artificial general intelligence, they think it must be a complete failure or a scam
Meatfucker#1381: It doesnt help most of the jargon around ML sounds like straight technobabble
IgnizHerz#2097: well, it is bleeding edge
denny#1553: People tend to forget we're conducting electrons through rocks and sand
IgnizHerz#2097: need good teachers to make it more understandable (to a wider audience) though yes
denny#1553: Education systems can't keep up with the pace of technology. It's been a failure to assume old ways of educating people will stand the test of time. Computer literacy is necessary for all-- being a programmer shouldn't be a specialized field but a necessary form of communication like any other form of literacy.
denny#1553: Otherwise we have a lot of confusion and angst over technology. It becomes 'magic' with wizards owning the land. It doesn't have to be that way
IgnizHerz#2097: old ways of educating people have been a failure for a long time
denny#1553: mhmm. Grading systems don't work. We're being taught arbitrary structure of society over anything else
denny#1553: I have hope that all the backlash strengthens the technology and allows people to see how to use it beyond making a quick dollar
denny#1553: because it's powerful.
IgnizHerz#2097: Humans are wonderful creatures at finding the creativity and beauty from things. I'm certain we will see amazing things from it
denny#1553: It feels like I make amazing things every day for the past few weeks
denny#1553: well direct amazing things I suppose
0x4d#1101: Yeah I feel like a lot of the discussion surrounding AI art has been about how it's been taking away from artists and not enough thought has been given to how it could transform art going into the future
Edenoide#0166: Basic question here! I'm running the inference server & app locally and I'm trying to use my own finetuned model instead of the original riffusion-model-v1.ckpt. I've git cloned https://huggingface.co/riffusion/riffusion-model-v1 but nothing happens, I mean it seems like it's fetching the .ckpt online. I can even delete all the .ckpts on my system and it still works. Is there some code I need to change? Someone said something about a --checkpoint argument but I don't know where this script is. Thanks!
denny#1553: have you tried to put your model in the inference server in the riffusion directory? https://github.com/riffusion/riffusion-inference/blob/main/riffusion/server.py#L47
denny#1553: I tried that before with the model when I was having issues with the --checkpoint argument, tried to put it in the `/riffusion` directory without the argument and it worked... also not sure if it caches from HF though
Edenoide#0166: Damn! I'v opened server.py (riffusion-inference/riffusion/server.py) and I've found this:
def run_app(
*,
checkpoint: str = "riffusion/riffusion-model-v1",
Edenoide#0166: So I think you are right! I'm checking it
Nikuson#6709: Anyone have an unconditional learning notepad of any model to generate 512*512 images?
Edenoide#0166: mmm I think it's still fetching it from huggingface
Edenoide#0166: There's something inside server.py for changing it for sure... But should be a shorter way to achieve it https://cdn.discordapp.com/attachments/1053081177772261386/1055452674394439710/checkpoint.PNG
denny#1553: try changing the checkpoint name to your custom model?
denny#1553: If it starts downloading things when you rerun the inference server then I'm not sure...
Edenoide#0166: you think riffusion-model-v1 is the name of the file and not the folder?
denny#1553: oh yeah it's the name of the file
Edenoide#0166: !! oook
denny#1553: so put the ckpt in the folder directly
Edenoide#0166: cool
Edenoide#0166: it was the folder name, instead of changing the .ckpt name I've changed the name to match my model in server.py and this happened: https://cdn.discordapp.com/attachments/1053081177772261386/1055454229415862282/image.png
denny#1553: hmm, I'll see if I can run a custom model in my end in a bit
denny#1553: I see-- it's not checking a local directory but a remote huggingface directory. For instance I tried `sd-dreambooth-library/rage3` as a checkpoint ( https://huggingface.co/sd-dreambooth-library/rage3 ) and it works
denny#1553: makes more sense to me now... but yeah if you have it hosted on huggingface it should work if you put in the right directory
denny#1553: sorry for the confusion!
denny#1553: more info here: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipeline_utils.py#L300-L355 you would have to have a directory with a pipeline.py file in it
hayk#0058: I think the issue is the traced UNet is still coming from huggingface. If you give the server.py script the --checkpoint to your diffusers format checkpoint, then comment out the traced unet (we should make that a param) it should work. https://github.com/riffusion/riffusion-inference/blob/main/riffusion/server.py
doomsboygaming#2550: Its fun just putting random images through the model and seeing what it is interpreted as
M4tZeSS#0001: It seems the website dies when using Firefox, chromium is fine though. Any thoughts?
Nikuson#6709: I started training my single-channel diffusion model. Here is the 40th epoch on a small dataset of birdsong: https://cdn.discordapp.com/attachments/1053081177772261386/1055585965168656504/diffusi1111111.mp3
Nikuson#6709: 50th epoch https://cdn.discordapp.com/attachments/1053081177772261386/1055588125495279646/bird1t.mp3
ryan_helsing#7769: @Nikuson could you point me in the right direction on how one can train their own model like you have?
Nikuson#6709: I simply adapted the classical diffusion model for black-and-white generation and trained it on spectrograms compressed to a small size from RIFFUSION MANIPULATION.
I expect to post at least some code by Christmas
Nikuson#6709: https://cdn.discordapp.com/attachments/1053081177772261386/1055591679605878784/bird64.mp3
doomsboygaming#2550: What software are you using to convert audio to a spectrogram that the AI can read?
doomsboygaming#2550: Sorry for the tag
Nikuson#6709: Riffusion manipulation
doomsboygaming#2550: Ah yeah the .py
XIVV#9579: yo
XIVV#9579: am i the only one
XIVV#9579: that constanly has this servers scaling error
XIVV#9579: like
XIVV#9579: i have no idea why its happening
Meatfucker#1381: you on mobile?
Meatfucker#1381: discords had scaling issues on mobile for months
Meatfucker#1381: itll just get bigger and bigger depending on how you switch to the app
Meatfucker#1381: its also been having a scrolling issue where youll go to scroll up a little bit and itll teleport you half a day back
hayk#0058: If you're talking about riffusion.com then it's because we've scaled down the number of GPUs available since we're paying for it
a_robot_kicker#7014: I am literally writing a vst plugin right now
a_robot_kicker#7014: Slow going but it's almost done. Thing that's messing with me is getting base64 encoded wav bytes into correct format in the vst.
a_robot_kicker#7014: But I have sending data from the DAW into riffusion, generating, outputting and playing all working
a_robot_kicker#7014: It's just the stuff I'm playing ends up sounding like white noise so I'm messing up somewhere along the line
a_robot_kicker#7014: I will post a branch and a vst3 plugin binary once done
a_robot_kicker#7014: When I save it as a wav file on the server it works though so I'm sure there's just some misconception. One thing I just realized is base64 actually outputs newlines for "readability"
monasterydreams#4709: that is sick, would love to see how you accomplish it
a_robot_kicker#7014: Juce it turns out is very magical. Almost as full featured as Qt
XIVV#9579: oh
XIVV#9579: ok
monasterydreams#4709: Yeah I would love to get a peak if you already have head way on this. See how I could help in any programming way
a_robot_kicker#7014: Ok will post either tonight or tomorrow
Edenoide#0166: Thank you for all this work @hayk! So in server.py instead of:
checkpoint: str = "riffusion/riffusion-model-v1",
should I put for example:
checkpoint: str = "C:\RIFFUSION\riffusion-inference\riffusion-model-v1\cumbiatron.ckpt",? (My programming knowledge is very low)
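Per hayk's note earlier in the thread, server.py does take a --checkpoint argument, but it expects a diffusers-format model directory (the kind `from_pretrained` loads), not a path to a raw .ckpt file, so the .ckpt would likely need converting to diffusers format first. A hedged sketch of the invocation:

```shell
# Assumed usage: point --checkpoint at a diffusers-format model directory,
# not at the .ckpt file itself
python -m riffusion.server --port 3013 --host 127.0.0.1 --checkpoint C:\RIFFUSION\riffusion-inference\riffusion-model-v1
```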
tugen#7971: Is there a way for the replicate/riffusion model to produce output that is longer than 5s? <https://replicate.com/hmartiro/riffusion>. Also, if we are to run this on our own machines... Are the minimum requirements for this model a Nvidia 3XXX Series card? Thank you all for participating in this amazing project 🙏
doomsboygaming#2550: I can run this just fine on a 2XXX series card
doomsboygaming#2550: @tugen
rennnn#5483: hey, how could i use riffusion with google colab? 🙏
tugen#7971: if stable diffusion can generate 'horizontal' pictures, does that mean there is a possibility for the riffusion end translation to be > 5 seconds, if we have a training set with more horizontal 'landscape' resolutions?
hayk#0058: There's a colab linked here https://github.com/riffusion/riffusion-app#riffusion-app
hayk#0058: For sure, just some code needs to be written to handle varying resolutions and adapt spectrograms
doomsboygaming#2550: I'm surprised nobody thought of this kind of product earlier; if you're able to know what sounds look like, can't you learn to output what they would look like?
a_robot_kicker#7014: alrighty, my vst is working now. Going to experiment a bit, do some cleanup and then post a github link to source. Will make a binary later for vst3. Note right now it requires my custom server, but I might make another version that points to a standard riffusion server (which has no input, sadly)
a_robot_kicker#7014: jk, still produces distorted garbage. Maybe tomorrow.
April#5244: for sd generating wider images, it's *possible* but the model isn't built for it, and the results kinda come out bad. You'd need to train a model using the wider spectrograms, which afaik there's not really any software to do that.
MentalPistol#9423: sup all
tugen#7971: are you saying stable diffusion isn't built for wider resolutions? I am looking at https://beta.dreamstudio.ai interface and they have a slider for width to go to 1024 px?
MentalPistol#9423: what tools can I use to isolate elements of music besides lalala.ai ?
April#5244: so stable diffusion models are trained with square aspect ratio, so it's not really "built" to generate different aspect ratios from that. However for images, it works *good enough.* But for spectrograms it seems to break pretty much entirely, especially since it's not really a regular image.
April#5244: sd1.5 and below are trained on 512x512 images, which is true for riffusion as well
April#5244: hence the 5s limit
April#5244: with sd2.0+ they have 768x models, for 768x768 images, which would increase to 7.6s
April#5244: but still square aspect ratio
April#5244: presumably it's theoretically possible to train a model that can work with different aspect ratios, but I haven't really seen it done
tugen#7971: thx, that is enlightening 💡
IgnizHerz#2097: There's definetly some routes to be explored, on the laion server uptightmoose explains some ideas for making the generation longer. In much finer detail than I can explain
April#5244: it's the same thing for sizes. you *can* use a 512 model to gen, say, 1024x1024 images, but the model wasn't really trained/built to do that
April#5244: it's definitely possible to get a wider aspect ratio for your trained model, but afaik none of the "user-friendly" software lets you do this easily
April#5244: thats kinda why it was a big deal when sd2.0 had 768x768 models
tugen#7971: so the app does a walk through latent space piecing together the chunks into a visualization right?, it would be interesting to add controls to adjust which direction to go in the space, and also 'fallback' to a previous spot in the song if the path isn't sonically pleasing, and retry from there
denny#1553: What would be really cool is to implement some sort of aesthetic scorer based on what is pleasing like https://github.com/tsngo/stable-diffusion-webui-aesthetic-image-scorer
denny#1553: and try walking toward results that score higher than a certain threshold
sperzieb00n#3903: Unstable Diffusion patreon got brigaded as well, and now its down
db0798#7460: Some of these anti-AI activists are raising money on GoFundMe to lobby for laws that ban AI art: https://www.gofundme.com/f/protecting-artists-from-ai-technologies
April#5244: riffusion is basically just doing txt2img for spectrograms, then doing interpolation between prompts. The actual model just generates 5s clips depending on your prompt/diffusion settings. you can do whatever you'd like with that (outpainting, interpolation, seed travel, etc)
April#5244: you could very well train an aesthetic scorer and then gen a bunch of 5s clips, filter them for the highest score, and play that next. provided your computer can generate them fast enough
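The generate-then-filter idea above can be sketched in a few lines. Both `generate_clip` and `aesthetic_score` here are placeholder stand-ins (a real version would call the diffusion pipeline and a trained scorer), and `THRESHOLD` is a hypothetical cutoff:

```python
import numpy as np

SAMPLE_RATE = 44100

def generate_clip(seed: int) -> np.ndarray:
    """Stand-in for a real txt2img spectrogram generation + decode call."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(SAMPLE_RATE * 5)

def aesthetic_score(clip: np.ndarray) -> float:
    """Stand-in for a trained aesthetic scorer; placeholder heuristic only."""
    return float(-np.abs(clip).mean())

THRESHOLD = -1.0  # hypothetical minimum acceptable score
candidates = [generate_clip(seed) for seed in range(8)]
keepers = [c for c in candidates if aesthetic_score(c) > THRESHOLD]
best = max(keepers, key=aesthetic_score)  # play this one next
```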
Nubsy#6528: Sorry if this has been asked a thousand times, but is there an easy way to convert audio I have to a spectrogram that will work well inside of img2img?
hayk#0058: Yes see `spectrogram_from_waveform` and `image_from_spectrogram` in https://github.com/riffusion/riffusion-inference/blob/main/riffusion/audio.py. Or alternatively see https://github.com/chavinlo/riffusion-manipulation
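For intuition about what those helpers do, the waveform-to-image direction is essentially a magnitude STFT (the repo itself wraps torchaudio transforms with mel scaling; this plain-numpy illustration does not use the repo's exact parameters):

```python
import numpy as np

def magnitude_stft(waveform: np.ndarray, n_fft: int = 1024, hop: int = 441) -> np.ndarray:
    """Windowed magnitude STFT: each output column covers `hop` samples."""
    window = np.hanning(n_fft)
    cols = []
    for start in range(0, len(waveform) - n_fft + 1, hop):
        frame = waveform[start:start + n_fft] * window
        cols.append(np.abs(np.fft.rfft(frame)))
    return np.stack(cols, axis=-1)  # shape: (n_fft // 2 + 1, n_frames)
```

A 440 Hz sine at 44.1 kHz should put its energy near bin `440 * n_fft / 44100 ≈ 10`.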
Nubsy#6528: ok, I can work this stuff out eventually but I'm a bit awful at python. There's no gui stuff out there, right? |
hayk#0058: There's probably some. You can check the huggingface spaces. But I will also add something basic in the next few days
hayk#0058: Hey @Edenoide I fixed loading custom checkpoints with this commit: https://github.com/riffusion/riffusion/commit/8349ccff5957f42d8ae7838b6d8218e3060ad1ee
So now if you specify a non-default checkpoint with `--checkpoint` it will not use the traced unet. But your checkpoint needs to be in the huggingface diffusers directory format, not a `ckpt` file in the CompVis format.
Yumeshiro#0098: So I threw this together with Reaper whilst testing riffusion today. It's fantastic and very usable. A neat trick for stereo is to use the same seed but slightly adjust the CFG for a second set of gens. (7 to 6 for example and then hard pan results L/R) Prompt was, believe it or not, "sad, sad, sad, slow acoustic indie, the japanese house and slow meadow" and I believe "electric guitar" was in the neg prompt. Heun is great for clarity. https://cdn.discordapp.com/attachments/1053081177772261386/1055755712577409084/Refunkfusion_Test.mp3
Yumeshiro#0098: Oh. Obviously the above has been edited, retimed, effected, and added instruments, synths, etc.
Yumeshiro#0098: The below is AI only, with timing edits, stutters, and reversed beginning. https://cdn.discordapp.com/attachments/1053081177772261386/1055755968513839144/Refunkfusion_Test_Dry.mp3
Yumeshiro#0098: When hard panning L/R with same seed and different CFGs, it's best to have something like mongoose or another effect in your DAW to combine the lows back to mono, that way you keep your bass and kick front and center.
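The hard-pan-plus-mono-lows trick can also be done outside a DAW; a sketch with numpy, where the low-pass is a crude moving average (first null near `bass_hz`) standing in for a proper crossover plugin:

```python
import numpy as np

SAMPLE_RATE = 44100

def stereo_from_variants(left: np.ndarray, right: np.ndarray,
                         bass_hz: float = 120.0) -> np.ndarray:
    """Hard-pan two same-seed variants L/R but fold the low band back to mono."""
    n = max(1, int(SAMPLE_RATE / bass_hz))
    kernel = np.ones(n) / n
    low_l = np.convolve(left, kernel, mode="same")
    low_r = np.convolve(right, kernel, mode="same")
    low_mono = 0.5 * (low_l + low_r)
    out_l = (left - low_l) + low_mono   # highs stay panned, lows shared
    out_r = (right - low_r) + low_mono
    return np.stack([out_l, out_r], axis=-1)
```

Note the channel sum is preserved exactly, so the mono fold-down matches the original mix.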
Edenoide#0166: Wow! I'm gonna try it! Thanks a lot for your dedication
Robin🐦#8003: Heyo 😄 is there something like dreambooth for riffusion? Or can dreambooth be used with riffusion? (sorry for the noob questions 🙂 )
undefined#3382: It is possible, you can use the manipulation tools to obtain a 768x768px image by adjusting the `nmels` and `duration` params like so `--nmels 768 --duration 7678`, then you can dreambooth with those resulting images.
I recommend cropping the audio manually on chunks and then convert them to image and train with them.
|
I believe you can train a 1.5 model with the 768px images without much trouble, but ideally the 2.1 768 model would be best 🤔
undefined#3382: You get higher quality and also longer samples of almost 8s
Edenoide#0166: You can use dreambooth for training. I've been using the 4GB model: https://huggingface.co/ckpt/riffusion-model-v1/tree/main
Edenoide#0166: But you need first to generate spectrograms
Edenoide#0166: You can install localy RIFFUSION MANIPULATION or use this Colab: https://colab.research.google.com/drive/1-REue4KpDhOMDI-v6gRytMpANoMUqFvi?usp=sharing
Edenoide#0166: It works with 5.119-second audio cuts; if longer, they will generate extra .png files
Edenoide#0166: Sure it's impossible without a big graphic card to make it sound in real time but being able to generate stereo spectrograms with your simple trick seems great for improved quality! A friend of mine said yesterday that riffusion sounds like phone hold music but in a good way.
Twee#2335: https://twitter.com/snowden/status/1606274781124386816?s=61&t=3UyYGdnVpGWCtQ87dgdxwg
Yumeshiro#0098: Phone hold music is an apt description, as I plan to test Vaporwave next, heh. It'll be perfect for such purposes.
IgnizHerz#2097: harmonai on stability side of things has been brewing for some time
a_robot_kicker#7014: there we are, VST now producing not-garbage noises. UX is very primitive but here it is running and sort of working in Tracktion Waveform https://cdn.discordapp.com/attachments/1053081177772261386/1055875130447900732/image.png
a_robot_kicker#7014: biggest headache would be implementing something that actually maintains clips, writes them at the proper time to the timeline and so on, but I'm not really sure how to do that in a vst. Maybe two vsts, one that generates the sounds and the other that acts as a sampler.
a_robot_kicker#7014: right now it just outputs audio as if it were a synth, so you must record it to a send
a_robot_kicker#7014: repo is here if you want to mess with it https://github.com/mklingen/RiffusionVST
ggga#2688: Recently discovered this project, the results are pretty cool:
https://jukebox.openai.com/
Edenoide#0166: Yeeehaa! It works!! I've followed this post for the conversion from .ckpt to diffusers if anyone is interested: https://www.reddit.com/r/StableDiffusion/comments/xooavu/how_does_this_script_works_ckpt_to_diffusers/
monasterydreams#4709: Gonna give this a shot, dope man
Nubsy#6528: anyone have a good keyword for getting vocals into a song? Singing and Vocals rarely work for me
a_robot_kicker#7014: a specific artist with "vocal" "singing" "acapella" or "choir"
Nubsy#6528: ok! Thank you. I think some of the things I'm aiming for just aren't in the dataset. I'm looking forward to watching this project grow though!
Nubsy#6528: is there a resource out there to help with my stupid beginner questions? I'm using Automatic1111's gui with the extension. I'm wondering what sampling method to use, and looking for general tips and tricks etc.
Edenoide#0166: I wrote an installation guide on local for beginners https://www.reddit.com/r/riffusion/comments/zrubc9/installation_guide_for_riffusion_app_inference/
Edenoide#0166: (Windows)
Robin🐦#8003: thanks, I'm a beginner 😄
Edenoide#0166: me too!
bread browser#3870: this might be why. https://updates.kickstarter.com/ai-current-thinking/
Meatfucker#1381: That happened as a result of the brigade
Meatfucker#1381: Im hoping riffusion flies under their radar for now
Meatfucker#1381: just might be obscure enough yet they wont care |
Meatfucker#1381: and its mostly visual artists, so since its audio based they may also be less interested in attacking this project
Jay#0152: https://colab.research.google.com/github/thx-pw/riffusion-music2music-colab/blob/main/riffusion_music2music.ipynb
April#5244: sounds like kickstarter will only allow ai projects that have properly licensed datasets? ie either own the rights, granted the rights, or public domain?
matteo101man#6162: yea that sucks
Meatfucker#1381: I can say they never reached out to UD about our potential dataset. Just nuked
Meatfucker#1381: So doubt they actually care about the dataset
Meatfucker#1381: unfortunate but thats the risk of using a platform like kickstarter
April#5244: huh. that's kinda unfortunate
Meatfucker#1381: Definitely felt pretty kneejerk
doomsboygaming#2550: They don’t bother to give a crap or actually investigate what they are actually banning
doomsboygaming#2550: Plus them preventing kickstarts they are just further damaging AI and creativity
Haycoat#4808: Works like a wonder!
Nubsy#6528: nobody's really found a way to extend a riff locally without using one of the preset images like og_beat, right?
XIVV#9579: ay
XIVV#9579: so um |
XIVV#9579: what's the best (and easiest) place to use this
XIVV#9579: i wanna generate some sweet metal riffs
XIVV#9579: but i just cant with the site
Meatfucker#1381: You can run it on your own computer locally. Has similar requirements to stable diffusion
Meatfucker#1381: Im not sure how much the process has changed since I first installed it, so Im hesitant to give too much direction there, but previously you would install the inference server, and then install the app server
Meatfucker#1381: then it had a web interface similar to the one you see on their site
denny#1553: Set the width to 4x and img2img a few times.
denny#1553: Txt2img a riff first
denny#1553: 20s is the best you can do in auto1111 I think
denny#1553: In one go. Otherwise you can use Sox to stitch samples
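SoX handles the stitching well; for a dependency-free alternative, Python's stdlib `wave` module can concatenate clips as long as they share sample rate, sample width, and channel count. A sketch (paths are placeholders):

```python
import wave

def stitch_wavs(paths: list[str], out_path: str) -> None:
    """Concatenate WAV clips that share sample rate/width/channels."""
    with wave.open(out_path, "wb") as out:
        params = None
        for path in paths:
            with wave.open(path, "rb") as clip:
                if params is None:
                    # copy format from the first clip; header is
                    # patched with the real frame count on close
                    params = clip.getparams()
                    out.setparams(params)
                out.writeframes(clip.readframes(clip.getnframes()))
```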
Nubsy#6528: ah thanks, in my experience the results with wider generations are much worse than when keeping things square, which makes sense
denny#1553: Yeah it's not perfect. I've yet to try the latent walking method. I'm also interested in trying https://www.painthua.com/ soon... been experimenting on too many things
AshleyDR#9711: AI is a creative work of this time in our history, and to try and stifle it is ultimately futile! AI is ultimately an evolution of mankind's tech, for the benefit of all the willing!
Meatfucker#1381: We have a lot of exciting things coming, many we almost certainly wont see coming.
Meatfucker#1381: I like to liken our current state of ML to the internet around 2005 |
Meatfucker#1381: everyones about to get it in their pockets and its going to be transformative
0x4d#1101: arguably people already have it in their pockets
0x4d#1101: well, the compute is of course not run locally yet but internet access is a constant in the first world these days
0x4d#1101: but you can pretty easily use ChatGPT to generate correct answers to most homework that a high schooler would reasonably encounter
April#5244: > but internet access is a constant in the first world these days
I'd like to clarify that it's actually not. It's constant for people with money. At least, here in the US. No regular income = not able to reliably afford internet/phone/data bills = no access. At best you might have a library that's miles away. Likewise, even if you pay for home internet, it very often goes down. Then again, US is kinda like a 3rd world country so 🤷♀️
Meatfucker#1381: I meant more in their pockets as in the equivalent time as when phones did. The equivalent reality would be the tight integration of these systems into everything, which is coming for sure
Meatfucker#1381: having to go to chatgpt and ask is the equivalent of having to dial in to aol
Meatfucker#1381: eventually itll just suggest things based on the listening its already doing
Meatfucker#1381: also discord wouldnt let me post that how i worded it previously
Meatfucker#1381: second time Ive seen it doing realtime message filtering. It filtered a UD mods message in the moderation channel the other day while they were trying to paste what a user had wrote
Meatfucker#1381: https://cdn.discordapp.com/attachments/1053081177772261386/1056326014965399643/image.png
April#5244: it's a per-server thing. riffusion admins/mods decided to have a word filter
April#5244: and yes I understand what you mean. stuff like chatgpt is gonna be integrated into a lot of stuff
Deleted User#0000: Hi guys, just arrived - I must say my mind has been completely blown by this technology. This is a massive leap for humanity. I can't even fathom that AI is generating music this good. Well done Seth and Hayk.
Tivra#3760: Merry Christmas
Elconite#8348: after I make a spectrogram how can I convert it to a .wav?
denny#1553: you can use the https://github.com/chavinlo/riffusion-manipulation img2audio script here or use the auto1111 extension https://github.com/enlyth/sd-webui-riffusion with a denoise strength of 0 in img2img
tjthejuggler#7689: Merry Christmas!
tjthejuggler#7689: Does anyone know how many spectrogram images were used to finetune the original Riffusion?
sperzieb00n#3903: felt important to share with another ML art tool group, in light of recent events https://twitter.com/fractalcounty/status/1603823751287668741
COMEHU#2094: adobe? im not surprised but i expected better, especially knowing that they have ai tools in their products
COMEHU#2094: disney is definitely expected tho 💀
sperzieb00n#3903: and warner
Jack Julian#8888: Dankje boontje
Jack Julian#8888: Anymore info on that 'scheme'?
Jack Julian#8888: Ah opened the tweet, thanks again
Broccaloo#0266: @Jack Julian To be clear, I don't disagree with you about establishing ethics. I wouldn't support using these samples for commercial purposes (I upload similar things on my Youtube channel and have never monetized anything for this reason). What we're doing here is testing out a new exciting AI tool (research and commentary happen to be purposes protected by fair use btw), we're not establishing a market substitute. You may see it differently, but personally, I don't see a problem with that at all.
Jack Julian#8888: Thats how it mostly starts right? Same with images, but here we are now. Some other players in the game see the dollar signs in their eyes and create a product from this. Like Lensa did.
a_robot_kicker#7014: This is definitely going to be commercialized. The use case is just as clear as with images. The copyright law hasn't caught up yet, and it's unclear if using samples of copyrighted music in training this model is even legal |
a_robot_kicker#7014: There are already rules on music sampling and licensing, but for AI sampling it's totally unclear what IP law that would fall under. I expect some kind of new IP regulation or court interpretation of existing regulation, but it's always slower than the technology development.
a_robot_kicker#7014: I expect in 20 years it will be highly regulated and training on a dataset will require crediting or paying royalties to the artists in the dataset
a_robot_kicker#7014: And of course big players like Disney and Adobe absolutely want those regulations in place so they can safely monetize the technology and keep away unregulated competition
Robin🐦#8003: copyright law is so old and draconic when it comes to this stuff
Edenoide#0166: I can imagine Universal, Sony & Warner making their own models with their records and selling them for millions or holding the rights of the derivative works. This could be the worst scenario
Meatfucker#1381: Theres a whole wild west of new things to figure out when it comes to copyright law, fair use, and ownership
Meatfucker#1381: will be interesting to see how it all plays out
nullerror#1387: edenoide i also had that exact same thought
nullerror#1387: slash concern
ryan_helsing#7769: if something is derived from an artists work, having attribution be a legal requirement and having an organization like ascap collect royalties for the artists if money is being made might be good. I don’t know how we can get to that point from where we are now, but I don’t think we should care too much about the company’s share ethically, even though they have the most power to lobby for regulation and attempt lawsuits. If somehow more power could be given to individual artists and less to the labels and music companies during this shift, that seems like a win to me. Rather than the worst case scenario presented above and rather than the exploitative nature of the image models currently. I think we do need to be careful with how we shape and use these tools.
noop_noob#0479: Do we still not know what dataset riffusion was trained on?
sperzieb00n#3903: yep... seperating style from original works, and whats fair use, is going to give looooooots of food for thought the coming years
sperzieb00n#3903: if it didn't already lol
0x4d#1101: shoutout to @violette for chillin in the voice channel for like 3 days straight
S.#2668: Hi. Has anyone turned the default spectrograms into audio? Would be interested in hearing them |
S.#2668: Also can anyone tell me what the other seed images are for? I assume for blending together segments of audio? How is the best way we can do this?
S.#2668: I’d like to build up a library of useful seed images
wedgeewoo#7793: heres an embedding of mine if anyone is interested
wedgeewoo#7793: https://civitai.com/models/2658/wedgeewoo-riffusion-embedding
Mikhail#2755: what interesting patterns have y'all observed about this model? Is there anything that makes you think it will (or won't) be able to produce real useful audio artefacts? It seems like such a side-ways approach to DNN audio synthesis
yahtzeefish#9367: This is crazy...
Meatfucker#1381: I think the somewhat sideways approach is exactly why itll eventually make for some very interesting music. It already goes to some neat places sometimes.
Marcos | Meta Pool#2081: Hey everyone! Pleasure to be in this group and to connect!
Marcos | Meta Pool#2081: Congrats to the founders, amazing tool!!!
Delayedchaos#3646: Has anyone coded wav2img specifically on its own? I've seen other ppl ask variations of this and I think I see what would functionally be wav2img in one of the colabs but it's only at 5 sec intervals and doesn't indicate anything bout exporting.
Delayedchaos#3646: So far I've just used some external spectrogram generators but I have a feeling if I can get it functional through Automatic1111 that it'd be wayy better.
Delayedchaos#3646: Ideally what I'd like to do is try converting songs I've made into spectrogram and then convert images I've made into sound and then back into spectrogram
Delayedchaos#3646: and then bash them together lol
Delayedchaos#3646: I really liked the idea of having a song have a whole hidden image coded into it. I saw Aphex Twin's vid on it a while back and I've always wanted to do something like that.
Delayedchaos#3646: I'm sure what they did was probably a whole different process largely but it'd be nice to find a completely different way to go about getting there with more detailed imagery. |
hayk#0058: 🔥 **Riffusion v0.3.0** 🔥
@everyone Hey everyone! I'm excited to announce the v0.3.0 code release of the Riffusion repo. This includes a full rewrite to go from a hack to a quality software project. It also includes a CLI tool and an interactive streamlit app for common tasks, MPS backend support, stereo spectrogram encoding, a test suite run by CI, and more.
The CLI tool and streamlit playground are very extensible, so I hope that you will hop on and submit PRs and raise issues so we can build together rather than having separate codebases.
We love all the interest and tinkering from you!
https://github.com/riffusion/riffusion/releases/tag/v0.3.0
0nion_man_LV#6572: wonderful
clambake#5510: this could be big
einsneun91#7311: Can we get some comparison clips, please?
harsh2011#5849: Awesome
hayk#0058: ^ There's no new model here, this is a code release. It will provide tools to help experimentation of all kinds
hayk#0058: Pinned a message. |
joseph#9145: Huge thanks for considering Mac users!
Marcos | Meta Pool#2081: This is huge!
a_robot_kicker#7014: Excellent. Will rebase next week and try to get a vst running on the new version.
jp#4195: Very nice! I'll try to install this version on my laptop again (GeForce RTX 3090, 16hn) and see. With the previous version, I had issues with the node version and had to revert a few commits to get it working.
I have some access to big GPUs (A100, 40GB). Is it possible to share the training code/setup a bit more than the generic HF page and the "about" blog post? I would like to see if I can get vocals (perhaps just ohhh ahhh ouuuu) generated as well, perhaps with tone and style.
Elconite#8348: I want to use this to generate a bunch of short wav files of various styles, can I do that? How do I translate and save the results to a wav?
Elconite#8348: can you message me suggestions?
Nubsy#6528: I see this can deal with stereo spectrograms, is there a model available that generates those?
dent#5397: Can you train it on your own labeled sounds? That would be an amazing feature
dent#5397: Would put you up there with HarmonAI probably
Poland#6025: Hello
Meatfucker#1381: Its a standard diffusion model. If you were to convert audio into spectrograms and tag em, you should be able to train the model on them just like any other
Meatfucker#1381: I believe Ive seen a few people doing it already to some degree
Meatfucker#1381: There are tools to do the conversion already as well |
dent#5397: Like with the dream booth fine tuning notebook it would work?
dent#5397: Not sure how to use the PyTorch audio spectrogram converter but I guess i could figure it out
dent#5397: Just saying making a notebook that streamlines the process would be great
hayk#0058: Yes please see the streamlit app, the Text to Audio and Text to Audio Batch pages https://github.com/riffusion/riffusion/#riffusion-playground
hayk#0058: The sample-clips and audio-to-image tasks in `riffusion.cli` should be a great way to get started: https://github.com/riffusion/riffusion/blob/main/riffusion/cli.py
wedgeewoo#7793: Is there a way to just update to the new v0.3 or do i have to delete all the project files like the app, and inference repos. Also has the manipulation repo been added to this update?
hayk#0058: * the previous riffusion-inference repo has been renamed to riffusion. you should be able to continue pulling from it
* the riffusion-app repo is untouched. it is compatible with v0.3
* the riffusion-manipulation repo has helper tools that I think now should be captured in riffusion, but welcome any other contributions
wedgeewoo#7793: When we activate the env, can we still use the riffusion-inference env, or do we have to create a new environment for the new riffusion env?
undefined#3382: will that be added to the playground? it'd be really useful to create datasets for training (and possibility to generate higher resolution too)
hayk#0058: Should be fine to use the same, just install requirements again
hayk#0058: Sample clips is already in the playground, audio to image will be added by me or someone
ryan_helsing#7769: This is huge! What's the best API and web-accessible version right now for those of us unable to get it running on our local machines?
Delayedchaos#3646: yayyyy I got the streamlit app to work even after getting stuck a bunch of times |
Delayedchaos#3646: I just plugged in my info into chatgpt whenever I got confused. In terms of coding I'm just a smidge more functional than a normie lol
Delayedchaos#3646: it looks great though! I'm so excited. I'm running this on a 3060 so I won't get the fast live stuff but I intend to pull samples out into other things anyway so this is a big step forward IMO.
Delayedchaos#3646: Appreciate all the work team&everyone else who is passionate about this!
Nikuson#6709: here is my little work, where my friend and I made a classic diffusion for audio using riffusion manipulation: https://github.com/nikuson/Audioffusion
Nikuson#6709: I didn't provide any usage guides there, but maybe I will when I'm done with a larger model
Nico-Flor#2315: Hi everyone, I'm new to this, I'm trying to run riffusion locally in a jupyter notebook and I've installed all the requirements but I'm getting an error saying the library riffusion does not exist, I'm sure this must be obvious but I don't understand how do I actually install the library
wedgeewoo#7793: maybe you have to activate the riffusion env?
wedgeewoo#7793: can i use custom textual inversion embeddings with the streamlit app?
hayk#0058: Not currently, but I welcome you to add it - that's a good thing to have
Nico-Flor#2315: I don't think that's it, in fact, the issue is not only local. When I run this colab notebook: https://colab.research.google.com/drive/1FhH3HlN8Ps_Pr9OR6Qcfbfz7utDvICl0 which should be able to run riffusion as far as I understand, the same error emerges...
hayk#0058: You probably need to add the riffusion repo root to your PYTHONPATH, or run your commands from it as the working directory in which case it happens automatically. This is the general case for Python packages
Nico-Flor#2315: I'm sorry, I'm gonna restate the problem because I now have some new information. I can read the riffusion library, but it doesn't appear to agree with the code I've found online. For example, the line:
from riffusion.audio import wav_bytes_from_spectrogram_image, spectrogram_from_waveform
|
gives an error, since riffusion.audio does not exist in the library (and I can confirm that searching the github repo). Does anyone have some sample code to do a test run of the library that is updated?
wedgeewoo#7793: haha oh boy i dont even know where to start
hayk#0058: Ah yes that module was refactored away. So either you can use the v0.2.0 tag temporarily or better is to update the notebook to the new code structure
Nico-Flor#2315: Right! Do you have some sample code which uses the library with the new code structure? I'm having a hard time coming up with it myself, since I'm not familiared with the structure deeply
hayk#0058: I have to go but I made some progress in refactoring it: https://colab.research.google.com/drive/1JOOqXLxXgvNmVwatb7UwHP-_wkYwjVAP?usp=sharing
The best place to look for examples are:
https://github.com/riffusion/riffusion/blob/main/riffusion/cli.py
https://github.com/riffusion/riffusion/tree/main/riffusion/streamlit/pages
Robin🐦#8003: random question, but has anyone tried adding effects like blur, mosaic, distort to spectrograms of music and see what it sounds like?
Delayedchaos#3646: I used chatgpt to assist during the install process and I had a problem where it didn't see riffusion due to it not installing it because of some path issues. I copied the code into chatgpt and asked why it wasn't working and it explained pretty well. 😄
Delayedchaos#3646: I installed locally on anaconda so it may be a bit diff but I'm sure chatgpt should help us all(newbies) quite a bit if we put it to use
bread browser#3870: like processors and effects
Nico-Flor#2315: Thank you so much!! This helps a lot, still gives an error but I'm much closer now
Robin🐦#8003: I'm trying to see if there's any tools that can export and import spectrograms easily |
Robin🐦#8003: I found this online tool that can play them (and generate them, sorta) https://nsspot.herokuapp.com/imagetoaudio/ but for some reason it's not generating anything but noise
wedgeewoo#7793: https://github.com/chavinlo/riffusion-manipulation
Robin🐦#8003: thanks 😄
bread browser#3870: like this https://pytorch.org/tutorials/beginner/audio_datasets_tutorial.html
bread browser#3870: or this https://github.com/symphonynet/SymphonyNet
Robin🐦#8003: I'm terrible with python but I set up a virtual environment, installed the requirements.txt + ffmpeg, and tried running the file2img.py file which unfortunately gives me the error "RuntimeError: Numpy is not available" :/
Seemingly caused by a mismatching API version
```UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xe (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:77.)"```
Sorry to bother you with this 😅 but maybe you have an idea on how to resolve it
(also just tried upgrading numpy as suggested on the interwebz using ```pip install numpy --upgrade ```) https://cdn.discordapp.com/attachments/1053081177772261386/1057468685444464720/image.png
wedgeewoo#7793: Oh i think you need to set up an environment with this : https://github.com/riffusion/riffusion
Robin🐦#8003: ah, it's not a standalone thing
wedgeewoo#7793: yeah sorry i forgot, im just getting into this so im not a reliable source
Robin🐦#8003: oh no problem 😄 any help is appreciated |
Robin🐦#8003: didn't help unfortunately, getting the same error - I downloaded the riffusion repo and extracted it to a fresh folder, created a new env in there and installed the requirements, then I got a fresh copy of riffusion-manipulation and moved it to a folder within my riffusion folder, and ran the scripts in there while having the riffusion env active
Robin🐦#8003: maybe I should try with anaconda
wedgeewoo#7793: yes
wedgeewoo#7793: i forgot to mention that too, conda has numpy built in i think
wedgeewoo#7793: and everything started to work for me when i switched lol
Robin🐦#8003: the python ecosystem scares me
Robin🐦#8003: which is a shame because I'm very much interested in messing around with this stuff :p
Robin🐦#8003: oof, still no luck with anaconda :/ https://cdn.discordapp.com/attachments/1053081177772261386/1057480103887654992/image.png
Robin🐦#8003: gonna go to bed now, it's 3 am 😅
Robin🐦#8003: oh wait, I actually managed to make that work now by running ``>py -m pip install numpy --upgrade``
Robin🐦#8003: okay a lot of headaches later I've got it working 😄
Robin🐦#8003: this is what skewing a waveform sounds like https://cdn.discordapp.com/attachments/1053081177772261386/1057493582979805224/globoxtestout.mp3,https://cdn.discordapp.com/attachments/1053081177772261386/1057493583390855168/image.png
Robin🐦#8003: or a pixelated filter https://cdn.discordapp.com/attachments/1053081177772261386/1057495815997882459/globoxtestout.mp3,https://cdn.discordapp.com/attachments/1053081177772261386/1057495816341831770/image.png
Tucker#6676: Love This! It'll be interesting to see how image filters become 'effects pedals'. Like can you create a wah-wah, reverb or delay effects simply by affecting the image? Such fun.
wedgeewoo#7793: https://cdn.discordapp.com/attachments/1053081177772261386/1057527029689503764/00043-3596410539-ambient_liquid_tech_house_song_with_a-major_kick_filter_sweep_by_sorewaplusv_1.png,https://cdn.discordapp.com/attachments/1053081177772261386/1057527030054408223/raw_1.mp3,https://cdn.discordapp.com/attachments/1053081177772261386/1057527030524157962/edit.png,https://cdn.discordapp.com/attachments/1053081177772261386/1057527031174287380/curves_1.mp3 |
wedgeewoo#7793: in curves i adjusted the curve levels in gimp to get rid of some frequencies, cool stuff
Edenoide#0166: Yess we can edit music with photoshop, copy paste just some frequencies or even stretch the beat when it's out of step! It's very cool when your eyes start to identify instruments only by its visual signature.
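These image-editor tricks can be scripted too; a minimal sketch with Pillow (paths and blur radius are placeholders), to run on a spectrogram PNG before converting it back to audio with the manipulation scripts:

```python
from PIL import Image, ImageFilter

def blur_spectrogram(in_path: str, out_path: str, radius: float = 2.0) -> None:
    """Gaussian-blur a spectrogram image; smearing energy across
    time/frequency tends to sound like a washy reverb after conversion."""
    img = Image.open(in_path).convert("L")
    img.filter(ImageFilter.GaussianBlur(radius)).save(out_path)
```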
Edenoide#0166: I'm very surprised by how much you can mess up the spectrogram and have it still be recognizable
Marcos | Meta Pool#2081: Hey everyone!
Marcos | Meta Pool#2081: We are currently experimenting around with Riffusion, and are wondering: did you teach the autoencoder yourself? Or did you use a pretrained one? @hayk
Haycoat#4808: Ooh you know what might be interesting? Spectrogram style transfer using tensorflow
Robin🐦#8003: isn't that kinda what img2img does with riffusion?
Haycoat#4808: Yeah but with your audio style
Haycoat#4808: Like if you want a song to sound like a certain song
undefined#3382: hope this gets merged soon: https://github.com/riffusion/riffusion/pull/50
Nico-Flor#2315: Hi, I finally got riffusion working 🙂 I'm always getting 7 second clips, does anybody know how to get a longer sound file?
johannezz#3779: Did you get the colab working?
Nico-Flor#2315: Yes! The pillow version needs to be updated before importing anything else
Nico-Flor#2315: !pip install pillow==9.1.0
Nico-Flor#2315: When I added that as the first cell of the notebook, it worked |
Nico-Flor#2315: When working with this updated notebook: https://colab.research.google.com/drive/1JOOqXLxXgvNmVwatb7UwHP-_wkYwjVAP?usp=sharing
April#5244: riffusion generates spectrograms at a default 512x512 resolution which gives you a 5.12s clip. You can increase the width of the generated images to generate longer clips, but since riffusion wasn't trained on wider images, the results may end up pretty poor. I've had some decent success with outpainting though.
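The width-to-duration math April describes can be sketched in a few lines (assuming the ~100 pixels per second implied by 512 px → 5.12 s; the exact rate depends on riffusion's spectrogram settings, so treat this as a rule of thumb):

```python
# Rough duration math for riffusion-style spectrograms.
# Assumption: 512 px == 5.12 s, i.e. ~100 pixels per second.
PIXELS_PER_SECOND = 100

def clip_seconds(width_px: int) -> float:
    """Approximate audio length for a spectrogram of the given width."""
    return width_px / PIXELS_PER_SECOND

def width_for(seconds: float, multiple: int = 8) -> int:
    """Width needed for a target duration, rounded up to a multiple of 8
    (diffusion UIs typically require dimensions divisible by 8)."""
    raw = seconds * PIXELS_PER_SECOND
    return int(-(-raw // multiple) * multiple)  # ceiling division, then scale back

print(clip_seconds(512))   # 5.12
print(width_for(10.0))     # 1000
```

As noted above, widths much beyond 512 drift away from what the model was trained on, so longer targets usually work better via interpolation or outpainting than via one very wide image.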
kyemvy#0433: hi guys
kyemvy#0433: just wanted to ask, is it possible to use negative prompts in the site version
Nico-Flor#2315: Hmm... But then how is the demo from the site possible? It produces a continuous stream of generated music with riffusion
April#5244: the way it does "infinite music" like that is by generating multiple clips and doing interpolation, which basically generates the "in-between" clips
April#5244: it still does 5.12 second clips at a time, and just generates them repeatedly
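A minimal sketch of the "in-between" idea: interpolate between the latent vectors of two clips and decode each step. Spherical interpolation (slerp) is the usual choice for diffusion latents; the random vectors and sizes here are stand-ins, not riffusion's actual latents or pipeline:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Spherical interpolation between two latent vectors."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * a + t * b  # vectors are (nearly) parallel: fall back to lerp
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Five latents bridging clip A to clip B; decoding each one (not shown)
# yields the transitional 5.12 s clips that get stitched into a stream.
rng = np.random.default_rng(0)
latent_a, latent_b = rng.normal(size=64), rng.normal(size=64)
steps = [slerp(t, latent_a, latent_b) for t in np.linspace(0.0, 1.0, 5)]
```

More steps means a slower, smoother transition; at t=0 and t=1 you recover the endpoint clips exactly.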
Nico-Flor#2315: Oh! That would solve my problem. Is there any script available for that?
April#5244: the riffusion software should be able to do this. there's also scripts for automatic1111's web ui iirc
April#5244: I think some were linked earlier
Nico-Flor#2315: Hm I'm not sure where to find that, but still, thank you so much! You've helped me a lot
Delayedchaos#3646: hey riffusion gang!
Delayedchaos#3646: got a fun inquiry for ya'll and I wanted to see if ya'll found this interesting:
https://vimeo.com/226597331
https://www.teddavis.org/xyscope/ |
Delayedchaos#3646: the idea I've been toying around with lately is to see how detailed of an image we could export into this sort of thing while still having it be somewhat musical
Delayedchaos#3646: here's an example of some oscilloscope music if you're not familiar:
https://youtu.be/jQjJZbgMw7E
0x4d#1101: someone correct me if I'm wrong but I think it would be challenging to get anything usable since oscilloscope music is highly dependent on the phase relationship between the two audio channels and riffusion doesn't preserve phase information
Delayedchaos#3646: hmmm...interesting. I suspected it would be a hard thing to make. Here's what the about says to confirm what you mentioned:
> The STFT is invertible, so the original audio can be reconstructed from a spectrogram. However, the spectrogram images from our model only contain the amplitude of the sine waves and not the phases, because the phases are chaotic and hard to learn. Instead, we use the Griffin-Lim algorithm to approximate the phase when reconstructing the audio clip.
Delayedchaos#3646: Griffin-Lim Algorithm would approximate the phase.
Delayedchaos#3646: Now I need to look that up lol.
Delayedchaos#3646: I feel like a cave man with all this stuff.
Delayedchaos#3646: Here's what ChatGPT had to say about it within the context of the about:
> The Griffin-Lim algorithm is a method for reconstructing a signal from its magnitude spectrum, which is the absolute value of the STFT spectra. Because the phase information is lost when computing the magnitude spectrum, the algorithm iteratively estimates the phase spectrum by comparing the synthesized signal to the given magnitude spectrum and updating the phase spectrum accordingly. This allows the original signal to be reconstructed from the spectrogram, even though the phase information is not available.
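For intuition, here is a toy numpy/scipy version of that iterate-and-reestimate loop. Libraries like librosa and torchaudio ship production implementations; this is not riffusion's actual code, just the bare algorithm:

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag: np.ndarray, n_iter: int = 32, nperseg: int = 256) -> np.ndarray:
    """Minimal Griffin-Lim: recover a waveform from a magnitude-only STFT
    by repeatedly synthesizing audio and keeping only the phase it implies."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))  # random initial phase
    for _ in range(n_iter):
        _, wav = istft(mag * phase, nperseg=nperseg)    # back to the time domain
        _, _, spec = stft(wav, nperseg=nperseg)         # forward transform again
        spec = spec[:, : mag.shape[1]]                  # keep frame counts aligned
        phase = np.exp(1j * np.angle(spec))             # keep phase, discard magnitude
    _, wav = istft(mag * phase, nperseg=nperseg)
    return wav
```

Each pass nudges the phase toward something consistent with the given magnitudes, which is why the reconstruction sounds close to (but never identical to) the original audio.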
Delayedchaos#3646: lmao now I'm just winding it up
Delayedchaos#3646: https://cdn.discordapp.com/attachments/1053081177772261386/1057863185283170404/image.png
Delayedchaos#3646: Is a "complex STFT" a thing? 😛 |
Delayedchaos#3646: "Cost-prohibitive?" oof
Yeah that makes sense, though ChatGPT could be making stuff up idk (cross-referencing)
sperzieb00n#3903: thats smart; yeah, why not use all colors?
zanz#3084: I am still learning about this, some of the results have been very cool. I'm sure this or something similar has been thought of, but for using some high quality midi patches or something to generate training data. This tool has a really impressive musicians toolkit https://www.onemotion.com/chord-player/
bread browser#3870: what about https://youtu.be/qD1rGYDLFNI or https://youtu.be/_p4gB7OgYI8 or https://www.youtube.com/watch?v=XziuEdpVUe0
Delayedchaos#3646: I first encountered it in a Smarter Every Day video that Jerobeam was on, detailing the art form. It's so beautiful. I love it and want to try to integrate it more into my modern life. Here's a tool you can use that's available on github:
https://github.com/jameshball/osci-render
Delayedchaos#3646: I still have aspirations to maybe connect this to SD(or MJ, deforum---etc) but not by the riffusion model but I'll have to think about it and do a few people finding missions to see if someone smarter than me can help me figure things out. 😄
bread browser#3870: agreed and thanks
bread browser#3870: i was thinking of making gpt-neo make music with FFmpeg.
Delayedchaos#3646: I want to do something a bit more weird but if I were to remove all the technical jargon...I'd like to be able to paint with sound or...See sound
bread browser#3870: maybe instead of using SD you use SG
Delayedchaos#3646: like if all of conceptual reality is connection via a symbolic layer structure if we were to be able to like...hear the symbols and make "landmarks" within a soundscape. IDK a lot of this is heady stoner fuzz talk to a degree.
Delayedchaos#3646: so like for example, red apple, purple grapes, yellow sun
If I were to generate a prompt matrix the concept map of those 2 fruit would make the concept of fruit strong enough that the correlation between yellow and fruit would generate a lemon |
Delayedchaos#3646: due to overlapping concepts
Delayedchaos#3646: so if there's a sound map attached to each of these meanings it's almost like a whole hidden dimension to them
bread browser#3870: i was thinking of using FFmpeg to make different Hz's to make music. like https://www.youtube.com/@realwebdrivertorso but more sounding like music
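FFmpeg can already do this with its sine source (e.g. `ffmpeg -f lavfi -i "sine=frequency=440:duration=5" tone.wav`), and the same idea fits in a few lines of stdlib Python. The file names, note choices, and 50% amplitude below are arbitrary:

```python
import math
import struct
import wave

def write_tone(path: str, freq_hz: float, seconds: float, rate: int = 44100) -> None:
    """Write a mono 16-bit sine tone: the simplest 'different Hz' building block."""
    n = int(rate * seconds)
    samples = (
        int(32767 * 0.5 * math.sin(2 * math.pi * freq_hz * i / rate))  # half amplitude
        for i in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", s) for s in samples))

# A tiny "melody": A4, C5, E5 written as separate half-second clips
for name, hz in [("a4.wav", 440.0), ("c5.wav", 523.25), ("e5.wav", 659.25)]:
    write_tone(name, hz, 0.5)
```

Concatenating or layering clips like these is what a tone-based generator would then sequence.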
Delayedchaos#3646: we can't read it well enough to interpolate it into visuals but I'm sure an AI workflow could
Delayedchaos#3646: I got a loop table and A1111 has a riffusion tool where i can just fill a folder full of images and it will generate audio files based on those images. I was thinking of converting them into samples and bending and twisting them into music. 😛
bread browser#3870: well I need to find a way to convert 180,00 midi files to text.
Delayedchaos#3646: audio2img--->img2txt? Not sure the text will be meaningful so how would we go about making it meaningful. A specific model or fine tuning? I understand models, embeddings, and most other things but I'm still in the dark on how fine tuning works(though I have a sense this might be the best place for me to learn tbh) lol. Eventually lol
bread browser#3870: like this https://huggingface.co/datasets/breadlicker45/midi-music-codes. I did them all manually
Delayedchaos#3646: oooo dude awesome! I wonder would you have to then map them to tokens at that point?
bread browser#3870: what do you mean 'map them to tokens'?
Delayedchaos#3646: I might not be meaning anything meaningful with that. My brain is a confusing place sometimes. I made a word to color spreadsheet that goes across 850+ colors using this:
https://xkcd.com/color/rgb/
bread browser#3870: https://mrcheeze.github.io/musenet-midi/ with this.
Delayedchaos#3646: I guess what I mean by map is like...whatever training would be needed to link up each individual code to a word
Delayedchaos#3646: some stuff I'm saying probably already exists I'd imagine |
bread browser#3870: it doesn't, and it is super hard to find any answers to it.
Delayedchaos#3646: oh ok well that's good to confirm it. I try to keep that POV in mind just in case because ppl like to tell me that with my ideas a lot lol
Delayedchaos#3646: it might exist in parts though
bread browser#3870: i have made my ideas real
Delayedchaos#3646: that we have to plug together
bread browser#3870: https://huggingface.co/breadlicker45/MuseNeo. it can make music
Delayedchaos#3646: I'm inspired by it too. I really appreciate you taking the time to chat.
Delayedchaos#3646: I have too many ideas to make real so I have to pick and choose 😛
Delayedchaos#3646: I also am fully down with other ppl making my ideas reality too. I mainly just want to use the thing lol
bread browser#3870: found a new way to gen music with hex
Delayedchaos#3646: actually huh...
Delayedchaos#3646: so if I fed it a list of tokens from my tokenizer in A1111
Delayedchaos#3646: ok now I need to find the token for orange real quick
bread browser#3870: well midi file hex makes the music so now I can just convert all the midi's into hex in a txt file and gen music that way
Delayedchaos#3646: so big red dog= 1205, 736, 1929 |
Delayedchaos#3646: I think we could have the two systems play together but I'd say the limiting factor would be the latent space maybe? idk
bread browser#3870: more like this https://cdn.discordapp.com/attachments/1053081177772261386/1058070464582393917/midi-hex.txt
Delayedchaos#3646: I'm just thinking out loud on how the 2 systems function currently. I think the problem is that it'd just create a whole new sound-realm instead of combine with the existing img/text one unless there's some other integration method that can tokenize midi.
Delayedchaos#3646: Maybe if all the midi files were converted to images first and then a fine tuning was done?
Delayedchaos#3646: I need to learn more about what fine tuning actually is. TY for chatting. I'll make a thread on this so we can continue chatting later if you want. 😄(gotta go)
bread browser#3870: i will not use any image gen. i will just fine-tune gpt-neox
Delayedchaos#3646: Started a thread.
Delayedchaos#3646: I'll have to learn more about gpt-neox so I really do appreciate you chatting. Hopefully I can offer up some useful ideas myself one day!
bread browser#3870: Same to you too.
oliveoil2222#2222: any riffusion-related tools out there to batch process a directory of prepared .wavs to the standard spectrogram? trying to collect some training material.
undefined#3382: not yet I guess, I had to modify the script to support converting all audios from a folder
Ainz#5062: hey guys is there any service offering riffusion as an api ?
Delayedchaos#3646: I thought the newest version had a gui with batch capabilities I'll check.
https://github.com/riffusion/riffusion/releases/tag/v0.3.0
Delayedchaos#3646: You can batch txt2 audio with their GUI |
Delayedchaos#3646: Only way I've seen audio2img is directly in a conda environment. https://cdn.discordapp.com/attachments/1053081177772261386/1058165378141925426/image.png
Delayedchaos#3646: but that's still not technically batch. You could probably just code it to process a whole folder IDK. ChatGPT is a pretty decent resource in terms of disseminating the info, though. Cross reference is always necessary anyway so it all ++++.
Forghetti#0923: where do you put the checkpoint model?
Forghetti#0923: I got everything running but just dont know where it goes
AdaptivePath#4443: This Riffusion thing is amazeballs
AdaptivePath#4443: Never ending musical idea generator. I recorded about 10 min and pulled a handful of absolute gems. Thank you for building this!!
bread browser#3870: riffusion isn't that good for music with no talking. this is better https://huggingface.co/JammyMachina/elec-gmusic-familized-model-13-12__17-35-53
Ainz#5062: oh tyy
Edenoide#0166: I use this colab: https://colab.research.google.com/drive/1Ma6dmAJb9XX7NpiD-I8bXpbxw_ru2_1u
AccelRose#2422: @Edenoide Just got the 'You need access' message whilst trying to download the colab. Can you please share it for everyone?
oliveoil2222#2222: i second that lmao
AccelRose#2422: This is rather neat!
Is there a writeup/code-fragments anywhere that we can parse through?
stirante#0001: hey, I was thinking about fine tuning a model myself. How much data did you use to fine tune riffusion? Also does SD automatically after some training start producing specific images, that match the data? Like you want spectrograms out of it. Does fine tuned model always return spectrograms, or is it sometimes breaking with something weird?
naklecha#6317: how do I create something like this? |
AdaptivePath#4443: Anyone have a good method for extracting midi data from these clips?
AdaptivePath#4443: Right now I will record like 10 min of output, then listen through and pull out short phrases (manually clipping in audacity). Then I take that into reason and bounce down to midi. But it's sh1t for the most part and tedious process
AdaptivePath#4443: how to use that, good sir?
TemporAlyx#1181: sounds like you're looking for AI source separation
johannezz#3779: please share https://cdn.discordapp.com/attachments/1053081177772261386/1058526174638248036/image.png
johannezz#3779: Does anyone know if it is possible to access the streamlit ui using runpod?
bread browser#3870: gpt2 and 15 gigs of vram
bread browser#3870: https://huggingface.co/spaces/JammyMachina/streamlit-jam-machine
bread browser#3870: i want to make a muzic type of model
bread browser#3870: i'm guessing mubert is just MusicBERT
bread browser#3870: Which they have the model of in Muzic
Delayedchaos#3646: are you doing your training locally?
bread browser#3870: on jan 10 of next year, yes
Delayedchaos#3646: hell yea! what kind of set will you be working with?
bread browser#3870: they are getting shiped |
bread browser#3870: m40 gpus
bread browser#3870: a 12 gig m40 gpu and a 16 gig m40 gpu
Delayedchaos#3646: You putting it into a computer or do you have some sort of special rig sorted out?
Delayedchaos#3646: I've heard it can be a pain to install those unless you have the right stuff.
bread browser#3870: computer
bread browser#3870: i can install them. i can just use server fans to cool them as they have no built in cooling i think
Delayedchaos#3646: ahhhh cool. looks like those should fit no problem then. I think I saw some used tesla gpus that didn't fit in a guys rig months back but I'm guessing he just didn't plan well enough.
Delayedchaos#3646: not having cooling definitely would be a problem if you didn't plan ahead
bread browser#3870: I'm putting them in an old Lenovo motherboard. this is a video someone made with one https://www.youtube.com/live/v_JSHjJBk7E?feature=share
Delayedchaos#3646: If I get the $$$ for it later on I'm going to just load up a bunch of server GPUs on a rack and try to set up some GPU over ethernet thing. I saw a video a while back and it just looks too cool.
bread browser#3870: and try to create a super computer
Delayedchaos#3646: exactly! 😄
bread browser#3870: it doesn't work like that. I have 2 laptops and a pc and a server I tried it doesn't work.
Delayedchaos#3646: https://www.protocase.com/
Delayedchaos#3646: super(big) computer 😛 |
bread browser#3870: then it must not use arm
Delayedchaos#3646: I can just make modified parts.
bread browser#3870: the best server in the world costs only $500,000
Delayedchaos#3646: I know a few ppl who've done modded racks so I'd have to pick their brain and go get a buddy to let me cut out some parts on his CNC
Delayedchaos#3646: I don't seriously plan on getting a set up like that anytime soon. Having a 12gb 3060 is more than I imagined having even a year ago so I'm satisfied with what I have for now
bread browser#3870: my school has a 3d printer
Delayedchaos#3646: me too! 😄 I'm a former CNC operator so I know a few ppl who have big machines that would let me use it in their off hours.
Delayedchaos#3646: I can't wait until we get CAD-based AI systems. SO cool to imagine.
bread browser#3870: i have never seen a cnc in person before
Delayedchaos#3646: They're neat! Dangerous but neat. Last one I operated was this old dinosaur Fanuc brand one that was from the late 80s/early 90s I wanna say.
bread browser#3870: dang
Delayedchaos#3646: Even though it's old they're still nice machines.
Delayedchaos#3646: I'll show you one of the most exciting vids I've ever seen on CNC. I tell everyone about it whenever I end up talking about them.
Delayedchaos#3646: https://youtu.be/G5eo5d8F5DU
Delayedchaos#3646: I saw this vid while I was working as a cabinetmaker and I've fantasized about doing something with this ever since. |
Delayedchaos#3646: I didn't really have the $$$ at the time to pursue a lot of these goals but now that's all I can think of. I'd like to think if he could get it to function on a CNC there should be no reason it wouldn't work on a number of other tools. Any additive/subtractive manufacturing I'd say.
Delayedchaos#3646: I'm half tempted to just try to e-mail the guy. I never considered doing that before this year but I figure why not? lol
naklecha#6317: ::(
naklecha#6317: what kind of prompts work for you guys?
DavesEmployee#2760: how much VRAM do you need for <5 second generation? I have a 4090 and is currently taking ~11 seconds
ClayhillJammy#0563: Hey, when is the next RIFFUSiON update coming out?
ClayhillJammy#0563: @seth
seth#5021: Hey jammy, no scheduled updates at the moment. Fun experiments are happening all the time so stay tuned
naklecha#6317: should not take so long
naklecha#6317: are you doing pipeline.to("cuda")
naklecha#6317: ?
DavesEmployee#2760: I’m not sure. Where would I check? I’m doing the simple Reddit guide
bread browser#3870: sounds cool
naklecha#6317: oh my bad, I thought you were following the google colab
naklecha#6317: on hugging face |
ClayhillJammy#0563: How do I save like, 5 minutes of a song?
Aurora~#0001: is there a way to use riffusion-app with auto1111 webui, kinda struggling to get it to work normally
Aurora~#0001: alternatively is there a way to generate from only one prompt and get good results rather than using prompt travel
ClayhillJammy#0563: BRUH MINE JUST GENERATED THE N-WORD
doomsboygaming#2550: M8 you have been listening to too much rap
ClayhillJammy#0563: @doomsboygaming no
ClayhillJammy#0563: I'll send it to you@doomsboygaming
ClayhillJammy#0563: Nah Dawg, they said "Biggo"
Delayedchaos#3646: yo, anyone that uses SD+Xformers should probably check and make sure they're not compromised (I guess xformers is part of what's susceptible, EleutherAI @-everyone'd their server about this so I wanted to pass it on):
https://pytorch.org/blog/compromised-nightly-dependency/
ClayhillJammy#0563: Hey
ClayhillJammy#0563: Anyone here know how to Save like 5 minutes of song from the RIFFUSiON site
Aurora~#0001: is there a good way to generate longer clips from only one prompt locally
naklecha#6317: How do I generate music of longer duration? When I increase width of image I get this error
|
OutOfMemoryError: CUDA out of memory. Tried to allocate 4.00 GiB (GPU 0; 14.76 GiB total capacity; 9.26 GiB already allocated; 3.44 GiB free; 10.26 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
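Following the error message's own hint, one thing to try before launching (the 512 MB cap is an arbitrary starting value to experiment with, not a figure from the riffusion docs):

```shell
# Cap the allocator's largest split so fragmented VRAM can still serve
# big allocations (e.g. the wide-spectrogram generation above).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
# ...then launch riffusion from this same shell.
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

If it still OOMs, reducing the image width or batch size is the more reliable fix.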
Nico-Flor#2315: Hi everyone, I have been struggling with this same problem for a few days now
Nico-Flor#2315: I have just now made it work, a way to generate longer clips from interpolation locally
Aurora~#0001: howd you do it
Nico-Flor#2315: https://colab.research.google.com/drive/1eOt4Ybd4MxRbJca5vlBekLDHl-f0SJ1v?usp=sharing
Nico-Flor#2315: That's the code, right now it's working
Nico-Flor#2315: the variable "num_interpolation_steps" controls how long the final clip will be
TemporAlyx#1181: I've done it by doing out painting and then tiled img2img passes, but it I'd very finnicky as to whether it will be cohesive or not
Nico-Flor#2315: each step is about 5 seconds long, so a value of 12 would be around 60 seconds
Aurora~#0001: ive never used collab
Aurora~#0001: how do i run it locally
Nico-Flor#2315: just copy the code to wherever you're running python
naklecha#6317: Ooo ill try it out
naklecha#6317: What is the best way to write this prompts?
ClayhillJammy#0563: Man dis thing sucks at piano |
TemporAlyx#1181: I've gotten good piano riffs out of it
taste#0960: Hi all! Having a great time with riffusion!
A question- how can we convert our own music to the same looking image for the spectrogram?
MatrixMoney#5643: Hello guys how do I start to use Riffusion, is there a webapp ?
ClayhillJammy#0563: Yes, Look up Riffusion and it should be the first result
Spooki#1689: hi.. ive been working on pretty much the same idea to generate sounds with stable diffusion and i just discovered its been done
Spooki#1689: great work
Spooki#1689: I do have a couple of specific questions (in order to finnish my project) if anyone knows about this stuff and is willing to help
evangeesman#8969: How beefy does the processor to be on google collab and can you get higher quality than the one that’s on the website?
evangeesman#8969: In terms of the bitrate
COMEHU#2094: today i dreamed about riffusion and now i have the need of finetuning it
Aurora~#0001: can i use this with auto1111 somehow
JaM35#3587: I had a dream about it last night as well
JaM35#3587: I wonder if there is a way to increase the fidelity to be that of a cleanly produced wav. It sounds slightly artifactish, I think to do this the right way would require some kind of mastering capability or refinement. Especially for things like classical and techno. Not interested in things with vocals.. yet
rob 🦕#9139: has anyone tried training on splice loops? it seems like the perfect training material as they have text tags/descriptions, they're royalty free, and theres thousands of them |
TruBlu#6206: Im not seeing a channel for troubleshooting, but I am struggling with the hard transitions between images. Im using Riffusion in Automatic1111 via <https://github.com/enlyth/sd-webui-riffusion> + <https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel>. I have the 13.6gb model. The regular generation of spectrograms works as expected; it basically follows the prompts and produces unique images. But, when using the Prompt Travel, suddenly it only produces identical images. I have a working Riffusion webapp as well, the sound is great, but I cannot record the live session. I am wondering if anyone here has screenshots of working settings or anything else that might help solve the hard transition issue, I would be very grateful. I just cannot find any docs to mine for answers. 😮
TruBlu#6206: I have been wowed by music from images and have been amazed to almost be up and working the first day. Its just so new that I cannot find much to build upon. 😄
TruBlu#6206: Riffusion in Automatic1111 via <https://github.com/enlyth/sd-webui-riffusion> + <https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel> + <https://huggingface.co/riffusion/riffusion-model-v1>
Edenoide#0166: I'm installing the prompt-travel this afternoon and we'll see if I get the same issue. When using the app locally with my poor GPU it generates a lot of repeated loops, but I think it's because when it can't generate new loops in less than 5 seconds it repeats the previous one as many times as needed
Edenoide#0166: eventually it generated new fresh ones
Edenoide#0166: But I don't think you are dealing with the same issue
Edenoide#0166: LOL the prompt travel example is a bit disturbing
Edenoide#0166: @TruBlu maybe try the 4,27GB model https://huggingface.co/ckpt/riffusion-model-v1/tree/main
TruBlu#6206: Ill have to try it, but the 3090 should take all comers.
Edenoide#0166: ok!
Spooki#1689: hey guys im interested if anyone knows how was riffusion trained to output spectrograms? How much of the original model was retrained.. Was it done with hypernetworks, textual inversion?
Spooki#1689: im trying to do it myself for a school project and im wondering what the best and easiest might be
Spooki#1689: i would really appreciate any help
Aurora~#0001: I have been doing this, I mean specifically the mentioned script as I don't want to use multiple prompts
TruBlu#6206: "This is the v1.5 stable diffusion model with no modifications" --Riffusion |
TruBlu#6206: Youve gotta be doing better than I am...
Aurora~#0001: wdym
TruBlu#6206: I get clone images the second that I try to get away from the hard transitions problem.
TruBlu#6206: I mean, I am making EDM... it can be whatever. But, not with the jarring transitions every 5 seconds without the Prompt Traveling
TruBlu#6206: I burned 4,000 images before I realized Auto1111 sounded different than the Riffusion webapp
TruBlu#6206: https://tenor.com/view/pussy-financial-brr-crypto-pussy-meme-coin-gif-24512871
Aurora~#0001: i haven't gotten the inference server to work locally
TruBlu#6206: The double command prompts is a trek into the weeds and I have an IT background...
TruBlu#6206: First thing I would recommend is starting from scratch. Clear out everything python and start again. I botched it the first time when the window appeared to be hung. It does not actually crash, one part just takes forever.
TruBlu#6206: It is mostly just tedious copy-pasting. ***The guide is good, though; totes works.***
TruBlu#6206: *I **really** wanna add a category and four additional text channels to expand the troubleshooting facilities here, but I have no roles in this new place.*
TruBlu#6206: Category: "Tech"
Channels: "Webapp" "Automatic1111" "Troubleshooting" "Documentation"
TruBlu#6206: The discord needs a tech discussion area and a designated troubleshooting workflow to process new signups... 😅
TruBlu#6206: Thats wild. Good looking out. |
Edenoide#0166: Check my guide: https://www.reddit.com/r/riffusion/comments/zrubc9/installation_guide_for_riffusion_app_inference/
Edenoide#0166: (It's for Windows)
Aurora~#0001: yea thats what i followed
Edenoide#0166: maybe something changed with the new version
Aurora~#0001: hold on lemme show what went wrong again
Edenoide#0166: okk
Aurora~#0001: https://cdn.discordapp.com/attachments/1053081177772261386/1059843359885238292/image.png,https://cdn.discordapp.com/attachments/1053081177772261386/1059843360329826364/image.png
Aurora~#0001: also where do i put the riffusion model
Edenoide#0166: I'm not a programmer and didn't have any notion of python previously, but let me check
Edenoide#0166: did you start first the server on one anaconda window and then the app on a second one? (keeping the first window open)
Aurora~#0001: this is with just the server running
Edenoide#0166: I think I had a similar error before doing this: https://cdn.discordapp.com/attachments/1053081177772261386/1059844237723697242/image.png
Aurora~#0001: this is with the app running https://cdn.discordapp.com/attachments/1053081177772261386/1059844328673001532/image.png
Aurora~#0001: the server itself doesnt work for whatever reason
Edenoide#0166: it's important to name the files just like that https://cdn.discordapp.com/attachments/1053081177772261386/1059844568469737493/image.png |
Edenoide#0166: *the file
Aurora~#0001: i did do that yeah
Aurora~#0001: the issue is the server itself doesnt work https://cdn.discordapp.com/attachments/1053081177772261386/1059844703232720896/image.png
Aurora~#0001: ill try installing all the requirements again ig
Edenoide#0166: good luck then! I know they've changed the code. I've installed it in their previous version
Edenoide#0166: so in the v.0.3.0 they changed the name of the inference folder: https://cdn.discordapp.com/attachments/1053081177772261386/1059847415437459556/image.png
Edenoide#0166: If it's not generating a riffusion-inference folder but a riffusion folder
Aurora~#0001: yeah i replaced everything with that
Edenoide#0166: ok
Edenoide#0166: so you installed riffusion-app outside of riffusion-inference folder
Aurora~#0001: https://cdn.discordapp.com/attachments/1053081177772261386/1059848207414345818/image.png
Edenoide#0166: the hierarchy should be (at least in the old version):
RIFFUSION
├RIFFUSION-APP
├RIFFUSION-INFERENCE |
Aurora~#0001: ya
Edenoide#0166: perfect
Edenoide#0166: try to change the name 'riffusion' to 'riffusion-inference', maybe it's a mess but who knows
Kama#1898: how do i get my settings to stick? (the seed image and denoising).
seems to very rarely sometimes work, but usually it just stays at whatever it was set to
Nubsy#6528: Has anyone figured out a decent way to clean up the sound in post? Some kind of effect or filter to get rid of that mp3-ish sounds? I know this won't be universal lol, but I can't figure out a way to start
Nubsy#6528: mostly some way to clean up the high end
Robin🐦#8003: @hayk ^ 3 messages up
nullerror#1387: nubsy i was able to put together a chain together a while back using some vsts send me a dm and ill give u the list
nullerror#1387: it de-mp3s and tries to recreate the high end
hayk#0058: Sorry what are you pointing to?
Robin🐦#8003: seems like it's been removed, was some fake discord nitro spam link
hayk#0058: ah yeah I just deleted
Robin🐦#8003: thanks 😄
hayk#0058: Hey folks I made a couple of new channels - #❓︱help and #🧱︱training , so take things there to clean up this channel a bit |
tugen#7971: run it through some AI mastering? https://www.landr.com/en/online-audio-mastering/
Nubsy#6528: Oh I'll try that! I feel like it's less mastering I need and more like... upscaling?
Nubsy#6528: I'm also looking for stuff I can do locally
tugen#7971: yeah , what happens if you run the spectrogram through say.. a HD Upscaler?
Nubsy#6528: I think it'd need to be trained on spectrograms to not suck
Nubsy#6528: I tried the mastering thing, and it definitely kept that MP3-ish sound
Nubsy#6528: oh crap I didn't see this! Sorry!
Nubsy#6528: has anyone figured out how to shift a generated piece left or right a bit? I get some real banger loops that are just missing the first downbeat and have extra stuff on the right and it kills me. Using Automatic1111. I was thinking editing and inpainting maybe???
Nubsy#6528: had pretty good results shifting to the right in paint, and inpainting the area with full denoising strength, but it'd be great to hear what other people do
hayk#0058: you can generally just use an image editor. on the web I use https://www.photopea.com/ for example
Nubsy#6528: Thanks! I mean to actually get that missing beat generated though. Inpainting works pretty well but it's not a silver bullet haha
evangeesman#8969: Anyone fool around with telling SD to generate a spectrogram of a sound, then feed that into a spectrogram to audio converter?
Nubsy#6528: correct me if I'm wrong, but I think that's literally what this server is about lol
Lobo, King of Rats#1357: Damn i thought this was a competitive tetris server :PepeHands:
Nubsy#6528: if there are any python nerds here that can write plugins for Automatic1111's SD gui, I've figured out a manual process that should be easy to automate that gives pretty good results |
Nubsy#6528: when I need a little more of a generated song, I open it in paint, drag it left or right, depending on whether I want more before it starts or after it finishes, and then do inpainting with the same prompt, but masking out the white space and setting the denoising strength to 1. It seems to work very well even if I move it by 50%.
I think if someone managed to automate this process we could get some really great output. It works similarly to the actual site, but it doesn't rely on "og_beat" or whatever.
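A sketch of the image-prep half of that process with Pillow: it produces the shifted canvas and the matching inpaint mask, leaving the actual inpainting call to whatever UI/API you use (the function name and the 50% shift are illustrative, not from any riffusion tool):

```python
from PIL import Image

def shift_for_inpaint(spec: Image.Image, shift_px: int) -> tuple[Image.Image, Image.Image]:
    """Shift a spectrogram left by shift_px (to generate more 'after' material)
    and build the matching inpaint mask: white = regenerate, black = keep."""
    w, h = spec.size
    shifted = Image.new("RGB", (w, h), "white")
    shifted.paste(spec.crop((shift_px, 0, w, h)), (0, 0))  # keep the tail on the left
    mask = Image.new("L", (w, h), 0)
    mask.paste(255, (w - shift_px, 0, w, h))               # regenerate the blank right edge
    return shifted, mask

img = Image.new("RGB", (512, 512), "black")  # stand-in for a generated spectrogram
shifted, mask = shift_for_inpaint(img, 256)  # 50% overlap, as suggested above
```

Feeding `shifted` plus `mask` to img2img inpainting at denoising strength 1 with the same prompt is the manual workflow described above; stitching the successive canvases side by side gives the extra-wide final image.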
Nubsy#6528: I'd do it myself but I'm unbelievably stupid
Nubsy#6528: I think the resulting images would also have to be stitched together into one extra-wide image, but I think that wouldn't be hard
TemporAlyx#1181: in Automatic1111s gui, you can use the SDupscale at a scale of 1.0 to do a tiled img2img
TemporAlyx#1181: so you can use outpainting and then SDupscale to get decent results
Nubsy#6528: that would only work if the beat magically loops
Nubsy#6528: like if it divides equally into 5 second slices
Nubsy#6528: I think that's the theory behind them using the premade spectrograms like og_beat to do long looping songs on their GUI
TemporAlyx#1181: it can work with a high overlap for the outpaint, will get something that is close, and then the tiled img to img smooths out the transition
TemporAlyx#1181: heavily depends on the prompt though, as if a prompt can give wildly different results, its really hard to get a proper continuation
Nubsy#6528: if you try what I said by hand, the results are pretty damn amazing compared to that
TemporAlyx#1181: yea but I'm laaazy lol
Nubsy#6528: that's what I'm saying though - if you leave like 50% in, it seems to continue pretty much flawlessly
Nubsy#6528: it won't get a whole song structure for sure |
Nubsy#6528: oh preach brother, that's why I'm hoping someone writes a script lol
TemporAlyx#1181: I do think that an automated script that detects the bpm and adjusts the outpaint / tiled img2img would work wonders
Nubsy#6528: see you don't even need that though
Nubsy#6528: just shift it left, cut it in half, and inpaint the second half
Nubsy#6528: it just works, doesn't need to know about the bpm or anything
TemporAlyx#1181: well outpainting at 256p with a 256p overlap is basically doing just that
TemporAlyx#1181: I think the larger problem is that the model wasn't trained with inpainting/outpainting in mind
TemporAlyx#1181: my best results have been with a model that mixes 5% of dreamlikediffusion into riffusion, which seems to have made it much easier to prompt for
Nubsy#6528: that might work, I haven't tried it yet, but I feel like that might do things differently than the inpainting which seems to work exactly like the 512px canvas does
TemporAlyx#1181: oh it for sure does things differently, and i have no idea why
Nubsy#6528: you owe it to yourself to try what I'm saying lol, it just works
TemporAlyx#1181: what you're saying about manually extending the canvas and inpainting works better than outpainting using all the same settings, and as far as I can tell it should be the same
TemporAlyx#1181: which is aggravating
Nubsy#6528: I definitely don't extend the canvas
Nubsy#6528: just move the contents left or right depending on where I want the new information |
Nubsy#6528: because in the end it works waaaay better if it's still working on 512
TemporAlyx#1181: right, its still 512 x 512, just using half of the first input
TemporAlyx#1181: although I have had some limited success doing tiled img2img, where I do a pass with ~0.20-0.35 denoising at 512x512, and then a pass at ~0.05-0.1 denoising at 512x2048, and it seemed to help with overall cohesion
TemporAlyx#1181: but it's super prompt dependent: if you can run a large batch of the initial prompt and get similar results within the same batch, getting cohesive results when extending it works much better
hayk#0058: Can you elaborate on this? Are the images still spectrograms? I find it surprising
TemporAlyx#1181: yeah I had the idea from another finetuned model that was difficult to prompt
basically adding in 5% of a different model won't change the outputs that much, but seems to do more for how it 'reads' a prompt,
TemporAlyx#1181: haven't done any tests with higher %s, might do that now
Nubsy#6528: Ok you all defeated me I've made an example just to show what I mean
Nubsy#6528: here was my original https://cdn.discordapp.com/attachments/1053081177772261386/1060348679837466634/05667-3532982410-electro_funk.png
Nubsy#6528: like so many other generations it starts on the up beat for some reason which sucks
Nubsy#6528: I shifted it and left the starting area white https://cdn.discordapp.com/attachments/1053081177772261386/1060348793033338960/SHIFTED.png
Nubsy#6528: and then I inpainted that white area
Nubsy#6528: The following wav is the original audio, followed by my first four generations hastily slapped on there using only my eyeballs (a script would be able to do it automatically)
I did 0 cherry picking here just to demonstrate how well it works out of the box |
Nubsy#6528: https://cdn.discordapp.com/attachments/1053081177772261386/1060349154129363014/example.wav
Nubsy#6528: (there's a little phasing effect just because I literally just stacked them on top of each other where they overlap)
Nubsy#6528: and I think it works waaaay better than the online example despite not being based off of any input music like og_beat which means it's a lot more flexible
Nubsy#6528: I think this is useful not only because we can fantasize that it'll sorta kinda be able to make whole songs, but because it so far hasn't failed me when I hear something cool and want to turn it into a loop. It's actually usable
Nubsy#6528: almost every time I heard something exciting before and wanted to loop it, it was either missing something critical at the start (almost always the dang the downbeat) or at the end
Nubsy#6528: this is doubly true for slower music
Nubsy#6528: anyway I hope this helps someone because it turns this incredible toy into something resembling an incredible tool (for me at least)
Ninozioz#5426: Hi. I saw riffusion checkpoint on civitai, but it weighs 10 gb less than the checkpoint on huggingface. Is it something like pruned version? Or Is there something wrong with it?
https://civitai.com/models/1619/riffusion-txt2audio
TemporAlyx#1181: I've got a mixed model saved from riffusion at float16 thats only 5.4 gb, works fine
Ninozioz#5426: Thanks
TemporAlyx#1181: at 25% mix it still works, but its starting to seem a bit off in some cases
TemporAlyx#1181: even 45% still is giving spectrogram like results, but its starting to not be music
Robin🐦#8003: Does using the "repeating" mode from SD make stuff loop seamlessly?
Robin🐦#8003: Like, you can generate wrapping images in SD |
evangeesman#8969: But I want to generate singular sounds, not loops of multiple instruments
TruBlu#6206: You think inpainting something this would solve the transitions between images? https://cdn.discordapp.com/attachments/1053081177772261386/1060566020940640306/05667-3532982410-electro_funk.png
TruBlu#6206: Enabling long stretches of audio without the problem every five seconds.
Nubsy#6528: the problem with this is that you'd have to make sure that it fits with the bpm
Nubsy#6528: since you'd have to make sure it's ok to loop every 5 seconds
Nubsy#6528: with the method I'm suggesting, any bpm works because where the beats are in the image can shift over time
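The BPM constraint on the fixed-slice approach is simple arithmetic. Assuming a clip of roughly 5.12 seconds (512 px at about 100 px/s; these are approximate figures, not necessarily riffusion's exact ones), a tempo loops cleanly only when a whole number of beats fits the window:

```python
def beats_in_clip(bpm, clip_seconds=5.12):
    """Number of beats that fall inside one generated clip."""
    return bpm / 60.0 * clip_seconds

def loops_cleanly(bpm, clip_seconds=5.12, tol=0.01):
    """True if a whole number of beats fits the clip (seamless loop)."""
    beats = beats_in_clip(bpm, clip_seconds)
    return abs(beats - round(beats)) < tol

# 117.1875 BPM puts exactly 10 beats in a 5.12 s window; 120 BPM does not
```

The shift-and-inpaint method sidesteps this check entirely because beat positions are free to drift across successive windows.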
TruBlu#6206: I 100% do not understand any of this well enough.
Silenc3r#6429: hey guys, is riffusion broken? it keeps looping on the same prompt?
TruBlu#6206: Unfortunately, that is a known issue. We have not received any updates beyond that. People are talking in #❓︱help
TruBlu#6206: Honestly the best workaround that I have seen yet is to run the webapp version and use a browser plugin to record the sound from that tab.
PeterGanunis#3634: THIS FASCINATES ME
PeterGanunis#3634: Has anyone else tried this?
PeterGanunis#3634: How does this affect behavior?
Silenc3r#6429: it was fine yesterday, but today it seems broken. thanks for the info
Silenc3r#6429: it would be interesting to know the timing when changing from one prompt to the other
TruBlu#6206: There was this timeline info I found as well. https://cdn.discordapp.com/attachments/1053081177772261386/1060639328344211586/image_1.png
TruBlu#6206: So little info around 😮
TruBlu#6206: Maybe some updates to Riffusion in a couple of months 🔥
mataz#8375: try just "birds", it's the best
TemporAlyx#1181: My last two riffs posted in #🤘︱share-riffs are done with a 20% mixed model
PeterGanunis#3634: Oh wow!! Which method did you use to get a longer track here? Were these the img2img method with 2 passes you mentioned? It’s very cohesive!
TemporAlyx#1181: Out painting and then tiled img2img
TemporAlyx#1181: Using auto1111 webui
Initial generation, I like either Euler a at ~70 steps, cfg around 10.5, or DPM adaptive, max steps, and cfg from 10.5 - 17.5
I usually run several batches trying to refine the initial prompt and find a starting riff
Then I take it to img2img
Using poormansoutpainting I extend one direction by 80-256 px, with an original mask of 32-96 depending
I run that several times to get something that fits, and then I continue until I'm done, sometimes ever so slightly editing the prompt to add or remove some elements
Then I use the SDupscale with scale 1 and overlap 768 with a tilesize of 512x1024, denoise around 0.05, then a second pass at default 512 x 512 size, overlap 256 with denoising around 0.15
hayk#0058: The traffic to the website has gone up, and our GPUs are behind. We don't have the funds to supply a large volume of GPUs, so I suggest running the app yourself |
ALVARO#2720: how would copyright work if someone else use's the same prompt? i guess first to release gets it? lol
TruBlu#6206: lol I am. We spoke of this before. The Prompt Travel.
TemporAlyx#1181: Ai created works under current law are not eligible for copyright on their own I believe
TemporAlyx#1181: Also many open source models state that any generations are public domain
ALVARO#2720: but who says AI made it? like how could you proof that
ALVARO#2720: i could easily recreate something AI has come up with
TemporAlyx#1181: Modifying or recreating it would be its own thing
TemporAlyx#1181: I mean how does anyone prove they made anything, same problems of attribution and authenticity, just at a different scale
ALVARO#2720: well usually u prove that by releasing it and copyright
ALVARO#2720: same how it works with splice samples for example
TemporAlyx#1181: Also should note, that not only would someone need the same prompt, but the same generation settings and seed, in order to create the same riff
ALVARO#2720: true, but since riffusion's website is using the same denoise strength and seed it's easier to get the same result right?
ALVARO#2720: oh its a different seed everytime i see
Silenc3r#6429: thanks, yes I thought that was a case
Silenc3r#6429: is an 8GB gpu good enough, and how easy is this to run locally?
PeterGanunis#3634: I’ve been rocking with only 6gb vram on automatic1111
PeterGanunis#3634: I had no problems at all setting up anything
sperzieb00n#3903: pff... glad its still relatively quiet here when it comes to people caring about who owns what art and style... just like in image SD now, this place gonna be wild once txt2music starts to make things that scare warner, disney, and the likes
sperzieb00n#3903: they really don't like it when an unexpected new path to a better public domain suddenly exists, and would probably nuke fair use out of existence if it weren't for a few cooler heads in history who prevented them from doing so completely
Silenc3r#6429: thanks, I suppose I need to research on how to run it locally.
Silenc3r#6429: can someone not make it run using google servers or something
ALVARO#2720: you already can i think: https://colab.research.google.com/drive/1FhH3HlN8Ps_Pr9OR6Qcfbfz7utDvICl0?usp=sharing
Silenc3r#6429: nice, thanks
Fucius#3059: Is there such a thing as audio upsamplers? similar to what you can do for images. GfpGAN etc.
Fucius#3059: Would it be possible to train an autoencoder to upscale either the spectogram image outputs from Riffusion?
Fucius#3059: Or could you train an autoencoder to upscale actual audio directly? (if that makes sense)
PeterGanunis#3634: There are seemingly amazing results from a paper called Audio Super Resolution if I recall correctly?
PeterGanunis#3634: But I don’t think they released any training data or a trained model to the best of my knowledge
PeterGanunis#3634: I’m sure there are good tools for VSTs that can do this??
PeterGanunis#3634: I don’t know. I’ve been on the hunt for a while |
Fucius#3059: Hmmm I'll check it out.
Fucius#3059: Like do you think you could take normal song samples, condense them into 512x512 spectograms.. Read those spectograms into code and train an autoencoder against the original sample?
joachim#4676: I don’t know anything about how much gpu power it takes to get this tech working right. So I’m asking: why is it low resolution in kHz etc? Would it take a lot to make it CD quality at least?
rob 🦕#9139: This is a pretty good idea. I think you would want to degrade the spectrogram with a variety of image artefacts that emulate image diffusion such as noise or blur, convert back to audio and then train some kind of audio to audio encoder on the unaltered audio
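A toy sketch of the training-pair generation proposed here: degrade a clean spectrogram with the kind of noise diffusion introduces and keep the (degraded, clean) pair (pure Python for illustration; a real pipeline would also blur, round-trip through audio, and work on arrays rather than lists):

```python
import random

def degrade(spec, noise_sigma=8.0, seed=0):
    """Add clipped Gaussian pixel noise to a 2D grayscale spectrogram,
    emulating diffusion artifacts. Returns a new image in [0, 255]."""
    rng = random.Random(seed)
    return [
        [min(255.0, max(0.0, px + rng.gauss(0, noise_sigma))) for px in row]
        for row in spec
    ]

clean = [[128.0] * 16 for _ in range(8)]
noisy = degrade(clean)
pair = (noisy, clean)  # one supervised training example for a restorer
```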
hayk#0058: I agree, it's a great idea!
hayk#0058: This test does a round trip conversion and plots the FFT, should be a good starting point to explore: https://github.com/riffusion/riffusion/blob/main/test/spectrogram_image_converter_test.py
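Much of what the round trip loses is amplitude quantization: spectrogram images of this kind store magnitude on a dB scale squeezed into 8-bit pixels. A sketch of that encode/decode math (the dynamic-range constants here are illustrative, not riffusion's actual values):

```python
import math

MIN_DB, MAX_DB = -80.0, 0.0  # illustrative dynamic range

def amp_to_pixel(amp):
    """Magnitude -> dB -> 8-bit pixel value."""
    db = 20.0 * math.log10(max(amp, 1e-9))
    db = min(MAX_DB, max(MIN_DB, db))
    return round((db - MIN_DB) / (MAX_DB - MIN_DB) * 255)

def pixel_to_amp(px):
    """8-bit pixel -> dB -> magnitude (inverse, up to quantization)."""
    db = px / 255 * (MAX_DB - MIN_DB) + MIN_DB
    return 10.0 ** (db / 20.0)

# The round trip is close but not exact: amplitude is crushed to 256 dB steps
```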
DJNastyNige 🇬🇾#8294: I see this guy generating awesome beats, but when I copy/paste his prompts my beats sound nothing like his. Wondering why
DJNastyNige 🇬🇾#8294: https://cdn.discordapp.com/attachments/1053081177772261386/1060978498681982976/v12300gd0001ceq8ccrc77u296m9o70g.MP4
DJNastyNige 🇬🇾#8294: prompt was: 156 BPM dark drill beat with fast ethnic guitar
tugen#7971: hes just making that stuff up for clicks lol
DJNastyNige 🇬🇾#8294: got ya
Silenc3r#6429: yea he got me too
hulla#5846: hello i think my virtual beaver needs to learn how to use Riffusion https://youtube.com/shorts/otQQdghiD
prescience#0001: does `Denoising Strength` in the Playground app aim to achieve the same thing as "prompt weight" on normal `image2image` stable-diffusion?
JoshieGibby#3531: I don't know the technical terms but what ever the material it mashes together it will make it more like said material than the base spectrograms provided. I like the clarity raising it makes. |
prescience#0001: has anyone started working on a stems extractor `page` in the Streamlit app?
I'm about to dig in to add it to my workflow, but if a better one is likely to be merged soon I might wait
hayk#0058: Just merged a great one! https://github.com/riffusion/riffusion/blob/main/riffusion/streamlit/pages/split_audio.py https://cdn.discordapp.com/attachments/1053081177772261386/1061397999789551707/image.png
hayk#0058: using https://arxiv.org/abs/2211.08553
𝔐𝔯.𝔊𝔬𝔬𝔰𝔢#5135: Sorry, can i download fragments over 5 seconds long?
prescience#0001: oh damn that's super nice!
I was going to try find a midi converter to add as a final step actually - but might check if the built in Ableton midi extractor is sufficient once the above utility executes and all the parts are separated as wavs.
If the Ableton converter performs ok, then in retrospect the wavs as you've done it here are actually more useful than the final midis, as they have dual utility (in that they can be used in-place or extracted, rather than needing to build it back up).
ryan_helsing#7769: Spotify basic pitch is really good. Uses onsets and frames.
prescience#0001: https://github.com/spotify/basic-pitch
nice, already in py as well which would make it convenient to add in locally (rather than handle in Ableton)
Jack Julian#8888: might be interesting for here.. https://twitter.com/AiBreakfast/status/1611756436379557888
Jack Julian#8888: https://twitter.com/DrJimFan/status/1611397525541617665
ALVARO#2720: usually thats just for speech, would be nice to see some cloning of artist's singing lol
teseting#9616: What about midi + gpt 2? |
https://www.gwern.net/GPT-2-music#generating-midi-with-10k30k-context-windows
teseting#9616: Or gpt neo
ALVARO#2720: midi always seems to be so tricky, i'd rather record in melodyne and re-draw the midi
teseting#9616: I guess that's just musenet though
Mandapoe#6608: Are the requirements to run this the same as normal stable diffusion?
ARTOMIA#8987: yea, its just a stable diffusion model
Mandapoe#6608: damn, thanks, i was hoping it was maybe less intensive and I'd be able to run it, at least there's a colab
ARTOMIA#8987: So I just managed to make riffusion and the extension for Auto1111, I wanna toy with this thing and I have to ask what is the best workflow to get a decent 5 seconds?
ARTOMIA#8987: I might have even more questions if I'm not too annoying with my newbie knowledge
bread browser#3870: MuseNet uses MuseNet encoders not ABC.
wedgeewoo#7793: has anyone figure out why the mp3's generated come out like this? could it be a padding value on the beginning and end. I think i saw some code for padding but im not sure if its causing this https://cdn.discordapp.com/attachments/1053081177772261386/1062536955205799986/image.png
wedgeewoo#7793: this might be the wrong server for this as it relates to the extension rather than the official repo
Kevin [RTX 3060]#1512: Link for anyone interested in trying out my Riffusion GUI. It's not fully done yet and needs a Riffusion checkpoint loaded in A1111. It makes it very easy to work with segments and to make transitions more or less seamless: https://kfs334.itch.io/prompt-crafter-organizer
DustyATX#7147: Wow you are awesome!!
Avant_Garde_1917#8538: I really hope someone trains a new MIDI gpt2 or neo. Musenet was the best at composing original classical music. Nothing else comes close in my experience. but they took musenet down a few weeks ago |
Avant_Garde_1917#8538: the trick to midi is to tokenize the different values taking into account not just note but velocity, instrument and timing, represented as integers, and then to just feed it in and it learns to see the 5 integers as a single token
Avant_Garde_1917#8538: and that tokenization and conversion is already available and integrated into musetree and what not
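That tokenization can be sketched as packing each event's fields into a single integer token (the field widths below are illustrative choices of mine, not MuseNet's or musetree's actual scheme):

```python
# Pack (pitch, velocity, instrument, time_step) into one integer token.
# Widths: 128 pitches, 128 velocities, 16 instruments, 256 time steps.
PITCHES, VELOCITIES, INSTRUMENTS, STEPS = 128, 128, 16, 256

def encode(pitch, velocity, instrument, step):
    assert 0 <= pitch < PITCHES and 0 <= velocity < VELOCITIES
    assert 0 <= instrument < INSTRUMENTS and 0 <= step < STEPS
    return ((pitch * VELOCITIES + velocity) * INSTRUMENTS + instrument) * STEPS + step

def decode(token):
    token, step = divmod(token, STEPS)
    token, instrument = divmod(token, INSTRUMENTS)
    pitch, velocity = divmod(token, VELOCITIES)
    return pitch, velocity, instrument, step
```

A sequence of such tokens is then ordinary input for a GPT-style model; decoding the sampled tokens recovers playable MIDI events.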
PeterGanunis#3634: I’m curious if anyone has any recommendations for a VST pipeline to spruce up audio?
PeterGanunis#3634: People have mentioned methods to remove the CD effect and restore frequencies?
DustyATX#7147: Not sure what you mean by the CD effect, but if you're talking about the frequency limiting (above 20kHz or something similar, I think), I believe that will be addressed in an upcoming release.
DustyATX#7147: The short answer to your question AFAIK is there is no good way to restore missing frequencies with plugins or a FX chain and I've tried a lot of different software & techniques. There are other models that attempt to do this but from my experimentation they don't work well.. I do believe once Riffusion is updated, there is a good possibility to use outpainting to create a better solution
MintOctopus#8867: Peter, I'm not a developer, but as an audio guy I have been using this to get better (not awesome) sounding results.
- Bring my Riffusion loops into Ableton Live 11 on a new track 1
- Use Unfilter (https://www.zynaptiq.com/unfilter/) on that track and play with the settings A LOT, especially the master EQ
- If this isn't enough, I apply a three-band compressor/EQ to the track, again playing with it A LOT
- To get some stereo width, apply a tiny stereo delay via a return track, make it pretty wide sounding, and apply to track1 via return level
Hope that is somewhat useful for you.
MintOctopus#8867: @PeterGanunis - You can hear an example of mine using some of those techniques (not the width enhancement, however) here: https://discord.com/channels/1053034685590143047/1053036924467679322/1062118223694282823 |
MintOctopus#8867: This is Unfilter's UI, check the aqua/teal curve to see the output after the settings are applied to the pink original signal. https://cdn.discordapp.com/attachments/1053081177772261386/1062752717648445532/unfilter.png
MintOctopus#8867: The EQ alone, even as extreme as shown here, won't get me the higher mid and high freq where I want it, I found, but Unfilter does this quite well.
MintOctopus#8867: (If anyone wants to hear the uncompressed WAV file that I can't upload due to size, feel free to check it out here: https://www.dropbox.com/s/c1oa4gf2utymvwq/AI%20-%20Fun%20Funkai.wav?dl=0)
MintOctopus#8867: OH worth mentioning! I grab 4-5 min long streams of my prompt in Riffusion, but then I go in and use only the 'good' signal from each, this is a manual process not programmatic! The washed out parts I was unable to make sound good.
MintOctopus#8867: https://cdn.discordapp.com/attachments/1053081177772261386/1062755733868912690/cycles.png
PeterGanunis#3634: Hey wow this is very interesting stuff!!! I’ll have to start playing with unfilter. Thanks!
bread browser#3870: I like it.
Nubsy#6528: have there been any new model releases for riffusion since the first one? And where's the best place to keep an eye out for that?
teseting#9616: it's hard to make models. you have to have a lot of resources and most of the time the results aren't good.
i did dreambooth for anime openings and it didn't turn out that well
Nada Existe#4889: Hi! First of all, thanks! i'm a newbie and don't understand this "needs a Riffusion checkpoint loaded in A1111"; if anyone has a video or website where this is explained it would be very useful
liz.salazar#6508: https://huggingface.co/spaces/fffiloni/img-to-music
hayk#0058: 🎸 **Riffusion Playground Colab**
@everyone Happy to share an "official" colab for running the Riffusion Playground app: https://colab.research.google.com/drive/1_aIoS9DYietlVfDWKNcWjH_OFZCRGNFF |
This will let you run text to audio, audio to audio, stem splitting, interpolation, and more on GPU for free. Ping here or open an issue in the riffusion repo if you run into problems or have improvements.
To get more updates on Riffusion, throw in your email here: http://eepurl.com/ih9ZPz
Meatfucker#1381: Awesome
Meatfucker#1381: You should make an announcements/news channel for this sort of thing so it doesnt get lost in scrollback
AshleyDR#9711: Thank you very much❤️
hayk#0058: I just made #📣︱announcements
hayk#0058: Pinned a message.
SegmentationFault#8268: I don't know if anyone shares the same opinion, but
I honestly think the best way to generate complete songs using AI is to first create a massive dataset of MIDI songs, labeling each one with accurate tags, then have a training algorithm generate a model that "understands" sequential and instrumental patterns by cross-referencing the tags
Then using a song generator that applies a natural sounding synthesizer over the AI-generated MIDI
teseting#9616: i am testing this rn
https://github.com/alepintaudi/music-generation
ClayhillJammy#0563: It's not letting me upload audio for the audio to audio thingy |
shoeg#9037: Hello everyone. I used this in a project a couple years ago. https://github.com/bearpelican/musicautobot
hulla#5846: hello i have used " mubert " right now
hulla#5846: https://youtu.be/IpeDxWexzXI
hulla#5846: hello i have do another one https://youtu.be/jjLh-JCR8Nw
evangeesman#8969: I get an error on the final output of the 3 merged clips every time, anyone else have the same problem?
hayk#0058: Could you open an issue with specifically what you're running and screenshot(s)? https://github.com/riffusion/riffusion/issues
Socialnetwooky#4142: hey there. There is no way to interpolate automatically between the prompts in a json batch in the playground, right? The batch only generates single files?
MentalPistol#9423: sounds fire! Its like Lo fi pop. Ill dm to chat bout how u can shoot me over that unfilter.
evangeesman#8969: Will do, question, does this have the ability for me to train it? Like upload my whole discography of music I’ve made and tag it with my name, and tell it to make songs in my style?
evangeesman#8969: https://cdn.discordapp.com/attachments/1053081177772261386/1063870150530584626/IMG_2033.png
evangeesman#8969: Also get errors when using split
Socialnetwooky#4142: pretty sure the playground has a memory leak. using it over an extended period of time results in CUDA out-of-memory errors
Broccaloo#0266: https://huggingface.co/spaces/riffusion/riffusion-playground is not working at all for me. just says "Please wait" and then "Connection timed out".
AVTV64#2335: same
jumango#9376: same |
hulla#5846: hello i come back with another one https://youtu.be/6u75HuBPAys
Norgus#2992: ok I've just been playing about with the riffusion extension version in auto1111 with the relevant model, haven't quite settled on a way of producing coherent clip merging
Norgus#2992: outpainting was somewhat promising, but still swings quite violently about
Norgus#2992: enabling 'tiling' kind of broke it in an interesting way https://cdn.discordapp.com/attachments/1053081177772261386/1064208221151174808/20230115154058-1195287790-disco_clown-1.mp3
Norgus#2992: latent horizontal mirroring kinda interesting, less broken https://cdn.discordapp.com/attachments/1053081177772261386/1064208702036525127/20230115154437-3250208500.0-disco_clown-2.mp3
PeterGanunis#3634: This is a very curious idea!
PeterGanunis#3634: How did you set this up in auto1111?
Norgus#2992: makes it loop ok
Norgus#2992: so you just need the model and the extension
Norgus#2992: and the extension is in the built-in list you can install
PeterGanunis#3634: Right
PeterGanunis#3634: The extension for horizontal mirror?
Norgus#2992: I think it's built in, let me check
PeterGanunis#3634: Aha!
Norgus#2992: oh I think it was built in for a while then got separated into an extension |
Norgus#2992: there's the settings I used on that clip anway https://cdn.discordapp.com/attachments/1053081177772261386/1064209470051328020/image.png
Norgus#2992: I think the 'alternate steps' sounded better than 'blend average'
Norgus#2992: I reckon this might be a nice way to make an underlying spectrograph to img2img on top of?
Norgus#2992: since it could repeat seamlessly
Norgus#2992: ok I tried the idea out - made a horizontally mirrored loop, pasted it back to back in a wide image, then did img2img at 0.7 denoising in 512 chunks along it https://cdn.discordapp.com/attachments/1053081177772261386/1064211948935327774/2023-01-15_15_56_05_openOutpaint_image.wav
Norgus#2992: still a bit crazy tbh
Norgus#2992: have much luck?
Norgus#2992: I've noticed that almost every image generated has this annoying white break at the beginning https://cdn.discordapp.com/attachments/1053081177772261386/1064220151622422568/00618-1655764986-information_backing_music_track.png
Norgus#2992: but latent mirror does seem to solve it approaching 0.5 step fraction
Norgus#2992: and the sort of thing I get with same prompt throughout, simple outpainting https://cdn.discordapp.com/attachments/1053081177772261386/1064223626716516402/romantic_ballad.wav
evangeesman#8969: If I’m remembering correctly this was a fresh instance with just 2 or 3 generations
AVTV64#2335: audio to audio is really fun to do, someone should fix this lol
kyemvy#0433: frfr
kyemvy#0433: fun asf
kyemvy#0433: still giving me the error too |
AVTV64#2335: I just wanna make the bootleg style transfer thing
vananaBanana#0866: Is anyone training a model specifically for normal everyday sounds?
vananaBanana#0866: If so, I'd love to help on such a project
Leon -#4657: https://flavioschneider.notion.site/flavioschneider/Audio-Generation-with-Diffusion-c4f29f39048d4f03a23da13078a44cdb
vananaBanana#0866: thats so cool thanks for sharing
vananaBanana#0866: seems like this is cutting edge technology
vananaBanana#0866: amazing!
obelisk#1740: Ayo! noob here! whats the current state of art in ai music gen, pretty please?🙏
obelisk#1740: WOOOOOAHHH 00
it sounds hellish, i like the style
obelisk#1740: the dubstep also like, good? no way
obelisk#1740: actually good
obelisk#1740: the sub movement and drops
obelisk#1740: oh my god
obelisk#1740: UPSAMPLER??? GUYS YOU ARE THOUSAND YEARS AHEAD |
matteo101man#6162: The upsampler looks amazing
matteo101man#6162: Don’t particularly understand how to use it just yet but I think combining that with riffusion would yield interesting results
matteo101man#6162: if someone could explain how you'd go about running it from this to someone who doesn't understand python very well https://cdn.discordapp.com/attachments/1053081177772261386/1065420477062987826/image.png
teseting#9616: can you even install xformers on windows?
teseting#9616: oh wait the problem i had was due to incompatibility with the cuda version
obelisk#1740: can someone please explain why this thing produces such good outputs?
https://huggingface.co/spaces/fffiloni/img-to-music
COMEHU#2094: mubert is not ai generated, it just mixes samples from their library
obelisk#1740: ooh, oooh, that explains
COMEHU#2094: i was also confused cause they said it was made by an ai
obelisk#1740: oh, maybe you could tell me please, whats the current state of ai music? lets say riffusion
obelisk#1740: trying to dive in but there seems to be too little info
COMEHU#2094: The best atm is Jukebox AI and its really good but its painfully slow and old, im still using it tho
COMEHU#2094: hear this generation: https://youtu.be/WZ1Kwnia72o?t=368
COMEHU#2094: 6:08 |
COMEHU#2094: i still love it
COMEHU#2094: ooh the guitar solo at 8:46 is also fire
obelisk#1740: wooo
obelisk#1740: it then turned into some indian song xd
COMEHU#2094: i always liked the creativity of Jukebox
obelisk#1740: hm, but lets say i have 1 (or many) particular artist, who's style i want to replicate. What steps should i take? What the approach for this task
COMEHU#2094: the audio quality doesnt bother me since i just want ideas
COMEHU#2094: you can finetune it but i dont have experience doing that
COMEHU#2094: maybe in the Jukebox server someone can help you
obelisk#1740: oh, i see, thanks! finetuning jukebox or riffusion?
COMEHU#2094: jukebox, the 1b model
obelisk#1740: okay, thank you so much! what are your predictions of riffusion tho?
obelisk#1740: so they basically assemble tracks from different pieces? can you use your own sample banks???
COMEHU#2094: riffusion is early stage but i can definitely see potential, it feels very Jukebox-ish
COMEHU#2094: i lost interest in mubert tbh so i dont know if you can, but i dont think so |
obelisk#1740: ehhh paywall as expected. I like their outputs tho
obelisk#1740: this one in particular https://cdn.discordapp.com/attachments/1053081177772261386/1065447435960336534/2ec551fc9d3f4313b1fa8a455b6f00f2.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1065447436316835923/image.png
obelisk#1740: it would be veeeerry based if someone made colab for this:
https://flavioschneider.notion.site/flavioschneider/Audio-Generation-with-Diffusion-c4f29f39048d4f03a23da13078a44cdb
matteo101man#6162: I just want to know how to run it as a python script
obelisk#1740: ok i think i got it
obelisk#1740: this is Unconditional gen
obelisk#1740: so it just makes random chune out of noise
obelisk#1740: below there is text to music gen
obelisk#1740: where you can put some text and run it
obelisk#1740: the `sample` variable at the very end should be inspected. This code probably misses the step where you download your finished sample. That part you probably need to implement yourself
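For that missing save step: assuming `sample` can be flattened to a 1-D sequence of floats in [-1, 1] (e.g. `sample.squeeze().cpu().tolist()` for a torch tensor, which is an assumption about that notebook's output, as is the 44.1 kHz rate), the Python standard library can write it out as a 16-bit WAV:

```python
import struct
import wave

def write_wav(path, samples, sample_rate=44100):
    """Write floats in [-1, 1] to a mono 16-bit PCM WAV file."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)          # 2 bytes = 16-bit PCM
        wf.setframerate(sample_rate)
        pcm = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        wf.writeframes(pcm)

# e.g. write_wav("out.wav", sample_list), then grab out.wav from Colab's file pane
```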
matteo101man#6162: thanks but i mean like
matteo101man#6162: if i were to do python upsampler.py
matteo101man#6162: in cmd
obelisk#1740: in cmd? id rather try on colab first |
teseting#9616: i could show you but unfortunately i can't run it because my 4090 is incompatible
matteo101man#6162: That’s tough
mataz#8375: it would be cool to make a thing like riffusion for emotional states (Valence-Arousal-Dominance) and call it "effusion"
mataz#8375: https://www.researchgate.net/figure/The-VAD-Valence-Arousal-Dominance-model-spanned-across-the-six-basic-emotions_fig1_338118399
vananaBanana#0866: have u got this to work? I am new to machine learning and throwing myself in at the deep end
vananaBanana#0866: the `sample` object is a tensor, no idea how to go from a tensor to a piece of audio
vananaBanana#0866: wtf is a tensor and why do they flow
vananaBanana#0866: What is an autoencoder? Is that the function that converts output data to a waveform>?
vananaBanana#0866: Or is the `sample `just a slice of a spectrogram?
vananaBanana#0866: I gotta say, these tensors got me pretty... tensed-up 😂🖐️
matteo101man#6162: Can you maybe explain? I don’t see any arguments so idk how I’d use it in a simple capacity
vananaBanana#0866: hey i got a simple example working
vananaBanana#0866: BUT i havent loaded the diffusion model yet, he removed it a few minutes ago from huggingface
tugen#7971: I think in the future the upsampler could be incorporated into the steamit UI in the playground
matteo101man#6162: Aw |
matteo101man#6162: I don’t use the playground but that’s dope
Draconiator#6375: For the most part it gets genres right holy crap. Still need work on Trance though.
Draconiator#6375: This is getting dangerously close to my vision. Wanna hear what a farting dragon sounds like?
Leon -#4657: yeah i feel like lucille ball sometimes with all this stuff coming out
Leon -#4657: https://tenor.com/view/lucille-ball-i-love-lucy-chocolate-factory-gif-10904610
\lim#8550: hey friends
\lim#8550: anyone here ever messed with autovc before?
\lim#8550: or more specifically, simple-autovc
sagelywizard#9962: Where can I find out more detailed information about how riffusion was fine-tuned on spectrograms? riffusion.com/about is a bit sparse on details.
obelisk#1740: ok, so who made this notion?
https://flavioschneider.notion.site/flavioschneider/Audio-Generation-with-Diffusion-c4f29f39048d4f03a23da13078a44cdb
obelisk#1740: was it archisound or someone else?
obelisk#1740: this stuff works and we have access to their github. Whats stopping us from playing with it rn?
obelisk#1740: @vananaBanana did you encounter any problems?
vananaBanana#0866: The fact that he deleted the diffusion model off of hugging face in front of my eyes yesterday |
vananaBanana#0866: No, I got a working diffusion example, BUT I do not have the diffusion model now
vananaBanana#0866: Ofcourse the autoencoder and the vocoder are downloadable and do work
obelisk#1740: ok so he deleted this model
obelisk#1740: how about we directly ask him whats going on? (extremely politely)
obelisk#1740: is he going to publish updated model?
vananaBanana#0866: I hope so
obelisk#1740: imho if he decided to paywall this project, github would alo be gone
vananaBanana#0866: It sucks cause I was on the page and about to download it manually
vananaBanana#0866: Yeah
vananaBanana#0866: Well, I am already considering training a model purely on regular audio
vananaBanana#0866: Because I want to recreate LSD sound effects and let the computer hallucinate
obelisk#1740: i think that his model would do that perfectly
vananaBanana#0866: Exactly
obelisk#1740: ok then i will write him on github or somehow and ask about this
vananaBanana#0866: I am completely new to machine learning though, yesterday I touched it for the first time |
vananaBanana#0866: It's mind blowing
vananaBanana#0866: And also mind blowing how flawless they managed to make the libraries and APIs
vananaBanana#0866: You don't even need to download the model urself u can just type `AutoModel.from_pretrained('identifier')` and itll automatically download it
vananaBanana#0866: It's crazy
obelisk#1740: wait, are you sure you need to download any models at all?
obelisk#1740: there's not a single word about downloading models on their github page
obelisk#1740: ---------
obelisk#1740: okay so i checked their twitter
obelisk#1740: https://colab.research.google.com/gist/flavioschneider/d1f67b07ffcbf6fd09fdd27515ba3701/audio-diffusion-pytorch-v0-2.ipynb
obelisk#1740: here is their colab
obelisk#1740: this stuff, in fact, is pretty old, colab was published on sep 30
obelisk#1740: oh, i see now, the model is gone
obelisk#1740: ehhh they closed twitter dms because of i assume death threats by ai haters
obelisk#1740: https://cdn.discordapp.com/attachments/1053081177772261386/1066033339661828166/image.png
obelisk#1740: @vananaBanana they'll update, gg |
vananaBanana#0866: lool thank god
vananaBanana#0866: I was literally trying to set it up when it got removed
vananaBanana#0866: I showed someone the model like "check this out"
vananaBanana#0866: and then the page 404'd like I was bullshitting them xD
obelisk#1740: keeping a finger on the pulse
MintOctopus#8867: HA that is great, like when you take the car into the mechanic and it just WILL NOT make 'the noise'.
tugen#7971: 🙏 streamlit page for upsampling of audio
Jackzilla991#2795: is there any times where the servers aren't over loaded? I want to keep trying it but I can't seem to get even a full riff out without it looping cause of the server
Draconiator#6375: Why does the basic riff always sound the same?
pll_llq#8920: hey 👋 do you folks have community/developer calls? it would be really nice to talk to like-minded people
obelisk#1740: well it may be cuz seed is locked
vananaBanana#0866: https://cdn.discordapp.com/attachments/1053081177772261386/1066730842560413807/message.txt
vananaBanana#0866: I asked ChatGPT to categorize all sounds
vananaBanana#0866: Any thoughts?
vananaBanana#0866: (im gonna compile 1TB of sounds to use for training) |
obelisk#1740: oh thats quite interesting
tugen#7971: i think colab was updated for audio-diffusion-pytorch? I see commits from only 2 days ago... the upsampler looks so wild!
tugen#7971: Just caved and bought colab Pro LOL
tugen#7971: oh nvm, unauthorized to download model from hugging face while running colab https://cdn.discordapp.com/attachments/1053081177772261386/1066793964247732224/Screen_Shot_2023-01-22_at_1.57.49_PM.png
bread browser#3870: What are you going to do with it?
tugen#7971: I hit quota too often and sometimes have multiple notebooks open (free version only lets you do 1 runtime simultaneously)
obelisk#1740: i've burned $10 in 3 days
obelisk#1740: the problem is there is no model on hugging face (yet)
tugen#7971: I like the default 25 step option on audio-to-audio, it doesn't seem to make much difference pumping it from 25->50. saves on gpu cost
matteo101man#6162: so he released some cool stuff then took it away
naytheyounay#2314: What?
bread browser#3870: Buying the hardware is cheaper in the long run.
bread browser#3870: I own a m40 gpu.
Invisible Mending Music#8879: is the website app completely down right now? TWO MINUTES LATER: NOW IT SEEMS TO BE WORKING !? FIGURES... two minutes later, again it's not working - not even getting the message about the servers being backed up...
kyemvy#0433: bit of a long shot but are there any ai upsampling tools available atm
RawrXD#3892: how come the riffs in this discord sound much better than the website?
norm#1888: Someone in the share-riffs channel mentioned using Ableton Live, so I'm guessing people process the website audio in some way, but I don't know for sure
kyemvy#0433: probs post processing
kyemvy#0433: thats what i do anyways
kyemvy#0433: like a / b it
\lim#8550: sort of but afaik the pretrained models available for them are speech centric, and were trained on dataset pairs of a ground truth recording and a naive downsampling of that recording. So, I don't think the pretrained models would give very good results on outputs from riffusion. Another reason is because outputs from riffusion have griffin-lim artifacts, which come from the step where it iteratively approximates a waveform from a spectrogram. This is what causes the subtle pulsing / heartbeat sound.
I don't think it would be too difficult to set up a colab to train something like nuwave2 on dataset pairs of ground truth music recordings and output wavs from using riffusion's cli tools to convert them to spectrogram images and back to audio.
Then I think you would get decent results and maybe even help to eliminate the griffin lim artifacts
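The Griffin-Lim step \lim mentions is easy to sketch. Below is a toy numpy/scipy version of the iterative phase-recovery loop — not riffusion's actual implementation, and the parameter choices are illustrative — but it is exactly this repeated magnitude-projection that produces the subtle pulsing artifacts described above:

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_iter=32, nperseg=512, seed=0):
    """Iteratively estimate a waveform whose STFT magnitude matches `mag`."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))  # random initial phase
    for _ in range(n_iter):
        _, x = istft(mag * phase)              # invert with current phase guess
        _, _, spec = stft(x, nperseg=nperseg)  # re-analyze the result
        spec = spec[:, : mag.shape[1]]         # keep frame counts aligned
        if spec.shape[1] < mag.shape[1]:
            spec = np.pad(spec, ((0, 0), (0, mag.shape[1] - spec.shape[1])))
        phase = np.exp(1j * np.angle(spec))    # keep phase, discard magnitude
    _, x = istft(mag * phase)
    return x

# Round-trip a 440 Hz sine wave through its magnitude-only spectrogram.
fs = 22050
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
_, _, spec = stft(tone, nperseg=512)
recon = griffin_lim(np.abs(spec))
```

Training pairs for a model like nuwave2 would then be (original audio, `recon`) — the artifacts are baked into the second element.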
aw3#9919: Hello everybody! I love what the project is doing and would love to contribute. I have my own ideas on how I can use the technology to generate awesome music 🙂
Zerglingz#6579: Hi everyone, been playing around with the website. It's a lot of fun. Is there a version for Colab? (Where I could perhaps, turn up the quality of the spectrograms / Audio itself?)
norm#1888: Check the Announcements channel for a colab, but it sounds like people use other programs for quality improvement
Jay#0152: why are the huggingface files gone??
Jay#0152: playground is unusable atm
김건우_NLP#6123: if i want to find some specific mood, what prompt should i use?
MintOctopus#8867: I have found that prompts that might connect to artists to be more effective than any other for getting to a mood. For example, “latin jazz” may or may not (mostly not) get you anything remotely like latin jazz, while “Pablo Sanchez" gets the vibe consistently.
ARTOMIA#8987: yo! new guy here, I wanna learn how to toy with this model but i cant figure out how Latent Space works, anyone that has patience with idiots to ELI5? I'm using auto1111
! Kami#0420: Automatic1111 has this in extensions, in case anyone is using that. One-click install, super dope!
obelisk#1740: i've seen this issue
obelisk#1740: https://github.com/archinetai/audio-diffusion-pytorch/issues/33
obelisk#1740: so their model can be further trained?
obelisk#1740: oh i didn't read they have trainer!
obelisk#1740: https://github.com/archinetai/audio-diffusion-pytorch-trainer
obelisk#1740: wait so i can create my own model for this ?
obelisk#1740: also it seems that there are some audio-diffusion models left on hugging face
obelisk#1740: https://huggingface.co/models?sort=downloads&search=audio-diffusion
obelisk#1740: https://huggingface.co/spaces/teticio/audio-diffusion
obelisk#1740: ehhh, their youtube training example doesnt work for me for some reason
Twigg#8481: riffusion seems to be able to respond to text input to alter the sound, however I'm curious to using a target audio file to alter the sound. Imagine trying to segue from one loop to another; what might the interpolated version of these two sound like?
Any thoughts? |
obelisk#1740: isnt there audio2audio?
Twigg#8481: The audio 2 audio is "text prompt to text prompt"
obelisk#1740: oh
Twigg#8481: or
"audio to text prompt"
Really nothing that interpolates between source /target
obelisk#1740: what about conventional morphing?
Twigg#8481: Point me somewhere?
mutant0#0319: Is there a way to get this working on RunPod?
Invisible Mending Music#8879: Hi everyone, I tried to send a message but I think it was too long. Just testing to see if this one goes through, then I will send the other one in pieces...
Invisible Mending Music#8879: Hi everyone, not sure I’m posting this in the right place – it’s a combination of basic questions and a wish list. I stopped writing code in about 1980!? (anybody remember WATFOR?)
I love Riffusion and thank the creators for it. Since it’s only their side hustle, maybe some of my queries/suggestions could be picked up by other techies lurking on this Discord.
Invisible Mending Music#8879: • Settings – Seed Image – not sure what the options mean (OG Beat, Agile, etc.)
• De-noising – not sure exactly what the number refers to – If 0.95 produces “what is tempo?” then shouldn’t “on beat” be something like 0.05? Why is it so high, at 0.75? What happens between 0.00 and 0.75? Maybe this de-noising option could be anything between 0.00 and 1.00? |
Invisible Mending Music#8879: • Mel-scale – okay, I sort of understand what this is, but WHY is it used for the frequency bins in the spectrograms? A standard equal temperament tuning would make it more feasible to accompany the Riffusion output playing a “real” musical instrument.
• In that case, it would be helpful if Riffusion could output information for an accompanist as to what tonal centre (if any) the music was currently in
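For reference, the mel scale being asked about is a perceptual pitch scale, not equal temperament. A common (HTK-style) form of the mapping is sketched below — riffusion's exact bin layout may differ, so treat the constants as illustrative:

```python
import math

def hz_to_mel(f_hz):
    # HTK-style mel scale: m = 2595 * log10(1 + f / 700)
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    # Inverse of the mapping above
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Bins evenly spaced in mel bunch up at low frequencies and spread out at
# high frequencies, so they don't line up with equal-temperament semitones.
edges_hz = [mel_to_hz(m) for m in (0, 500, 1000, 1500, 2000)]
```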
Invisible Mending Music#8879: • Servers – I note that the servers are often “behind, scaling up..” Could some big company looking to garner positive PR maybe donate some server resources so we don’t get these problems?
• Spectrograms – could it be set up so that sometimes the user, instead of providing a text prompt, could upload their own spectrogram? i.e. based on a 5-6 second audio clip. (Or, the spectrogram-producing function could be built in, so that the user could upload the clip itself…)
Invisible Mending Music#8879: • I suppose this leads to a possibility that would actually be quite detached from the current interactive web app: where the user can upload a number of spectrograms and specify how long it should take to transform (via img2img) from spectrogram #1 to spectrogram #2 etc..
• Some sort of interface so that a user could play along with and/or effect the evolving spectrogram/music
Invisible Mending Music#8879: That's it for now. Thanks very much to anyone for any feedback on any of this...
matteo101man#6162: this is super late but how do i use the arguments for this
matteo101man#6162: wait i think i figured it out
matteo101man#6162: yep
matteo101man#6162: ahh training with dreambooth on post 2022 automatic1111 is so different
db0798#7460: That's great then!
matteo101man#6162: just wish they hadn't changed dreambooth so much
db0798#7460: When were the changes that you are talking about made?
matteo101man#6162: sometime around december i believe |
db0798#7460: I hadn't updated the Dreambooth plugin installation on my computer since December, updated it now and saw that it's all different indeed
Avant_Garde_1917#8538: people should migrate away from MSE loss based training and migrate towards CLIP multi modal trainings. dreambooth afaik is all MSE on pixel vs predicted pixel and doesnt really teach the model the fundamentals
Avant_Garde_1917#8538: like. even encoding the target image and generated image with clip.encode_image, using only the two image encodings without using the text part during the cliploss, is more rich in semantic information to the unets than just taking the VAE latents and getting the mean squared error. thats why memorization of the training data is so common because its only ever learning the answer instead of learning how to arrive at the answer by understanding the meaning
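A toy sketch of the image-to-image CLIP loss being described — `encode_image` here is just a fixed random projection standing in for clip.encode_image, so this only shows the shape of the loss (cosine distance between embeddings instead of pixel/latent MSE), not real semantics:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 3 * 32 * 32))  # stand-in for a real image encoder

def encode_image(img):
    # Project to an embedding and normalize to unit length
    v = W @ img.reshape(-1)
    return v / np.linalg.norm(v)

def clip_image_loss(target_img, generated_img):
    # 1 - cosine similarity between the two unit-norm embeddings
    return 1.0 - float(encode_image(target_img) @ encode_image(generated_img))

target = rng.standard_normal((3, 32, 32))
```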
matteo101man#6162: Yeah my results don’t really seem the same
JL#1976: News from Google: https://techcrunch.com/2023/01/27/google-created-an-ai-that-can-generate-music-from-text-descriptions-but-wont-release-it/amp/?guccounter=1
TecnoWorld#3509: hello, is there a GUI to run riffusion locally, as it is for stable diffusion? The first part should be image creation, so not a real issue (it's enough to have the right model to generate spectrograms), but what after that?
matteo101man#6162: Dude what
matteo101man#6162: Can’t use automatic1111?
TecnoWorld#3509: I mean how to convert spectrograms to music?
TecnoWorld#3509: let's say I was able to generate a spectrogram (I'm downloading the ckpt model right now). Then what do I do with it? Which tool can convert it to audio (not the web tool, I mean a local tool)
matteo101man#6162: there is an extension
matteo101man#6162: https://github.com/enlyth/sd-webui-riffusion
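Under the hood, the image-to-audio step first undoes the image's dB encoding, then runs Griffin-Lim. A hedged sketch of the first half — the 80 dB range and the 8-bit mapping are assumptions here; the riffusion repo's spectrogram code is the authoritative version:

```python
import numpy as np

def image_to_magnitude(img_u8, max_range_db=80.0):
    """Map 8-bit spectrogram pixels back to linear STFT magnitudes."""
    # Pixel 255 -> 0 dB, pixel 0 -> -80 dB
    db = (img_u8.astype(np.float32) / 255.0) * max_range_db - max_range_db
    return 10.0 ** (db / 20.0)  # white (255) -> 1.0, black (0) -> 1e-4

white = image_to_magnitude(np.full((4, 4), 255, dtype=np.uint8))
black = image_to_magnitude(np.zeros((4, 4), dtype=np.uint8))
```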
matteo101man#6162: oddly enough I can't get this new version of the extension to work for me
matteo101man#6162: something wrong with pedalboard it seems
audry#7777: i got it to work |
audry#7777: you just have to redownload pedalboard i think
audry#7777: or import it in the webui python script
audry#7777: i just run the webui on colab though so it might be different if youre doing it locally
matteo101man#6162: locally is a no
audry#7777: wdym
audry#7777: im pretty sure you can run the webui locally
Kevin [RTX 3060]#1512: I made a GUI for Riffusion in my app. Runs with A1111 and 100% locally. It uses a timeline and spectrograms can be blended together using gradient masks, inpainting, play 5/10/15 second clips, merge all clips etc. If you try it and would like to see something added just let me know. I added a space to the link after // so that an image wouldn't take half this page. You don't need patch-02 and patch-03 for Riffusion but I highly recommend applying patch-10. https:// kfs334.itch.io/prompt-crafter-organizer
joachim#4676: Does this mean you can actually hear the results in your app?
Kevin [RTX 3060]#1512: For the clips, yes. The full track is exported as .wav
joachim#4676: I’m not very technical. Is there an easy step by step to get this to work?
Kevin [RTX 3060]#1512: If you're already running A1111 with --api it should be as easy as just unzipping the 7z files and running the app. The patch is applied by just replacing the files in the main dir.
Haycoat#4808: Hey does anyone have any idea why I can't get the music2music notebook to work?
Haycoat#4808: @Jay
COMEHU#2094: offtopic but check this, new text2audio model: https://noise2music.github.io/ https://cdn.discordapp.com/attachments/1053081177772261386/1068975443765637190/wavegen_89.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1068975444143116419/wavegen_7.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1068975444679991337/wavegen_87.wav
Twigg#8481: Can you link me to what you're referencing? Can't find music2music references online. |
Haycoat#4808: https://colab.research.google.com/github/thx-pw/riffusion-music2music-colab/blob/main/riffusion_music2music.ipynb#scrollTo=9SE80Grls13Z
Twigg#8481: I got as far as step 3.
```
RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=11.7 and torchvision has CUDA Version=11.6. Please reinstall the torchvision that matches your PyTorch install.
```
TecnoWorld#3509: this is incredibly interesting! Thanks. Am I obliged to use it with A1111? I use other methods from stable diffusion, and I'd like to continue using them. What I'm asking is: may I use other tools to create spectrograms and then have your sw to convert them into music? Thanks again
Kevin [RTX 3060]#1512: Yep. There's a function to import existing spectrograms into the timeline. edit: most functions won't work without A1111 though
TecnoWorld#3509: Great. Which files should I download to run it standalone?
TecnoWorld#3509: https://cdn.discordapp.com/attachments/1053081177772261386/1069051353277681704/image.png
Kevin [RTX 3060]#1512: I wouldn't use it without A1111 since it will severely limit the features. PCO-A1111...7z and Patch-10...7z are the ones you would need though.
TecnoWorld#3509: oh I see, the fact is A1111 is the generator I like the least for SD
TecnoWorld#3509: I know it's possibly the most used, but I prefer, for example, invokeAI
Kevin [RTX 3060]#1512: Same here but I never touch the gradio UI. I only use it as a backend. I'm planning to eventually move away from it and replace it.
TecnoWorld#3509: oh, so you mean having A1111 just to use your UI for riffusion? |
Kevin [RTX 3060]#1512: No, pretty much everything in the app runs with A1111.
TecnoWorld#3509: ok I need to try to understand I guess
[PRINCESS MISTY]#0003: heyy
[PRINCESS MISTY]#0003: how can i help i am a chatgpt prompt engineer
joachim#4676: cool but we can't make music with it ourselves?
joachim#4676: @seth can we do stereo with the colab version of Riffusion?
COMEHU#2094: Not yet, i hope it gets released so we can play with it
[PRINCESS MISTY]#0003: heyy
[PRINCESS MISTY]#0003: wht r u thoughts on rave.dj competitor
[PRINCESS MISTY]#0003: the first music bot
[PRINCESS MISTY]#0003: we we we
[PRINCESS MISTY]#0003: mcdonalds toys mario movie
[PRINCESS MISTY]#0003: chatgpt + riffusion + gradio. = ?
[PRINCESS MISTY]#0003: heheahhuerdfhfdhuh
arha#9740: oh, sheesh i thought gradio was a radio thing |
arha#9740: i'm mega interested on how riffusion works for the purpose of IDing rf and ham signals (beyond how awesomely cool the concept is)
amisane#9173: just checking out the riffusion colab - very cool
question I've got - what is the "Negative prompt" input's purpose/use in text-to-audio?
is it "what you don't want" - e.g. if I put "a heavy metal song" in prompt, could I then put "electric guitars" in negative prompt, to (hopefully) get a heavy metal song with no electric guitars?
or have I misunderstood? https://cdn.discordapp.com/attachments/1053081177772261386/1069346178996633710/image.png
amisane#9173: to get vocals, are people just using audio-to-audio, or is there some trick to getting text-to-audio to generate vocals (my attempts at this have been so far unsuccessful)
norm#1888: You're correct about the negative prompt. Not sure about vocals though. I recall being able to generate faint gibberish vocals. Most of the audio files people post here have been manually mixed and enhanced, so maybe that's the key to getting better sounding vocals, but I don't know from personal experience
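Mechanically, the negative prompt replaces the empty/unconditional embedding in classifier-free guidance, so the sampler is pushed toward the prompt and away from the negative one. A toy numpy sketch — `eps_pos` and `eps_neg` stand in for the UNet's two noise predictions, and the names are illustrative:

```python
import numpy as np

def guided_noise(eps_pos, eps_neg, guidance_scale=7.5):
    # Classifier-free guidance: extrapolate from the negative-prompt
    # prediction toward the positive-prompt prediction.
    return eps_neg + guidance_scale * (eps_pos - eps_neg)

eps_pos = np.ones((4, 4))
eps_neg = np.zeros((4, 4))
guided = guided_noise(eps_pos, eps_neg)
```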
amisane#9173: Cool, thanks for the confirmation on the negative prompt.
I often like to turn normal vocals into something like “faint gibberish” in the music I make anyway, so that sounds alright to me!
dholt24#6009: I'm interested in making indie games as a hobby, and I got to wondering if ai might be the right way to make the soundtracks for my work, since the game will likely be free so I don't have the budget to hire a composer. This popped on my radar as a potential good one to look into. Does what I described sound like something this tool would be useful for?
dholt24#6009: If not that's fine, this just seemed like it was worth looking into
JL#1976: Do you need to do SFX, ambient or something else? |
Steak#5270: Hello, just stumbled upon riffusion, is there any guide to use the model from a local machine?
Steak#5270: will the automatic1111 gui work just fine?
matteo101man#6162: I don’t think it’s really there yet tbh, but it might give you an outline to follow so you could recreate something coherent if you’re good at that
matteo101man#6162: Then you wouldn’t have to really think about a unique arrangement, just copying what the ai spits out
[PRINCESS MISTY]#0003: hey guys
[PRINCESS MISTY]#0003: i just wrote a new snes emulator
[PRINCESS MISTY]#0003: i would like chatgpt prompters
[PRINCESS MISTY]#0003: i have the sdk the compiler i just dont know how to get a better ui and bios
dholt24#6009: I meant to make the music for it mostly.
dholt24#6009: Maybe, the problem is that I can't even properly understand music enough to do that. I can't even tell the difference between tracks that are supposed to be happy or sad, let alone figure out how to make or edit one myself
dholt24#6009: To be honest, I might just make the game with no music at all to save the trouble if I can't find a good tool to do it for me
MintOctopus#8867: Hey wanted to mention that this can be trained pretty easily...sitting down with a friend who has built up the understanding for an hour and going through tracks is surprisingly useful. I mean, I could tell you that in Western culture, major key songs tend to 'feel' happy, but it's so much better with the song playing and someone telling you how they are pulling the emotion out of the track.
dholt24#6009: That's a good point, I have tried to train myself to at least be able to tell for years but I never really got anywhere. Although it will probably take at least a year to develop the game enough that a soundtrack will even be that important, so maybe I'll just come back and check on this stuff then
dholt24#6009: Given how fast these ai models are improving, there's a decent chance it would be ready for what I need by then
MintOctopus#8867: Yea Riffusion may not be the right fit for you right now, but I strongly encourage you to keep pushing towards your goal. I don't know you or your capabilities, but being proactive and trying to learn like you are doing here is going to serve you very well. |
MintOctopus#8867: No disrespect intended if I'm coming off as patronizing.
dholt24#6009: No it's good
MintOctopus#8867: @dholt24 - speaking of music in videogames, this is my all-time favorite musical thing in any game ever, it's from Kentucky Route Zero: https://www.youtube.com/watch?v=ufAUonsYhVU
[PRINCESS MISTY]#0003: does anyone have any healthy way of coding without burn out
[PRINCESS MISTY]#0003: i have a addiction to coding in retro languages
[PRINCESS MISTY]#0003: is it time i learn rust
[PRINCESS MISTY]#0003: hahah she uses vb6
matteo101man#6162: Metal Gear Solid 3 - Snake Eater is pretty good in my opinion, sounds super dramatic and cinematic like it would play at the ending of a 90s action movie
nullerror#1387: https://www.youtube.com/watch?v=o09BSf9zP-0
nullerror#1387: https://www.youtube.com/watch?v=o09BSf9zP-0
hulla#5846: hello, still no channel for "self promo or random" or something similar? sorry https://youtu.be/Fq9A5qkz5Bg
norm#1888: I tried using auto1111 since it's been mentioned so many times. The Riffusion plugin only gives me the option to convert images in a folder to audio. Can I access the functionality from the playground in auto1111, and if so, how?
JL#1976: Haha "beaver factory", now that's random and creative
JL#1976: https://www.youtube.com/watch?v=1LV1K69885E Riffusion featured
V-Perm#1436: what is the most learned sample/instrument on the ai |