dep#0002: You could have just redirected the welcome to a new channel yk JL#1976: Let's keep all the discussions in this general chat, so #🙏︱welcome channel serves as a landing page and rules page if we get more users. ZestyLemonade#1012: **Welcome to #💬︱general** This is the start of the #💬︱general channel. JL#1976: You learn something new every day 🙂 『komorebi』#3903: where's the language list for riffusion 『komorebi』#3903: i want to know what genres it supports 『komorebi』#3903: found it nvm db0798#7460: Where is it then? 『komorebi』#3903: https://huggingface.co/riffusion/riffusion-model-v1/blob/main/tokenizer/vocab.json 『komorebi』#3903: here db0798#7460: Thanks! 『komorebi』#3903: it has a lot of non-musical words though doesn't exactly help 『komorebi』#3903: not too sure how it's supposed to work
db0798#7460: This is identical to the vocab.json of the Stable Diffusion 1.5 base model https://huggingface.co/runwayml/stable-diffusion-v1-5/raw/main/tokenizer/vocab.json db0798#7460: I think the genre names probably come from somewhere else dep#0002: @seth (sorry if too many pings) I am trying to make my own audio seed. Could the spectrogram_from_waveform function (from https://github.com/hmartiro/riffusion-inference/blob/6c99dba1c81b2126a2042712ab0c35d0668bd83c/riffusion/audio.py#L89) be used to transform a WAV tensor (I'm guessing from torchaudio.load) into a spectrogram object, and then do the reverse of spectrogram_from_image to basically have a custom seed? dep#0002: I also see the following comment:
```
"""
Compute a spectrogram magnitude array from a spectrogram image.

TODO(hayk): Add image_from_spectrogram and call this out as the reverse.
"""
```
I could try doing it alfredw#2036: Can we make it 10x better soon? dep#0002: I was looking into converting it to TensorRT with Volta but it seems it has 1 more layer 『komorebi』#3903: could we send some songs of different genres so that the ai can generate a wider array of genres? i have a decent amount of obscure genres in my playlist
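A rough sketch of the forward half dep is asking about (WAV → mel spectrogram → image), written with plain torchaudio rather than the repo's spectrogram_from_waveform. The n_fft / hop_length / n_mels values below are placeholders, not riffusion's actual settings, so they would need to be matched to audio.py for the model to accept the result as a seed; the image mapping mirrors the image_from_spectrogram snippet quoted just below.

```python
import numpy as np
import torch
import torchaudio
from PIL import Image

# torchaudio.load returns (waveform [channels, samples], sample_rate)
waveform, sr = torchaudio.load("seed.wav")
waveform = waveform.mean(dim=0)  # mix down to mono

# Mel spectrogram; these parameters are placeholders, NOT riffusion's defaults
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sr, n_fft=2048, hop_length=512, n_mels=512
)(waveform)

# Same mapping as image_from_spectrogram below: power curve, rescale, invert, flip
data = np.power(mel.numpy(), 0.25) * 255 / 50
data = 255 - np.clip(data, 0, 255)
Image.fromarray(data[::-1, :].astype(np.uint8)).convert("RGB").save("seed.png")
```

Note the image width depends on the clip length, so the input still has to be trimmed to roughly 5 seconds to end up 512 px wide.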
alfredw#2036: what's the training set? 『komorebi』#3903: vaporwave, chopped and screwed, free folk, experimental rock, art pop, etc? 『komorebi』#3903: don't think the ai knows those genres too well dep#0002: gptchat: ```python
def image_from_spectrogram(spectrogram: np.ndarray, max_volume: float = 50, power_for_image: float = 0.25) -> Image.Image:
    """
    Compute a spectrogram image from a spectrogram magnitude array.
    """
    # Reverse the power curve
    data = np.power(spectrogram, power_for_image)

    # Rescale to the range 0-255
    data = data * 255 / max_volume

    # Invert
    data = 255 - data

    # Flip Y
    data = data[::-1, :]

    # Convert to an 8-bit image; PIL can't build an image from an (H, W, 1) array,
    # so keep it 2-D and convert the greyscale result to RGB
    return Image.fromarray(data.astype(np.uint8)).convert("RGB")
```
dep#0002: anyways I will see what I can do dep#0002: this is basically audio2audio 『komorebi』#3903: could we help increase the dataset dep#0002: If they release the training code I will train it on the entirety of pandemic sound db0798#7460: I would also like to have some way of adding things to the dataset 『komorebi』#3903: i'd train it on my playlist + more albums that i somewhat like
Slynk#7009: omg I've been dying for something audio related to happen with all this AI hype. 『komorebi』#3903: so that way the ai can endlessly churn out music i like >:D my playlist would probably be too smoll for it though Thistle Cat#9883: Hi what's happened? Paulinux#8579: What I'm doing wrongly? I have URL like this: https://www.riffusion.com/?%20&prompt=folk&%20denoising=0.05&%20seedImageId=agile And sometimes this AI couldn't produce for me anything db0798#7460: Try the Colab (https://colab.research.google.com/drive/1FhH3HlN8Ps_Pr9OR6Qcfbfz7utDvICl0?usp=sharing) instead, perhaps? Colab seems to work consistently Paulinux#8579: OK, thanks 『komorebi』#3903: ooh wait i know what i'd do i gather all the alternative/avant-garde genres i can muster, select some albums from those genres (with the genre names + other descriptions with them) and then i'd put those in the ai dep#0002: overloaded dep#0002: If anyone wants I can host a mirror hayk#0058: We have this code and it's very simple, we just haven't added it to the inference repo dep#0002: Could it be possible for you to send it here or push it to the repo?
hayk#0058: Yeah if you open an issue on github we will aim to get to it soon! AmbientArtstyles#1406: Hey @ZestyLemonade, I'm writing an article about sound design (sfx for games/movies) and Riffusion, can I use your sentience.wav clip in it? AmbientArtstyles#1406: I so want to collaborate on training the algorithm with my personal sound libraries. 🎚️ 🎶 Thistle Cat#9883: Has the website been fixed? JL#1976: Works for me. Thistle Cat#9883: Nice! Thistle Cat#9883: I will have to check it again tonight Tekh#3634: How do I make a continuous stream of interconnected clips with the colab? Tekh#3634: also, is there any way to change tempo and such like on the webapp using the colab? dep#0002: thanks pls do so asap I cant wait dep#0002: in the meantime im getting things like these dep#0002: https://cdn.discordapp.com/attachments/1053081177772261386/1053101779992186941/FINAL.png Thistle Cat#9883: Anyone hearing a snapping noise when it goes to the next spectrogram? yokento#6970: Can you seed riffusion with your own clip of audio? April#5244: 14gb ckpt?
dep#0002: thats what I was trying Jack Julian#8888: Crazy stuff yall im a musician myself, and seeing this is both interesting and 'worrying'. Love how you thought this out and put it to works. April#5244: was hoping to gen using automatic1111's sd webui and perhaps finetune the model using dreambooth. but I feel like I'm in a bit over my head lol WereSloth#0312: you'll need to convert music to a spectrogram WereSloth#0312: and then, yes, you should be able to dreambooth db0798#7460: Is there a script for converting your own audio sample to the right kind of spectrogram already available anywhere? There seem to be functions that do this in the Riffusion codebase but I guess they don't work as a standalone script? dep#0002: the image_from_spectogram function is missing dep#0002: https://github.com/hmartiro/riffusion-inference/issues/9 dep#0002: supposedly they have it but they haven't added it yet dep#0002: Maybe tomorrow it could be ready lxe#0001: 👋 lxe#0001: Just wanted to stop by and say how awesome this thing is lxe#0001: Wonder if something like deforum for it is in the works. justinethanmathews#7521: this is very interesting. i am mostly in the "afraid of AI" camp, but music is a field I understand and I can see how interesting this is.
this might be a stupid question. but what was this trained on? April#5244: I'm also curious about the dataset tbh April#5244: also managed a small success: converting from wav file to spectrogram and back is working perfectly, and I have a working ckpt that can generate the spectrogram images. Next is to make a finetuning dataset and run it through dreambooth 🙂 dep#0002: Can you share your convertor? April#5244: https://pastebin.com/raw/0ALzwee4 April#5244: just a word of warning @dep I have no idea what I'm doing and this code was generated with the help of an ai and my own tinkering. might have some serious stuff wrong with it lol April#5244: I've only tested it on the generated 5-second wav files that are created from the sister script April#5244: also trying to re-input the generated pics doesn't work right so I have to manually save in paint and then it works for some reason lol April#5244: but from my testing it seems to work well enough April#5244: currently seeing if I can get it to work from mp3 and clip like the first 5 seconds or something dep#0002: time to train on the internet dep#0002: 🥂 April#5244: okay so I think I got it working with mp3 so I threw a whole dang song in there and it generated an image but it's much wider in resolution, and cropping it down just results in junk lol April#5244: might have to limit it to 5 seconds
April#5244: ```
def spectrogram_image_from_mp3(mp3_bytes: io.BytesIO, max_volume: float = 50, power_for_image: float = 0.25) -> Image.Image:
    """
    Generate a spectrogram image from an MP3 file.
    """
    # Load MP3 file into AudioSegment object
    audio = pydub.AudioSegment.from_mp3(mp3_bytes)

    # Convert to mono and set frame rate
    audio = audio.set_channels(1)
    audio = audio.set_frame_rate(44100)

    # Extract first 5 seconds of audio data
    audio = audio[:5000]

    # Convert to WAV and save as BytesIO object
    wav_bytes = io.BytesIO()
    audio.export(wav_bytes, format="wav")
    wav_bytes.seek(0)

    # Generate spectrogram image from WAV file
    return spectrogram_image_from_wav(wav_bytes, max_volume=max_volume, power_for_image=power_for_image)
```
```
# Open MP3 file
with open('music.mp3', 'rb') as f:
    mp3_bytes = io.BytesIO(f.read())

# Generate spectrogram image
image = spectrogram_image_from_mp3(mp3_bytes)

# Save image to file
image.save('restoredinput.png')
```
April#5244: add this function and those lines at the bottom for mp3 and cutting first 5 seconds, seems to work great a_robot_kicker#7014: any idea what could be happening here? ```
ERROR:server:Exception on /run_inference/ [POST]
Traceback (most recent call last):
  File "C:\Users\matth\miniconda3\envs\ldm\lib\site-packages\flask\app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Users\matth\miniconda3\envs\ldm\lib\site-packages\flask\app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Users\matth\miniconda3\envs\ldm\lib\site-packages\flask_cors\extension.py", line 165, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "C:\Users\matth\miniconda3\envs\ldm\lib\site-packages\flask\app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Users\matth\miniconda3\envs\ldm\lib\site-packages\flask\app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "C:\Users\matth\Documents\StableDiffusion\riffusion-inference\riffusion\server.py", line 147, in run_inference
    response = compute(inputs)
  File "C:\Users\matth\Documents\StableDiffusion\riffusion-inference\riffusion\server.py", line 177, in compute
    image = MODEL.riffuse(inputs, init_image=init_image, mask_image=mask_image)
  ... (lots of lines here)
    attn_output = torch.bmm(attn_probs, value_states)
RuntimeError: expected scalar type Half but found Float
```
a_robot_kicker#7014: ```
  File "C:\Users\matth\Documents\StableDiffusion\riffusion-inference\riffusion\prompt_weighting.py", line 229, in get_unweighted_text_embeddings
    text_embeddings = pipe.text_encoder(text_input)[0]
```
April#5244: this is the img2wav script I'm using https://cdn.discordapp.com/attachments/1053081177772261386/1053166079804981268/audio.py April#5244: I'm actually not using any other code lol April#5244: so idk why/how to fix any issues with the riffusion ui stuff dep#0002: What error did you get dep#0002: I didn't get any errors but it took longer because it was larger than 512x512 dep#0002: after resizing it it worked as normal dep#0002: however I can barely hear the seed dep#0002: I had to set denoising to 0.01 to actually remember the tempo dep#0002: Anyways dep#0002: I have an A100 so I will see if I can finetune it April#5244: I actually fixed the error by editing the img2wav script lol April#5244: one of the things had an extra parameter for whatever reason which was messing it up April#5244: https://cdn.discordapp.com/attachments/1053081177772261386/1053175675709837412/output.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1053175675986649158/clip.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1053175676334788689/outputspectro.png April#5244: example conversion
April#5244: "clip.wav" is the 5 second clip from the original mp3 that's used for conversion. the image is the converted spectrum from the mp3. and output.wav is the reconverted song from the image April#5244: scripts used https://cdn.discordapp.com/attachments/1053081177772261386/1053176008141963364/audio2spectro.py,https://cdn.discordapp.com/attachments/1053081177772261386/1053176008590770236/audio.py April#5244: notably the script doesn't clip the first 5 seconds, but rather the next 5 after that April#5244: since I wanted to avoid the sometimes slow intro that songs have lol April#5244: still need to do some more work on the scripts before I can have it auto-generate some dataset images properly @.@ April#5244: might actually need to fetch later in the songs lol 🤔 April#5244: I noticed it still distorts the sound a bit... April#5244: comparing: clip is the cropped audio, output.wav is the audio->img->audio convert https://cdn.discordapp.com/attachments/1053081177772261386/1053177485497475113/output.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1053177485862387712/clip.wav dep#0002: WoohH!!!!! dep#0002: I dont know how to thank you dep#0002: and its just been a day Lol dep#0002: @JL may I suggest some emojis April#5244: I'm sure someone smarter than me can figure out how to fix it lol JL#1976: Yes, let me know if you have any cool ideas for the channel dep#0002: Can I dm you the stickers and emojis
dep#0002: I prob will also make it into a bot dep#0002: (riffusion) dep#0002: although I've also heard that another dev is also working on one dep#0002: u know lopho from sail? dep#0002: the decentralized training server dep#0002: I might ask him tomorrow dep#0002: he knows a lot about this stuff dep#0002: he rewrote the bucketing code himself lol dep#0002: yet haru didnt merged it and he deleted it JL#1976: Yup dep#0002: hm.... I dont think we should mess with the channels.... dep#0002: but its worth the attempt April#5244: 🤷‍♀️ honestly most of this code is ai generated. I don't know anything about music lol. removing that line of code seems to break the conversion entirely April#5244: looking into the actual conversion a bit more it seems like the sample rate is getting changed which may be why the quality is decreasing 🤔 dep#0002: = ( https://cdn.discordapp.com/attachments/1053081177772261386/1053183348715032576/recompiled.mp3
dep#0002: og https://cdn.discordapp.com/attachments/1053081177772261386/1053183641699753984/invadercrop.wav dep#0002: fixed https://cdn.discordapp.com/attachments/1053081177772261386/1053184852154916894/recompiled.mp3 April#5244: got it pretty close https://cdn.discordapp.com/attachments/1053081177772261386/1053189308154122391/reconstructed.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1053189308531613707/clip.wav April#5244: there's some clipping though 🤔 April#5244: basically just change max volume to 80 on both scripts to get this result dep#0002: @April My image is on 512x501, is there anyway to fix that? dep#0002: or just resize in paint dep#0002: https://cdn.discordapp.com/attachments/1053081177772261386/1053191450768187392/agile.png April#5244: > audio = audio[:5119] April#5244: when you're doing converting April#5244: the size of the image is based on the length of the audio April#5244: clipping it to 5119 seems to work April#5244: I'm currently using this to get the middle of the song: > audio = audio[int(len(audio)/2):int(len(audio)/2)+5119] dep#0002: kay I will investigae more tomorrow
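For the 512x501 case above, an alternative to resizing in Paint is to pad or trim the audio to the full clip length before converting, so the spectrogram comes out exactly 512 px wide. A minimal sketch with pydub, using the 5119 ms figure from April's message (adjust to whatever your converter expects):

```python
import pydub

CLIP_MS = 5119  # clip length that yields a 512-px-wide spectrogram, per the messages above

audio = pydub.AudioSegment.from_file("input.wav")
audio = audio.set_channels(1).set_frame_rate(44100)

if len(audio) < CLIP_MS:
    # pad the tail with silence so the image width comes out right
    audio = audio + pydub.AudioSegment.silent(duration=CLIP_MS - len(audio), frame_rate=audio.frame_rate)
else:
    audio = audio[:CLIP_MS]

audio.export("input_5s.wav", format="wav")
```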
dep#0002: maybe the first finetune April#5244: I wonder if there's a way to just have the whole song in the image 🤔 April#5244: I guess it'd have to be a larger image... vai#0872: any way to fine tune this model? Milano#2460: hi All! are you aware of https://www.isik.dev/posts/Technoset.html ? Milano#2460: Technoset is a data-set of 90,933 electronic music loops, totalling around 50 hours. Each loop has a length of 1.827-seconds and is at 128bpm. The loops are from 10,000 separate electronic music tracks. Milano#2460: I'm wondering how best to preprocess it. db0798#7460: I think this one is pretty cool actually, sounds like Autechre jacobresch#3699: here's an extension for auto1111 which automatically converts the images to audio again 🙂 https://github.com/enlyth/sd-webui-riffusion JL#1976: Pinned a message. HD#1311: this is exactly the kind of plugin I was looking for HD#1311: thanks HD#1311: I'll report if it works when I get home from work April#5244: Worked for me but it messed up my sd install and python because it tried to install pytorch audio which I already had
JeniaJitsev#1332: Great work, folks, very impressive! I am scientific lead and co-founder of LAION, datasets of which are used to train original image based stable diffusion. Very nice to see such a cool twist for getting spectrogram based training running. We would be very much interested to cooperate on that and scale it further up - just join our LAION discord : https://discord.gg/we4DaujH JL#1976: Welcome, happy to see you here! HD#1311: just tested it out and it works HD#1311: fun stuff pnuts#1013: 👋 dep#0002: So, me an lopho have been experimenting with converting audio to spectogram images dep#0002: @April we got better results by increasing the n_mels but it would prob not be compatible with the current model dep#0002: 512 (original) https://cdn.discordapp.com/attachments/1053081177772261386/1053353723319038012/invadercrop_nmels_512.png,https://cdn.discordapp.com/attachments/1053081177772261386/1053353723826552852/invader_nmels_512.wav dep#0002: 768 https://cdn.discordapp.com/attachments/1053081177772261386/1053353776838365224/invadercrop_nmels_768.png,https://cdn.discordapp.com/attachments/1053081177772261386/1053353777136148602/invader_nmels_768.wav dep#0002: 1024 https://cdn.discordapp.com/attachments/1053081177772261386/1053353811399409724/invadercrop_nmels_1024.png,https://cdn.discordapp.com/attachments/1053081177772261386/1053353811714002996/invader_nmels_1024.wav dep#0002: original (nothing) https://cdn.discordapp.com/attachments/1053081177772261386/1053353879825305650/invadercrop.wav dep#0002: you will probably need good headphones to hear the difference dep#0002: but you can hear one of the beats more clearly compared to 512 JL#1976: https://www.futurepedia.io/tool/riffusion Riffusion added to futerepedia.io dep#0002: I'll be updating these tools here:
https://github.com/chavinlo/riffusion-manipulation JL#1976: Let's create a post and get that pinned. JL#1976: **Official website:** https://riffusion.com/ **Technical explanation:** https://www.riffusion.com/about **Riffusion App Github:** https://github.com/hmartiro/riffusion-app **Riffusion Inference Server Github:** https://github.com/hmartiro/riffusion-inference/ **Developers:**
@seth @hayk **HackerNews thread:** https://news.ycombinator.com/item?id=33999162 **Subreddit:** https://reddit.com/r/riffusion **Riffusion manipulation tools from @dep :** https://github.com/chavinlo/riffusion-manipulation **Riffusion extension for AUTOMATIC1111 Web UI**: https://github.com/enlyth/sd-webui-riffusion
**Notebook:** https://colab.research.google.com/gist/mdc202002/411d8077c3c5bd34d7c9bf244a1c240e/riffusion_music2music.ipynb **Huggingface Riffusion demo:** https://huggingface.co/spaces/anzorq/riffusion-demo (pm me if any new resources have to be added or there are any errors in current listings) JL#1976: https://techcrunch.com/2022/12/15/try-riffusion-an-ai-model-that-composes-music-by-visualizing-it/ Wow, TechCrunch article! Eclipstic#9066: hi JL#1976: Hey Eclipstic#9066: whats up Eclipstic#9066: im messing around with riffusion now Eclipstic#9066: it mostly doesnt follow my prompts Eclipstic#9066: but sometimes it surprises me
JL#1976: Feel free to share the best stuff you get in #🤘︱share-riffs Eclipstic#9066: i can only say this: this is cool as hell HD#1311: what's the song dep#0002: https://www.youtube.com/watch?v=jezqbMVqcLk dep#0002: I am messing with the script rn so I can train a whole model on him dep#0002: another sample dep#0002: https://cdn.discordapp.com/attachments/1053081177772261386/1053381711276294234/planet_girl_rebuild_default.wav dep#0002: https://cdn.discordapp.com/attachments/1053081177772261386/1053381799297949777/planet_girl.png joao_betelgeuse#0410: yoooooooo JL#1976: Hey Tivra#3760: Good evening dep#0002: @seth have you considered using the 3 channels rather than just 1?
dep#0002: I've been discussing it with another dev, and it could help to use the 3 channels for 24-bit amplitude dep#0002: power 0.75 https://cdn.discordapp.com/attachments/1053081177772261386/1053421209599082496/planet_girl_rebuild.wav dep#0002: power 0.1 (earrape) https://cdn.discordapp.com/attachments/1053081177772261386/1053421333830180914/power01.wav dep#0002: power 0.4 https://cdn.discordapp.com/attachments/1053081177772261386/1053421463572586566/0.4.wav dep#0002: power 0.3 https://cdn.discordapp.com/attachments/1053081177772261386/1053421536628977794/0.3.wav dep#0002: 0.25 is the default and the best. SuperSonicDiscord1#4751: Do any devs here know how to do a latent space walk from one seed image to another? SuperSonicDiscord1#4751: `InferenceInput` implies that you can only have one seed image if you're using the inference server. WereSloth#0312: y'all have made it to the big time. Softology has added Riffusion to Visions of Chaos. :slothrofl: April#5244: the problem is the resulting image *must* be 512x512 to work properly with stable diffusion. obviously different sized spectrograms would be ideal but that really isn't possible 『komorebi』#3903: question: how do you train/finetune the ai i want to train it on ambient/drone music so that it can churn out stuff i like April#5244: just stick the model in dreambooth as usual. then feed it your spectrogram images in the same format hayk#0058: one of my favorite reddit comments https://cdn.discordapp.com/attachments/1053081177772261386/1053461992024850462/reddit_comment.png
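One way to read dep's "3 channels for 24-bit amplitude" idea a few messages up: instead of duplicating one 8-bit greyscale channel into R, G and B, spread a higher-precision amplitude value across the three bytes. A toy illustration of the idea (not anything in the riffusion code), showing the round trip is effectively lossless:

```python
import numpy as np

def encode_rgb24(amplitude: np.ndarray) -> np.ndarray:
    """Pack amplitudes in [0, 1] into an (H, W, 3) uint8 array (24 bits per pixel)."""
    v = np.round(amplitude * (2**24 - 1)).astype(np.uint32)
    return np.stack([(v >> 16) & 255, (v >> 8) & 255, v & 255], axis=-1).astype(np.uint8)

def decode_rgb24(rgb: np.ndarray) -> np.ndarray:
    """Reverse of encode_rgb24."""
    rgb = rgb.astype(np.uint32)
    v = (rgb[..., 0] << 16) | (rgb[..., 1] << 8) | rgb[..., 2]
    return v / (2**24 - 1)

spec = np.random.rand(512, 512)
assert np.allclose(decode_rgb24(encode_rgb24(spec)), spec, atol=1 / 2**24)
```

The open question is whether the diffusion model can learn to keep the three bytes mutually consistent, since they only mean anything when read together.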
『komorebi』#3903: oh ok how to convert images to spectrogram also i have to import 5 second stuff right? nothing longer? IgnizHerz#2097: https://github.com/chavinlo/riffusion-manipulation April#5244: I posted code that does this earlier. though it seems a link to something better was just posted? April#5244: and yes, has to be 5 second clips 『komorebi』#3903: ah ok 『komorebi』#3903: another dumb question how do i put it in dreambooth where do i even get dreambooth IgnizHerz#2097: also @dep, it would be awesome if you set it where you can generate the images for the entire song automatically rather than manually getting every 5 seconds. I did this very poorly locally myself, but I'm sure people would appreciate it. dep#0002: I will do this soon dep#0002: The next version should grab the audio files, split it in chunks of 5 seconds, make a spectogram image for each chunk, and done dep#0002: rn I am downloading some artists to train on dep#0002: also, @hayk have you looked into the possibility of using an outpainting model, rather than img2img?
dep#0002: I can help you train one if needed dep#0002: he just logged off .-. IgnizHerz#2097: sweet, I did this with my very rusty python code. Only thing is dealing with getting to the end having like 2seconds or something leftover, guess you'll have to add empty noise. dep#0002: yeah empty noise should do April#5244: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb this is the colab I use for dreambooth 『komorebi』#3903: ok Sheppy#4289: if someone sets up a eli5 local setup guide, DM me please, i feel dumb, but i don't know what i'm doing dep#0002: eli5? Manly P Hall#3191: acronym for **e**xplain **l**ike **i**'m **5** dep#0002: a Sheppy#4289: yeah, my b dep#0002: well its not that hard dep#0002: just follow the instructions lel dep#0002: install the latest version of nvm, then install npm and node, then execute the inference server first, and then the webui
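A rough sketch of the batch conversion dep describes above: split a song into 5-second chunks, pad the leftover tail with silence ("empty noise"), and write one spectrogram per chunk. `spectrogram_image_from_wav` stands in for the wav→image helper from the converter scripts shared earlier; the import path is an assumption, adjust it to your own script.

```python
import io
import pydub

from audio2spectro import spectrogram_image_from_wav  # assumed location of the wav->image helper

CHUNK_MS = 5000  # ~5 seconds per spectrogram

song = pydub.AudioSegment.from_file("song.mp3").set_channels(1).set_frame_rate(44100)

for i, start in enumerate(range(0, len(song), CHUNK_MS)):
    chunk = song[start:start + CHUNK_MS]
    if len(chunk) < CHUNK_MS:
        # pad the final short chunk with silence so every image has the same width
        chunk += pydub.AudioSegment.silent(duration=CHUNK_MS - len(chunk), frame_rate=chunk.frame_rate)
    wav_bytes = io.BytesIO()
    chunk.export(wav_bytes, format="wav")
    wav_bytes.seek(0)
    spectrogram_image_from_wav(wav_bytes).save(f"chunks/chunk_{i:04d}.png")
```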
dep#0002: if you need help I can join the vc and explain you in depth oliveoil2222#2222: and thats all in command line right dep#0002: yes dep#0002: (unfortunately) oliveoil2222#2222: yup hayk#0058: I haven't tried it beyond a bit in the automatic1111 ui. Sometimes the longer results got repetitive, but there's a lot to play with there! oliveoil2222#2222: ah i was lost on what node version manager was now we're in business dep#0002: yeah, the nextjs version that the webui uses is only supported by node 16+ dep#0002: Heres a script to extract the tags of a song based off the filename from last.fm https://cdn.discordapp.com/attachments/1053081177772261386/1053518335624626226/message.txt dep#0002: https://cdn.discordapp.com/attachments/1053081177772261386/1053520950714454067/image.png dep#0002: @April btw what did you used on auto webui April#5244: I just converted the model file to a ckpt (the included one was 14gb for some reason so I didn't bother with it) and just used the converted ckpt in webui and it worked fine 🤷‍♀️ dep#0002: what prompts April#5244: ? for what exactly? dep#0002: like
dep#0002: to generate dep#0002: the images dep#0002: in the webui dep#0002: @April April#5244: whatever you want it to generate? dep#0002: You used Automatic's webUI to generate spectograms, right? April#5244: yeah IgnizHerz#2097: usually the prompt depends on what you intend to get the result as, jazz for jazz and such April#5244: ^ IgnizHerz#2097: the model is unusually good at jazz dep#0002: simple as that? April#5244: yeah dep#0002: like if I want something electro, just type the same as the riffusion webui? dep#0002: "Electro Pop" April#5244: yes. just make sure you're using the riffusion model
dep#0002: no "Spectogram of Electro Pop" then IgnizHerz#2097: no written rules, can run the same seed with different prompts to see what does best April#5244: no. riffusion generates spectrograms by default IgnizHerz#2097: but uh it generates without having to say it yeah IgnizHerz#2097: I imagine this fancy interpolation stuff would also help me effectively audio2audio style transfer IgnizHerz#2097: If I do each sections under the same seed they still have a bit of disconnect that makes them jumpy (im just using the auto webui) noop_noob#0479: Does anybody know what training data was used for riffusion? dep#0002: @April btw do you think your dreambooth finetune worked? April#5244: it was decent I guess? Not really that great lol. The background music kinda just became a mess, though it seemed to get her vocals okay April#5244: might need more data matteo101man#6162: how would we go about generating things longer than 5 seconds? April#5244: or perhaps just more proper labeling with terms or something 🤔 April#5244: the actual riffusion inference software should be able to do it which I am literally just starting to look into. can't seem to get it running :\ April#5244: basically what you need to do is generate a couple of clips and then generate the interpolation between them to "extend" the music April#5244: auto1111 won't be able to do it I think
matteo101man#6162: Ah I see, thanks matteo101man#6162: Shoot let me know if you figure it out matteo101man#6162: they do have interpolating prompt scripts and seed travel but I wouldn't know if that would work or if you could somehow frankenstein that with the webui riffusion thing April#5244: https://cdn.discordapp.com/attachments/1053081177772261386/1053542594996617216/image.png matteo101man#6162: I'm pretty new to all of it and I know zero python so 🤷‍♂️ April#5244: basically we need to generate interpolation pics between two seeds, convert all the clips into audio, and stick all the audio together April#5244: auto1111 definitely can't do the convert to audio (it can, with an extension, but it's very limited). and afaik there's no way to do interpolation in there either? April#5244: is there a script for that? April#5244: I might try to do it manually April#5244: the riffusion software is throwing errors :\ dep#0002: theres a func for it on the inference server matteo101man#6162: yeah they have two things for what you're talking about matteo101man#6162: https://github.com/EugeoSynthesisThirtyTwo/prompt-interpolation-script-for-sd-webui https://github.com/yownas/seed_travel April#5244: interesting, I'll have to try it
dep#0002: soon https://cdn.discordapp.com/attachments/1053081177772261386/1053554055764512878/image.png dep#0002: https://cdn.discordapp.com/attachments/1053081177772261386/1053554170227064893/Snail_s_House____waiting_for_you_in_snowing_city._chunk_2.png April#5244: testing seed travel https://cdn.discordapp.com/attachments/1053081177772261386/1053554728098865152/output.mp3 April#5244: this was with 5 steps April#5244: ```
img = Image.open("test/00000.png")
wav0 = wav_bytes_from_spectrogram_image(img)
sound0 = pydub.AudioSegment.from_wav(wav0[0])

img = Image.open("test/00001.png")
wav1 = wav_bytes_from_spectrogram_image(img)
sound1 = pydub.AudioSegment.from_wav(wav1[0])

img = Image.open("test/00002.png")
wav2 = wav_bytes_from_spectrogram_image(img)
sound2 = pydub.AudioSegment.from_wav(wav2[0])

img = Image.open("test/00003.png")
wav3 = wav_bytes_from_spectrogram_image(img)
sound3 = pydub.AudioSegment.from_wav(wav3[0])

img = Image.open("test/00004.png")
wav4 = wav_bytes_from_spectrogram_image(img)
sound4 = pydub.AudioSegment.from_wav(wav4[0])

sound = sound0+sound1+sound2+sound3+sound4

mp3_bytes = io.BytesIO()
sound.export(mp3_bytes, format="mp3")
mp3_bytes.seek(0)

with open("test/output.mp3", "wb") as outfile:
    outfile.write(mp3_bytes.getbuffer())
```
lazy code lmao matteo101man#6162: kinda sounds like a scratched up vinyl that keeps skipping or a messed up cd lol but thats progress
April#5244: the issue is there's no smoothing between clips :\ April#5244: I imagine a higher amount of interpolating steps would smooth it out a bit, but it'd also make the song longer... matteo101man#6162: what gpu do you have April#5244: my gpu is bad lol. 1660ti matteo101man#6162: dang April#5244: I can do like one gen every 10-15 seconds matteo101man#6162: i have a 3090 matteo101man#6162: i could try at more steps for you cause I'm curious about this as well but matteo101man#6162: how did you go about the process April#5244: threw the riffusion/dreamboothed model into webui and used the scripts you linked earlier to generate the interpolating/seed travel steps. that generates the various spectrogram images. after that I just used the code I just posted to convert them all into audio and append them together matteo101man#6162: that actually makes sense matteo101man#6162: let me make a bomb ass prompt first April#5244: here's with the interpolation script between hatsuki yura and jazz lol https://cdn.discordapp.com/attachments/1053081177772261386/1053557262876151859/music.wav April#5244: notable cuts matteo101man#6162: can't tell which one is better lol
April#5244: warped sound is just due to low txt2img smapling steps I think April#5244: I'm running 20 steps each since it's fast though it kinda gets the best results with like 70+ April#5244: also this interpolation script is nice since it stitches the spectrogram images together automatically so I can just throw that through the regular spectro->audio converter without bothering to stitch stuff together or mess with multiple files matteo101man#6162: ah, so which one is better matteo101man#6162: seed travel or prompt interpolation? April#5244: they do different things lol April#5244: seed travel you only get one prompt and it just generates between seeds April#5244: prompt interpolation you can give it two prompts and it'll generate the in-between matteo101man#6162: well just found out my seed travel script is broken anyway April#5244: rip April#5244: you can do the seed travel thing with the prompt interpolation one anyway I think by just putting the same prompt twice April#5244: at least, I imagine that's how it works? matteo101man#6162: guess ill find out matteo101man#6162: generation is actually taking pretty long though at 20 steps with prompt interpolation matteo101man#6162: just 4 images but taking much longer than usual
April#5244: tbh I don't know what settings riffusion uses by default 🤷‍♀️ matteo101man#6162: nah it's just interesting never really messed with the scripts i was talking about, just kinda downloaded them and read about their function matteo101man#6162: ah isee why matteo101man#6162: how do i execute this code within a certain directory April#5244: ? matteo101man#6162: your code matteo101man#6162: i'm not very familiar with python April#5244: this is to be used with the earlier spectro2audio code posted lol April#5244: lemme get the whole script matteo101man#6162: i only know LUA which is useless practically April#5244: https://cdn.discordapp.com/attachments/1053081177772261386/1053559417355903016/audio.py April#5244: this is the script i'm using April#5244: note the commented mess at the bottom April#5244: ```
# image = spectrogram_image_from_mp3(mp3_bytes)

parser = argparse.ArgumentParser()
parser.add_argument("filename", help="the file to process")
args = parser.parse_args()

# The filename is stored in the `filename` attribute of the `args` object
filename = args.filename

img = Image.open(filename)
wav = wav_bytes_from_spectrogram_image(img)
write_bytesio_to_file("music.wav", wav[0])
```
This is the default that i have it as which lets you just run the script specifying which file to convert. comment this out and use the other stuff below it to grab multiple files and stitch together April#5244: it's a mess since it's just for my personal use lol T_T matteo101man#6162: right so essentially like matteo101man#6162: paste the long script above matteo101man#6162: and uncomment your stuff and paste it below
April#5244: here I fixed the commenting https://cdn.discordapp.com/attachments/1053081177772261386/1053559902334881823/audio.py matteo101man#6162: uh, how do i define specific file names like if i have a folder of things named say "image (1)" April#5244: just run this and have your files in the "test" folder next to it April#5244: `Image.open("test/00004.png")` specifies which file to open matteo101man#6162: cool April#5244: it's just hardcoded to use test/0000#.png files April#5244: 0-4 April#5244: to make it more adaptable it'd require a bit of a rewrite lol April#5244: as you can see it's just loading up each file, converting it to wav file data, converting that into pydub audio, and concatting them together, then writing as mp3 matteo101man#6162: right matteo101man#6162: perhaps i'm a bit of a dummy April#5244: outpainting test https://cdn.discordapp.com/attachments/1053081177772261386/1053562076926316554/music.wav April#5244: everything past the 5s mark is outpainted matteo101man#6162: Am I supposed to run it as a py file?
April#5244: yeah the script I posted is a python script 🙂 matteo101man#6162: right matteo101man#6162: so say matteo101man#6162: i took that code verbatim matteo101man#6162: and put it into notepad++ and saved it as a .py and ran it in a folder with images from 00000 to 00019 and a test folder in that same folder April#5244: it's hardcoded to grab test/00000.png through test/00004.png and stitch them together. any higher won't work without modifying the script lol. dep#0002: https://github.com/chavinlo/riffusion-manipulation/blob/master/scraper/mass_a2i.py April#5244: should be fairly simple to write a loop to loop through such files though dep#0002: this converts a whole audio file into multiple imgs dep#0002: its meant for a dataset but should be easy to edit matteo101man#6162: yeah that's not the issue for me, took those extra ones out but just straight up running it causes nothing to happen; do you use a compiler or anything specific? matteo101man#6162: I see I have idle and I got an unexpected indent, so perhaps that's an issue April#5244: ? should just run it with python: `python audio.py` April#5244: yeah if it's saying unexpected indent, the indentation somewhere is messed up April#5244: since python is kinda strict about that
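The loop April mentions above (replacing the hardcoded test/00000.png … test/00004.png block) could look like the sketch below. `wav_bytes_from_spectrogram_image` is the helper from the posted audio.py; importing it as a module is an assumption about how you've saved that script.

```python
import glob
import io

import pydub
from PIL import Image

from audio import wav_bytes_from_spectrogram_image  # assumed module name for the posted audio.py

segments = []
for path in sorted(glob.glob("test/*.png")):
    wav = wav_bytes_from_spectrogram_image(Image.open(path))
    segments.append(pydub.AudioSegment.from_wav(wav[0]))

# concatenate in filename order
song = sum(segments[1:], segments[0])

mp3_bytes = io.BytesIO()
song.export(mp3_bytes, format="mp3")
with open("test/output.mp3", "wb") as outfile:
    outfile.write(mp3_bytes.getbuffer())
```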
April#5244: make sure it's either all spaces or all tabs and not a mix lol matteo101man#6162: yeah fixed tht matteo101man#6162: that* matteo101man#6162: now no module named numpy matteo101man#6162: ill look into that April#5244: pip install numpy 🙂 matteo101man#6162: yep matteo101man#6162: and for PIL matteo101man#6162: ? April#5244: I think for pil it's pip install pillow matteo101man#6162: holy cow matteo101man#6162: I'm missing so many things lol April#5244: ```
import io
import typing as T

import numpy as np
from PIL import Image
import pydub
from scipy.io import wavfile
import torch
import torchaudio
import argparse
```
April#5244: are the imports for the file matteo101man#6162: halfway there April#5244: torch, argparse, scipy, pydub, pillow, numpy April#5244: I think io and typing are python defaults? matteo101man#6162: installed a bunch of other things matteo101man#6162: now i've got torch not compiled with cuda enabled which appears to mean ughh, something
matteo101man#6162: quite a lot of things April#5244: this is why I do python stuff manually. I had torch installed already thanks to the whole auto webui stuff April#5244: then again I actually code in python 🤷‍♀️ matteo101man#6162: yeah i'm definitely missing something April#5244: https://pytorch.org/get-started/locally/ April#5244: pytorch has a weird install process to get it working with cuda April#5244: I imagine this is why most people use auto installers for stuff lmao matteo101man#6162: i'll let you know how it goes with that 😓 oliveoil2222#2222: man im so stubborn im just using the ckpt in a111 and throwing the spectrogram into img2audio.py 💀 matteo101man#6162: never thought installing something would be so hard April#5244: this is pretty much what I've been doing though if you're just doing txt2img for the 5 second clips, keep in mind there's a webui script that can run the img2audio script automatically oliveoil2222#2222: yeah i found it db0798#7460: Earlier today I tried to install the same things too and got stuck in the same place, haven't resolved it oliveoil2222#2222: anticipating easier stuff around the corner matteo101man#6162: i installed a bunch of random stuff
matteo101man#6162: did not help lol oliveoil2222#2222: would love to see if dall-e 2 style clip variations could happen but I guess spectrogram2spectrogram is the closest one can get for now matteo101man#6162: i get frozen solve when trying to install cuda on anaconda then it gets stuck on solving environment matteo101man#6162: im gonna come back to it in a few minutes ig matteo101man#6162: Right matteo101man#6162: just ran a perfect install of cuda matteo101man#6162: same torch not compiled matteo101man#6162: finally got it to work matteo101man#6162: cost 20 gb though db0798#7460: What did you have to do to get it to work? matteo101man#6162: https://stackoverflow.com/questions/57238344/i-have-a-gpu-and-cuda-installed-in-windows-10-but-pytorchs-torch-cuda-is-availa matteo101man#6162: do pip uninstall torch in cmd matteo101man#6162: then matteo101man#6162: https://cdn.discordapp.com/attachments/1053081177772261386/1053601238438133760/image.png matteo101man#6162: on this site choose those and take that command
db0798#7460: Thanks, I'll try this later matteo101man#6162: @April after taking all the time just to get that darn thing working i see what you mean matteo101man#6162: it doesnt really interpolate all that well matteo101man#6162: also increasing to max steps doesn't really make it much better the skipping is still obvious April#5244: Out painting seems to work for extending it tbh. matteo101man#6162: what do you use to ~~outpaint~~? better yet how exactly are you implementing that Lun-Sei#5355: ? matteo101man#6162: like how are you converting the larger image into a music matteo101man#6162: cause i just edited code and I made some frightening af stuff for 5:17AM IDDQD#9118: Hello, is that colab demo similar to the demo at riffusion.com in a way that it's possible to edit settings as the riffusion goes on? IDDQD#9118: yeah, no it's not IDDQD#9118: there are no colabs etc, that would allow similar kinda flow as the demo at riffusiondotcom? Can't run locally 😦 Onusai#6441: ~~not a colab but you can run it locally <https://github.com/hmartiro/riffusion-app>~~ mb misread IDDQD#9118: Would be cool, but I quess no luck with a laptop gpu ;DD Onusai#6441: oof yea i wouldnt count on it
IDDQD#9118: Yeah, thanks anyhow ! IDDQD#9118: can't wait for this to get optimized n stuff. Would like to think that there's a lot that can be done on that front Jay#0152: https://colab.research.google.com/gist/mdc202002/411d8077c3c5bd34d7c9bf244a1c240e/riffusion_music2music.ipynb Jay#0152: finally, enjoy everyone! Jay#0152: i know i will 🙂 Jay#0152: i fixed bugs in original notebook, credit to original author Jay#0152: (also is *very* configurable) EgorV4X#6102: This is probably stupid, but what if you do a fine tune on speech / vocals and use label data as a prompt? Jay#0152: Could someone plz pin this? EgorV4X#6102: and also if you train on instruments with midi/musicxml in prompt? EgorV4X#6102: I wonder if it will understand the meaning of this EgorV4X#6102: xd https://cdn.discordapp.com/attachments/1053081177772261386/1053659363937620058/image.png JL#1976: Pinned dep#0002: the author of that colab needs to use commit specific raw gets to avoid these kind of things cvtecvts#9059: Any musicians found any uses for Riffusion yet? So far, I've been able to chop some samples out of the generated clips, and some clips are ok inspiration for parts of songs.
dep#0002: I'm not a musician but I do tinker a lot with the models and tech At this point you can do txt2img to attempt to generate some new seeds You can also use img2img to generate other variants of the same "beat"(?) or tempo dep#0002: I posted some on #🤘︱share-riffs dep#0002: Using songs that you like, convert them to spectrogram and use them as a seed is way more pleasant imo dep#0002: I've also tried fine-tuning it but so far no good changes, might try training an outpainting one soon, since the current method is just img2img and the chunks are distinguishable Nikuson#6709: https://github.com/chavinlo/riffusion-manipulation I also can't figure out how to use it via pycharm Nikuson#6709: Perhaps a banal question, but I'm not very good at pycharm either head_robotics_AI#0742: Hello; when I tried to load/switch to a Riffusion model in Automatic1111 web ui I got a loading error ending in "AttributeError: 'NoneType' object has no attribute 'sd_model_checkpoint'" Any suggestions for how I might explore a solution? head_robotics_AI#0742: is a .yaml file needed for the models? db0798#7460: Is the model that you are loading to Automatic1111 the 14 Gb file? There are other versions with smaller files that work for me, at least: https://www.reddit.com/r/riffusion/comments/znbo75/how_to_load_model_to_automatic1111/ dep#0002: what error u getting
dep#0002: never used pycharm btw Nikuson#6709: I get absolutely nothing. just the terminal outputs “python” in response and nothing happens dep#0002: have you tried running it directly on bash or cli dep#0002: because on every instance I have ran it I never got that error Nikuson#6709: just nothing happens, I don't think it's a bug. dep#0002: It doesn't prints something like "Original audio length"? Nikuson#6709: how did you launch it? I'm just afraid that if I work in the terminal of the operating system, I won't be able to integrate it into the diffusion stable in any way. dep#0002: python3 file2img.py -i inputaudio.wav -o outputimg.png dep#0002: its just python functions, even if the script doesn't works, you should be able to copy the functions into your own script dep#0002: and then call them Nikuson#6709: nothing at all, just the standard "python" response to a command like "python3...." Nikuson#6709: yes i do it. I also tried to convert Lady Gaga's song and it didn't work either 😆 dep#0002: Download this dep#0002: https://cdn.discordapp.com/attachments/1053081177772261386/1053788225128374292/message.txt dep#0002: save it as file2audio.py
dep#0002: and try running it dep#0002: tell me what you get Nikuson#6709: ```
D:\Python\StableVoice\venv\lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
  warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
D:\Python\StableVoice\venv\lib\site-packages\torchaudio\backend\utils.py:62: UserWarning: No audio backend is available.
  warnings.warn("No audio backend is available.")
Traceback (most recent call last):
  File "D:/Python/StableVoice/img2audio.py", line 152, in <module>
    image = spectrogram_image_from_file(filename)
  File "D:/Python/StableVoice/img2audio.py", line 122, in spectrogram_image_from_file
    audio = pydub.AudioSegment.from_file(filename)
  File "D:\Python\StableVoice\venv\lib\site-packages\pydub\audio_segment.py", line 723, in from_file
    stdin_data = file.read()
AttributeError: 'NoneType' object has no attribute 'read'
The script is executing
About to call spectogram_image_from_file function
Loading Audio File

Process finished with exit code 1
```
dep#0002: you dont have ffmpeg installed dep#0002: thats why dep#0002: in what OS are you windows or linux Nikuson#6709: Win dep#0002: https://www.wikihow.com/Install-FFmpeg-on-Windows dep#0002: a bit hard ryan_helsing#7769: Hey @dep would you be willing to walk me through how to fine-tune on a custom data set? dep#0002: sure but the results aren't that good... yet dep#0002: wait dep#0002: rn I cant explain step by step but I can give you a brief summary of the process ryan_helsing#7769: That would be wonderful
Nikuson#6709: nothing has changed after installation Nikuson#6709: although it looks like windows doesn't see ffmpeg. Strange, I did everything according to the instructions Nikuson#6709: I fully installed it and the operating system even began to see ffmpeg, but the error is the same dep#0002: At this point I suggest you install WSL and run it from there db0798#7460: The script takes input and output paths as command line arguments, are you providing it these arguments? Nikuson#6709: It seems to me that this is an overly complicated setup process for this task. 🤕 I think everyone would be happy to see something like a colab laptop for this project. April#5244: auto1111 webui has an outpainting script. I load up the model and use the script to outpaint like normal. April#5244: I manually converted the model into a ckpt using separate tools, so it put it down to 4gb like normal. I saw someone upload already converted models at a 2gb and 4gb size but idk how well those work. I avoided the 14gb ckpt since I figured there's no way I'd be able to run it Nikuson#6709: I do everything like on github matteo101man#6162: @April how did you convert the larger outpainted image to an audio file IgnizHerz#2097: what settings for this? when I tried I got nice garbage April#5244: same script works fine 🤷‍♀️ April#5244: mostly default settings. denoise was set to like 0.75 I think? maybe higher? maskblur I set to like 20-40 or something to try and get it smoother. make sure you only extend to the right. I left fall off exponent and color variation as the defaults April#5244: I posted an outpainting sample here April#5244: since webui can only outpaint 256 pixels at a time, that means the outpainting only adds 2.56 seconds at once, so that's where your cuts are
IgnizHerz#2097: got an example of what your spectrogram looked like? April#5244: example outpainting spectro + converted song https://cdn.discordapp.com/attachments/1053081177772261386/1053814603445960824/music.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1053814603890561054/test.png April#5244: this one I was deliberately going for a different prompt at the end April#5244: you can see that for the first half or so it's smooth and fine, and only once I switch the prompt it differs April#5244: jumping between prompts doesn't seem to work that well. too hard of an edge I think IgnizHerz#2097: and this I presume is from a generated one to begin with? IgnizHerz#2097: mine might not being doing well because I'm attempting an extend from a converted audio to graph IgnizHerz#2097: you can tell pretty quickly where it swaps https://cdn.discordapp.com/attachments/1053081177772261386/1053815712780668991/tmpjlpy8xou.png April#5244: yeah, so this is just extending a txt2img song using the same model and prompt that generated it. so the result of the outpainting should be near identical which leads to a useful continuation April#5244: trying to extend an existing song will lead to issues, because there's no guarantee the model *can* generate something similar April#5244: I'd try adjusting your denoising value, increase mask blur April#5244: maybe do a clip interrogation to try and get the tokens that most closely match the song IgnizHerz#2097: true its probably struggling to get what my song is Nikuson#6709: I still can't understand how you started it IgnizHerz#2097: anything remotely brass band esq tends to be a lot older fashioned than the song I'm using
Nikuson#6709: 😢 IgnizHerz#2097: like uh with the built-in clip interrogator? April#5244: yeah April#5244: ? how I started it? IgnizHerz#2097: `a black and white photo of a square area with a pattern on it` not sure if it'd help to be fair IgnizHerz#2097: something tells me every graph looks this way April#5244: 🤷‍♀️ IgnizHerz#2097: also this from earlier, are you implying this script uses interpolation? April#5244: no. that specific code just stitches together various spectrograms into a single audio clip April#5244: the interpolation was done using a webui script IgnizHerz#2097: I havent touched the actual install, I had enough install issues that I gave up IgnizHerz#2097: I was thinking of hastily cramming code into the auto extension temp, or just wait I guess IgnizHerz#2097: if the interpolation is even worth doing that, If it doesn't really mend that well, I could always just crossfade in an audio editor April#5244: I posted seed travel and interpolation tests a bit earlier April#5244: I found seed travel worked better in that it was less skipping. the interpolation test I tried skipped hard but I think that might've been due to not enough steps and a large difference between prompts
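The "crossfade in an audio editor" step IgnizHerz mentions above can also live in the stitching script itself: pydub supports crossfades on append, and matching loudness between chunks helps when one clip comes out much quieter than the next. A sketch, assuming five already-converted files named chunk_0.wav … chunk_4.wav; the crossfade length and target level are arbitrary:

```python
import pydub

def match_gain(segment: pydub.AudioSegment, target_dbfs: float = -20.0) -> pydub.AudioSegment:
    # nudge each chunk toward a common average level before joining
    return segment.apply_gain(target_dbfs - segment.dBFS)

chunks = [pydub.AudioSegment.from_wav(f"chunk_{i}.wav") for i in range(5)]

song = match_gain(chunks[0])
for chunk in chunks[1:]:
    # a 200 ms crossfade smooths the seam between consecutive 5-second clips
    song = song.append(match_gain(chunk), crossfade=200)

song.export("stitched.wav", format="wav")
```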
IgnizHerz#2097: I'd just like to keep the effective "style transfer" halfway consistent IgnizHerz#2097: on the stuff I've been doing IgnizHerz#2097: between two chunks of 5 seconds one will be quiet and then the next starts much louder IgnizHerz#2097: harder to fix than popping or silence matteo101man#6162: Ah maybe I’m using the wrong outpainting or something matteo101man#6162: Guess ill check when I get home matteo101man#6162: Does outpainting like extend it to the right there or what? Mine just makes a 768 image like an upscale IgnizHerz#2097: in autos you can specify a direction and how much April#5244: in the outpainting mk 2 script for auto1111 webui you can specify the direction to outpaint Nikuson#6709: for some reason I just can't use https://github.com/chavinlo/riffusion-manipulation and I'm wondering how others installed it April#5244: @Nikuson I'm using my own code I posted here db0798#7460: That's kind of the problem that OpenAI Jukebox has too. OpenAI Jukebox can extend songs and the extended bits aren't completely unrelated to the original but they are still so random that the output sounds like a very drunken jam session rather than a song. It would be good if there was a way to make the output conform to some song structure, like verse-chorus-verse or something like that. But I don't know what exactly would need to be done to achieve that April#5244: I can't imagine the approach riffusion is using will ever accomplish that. since it's just generating 5 second clips (or less) at once and it can't really "know" any larger structures April#5244: you'd need something that could be trained on whole songs, and generates whole songs April#5244: not 5 second clips
db0798#7460: I think a 5 second clip could be a seed for making a loop. In the output of OpenAI Jukebox, there are usually some 5 to 10 second bits that sound great when they are looped but instead of looping, OpenAI Jukebox moves on to something unrelated, and the incoherence makes the output sound bad. So just looping a bit would already be an improvement over a stream of randomness, I think. And then perhaps it would be possible to make variations to the loop by using something like image2image. Or chopping the loop up into smaller time slices and swapping the order of some slices, and duplicating some slices. When you have two 5 second loops, one could serve as the A section and the other as the B section in a larger structure April#5244: you should be able to make good loops by generating an x-tileable image when diffusing Nikuson#6709: Can you please provide the link again? April#5244: https://discord.com/channels/1053034685590143047/1053081177772261386/1053559417808900096 db0798#7460: That seems like a good idea Nikuson#6709: Do you only have audio conversion or also images? April#5244: again already posted earlier: https://discord.com/channels/1053034685590143047/1053081177772261386/1053176008687230978 April#5244: these are just my personal scripts. I know there's some more clean/nicer scripts out there somewhere posted lol IgnizHerz#2097: you could train an outpaint model just for riffusion couldn't you? April#5244: I was just asked this in the "shinonome ai lab" server lol. Perhaps? I have no idea how well it'd go or how inpaint/outpaint models even work. I tried merging the inpainting 1.5 model with riffusion and it failed miserably IgnizHerz#2097: obviously lol April#5244: outpainting using regular riffusion/dreamboothed model seems to work fine April#5244: even then though, outpainting won't actually solve the issue mentioned IgnizHerz#2097: a particular outpaint would be able to mend new pieces much nicer April#5244: since the length is always capped at 5.12s at most
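On April's "x-tileable image" remark above: the usual seamless-texture trick for Stable Diffusion is to switch the convolutions to circular padding so the output wraps around at the edges. A sketch with diffusers, assuming the riffusion/riffusion-model-v1 weights mentioned earlier; note this wraps both axes, so a true time-axis-only version would need an asymmetric padding patch, and I haven't verified how well it behaves on riffusion specifically.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA GPU is available

# Make every convolution wrap around instead of zero-padding, so the
# generated spectrogram tiles more seamlessly when the clip is looped.
for module in list(pipe.unet.modules()) + list(pipe.vae.modules()):
    if isinstance(module, torch.nn.Conv2d):
        module.padding_mode = "circular"

pipe("lo-fi ambient loop", num_inference_steps=50).images[0].save("loop.png")
```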
IgnizHerz#2097: same way with images Nikuson#6709: no, I really like them. At least until I ran them and there were no problems 😅 IgnizHerz#2097: its how outpaint on images can keep a general idea and even add to them IgnizHerz#2097: I mean you can just keep outpainting couldn't you April#5244: I showed some outpainting examples earlier iirc April#5244: it works well for consistency but it won't be able to create proper song structure IgnizHerz#2097: music isn't too far from images IgnizHerz#2097: in terms of blending and keeping consistency IgnizHerz#2097: I think its moreso the outpaint not being so good at it on its own April#5244: the problem is sd and thus riffusion is stuck with 512x512 images, meaning the model is trained on 5 second clips, which aren't enough to teach song structure Nikuson#6709: I’ve just been writing various articles about all possible implementations of generative models with explanations for quite some time, and it became very interesting for me to release them at the end of the year. something about audio. Do you mind if your script gets on github and a couple of sites with articles about programming?😅 I'll try to point you out if possible IgnizHerz#2097: right, if it was an outpaint model trained specifically to add new pieces based off of old ones IgnizHerz#2097: wouldnt that work April#5244: I post code with the assumption that people are gonna use it as they please. so go nuts 🙂 April#5244: just don't post any identifying info about me please
IgnizHerz#2097: praise the open sourcing April#5244: I like my anonymity April#5244: the code I posted is mostly ripped from riffusion and chatgpt anyway 🤷‍♀️ April#5244: I'd formally release it but it's a mess lol April#5244: normally I try to keep proper releases to things that are actually nice 😂 Nikuson#6709: ok, it will remain a secret Nikuson#6709: By the way, it's very interesting how you can use depth2img in audio April#5244: ? Nikuson#6709: I'm just wondering how the depth2img function will behave in stable diffusion 2 on spectrograms Jay#0152: https://colab.research.google.com/github/thx-pw/riffusion-music2music-colab/blob/main/riffusion_music2music.ipynb Jay#0152: Everything seems to be fixed by og author, enjoy April#5244: If I'm understanding this correctly, this takes a song file, splits it up, runs img2img on each section, then stitches it back together? April#5244: also one thing I've been thinking is that it'd be super helpful to have a list of captions used on the training dataset of riffusion, rather than just blindly guessing what's in there db0798#7460: I recently watched this video from a music theory guy on Youtube that explains why just morphing a motif into something different over time doesn't work as well as having proper song structure: https://www.youtube.com/watch?v=8Z8zOLtgvgU&ab_channel=RyanLeach I think this depends on style, though: for some styles the outpainting approach would suit better than others
April#5244: with regular stable diffusion it's using the laion dataset and clip which has damn near everything you'd want. but what does riffusion have? Jay#0152: yes you're correct. April#5244: it's entirely possible to keep outpainting. however you can't guarantee consistency. outpainting only makes sure the connecting bit flows well, not that the entire structure works together db0798#7460: Yes IgnizHerz#2097: couldn't you train a model to specifically do this? April#5244: if you use the same prompt and just keep outpainting, you get something workable but no overarching structure IgnizHerz#2097: it improves outpainting on images to use the original image as a basis as well April#5244: no. sd only works up to 512x512 images, which means that you're stuck at 5s length for "knowledge" April#5244: you'd have to rework how sd diffusion works April#5244: to allow wider images Jay#0152: might not be very difficult, though April#5244: for a regular 3min song you'd need a 3000x512 image I think April#5244: actually. I think my math is off lol April#5244: 5s = 512 IgnizHerz#2097: I figured for a proper outpaint model you'd use the original image as base anyways
April#5244: 18432x512 April#5244: no that's still wrong April#5244: i'm dumb April#5244: actually.... IgnizHerz#2097: which is applicable to music as it is to images. For both you'd want something that consists of new material but is not just random nonsense April#5244: yes that's correct. 5s=512 180s/5s = 36 36*512 = 18432? April#5244: might be easier to just convert a full song and look at size lol April#5244: i'm dumb af IgnizHerz#2097: though I'd imagine getting a song length of that size to work kek Jay#0152: attempting style transfer w notebook to change song by the Beatles to one by the Doors lmao, wish me luck! 😆 April#5244: 24094x512 for a 4min song April#5244: so yeah 18k is about right
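A quick sanity check of the width arithmetic above, in Python, assuming (as stated in this thread) that a 512-pixel-wide spectrogram covers roughly 5 seconds of audio:
```python
# Rough spectrogram width needed for a clip, assuming ~512 px ≈ 5 s as discussed above.
SECONDS_PER_512_PX = 5.0

def spectrogram_width_px(seconds: float) -> int:
    return round(seconds / SECONDS_PER_512_PX * 512)

print(spectrogram_width_px(180))  # 3 min song -> 18432 px
print(spectrogram_width_px(240))  # 4 min song -> 24576 px, close to the measured 24094 above
```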
April#5244: tldr: sticking the whole dang song into sd is unworkable April#5244: even though that's what you'd need to do to get song structure dep#0002: Not really, I mean sure SD 1.5. but SD 2.0 works at 768, and we already have aspect ratio trainers April#5244: I suppose if you clip into like song sections like verses/chorus/whatever you could tag those individually, gen them using such tags, and manually stick together? April#5244: though that'd still be large I think April#5244: right, but 768 is already a huge jump that's computationally expensive, and that only nets you an extra 2 seconds April#5244: as the math I just did shows, you'd need 18000x512 style images to do whole songs April#5244: far larger than 768 April#5244: like 10x lmao dep#0002: We might be able to apply tensorrt and get a big boost as well dep#0002: But yeah, we are never going any higher than 2048 April#5244: tbh I think doing image diffusion on spectrograms is kinda a losing approach. computationally expensive for full length, very limited on smaller stuff, and spectrograms lose audio info dep#0002: 1024 even April#5244: exactly April#5244: though one thing is that sd is square aspect ratio which we don't need for music
April#5244: 512 height is fine April#5244: it's just width we need Jay#0152: oh my god it kind of worked..... with default settings dep#0002: About the loss, lopho and me were talking about inserting more data on the other 2 channels Currently it uses 1 channel (B&W) and replicates across the 2 other channels dep#0002: The problem is that it would be even harder to recognize April#5244: yeah I saw some talk about that earlier, using the full rgb dep#0002: Ah wait April#5244: I don't know literally anything about music lol dep#0002: I was talking about width too xd dep#0002: 512h is ok imo April#5244: as it stands, sd trains on 1:1 ratio, so 768w = 768h, which just makes it hard to scale as scaling both makes it much more pixels and expensive April#5244: being able to train on non-square aspect ratio would help a lot April#5244: but even then... dep#0002: Would?
dep#0002: Anyways ..... April#5244: yeah so we don't need like 2048x2048 images, but rather 2048x512 dep#0002: It exists April#5244: oh? dep#0002: It's called bucketing dep#0002: I just said it..... April#5244: doing that would definitely help April#5244: but still I think it gets expensive even past 2048x512 April#5244: or probably before that dep#0002: Yeah dep#0002: And also that the model would struggle at such extreme aspect ratios April#5244: decided to run a test with regular sd stuff and gen a 2048x512 image just to see if I can even do it lol April#5244: it's taking like 3min I think lmao April#5244: 2048 would be 20s lmao nowhere close to a song
db0798#7460: I think just generating loops with Stable Diffusion and then stitching them together with some post-processing script would work better than trying to outpaint a whole song April#5244: 5s loop isn't really that interesting to listen to though lol April#5244: okay seems my laptop *is* able to gen a 2048x512 image 🙂 April#5244: took forever though @.@ db0798#7460: Yes, it would need variations. I think those could be created by inpainting bits of the loop, and rearranging time slices of the loop April#5244: yup. though I feel like that quickly gets into "the human is making the music" territory, rather than the ai itself lol Nikuson#6709: does anyone have any ideas to improve the sound quality? db0798#7460: Yes, kind of. But if the post-processing is all done by a script instead of a human manually editing stuff, the result will still be fully computer-generated April#5244: true. but if you're essentially hardcoding song structure, all the songs will end up sounding similar I think? Nikuson#6709: one could immediately get audio from the image using a diffusion vocoder db0798#7460: I guess depends on how complex the post-processing script is. A simple one would make similar-sounding output every time, a more complex one wouldn't April#5244: true I suppose April#5244: but I can't help but feel it's the same exact approach as making a chatbot by hardcoding in lines and responses April#5244: like sure you technically get something workable, but it's really not a great solution April#5244: 🤷‍♀️ I guess I'll just have to see if someone actually makes something like that
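A minimal sketch of the "generate loops, then stitch them with a script" idea described above, using pydub (already used elsewhere in this thread); the clip file names and crossfade length are placeholders:
```python
from pydub import AudioSegment

# Hypothetical 5-second clips rendered from generated spectrograms.
clip_paths = ["loop_a.wav", "loop_b.wav", "loop_a_variation.wav", "loop_b.wav"]

song = AudioSegment.from_wav(clip_paths[0])
for path in clip_paths[1:]:
    # A short crossfade hides the seam between independently generated loops.
    song = song.append(AudioSegment.from_wav(path), crossfade=250)  # milliseconds

song.export("stitched_song.wav", format="wav")
```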
Nikuson#6709: and in what variable does the untransformed spectrogram lie here? April#5244: ? db0798#7460: Yes, generating a song structure the way I described is definitely more primitive compared to getting a neural network to produce a song structure. But the neural network method seems to be more difficult to do April#5244: it's definitely a hard problem to solve db0798#7460: Yes, I'm just throwing ideas around here and won't promise to do this myself, as I have many other unrelated things that I'm already busy with Nikuson#6709: it’s already night for me and today I don’t seem to have time to understand your code and I just wanted to know in advance which of the variables contains the spectrogram taken from the image, but not yet converted to audio April#5244: the spectrogram is the input for that particular script. it converts the spectrogram image into audio. I have no idea how the actual inner workings of the script work. spectro->audio was provided by riffusion, and audio->spectro was ai generated based on the riffusion code April#5244: I know literally nothing about how this tech works or anything about music so sorry to burst bubbles there 🙏 April#5244: if i had to guess "spectrogram_from_image" likely is what converts the image to spectrogram data in-code Nikuson#6709: ok, I'll study this code closer to dinner dep#0002: @seth Sorry to bother you, but how many steps was riffusion trained for? And at what batch size? a_robot_kicker#7014: okay, I've been plunking away at this code for a while now and finally got it to work. No idea why but I had to reach deep into the transformers library and patch these three lines https://cdn.discordapp.com/attachments/1053081177772261386/1053874830933508129/image.png a_robot_kicker#7014: somehow I was ending up with "attn_probs" being Float32, and "value_states" being float16 a_robot_kicker#7014: I can see that RiffusionPipeline defines its datatype as float16, so I'm not sure how attn_probs ended up being float32 🤷 https://cdn.discordapp.com/attachments/1053081177772261386/1053875188028153916/image.png dep#0002: I got a similar error where tensors were mismatch but it was using raw diffusers
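For context, the kind of patch a_robot_kicker describes above usually boils down to a dtype cast right before the matrix multiply; this is only a sketch of the idea, not the exact library code (which differs across transformers/diffusers versions):
```python
import torch

def attention_output(attn_probs: torch.Tensor, value_states: torch.Tensor) -> torch.Tensor:
    # When softmax runs in float32 but the values are float16 (common in fp16 pipelines),
    # torch.bmm raises the dtype mismatch error shown in the screenshots above.
    if attn_probs.dtype != value_states.dtype:
        attn_probs = attn_probs.to(value_states.dtype)
    return torch.bmm(attn_probs, value_states)
```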
a_robot_kicker#7014: another change I ended up needing to do was converting the spectrogram image to float64, otherwise it would always overflow and produce nans https://cdn.discordapp.com/attachments/1053081177772261386/1053875449765318737/image.png a_robot_kicker#7014: but, now I'm getting output spectrograms and waveforms! https://cdn.discordapp.com/attachments/1053081177772261386/1053875587392995368/image.png a_robot_kicker#7014: will now learn how to save or play these. For learning purposes I took the guts out of the flask server and turned it into a command line interface. a_robot_kicker#7014: okay, not too bad! now I've got wav files!! https://cdn.discordapp.com/attachments/1053081177772261386/1053876328438440056/image.png a_robot_kicker#7014: wow, my first wav -- "Taylor Swift Beat Boxing" https://cdn.discordapp.com/attachments/1053081177772261386/1053877417183285268/output.wav a_robot_kicker#7014: thanks, gonna have a lot of fun with this tool 🙂 joao_betelgeuse#0410: Based matteo101man#6162: @April What exact script are you using to convert the longer outpainted sequences? like say something 896x512? img2audio? April#5244: same script that I posted. works for any size spectrogram lol matteo101man#6162: which one chief? there's a few I remember matteo101man#6162: like I remember you showed me audio.py but that ones for 4+ images April#5244: I didn't at all change the conversion script lol. so... any of them? the spectrogram->audio script I'm using is straight from riffusion matteo101man#6162: chavinlo ones? like riffusion-manipulation? or you mean something else matteo101man#6162: might be missing another maybe ill scroll up matteo101man#6162: cause definitely don't mess with img2audio.py for anything more than as is 🤮
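Circling back to the two fixes a_robot_kicker mentions above (promoting the 8-bit spectrogram image to float before the math, and writing the result out as a WAV): a rough sketch of both steps. soundfile is just one possible backend here, and none of this is the official riffusion code:
```python
import numpy as np
import soundfile as sf
from PIL import Image

def load_spectrogram_image(path: str) -> np.ndarray:
    # uint8 pixels overflow (and eventually produce NaNs) once you start
    # exponentiating and rescaling, so promote to float64 first.
    return np.asarray(Image.open(path).convert("L"), dtype=np.float64)

def save_waveform(samples: np.ndarray, path: str, sample_rate: int = 44100) -> None:
    # Peak-normalize to [-1, 1] and write 16-bit PCM.
    peak = float(np.max(np.abs(samples))) or 1.0
    sf.write(path, samples / peak, sample_rate, subtype="PCM_16")
```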
April#5244: https://discord.com/channels/1053034685590143047/1053081177772261386/1053559417808900096 https://discord.com/channels/1053034685590143047/1053081177772261386/1053176008687230978 April#5244: I'm sure the riffusion-manipulation one will work fine too matteo101man#6162: oh no April#5244: or you can just grab audio.py from riffusion lol matteo101man#6162: audio2spectro.py was the thing i was missing though (edit: which has nothing to do with what I'm doing) matteo101man#6162: but nah manipulation one makes horrendous noises April#5244: you can mess around with the max_volume variable. it's normally set to 50 but you can tweak it matteo101man#6162: just static doing that though matteo101man#6162: I'm just gonna give up on it, I tried the other audio.py but there's a lot of variables I have to edit to actually convert the file matteo101man#6162: and if it's not mathematically sound it throws errors hayk#0058: Digging the prompts here: https://www.youtube.com/watch?v=BUBaHhDxkIc matteo101man#6162: it will be nice when we can generate stuff of that quality straight from stable diffusion BananaBot#3675: I’ve only tried a couple spaces, but I noticed that they typically do things in increments of time (/seconds). Wouldn’t it be more useful (since we’re dealing with music) to generate things by beat/bar and bpm? Or is that not possible? Jonestown#8964: Just trying out Riffusion now. This is pretty impressive. I've been following Harmonai for a while and haven't seen anything too interesting come out of it yet, but Riffusion is a big step forward.
noop_noob#0479: @hayk @seth Sorry for the ping. May I know what the dataset was? Or if not, maybe at least what the text in the dataset looks like? I think that maybe knowing what the data looks like could lead to better prompts. a_robot_kicker#7014: Yeah I can't find this info anywhere and it's critical to understand the limitations of this model and how it might be improved. outhippo#4297: damn, even if it does not adhere to prompts that well its still some great music outhippo#4297: Can someone explain where this spectogranm to audio script gets the "sounds" from meaning how the piano or drums sound? outhippo#4297: i cant wrap my head around it noop_noob#0479: The X axis of the spectogram is time. The Y axis is the frequency. At a specific time (i.e., in a single column of pixels), a specific timbre of sound (which, for example, distinguishes a piano from a violin) corresponds to a specific pattern of blacks/whites/grays. Placing this pattern higher or lower corresponds to a higher or lower pitch. outhippo#4297: got it, but there are thousands of possible piano sounds - some are terrible, midi sounding and some are full and realistic. I wonder why the music ends up sounding pretty good in terms of sound selection and not fake. Maybe it's just that the frequencies correspond to how iyt should sound "exactly" unlike midi which just says how high or low the sound should be but not the quality of the sound. noop_noob#0479: The different kinds of piano sounds correspond to slightly different timbres/patterns. noop_noob#0479: something something fourier transform idk lol outhippo#4297: got it, so if it was trained on music that is professionak sounding than it would retain the professional sound selection noop_noob#0479: Probably, yeah. Maybe prompts might affect that too. outhippo#4297: Thanks, I need to read more on spectograms too noop_noob#0479: @outhippohttps://youtu.be/spUNpyF58BY noop_noob#0479: https://www.reddit.com/r/StableDiffusion/comments/zoc365/new_riffusion_web_ui_realtime_music_generation_up/ Nikuson#6709: I don't know why, but pycharm refuses to see pydub even though I installed it
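To make the spectrogram explanation above concrete, here is a small torchaudio sketch that turns a waveform into the kind of time-frequency image being discussed; the parameter values are illustrative, not Riffusion's exact settings:
```python
import torchaudio

waveform, sr = torchaudio.load("clip.wav")      # (channels, samples)
waveform = waveform.mean(dim=0, keepdim=True)   # mix down to mono

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sr,
    n_fft=2048,        # window size -> frequency resolution (Y axis)
    hop_length=512,    # stride -> time resolution (X axis)
    n_mels=512,        # number of frequency rows in the image
)(waveform)

# Columns are time steps, rows are frequency bands; the grey-level pattern
# inside a column is what encodes timbre (piano vs. violin, etc.).
print(mel.shape)       # (1, 512, num_frames)
```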
Nikuson#6709: from constant problems, I can only come to the conclusion that pycharm is far from the best IDE Twee#2335: damn this app is nuts dep#0002: https://developer.spotify.com/documentation/web-api/reference/#/operations/get-audio-features dep#0002: Might be really useful to use this for future finetuning dep#0002: @Jonestown ???????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? dep#0002: https://media.discordapp.net/attachments/710745951236522019/791617162364190730/image0-19.gif Jonestown#8964: @dep Cat decided to play with a toy on the keyboard. Sorry for spam lol dep#0002: lol Nikuson#6709: there is no thematic mood at all Twee#2335: putting "shoegaze" on the prompt does not make shoegaze :( Twee#2335: pain Sheppy#4289: yeah, training it to recognize musical modes/keycenters/time signitures/tempos should probably be a high priority Twee#2335: im more of a genre person Twee#2335: would be great if adding an amalgamation of genres in a prompt could create an accurate fusion of those genres Twee#2335: or simply adding a genre and recreate a song in the vein and sound of that genre
Twee#2335: and im not talking about mere meta genres like pop, rock, jazz, hip hop, etc Twee#2335: im talking about more specific subgenres and scenes Sheppy#4289: that's important too Twee#2335: like if i added "radiohead but vaporwave" Sheppy#4289: lol a_robot_kicker#7014: working on a simple tkinter local GUI, will make a github fork once it is useful https://cdn.discordapp.com/attachments/1053081177772261386/1054134363387858954/image.png a_robot_kicker#7014: so far in terms of data representation, I have no idea which prompts will work. Seems to mostly only know about electronic music and a few big name artists a_robot_kicker#7014: but the seed can have a huge effect Twee#2335: nice!! Thistle Cat#9883: https://youtu.be/nHuF927CgkM Thistle Cat#9883: Idk why but my label is far more devious than this audiobook noop_noob#0479: Huh https://fxtwitter.com/1cebell/status/1604267031238434819 noop_noob#0479: Found this from 4 years ago https://www.youtube.com/watch?v=YRb0XAnUpIk pork#9581: can we use higher resolution generations to get better audio quality/longer clips? quique#5458: hi, is there any colab available to generate smooth audio between two prompts which also allows us to grab and use the last spectogram image to generate further audio transitions? My idea is chaining several prompts transitions like "typing -> jazz piano -> guitar riff -> rock guitar solo"? or how would you do it manually?
IDDQD#9118: Neither does it do black metal :((( XIVV#9579: or hardcore punk :(((( IDDQD#9118: Looking forwards to the future iterations of this riffusion. Also would gladly contribute if anyhow possible at some point (most likely via labelling etc. since I don't posses programming prowess). this immense. IDDQD#9118: "an exhiliratingly epic anti-anthem of the dysregulatory purgatory in the style of psychedelic cosmic black metal vaporwave noise mumblerap with contemporary avantgarde classical and free jazz passages by greg rutkowski trending on artstation" when??!! Twee#2335: post-avant jazzcore and progressive dreamfunk Jack Julian#8888: https://everynoise.com Twee#2335: everynoise is ok but tbh an actually accurate genre database is RateYourMusic Twee#2335: sounds like ur average oranssi pazuzu album Twee#2335: i did try "dark synthpop with ethereal female vocals" and it sounds eerily similar to early cocteau twins Twee#2335: i was very impressed Twee#2335: cant wait until im able to extend songs thru automatic1111's ui cravinadventure#7884: Hi! If anyone needs a Mix and Master of any sound for much better sound quality, please DM me!
*- **BEFORE:** (Original_Riffusion_Output_Sound)*
*- **AFTER:** (Mixed_Mastered_Sound)* 🙂 https://cdn.discordapp.com/attachments/1053081177772261386/1054429627482910832/Original_Riffusion_Output_Sound.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1054429627810062436/Mixed_Mastered_Sound.wav
pnuts#1013: what's the benefit of running it via automatic1111? I'm using the web-app from the official repo atm
pnuts#1013: nvm, just installed it to take a look
Nikuson#6709: I can't solve the problem with pydub. It doesn't see ffmpeg even though I installed everything correctly
clambake#5510: if we can use transformers to turn words into music, could we generate an ai image from music
pnuts#1013: works pretty much out of the box here
```
cd git
git clone https://github.com/jiaaro/pydub.git
cd pydub
conda create -n pydub python=3.9 -y
conda activate pydub
python setup.py build
python setup.py install
conda install ffmpeg
python whatsthis.py
```
pnuts#1013: `whatsthis.py` is the first sample in the repo
```python
import os
import glob
from pydub import AudioSegment

video_dir = './samples'  # Path where the videos are located
extension_list = ('*.mp4', '*.flv')

os.chdir(video_dir)
for extension in extension_list:
    for video in glob.glob(extension):
        mp3_filename = os.path.splitext(os.path.basename(video))[0] + '.mp3'
        AudioSegment.from_file(video).export(mp3_filename, format='mp3')
```
AgentA1cr#8430: does Riffusion support negative prompt weights? AgentA1cr#8430: Also, loving what this model can do. However, it seems to me that, given enough time, it will slowly (or sometimes not-so-slowly) drift away from the prompt and start doing its own thing, with a strong preference for percussion and piano. hayk#0058: Very nice! I'm super interested to automate parts of this into the riffusion-inference repo so it can generate higher quality. Are you doing this within a DAW? I'd love a breakdown of the steps so we can try to get it in code. Somewhat related, if any audio experts have a lead on a neural vocoder to try that might perform better than Griffin Lim, that's worth exploring. undefined#3382: I'd really be great if we could finetune the model (using EveryDream) ourselves too 🙂 but I haven't seen any info about that or I missed it db0798#7460: My guess is that you have multiple versions of Python on your system and something you installed got installed for the wrong installation of Python a_robot_kicker#7014: I've been working with a DAW as well and will post a little fork with my progress, probably later today. Biggest limitation I have with the tool so far IMO is the hard-coded 5s limit, makes it challenging to keep in sync with clips from the DAW. I'd like to eventually write a VST plugin or something to make interop a bit easier. a_robot_kicker#7014: VST plugin would have to call some kind of API to a server running riffusion w the audio input it gets, and then async output that either to a file or the VST output buffer. A bit difficult because the VST api pretty strongly assumes realtime processing aplications. j.ru.s#9349: Hey all, does anyone know what music dataset Riffusion uses to fine-tune on? j.ru.s#9349: Making a universal vocoder is a really tricky problem, usually it needs to be specifically trained towards a specific type of audio output. In the case of Riffusion, if we know the music dataset used to do the fine tuning we could potentially train a neural vocoder on that. Nikuson#6709: I thought so too and removed all versions except 3.9, but still does not determine Nikuson#6709: i suggest to use univnet, it is based on diffusion too and can be used for different types of sounds db0798#7460: I don't know then what goes wrong in your installation. I got the installation working for myself today but it was a troublesome process Nikuson#6709: I did everything according to the guide from the Internet. downloaded and added path to PATH.
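For anyone unfamiliar with the Griffin-Lim step hayk mentions: it iteratively estimates the phase that the magnitude spectrogram discarded. A minimal torchaudio sketch, with illustrative parameters that would have to match whatever STFT settings produced the spectrogram:
```python
import torch
import torchaudio

n_fft, hop_length = 2048, 512

# A linear-frequency magnitude spectrogram, shape (freq_bins, frames); random
# placeholder data here, in practice recovered from a spectrogram image.
magnitude = torch.rand(n_fft // 2 + 1, 430)

griffin_lim = torchaudio.transforms.GriffinLim(
    n_fft=n_fft,
    hop_length=hop_length,
    n_iter=64,    # more iterations tends to mean fewer phase artifacts, at some cost
    power=1.0,    # input is a magnitude (not power) spectrogram
)
waveform = griffin_lim(magnitude)  # 1-D tensor of audio samples
```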
Nikuson#6709: "Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work" Nikuson#6709: I installed psydub via pip, but the audio backend for it is ffmpeg and I installed it according to the guide from wikiHow a_robot_kicker#7014: alrighty, here's my fork that has a simple local tkinter gui that can read and write .wav files: https://github.com/mklingen/riffusion-inference a_robot_kicker#7014: https://cdn.discordapp.com/attachments/1053081177772261386/1054498695162372116/riffusion_gui.gif XIVV#9579: damn XIVV#9579: im just waiting for this thing XIVV#9579: to develop XIVV#9579: so i can generate a sick black metal riff XIVV#9579: cuz XIVV#9579: rn XIVV#9579: it sounds like some kind of lounge music XIVV#9579: or something Haycoat#4808: Someone should make some sort of img2img version so you can make a doodle and turn it into a spectrogram of your text to music prompt nullerror#1387: apologies if this has been asked before i just joined but there is currently anyway to finetune the model with music of your choice? nullerror#1387: like if i wanted to retrain the model on a specific genre how would i go about converting wavs to the correct spectrogram format to fine tune the stable diffusion model on
db0798#7460: It should be possible to use chavinlo's scripts from https://github.com/chavinlo/riffusion-manipulation to do conversions between audio and spectrograms, and to run Dreambooth to add new spectrograms to the Riffusion model cravinadventure#7884: Amazing! Thank you for sharing. 🙂 nullerror#1387: thank you db0789! nullerror#1387: i assume this also takes care of the phase issues mentioned in the about page of the website db0798#7460: I don't know if it takes care of the phase issues or not nullerror#1387: is the person who made this in the server by chance nullerror#1387: ? XIVV#9579: i think it's @seth nullerror#1387: oh i meant the chavinlo github posted above nullerror#1387: dmed seth with the same question but no response yet. must be busy db0798#7460: I think it's @dep dep#0002: whats a phase issue dep#0002: loss in information? nullerror#1387: here one sec nullerror#1387: ill grab the quote
dep#0002: ok nullerror#1387: https://cdn.discordapp.com/attachments/1053081177772261386/1054509685790744627/image.png dep#0002: you mean if the repo uses the griffin-lim thing to reconstruct the audio? nullerror#1387: https://cdn.discordapp.com/attachments/1053081177772261386/1054509803369668618/image.png nullerror#1387: or to make the spectrograms nullerror#1387: cuz it says here the spectrograms they use take advantage of these algos or whatever they are dep#0002: it uses the exact same functions that the original riffuser inference server uses. The only new addition is the `image_from_spectogram` which is the inverse of `spectogram_from_image` although our implementation was unofficial but works almost the same as the one that's currently on the official repo https://cdn.discordapp.com/attachments/1053081177772261386/1054510864667000892/image.png nullerror#1387: ah got it perfect thank you! nullerror#1387: ill give this a shot dep#0002: there are also some tests included in the repo if you want to take a look at the reconstruction quality with different parameters nullerror#1387: will do nullerror#1387: appreciate it April#5244: >people using some code I generated using an ai I'll never stop being amused lol. I'm honestly just surprised it works at all nullerror#1387: okay so ive made some images and am going to finetune the model. i think ive gone ahead and made to many tho haha
nullerror#1387: how many images/training steps are recommended (if known) db0798#7460: My first Dreambooth test run just finished. I used 53 5 second pieces of a chiptune for training, used 'techno' as the class prompt. The output sounded like a chiptune already after 750 steps. I think it got to overfitting territory quite quickly, although it didn't make an exact copy of the input tune nullerror#1387: shoot only 53?? db0798#7460: If anyone has more experience with this, I would also like to know how many images and steps are recommended nullerror#1387: man im over here with 1400 i think i should cut back nullerror#1387: would also be interested in hearing images/steps nullerror#1387: although i know a general rule of thumb is 100x the number of new images nullerror#1387: for steps nullerror#1387: its how many images is the question db0798#7460: I think 1400 might be a good number actually, if you want to have varied output nullerror#1387: oh fr? hey ill be the test dummy and give it a whirl nullerror#1387: is probably gonna take forever tho but ill shoot my shot db0798#7460: With as few as I used, basically what I got back from it was the original input with mutations and rearrangements. I think with a larger input dataset there will be more of a chance to get something completely new in the output db0798#7460: I'm curious to know how your run turns out nullerror#1387: im gonna cut back slightly so im not wasting massive amounts of time/compute power but will report my results
nullerror#1387: gonna go for like 200-400 db0798#7460: That's like in the Terminator movie where Skynet travels back in time and contributs to Skynet's code a_robot_kicker#7014: That's awesome. I'd love to try a fine tuned chip tune model db0798#7460: I'll try again later with a larger input dataset a_robot_kicker#7014: I've noticed that this thing is learning a 3 channel image and converting it to single channel, which seems pretty wasteful. Perhaps you could even fit more time domain into the G and B channels. Phase is apparently hard, but one low hanging fruit might be to just put more time in there so you could get longer samples. Like literally R is the first 5 seconds, G the second, and B the third db0798#7460: April and deb talked about something like this here earlier, would be good if they implemented it a_robot_kicker#7014: But since it's using stable diffusion which is trained originally on natural images anything that doesn't vaguely resemble real images might be hard for it to learn. denny#1553: has anyone been using SoX for audio processing? Seems super powerful for automated output Nikuson#6709: I still don't understand how to make it work on Windows a_robot_kicker#7014: What are you hoping to use ffmpeg for? hayk#0058: You should try generating riffs without img2img conditioning. I believe it's just the seed image being "sparse" that leads to it sounding more like lounge music ryan_helsing#7769: I use it a ton on my project (https://neptunely.com) denny#1553: Has been working great at concatenating audio files so far-- been wondering if there's a way to ping-pong loop though ryan_helsing#7769: Can we supply our own seed images? Is what’s happening a sort of img2img style transfer technically? ryan_helsing#7769: It’s extremely powerful and quick when you pipe in output .. I sometimes build commands with hundred of sub commands tying together files and it does it in under a second
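A rough sketch of the R/G/B packing idea a_robot_kicker floats above: three consecutive 5-second greyscale spectrogram windows stacked into the colour channels of a single image. Whether Stable Diffusion could actually learn this representation is an open question; this only shows the packing itself:
```python
import numpy as np
from PIL import Image

def pack_rgb(windows: list[np.ndarray]) -> Image.Image:
    """Stack three 512x512 greyscale spectrogram windows into one RGB image
    (R = seconds 0-5, G = 5-10, B = 10-15)."""
    assert len(windows) == 3 and all(w.shape == (512, 512) for w in windows)
    return Image.fromarray(np.stack(windows, axis=-1).astype(np.uint8), mode="RGB")

def unpack_rgb(image: Image.Image) -> np.ndarray:
    """Inverse: recover the three windows and lay them out side by side in time."""
    rgb = np.asarray(image)
    return np.concatenate([rgb[..., c] for c in range(3)], axis=1)  # 512 x 1536
```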
denny#1553: yeah it's super fast! denny#1553: I've been impressed nullerror#1387: finetuned my model but ive run into the issue of idk how to run it now lmao nullerror#1387: i have an amd gpu so i dont think the webapp will work for me. is there any other way of running riffusion with a custom model? maybe a colab? nullerror#1387: https://huggingface.co/spaces/aross3/riffusion-rage nullerror#1387: https://huggingface.co/spaces/aross3/riffusion-rage nullerror#1387: apologies for two links my discord is glitchy nullerror#1387: but hey it worked. i trained it on yeat and carti. sounds a little off/vocoded. does it just need more training? was trained on 200 so images nullerror#1387: hmm actually its super phase-y. keeps like shifting up over time nullerror#1387: anyone know why? db0798#7460: Is the model that is loaded in the linked page your trained model? If so, what's the instance prompt for it? nullerror#1387: yes its my trained model and all the images its trained on are called rage nullerror#1387: lowercase "rage" nullerror#1387: it only came out as 1gb compared to the 14gb main model nullerror#1387: so maybe thats the issue? idk im more an audio guy than ml
db0798#7460: There are smaller versions of the Riffusion model that someone linked to on Reddit: https://www.reddit.com/r/riffusion/comments/znbo75/how_to_load_model_to_automatic1111/ . I was using the 4Gb one for training. I don't actually know what the difference is between these versions db0798#7460: Here's a random output file from the chiptunes test. It's kind of low fidelity but doesn't seem phasey in exactly in the same way as yours. I guess this could be because the material it was trained on was already 8 bit, so bit reduction doesn't affect it as much https://cdn.discordapp.com/attachments/1053081177772261386/1054571256814514277/sample_8750-00.mp3 nullerror#1387: hmm i wonder why mine is acting all weird nullerror#1387: did u use a colab to train it? db0798#7460: I used Automatic1111 WebUI run locally, with RTX 3080 nullerror#1387: ah see i got an amd card so im using a google colab db0798#7460: I tried right now what happens it I try to do the training with the 14 Gb model file, it doesn't work on my setup because CUDA runs out of memory nullerror#1387: i wonder if its chkpt setup is messy oir smth db0798#7460: I guess the spectrogram to audio conversion settings can make a difference to the output quality, too nullerror#1387: how did u convert? did u use that github u linked earlier? nullerror#1387: i did max settings except 512 height nullerror#1387: for audio to spectro nullerror#1387: i mean db0798#7460: Yes, I used the scripts from that Github page db0798#7460: For audio to spectrum I used the default settings
db0798#7460: For spectrum to audio I used the default settings, except I reduced maxvol from 100 to 50 because otherwise the audio started clipping nullerror#1387: gotcha nullerror#1387: i tried running spec to audio but again amd gpu so i couldnt do it db0798#7460: It would be handy if there was a Colab version of that Github repository nullerror#1387: yeah fr nullerror#1387: im using the dreambooth colab for fine tuning then spaces to run it nullerror#1387: also just some sort of how many images for fine tuning/steps ofc nullerror#1387: i guess im just asking for a guide at this point lmao db0798#7460: Yes. I think right now this hasn't been tested enough for anyone to write a guide IgnizHerz#2097: haha still developing the tools to play with it I think IgnizHerz#2097: plus training takes time LAIONardo#4462: do you actually have to use Dreambooth? Would it just replace the seeds in the interference model be enough? db0798#7460: How's that done? Would textual inversion do that? LAIONardo#4462: Maybe I am wrong here, but isn't this folder what train the model? https://github.com/riffusion/riffusion-inference/tree/main/seed_images LAIONardo#4462: Maybe your way of deploying DreamBot is the right thing to do
db0798#7460: I don't know what those images are for exactly but I think replacing the image files in that directory won't do anything unless you retrain the whole Riffusion model from scratch in the way the people who created that model did db0798#7460: I think textual inversion might also work in place of Dreambooth but I haven't heard of anyone trying it for Riffusion LAIONardo#4462: I wonder if @hayk could share some lights on what that folder does 🙂 LAIONardo#4462: Incredible work btw! a_robot_kicker#7014: No those are just initial images that can be used as a seed for img2img LAIONardo#4462: Oh I see that makes sense! noop_noob#0479: https://www.youtube.com/watch?v=uGRLOMf2hSc noop_noob#0479: AI music from a different team. Meatfucker#1381: Hello, enjoy your tool quite a bit. Also noted we have a bit of hobby crossovers when I checked your github profile. I fly fpv drones for shits n giggles. Meatfucker#1381: One thing I noted is the seams in between loops is a bit abrupt and had an idea about that. If you have a fast enough gpu you should be able to take two outputs and img2img the seam between them Meatfucker#1381: should make the transition between clips much smoother, but I think it would roughly double processing time since you would be making intermediate frames every time Meatfucker#1381: though you also wouldnt have to use the entire generated frame, just the edge, so maybe it wouldnt add so much noop_noob#0479: What is this? https://fxtwitter.com/naklecha/status/1598956352851693568 Meatfucker#1381: Looks like they are extracting chords from an audio file and then teaching a model to predict them Meatfucker#1381: a neat approach, but itll be limited to sounds that are chords Id imagine
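A very rough sketch of Meatfucker's seam idea: take the right edge of one generated clip and the left edge of the next, run img2img over just that join at moderate strength, and splice the result back in. This assumes the diffusers img2img pipeline and the public riffusion-model-v1 weights; note the input-image argument name has changed between diffusers versions (`init_image` vs `image`), so treat this as a sketch rather than a drop-in solution:
```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "riffusion/riffusion-model-v1", torch_dtype=torch.float16
).to("cuda")

def smooth_seam(clip_a: Image.Image, clip_b: Image.Image, prompt: str, edge: int = 256) -> Image.Image:
    # Composite image: right edge of A on the left, left edge of B on the right.
    seam = Image.new("RGB", (2 * edge, clip_a.height))
    seam.paste(clip_a.crop((clip_a.width - edge, 0, clip_a.width, clip_a.height)), (0, 0))
    seam.paste(clip_b.crop((0, 0, edge, clip_b.height)), (edge, 0))
    # Moderate strength: keep most of the existing material, only blend the join.
    return pipe(prompt=prompt, image=seam, strength=0.4, guidance_scale=7.0).images[0]
```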
JL#1976: https://arstechnica.com/information-technology/2022/12/riffusions-ai-generates-music-from-text-using-visual-sonograms/ Ars Technica article IDDQD#9118: yes, indeed Nikuson#6709: for one script to convert audio to image spectrogram XIVV#9579: how do i turn that off Edenoide#0166: Hi! I'm a windows user and I've been unable to make riffusion works on it... But I've created a simple Colab from the RIFFUSION MANIPULATION github for converting audio to spectrogram for model training: https://drive.google.com/file/d/1Mv3FsSiZGWt_qRv1UloG2gIawlalMlej/view?usp=share_link Edenoide#0166: My programming level is zero by the way Nikuson#6709: if anyone has a script without using something like ffmpeg - i will be grateful Nikuson#6709: this is great. but in principle, in this repository, I can only get noise Edenoide#0166: They look good enough but I haven't tried yet to turn them again into sound https://cdn.discordapp.com/attachments/1053081177772261386/1054723695203074108/aicumbia16_0.png Edenoide#0166: I've been working with 5,12 seconds loops Edenoide#0166: in wav format Edenoide#0166: for a reason I don't know when turning it into mp3 the time length changes a bit and creats an extra chunk with just white Edenoide#0166: so I'm only saving the first png of every loop Nikuson#6709: I have them written badly, maybe I mixed up the sizes Nikuson#6709: 512*512 https://cdn.discordapp.com/attachments/1053081177772261386/1054724689064366110/12bb35ac-1c9e-49ce-a576-2746ec474aeb.png
Edenoide#0166: wow Edenoide#0166: maybe it's something wrong with the audio. I've been generating the loops with Audacity (free sound software): Edenoide#0166: Just drag and drop a sound on it. Then select 4 beats for making a loop and delete the rest (In case you are generating 'four-to-the-floor' electronic music). Double click on the timeline and then Effect>Pitch & tempo>Change tempo and in the last parameter Length (seconds) put on the second cell 5.119 Edenoide#0166: Then File>Export>Export as WAV Edenoide#0166: The thing is your final file should be a 16bit .wav with a length of 5.119 seconds Nikuson#6709: I did it, I didn't exactly trim the song to 5 seconds last time https://cdn.discordapp.com/attachments/1053081177772261386/1054726259298537562/7fb7aa90-d9b5-4c8d-b8dd-d94d4f35002b.png Nikuson#6709: after reverse processing https://cdn.discordapp.com/attachments/1053081177772261386/1054726335085416468/LG.mp3 Edenoide#0166: that looks a lot better but your clip seems shorter than 5.119 seconds so there's a silence at the end. Nikuson#6709: I made to your img2audio notepad: https://colab.research.google.com/drive/1-REue4KpDhOMDI-v6gRytMpANoMUqFvi?usp=sharing now all functionality is implemented here Edenoide#0166: great! Nikuson#6709: in the original trimmed clip, there is also silence at the end Nikuson#6709: https://cdn.discordapp.com/attachments/1053081177772261386/1054727175238066196/Lady_Gaga.wav Edenoide#0166: perfect then
Edenoide#0166: How did you avoid the 'clipping' artifacts in the second .wav? Nikuson#6709: don't know, i just cut the audio through this service for even 5 seconds: https://mp3cut.net/ Edenoide#0166: I think riffusion only works with loops of 5,12 seconds (maybe I'm wrong). This means if you are not training your model with 'loopable' cuts of 94bpms (or 47bpm, 188bpm...etc) it would sound like a patchwork but maybe in an interesting way. Nikuson#6709: I don't want to use this for riffusion, I'm training my model Nikuson#6709: riffusion trained a little wrong a_robot_kicker#7014: Oh. Check out the branch I posted a bit earlier. All I did was invert the code that coverts image to wav. You will find that in my audio py. Btw it requires 16 bit 44.1 khz mono wav files. You shouldn't need ffmpeg for that as python has a native wave file reader Nikuson#6709: Can I please have a proverb for this? a_robot_kicker#7014: To convert other formats into that you can use vlc or audacity, although those things are probably running ffmpeg under the hood they at least install it for you a_robot_kicker#7014: I assume you mean link. See audio.py in there, has wave to spectrogram. There are the guts of loading a wave from disk in there in gui.py as well. https://github.com/mklingen/riffusion-inference Nikuson#6709: Thanks, I'll take a look Haycoat#4808: Can we get a huggingface demo for wav2spec2music? nullerror#1387: ooooooooooooooooooooooo the audio needs to be mono? nullerror#1387: hm nullerror#1387: wait also i assume thats a typo but it requires 44000 not 4400 sampling frequency? nullerror#1387: and are we sure its not the industry standard 44.1khz it has to be 44khz?
nullerror#1387: ok checked the code can confirm it is 44.1khz nullerror#1387: scared me for a sec Haycoat#4808: Should I start a list of artists the model currently recognizes? Haycoat#4808: Because there's a few that are very prominent when generating with their name a_robot_kicker#7014: Yeah 44.1 kHz a_robot_kicker#7014: If the author would just tell us the training set, we'd know that. But I can't find the training data.... Haycoat#4808: I know Avicii, Eminem, Post Malone, Frank Sinatra and Billie Ellish are some that generate good results a_robot_kicker#7014: Britney Spears, Charlie Parker, Jimi Hendrix and Aretha Franklin all worked for me. Surprisingly, Michael Jackson did not. Nirvana didn't. a_robot_kicker#7014: Oh I tried some classical. Bach and Chopin sort of work denny#1553: deadmau5 seems to work denny#1553: Using 'deadmau5 melody' gives more than just droning thumps denny#1553: Queen, Weezer, Kurt Cobain, backstreet boys seems to work too LAIONardo#4462: So a part I am confused here (for people who are training the model on a specific sample) are you just training Dreambooth with few pictures of different spectrogram? Cause what I still don't get is doesn't Dreambooth has a lot more picture than just the one you train the model with? Are you also replacing those as well with more spectrograms?? LAIONardo#4462: Like if I am retraining Dreambooth on new spectrogram, both the instance data and the class data needs to be spectrogram only I suppose? but what is the difference there, like how do I choose what's one or another since they are both spectrograms (unlike regular profile case where the instance would be a pic of myself) https://cdn.discordapp.com/attachments/1053081177772261386/1054796709248647198/CleanShot_2022-12-20_at_11.23.202x.png nullerror#1387: i’m training mine on a whole genre via a dreambooth google colab. converted the songs/samples to spectrogram and then uploaded as instance data. i’m not sure what class data is my colab doesn’t have that
Twee#2335: i should make an ai-generated lo-fi hip hop livestream Twee#2335: nobody will tell the difference nullerror#1387: uploaded roughly 1400pics nullerror#1387: haha twee that was what i was gonna go for here in a sec nullerror#1387: endless lofi beats Twee#2335: i mostly wanna do it as a critique Twee#2335: of how formulaic a lot of those beats are nullerror#1387: could be said of any genre Twee#2335: yes but this model is trained on beats lol nullerror#1387: ? Twee#2335: i mean thats what ppl in the share-riffs channel said Twee#2335: although i mostly have an issue with a lot of cookie-cutter lo-fi hip hop thats become too oversaturated lol nullerror#1387: i think it’s trained on a variety of lounge music and various electronic artists as people were mentioning above nullerror#1387: the main model i mean nullerror#1387: ofc a lofi hiphop centered one could also be made
nullerror#1387: yeye Semper#0669: Oh I see! Would you mind share the google collab link you used? Twee#2335: most of my ai ideas are mostly satirical critiques of lack of creativity within culture nullerror#1387: sure thing Semper#0669: I am interested in that question Semper#0669: As well nullerror#1387: it comes with a guide as well one sec Twee#2335: that or just shitposts nullerror#1387: https://bytexd.com/how-to-use-dreambooth-to-fine-tune-stable-diffusion-colab/ nullerror#1387: https://bytexd.com/how-to-use-dreambooth-to-fine-tune-stable-diffusion-colab/ nullerror#1387: apologies my links send twice cuz my discord is glitchy nullerror#1387: basically train it here then transfer to an instance of riffusion Twee#2335: unfortunately im still stuck with automatic1111's webui so i cant make anything fancy other than short clips and convert them to audio Semper#0669: Thank you are the best! I guess the name of the sample here is important as well right? Semper#0669: Like if I want to train it on jazz each sample should have an artist name to then be found in generation right?
pnuts#1013: install it from the official repo and run the web-app locally? at least that way you'll get continuous playback Twee#2335: tried and ran into a lot of headaches lol pnuts#1013: oh 🙂 Twee#2335: also storage is an issue Twee#2335: all these models, man Twee#2335: they eat ur hard drive up Semper#0669: Ahah yeah Twee#2335: and riffusion is like Twee#2335: 15 gb pnuts#1013: you're not wrong Twee#2335: probably the biggest one i have nullerror#1387: semper should all be in the guide but yeah all my images were named the same Twee#2335: ill try again some other time and maybe i can reach out if u know how to do it correctly? nullerror#1387: can’t confirm it works completely yet. did a quick test yesterday and it was getting there. trying a bigger one as of rn. when i load the bigger one i’ll send my finding as to if it worked nullerror#1387: also make sure ur audio is mono/16bit wav/44.1khz
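A small pydub sketch of the preprocessing nullerror describes just above (mono, 16-bit, 44.1 kHz WAV) before converting audio to spectrograms; the file names are placeholders:
```python
from pydub import AudioSegment

def prepare_for_spectrogram(src: str, dst: str) -> None:
    audio = AudioSegment.from_file(src)   # any format ffmpeg can read
    audio = audio.set_channels(1)         # mono
    audio = audio.set_frame_rate(44100)   # 44.1 kHz
    audio = audio.set_sample_width(2)     # 16-bit samples
    audio.export(dst, format="wav")

prepare_for_spectrogram("song.mp3", "song_mono_16bit_44k.wav")
```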
nullerror#1387: then use that github linked somewhere above called like manipulation tools for riffusion or smth to get audiotoimg for training pnuts#1013: sure, I don't recall running into any major issues. I've got it working on 2 machines. I'm sure we can work it out nullerror#1387: twee do u have an amd graphics card pnuts#1013: nvidia here, does AMD even support CUDA? Twee#2335: nah Twee#2335: i have nvidia Twee#2335: more specifically Twee#2335: i use a cloud pc with a nvidia p5000 quadro Twee#2335: for the ai stuff Twee#2335: (and gaming ofc lmao) pnuts#1013: running locally on a 3080 here nullerror#1387: was gonna say if it ain’t working might be amd nullerror#1387: that’s my issue and why i have to use colab and stuff nullerror#1387: where do u rent a cloud computer from twee i’ve been looking for one Twee#2335: i mean this first step isnt really clear. npm install on your C home drive doesnt do anything lol https://cdn.discordapp.com/attachments/1053081177772261386/1054801152530731008/Screenshot_2022-12-20_at_11.42.17_AM.png
pnuts#1013: https://lambdalabs.com/ seems to be one of the cheaper options pnuts#1013: make sure you have node/npm installed, then run `npm install` from inside the folder Twee#2335: which folder though Twee#2335: the node folder? Twee#2335: i wish that was specified tbh pnuts#1013: the git repo you cloned Twee#2335: wait Twee#2335: ahhhhh Twee#2335: yea i think i tried that too Twee#2335: hold on pnuts#1013: so from inside the `riffusion-app` folder Twee#2335: yea screw it ill just try agian lol Twee#2335: oh ok pnuts#1013: in the same folder you also want to create a file called `.env` or `.env.local` and add ```RIFFUSION_FLASK_URL=http://127.0.0.1:3013/run_inference/``` pnuts#1013: if you run the inference server on the same box the above will work
pnuts#1013: if it's running elsewhere, add the correct IP nullerror#1387: thanks punts i’ve been looking at that and vast ai nullerror#1387: i’ll extend my search Twee#2335: i should had know i should had git cloned, im just still getting used to "developer unclear instructions to layman user" syndrome Twee#2335: do u edit the file like a textfile? pnuts#1013: yes Twee#2335: ok bc Twee#2335: i cant seem to be able to edit it pnuts#1013: open it in any old text editor pnuts#1013: if you're struggling create a .txt file, add `RIFFUSION_FLASK_URL=http://127.0.0.1:3013/run_inference/` and then rename it to .env or .env.local Twee#2335: no i gotcha Twee#2335: usually im used to right clicking a file and hit the "edit" button Twee#2335: ok time to donwload the model but where does that get put in pnuts#1013: right-click open with pnuts#1013: you don't actually need the 14GB checkpoint, but if you want it `git lfs clone https://huggingface.co/riffusion/riffusion-model-v1`
Twee#2335: oh lmao Twee#2335: whats the checkpoint download for then pnuts#1013: more fine=tuning perhaps? I'm pretty confident I didn't download it on the 2nd install I did. Twee#2335: also Twee#2335: wasnt this suppose to be in the inference folder Twee#2335: which is a separate download pnuts#1013: also it's 14GB I only have a 10gig card. pnuts#1013: no, that variable is to tell the app where to find the inference server a_robot_kicker#7014: Uh, it downloads the 14gb model on startup if it can. Haycoat#4808: Img2spec2music should be possible to do in a huggingface demo... a_robot_kicker#7014: Also checkpoint size doesn't seem to necessarily be the amount of vram it uses, but I could be mistaken pnuts#1013: any idea how it is split up, as I can run it on a 10gig card. I struggle running various other checkpoints due to their size. a_robot_kicker#7014: I'm not sure but in my case it seems to have downsampled the model to float16 pnuts#1013: too busy messing around with stuff, should really read the code a_robot_kicker#7014: It doesn't seem to use up my 10gb of vram
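On the checkpoint-size vs VRAM question above: loading the weights in half precision roughly halves memory use, which is how a large checkpoint can still run on a 10 GB card. A minimal diffusers sketch using standard diffusers options, nothing Riffusion-specific:
```python
import torch
from diffusers import StableDiffusionPipeline

# float16 weights use roughly half the VRAM of float32.
pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1",
    torch_dtype=torch.float16,
).to("cuda")
```
The smaller ~4 GB .ckpt files mentioned earlier are most likely the same weights with training-only state pruned away, which would explain why they behave the same at inference time.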
Haycoat#4808: Like if you have a spectrogram of your voice, you can convert it to the style of Avicii EDM with img2img implementation Twee#2335: https://cdn.discordapp.com/attachments/1053081177772261386/1054806218834718791/Screenshot_2022-12-20_at_12.03.15_PM.png pnuts#1013: you've launched the inference server too? Twee#2335: well i mean thats why i asked about the inference server too lol Twee#2335: i was wondering if it was a necessary download pnuts#1013: clone this fella <https://github.com/riffusion/riffusion-inference> pnuts#1013: the first part you completed it for the front-end only pnuts#1013: it's the inference-server that will generate your images nullerror#1387: there’s a smaller version of the model available nullerror#1387: as a chkpt file on hugface nullerror#1387: 4gb nullerror#1387: or so Twee#2335: cool https://cdn.discordapp.com/attachments/1053081177772261386/1054806906327285790/Screenshot_2022-12-20_at_12.05.36_PM.png Twee#2335: like i said Twee#2335: headache
pnuts#1013: you ran 3 commands at once Twee#2335: i mean that usually tends to work lol pnuts#1013: ```conda create --name riffusion-inference python=3.9 conda activate riffusion-inference python -m pip install -r requirements.txt``` Twee#2335: one command, then the other, then the other Haycoat#4808: What if we used the img2img part and use it in Riffusion? Then we can attempt a way to do audio2spec and use our own audio as a reference pnuts#1013: I can't remember if I had to also install torch after this. give it a try and see what errors it throws when you try and launch it a_robot_kicker#7014: In my case I had to separately install torch and cuda. Twee#2335: what step did i skipped this time lol https://cdn.discordapp.com/attachments/1053081177772261386/1054808178484838521/Screenshot_2022-12-20_at_12.10.55_PM.png pnuts#1013: `conda install ffmpeg` Twee#2335: still getting the no audio backend error Twee#2335: https://cdn.discordapp.com/attachments/1053081177772261386/1054808623445966918/Screenshot_2022-12-20_at_12.12.49_PM.png Twee#2335: do i have to install torch+cuda seperately or something pnuts#1013: running through the steps again on my sie
a_robot_kicker#7014: In my case that indicated needing to install torch a_robot_kicker#7014: Specifically torch audio and cuda pnuts#1013: `pip install --no-cache-dir --ignore-installed --force-reinstall --no-warn-conflicts torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116` Haycoat#4808: There's also a huggingface space for Riffusion already available! https://huggingface.co/spaces/fffiloni/spectrogram-to-music pnuts#1013: yep, had to re-install this. but it's up and running now pnuts#1013: https://cdn.discordapp.com/attachments/1053081177772261386/1054810986021924914/Capture.PNG Twee#2335: its downloading a bunch of stuff now lol pnuts#1013: will check in again shortly, off to grab some nom noms Twee#2335: laterrrr Haycoat#4808: My favorite artist to use is Avicii, hands down a bop to listen Twee#2335: lol https://cdn.discordapp.com/attachments/1053081177772261386/1054815748343738519/Screenshot_2022-12-20_at_12.41.06_PM.png Twee#2335: ngl i liked my results better with automatic1111's webui settings Twee#2335: the web app is not exactly versatile lol pnuts#1013: it gives you continuous playback, and you can add your seed images. Semper#0669: For the riffusion manipulation tool how do I execute the command on all the files in a folder instead of one by one? the command is:
`python3 file2img.py -i INPUT_AUDIO.wav -o OUTPUT_FOLDER`
But using /foldername/*.wav doesn’t work
Twee#2335: i can add spectrograms into the web app?
pnuts#1013: yes, there's a seed image folder
pnuts#1013: <https://github.com/riffusion/riffusion-inference/tree/main/seed_images>
Twee#2335: once i add a seed image, how do i access it within the web app?
pnuts#1013: https://localhost:3000/?&prompt=brazilian+Forr%C3%B3+dance&seed=51209&denoising=0.75&seedImageId=og_beat
pnuts#1013: replace `og_beat` at the end with your seed
pnuts#1013: I think I also had to edit something else and give it an initial seed
hayk#0058: Thanks @Meatfucker ! I think there are several good ideas for smoothing between clips, some discussion here https://github.com/orgs/riffusion/discussions/18
hayk#0058: I'm going to be adding in a streamlit app (because I know it best over gradio, etc) to riffusion-inference that does some of the common operations for riffusion like converting from audio, generation, interpolation, etc
Nikuson#6709: For those who find it difficult to trim an audio file to 5 seconds every time, I posted a repository with this script for quick and easy trimming
https://github.com/nikuson/trimmed Nikuson#6709: ChatGPT generated LAIONardo#4462: Thank you! nullerror#1387: doesnt the riffusion manipulation thing already do this for spectrograms? nullerror#1387: unless this is meant for smth else Philpax#0001: hey there! apologies if this has already been asked, but is there any information on finetuning sd/riffusion on your own collection of tagged samples? I'd like to do conditional sound effect generation and am wondering if anyone's explored this yet Meatfucker#1381: I've seen some people mention it. It's a standard diffusion model on terms of training though I don't know it's initial tagging Meatfucker#1381: You should be able to convert some things into spectrograms and train it like any other model Philpax#0001: aye, that's what I suspected - just wanted to make sure there wasn't any other kind of magic Nikuson#6709: no, spectrograms are obtained with artifacts Nikuson#6709: https://cdn.discordapp.com/attachments/1053081177772261386/1054837140544036955/12bb35ac-1c9e-49ce-a576-2746ec474aeb.png Haycoat#4808: Could you try getting ChatGPT to make a GitHub for wav2spec? Nikuson#6709: in what sense? diffusion manipulation has all the necessary tools for transformations Haycoat#4808: I mean like being able to upload an audio file and use it for Riffusion Edenoide#0166: Export audios as .WAV 16bit
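Going back to Semper's question further up about running file2img.py over a whole folder: the glob probably fails because the script expects a single input file, so the simplest workaround is a short loop that calls it once per file (flag names as quoted above; adjust if the repo's interface differs):
```python
import subprocess
from pathlib import Path

input_dir = Path("my_audio")        # folder of prepared .wav files (placeholder name)
output_dir = Path("spectrograms")
output_dir.mkdir(exist_ok=True)

for wav in sorted(input_dir.glob("*.wav")):
    subprocess.run(
        ["python3", "file2img.py", "-i", str(wav), "-o", str(output_dir)],
        check=True,
    )
```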
Edenoide#0166: Is this Linux? Did someone been capable of make it run on windows? denny#1553: The inference server runs fine on windows through conda denny#1553: Haven't tried the front end but I suspect it's fine too Edenoide#0166: through conda you say? I'm gonna give it a try then Edenoide#0166: I had as lot of problems with transformerx, pydubs etc Edenoide#0166: I'm gonna try it. Thanks nullerror#1387: nokia on im confused what do you mean spectrogram are “obtained through artifacts”? nullerror#1387: nikuson db0798#7460: The audio to spectrogram scripts from the Riffusion Manipulation Tools Github page work okay for me. Nikuson must be doing something wrong to get artefacts nullerror#1387: can report that it works for me as well. likely mp3 conversion or smth with the settings on their end nullerror#1387: my images don’t come out like that at least i mean Edenoide#0166: I'm trying to run Rifussion inference server on Windows using conda. I've installed ffmpeg via conda install -c conda-forge ffmpeg and soundfile (pip install soundfile) but a lot of errors appear when running the last step: Edenoide#0166: https://cdn.discordapp.com/attachments/1053081177772261386/1054870021639262219/errors.PNG Edenoide#0166: Any idea? I feel like I'm almost there... a_robot_kicker#7014: you do not have cuda and torch installed.
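A quick way to check, inside the active conda environment, whether a CUDA-enabled torch build is actually installed (the usual cause of the error above):
```python
import torch

print(torch.__version__)          # a CUDA build ends in e.g. "+cu116"; "+cpu" means no GPU support
print(torch.cuda.is_available())  # False -> CPU-only torch, or a missing/broken NVIDIA driver
```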
db0798#7460: It looks more like torch is installed but Cuda isn't Meatfucker#1381: pytorch website has a little command configuration thing on it to build you a command Meatfucker#1381: https://pytorch.org/get-started/locally/ db0798#7460: If I remember it right, on my computer I first had to install Cuda from NVIDIA website and after that use the command from the https://pytorch.org/get-started/locally/ page. When I didn't install Cuda from the NVIDIA page first, I got the same error message that Ketedeneden got Edenoide#0166: Baaam! Yeah it seems a problem with cuda https://cdn.discordapp.com/attachments/1053081177772261386/1054874669024546886/cuda.PNG Meatfucker#1381: the program runs in an enviroment separate from your base system enviroment Edenoide#0166: I see! a virtual enviroment? Meatfucker#1381: yep Meatfucker#1381: I think conda in this case, not a venv Edenoide#0166: right! thanks Meatfucker#1381: so enter your conda enviroment, and then put in the command the pytorch website tells you Meatfucker#1381: and it should sort out your deps Meatfucker#1381: if its still funny make a fresh conda enviroment Meatfucker#1381: sometimes old deps can gunk up the works with conda Nikuson#6709: I have observed the dataset of singing and now I will run fine tune stable diffusion
Nikuson#6709: it seems to me that only the sampler is trained in riffusion, which gives such poor quality matteo101man#6162: Anyone know of any local mashup AIs? nullerror#1387: rave dot dj nullerror#1387: been around a while its okay hayk#0058: 🤘 Hey riffusers! 🤘 @here @seth and I have been absolutely blown away by the response to our little hobby project. We had no idea if this approach would even work, and to see musicians and tinkerers building on top of it and making fun sounds is a dream. We’re still trying to keep up with everything, but a few fun notes: + riffusion.com has been visited over a million times in the past few days and generated about a year of unique audio. + Our GPUs still can’t always keep up with requests, but they are getting close! + We will soon add a streamlit app that demos some of the common use cases like interpolation, img2img, and some audio transformations. + We are beginning to collaborate with LAION, the people behind the dataset that trained stable diffusion, to see how we can scale up. + We’ll also make GitHub issues to track a bunch of the improvement ideas we have.
+ Attached is an awesome sample created by producer Jamison Baken incorporating outputs from Riffusion. If you’re a software eng or musician interested in being more directly involved, feel free to send us a DM. And everyone, thanks for being here! https://cdn.discordapp.com/attachments/1053081177772261386/1054910368960499742/mix.mp3 Edenoide#0166: Great work!!!! COMEHU#2094: Good luck, im sure this is the start of something bigger Nikuson#6709: wrote in DM. cravinadventure#7884: Awesome post @hayk @seth ! Excited to contribute to the improvement of Riffusion and really see it take off :redrocket: Crazy that the site has been visited over 1M times and has generated about 1 yr of audio. :partyBlob: **FUN Discord Community Idea:** :partyBlob: - Host competitions where the community will be able to submit and *vote, by liking a post*, on the top 10 samples weekly. - Additionally, the prompt should be listed with each submission. **Competitions & Leaderboards info:**
- set up a new “***Competitions Channel***” which includes:
    - “***Weekly Top 10 Leaderboard***” - Top liked posts in 1 week. Ideally you should keep track of the weekly leaderboards so anyone could go back in time to look and see who won on any given week, in the past.
    - “***All-Time Top 10 Leaderboard***” - Top liked posts of all-time, from every weekly competition combined.

I hope something like this will be implemented because I think it would be FUN for everyone & keep users active. 🙂
Philpax#0001: is there a standalone application that can convert a riffusion spectrogram to audio? preferably command-line
nullerror#1387: yes
nullerror#1387: riffusion manipulator i think its called? ill grab the link
nullerror#1387: https://github.com/chavinlo/riffusion-manipulation
nullerror#1387: manipulation
nullerror#1387: almost had it
cravinadventure#7884: wait so does that software allow you to basically feed in images as input to create audio?