---
license: agpl-3.0
---

This repo catalogs my weights for use with my VALL-E implementation as I try to iron out the kinks.

The models are currently in a semi-usable state, and I'm releasing them now in the hope that they also help jumpstart anyone else who wants to use them.

To reiterate, this is by no means complete. I am not passing this off as competitive.

## Models

This repo contains the following configurations under `./models/`:

  • `config.retnet.yaml` / `ar+nar-retnet-8`: The previously released weights.

    • This configuration utilizes a RetNet (retention-based "transformer") as the underlying architecture, chosen off the back of a number of misleading comparisons, for better or for worse.
      • Prompt and response embeddings are summed (each further RVQ level gets the previous RVQ levels' embeddings factored in; see the sketch after this list).
      • Tokenizer is a homebrewed "naive" implementation.
    • This model received the most training time, split between my 4070Ti, 7900XTX, and a few rental rigs for further progress, entirely at bfloat16 with prodigyopt (and a few optimizer restarts).
    • The later part of training shuffled between speakers, rather than sampling from the global pool of utterances, to better focus on zero-shot performance. Due to this, I feel it achieved decent zero-shot performance.
    • However, because the dataset was aggressively trimmed to under 12 seconds for memory savings during training, it struggles to inference non-short utterances. Additional training may fix this; the following models seemed to adapt well to longer utterances.
      • From the `ar+nar-llama-8` experiment, I believe this can be "fixed" with additional training on the currently processed dataset.
    • Prior testing showed that longer prompt durations result in better utterances.
    • Can benefit from additional training, but I recall the average loss being around 1.9 to 2.1.
      • However, due to regressions (or bias from working under Llama), I don't think I can optimally train with a RetNet again (both in terms of VRAM consumption and throughput).
    • Currently does not seem to work anymore due to regressions in the code.
  • `config.llama.yaml` / `ar+nar-llama-8`: The most recent-ishly trained weights after learning from my mistakes.

    • This configuration utilizes Llama's attention-based transformer as the underlying architecture, making use of creature comforts like RoPE, GQA, and memory-efficient attention (trained under xformers, shouldn't really affect things).
      • Prompt and response embeddings ARE summed (for half of its training the model did not sum them, but enabling summing seemed to make the most sense, and doing so didn't affect anything).
      • Utilizes a HF tokenizer for "optimal" vocab.
      • The current RVQ level is included as a token as well to help guide NAR tasks better.
    • This model received a few days of training on my 4xV100s, stepping up the duration window to try and get the model to better inference longer utterances.
      • Some sessions ended up training on the current duration window for a few epochs, but I don't know how much this affected things.
    • This model then received additional post-training to address a variety of issues:
      • Training on shuffled batches of durations to have it better generalize on a variety of durations.
      • Non-naive prompt sampling of similar utterances to try and give better prompt adherence.
      • Additional languages (Japanese, French, and German) and an additional task: Speech-to-Text (phonemes)
      • etc.
    • However, it seems to only do well with long utterances, and fumbles on short ones. I believe further training with a variety of durations should allow the AR to handle both.
      • I believe the "slowly stepping up the context length" only works for text, and not audio.
      • Addendum: Additional brief training for a variety of duration lengths seemed to have mostly fixed this issue.
      • Addendum addendum: Properly creating the position IDs per-segment rather than the whole sequence, also helps a lot.
    • Zero-shot performance leaves a bit to be desired, as it did not receive the special training prioritizing shuffling between speakers rather than the global pool of utterances.
      • Addendum: Additional brief training for sampling based on speaker per "epoch" (per dataloader, not dataset) seemed to slightly improve it.
      • Addendum addendum: non-naive prompt sampling with a similar utterance to the output helps a non-negligible amount.
    • Testing showed that, despite also stepping up the prompt duration, it really likes three-second prompts, though longer input prompts do actually help.
      • Giving a wide coverage of phonemes to directly reference goes a long way.
    • Definitely needs additional training, but the next way to go is unknown.
      • Naturally, training it on a "next RVQ level is half as likely" distribution introduces some crust, as the later RVQ levels are less accurate, introducing noise and artifacts (see the sampling sketch after this list).
      • Additional training on the AR sees huge diminishing returns, so I don't know if it's worth continuing; I do see slight improvements over additional epochs with different training/sampling paradigms.
    • Seems to be a decent foundation for "distillation", at the very least for LoRA training.
      • Addendum: it seems to serve fine for patch-training a few extra tweaks, such as non-unified position IDs, split classifier heads, and para-parallel decoding for the AR.
  • `config.llama-tts+stt.yaml` / `ar+nar-tts+stt-llama-8`: The above, but partially trained for STT.

    • These weights use the above weights but with additional training for the default `tts` task and a new `stt` task (at a 3:1 ratio).
    • Initially was trained with `duration_range: [3.0, 60.0]` and `sample_shuffle: True` for a few hours, but then pivoted back to my standard `duration_range: [3.0, 12.0]` and `sample_shuffle: False`.
      • The former training will be needed again to "undo" any issues with durations, as that problem has usually come up before.
    • The `stt` task simply takes a piece of audio and outputs a transcription as IPA phonemes (which the model is already trained against for its text inputs).
      • Can be done with `--task=stt` and an empty (`""`) text input through the CLI interface, or through the Speech-to-Text tab in the web UI.
    • This mainly serves as a stepping stone before pivoting towards SpeechX tasks.
      • I first need a good mechanism to make sure I can extend existing weights with additional tasks, but with a simple enough task.
      • This also seems to help bolster the initial TTS task by giving the model a better internal state (or something to that effect).
    • STT is not perfect against voices that aren't close to a normal speaking voice (as per the dataset), unlike TTS, where "sounds close enough" leaves room for error.
    • Addendum: this replaced `ar+nar-llama-8` as the de facto model (taking its name), so the above does apply.
  • `config.llama[layerskip].yaml` / `ar+nar-layerskip-llama-8`: The above, but with very brief training for LayerSkip:

    • Post-trained on a small English subset of Emilia and a small private corpus, and Japanese+French+German from Emilia.
    • Using shuffled batches (where each batch has the same durations) and a modified `rvq_levels_p` to help the NAR.
    • This model received LayerSkip-aware training, with layer dropout and early-exit loss to help try and bolster the model and enable self-speculation sampling.
    • I need to do heavy evaluation against the base model to ensure output quality does not drop before considering replacing the base model with this.
    • Goal is to utilize self-speculation sampling to enable speedups when possible.
      • The current implementation will early-exit if the entropy/varentropy of the logits is low enough (see the sketch after this list).
    • Training is a pain.
      • LayerSkip-aware training does not like to train under ROCm.
      • Training under float16+AMP with loss scaling will fry the model with a large enough de facto batch size (>512 samples/update step) and/or too low of a loss scale (<=8K).
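
Since a few of the notes above reference summing audio embeddings across RVQ levels (and injecting the current RVQ level to guide NAR tasks), here is a minimal sketch of the idea; names, shapes, and the way the level is injected are assumptions for illustration, not this repo's actual code:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of summed audio embeddings across RVQ levels.
# Each RVQ level gets its own embedding table; the input embedding for a
# frame is the sum of the embeddings of all levels available so far.
N_LEVELS = 8   # RVQ levels, as in the ar+nar-*-8 configurations
VOCAB = 1024   # assumed codebook size per level
DIM = 1024     # assumed model width

embs = nn.ModuleList(nn.Embedding(VOCAB, DIM) for _ in range(N_LEVELS))
level_emb = nn.Embedding(N_LEVELS, DIM)  # one way to inject the current RVQ level

def sum_audio_embedding(codes: torch.Tensor, target_level: int) -> torch.Tensor:
    """codes: [levels_so_far, seq_len] -> [seq_len, DIM]"""
    summed = sum(embs[l](codes[l]) for l in range(codes.shape[0]))
    return summed + level_emb.weight[target_level]  # broadcasts over seq_len

# Example: embedding the first three RVQ levels of a 75-frame clip
# while targeting level 3.
x = sum_audio_embedding(torch.randint(0, VOCAB, (3, 75)), target_level=3)
```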
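
Likewise, the "next RVQ level is half as likely" training distribution can be sketched as follows (my guess at its shape; the actual `rvq_levels_p` settings live in the config YAMLs):

```python
import random

# Sketch of a "next RVQ level is half as likely" training distribution:
# level 0 has weight 1, level 1 has weight 1/2, level 2 has 1/4, and so on.
# This focuses training on the acoustically dominant early levels, at the
# cost of noisier, less-trained later levels.
N_LEVELS = 8
weights = [0.5 ** level for level in range(N_LEVELS)]

def sample_rvq_level() -> int:
    """Pick which RVQ level a training step will target."""
    return random.choices(range(N_LEVELS), weights=weights, k=1)[0]
```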
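
And for the LayerSkip weights, the early-exit condition mentioned above can be sketched like this (the thresholds are placeholders, not tuned values from the implementation):

```python
import torch

def should_early_exit(logits: torch.Tensor,
                      entropy_threshold: float = 0.1,
                      varentropy_threshold: float = 0.1) -> bool:
    """Self-speculation heuristic: exit at an intermediate layer when the
    model is already confident, i.e. both the entropy and the varentropy
    (the variance of per-token surprisal) of the logits are low enough."""
    log_probs = torch.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    surprisal = -log_probs                                  # -log p, per class
    entropy = (probs * surprisal).sum(dim=-1)               # H = E[-log p]
    varentropy = (probs * (surprisal - entropy.unsqueeze(-1)) ** 2).sum(dim=-1)
    return bool((entropy < entropy_threshold).all() and
                (varentropy < varentropy_threshold).all())
```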

Some additional configurations have been explored, but the experiments have not been fruitful:

  • Exotic wrappers like BitNet seemed to yield little gains in inferencing, somehow. The memory savings are pretty much unnecessary, as the models are already manageable at ~200M parameters.
  • Mamba / Mamba2-based models have shown that it's really hard to have an AR+NAR model. I really do not want to bother throwing compute at another meme arch where I can't easily make use of all the other tech available.
  • a pure NAR (plus length predictor) cannot be realized with the current architecture.
    • Transformer-based (or at least attention-based) models can't seem to handle generating the initial (RVQ level 0) tokens from "thin air" (be it from special tokens or from repeating the input prompt).
    • A diffusion-based model will definitely work, as those are good at generating from noise.
    • The performance gains seem nice as the biggest "bottleneck" is the initial (RVQ level 0) AR pass, but it seems to require a lot of effort.
  • a model using Descript-Audio-Codec:
    • the 24kHz model will not converge no matter what. Naively using just the first 8 RVQ levels might not be good enough either, as there are too many codebooks for viable use.
    • the 44kHz model was erroneously assumed to be an even 44kHz, when in reality it's 44.1kHz. All of my audio has to be requantized, as there's some stuttering in it.
      • Because of this, training losses are high and it's having a hard time trying to converge.
    • It has sub-serviceable output for the first 4 RVQ levels, but it's massive cope to try and use it as a model.
    • I believe there's hope to use it when I requantize my audio properly.
      • Addendum: even after properly processing my audio, the loss is actually worse than before. I imagine DAC just cannot be used as an intermediary for an LM.
  • a model with a causal size >1 (sampling more than one token for the AR):
    • neither re-using an existing model nor training from scratch yielded fruitful results.
    • there's an inherent periodic stutter that doesn't seem able to be trained out; fixing it might require exotic sampling methods.
    • unfortunately it requires:
      • either something similar to Medusa heads, where there are additional parameters to perform speculative sampling,
      • or a solution similar to what VALL-E 2 uses with group token embeddings or whatever, which would harm the NAR tasks in an AR+NAR model.
    • I just don't understand where the issue lies, since parallel decoding does work, as evidenced by the NAR.

Some current "architectural features" are in use, but their effects need to be experimented with further:

  • `split_classifier_heads` is still a mystery as to whether it's truly helpful or not (each RVQ level gets its own output head).
  • `audio_embeddings_sum` is also a mystery: it's unclear whether each later RVQ level should "see" the past levels through summed embeddings, or whether not doing so is preferable.
  • Disabling `unified_position_ids` seems to help quality more often than not, but I'm still unsure if it's beneficial in practice (see the sketch below).
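
For reference, a minimal sketch of the first and last of these features; shapes, names, and segment choices are assumptions for illustration, not the repo's actual code:

```python
import torch
import torch.nn as nn

def segmented_position_ids(segment_lengths: list[int]) -> torch.Tensor:
    """Non-unified position IDs: restart the position counter at each
    segment boundary (e.g. text prompt / audio prompt / response) instead
    of numbering the whole concatenated sequence 0..N-1."""
    return torch.cat([torch.arange(n) for n in segment_lengths])

# e.g. text (10 tokens) + audio prompt (75 frames) + response (150 frames):
pos_ids = segmented_position_ids([10, 75, 150])  # [0..9, 0..74, 0..149]

# Split classifier heads: one output projection per RVQ level instead of a
# single shared head (sizes are placeholders).
N_LEVELS, DIM, VOCAB = 8, 1024, 1024
heads = nn.ModuleList(nn.Linear(DIM, VOCAB) for _ in range(N_LEVELS))

def classify(hidden: torch.Tensor, level: int) -> torch.Tensor:
    """hidden: [seq_len, DIM] -> [seq_len, VOCAB] logits for one RVQ level."""
    return heads[level](hidden)
```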

## LoRAs

This repo also contains some LoRAs to serve as a reference under `./loras/`.

Using a LoRA is the same as using the base model, except you're required to have the base model already (obviously). Just load from the LoRA's config YAML instead.

The only caveat is that my original dataset does contain (most of) these samples already, but given the sheer size of it, they're probably underutilized.

  • However, the base model's output for these speakers is already almost adequate, just not enough to be satisfactory.
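
As for why the base model is required: a LoRA is only a low-rank delta over the base weights. A generic sketch of standard LoRA composition (not this repo's actual loading code):

```python
import torch

def merge_lora(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
               alpha: float, rank: int) -> torch.Tensor:
    """Standard LoRA merge: W' = W + (alpha / rank) * (B @ A), where
    A is [rank, in_features] and B is [out_features, rank]. Without the
    base weight W, the LoRA delta has nothing to apply to."""
    return W + (alpha / rank) * (B @ A)
```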