---
language:
- en
license: mit
library_name: transformers
datasets:
- fnlp/AnyInstruct
- fixie-ai/boolq-audio
- fixie-ai/soda-audio
- speechcolab/gigaspeech
---

# Model Card for Ultravox

Ultravox is a multimodal Speech LLM built around a pretrained [Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B) and [Whisper-small](https://huggingface.co/openai/whisper-small) backbone.

See https://ultravox.ai for the GitHub repo and more information.

## Model Details

### Model Description

Ultravox is a multimodal model that can consume both speech and text as input (e.g., a text system prompt and a voice user message).

The input to the model is given as a text prompt with a special `<|audio|>` pseudo-token, and the model processor will replace this magic token with embeddings derived from the input audio.

Using the merged embeddings as input, the model will then generate output text as usual.
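
For illustration, here is a minimal usage sketch via the `transformers` pipeline API. The model ID, the `audio`/`turns`/`sampling_rate` input keys, and the 16 kHz resampling are assumptions about how the custom Ultravox pipeline is exposed; check the repo for the exact interface.

```python
# Minimal usage sketch (illustrative; the model ID and input keys are assumptions,
# see the Ultravox repo for the exact interface).
import transformers
import librosa

pipe = transformers.pipeline(model="fixie-ai/ultravox-v0_2", trust_remote_code=True)

# Load and resample the user audio to 16 kHz, the rate expected by the Whisper encoder.
audio, sr = librosa.load("<path-to-input-audio>", sr=16000)

# Chat history; the processor injects the audio where the <|audio|> pseudo-token appears.
turns = [
    {"role": "system", "content": "You are a friendly and helpful voice assistant."},
]

print(pipe({"audio": audio, "turns": turns, "sampling_rate": sr}, max_new_tokens=64))
```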

In a future revision of Ultravox, we plan to expand the token vocabulary to support generation of semantic and acoustic audio tokens, which can then be fed to a vocoder to produce voice output.

No preference tuning has been applied to this revision of the model.

- **Developed by:** Fixie.ai
- **License:** MIT

### Model Sources

- **Repository:** https://ultravox.ai
- **Demo:** See repo

## Uses

Intended uses include voice agents, speech-to-speech translation, and analysis of spoken audio.

## Training Details

The model uses a pre-trained [Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B) backbone as well as the encoder part of [Whisper-small](https://huggingface.co/openai/whisper-small).

Training proceeds in two stages: in stage 1, only the multimodal projector is trained while both backbones are kept frozen; in stage 2, Llama3 is also fine-tuned using LoRA.
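
As a rough sketch of this two-stage schedule (not the actual training code; the module names `projector` and `llm` and the LoRA target modules are illustrative placeholders), stage 1 freezes both backbones and trains only the projector, and stage 2 attaches rank-64 LoRA adapters to the LLM:

```python
# Illustrative two-stage schedule; the real implementation lives in the Ultravox repo
# (ultravox/training/train.py). Module names and LoRA targets here are placeholders.
from peft import LoraConfig, get_peft_model


def configure_stage1(model):
    # Stage 1: freeze the Whisper encoder and Llama3; train only the multimodal projector.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.projector.parameters():
        p.requires_grad = True
    return model


def configure_stage2(model):
    # Stage 2: keep training the projector and additionally fine-tune Llama3 via LoRA (rank 64).
    lora_cfg = LoraConfig(r=64, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
    model.llm = get_peft_model(model.llm, lora_cfg)
    return model
```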

### Training Data

The training dataset is a mix of ASR data (GigaSpeech), instruction-following and QA data (AnyInstruct and an extended version of BoolQ), and conversational data (SODA, with alternative generations for the last two turns).

### Training Procedure

Supervised speech-to-text fine-tuning. For more information, see the [training code in the Ultravox repo](https://github.com/fixie-ai/ultravox/blob/main/ultravox/training/train.py).

#### Training Hyperparameters

- **Training regime:** BF16 mixed precision training
- **Hardware used:** 8x A100-40GB GPUs
- **LLM LoRA rank:** 64

#### Speeds, Sizes, Times

When invoked with audio content on an A100-40GB GPU with the Llama 3 8B backbone, the current version of Ultravox has a time-to-first-token (TTFT) of approximately 200 ms and generates roughly 50-100 tokens per second.

Check out the audio tab on [TheFastest.ai](https://thefastest.ai/?m=audio) for daily benchmarks and a comparison with other existing models.

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary