gorkemgoknar committed · Commit e0a3048
Parent(s): 910f4c3
Update app.py
app.py CHANGED
@@ -763,11 +763,11 @@ with gr.Blocks(title=title) as demo:
 
     gr.Markdown(
     """
-    This Space demonstrates how to speak to a chatbot, based solely on open
-    It relies on
+    This Space demonstrates how to speak to a chatbot, based solely on open accessible models.
+    It relies on following models :
     Speech to Text : [Whisper-large-v2](https://sanchit-gandhi-whisper-large-v2.hf.space/) as an ASR model, to transcribe recorded audio to text. It is called through a [gradio client](https://www.gradio.app/docs/client).
-    LLM
-
+    LLM Mistral : [Mistral-7b-instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as the chat model, GGUF Q5_K_M quantized version used locally via llama_cpp[huggingface_hub](TheBloke/Mistral-7B-Instruct-v0.1-GGUF).
+    LLM Zephyr : [Zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) as the chat model. GGUF Q5_K_M quantized version used locally via llama_cpp from [huggingface.co/TheBloke](https://huggingface.co/TheBloke/zephyr-7B-alpha-GGUF).
     Text to Speech : [Coqui's XTTS](https://huggingface.co/spaces/coqui/xtts) as a Multilingual TTS model, to generate the chatbot answers. This time, the model is hosted locally.
 
     Note:
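The added description states that the Mistral and Zephyr GGUF Q5_K_M weights are pulled from the Hub and run locally through llama_cpp. A minimal sketch of that loading path is below; the exact GGUF filename, context size, and prompt are assumptions for illustration, not taken from this commit's app.py.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized GGUF weights from TheBloke's Hub repo.
# The filename here is an assumption; check the repo's file listing.
model_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    filename="mistral-7b-instruct-v0.1.Q5_K_M.gguf",
)

# Load the model locally with the llama.cpp Python bindings.
llm = Llama(model_path=model_path, n_ctx=2048)

# Mistral-Instruct expects the [INST] ... [/INST] prompt format.
out = llm("<s>[INST] Say hello in one sentence. [/INST]", max_tokens=64)
print(out["choices"][0]["text"])
```

The same pattern would apply to the Zephyr GGUF repo by swapping repo_id and filename.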