I heard someone saying voice assistants are the future, and someone else that MCP will rule the AI world... So I decided to combine both!
Meet TySVA (TypeScript Voice Assistant, https://github.com/AstraBert/TySVA), your (speaking) AI companion for everyday TypeScript programming tasks!
TySVA is a skilled TypeScript expert and, to provide accurate and up-to-date responses, she leverages the following workflow:
- If you talk to her, she converts the audio into a textual prompt and uses it as a starting point to answer your questions (if you send a text message, she'll use that directly)
- She can solve your questions by (deep-)searching the web and/or by retrieving relevant information from a vector database containing TypeScript documentation. If the answer is simple, she can also reply directly (no tools needed!)
- To ease her life, TySVA has all the tools she needs available through the Model Context Protocol (MCP)
- Once she's done, she returns her answer to you, along with a voice summary of what she did and what solution she found (see the voice-loop sketch right after this list)
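Under the hood, that voice loop is just speech-to-text in, agent in the middle, text-to-speech out. Here is a minimal sketch using the ElevenLabs Python SDK; the model IDs, the voice ID, and the answer_question() helper are illustrative assumptions, not TySVA's actual code:

```python
# Minimal voice-loop sketch (assumed ElevenLabs SDK usage, not TySVA's code).
from elevenlabs.client import ElevenLabs

client = ElevenLabs(api_key="YOUR_ELEVENLABS_API_KEY")

def answer_question(prompt: str) -> str:
    """Hypothetical stand-in for the agentic workflow described above."""
    return f"(agent answer for: {prompt})"

def voice_turn(audio_path: str) -> bytes:
    # 1. Transcribe the user's audio into a textual prompt.
    with open(audio_path, "rb") as f:
        transcript = client.speech_to_text.convert(file=f, model_id="scribe_v1")
    # 2. Answer the question (web search / docs retrieval / direct reply).
    answer = answer_question(transcript.text)
    # 3. Produce the spoken summary of what was done and found.
    audio_chunks = client.text_to_speech.convert(
        voice_id="EXAVITQu4vr4xnSDxMaL",  # example premade voice
        text=answer,
        model_id="eleven_turbo_v2",
    )
    return b"".join(audio_chunks)  # convert() yields audio byte chunks
```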
But how does she do that? What are her components?
- Qdrant + HuggingFace give her the documentation knowledge, providing the vector database and the embeddings
- Linkup provides her with up-to-date, grounded answers, connecting her to the web
- LlamaIndex makes up her brain, with the whole agentic architecture (see the wiring sketch below)
- ElevenLabs gives her ears and mouth, transcribing and producing the voice inputs and outputs
- Groq powers her thinking, serving as the LLM provider behind TySVA
- Gradio + FastAPI make up her face and fibers, providing a seamless backend-to-frontend integration (sketched right after this list)
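To make the component list concrete, here is a rough sketch of how the brain could be wired together with LlamaIndex; the collection name, model names, and endpoints are assumptions for illustration, not TySVA's actual configuration:

```python
# Rough wiring sketch of the agentic brain (assumed names and endpoints).
from llama_index.core import VectorStoreIndex
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.groq import Groq
from llama_index.vector_stores.qdrant import QdrantVectorStore
from qdrant_client import QdrantClient
from linkup import LinkupClient

# Qdrant + HuggingFace: the documentation knowledge.
vector_store = QdrantVectorStore(
    client=QdrantClient(url="http://localhost:6333"),
    collection_name="typescript-docs",  # assumed collection name
)
index = VectorStoreIndex.from_vector_store(
    vector_store,
    embed_model=HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"),
)

def search_docs(query: str) -> str:
    """Retrieve relevant chunks from the TypeScript documentation."""
    nodes = index.as_retriever(similarity_top_k=5).retrieve(query)
    return "\n\n".join(n.node.get_content() for n in nodes)

def search_web(query: str) -> str:
    """Get an up-to-date, grounded answer from the web via Linkup."""
    result = LinkupClient().search(
        query=query, depth="deep", output_type="sourcedAnswer"
    )
    return result.answer

# LlamaIndex + Groq: the agent decides per question whether to call a
# tool or just answer directly.
agent = FunctionAgent(
    tools=[search_docs, search_web],
    llm=Groq(model="llama-3.3-70b-versatile"),
    system_prompt="You are TySVA, a TypeScript expert. Use tools only when needed.",
)
# e.g. answer = await agent.run("How do I narrow a union type?")

# In TySVA the tools are exposed through MCP rather than registered
# directly; with the llama-index-tools-mcp package that looks roughly like:
#   from llama_index.tools.mcp import BasicMCPClient, McpToolSpec
#   tools = await McpToolSpec(
#       client=BasicMCPClient("http://localhost:8000/sse")
#   ).to_tool_list_async()
```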
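And the face and fibers: a Gradio UI mounted on a FastAPI app. A minimal sketch, where the chat handler is a hypothetical stand-in for the agent above:

```python
# Minimal Gradio-on-FastAPI sketch (the handler is a hypothetical stand-in).
import gradio as gr
from fastapi import FastAPI

def chat(message: str, history: list) -> str:
    """Hypothetical bridge to the agent sketched above."""
    return f"(TySVA's answer to: {message})"

app = FastAPI()
demo = gr.ChatInterface(fn=chat, title="TySVA")
# Serve the Gradio frontend from the FastAPI backend at the root path.
app = gr.mount_gradio_app(app, demo, path="/")
# Run with: uvicorn module_name:app
```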
If you're now curious to try her, you can easily spin her up locally (Docker included!) from the GitHub repo: https://github.com/AstraBert/TySVA
And feel free to leave any feedback!