Ngodi
mrngodi
AI & ML interests
RNN, CNN, LLM, LVM
Recent Activity
replied to JustinLin610's post about 1 month ago
Finally, Qwen1.5-110B is out! With weights and demo!
Blog: https://qwenlm.github.io/blog/qwen1.5-110b/
Demo: https://huggingface.co/spaces/Qwen/Qwen1.5-110B-Chat-demo
Base: https://huggingface.co/Qwen/Qwen1.5-110B
Chat: https://huggingface.co/Qwen/Qwen1.5-110B-Chat
This model has a few notable features:
* GQA (grouped-query attention)
* 32K token context length
* Multilingual support
We feel good about its performance on benchmarks for both base and chat models, but we still need your testing and feedback to understand its capabilities and limitations!
Additionally, the base model has not learned the ChatML tokens, so be careful if you use the ChatML format with it!
Enjoy and stay tuned for Qwen2!
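For anyone trying the chat checkpoint, here is a minimal generation sketch using the Hugging Face transformers library; the dtype/device settings are assumptions, and a 110B model needs substantial (multi-GPU) memory. The chat template applies the ChatML formatting, which per the note above is only meaningful for the chat model, not the base model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-110B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (via accelerate) shards the model across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Briefly explain grouped-query attention."},
]
# apply_chat_template adds the ChatML markers the chat model was trained with.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
response = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(response)
```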
Organizations
None yet
mrngodi's activity
replied to JustinLin610's post about 1 month ago
reacted to merve's post about 1 month ago
A complete RAG pipeline includes a reranker, which ranks the retrieved documents to surface the most relevant one.
The same goes for multimodal RAG: multimodal rerankers can be integrated into multimodal RAG pipelines!
Learn how to build a complete multimodal RAG pipeline with vidore/colqwen2-v1.0 as the retriever, lightonai/MonoQwen2-VL-v0.1 as the reranker, and Qwen/Qwen2-VL-7B-Instruct as the VLM in this notebook, which runs on a GPU as small as an L4: https://huggingface.co/learn/cookbook/multimodal_rag_using_document_retrieval_and_reranker_and_vlms
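As a taste of the first stage of that pipeline, here is a minimal retrieval-scoring sketch, assuming the colpali-engine API shown on the vidore/colqwen2-v1.0 model card; the reranking (MonoQwen2-VL) and answer-generation (Qwen2-VL) steps are covered in the linked notebook, and the blank images below are only stand-ins for real rendered document pages.

```python
import torch
from PIL import Image
from colpali_engine.models import ColQwen2, ColQwen2Processor

# Assumption: colpali-engine is installed and a CUDA GPU is available.
model = ColQwen2.from_pretrained(
    "vidore/colqwen2-v1.0",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",
).eval()
processor = ColQwen2Processor.from_pretrained("vidore/colqwen2-v1.0")

# Stand-in page images; in a real pipeline these are rendered document pages.
pages = [Image.new("RGB", (448, 448), "white") for _ in range(2)]
queries = ["Which page shows the quarterly revenue chart?"]

batch_images = processor.process_images(pages).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

# Late-interaction (MaxSim) similarity: one score per (query, page) pair.
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
print(scores)  # the top-scoring pages would then be passed to the reranker
```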
upvoted a collection 3 months ago
upvoted an article 6 months ago
Article
Welcome FalconMamba: The first strong attention-free 7B model