Commit aa32f88 (1 parent: 335a244), committed by tcy6

Update README.md

Files changed (1): README.md (+14, -4)
README.md CHANGED
@@ -12,10 +12,20 @@ tags:
 pipeline_tag: feature-extraction
 ---
 # VisRAG: Vision-based Retrieval-augmented Generation on Multi-modality Documents
-[![Hugging Face](https://img.shields.io/badge/VisRAG_Ret-fcd022?style=for-the-badge&logo=huggingface&logoColor=000)](https://huggingface.co/openbmb/VisRAG-Ret)
-[![Hugging Face](https://img.shields.io/badge/VisRAG_Collection-fcd022?style=for-the-badge&logo=huggingface&logoColor=000)](https://huggingface.co/collections/openbmb/visrag-6717bbfb471bb018a49f1c69)
-[![arXiv](https://img.shields.io/badge/arXiv-2410.10594-ff0000.svg?style=for-the-badge)](https://arxiv.org/abs/2410.10594)
-[![Github](https://img.shields.io/badge/VisRAG-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white)](https://github.com/OpenBMB/VisRAG)
+<div style="display: flex; align-items: center;">
+<a href="https://huggingface.co/openbmb/VisRAG-Ret" style="margin-right: 10px;">
+<img src="https://img.shields.io/badge/VisRAG_Ret-fcd022?style=for-the-badge&logo=huggingface&logoColor=000" alt="VisRAG Ret">
+</a>
+<a href="https://huggingface.co/collections/openbmb/visrag-6717bbfb471bb018a49f1c69" style="margin-right: 10px;">
+<img src="https://img.shields.io/badge/VisRAG_Collection-fcd022?style=for-the-badge&logo=huggingface&logoColor=000" alt="VisRAG Collection">
+</a>
+<a href="https://arxiv.org/abs/2410.10594" style="margin-right: 10px;">
+<img src="https://img.shields.io/badge/arXiv-2410.10594-ff0000.svg?style=for-the-badge" alt="arXiv">
+</a>
+<a href="https://github.com/openbmb/VisRAG" style="margin-right: 10px;">
+<img src="https://img.shields.io/badge/VisRAG-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white" alt="GitHub">
+</a>
+</div>
 **VisRAG** is a novel vision-language model (VLM)-based RAG pipeline. In this pipeline, instead of first parsing the document to obtain text, the document is embedded directly as an image using a VLM and then retrieved to enhance the generation of a VLM. Compared to traditional text-based RAG, **VisRAG** maximizes the retention and utilization of the information in the original documents, eliminating the information loss introduced during the parsing process.
 <p align="center"><img width=800 src="https://github.com/openbmb/VisRAG/blob/master/assets/main_figure.png?raw=true"/></p>