|
--- |
|
library_name: transformers |
|
tags: |
|
- colpali |
|
license: apache-2.0 |
|
datasets: |
|
- vidore/colpali_train_set |
|
language: |
|
- en |
|
base_model: |
|
- vidore/colqwen2-base |
|
pipeline_tag: visual-document-retrieval |
|
--- |
|
|
|
> [!WARNING] |
|
> EXPERIMENTAL: Wait for https://github.com/huggingface/transformers/pull/35778 to be merged before using! |
|
|
|
> [!IMPORTANT] |
|
> This version of ColQwen2 should be loaded with the `transformers` 🤗 release, not with `colpali-engine`.
|
> It was converted using the `convert_colqwen2_weights_to_hf.py` script |
|
> from the [`vidore/colqwen2-v1.0-merged`](https://huggingface.co/vidore/colqwen2-v1.0-merged) checkpoint. |
|
|
|
# ColQwen2: Visual Retriever based on Qwen2-VL-2B-Instruct with ColBERT strategy |
|
|
|
ColQwen2 is a visual retriever built on a novel model architecture and training strategy that leverages Vision Language Models (VLMs) to efficiently index documents from their visual features.
|
It is a [Qwen2-VL-2B](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
|
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).
|
|
|
<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p> |
|
|
|
The HuggingFace `transformers` 🤗 implementation was contributed by Tony Wu ([@tonywu71](https://huggingface.co/tonywu71)) and Yoni Gozlan ([@yonigozlan](https://huggingface.co/yonigozlan)). |
|
|
|
## Model Description |
|
|
|
Read the `transformers` 🤗 model card: https://huggingface.co/docs/transformers/en/model_doc/colqwen2. |
|
|
|
## Model Training |
|
|
|
### Dataset |
|
Our training dataset of 127,460 query-page pairs comprises the train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents, augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).
|
Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document is used both in [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and in the train set to prevent evaluation contamination.
|
A validation set is created with 2% of the samples to tune hyperparameters. |
|
|
|
## Usage |
|
|
|
```python |
|
import torch |
|
from PIL import Image |
|
|
|
from transformers import ColQwen2ForRetrieval, ColQwen2Processor |
|
from transformers.utils.import_utils import is_flash_attn_2_available |
|
|
|
|
|
model_name = "vidore/colqwen2-v1.0-hf" |
|
|
|
model = ColQwen2ForRetrieval.from_pretrained( |
|
model_name, |
|
torch_dtype=torch.bfloat16, |
|
device_map="cuda:0", # or "mps" if on Apple Silicon |
|
attn_implementation="flash_attention_2" if is_flash_attn_2_available() else None, |
|
).eval() |
|
|
|
processor = ColQwen2Processor.from_pretrained(model_name) |
|
|
|
# Your inputs (replace dummy images with screenshots of your documents) |
|
images = [ |
|
Image.new("RGB", (128, 128), color="white"), |
|
Image.new("RGB", (64, 32), color="black"), |
|
] |
|
queries = [ |
|
"What is the organizational structure for our R&D department?", |
|
"Can you provide a breakdown of last year’s financial performance?", |
|
] |
|
|
|
# Process the inputs |
|
batch_images = processor(images=images).to(model.device) |
|
batch_queries = processor(text=queries).to(model.device) |
|
|
|
# Forward pass |
|
with torch.no_grad(): |
|
image_embeddings = model(**batch_images).embeddings |
|
query_embeddings = model(**batch_queries).embeddings |
|
|
|
# Score the queries against the images |
|
scores = processor.score_retrieval(query_embeddings, image_embeddings) |
|
|
|
``` |
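The resulting `scores` tensor has one row per query and one column per image, with higher values indicating stronger matches. As a minimal follow-up sketch (not part of the original card), the best-matching page for each query can be picked out with a simple `argmax`:

```python
# `scores` has shape (num_queries, num_images); rank pages per query.
best_match_per_query = scores.argmax(dim=1)

for query, image_idx in zip(queries, best_match_per_query):
    print(f"Query: {query!r} -> best matching image index: {image_idx.item()}")
```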
|
|
|
## Limitations |
|
|
|
- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
|
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late-interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support (a minimal scoring sketch is shown below).
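
For reference, here is a simplified sketch (an illustrative addition, not the library's exact implementation) of the ColBERT-style late-interaction (MaxSim) scoring that `processor.score_retrieval` performs, assuming a single query embedding and a single image embedding as plain, L2-normalized `torch.Tensor`s:

```python
import torch

def maxsim_score(query_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT-style MaxSim score between one query and one image.

    query_emb: (num_query_tokens, dim), image_emb: (num_image_patches, dim),
    both assumed to hold L2-normalized vectors.
    """
    # Token-to-patch similarity matrix: (num_query_tokens, num_image_patches).
    sim = query_emb @ image_emb.T
    # For each query token, keep its best-matching patch, then sum over query tokens.
    return sim.max(dim=1).values.sum()
```

Because each document is represented by many vectors rather than one, frameworks that assume a single embedding per document need this per-token scoring layered on top, which is where the adaptation effort mentioned above comes in.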
|
|
|
## License |
|
|
|
ColQwen2's vision-language backbone model (Qwen2-VL) is released under the `apache-2.0` license. ColQwen2 inherits this `apache-2.0` license.
|
|
|
## Contact |
|
|
|
- Manuel Faysse: [email protected] |
|
- Hugues Sibille: [email protected] |
|
- Tony Wu: [email protected] |
|
|
|
## Citation |
|
|
|
If you use any datasets or models from this organization in your research, please cite the original paper as follows:
|
|
|
```bibtex |
|
@misc{faysse2024colpaliefficientdocumentretrieval, |
|
title={ColPali: Efficient Document Retrieval with Vision Language Models}, |
|
author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo}, |
|
year={2024}, |
|
eprint={2407.01449}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.IR}, |
|
url={https://arxiv.org/abs/2407.01449}, |
|
} |
|
``` |