---
license: mit
datasets:
- vidore/colpali_train_set
base_model:
- Qwen/Qwen2-VL-7B-Instruct
pipeline_tag: feature-extraction
library_name: transformers
tags:
- vidore
---
|
# Model Card for ColQwen
|
<!-- Provide a quick summary of what the model is/does. --> |
|
ColQwen is a visual document retriever built on Qwen2-VL-7B-Instruct: it embeds document page images and text queries into ColBERT-style multi-vector representations for efficient retrieval.
|
## Model Details |
|
### Model Description |
|
ColQwen is a model built on a novel architecture and training strategy that uses Vision Language Models (VLMs) to efficiently index documents from their visual features.
|
It is an extension of Qwen2-VL-7B that generates ColBERT-style multi-vector representations of text and images.
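
For context, "ColBERT-style" refers to late-interaction retrieval: each query token and each image patch keeps its own embedding vector, and relevance is the sum over query tokens of their best-matching document vector. A sketch of the standard MaxSim score (notation ours, not from this card):

$$
s(q, d) = \sum_{i=1}^{|q|} \max_{1 \le j \le |d|} \langle \mathbf{q}_i, \mathbf{d}_j \rangle
$$

where $\mathbf{q}_i$ are the query-token embeddings and $\mathbf{d}_j$ the document's patch embeddings.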
|
The model was introduced in the paper [*ColPali: Efficient Document Retrieval with Vision Language Models*](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/illuin-tech/colpali).
|
This version is trained with a batch size of 256 for 3 epochs.
|
- **Developed by:** IEIT Systems
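
## Usage

A minimal retrieval sketch, assuming the [`colpali-engine`](https://github.com/illuin-tech/colpali) package; the checkpoint id below is a placeholder, since this card does not name the final repository:

```python
import torch
from PIL import Image

from colpali_engine.models import ColQwen2, ColQwen2Processor

model_name = "your-org/colqwen-7b"  # placeholder: substitute this model's actual repo id

model = ColQwen2.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "mps" / "cpu"
).eval()
processor = ColQwen2Processor.from_pretrained(model_name)

# Inputs: page images to index and text queries to search with
images = [Image.new("RGB", (448, 448), color="white")]
queries = ["Is attention really all you need?"]

# Preprocess images and queries separately
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

# Each forward pass returns multi-vector embeddings (one vector per token/patch)
with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

# Late-interaction (MaxSim) scoring: shape (num_queries, num_images)
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```

Pages are embedded once at indexing time; at query time only the query is embedded and scored against the stored page multi-vectors.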
|