---
license: mit
datasets:
- vidore/colpali_train_set
base_model:
- Qwen/Qwen2-VL-7B-Instruct
pipeline_tag: feature-extraction
library_name: transformers
tags:
- vidore
---
# Model Card for ColQwen
<!-- Provide a quick summary of what the model is/does. -->
ColQwen is a visual document retriever built on Qwen2-VL-7B-Instruct. It produces ColBERT-style multi-vector embeddings of document page images and text queries for efficient visual document retrieval.
## Model Details
### Model Description
ColQwen is a model based on a novel architecture and training strategy that uses Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a Qwen2-VL-7B extension that generates ColBERT-style multi-vector representations of text and images.
It was introduced in the paper *ColPali: Efficient Document Retrieval with Vision Language Models* and first released in this repository.
This version was trained with a batch size of 256 for 3 epochs.
- **Developed by:** IEIT systems
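
Below is a minimal retrieval sketch, assuming this checkpoint is compatible with the `ColQwen2` classes from the `colpali-engine` library; the repository ID is a placeholder, not this model's actual Hub ID.

```python
import torch
from PIL import Image
from colpali_engine.models import ColQwen2, ColQwen2Processor

# Placeholder repository ID -- replace with this model's actual Hub ID.
model_name = "<org>/<this-colqwen-checkpoint>"

model = ColQwen2.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
).eval()
processor = ColQwen2Processor.from_pretrained(model_name)

# Example inputs: one blank page image and one query.
images = [Image.new("RGB", (448, 448), (255, 255, 255))]
queries = ["What is shown in the revenue chart?"]

batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

with torch.no_grad():
    image_embeddings = model(**batch_images)   # multi-vector page embeddings
    query_embeddings = model(**batch_queries)  # multi-vector query embeddings

# Late-interaction (ColBERT-style MaxSim) scoring of queries against pages.
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
print(scores)
```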