HugSib committed a87a3da (1 parent: e1efbc3)

Update README.md

Files changed (1): README.md (+62 -1)

README.md CHANGED
---
license: mit
language:
- en
- fr
tags:
- vidore
---
# BiSigLip: Visual Retriever based on SigLIP

ColPali is a model that uses a novel architecture and training strategy based on Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [PaliGemma-3B](https://huggingface.co/google/paligemma-3b-mix-448) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models[add link]]() and first released in [this repository](https://github.com/ManuelFay/colpali).
## Model Description

This model is built iteratively, starting from an off-the-shelf [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) model. We fine-tuned it to create *BiSigLip*.
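As a rough, unofficial usage sketch (this card does not ship example code at this stage), a SigLIP-style bi-encoder such as *BiSigLip* can be queried for pooled text and image embeddings through the `transformers` API. The checkpoint id `vidore/bisiglip` below is a placeholder and is not confirmed by this card.

```python
# Hedged sketch: assumes the fine-tuned BiSigLip weights load as a standard SigLIP checkpoint.
import torch
from PIL import Image
from transformers import AutoProcessor, SiglipModel

model_id = "vidore/bisiglip"  # hypothetical checkpoint name
model = SiglipModel.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("page.jpg")  # a document page rendered as an image
queries = ["What is the revenue reported for 2022?"]

# SigLIP text towers are trained with fixed-length padding
inputs = processor(text=queries, images=image, padding="max_length", return_tensors="pt")

with torch.no_grad():
    query_emb = model.get_text_features(input_ids=inputs["input_ids"])
    page_emb = model.get_image_features(pixel_values=inputs["pixel_values"])

# Bi-encoder scoring: cosine similarity between the pooled query and page embeddings
query_emb = torch.nn.functional.normalize(query_emb, dim=-1)
page_emb = torch.nn.functional.normalize(page_emb, dim=-1)
print((query_emb @ page_emb.T).item())
```

Note that a plain SigLIP bi-encoder produces one pooled vector per query and per page, in contrast to the ColBERT-style multi-vector representations described above for ColPali.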
## Model Training

### Dataset

Our training dataset of 127,460 query-page pairs comprises train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents, augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).
Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document is used both in [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and in the train set to prevent evaluation contamination.
A validation set is created with 2% of the samples to tune hyperparameters.

*Note: Multilingual data is present in the pretraining corpus of the language model (Gemma-2B) and potentially occurs during PaliGemma-3B's multimodal training.*
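For illustration only, the 2% validation split described above could be reproduced along these lines with the `datasets` library; the dataset identifier used here is hypothetical and not given in this card.

```python
# Hedged sketch of carving out a 2% validation split from the training pairs.
from datasets import load_dataset

ds = load_dataset("vidore/colpali-train-set", split="train")  # illustrative repo id, not from this card
splits = ds.train_test_split(test_size=0.02, seed=42)         # 98% train / 2% validation
train_ds, val_ds = splits["train"], splits["test"]
print(len(train_ds), len(val_ds))
```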
### Parameters

All models are trained for 1 epoch on the train set. Unless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685)) with `alpha=32` and `r=32` on the transformer layers from the language model, as well as on the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on an 8-GPU setup with data parallelism, a learning rate of 5e-5 with linear decay and 2.5% warmup steps, and a batch size of 32.
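The training script itself is not included in this card. The configuration sketch below simply restates the hyperparameters listed above with `peft` and `transformers`; the LoRA `target_modules` names are assumptions chosen for illustration.

```python
# Hedged sketch mapping the stated hyperparameters onto peft/transformers config objects.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="FEATURE_EXTRACTION",
)

training_args = TrainingArguments(
    output_dir="./bisiglip-train",
    num_train_epochs=1,
    per_device_train_batch_size=4,   # 8 GPUs x 4 = global batch size of 32
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.025,              # 2.5% warmup steps
    optim="paged_adamw_8bit",        # requires bitsandbytes
    bf16=True,
)
```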
## Intended uses

#TODO
## Limitations

- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism (sketched below), which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.
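For reference, the late interaction mechanism mentioned above scores a query-page pair by summing, for each query token embedding, its maximum similarity over all page token embeddings. A minimal sketch, not code from this repository:

```python
# Hedged sketch of ColBERT-style late interaction (MaxSim) scoring.
import torch

def late_interaction_score(query_embs: torch.Tensor, doc_embs: torch.Tensor) -> torch.Tensor:
    """query_embs: (n_query_tokens, dim); doc_embs: (n_doc_tokens, dim); both L2-normalized."""
    sim = query_embs @ doc_embs.T        # token-to-token similarity matrix
    return sim.max(dim=1).values.sum()   # best-matching page token per query token, summed
```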
## License

ColPali's base model (PaliGemma) is under the `gemma` license, as specified in its [model card](https://huggingface.co/google/paligemma-3b-mix-448). The adapters attached to the model are under the MIT license.
## Contact

- Manuel Faysse: [email protected]
- Hugues Sibille: [email protected]
- Tony Wu: [email protected]
## Citation

If you use any datasets or models from this organization in your research, please cite the original work as follows:

```bibtex
[include BibTeX]
```