Upload folder using huggingface_hub
README.md CHANGED
@@ -12,7 +12,7 @@ pipeline_tag: image-text-to-text

We are excited to introduce [🤗 InternVL-Chat-V1-2](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2). Inspired by [LLaVA-NeXT-34B](https://llava-vl.github.io/blog/2024-01-30-llava-next/), we have also adopted [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) as the language model. Below is the pipeline.

<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/GIEKCvNc1Y5iMQqLv645p.png" style="width: 100%;">
</p>

From the experimental results, we've observed that **a stronger language model (34B) can better leverage the powerful capabilities of our vision foundation model.**
@@ -43,16 +43,16 @@ For better training reproducibility, we follow the minimalist design and data ef

\* Proprietary Model

| name                   | image size | MMMU<br>(val) | MMMU<br>(test) | MathVista<br>(testmini) | MMB<br>(test) | MMB-CN<br>(test) | MMVP | MME      | ScienceQA<br>(image) | POPE | TextVQA<br>(val) | SEEDv1<br>(image) | VizWiz<br>(test) | GQA<br>(test) |
| ---------------------- | ---------- | ------------- | -------------- | ----------------------- | ------------- | ---------------- | ---- | -------- | -------------------- | ---- | ---------------- | ----------------- | ---------------- | ------------- |
| GPT-4V\*               | unknown    | 56.8          | 55.7           | 49.9                    | 77.0          | 74.4             | 38.7 | 1409/517 | -                    | -    | 78.0             | 71.6              | -                | -             |
| Gemini Ultra\*         | unknown    | 59.4          | -              | 53.0                    | -             | -                | -    | -        | -                    | -    | 82.3             | -                 | -                | -             |
| Gemini Pro\*           | unknown    | 47.9          | -              | 45.2                    | 73.6          | 74.3             | 40.7 | 1497/437 | -                    | -    | 74.6             | 70.7              | -                | -             |
| Qwen-VL-Plus\*         | unknown    | 45.2          | 40.8           | 43.3                    | 67.0          | 70.7             | -    | 1681/502 | -                    | -    | 78.9             | 65.7              | -                | -             |
| Qwen-VL-Max\*          | unknown    | 51.4          | 46.8           | 51.0                    | 77.6          | 75.7             | -    | -        | -                    | -    | 79.5             | -                 | -                | -             |
|                        |            |               |                |                         |               |                  |      |          |                      |      |                  |                   |                  |               |
| LLaVA-NEXT-34B         | 672x672    | 51.1          | 44.7           | 46.5                    | 79.3          | 79.0             | -    | 1631/397 | 81.8                 | 87.7 | 69.5             | 75.9              | 63.8             | 67.1          |
| InternVL-Chat<br>-V1-2 | 448x448    | 51.6          | 46.2           | 47.7                    | 82.2          | 81.2             | 56.7 | 1687/489 | 83.3                 | 88.0 | 72.5             | 75.6              | 60.0             | 64.0          |

- MME scores are reported as perception/cognition.
- Note that we use the [official evaluation server](https://huggingface.co/spaces/whyu/MM-Vet_Evaluator) to test the MMVet scores, with `GPT-4-0613` serving as the judge model. Using different versions of GPT-4 as the judge can result in significant score variations.
- In most benchmarks, InternVL-Chat-V1-2 achieves better performance than LLaVA-NeXT-34B.
@@ -75,9 +75,9 @@ For more details about training, please see [here](https://github.com/OpenGVLab/

The hyperparameters used for fine-tuning are listed in the following table.

| Hyperparameter         | Trainable Params | Global Batch Size | Learning Rate | Epochs | Max Length | Weight Decay |
| ---------------------- | ---------------- | ----------------- | ------------- | ------ | ---------- | ------------ |
| InternVL-Chat<br>-V1-2 | 40B (full model) | 512               | 1e-5          | 1      | 2048       | 0.05         |
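As a rough illustration only, the table above might map onto `transformers.TrainingArguments` as sketched below. The actual fine-tuning entry point is the training script in the InternVL GitHub repository linked above, and the per-device/accumulation/GPU split here is a hypothetical decomposition of the 512 global batch size.

```python
from transformers import TrainingArguments

# Illustrative sketch only; the real run uses the InternVL repo's own scripts.
# Global batch size 512 = 4 per device x 16 accumulation steps x 8 GPUs
# (a hypothetical split, not the actual cluster layout).
args = TrainingArguments(
    output_dir="internvl-chat-v1-2-sft",  # hypothetical output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=16,
    learning_rate=1e-5,
    num_train_epochs=1,
    weight_decay=0.05,
    bf16=True,
)
# The max length (2048) is enforced by the tokenizer/data pipeline, not here.
```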
## Quick Start
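Below is a minimal loading-and-inference sketch. It assumes the standard `trust_remote_code` pattern for this repo; the image path and generation settings are illustrative, and `model.chat` is a helper defined in the model's remote code, so defer to the full example on this model card if it differs.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, CLIPImageProcessor

path = "OpenGVLab/InternVL-Chat-V1-2"

# The repo ships custom modeling code, so trust_remote_code is required.
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
image_processor = CLIPImageProcessor.from_pretrained(path)

# 448x448 matches the image size reported in the benchmark table above.
image = Image.open("examples/image.jpg").convert("RGB").resize((448, 448))
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

question = "Please describe the image in detail."
generation_config = dict(num_beams=1, max_new_tokens=512, do_sample=False)
# model.chat comes from the repo's remote code (an assumption here; see the
# authoritative Quick Start on the model card).
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(response)
```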