Update README.md
## Model Overview
- **INF-Retriever-Qwen2-7B** is an LLM-based dense retrieval model developed by [INF TECH](https://www.infly.cn/en). It is built upon the [gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) model and specifically fine-tuned to excel in retrieval tasks, particularly for Chinese and English data.

- As of November 13, 2024, **INF-Retriever-Qwen2-7B-Preview** ranks **No.1** on the Automated Heterogeneous Information Retrieval Benchmark ([AIR-Bench](https://huggingface.co/spaces/AIR-Bench/leaderboard)), showcasing its cutting-edge performance in heterogeneous information retrieval tasks.
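Because the model inherits the embedding interface of its gte-Qwen2-7B-instruct base, a minimal retrieval sketch with Sentence Transformers would likely look like the following. The repository id `infly/inf-retriever-qwen2-7b` and the `prompt_name="query"` convention are assumptions carried over from the base model, not details confirmed in this card.

```python
# Hypothetical usage sketch: the repo id and query prompt are assumed to follow the
# gte-Qwen2-7B-instruct base model; check the final release card for exact details.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("infly/inf-retriever-qwen2-7b", trust_remote_code=True)

queries = [
    "how much protein should a female eat",
    "南瓜的家常做法",  # Chinese query: "home-style pumpkin recipes"
]
documents = [
    "As a general guideline, the CDC's average protein requirement for women ages 19 to 70 is 46 grams per day.",
    "清炒南瓜丝：南瓜去皮切丝，热锅少油快炒至断生，加盐调味即可。",
]

# Queries are encoded with an instruction prompt; documents are encoded as-is.
query_embeddings = model.encode(queries, prompt_name="query", normalize_embeddings=True)
document_embeddings = model.encode(documents, normalize_embeddings=True)

# With normalized embeddings, a dot product gives cosine similarity.
scores = query_embeddings @ document_embeddings.T
print(scores)
```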
## Key Features
## AIR-Bench Evaluation Results
**INF-Retriever-Qwen2-7B-Preview** has demonstrated superior retrieval capabilities across multiple domains and languages. The results from the Automated Heterogeneous Information Retrieval Benchmark (AIR-Bench) as of November 13, 2024, are as follows:

| Model Name | Average | wiki_en | wiki_zh | web_en | web_zh | healthcare_en | healthcare_zh | law_en | arxiv_en | news_en | news_zh | finance_en | finance_zh | msmarco_en |
|:---------------------------------------------------------------------------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-------------:|:-------------:|:---------:|:---------:|:---------:|:---------:|:----------:|:----------:|:----------:|
| [BGE-M3](https://huggingface.co/BAAI/bge-m3) | 46.65 | 60.49 | 62.36 | 47.35 | 50.38 | 49.1 | **42.38** | 26.68 | 40.76 | 48.04 | 40.75 | 51.52 | 32.18 | 54.4 |
| [BGE-Multilingual-Gemma2](https://huggingface.co/BAAI/bge-multilingual-gemma2) | 46.83 | 63.71 | 67.3 | 50.38 | 53.24 | 47.24 | 42.13 | 22.58 | 23.28 | 50.91 | 44.02 | 49.3 | 31.6 | **63.14** |
| [GTE-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | 48.38 | 63.46 | 66.44 | 51.2 | 51.98 | 54.2 | 38.82 | 22.31 | 40.27 | **54.07** | 43.03 | 58.2 | 26.63 | 58.39 |
| **INF-Retriever-Qwen2-7B-Preview** | **52.22** | **64.96** | **67.87** | **52.84** | **55.54** | **58.82** | 37.71 | **34.89** | **52.35** | 53.6 | **47.93** | **58.5** | **33.92** | 59.96 |
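As a quick sanity check, the Average column appears to be the unweighted mean of the thirteen per-domain scores; for example, for the INF-Retriever-Qwen2-7B-Preview row (values copied from the table above):

```python
# Unweighted mean of the 13 per-domain scores from the table above.
scores = [64.96, 67.87, 52.84, 55.54, 58.82, 37.71, 34.89,
          52.35, 53.60, 47.93, 58.50, 33.92, 59.96]
print(round(sum(scores) / len(scores), 2))  # 52.22
```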
## Final Release Coming Soon
The official version of **INF-Retriever-Qwen2-7B** will provide even better performance, stability, and additional features. We are working hard to finalize the model, and we look forward to sharing the full release with the community in the near future.
---