FlagEmbedding can map any text to a low-dimensional dense vector, which can be used for tasks like retrieval, classification, clustering, or semantic search. It can also be used in vector databases for LLMs.

************* 🌟**Updates**🌟 *************
- 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire:
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released.
- 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released.
- 09/12/2023: New models:
    - **New reranker model**: release the cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
    - **Updated embedding model**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without instruction.

<details>
<summary>More</summary>
<!-- ### More -->

- 09/07/2023: Update the [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models with the **best performance among models of the same size 🤗**
- 08/02/2023: Release the `bge-large-*` (short for BAAI General Embedding) models, **ranked 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.

</details>


## Model List

`bge` is short for `BAAI general embedding`.

| Model | Language | Inference & fine-tune | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |

[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed: just use the original query directly. In all cases, **no instruction** needs to be added to the passages.
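
For instance, the instruction can be attached automatically. A minimal sketch with this repo's `FlagEmbedding` package, assuming the `FlagModel` interface from its usage examples (the query and passages are invented for illustration):

```python
from FlagEmbedding import FlagModel

# The instruction is passed once here; encode_queries() prepends it to each
# query, while encode() embeds passages as-is (passages get no instruction).
model = FlagModel(
    "BAAI/bge-large-en-v1.5",
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
)

queries = ["how to bake bread"]
passages = [
    "Mix flour, water, salt and yeast, knead the dough, let it rise, then bake.",
    "The history of bread goes back thousands of years.",
]

q_embeddings = model.encode_queries(queries)  # instruction added automatically
p_embeddings = model.encode(passages)         # raw passages, no instruction

# The embeddings are normalized, so the inner product is the cosine similarity.
scores = q_embeddings @ p_embeddings.T
print(scores)
```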

[2\]: Different from an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by simpler models. For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top 3 results.
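
A minimal sketch of that retrieve-then-rerank step, assuming the `FlagReranker` interface from this repo's usage examples (the query and candidate passages are invented; in practice the candidates would be the top-k passages returned by the embedding retriever):

```python
from FlagEmbedding import FlagReranker

# use_fp16=True trades a little precision for faster inference.
reranker = FlagReranker("BAAI/bge-reranker-large", use_fp16=True)

query = "how to bake bread"
candidates = [
    "Mix flour, water, salt and yeast, knead the dough, let it rise, then bake.",
    "The history of bread goes back thousands of years.",
    "Preheat the oven to 230 C and bake the loaf for about 30 minutes.",
]

# The cross-encoder scores each (query, passage) pair directly,
# without producing embeddings. A higher score means more relevant.
scores = reranker.compute_score([[query, passage] for passage in candidates])

# Keep the highest-scoring passages as the final result.
reranked = sorted(zip(scores, candidates), reverse=True)
for score, passage in reranked[:3]:
    print(f"{score:.2f}  {passage}")
```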

All models have been uploaded to the Huggingface Hub; you can see them at https://huggingface.co/BAAI. If you cannot open the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models .

## Frequently asked questions

<details>
<summary>3. When does the query instruction need to be used?</summary>

<!-- ### When does the query instruction need to be used -->

For `bge-*-v1.5`, we improved its retrieval ability when no instruction is used: omitting the instruction causes only a slight degradation in retrieval performance compared with using it, so for convenience you can generate embeddings without instruction in all cases.

For a retrieval task that uses short queries to find long related documents, it is recommended to add instructions for these short queries. **The best way to decide whether to add instructions for queries is to choose the setting that achieves better performance on your task.**
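
A quick way to run that comparison on a small labeled sample of your own data is sketched below (hypothetical toy data; `encode_queries()` adds the instruction while `encode()` does not):

```python
import numpy as np
from FlagEmbedding import FlagModel

# Toy evaluation set where passages[i] is the relevant passage for queries[i].
queries = ["how to bake bread", "what is the capital of France"]
passages = [
    "Mix flour, water, salt and yeast, knead the dough, let it rise, then bake.",
    "Paris is the capital and most populous city of France.",
]

model = FlagModel(
    "BAAI/bge-base-en-v1.5",
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
)

def hit_at_1(with_instruction: bool) -> float:
    # encode_queries() prepends the instruction; encode() uses the raw query text.
    q = model.encode_queries(queries) if with_instruction else model.encode(queries)
    p = model.encode(passages)
    best = (q @ p.T).argmax(axis=1)  # index of the top-scoring passage per query
    return float(np.mean(best == np.arange(len(queries))))

print("with instruction   :", hit_at_1(True))
print("without instruction:", hit_at_1(False))
```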

</details>

The reranker is a cross-encoder, which is more accurate than an embedding model (i.e., a bi-encoder) but more time-consuming. Therefore, it can be used to re-rank the top-k documents returned by an embedding model. We train the cross-encoder on multilingual pair data; the data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker). For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).
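
For reference, one training line in that format looks roughly like the sketch below (the `query`/`pos`/`neg` field names follow the fine-tuning examples in this repo; the texts and file name are invented):

```python
import json

# One training example: a query, passages it should match ("pos"),
# and hard negatives it should not match ("neg").
example = {
    "query": "how to bake bread",
    "pos": ["Mix flour, water, salt and yeast, knead the dough, let it rise, then bake."],
    "neg": ["The history of bread goes back thousands of years."],
}

# Training files are JSON Lines: one JSON object per line.
with open("toy_train_data.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(example, ensure_ascii=False) + "\n")
```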

## Contact
You can also email Shitao Xiao ([email protected]) and Zheng Liu ([email protected]).


## Citation

If you find this repository useful, please consider giving it a star :star: and a citation:

```
@misc{bge_embedding,
      title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
      year={2023},
      eprint={2309.07597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```


## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.