MiniCPM-Reranker-Light

MiniCPM-Reranker-Light is a bilingual and cross-lingual text re-ranking model jointly developed by ModelBest Inc., the Tsinghua University Natural Language Processing Lab (THUNLP), and the Northeastern University Information Retrieval Group (NEUIR), featuring:

  • Exceptional Chinese and English re-ranking capabilities.
  • Outstanding cross-lingual re-ranking capabilities between Chinese and English.
  • Long-text support (up to 8192 tokens).

MiniCPM-Reranker-Light is trained from MiniCPM-1B-sft-bf16 and adopts bidirectional attention in its architecture. It underwent multi-stage training on approximately 6 million examples drawn from open-source, synthetic, and proprietary data.

We also invite you to explore the UltraRAG series.

Model Information

  • Model Size: 1.2B

  • Max Input Tokens: 8192

Usage

Input Format

MiniCPM-Reranker-Light supports instructions, using the following input format:

<s>Instruction: {{ instruction }} Query: {{ query }}</s>{{ document }}

For example:

<s>Instruction: 为这个医学问题检索相关回答。Query: 咽喉癌的成因是什么?</s>(document omitted)
<s>Instruction: Given a claim about climate change, retrieve documents that support or refute the claim. Query: However the warming trend is slower than most climate models have forecast.</s>(document omitted)

MiniCPM-Reranker-Light also works without an instruction, using the following format:

<s>Query: {{ query }}</s>{{ document }}

When evaluating on BEIR and C-MTEB/Retrieval, we use the instructions listed in instructions.json; other evaluations use no instructions.
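
To make the template concrete, the sketch below shows how the query side of a pair can be assembled (format_query is a hypothetical helper, not part of the model's API; as in the demos below, only the plain-text prefix is built by hand, while the <s>/</s> markers correspond to the tokenizer's special tokens):

def format_query(query, instruction=None):
    # Prepend the optional instruction; the <s>/</s> markers are handled
    # by the tokenizer, not by this helper.
    if instruction:
        return f"Instruction: {instruction} Query: {query}"
    return f"Query: {query}"

print(format_query("咽喉癌的成因是什么?", "为这个医学问题检索相关回答。"))
# Instruction: 为这个医学问题检索相关回答。Query: 咽喉癌的成因是什么?
print(format_query("Where is the capital of China?"))
# Query: Where is the capital of China?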

Requirements

transformers==4.37.2
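
The pinned version can be installed with pip:

pip install transformers==4.37.2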

Demo

Hugging Face Transformers

from transformers import AutoModelForSequenceClassification
import torch

model_name = "OpenBMB/MiniCPM-Reranker-Light"
model = AutoModelForSequenceClassification.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float16).to("cuda")
# You can also use flash_attention_2 for faster inference:
# model = AutoModelForSequenceClassification.from_pretrained(model_name, trust_remote_code=True, attn_implementation="flash_attention_2", torch_dtype=torch.float16).to("cuda")
model.eval()

query = "中国的首都是哪里?" # "Where is the capital of China?"
passages = ["beijing", "shanghai"] # 北京, 上海

rerank_score = model.rerank(query, passages, query_instruction="Query:", batch_size=32, max_length=1024)
print(rerank_score) # [0.01791382 0.00024533]

# Alternatively, build the (query, passage) pairs yourself and score them directly:
sentence_pairs = [[f"Query: {query}", doc] for doc in passages]
scores = model.compute_score(sentence_pairs, batch_size=32, max_length=1024)
print(scores) # [0.01791382 0.00024533]
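
The printed output above suggests the scores come back one per passage, in input order; assuming that, turning them into a ranking is just a sort (a minimal sketch continuing the variables above):

# Pair each passage with its score and order from most to least relevant.
ranked = sorted(zip(passages, rerank_score), key=lambda pair: pair[1], reverse=True)
for doc, score in ranked:
    print(f"{score:.6f}  {doc}")
# 0.017914  beijing
# 0.000245  shanghai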

Sentence Transformers

from sentence_transformers import CrossEncoder
import torch

model_name = "OpenBMB/MiniCPM-Reranker-Light"
model = CrossEncoder(model_name, max_length=1024, trust_remote_code=True, automodel_args={"torch_dtype": torch.float16})
# You can also use flash_attention_2 for faster inference:
# model = CrossEncoder(model_name, max_length=1024, trust_remote_code=True, automodel_args={"attn_implementation": "flash_attention_2", "torch_dtype": torch.float16})
model.tokenizer.padding_side = "right"

query = "中国的首都是哪里?" # "Where is the capital of China?"
passages = ["beijing", "shanghai"] # 北京,上海

INSTRUCTION = "Query: "
query = INSTRUCTION + query

sentence_pairs = [[query, doc] for doc in passages]

scores = model.predict(sentence_pairs, convert_to_tensor=True).tolist()
rankings = model.rank(query, passages, return_documents=True, convert_to_tensor=True)

print(scores) # [0.017913818359375, 0.0002453327178955078]
for ranking in rankings:
    print(f"Score: {ranking['score']:.4f}, Corpus: {ranking['text']}")
  
# Score: 0.0179, Corpus: beijing
# Score: 0.0002, Corpus: shanghai

Infinity

import asyncio
from infinity_emb import AsyncEngineArray, EngineArgs, AsyncEmbeddingEngine

query = "中国的首都是哪里?" # "Where is the capital of China?"
docs = ["beijing", "shanghai"] # 北京, 上海

INSTRUCTION = "Query:"
query = f"{INSTRUCTION} {query}"

array = AsyncEngineArray.from_args(
    [EngineArgs(model_name_or_path="OpenBMB/MiniCPM-Reranker-Light", engine="torch", dtype="float16", bettertransformer=False, trust_remote_code=True, model_warmup=False)]
)

async def rerank(engine: AsyncEmbeddingEngine): 
    async with engine:
        ranking, usage = await engine.rerank(query=query, docs=docs)
        print(list(zip(ranking, docs)))

asyncio.run(rerank(array[0])) # [(RerankReturnType(relevance_score=0.017917344, document='beijing', index=0), 'beijing'), (RerankReturnType(relevance_score=0.00024729347, document='shanghai', index=1), 'shanghai')]

FlagEmbedding

from FlagEmbedding import FlagReranker

model_name = "OpenBMB/MiniCPM-Reranker-Light"
model = FlagReranker(model_name, use_fp16=True, query_instruction_for_rerank="Query: ", trust_remote_code=True)
# To use flash_attention_2 for faster inference, you can patch the __init__() of
# FlagEmbedding's BaseReranker class to load the model like this:
# self.model = AutoModelForSequenceClassification.from_pretrained(
#     model_name_or_path,
#     trust_remote_code=trust_remote_code,
#     cache_dir=cache_dir,
#     torch_dtype=torch.float16,                # add this line to use fp16
#     attn_implementation="flash_attention_2",  # add this line to use flash_attention_2
# )
model.tokenizer.padding_side = "right"

query = "中国的首都是哪里?" # "Where is the capital of China?"
passages = ["beijing", "shanghai"] # 北京,上海

sentence_pairs = [[query, doc] for doc in passages]

scores = model.compute_score(sentence_pairs, normalize=True)
print(scores) # [0.01791734476747132, 0.0002472934613244585]

Evaluation Results

CN/EN Re-ranking Results

For Chinese, we re-rank the top-100 documents retrieved by bge-large-zh-v1.5 on C-MTEB/Retrieval; for English, we re-rank the top-100 documents retrieved by bge-large-en-v1.5 on BEIR.

Model                                     | C-MTEB/Retrieval (NDCG@10) | BEIR (NDCG@10)
------------------------------------------|----------------------------|---------------
bge-large-zh-v1.5 (Retriever for Chinese) | 70.46                      | -
bge-large-en-v1.5 (Retriever for English) | -                          | 54.29
bge-reranker-v2-m3                        | 71.82                      | 55.36
bge-reranker-v2-minicpm-28                | 73.51                      | 59.86
bge-reranker-v2-gemma                     | 71.74                      | 60.71
bge-reranker-v2.5-gemma2                  | -                          | 63.67
MiniCPM-Reranker                          | 76.79                      | 61.32
MiniCPM-Reranker-Light                    | 76.19                      | 61.34

CN-EN Cross-lingual Re-ranking Results

We re-rank the top-100 documents retrieved by bge-m3 (Dense).

Model                              | MKQA En-Zh_CN (Recall@20) | NeuCLIR22 (NDCG@10) | NeuCLIR23 (NDCG@10)
-----------------------------------|---------------------------|---------------------|--------------------
bge-m3 (Dense) (Retriever)         | 66.4                      | 30.49               | 41.09
jina-reranker-v2-base-multilingual | 69.33                     | 36.66               | 50.03
bge-reranker-v2-m3                 | 69.75                     | 40.98               | 49.67
gte-multilingual-reranker-base     | 68.51                     | 38.74               | 45.30
MiniCPM-Reranker                   | 71.73                     | 43.65               | 50.59
MiniCPM-Reranker-Light             | 71.34                     | 46.04               | 51.86

License

  • The code in this repo is released under the Apache-2.0 License.
  • The usage of the MiniCPM-Reranker-Light model weights must follow the MiniCPM Model License.
  • The MiniCPM-Reranker-Light model weights are fully open for academic research. For commercial use, please fill out this questionnaire to register; after registration, the weights are also available for free commercial use.