id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
2310.08118#7 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | It's worth noting that there are no constraints on the type or format of feedback the verifier LLM produces. The system ceases generation either when the verifier LLM approves the candidate plan as valid or when the number of prompting iterations exceeds a set threshold (for our experiments, this threshold is set at 15 iterations). This method is similar to the backprompting technique described in [12]. However, the main distinction lies in the type of verifier employed. In our system, both the verifier and generator are LLMs, whereas the referenced approach utilizes an external sound verifier, VAL [4]. For all our experiments, GPT-4 serves as the default LLM. # 4.2 Prompt generation For the LLM+LLM Planning system described above, we utilize distinct prompts for the generator and verifier LLMs. The prompt generator (as shown in Figure 1) utilizes the PDDL domain and instance files to generate the required prompts in natural language. Our prompts are structured similarly to the natural language prompts found in [12]. For plan generation, our prompts are one-shot: we begin by presenting the domain description, followed by an example instance (along with its corresponding plan). We then present the query instance. These example instances are randomly selected from our set of instances, and this forms the input for the generator LLM. For the verifier LLM, we adopt a zero-shot approach. Here, we present the domain description, followed by the query instance and its corresponding plan. The verifier LLM is then tasked with verifying the query plan and providing feedback if necessary. As mentioned earlier, we do not restrict the type or format of the feedback for the verifier LLM. Detailed examples of the prompts given to both the generator and verifier LLMs can be found in the Appendix. # 5 Evaluation and Analysis We evaluate our planning system on Blocksworld, a widely recognized common-sense planning domain in AI planning literature [5]. We generate 100 random instances for evaluation across various methods. To provide a ground-truth assessment of the final LLM plan's correctness, we employ an external sound verifier, VAL [4]. For all experiments, GPT-4 [9] serves as the chosen LLM and was run with a temperature of 0, thereby making it deterministic. | 2310.08118#6 | 2310.08118#8 | 2310.08118 | [
"2305.10601"
] |
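A minimal sketch, in Python, of the generate–verify backprompting loop described in the chunk above. The prompt templates, the `llm()` helper, and the convention that the verifier's reply starts with "valid" are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the LLM+LLM backprompting loop (prompt wording and verdict parsing are assumed).
from typing import Callable

MAX_ITERATIONS = 15  # iteration threshold used in the paper's experiments

def backprompt_plan(generate_prompt: str,
                    verify_prompt_template: str,
                    llm: Callable[[str], str]) -> tuple[str, bool]:
    """Generate a plan, critique it with a verifier LLM, and reprompt the
    generator until the verifier approves or the iteration budget runs out."""
    prompt = generate_prompt
    plan = ""
    for _ in range(MAX_ITERATIONS):
        plan = llm(prompt)                                         # generator LLM
        feedback = llm(verify_prompt_template.format(plan=plan))   # verifier LLM (free-form feedback)
        if feedback.strip().lower().startswith("valid"):           # assumed verdict format
            return plan, True
        # Otherwise, feed the unconstrained critique back to the generator.
        prompt = (f"{generate_prompt}\n\nPrevious plan:\n{plan}\n"
                  f"Feedback:\n{feedback}\nRevised plan:")
    return plan, False
```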
2310.08118#8 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | # 5.1 Effect of self-critiquing on plan generation We assessed the impact of self-critiquing on plan generation by comparing the LLM+LLM backprompting system with two other baselines. The first baseline is the LLM+VAL backprompting system, which mirrors the backprompting method described in [12]. In this method, the plan produced by the LLM is validated by an external sound verifier, VAL. If the plan is found lacking, the generator-LLM is reprompted using feedback from VAL. The second baseline involves a generator-LLM without backprompting. Here, the generator LLM receives a single prompt, and the resulting plan is considered final. As illustrated in Table 1, the LLM+LLM backprompting approach slightly outperforms the non-backprompting method in terms of accuracy. However, it falls short when compared to the LLM+VAL system. | 2310.08118#7 | 2310.08118#9 | 2310.08118 | [
"2305.10601"
] |
2310.08118#9 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | It's worth noting that the marginal improvement over the generator-LLM-only method might not solely be attributed to the LLM verifier. The backprompting itself, which offers the generator LLM multiple opportunities to produce a plan, could be a contributing factor. The subpar performance of the LLM+LLM system, especially when compared to LLM+VAL, can likely be traced back to the substantial number of type-1 errors produced by the LLM verifier. It's evident that incorporating a sound verifier in the backprompting process can significantly enhance overall performance. | 2310.08118#8 | 2310.08118#10 | 2310.08118 | [
"2305.10601"
] |
2310.08118#10 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | Plan Generation Method Accuracy Avg. Number of iterations LLM+LLM w/ Backprompting (BP) 55/100 (55%) 3.48 LLM+VAL w/ BP 88/100 (88%) 4.18 Generator LLM only w/o BP 40/100 (40%) 1.00 # Table 1: Comparison between various plan generation methods on the Blocksworld domain. # 5.2 Analysis on the self-critique verifier We base our evaluation of the verifier LLM on its binary verification (i.e., determining whether the plan is valid or not) of the final plan produced by the LLM+LLM system. | 2310.08118#9 | 2310.08118#11 | 2310.08118 | [
"2305.10601"
] |
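A small illustrative sketch of the binary-verification scoring described in the chunk above: the LLM verifier's valid/invalid call on each final plan is compared against VAL's ground-truth label, which is how the rates later reported in Table 2 are obtained. The function and variable names are assumptions for illustration.

```python
# Score an LLM verifier's binary judgments against VAL's ground-truth labels
# (data layout is an assumption; one boolean per evaluated instance).
def verification_rates(llm_says_valid: list[bool], val_says_valid: list[bool]) -> dict[str, float]:
    pairs = list(zip(llm_says_valid, val_says_valid))
    tp = sum(l and v for l, v in pairs)                # verifier and VAL both say valid
    fp = sum(l and not v for l, v in pairs)            # type-1 error: verifier approves an invalid plan
    tn = sum((not l) and (not v) for l, v in pairs)
    fn = sum((not l) and v for l, v in pairs)
    return {
        "accuracy": (tp + tn) / len(pairs),
        "true_positive_rate": tp / max(tp + fn, 1),    # denominator: plans VAL deems valid
        "false_positive_rate": fp / max(fp + tn, 1),   # denominator: plans VAL deems invalid
    }
```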
2310.08118#11 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | It's important to note that the system halts either when the verifier LLM considers the plan valid or when the number of iterations surpasses 15. We compare the LLM verifier's output with ground truth classifications made using VAL [4], a sound verifier. To make the ground truth determination available for each input plan, we separately evaluate that plan using VAL as well. As illustrated in Table 2, out of the 100 instances, the verifier accurately identifies 61 (or 61%). However, a deeper examination of the verifier's errors reveals a concerning number of false positives. In this context, a false positive refers to the verifier LLM deeming a generated plan valid when, in fact, it is not. Out of the 100 instances, the verifier LLM produces 54 true positives and 38 false positives (type-1 errors). This indicates that the verifier deemed 38 plans, which were actually invalid, to be valid, which can be catastrophic if such a system is deployed in scenarios where correctness is paramount. Accuracy True Positive Rate False Positive Rate True Negative Rate False Negative Rate Verifier LLM 61/100 (61%) 54/55 (98.2%) 38/45 (84.45%) 7/45 (15.55%) 1/55 (1.8%) Table 2: Breakdown of Plan Verification results on Blocksworld domain. The denominators (in aspects other than Accuracy) are ground-truth values based on VAL. # 5.3 Effect of the levels of feedback on plan generation While the use of a sound verifier appears to enhance overall performance, we sought to further investigate the impact of varied levels of feedback on plan generation performance. We assessed the system's performance across four distinct feedback levels: 1. No Feedback: At this level, the initial plan generated by the LLM is considered to be final and no feedback is provided to the LLM. 2. Binary Feedback: This level simply indicates whether the generated plan is valid or not. 3. Inexecutable Action Feedback: If the plan is invalid and inexecutable, this feedback highlights the first inexecutable action and the unmet preconditions causing the inexecutability. If the plan is executable but fails to meet all goal conditions, the unmet goal conditions are presented. This feedback mirrors what VAL provides. 4. Open Conditions Feedback: | 2310.08118#10 | 2310.08118#12 | 2310.08118 | [
"2305.10601"
] |
2310.08118#12 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | This level treats the plan as a partial-order plan [13] and presents all the actions for which there exists at least one unmet precondition, along with the corresponding unmet preconditions. Further, it also presents the unmet goal conditions. Table 3 showcases the LLM's performance when subjected to various levels of feedback (including one with no feedback). Interestingly, the amount of feedback provided to the LLM seems to have minimal influence on its performance improvement. As long as the binary feedback is accurate and the LLM is given ample opportunities to generate a plan, the detailed feedback on invalid plans doesn't appear to significantly enhance the LLM's performance. We have provided examples for each feedback level in the Appendix. Levels of feedback Accuracy Avg. no of steps No feedback 40/100 (40%) 1.00 Only binary feedback 37/50 (74%) 5.38 Binary + First error feedback (by VAL) 43/50 (86%) 4.18 Binary + All errors feedback 43/50 (86%) 4.42 Table 3: Performance of LLM+VAL system on plan generation with varied levels of feedback. # 6 Conclusion and Future Work In this paper, we conducted a systematic investigation into the ability of Large Language Models (LLMs) to critique their own outputs, specifically within the context of classical planning problems. While recent research has been optimistic about LLMs' potential in self-critiquing, especially in iterative settings, our findings present a different perspective. Our empirical evaluations on Blocksworld, a simple common-sense domain, highlighted the ineffectiveness of self-critiquing in LLMs in the context of planning. We showed that the verifier LLM generates a significant number of false positives, which can be detrimental to the overall system's reliability, particularly in domains where the correctness of plans is paramount. Interestingly, the nature of feedback, whether binary or detailed, did not have a pronounced impact on plan generation performance, suggesting that the core issue lies in the LLM's binary verification capabilities rather than the granularity of feedback. In the future, we plan to conduct more extensive experiments with respect to the number of instances, the number of domains, and prompting methods (such as chain-of-thought). | 2310.08118#11 | 2310.08118#13 | 2310.08118 | [
"2305.10601"
] |
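The four feedback levels enumerated in the two chunks above can be thought of as progressively richer renderings of the same VAL-style validation result. The sketch below illustrates that idea; the data structure and field names are assumptions for illustration and do not reflect VAL's actual output format.

```python
# Hypothetical formatter for the four feedback levels (container layout is assumed).
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    valid: bool
    first_bad_action: str | None = None                 # first inexecutable action, if any
    unmet_preconditions: dict[str, list[str]] = field(default_factory=dict)  # action -> open preconditions
    unmet_goals: list[str] = field(default_factory=list)

def render_feedback(result: ValidationResult, level: str) -> str:
    if level == "none":                                  # 1. No Feedback
        return ""
    if level == "binary":                                # 2. Binary Feedback
        return "The plan is valid." if result.valid else "The plan is invalid."
    if level == "first_error":                           # 3. Inexecutable Action Feedback (mirrors VAL)
        if result.first_bad_action:
            pre = ", ".join(result.unmet_preconditions.get(result.first_bad_action, []))
            return f"Action '{result.first_bad_action}' is inexecutable; unmet preconditions: {pre}."
        return f"Unmet goal conditions: {', '.join(result.unmet_goals)}."
    if level == "all_errors":                            # 4. Open Conditions Feedback (partial-order view)
        lines = [f"Action '{a}' has unmet preconditions: {', '.join(ps)}"
                 for a, ps in result.unmet_preconditions.items()]
        lines.append(f"Unmet goal conditions: {', '.join(result.unmet_goals)}")
        return "\n".join(lines)
    raise ValueError(f"unknown feedback level: {level}")
```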
2310.08118#13 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | # References [1] Roi Cohen, May Hamri, Mor Geva, and Amir Globerson. LM vs LM: Detecting factual errors via cross examination. arXiv preprint arXiv:2305.13281, 2023. [2] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325, 2023. [3] Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jian, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D Hwang, et al. | 2310.08118#12 | 2310.08118#14 | 2310.08118 | [
"2305.10601"
] |
2310.08118#14 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | Faith and fate: Limits of transformers on compositionality. arXiv preprint arXiv:2305.18654, 2023. [4] Richard Howey, Derek Long, and Maria Fox. VAL: Automatic plan validation, continuous effects and mixed initiative planning using PDDL. In 16th IEEE International Conference on Tools with Artificial Intelligence, pages 294–301. IEEE, 2004. [5] IPC. International planning competition, 1998. [6] Geunwoo Kim, Pierre Baldi, and Stephen McAleer. | 2310.08118#13 | 2310.08118#15 | 2310.08118 | [
"2305.10601"
] |
2310.08118#15 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023. [7] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023. | 2310.08118#14 | 2310.08118#16 | 2310.08118 | [
"2305.10601"
] |
2310.08118#16 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | [8] Drew McDermott, Malik Ghallab, Adele E. Howe, Craig A. Knoblock, Ashwin Ram, Manuela M. Veloso, Daniel S. Weld, and David E. Wilkins. Pddl-the planning domain definition language. 1998. [9] OpenAI. Gpt-4 technical report, 2023. [10] Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023. [11] Tom Silver, Varun Hariprasad, Reece S Shuttleworth, Nishanth Kumar, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. PDDL planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop, 2022. | 2310.08118#15 | 2310.08118#17 | 2310.08118 | [
"2305.10601"
] |
2310.08118#17 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | [12] Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, and Subbarao Kambhampati. On the planning abilities of large language models – a critical investigation. arXiv preprint arXiv:2305.15771, 2023. [13] Daniel S Weld. An introduction to least commitment planning. AI Magazine, 15(4):27–27, 1994. [14] Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, and Jun Zhao. | 2310.08118#16 | 2310.08118#18 | 2310.08118 | [
"2305.10601"
] |
2310.08118#18 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | Large language models are reasoners with self-verification. arXiv preprint arXiv:2212.09561, 2022. [15] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023. | 2310.08118#17 | 2310.08118#19 | 2310.08118 | [
"2305.10601"
] |
2310.08118#19 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | 6 | 2310.08118#18 | 2310.08118 | [
"2305.10601"
] |
|
2310.08319#0 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | # Fine-Tuning LLaMA for Multi-Stage Text Retrieval # Xueguang Ma† Liang Wang‡ Nan Yang‡ Furu Wei‡ Jimmy Lin† †David R. Cheriton School of Computer Science, University of Waterloo ‡Microsoft Research # Abstract | 2310.08319#1 | 2310.08319 | [
"2302.13971"
] |
|
2310.08319#1 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA–RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.1 # Introduction Text retrieval, which entails identifying and ranking the most relevant documents or text snippets in response to a query, is crucial in various open-domain language comprehension tasks (Petroni et al., 2021), including web search (Bajaj et al., 2016), open-domain question answering (Chen et al., 2017), and fact verification (Thorne et al., 2018). Retrieval also plays an important role in enhancing the effectiveness of large language models (LLMs) in a retrieval-augmented generation (RAG) pipeline (Lewis et al., 2020b; Shi et al., 2023). This approach not only mitigates hallucinations but also enables LLMs to access knowledge that is not captured within their parameters (Yang et al., 2023; Jiang et al., 2023). | 2310.08319#0 | 2310.08319#2 | 2310.08319 | [
"2302.13971"
] |
2310.08319#2 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | 1https://huggingface.co/castorini A typical multi-stage text retrieval pipeline con- sists of a retriever, designed to efficiently locate the top-k relevant texts from a corpus, and a reranker, which further refines the order of the retrieved can- didates to improve output quality (Nogueira and Cho, 2019). Both retrievers and rerankers have sig- nificantly benefited from the advent of pre-trained language models based on Transformers (Vaswani et al., 2017) such as BERT (Devlin et al., 2019) and T5 (Raffel et al., 2020). These models are trained to encode queries and documents into vector repre- sentations for retrieval (Karpukhin et al., 2020; Lin, 2021) or to directly score the relevance between a query and a document for reranking (Nogueira et al., 2019; Zhuang et al., 2023). Recent large language models with billions of pa- rameters, fine-tuned to follow instructions, such as InstructGPT (Ouyang et al., 2022), GPT-4 (Open- AI, 2023), and LLaMA (Touvron et al., 2023a,b), have exhibited extraordinary capabilities in many NLP tasks, surpassing previous smaller pre-trained language models (Zhao et al., 2023). For retrieval, recent methods such as LRL (Ma et al., 2023), RankGPT (Sun et al., 2023), and PRP (Qin et al., 2023) have explored prompting LLMs to perform zero-shot reranking using pairwise or listwise ap- proaches. These methods leverage LLMs by view- ing reranking as text generation. | 2310.08319#1 | 2310.08319#3 | 2310.08319 | [
"2302.13971"
] |
2310.08319#3 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | However, we see a number of potential issues. First, these methods do not address the entire multi- stage pipeline, as it is challenging to cast retrieval from a large corpus as a text generation task. Sec- ond, they do not leverage labeled data when avail- able. Finally, these rerankers are not efficient be- cause they do not support parallel scoring and are slowed by their multi-pass decoding design. Therefore, we argue that fine-tuning state-of- the-art large language models to function as re- trievers and rerankers can yield better effective- ness than previous smaller models. This approach can also optimally utilize LLMs within multi-stage | 2310.08319#2 | 2310.08319#4 | 2310.08319 | [
"2302.13971"
] |
2310.08319#4 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | pipelines. Thus, we are motivated to investigate the following research question: How do state-of- the-art large language models perform when specif- ically fine-tuned for multi-stage text retrieval? Our study aims to answer this question by con- ducting a comprehensive investigation into fine- tuning the latest LLaMA-2 model (Touvron et al., 2023b), a state-of-the-art, open-source large lan- guage model, as both a retriever and a reranker, which we refer to as RepLLaMA and RankLLaMA, respectively. Specifically, we utilize the MS MARCO (Bajaj et al., 2016) and BEIR (Thakur et al., 2021) datasets for our experiments. Our find- ings suggest that large language models surpass pre- vious smaller models, achieving state-of-the-art ef- fectiveness for both retrieval and reranking through a straightforward training regime and exhibiting strong zero-shot effectiveness. Furthermore, we ob- serve that LLMs, which are inherently pre-trained on longer contexts, demonstrate potential in repre- senting entire documents, thereby eliminating the need for traditional segmenting and pooling strate- gies for document retrieval. | 2310.08319#3 | 2310.08319#5 | 2310.08319 | [
"2302.13971"
] |
2310.08319#5 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | # 2 Method # 2.1 Preliminaries Task Definition Given a query Q and a corpus C = {D1, D2, ..., Dn} consisting of n documents, the goal of text retrieval is to find the k documents that are most relevant to the query Q, with k â ª n. In a multi-stage retrieval pipeline composed by a retriever and a reranker, the retrieverâ s task is to efficiently generate the top-k candidates that are relevant to the query based on the similarity metric Sim(Q, D) â R. The rerankerâ s task is to reorder these k candidate documents further to improve the relevance order using a more effective, but typ- ically more computationally expensive reranking model. Note that â documentâ in this context can refer to an arbitrary information snippet, including sentences, passages, or full documents. | 2310.08319#4 | 2310.08319#6 | 2310.08319 | [
"2302.13971"
] |
2310.08319#6 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | While a multi-stage pipeline can contain multiple rerankers, in this paper we focus on a single reranker. Modern retrievers typically follow a bi-encoder architecture that encodes text into vector representa- tions, with Sim(Q, D) computed as the dot product of the vector representations of the query Q and a document D (Karpukhin et al., 2020). In con- trast, a (pointwise) reranker typically takes both the query and a candidate document as input to directly generate a relevance score. These scores are then used to reorder the candidates (Nogueira et al., 2019; Gao et al., 2021). | 2310.08319#5 | 2310.08319#7 | 2310.08319 | [
"2302.13971"
] |
2310.08319#7 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | LLaMA LLaMA (Touvron et al., 2023a) is an auto-regressive, decoder-only large language model based on the Transformer architecture. The model is characterized by its billions of param- eters, pre-trained on a vast amount of web data. Being uni-directional means that the modelâ s at- tention mechanism only considers the preceding elements in the input sequence when making pre- dictions. Specifically, given an input sequence x = [t1, t2, ..., tnâ 1], the model computes the prob- ability of the next token tn based solely on the preceding tokens. The prediction process can be mathematically represented as P (tn|t1, ..., tnâ 1), where P denotes the probability and tn represents the next element in the sequence. | 2310.08319#6 | 2310.08319#8 | 2310.08319 | [
"2302.13971"
] |
2310.08319#8 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | # 2.2 Retriever Our retriever model, called RepLLaMA, follows the bi-encoder dense retriever architecture proposed in DPR (Karpukhin et al., 2020), but with the backbone model initialized with LLaMA. Previous work on dense retriever models often uses a bi-directional encoder-only model like BERT, taking the representation of the prepended [CLS] token as the dense representation of the text input. However, as LLaMA is uni-directional, we append an end-of-sequence token </s> to the input query or document to form the input sequence to LLaMA. Thus, the vector embedding of a query or a document is computed as: V_T = Decoder("t1 t2 ... tk </s>")[-1] | 2310.08319#7 | 2310.08319#9 | 2310.08319 | [
"2302.13971"
] |
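A hedged sketch of the last-token pooling described in the chunk above, using Hugging Face Transformers. The checkpoint name, prefixes, and normalization reflect the paper's stated setup, but the code itself is an illustrative assumption rather than the released implementation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative RepLLaMA-style encoding: append </s> and take the final-layer
# hidden state of the last token as the dense representation.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModel.from_pretrained("meta-llama/Llama-2-7b-hf")

def encode(text: str) -> torch.Tensor:
    inputs = tokenizer(text + "</s>", return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state      # (1, seq_len, 4096)
    emb = hidden[0, -1]                                  # representation of the </s> token
    return torch.nn.functional.normalize(emb, dim=-1)   # unit-norm, as in the paper

query_vec = encode("query: what is dense retrieval?")
doc_vec = encode("passage: Dense retrieval encodes text into vectors ...")
score = query_vec @ doc_vec                              # Sim(Q, D) as a dot product
```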
2310.08319#9 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | where Decoder(·) represents the LLaMA model, which returns the last layer token representation for each input token. We take the representation of the end-of-sequence token as the representation of the input sequence t1 . . . tk, which can be either a query Q or a document D. Relevance of D to Q is computed in terms of the dot product of their corresponding dense representations VQ and VD as Sim(Q, D) = <VQ, VD>. The model is then optimized end-to-end according to the InfoNCE loss: L(Q, D+, {DN}) = -log p(D = D+ | Q) = -log [ exp(Sim(Q, D+)) / ( exp(Sim(Q, D+)) + Σ_{Di ∈ {DN}} exp(Sim(Q, Di)) ) ] | 2310.08319#8 | 2310.08319#10 | 2310.08319 | [
"2302.13971"
] |
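A compact sketch of the InfoNCE objective written out in the chunk above, scored over a batch so that hard negatives and in-batch negatives share one softmax. Tensor layout and the optional temperature are illustrative assumptions (the paper's formula has no temperature term).

```python
import torch
import torch.nn.functional as F

# InfoNCE over a batch: each query is scored against every document embedding
# in the batch (its positive, its hard negatives, and other queries' documents).
def infonce_loss(query_embs: torch.Tensor,      # (B, d), unit-normalized
                 doc_embs: torch.Tensor,        # (B * docs_per_query, d)
                 docs_per_query: int,
                 temperature: float = 1.0) -> torch.Tensor:
    scores = query_embs @ doc_embs.T / temperature                 # (B, B * docs_per_query)
    # Assumed layout: the positive for query i sits at column i * docs_per_query,
    # followed by that query's hard negatives.
    targets = torch.arange(query_embs.size(0), device=scores.device) * docs_per_query
    return F.cross_entropy(scores, targets)
```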
2310.08319#10 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Here, D+ represents a document that is relevant to the query Q (based on human labels), while {DN} denotes a set of documents that is not relevant to the query. The set of negative documents includes both hard negatives, which are sampled from the top-ranking results of an existing retrieval system, and in-batch negatives, which are derived from the positive documents and hard negative documents associated with other queries in the same training batch. In practice, dense retrieval training tends to benefit from a larger set of hard negatives and in-batch negatives. During the inference phase, the query is typically encoded in real-time and the top-k similar documents are searched within the pre-encoded corpus using an efficient approximate nearest neighbour search library such as HNSW (Malkov and Yashunin, 2020). However, in this study, we opt to perform exact nearest neighbour search using flat indexes to evaluate model effectiveness. # 2.3 Reranker Our reranker model, referred to as RankLLaMA, is trained as a pointwise reranker. This approach involves passing a query and a candidate document together as model input, with the model generating a score that indicates the relevance of the document to the query (Nogueira et al., 2019). In more detail, RankLLaMA reranks a query–document pair as follows: input = "query: {Q} document: {D}</s>" Sim(Q, D) = Linear(Decoder(input)[-1]) | 2310.08319#9 | 2310.08319#11 | 2310.08319 | [
"2302.13971"
] |
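A hedged sketch of the pointwise scoring template above: the query and candidate document are packed into one sequence and a linear head maps the last token's hidden state to a scalar relevance score. The class wiring is an assumption about how one might implement it, not the released code.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

# Illustrative RankLLaMA-style pointwise reranker: score = Linear(last-token hidden state).
class PointwiseReranker(nn.Module):
    def __init__(self, name: str = "meta-llama/Llama-2-7b-hf"):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(name)
        self.score_head = nn.Linear(self.backbone.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
        return self.score_head(hidden[:, -1]).squeeze(-1)   # one scalar per query-document pair

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PointwiseReranker()
pair = tokenizer("query: what is dense retrieval? document: Dense retrieval ... </s>",
                 return_tensors="pt")
score = model(**pair)   # candidates retrieved by the first stage are sorted by this score
```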
2310.08319#11 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | # 3 Experiments We conduct experiments on MS MARCO passage ranking and document ranking datasets to inves- tigate the effectiveness of the multi-stage text re- trieval pipeline built using RepLLaMA and Rank- LLaMA for both passage and document retrieval. # 3.1 Passage Retrieval Dataset We train our retriever and reranker mod- els with LLaMA on the training split of the MS MARCO passage ranking dataset (Bajaj et al., 2016), which consists of approximately 500k train- ing examples. As discussed in Section 2.2, the incorporation of hard negatives is crucial for the effective training of the retriever. In our case, we use a blend of BM25 and CoCondenser (Gao and Callan, 2022b) hard negatives to ensure that the hard negatives are derived from both sparse and dense retrieval results, thereby enhancing the diver- sity of the samples. For the reranker, we select the hard negatives from the top-200 candidates gener- ated by the retriever. We evaluate the effectiveness of our models us- ing the development split of the MS MARCO pas- sage ranking task, comprising 6980 queries. Ef- fectiveness is reported using MRR@10 as the met- ric. In addition, we also evaluate our models on the TREC DL19/DL20 passage ranking test collec- tions (Craswell et al., 2020, 2021), which include 43 and 54 queries, respectively. These collections utilize the same passage corpus as MS MARCO, but provide query sets with dense, graded human relevance judgments. Following standard practice, we adopt nDCG@10 as the evaluation metric in our experiments. In addition, we assess the zero-shot effectiveness of RepLLaMA and RankLLaMA on BEIR (Thakur et al., 2021), which is a compilation of 18 datasets that spans a variety of domains (e.g., news, medi- cal) and retrieval tasks (e.g., fact verification, ques- tion answering). We focus our evaluation on the 13 datasets that are publicly available. Implementation Details We initialize our mod- els with the LLaMA-2-7B checkpoint2 and train on 16 Ã 32G V100 GPUs. | 2310.08319#10 | 2310.08319#12 | 2310.08319 | [
"2302.13971"
] |
2310.08319#12 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | For RepLLaMA, we extract the final layer representation of the </s> token as the dense representation, which has a di- mensionality of 4096. Additionally, we normalize these dense representations into unit vectors during 2https://huggingface.co/meta-llama/Llama-2-7b-hf Model size Source prev. DEV DL19 DL20 top-k MRR@10 R@1k nDCG@10 nDCG@10 BM25 (Lin et al., 2021) ANCE (Xiong et al., 2021) CoCondenser (Gao and Callan, 2022b) GTR-base (Ni et al., 2022) GTR-XXL (Ni et al., 2022) OpenAI Ada2 (Neelakantan et al., 2022) bi-SimLM (Wang et al., 2023) RepLLaMA - 125M 110M 110M 4.8B ? 110M 7B Retrieval - - - - - - - - |C| |C| |C| |C| |C| |C| |C| |C| 18.4 33.0 38.2 36.6 38.8 34.4 39.1 41.2 85.3 95.9 98.4 98.3 99.0 98.6 98.6 99.4 50.6 64.5 71.7 - - 70.4 69.8 74.3 48.0 64.6 68.4 - - 67.6 69.2 72.1 Reranking monoBERT (Nogueira et al., 2019) cross-SimLM (Wang et al., 2023) RankT5 (Zhuang et al., 2023) RankLLaMA RankLLaMA-13B 110M 110M bi-SimLM 220M 7B 13B BM25 GTR RepLLaMA RepLLaMA 1000 200 1000 200 200 37.2 43.7 43.4 44.9 45.2 85.3 98.7 98.3 99.4 99.4 72.3 74.6 - 75.6 76.0 72.2 72.7 - 77.4 77.9 RankVicuna (Pradeep et al., 2023) PRP (Qin et al., 2023) RankGPT3.5 (Sun et al., 2023) RankGPT4 (Sun et al., 2023) 7B 20B ? ? | 2310.08319#11 | 2310.08319#13 | 2310.08319 | [
"2302.13971"
] |
2310.08319#13 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | BM25 BM25 BM25 RankGPT3.5 100 100 100 30 - - - - - - - - 66.8 72.7 65.8 75.6 65.5 70.5 72.9 70.6 Table 1: The effectiveness of RepLLaMA and RankLLaMA on the MS MARCO passage corpus compared to existing methods. For the retriever, we compare against models trained with binary human judgments, without distillation from a reranker. Evaluation figures are copied from the original papers except for OpenAI Ada2, which is the successor to cpt-text (Neelakantan et al., 2022) and available as a commercial API. The effectiveness numbers of Ada2 are taken from Lin et al. (2023). both the training and inference stages, ensuring that their L2-norms are equal to 1. After pre-encoding the entire corpus, we end up with a 135G flat index for brute-force search. A challenge in fine-tuning LLMs for retrieval is the high GPU memory costs associated with con- trastive learning, as it requires large batch sizes for in-batch negatives. To address this, we em- ploy recent memory efficiency solutions, includ- ing LoRA (Hu et al., 2022), flash attention (Dao, 2023), and gradient checkpointing to reduce GPU memory usage. Both the retriever and reranker are trained with a batch size of 128, with 15 hard negative passages sampled for each query. At in- ference time, RepLLaMA retrieves the top-1000 passages from the corpus and RankLLaMA reranks the top-200 passages retrieved by RepLLaMA. To explore whether increases in model size can further improve effectiveness, we also train a version of RankLLaMA using LLaMA-2-13B initialization.3 In-Domain Evaluation Table 1 presents the ef- fectiveness of RepLLaMA and RankLLaMA on the MS MARCO passage corpus in comparison to existing methods. 3https://huggingface.co/meta-llama/ Llama-2-13b-hf For retrieval, RepLLaMA outperforms all com- peting methods, achieving the highest effective- ness. | 2310.08319#12 | 2310.08319#14 | 2310.08319 | [
"2302.13971"
] |
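An illustrative sketch of the memory-efficient fine-tuning setup mentioned above (LoRA plus gradient checkpointing) using the PEFT library; the rank, alpha, dropout, and target modules are assumptions for illustration, not the paper's reported values.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModel

# Sketch of a LoRA + gradient-checkpointing configuration (hyperparameters assumed).
base = AutoModel.from_pretrained("meta-llama/Llama-2-7b-hf")
base.gradient_checkpointing_enable()                 # trade recomputation for GPU memory

lora_cfg = LoraConfig(
    task_type=TaskType.FEATURE_EXTRACTION,
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],             # assumed subset of attention projections
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()                   # only the low-rank adapters are updated
```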
2310.08319#14 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | The closest system in terms of effective- ness is bi-SimLM (Wang et al., 2023), which Rep- LLaMA outperforms by 2 points MRR@10 on the dev queries. However, bi-SimLM involves a pre- training stage for enhancing the text representation. In contrast, RankLLaMA directly uses the off-the- shelf LLaMA model as initialization. When com- pared to the GTR-XXL retriever, which also uses a model with billions of parameters based on the T5- encoder (Ni et al., 2022), our model achieves higher MRR@10 and Recall@1k on the dev queries and on TREC DL19/DL20. Specifically, RepLLaMA achieves 2.4 points higher MRR@10 and 0.4 points higher Recall@1k than GTR-XXL. It is worth noting that recent studies have shown the potential to further improve dense retrieval models by learning from soft labels provided by a reranker via optimizing KL-divergence. However, in this study, we train our model with only binary judgments. Training RepLLaMA by knowledge distillation will likely lead to further improvements, but we save this for future work. For reranking, RankLLaMA reranks the top-200 passages from RepLLaMA, resulting in the high- est end-to-end effectiveness of any multi-stage re- | 2310.08319#13 | 2310.08319#15 | 2310.08319 | [
"2302.13971"
] |
2310.08319#15 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | BM25 GTR-XXL cpt-text-XL Ada2 SGPT RepLLaMA RankT5 RankLLaMA RankLLaMA model size add. pretrain - - 4.8B Y 175B Y ? ? 5.8B Y 7B N 220M - 7B - 13B - Arguana Climate-FEVER DBPedia FEVER FiQA HotpotQA NFCorpus NQ Quora SCIDOCS SciFact TREC-COVID Touche-2020 39.7 16.5 31.8 65.1 23.6 63.3 32.2 30.6 78.9 14.9 67.9 59.5 44.2 54.0 26.7 40.8 74.0 46.7 59.9 34.2 56.8 89.2 16.1 66.2 50.1 25.6 43.5 22.3 43.2 77.5 51.2 68.8 40.7 - 63.8 - 75.4 64.9 29.1 56.7 23.7 40.2 77.3 41.1 65.4 35.8 48.2 87.6 18.6 73.6 81.3 28.0 51.4 30.5 39.9 78.3 37.2 59.3 36.2 52.4 84.6 19.7 74.7 87.3 25.4 48.6 31.0 43.7 83.4 45.8 68.5 37.8 62.4 86.8 18.1 75.6 84.7 30.5 33.0 21.5 44.2 83.2 44.5 71.0 38.1 61.4 83.1 18.1 75.0 80.7 44.0 56.0 28.0 48.3 83.9 46.5 75.3 30.3 66.3 85.0 17.8 73.2 85.2 40.1 50.8 29.2 48.7 86.2 48.1 76.4 28.4 66.7 81.7 19.1 73.0 86.1 40.6 Average 43.7 49.3 - 52.1 52.1 55.1 53.7 56.6 56.5 | 2310.08319#14 | 2310.08319#16 | 2310.08319 | [
"2302.13971"
] |
2310.08319#16 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Table 2: Zero-shot effectiveness of RepLLaMA and RankLLaMA on BEIR datasets. The â add. pretrainâ row indicates whether the retriever model has undergone additional contrastive pre-training before supervised fine-tuning. The zero-shot effectiveness numbers of Ada2 are taken from Kamalloo et al. (2023). trieval system that we are aware of. Our complete RepLLaMAâ RankLLaMA pipeline beats the pre- vious state-of-the-art reranker, RankT5 (Zhuang et al., 2023), by 1.5 points MRR@10. Furthermore, our RankLLaMA-13B model outperforms the 7B model, achieving 0.3 points higher MRR@10 and slightly higher nDCG@10 on both DL19 and DL20. This indicates the potential for further improve- ments with even larger models. In contrast, RepLLaMA uses the base pre-trained model as initialization, achieving the highest zero- shot effectiveness we are aware of while maintain- ing simplicity. RankLLaMA-7B further enhances the retrieverâ s effectiveness by an average of 1.5 points on nDCG@10. Interestingly, the larger RankLLaMA-13B model does not appear to yield any further improvements. Compared to RankGPT4 (Sun et al., 2023), which prompts GPT-4 to perform passage rerank- ing through permutation generation within a multi- stage retrieval pipeline, our RepLLaMAâ Rank- LLaMA pipeline outperforms it by 0.4 and 7.3 nDCG@10 points on DL19 and DL20, respectively. As a pointwise reranker, RankLLaMA can rerank candidate passages in parallel, which means that inference can be accelerated to reduce latency as compared to RankGPT, which depends on a se- quential sliding-window strategy to rerank. Zero-Shot Evaluation The zero-shot evaluation of RepLLaMA and RankLLaMA on the BEIR datasets is presented in Table 2. Both models demonstrate superior zero-shot effectiveness, out- performing existing models. RepLLaMA surpasses other existing dense retrievers with billions of pa- rameters. | 2310.08319#15 | 2310.08319#17 | 2310.08319 | [
"2302.13971"
] |
2310.08319#17 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Specifically, it outperforms SGPT (Muen- nighoff, 2022) and Ada2 by 3 points and exceeds GTR-XXL by approximately 6 points. Note that these methods require an unsupervised contrastive pre-training stage before the supervised fine-tuning. # 3.2 Document Retrieval Dataset The document retrieval task aims to rank document-length texts, which present the challenge of handling long input sequences (Bajaj et al., 2016). As illustrated in Figure 1, the MS MARCO document ranking corpus has an average docu- ment length of around 1500 tokens. Notably, only 24% of the documents have fewer than 512 to- kens, which is the maximum input length for most previous rerankers based on smaller pre-trained language models like BERT (Devlin et al., 2019). The standard solution to manage long sequences for retrieval is the MaxP strategy (Dai and Callan, 2019), which involves dividing the document into overlapping segments and determining the docu- ment relevance score based on the segment with the highest score. However, this process involves a heuristic pooling strategy and runs the risk of losing information spread across long contexts. Recent language models pre-trained on longer sequences (e.g., 4096 tokens for LLaMA-2) offer the poten- tial to represent document-length texts â | 2310.08319#16 | 2310.08319#18 | 2310.08319 | [
"2302.13971"
] |
2310.08319#18 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | in one goâ , reducing the need for segmentation. 1.0 0.8 0.6 0.4 0.2 0.0 10 100 512 1000 2048 4096 10000 Sequence Length # CDF Figure 1: Cumulative distribution function of document lengths in the MS MARCO document corpus, showing the proportion of documents that has a length less than a specific value (determined by the LLaMA tokenizer). For clarity, we exclude 3% of documents with a length exceeding 10,000 tokens. Model size Source prev. | 2310.08319#17 | 2310.08319#19 | 2310.08319 | [
"2302.13971"
] |
2310.08319#19 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Seg. Dev top-k Y/N MRR@100 R@1k DL19 nDCG@10 DL20 nDCG@10 BM25 (Lin et al., 2021) BM25-Q2D (Pradeep et al., 2021) CoCondenser-MaxP RepLLaMA - - 110M 7B Retrieval - - - - |C| |C| |C| |C| N Y Y N 23.0 31.8 42.5 45.6 85.3 94.9 93.9 98.9 51.8 61.2 64.8 65.0 52.9 59.6 64.0 63.2 Reranking monoT5 (Pradeep et al., 2021) MORES+ (Gao and Callan, 2022a) RankLLaMA 3B BM25-Q2D 10000 100 100 110M CoCondenser RepLLaMA 7B Y Y N 41.1 49.3 50.3 94.9 - 98.9 - - 67.7 - - 67.4 Table 3: The effectiveness of RepLLaMA and RankLLaMA on the MS MARCO document corpus compared to existing methods. By default we allow the retriever and reranker to take the first 2048 tokens as input without any seg- mentation, which is a reasonable trade-off between input sequence length and the cost of training. This approach covers about 77% of the documents in the corpus entirely. We create the training data for the document retriever and reranker models based on the 300k training examples in the training set. Sim- ilar to the approach in passage ranking, we sample the hard negative documents to train RepLLaMA from the top-100 hard negatives from BM25 and our implementation of CoCondenser-MaxP. Here, BM25 directly indexes the entire documents, while CoCondenser retrieves documents using the afore- mentioned MaxP strategy. The hard negatives for RankLLaMA are selected from the top-100 results of RepLLaMA. Evaluation of document retrieval is performed on the development split of the MS MARCO docu- ment ranking dataset, which contains 5193 queries. | 2310.08319#18 | 2310.08319#20 | 2310.08319 | [
"2302.13971"
] |
2310.08319#20 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Additionally, we evaluate our models on the TREC DL19/DL20 document ranking tasks, comprising 43 and 45 queries, respectively. document RepLLaMA and RankLLaMA, with the same computing resources. However, there are two key differences: First, the models are trained with a batch size of 128, with each query sampling 7 hard negative passages. Second, during inference, Rep- LLaMA retrieves the top-1000 documents while RankLLaMA reranks the top-100 documents that are retrieved by RepLLaMA. The document model also generates text embeddings with 4096 dimen- sions. For the MS MARCO document corpus, this results in a 49G (flat) index after pre-encoding the entire corpus. | 2310.08319#19 | 2310.08319#21 | 2310.08319 | [
"2302.13971"
] |
2310.08319#21 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Results Table 3 reports the effectiveness of our RepLLaMAâ RankLLaMA pipeline for full- document retrieval on the MS MARCO docu- ment corpus. We see that both our retriever and reranker outperform existing methods. RepLLaMA achieves an MRR@100 score that is approxi- mately 3 points higher than CoCondenser-MaxP, while RankLLaMA exceeds (to our knowledge) the current state-of-the-art document reranker, MORES+ (Gao and Callan, 2022a), by 1 point in MRR@100. Implementation Details We follow a similar setup as in the passage ranking task to train both We again emphasize that both our retriever and reranker do not require document segmentation Train Dev DL19 DL20 46.6 FT LoRA 40.8 41.6 41.2 72.8 74.3 69.9 72.1 Table 4: Comparison of MRR@10 between full fine- tuning (FT) and LoRA when training RepLLaMA for the passage retrieval task. and rank score aggregation. Instead, RepLLaMA directly consumes the entire document, and Rank- LLaMA directly scores the relevance of the entire queryâ document pair. # 4 Ablation Study and Analysis # 4.1 Full Fine-Tuning vs. | 2310.08319#20 | 2310.08319#22 | 2310.08319 | [
"2302.13971"
] |
2310.08319#22 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | LoRA When fine-tuning large language models, a key de- cision is whether to conduct full fine-tuning, which updates all parameters in the model, or to use a parameter-efficient method such as LoRA. Table 4 compares the effectiveness of RepLLaMA when trained with full fine-tuning and LoRA for the pas- sage retrieval task. Both models are trained on the training set for one epoch. full fine-tuning achieves an MRR@10 score that is approximately 6 points higher than with LoRA on the training set. How- ever, on the development set, full fine-tuning only improves effectiveness by 0.4 points compared to LoRA. Interestingly, on the TREC DL19/DL20 datasets, which are derived from independent hu- man judgments, LoRA demonstrates better effec- tiveness. This suggests that full fine-tuning may be prone to overfitting on the training set distribution, while LoRA, with significantly fewer parameters, can generalizable better. For this reason, all the models presented in our main experiments (Sec- tion 3) use LoRA instead of full fine-tuning. | 2310.08319#21 | 2310.08319#23 | 2310.08319 | [
"2302.13971"
] |
2310.08319#23 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | # Input Sequence Length As discussed in Section 3.2, RankLLaMA has the advantage of accommodating longer inputs compared to previous models like BERT since its LLaMA backbone was pre-trained with a longer context window. We investigate the effects of vary- ing the maximum training input length and infer- ence input length on model effectiveness for the document reranking task. Results presented in Fig- ure 2 show a clear trend: the effectiveness of Rank- LLaMA improves as the maximum training length increases from 512 to 2048, with the MRR@100 score improving from 48.5 to 50.3. | 2310.08319#22 | 2310.08319#24 | 2310.08319 | [
"2302.13971"
] |
2310.08319#24 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | When the b o MRR@100 ES BS} 46 45 1000 2000 Input Length 3000 4000 Figure 2: Comparison of document ranking MRR@100 scores for RankLLaMA trained with different maximum input lengths and evaluated using different maximum input lengths. Each line represents a model trained with a specific maximum length, while points along the line indicate the effectiveness when varying the input length during inference (reranking). reranking input length is further increased to 4096, the MRR@100 score rises to 50.6. This demon- strates the modelâ s ability to exploit longer se- quences for improved effectiveness. However, it is important to note that the gains plateau beyond a certain length, suggesting a point of diminishing returns. The MRR@100 for the model trained with a length of 4096 is only 0.3 points higher than the model trained with a length of 2048, when evaluated on input lengths that match their training lengths. Moreover, the model trained with a length of 4096 takes about 8 days to train using 16 Ã V100 GPUs, while the model with a length of 2048 takes about 4 days. The same relative latency costs apply to inference as well. Therefore, while RankLLaMA can handle much longer input documents, it is crucial to balance this capability with the practical considerations of computational efficiency. # 5 Related Work # 5.1 Large Language Models Pre-trained language models based on the Trans- former architecture (Vaswani et al., 2017) have demonstrated impressive capabilities when fine- tuned for various downstream tasks since the ad- vent of BERT (Devlin et al., 2019). Depending on their architecture, pre-trained Transformers can be classified into three categories: encoder-only mod- els (Devlin et al., 2019; Liu et al., 2019; Conneau et al., 2020), encoderâ decoder models (Raffel et al., 2020; Lewis et al., 2020a), and decoder-only mod- els (Radford et al., 2018). Decoder-only models like GPT/GPT-2 have been lauded for their simplic- ity in terms of model architecture and pre-training procedures (Radford et al., 2018, 2019). | 2310.08319#23 | 2310.08319#25 | 2310.08319 | [
"2302.13971"
] |
2310.08319#25 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Recent research has shown that scaling up LLMs by pre-training larger decoder-only models using larger and higher quality corpora can significantly enhance model capabilities for general-purpose NLP tasks such as question answering and code generation (Wei et al., 2022; Chen et al., 2021). This is achieved by fine-tuning the pre-trained LLMs with instruction-following data using rein- forcement learning with human feedback. Instruct- GPT (Ouyang et al., 2022) and GPT-4 (OpenAI, 2023) are two popular representatives in this class of models. Among the many implementations of open-source large language models, LLaMA (Tou- vron et al., 2023a,b) is among the most recent and among the top-performing on a variety of tasks. # 5.2 Multi-Stage Text Retrieval While multi-stage retrieval pipelines date back well over a decade (Matveeva et al., 2006; Cambazoglu et al., 2010; Wang et al., 2011), they have bene- fited immensely from pre-trained language mod- els such as BERT in recent years, starting with the monoBERT reranking model (Nogueira and Cho, 2019). Nogueira et al. (2019) proposed a multi-stage retrieval pipeline that employs a BM25 retriever followed by two BERT-based reranking stages. This design demonstrates the effective- ness of pre-trained language models in reranking tasks. RankLLaMA follows the same basic de- sign as monoBERT. The dense passage retriever (DPR) further proposed to fine-tune BERT to re- place the BM25 retriever with a dense retrieval model in a bi-encoder design (Karpukhin et al., 2020). DPR encodes text into low-dimensional dense vector representations and treats retrieval as a nearest-neighbor search task. RepLLaMA fol- lows the same bi-encoder design. Several solutions have been introduced to en- hance the effectiveness of retrievers and rerankers in a multi-stage pipeline. | 2310.08319#24 | 2310.08319#26 | 2310.08319 | [
"2302.13971"
] |
2310.08319#26 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | On the retriever side, works such as ANCE (Xiong et al., 2021), Rocket- QA (Qu et al., 2021), CoCondenser (Gao and Callan, 2022b), RetroMAE (Xiao et al., 2022), and SimLM (Wang et al., 2023), have shown that aug- menting the training data with hard negative mining or continuous retrieval-oriented pre-training can improve the effectiveness of dense retrievers. On the reranker side, monoT5 (Nogueira et al., 2020) and monoELECTRA (Pradeep et al., 2022) demon- strated that initializing the reranker with a custom pre-trained model can enhance effectiveness. Gao et al., 2021 proposed using a contrastive loss for reranker training to replace the default pairwise loss. Zhuang et al. (2023) studied the use of T5 as a reranker, analyzing the influence of different model architectures and loss functions. However, directly fine-tuning modern billion-parameter-scale large language models for multi-stage retrieval has not been explored to date. Recently, LLMs have demonstrated impressive effectiveness when prompted to perform few-shot or zero-shot text generation. As mentioned in the introduction, researchers have cast reranking as text generation. These models can be leveraged to directly generate a reordered list of candidates, e.g., LRL (Ma et al., 2023), RankGPT (Sun et al., 2023), RankVicuna (Pradeep et al., 2023). Alternatively, they can compare passages in a pairwise manner, e.g., PRP (Qin et al., 2023). Although prompt- based methods have shown good zero-shot effec- tiveness, they require multiple decoding passes, thus making them slow and non-parallelizable. Fur- thermore, reranking with prompts makes it difficult to exploit available human judgments such as MS MARCO (Bajaj et al., 2016) to further improve effectiveness. Finally, these approaches do not al- low for joint rerankerâ retriever optimization. In contrast, we address all these issues. | 2310.08319#25 | 2310.08319#27 | 2310.08319 | [
"2302.13971"
] |
2310.08319#27 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Our work is most similar to GPT-XXL (Ni et al., 2022) and SGPT (Muennighoff, 2022), which also used billion-parameter-scale models as backbones of dense retrievers, achieving better zero-shot effec- tiveness than smaller models. However, LLaMA has demonstrated even better effectiveness on nat- ural language generation tasks, suggesting that it might serve as a better backbone and warranting further exploration. The cpt-text model (Neelakan- tan et al., 2022), initialized with the 175-billion- parameter GPT-3 model, also shows strong zero- shot effectiveness. However, cpt-text is not an open- source model. Additionally, none of the models referenced above are fully optimized for a multi- stage retrieval pipeline. Our RepLLaMA and Rank- LLaMA models are fully open-source and opti- mized for multi-stage retrieval, achieving state-of- the-art effectiveness on both retrieval and reranking, for both in-domain and out-of-domain evaluations. | 2310.08319#26 | 2310.08319#28 | 2310.08319 | [
"2302.13971"
] |
2310.08319#28 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | # 6 Conclusion The successful application of large language mod- els in generative tasks has sparked interest in their potential to enhance retrieval. In this study, we demonstrate that it is possible to fine-tune a large model to act as a dense retriever (RepLLaMA) and a pointwise reranker (RankLLaMA), thereby es- tablishing an effective, state-of-the-art multi-stage retrieval system that outperforms smaller models built on the same basic design. Moreover, our ap- proach offers greater optimization and efficient in- ference potential than recent methods that prompt large language models for text reranking in a gener- ative manner. This work underscores the potential of leveraging LLMs for retrieval tasks in the future, which we continue to explore. # Acknowledgments This research was supported in part by the Nat- ural Sciences and Engineering Research Council (NSERC) of Canada. # References Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, An- drew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv:1611.09268. B. Barla Cambazoglu, Hugo Zaragoza, Olivier Chapelle, Jiang Chen, Ciya Liao, Zhaohui Zheng, and Jon De- genhardt. 2010. | 2310.08319#27 | 2310.08319#29 | 2310.08319 | [
"2302.13971"
] |
2310.08319#29 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Early exit optimizations for additive machine learned ranking systems. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, WSDM â 10, page 411â 420, New York, NY, USA. Association for Computing Machinery. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1870â | 2310.08319#28 | 2310.08319#30 | 2310.08319 | [
"2302.13971"
] |
2310.08319#30 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | 1879, Vancouver, Canada. Association for Computational Linguistics. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka- plan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sas- try, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cum- mings, Matthias Plappert, Fotios Chantzis, Eliza- beth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welin- der, Bob McGrew, Dario Amodei, Sam McCan- dlish, Ilya Sutskever, and Wojciech Zaremba. 2021. | 2310.08319#29 | 2310.08319#31 | 2310.08319 | [
"2302.13971"
] |
2310.08319#31 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Evaluating large language models trained on code. arXiv:2107.03374. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440â | 2310.08319#30 | 2310.08319#32 | 2310.08319 | [
"2302.13971"
] |
2310.08319#32 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | 8451, Online. Association for Computational Lin- guistics. Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv:2102.07662. Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv:2003.07820. Zhuyun Dai and Jamie Callan. 2019. Deeper text under- standing for IR with contextual neural language mod- eling. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIRâ 19, page 985â 988, New York, NY, USA. Association for Computing Machinery. | 2310.08319#31 | 2310.08319#33 | 2310.08319 | [
"2302.13971"
] |
2310.08319#33 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | FlashAttention-2: Faster atten- tion with better parallelism and work partitioning. arXiv:2307.08691. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171â | 2310.08319#32 | 2310.08319#34 | 2310.08319 | [
"2302.13971"
] |
2310.08319#34 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | 4186, Minneapolis, Minnesota. Association for Computational Linguistics. Luyu Gao and Jamie Callan. 2022a. Long document re-ranking with modular re-ranker. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR â 22, page 2371â 2376, New York, NY, USA. Association for Computing Machinery. Luyu Gao and Jamie Callan. 2022b. Unsupervised cor- pus aware language model pre-training for dense pas- sage retrieval. | 2310.08319#33 | 2310.08319#35 | 2310.08319 | [
"2302.13971"
] |
2310.08319#35 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | In Proceedings of the 60th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 2843â 2853, Dublin, Ireland. Association for Computational Lin- guistics. Luyu Gao, Zhuyun Dai, and Jamie Callan. 2021. Re- think training of BERT rerankers in multi-stage re- In Advances in Information Re- trieval pipeline. trieval: 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 â April 1, 2021, Proceedings, Part II, page 280â 286, Berlin, Heidel- berg. Springer-Verlag. | 2310.08319#34 | 2310.08319#36 | 2310.08319 | [
"2302.13971"
] |
2310.08319#36 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations. Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv:2305.06983. Ehsan Kamalloo, Xinyu Zhang, Odunayo Ogundepo, Nandan Thakur, David Alfonso-hermelo, Mehdi Rezagholizadeh, and Jimmy Lin. 2023. Evaluat- ing embedding APIs for information retrieval. In Proceedings of the 61st Annual Meeting of the As- sociation for Computational Linguistics (Volume 5: Industry Track), pages 518â 526, Toronto, Canada. Association for Computational Linguistics. | 2310.08319#35 | 2310.08319#37 | 2310.08319 | [
"2302.13971"
] |
2310.08319#37 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open- domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769â 6781, Online. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. | 2310.08319#36 | 2310.08319#38 | 2310.08319 | [
"2302.13971"
] |
2310.08319#38 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7871â 7880, Online. Association for Computa- tional Linguistics. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich Küttler, Mike Lewis, Wen-tau Yih, Tim Rock- täschel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge- intensive NLP tasks. In Advances in Neural Infor- mation Processing Systems, volume 33, pages 9459â | 2310.08319#37 | 2310.08319#39 | 2310.08319 | [
"2302.13971"
] |
2310.08319#39 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | 9474. Curran Associates, Inc. Jimmy Lin. 2021. A proposed conceptual framework for a representational approach to information re- trieval. arXiv:2110.01529. Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng- Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense In Proceedings of the 44th Inter- representations. national ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR â | 2310.08319#38 | 2310.08319#40 | 2310.08319 | [
"2302.13971"
] |
2310.08319#40 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | 21, page 2356â 2362, New York, NY, USA. Association for Computing Machinery. Jimmy Lin, Ronak Pradeep, Tommaso Teofili, and Jasper Xian. 2023. Vector search with OpenAI em- beddings: Lucene is all you need. arXiv:2308.14963. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. | 2310.08319#39 | 2310.08319#41 | 2310.08319 | [
"2302.13971"
] |
2310.08319#41 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692. Xueguang Ma, Xinyu Crystina Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-shot listwise doc- ument reranking with a large language model. arXiv:2305.02156. Yu A. Malkov and D. A. Yashunin. 2020. Efficient and robust approximate nearest neighbor search us- ing hierarchical navigable small world graphs. IEEE Transactions on Pattern Analysis and Machine Intel- ligence, 42(4):824â 836. Irina Matveeva, Chris Burges, Timo Burkard, Andy Lau- cius, and Leon Wong. 2006. | 2310.08319#40 | 2310.08319#42 | 2310.08319 | [
"2302.13971"
] |
2310.08319#42 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | High accuracy retrieval with multiple nested ranker. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Re- trieval, SIGIR â 06, page 437â 444, New York, NY, USA. Association for Computing Machinery. Niklas Muennighoff. 2022. SGPT: GPT sentence em- beddings for semantic search. arXiv:2202.08904. Arvind Neelakantan, Tao Xu, Raul Puri, Alec Rad- ford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, and Lilian Weng. 2022. Text and code embeddings by con- trastive pre-training. arXiv:2201.10005. Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernandez Abrego, Ji Ma, Vincent Zhao, Yi Luan, Keith Hall, Ming-Wei Chang, and Yinfei Yang. 2022. Large dual encoders are generalizable retrievers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9844â | 2310.08319#41 | 2310.08319#43 | 2310.08319 | [
"2302.13971"
] |
2310.08319#43 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | 9855, Abu Dhabi, United Arab Emirates. As- sociation for Computational Linguistics. Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. arXiv:1901.04085. Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pre- In Findings trained sequence-to-sequence model. of the Association for Computational Linguistics: EMNLP 2020, pages 708â 718, Online. Association for Computational Linguistics. Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019. Multi-stage document ranking with BERT. arXiv:1910.14424. OpenAI. 2023. GPT-4 technical report. arXiv:2303.08774. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Car- roll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welin- der, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022. Training language models to follow in- structions with human feedback. arXiv:2203.02155. Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523â 2544, Online. Association for Computational Linguistics. Ronak Pradeep, Yuqi Liu, Xinyu Zhang, Yilin Li, An- drew Yates, and Jimmy Lin. 2022. | 2310.08319#42 | 2310.08319#44 | 2310.08319 | [
"2302.13971"
] |
2310.08319#44 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Squeezing water from a stone: A bag of tricks for further improv- ing cross-encoder effectiveness for reranking. In Advances in Information Retrieval, pages 655â 670, Cham. Springer International Publishing. Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin. 2021. The expando-mono-duo design pattern for text ranking with pretrained sequence-to-sequence models. arXiv:2101.05667. Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin. 2023. | 2310.08319#43 | 2310.08319#45 | 2310.08319 | [
"2302.13971"
] |
2310.08319#45 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | RankVicuna: Zero-shot listwise docu- ment reranking with open-source large language mod- els. arXiv:2309.15088. Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Ben- dersky. 2023. Large language models are effec- tive text rankers with pairwise ranking prompting. arXiv:2306.17563. Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized train- ing approach to dense passage retrieval for open- domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 5835â 5847, On- line. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. standing by generative pre-training. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. | 2310.08319#44 | 2310.08319#46 | 2310.08319 | [
"2302.13971"
] |
2310.08319#46 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1â 67. Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen tau Yih. 2023. REPLUG: Retrieval-augmented black-box language models. arXiv:2301.12652. Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? Investigating large language models as re-ranking agent. arXiv:2304.09542. Nandan Thakur, Nils Reimers, Andreas Rücklé, Ab- hishek Srivastava, and Iryna Gurevych. 2021. BEIR: | 2310.08319#45 | 2310.08319#47 | 2310.08319 | [
"2302.13971"
] |
2310.08319#47 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Con- ference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). Christos Thorne, and Arpit Mittal. 2018. Christodoulopoulos, FEVER: a large-scale dataset for fact extraction In Proceedings of the 2018 and VERification. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809â 819, New Orleans, Louisiana. Association for Computational Linguistics. | 2310.08319#46 | 2310.08319#48 | 2310.08319 | [
"2302.13971"
] |
2310.08319#48 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. LLaMA: Open and efficient foundation language models. arXiv:2302.13971. Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cris- tian Cantón Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hos- seini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, An- gela Fan, Melanie Kambadur, Sharan Narang, Aure- lien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. | 2310.08319#47 | 2310.08319#49 | 2310.08319 | [
"2302.13971"
] |
2310.08319#49 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Å ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc. Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2023. SimLM: Pre-training with repre- sentation bottleneck for dense passage retrieval. In Proceedings of the 61st Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2244â | 2310.08319#48 | 2310.08319#50 | 2310.08319 | [
"2302.13971"
] |
2310.08319#50 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | 2258, Toronto, Canada. Association for Computational Linguistics. Lidan Wang, Jimmy Lin, and Donald Metzler. 2011. A cascade ranking model for efficient ranked retrieval. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, SIGIR â 11, page 105â 114, New York, NY, USA. Association for Computing Machin- ery. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le, and Denny Zhou. 2022. | 2310.08319#49 | 2310.08319#51 | 2310.08319 | [
"2302.13971"
] |
2310.08319#51 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Chain of thought prompt- ing elicits reasoning in large language models. arXiv:2201.11903. Shitao Xiao, Zheng Liu, Yingxia Shao, and Zhao Cao. 2022. RetroMAE: Pre-training retrieval-oriented lan- guage models via masked auto-encoder. In Proceed- ings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 538â 548, Abu Dhabi, United Arab Emirates. Association for Com- putational Linguistics. Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neigh- bor negative contrastive learning for dense text re- trieval. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Nan Yang, Tao Ge, Liang Wang, Binxing Jiao, Daxin Jiang, Linjun Yang, Rangan Majumder, and Furu Wei. 2023. | 2310.08319#50 | 2310.08319#52 | 2310.08319 | [
"2302.13971"
] |
2310.08319#52 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | Inference with reference: Lossless accelera- tion of large language models. arXiv:2304.04487. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Z. Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jianyun Nie, and Ji rong Wen. 2023. A survey of large language models. arXiv:2303.18223. Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, and Michael Bendersky. 2023. | 2310.08319#51 | 2310.08319#53 | 2310.08319 | [
"2302.13971"
] |
2310.08319#53 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | RankT5: Fine-tuning T5 for text ranking with ranking losses. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR â 23, page 2308â 2313, New York, NY, USA. Association for Computing Machinery. | 2310.08319#52 | 2310.08319 | [
"2302.13971"
] |
|
2310.07712#0 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | arXiv:2310.07712v1 [cs.CL] 11 Oct 2023 # Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models # Raphael Tang,*1 Xinyu Zhang,*2 Xueguang Ma,2 Jimmy Lin,2 Ferhan Ture1 1Comcast Applied AI 2University of Waterloo 1{raphael_tang, ferhan_ture}@comcast.com 2{x978zhang, x93ma, jimmylin}@uwaterloo.ca # Abstract | 2310.07712#1 | 2310.07712 | [
"2305.17926"
] |
|
2310.07712#1 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context, which especially complicates listwise ranking. To address this, we propose permutation self-consistency, a form of self-consistency over ranking list outputs of black-box LLMs. Our key idea is to marginalize out different list orders in the prompt to produce an order-independent ranking with less positional bias. First, given some input prompt, we repeatedly shuffle the list in the prompt and pass it through the LLM while holding the instructions the same. Next, we aggregate the resulting sample of rankings by computing the central ranking closest in distance to all of them, marginalizing out prompt order biases in the process. Theoretically, we prove the robustness of our method, showing convergence to the true ranking in the presence of random perturbations. Empirically, on five list-ranking datasets in sorting and passage reranking, our approach improves scores from conventional inference by up to 7–18% for GPT-3.5 and 8–16% for LLaMA v2 (70B), surpassing the previous state of the art in passage reranking. Our code is at https://github.com/castorini/perm-sc. # 1 Introduction [Figure 1: The conventional decoding process for listwise ranking with input prompt (a), language model (c), and output ranking (d). The grey item (b) is "lost in the middle" by the LLM, resulting in its misranking (e).] [Figure 2: Our permutation self-consistency process. With the instruction fixed, we shuffle the input list for prompts (a), producing outputs with different mistakes. We then aggregate (b) these output rankings into one (c).] | 2310.07712#0 | 2310.07712#2 | 2310.07712 | [
"2305.17926"
] |
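The shuffle-and-aggregate procedure that the abstract and Figure 2 describe is easy to prototype. The sketch below is illustrative only: `rank_with_llm` is a hypothetical stand-in for the black-box listwise ranker, the Kemeny aggregation is brute force and so only viable for short lists, and none of it is the authors' released implementation (see https://github.com/castorini/perm-sc for that).

```python
import itertools
import random
from typing import Callable, List, Sequence, Tuple

def kendall_tau_distance(r1: Sequence[int], r2: Sequence[int]) -> int:
    # Number of discordant pairs; r[i] is the rank assigned to item i.
    n = len(r1)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if (r1[i] - r1[j]) * (r2[i] - r2[j]) < 0
    )

def kemeny_aggregate(rankings: List[Sequence[int]]) -> Tuple[int, ...]:
    # Exact Kemeny-Young central ranking by exhaustive search (small n only;
    # the paper points to fast exact and approximate algorithms for larger lists).
    n = len(rankings[0])
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(n)):  # perm[i] = candidate rank of item i
        cost = sum(kendall_tau_distance(perm, r) for r in rankings)
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

def permutation_self_consistency(
    items: List[str],
    rank_with_llm: Callable[[List[str]], List[int]],  # hypothetical: ranks the list it is given
    m: int = 20,
    seed: int = 0,
) -> Tuple[int, ...]:
    rng = random.Random(seed)
    sampled = []
    for _ in range(m):
        order = list(range(len(items)))
        rng.shuffle(order)                           # random input permutation
        shuffled = [items[i] for i in order]
        ranks_of_shuffled = rank_with_llm(shuffled)  # one LLM call per shuffled prompt
        ranks = [0] * len(items)                     # map ranks back to the original item ids
        for pos, item_idx in enumerate(order):
            ranks[item_idx] = ranks_of_shuffled[pos]
        sampled.append(ranks)
    return kemeny_aggregate(sampled)                 # marginalizes out prompt order
```

A real `rank_with_llm` would render a listwise prompt over the shuffled items, call the model once, and parse the returned order; everything else above is model-agnostic, which is what makes the method applicable to closed-source LLMs.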
2310.07712#2 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | interfere with the model. Liu et al. (2023) demon- strate that LLMs tend to get â lost in the middleâ of a long context and use the middle portion poorly, which suggests that the middle passage (2) in the example may get misranked (e.g., 3, 1, 2). Wang et al. (2023a) find prompt order to affect quality, with some orders outperforming others; if items 1 and 3 were swapped in the prompt, the LLM would perhaps generate the mistaken ranking (2, 1, 3). Large language models (LLMs) respond cogently to free-form textual prompts and represent the state of the art across many tasks (Zhao et al., 2023). Their quality, however, varies with nuisance posi- tional factors such as prompt order and input length. As a descriptive example, consider this prompt: Arrange the following passages in decreasing relevance to the query, â | 2310.07712#1 | 2310.07712#3 | 2310.07712 | [
"2305.17926"
] |
2310.07712#3 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | what are shrews?â (1) Cats hunt small mammals, such as shrews ... (2) Shrews are mole-like mammals, widely ... (3) Shrews use their noses to find prey and ... The correct output order is (2, 3, 1), from most rel- evant to least, but several positional biases may In this paper, we mitigate positional biases for listwise-ranking LLMs. We propose permutation self-consistency, a novel decoding strategy for im- proving the quality, consistency, and prompt-order invariance of black-box LLMs. First, we construct prompts with randomly permuted input lists, from which the LLM generates a set of output rankings. Then, we aggregate these outputs into the central ranking that minimizes the Kendall tau distance to all of them, marginalizing out prompt order as a factor; see Figures 1 and 2. As related work, Stoehr et al. (2023) train order-aware probes on the latent representations of language models to increase con- sistency, but they assume white-box model access, whereas we do not. | 2310.07712#2 | 2310.07712#4 | 2310.07712 | [
"2305.17926"
] |
2310.07712#4 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | # â Equal contribution. Next, we assess the effectiveness of permutation self-consistency, both theoretically and empirically. Theoretically, we prove that it recovers the true ranking under arbitrary noise distributions, with enough observations and at least one correctly or- dered pair in each observation. Experimentally, we apply our method to tasks in math and word sorting, sentence ordering, and passage reranking, consistently increasing the scores of GPT-3.5, GPT- 4, and LLaMA v2 (70B; Touvron et al., 2023) by up to 4â | 2310.07712#3 | 2310.07712#5 | 2310.07712 | [
"2305.17926"
] |
2310.07712#5 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | 17%, 9â 24%, and 8â 16%, respectively. On TREC-DL19 and TREC-DL20 (Craswell et al., 2020, 2021), two passage ranking datasets, we establish the new state of the art. From this evi- dence on multiple tasks, we conclude that permuta- tion self-consistency improves listwise ranking in LLMs, which is partially influenced by positional bias, as shown in Section 3.2. Finally, we conduct auxiliary analyses to justify our design choices. In Section 4.1, our hyperparam- eter study finds that quality quickly rises with the number of aggregated output rankings: the score improvement from using five aggregated rankings reaches 67% of twenty, on average, suggesting that a few suffice for quality gain. We further demon- strate that sampling temperature is ineffective for us, unlike the original self-consistency work (Wang et al., 2023b) in chain-of-thought reasoning, likely because listwise ranking does not require explo- ration of various reasoning paths. Our contributions are as follows: (1) we propose a novel decoding technique for improving the qual- ity, consistency, and position invariance of black- box, listwise-ranking LLMs; (2) we empirically establish the new state of the art in passage rerank- ing and theoretically prove the robustness of our method to certain classes of ranking noise, includ- ing â | 2310.07712#4 | 2310.07712#6 | 2310.07712 | [
"2305.17926"
] |
2310.07712#6 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | lost-in-the-middleâ type ones; and (3) we pro- vide new analyses on positional biases in listwise- ranking LLMs, finding that these biases depend on pairwise positions of items in the list. # 2 Our Approach # 2.1 Preliminaries Notation. We define an n-ranking as a permu- tation o : {1,...,n} + {1,...,n}. For some sequence X := {X;}',, define X[o] as the per- muted sequence of X transformed by o, where X [0]; := X, i). Let the inversion vector of 7 be inv(Ï )i := #{j : Ï (j) > Ï (i), j < i}. To quantify dissimilarity, the Kendall tau dis- tance between two rankings a; and a2 is the num- ber of inversions in a! 009: n inv(Ï â 1 dκ (Ï 1, Ï 2) := 1 â ¦ Ï 2)i. i=1 (2) In other words, it is the number of pairwise dis- agreements, or discordant pairs, in the permutation ordering. The distance is one affine transform away from the Kendall tau correlation, used to measure list order similarity (Kendall, 1948): 2d,.(01, 02) (3) 2 (3) T(01,02) = 1- In the extreme, Ï = 1 â â Ï 1 = Ï 2, and Ï = â 1 implies that one is the otherâ s reverse. # 2.2 Permutation Self-Consistency How do we mitigate positional biases in listwise- ranking LLMs? We find inspiration in the self- consistency framework (Wang et al., 2023b), which improves quality and consistency in chain-of- thought prompting (Wei et al., 2022). The approach has two main stages: first, it samples multiple an- swers for an input prompt; then, it aggregates the sampled answers into a single, high-quality one, hence â | 2310.07712#5 | 2310.07712#7 | 2310.07712 | [
"2305.17926"
] |
2310.07712#7 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | marginalizing outâ separate reasoning paths from the language model. Unfortunately, self-consistency does not readily generalize to listwise ranking for a few reasons. For one, it is limited to point predictions, greatly simplifying the aggregation procedure to taking the majority vote. For another, sampling tempera- ture, the methodâ s mainstay of generating diverse samples for aggregation, has little effect on (and at times harming) the quality of aggregated predic- tions in listwise ranking, as shown in Section 4.1. Lastly, self-consistency does not explicitly address positional bias, the central issue of our paper. Nevertheless, its shuffleâ aggregate paradigm is still a useful template. With it, we propose permu- tation self-consistency: for the first sample step, we randomly shuffle the list in the prompt to curate a diverse set of rankings, each with different position biases. For the next aggregate step, we compute the central ranking closest in Kendall tau distance to all the sampled rankings, which, like self-consistency, marginalizes out the independent variable (in the original, reasoning paths; in ours, prompt order). Intuitively, we intervene on list order, collect output rankings, then aggregate, breaking the association between individual list order and output rankings. Task Example Input Prompt Math Sorting Sort these expressions: 3 / 2, 1 - 5, ... Sentence Ordering Order the shuffled sentences: [1] The... Passage Ranking Order these by relevance to the query, â | 2310.07712#6 | 2310.07712#8 | 2310.07712 | [
"2305.17926"
] |
2310.07712#8 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | what are shrews?â : [1] Cats hunt... Table 1: Listwise-ranking input prompt examples. Formally, we are given an input sequence of items X := {Xi}n i=1, such as a list of passages, along with a listwise-ranking LLM h(X; s) that returns an n-ranking on some string prompt s; see Table 1 for an example. First, we construct a di- verse set of output rankings by randomly permuting X and passing it through the LLM, like how self- consistency uses temperature to vary their output. Specifically, we sample a sequence | 2310.07712#7 | 2310.07712#9 | 2310.07712 | [
"2305.17926"
] |
2310.07712#9 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Ë Ï i := h(X[Ï i]; s) for 1 â ¤ i â ¤ m, (4) where Ï i is drawn uniformly at random from the set of all possible n-rankings. As noted previously, each output ranking has positional bias, but mis- takes are expected to differ among the outputs be- cause of our input order randomization. We then â marginalize outâ these individual biases by aggre- gating the output rankings into a single central ranking. One method with attractive theoretical properties is the Kemenyâ Young (Kemeny, 1959) optimal ranking of the outputsâ that is, the central ranking that minimizes the sum of its Kendall tau distances to every output ranking: | 2310.07712#8 | 2310.07712#10 | 2310.07712 | [
"2305.17926"
] |
2310.07712#10 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Â¯Ï := argmin dκ(Ë Ï i, Ï ). Ï 1â ¤iâ ¤m (5) Our approach returns Â¯Ï as the prediction for X and terminates. Although this calculation is NP- hard, fast exact and approximate algorithms ex- ist (Conitzer et al., 2006; Ali and MeilË a, 2012), many implemented in our codebase. Passage reranking. The task of passage rank- ing ranks a set of provided passages in order of relevance to a given query. | 2310.07712#9 | 2310.07712#11 | 2310.07712 | [
"2305.17926"
] |
2310.07712#11 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | The use of permu- tation self-consistency for this case deserves spe- cial attention. Due to the LLM input length con- straint, predominant LLM-based approaches such as RankGPT (Sun et al., 2023), LRL (Ma et al., 2023), and RankVicuna (Pradeep et al., 2023) stride the LLM across fixed windows of items from the back of the list to the front, rather than output a ranking in a single pass. In this case, we simply ap- ply permutation self-consistency to each window. | 2310.07712#10 | 2310.07712#12 | 2310.07712 | [
"2305.17926"
] |
2310.07712#12 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | # 2.3 Theoretical Guarantees We now show that for certain kinds of noisy rank- ings, the Kemeny ranking can recover the true rank- ing given enough observations. For example, if there always exists some random pair of items that are correctly ranked among randomly ordered ob- servations, we will converge to the true ranking. Definition 2.1. For two rankings Ï 1 and Ï 2, the concordant subset is a set Sâ ² where â i and j â Sâ ², Ï 1(i) < Ï 1(j) â § Ï 2(i) < Ï 2(j) or Ï 1(i) > Ï 1(j) â § Ï 2(i) > Ï 2(j). Proposition 2.1. Let there be a true ranking Ï and a sequence of noisy rankings Ë Ï := {Ë Ï i}m i=1. Suppose each noisy ranking has a uniformly ran- dom, nonempty concordant subset Sâ ² with Ï , and the remaining rank elements not in Sâ ² represent a random permutation. Then the Kemenyâ Young ranking Â¯Ï of Ë Ï converges in probability to Ï , i.e., it is a consistent estimator. Proof sketch. Let Aj; be the event that the sum of discordant pairs indexed by i and j across each ranking in & is greater than the number of con- cordant ones. P(Aj;;) is upper-bounded by O(). The union bound of PN, Aj;) shows that the probability of the sum of discordant pairs being greater than that of the concordant pairs vanishes for any pair as m approaches infinity. Thus, the Kemeny-optimal ranking will always approach for m â oo, concluding our proof. To extend this result, we demonstrate that, in the presence of any arbitrary distribution of ranking noise (e.g., the hypothetical â lost-in-the-middleâ kind), characterized empirically in Section 3.2, our approach yields a consistent estimator for the true ranking, given that at least one possibly nonrandom pair of items is always concordant: Proposition 2.2. Let there be a true ranking Ï , input ranking Ï in, and a ranking noise distribution P(Ï noisy|Ï in), where Ï | 2310.07712#11 | 2310.07712#13 | 2310.07712 | [
"2305.17926"
] |
2310.07712#13 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | noisy always has a (possibly nonuniform) nonempty concordant subset Sâ ² with Ï . Then the permutation self-consistency procedure is a consistent estimator of Ï when applied to Ï in as the input and LLM parameterized by P(Ï noisy|Ï in). Proof sketch. Observe that the first shuffling stage of permutation self-consistency transforms the premises into those of Proposition 2.3. Since the next stage of the method involves the same Kemenyâ Young ranking as the proposition does, the rest of the proof quickly follows. 1. MathSort: Sort ten arithmetic expressions by value. Example: Sort the following expressions from smallest to largest: 3 / 5, 2 - 9, 6 * 5, 2 * 1, 3 / 1, 9 * 9, 1 - 9, 9 + 8, 3 / 5, 1 / 9. The output format should be a comma-separated list containing the exact expressions; do not reduce them. Only respond with the results; do not say any word or explain. 2. WordSort: Order ten words alphabetically. Example: Order these words alphabetically: aaron, roam, aardvark, nexus, [...]. The output format should [...] 3. GSM8KSort: Unscramble sentences from GSM8K. Example: Order the scrambled sentences logically: - She took 1 hour to walk the first 4 miles [...] - Marissa is hiking a 12-mile trail. - If she wants her average speed to be 4 [...] The output format should have each sentence on a new line. Only respond with the results; do not say any [...] Table 2: Example prompts for our three sorting tasks. # 3 Experiments We conduct experiments on sorting and passage ranking, which constitute two distinct types of prob- lems in listwise ranking. # 3.1 Sorting Tasks | 2310.07712#12 | 2310.07712#14 | 2310.07712 | [
"2305.17926"
] |
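As a concrete companion to the Kendall tau definitions in Section 2.1: counting discordant pairs directly is equivalent to summing the inversion vector in Eq. (2), and Eq. (3) is the affine map of that distance onto [-1, 1]. The following self-contained check (not from the paper's codebase) verifies the two boundary cases quoted in the text, namely tau = 1 for identical rankings and tau = -1 when one ranking reverses the other.

```python
from itertools import combinations
from math import comb

def kendall_tau_distance(sigma1, sigma2):
    # d_kappa (Eq. 2): pairs of items that the two rankings order differently.
    return sum(
        1
        for i, j in combinations(range(len(sigma1)), 2)
        if (sigma1[i] - sigma1[j]) * (sigma2[i] - sigma2[j]) < 0
    )

def kendall_tau_correlation(sigma1, sigma2):
    # Eq. (3): tau = 1 - 2 * d_kappa / C(n, 2).
    n = len(sigma1)
    return 1.0 - 2.0 * kendall_tau_distance(sigma1, sigma2) / comb(n, 2)

identity, reverse = [0, 1, 2, 3], [3, 2, 1, 0]
assert kendall_tau_correlation(identity, identity) == 1.0   # identical rankings
assert kendall_tau_correlation(identity, reverse) == -1.0   # one is the other's reverse
assert kendall_tau_distance(identity, [0, 2, 1, 3]) == 1    # exactly one discordant pair
```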
2310.07712#14 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Setup. We build three functionally distinct datasets called MathSort, WordSort, and GSM8KSort, cor- responding to numerical sorting, alphabetical order- ing, and sentence arrangement, respectively. For MathSort, the task is to sort ten random mathe- matical expressions of the form digit op digit, where digit is a single digit and op is one of +, -, *, or /. In WordSort, the goal is to order ten random English words alphabetically. Finally, GSM8KSort is a sentence-unscrambling task over the test set of the GSM8K reasoning dataset (Cobbe et al., 2021). For consistency and tractability, we use 100 exam- ples in each dataset; see Table 2 for prompts. Although less practical than passage ranking, these synthetic sorting datasets have certain advan- tages. The items are intrinsically comparable, espe- cially in MathSort and WordSort, whose elements have unequivocal order (e.g., â | 2310.07712#13 | 2310.07712#15 | 2310.07712 | [
"2305.17926"
] |
2310.07712#15 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | aardvarkâ must pre- cede â abacusâ in WordSort). On the other hand, passage ranking relies on human judgment, where label noise may confound findings. Synthetic con- struction also enables control of item length: Math- Sort examples are fixed at three tokens, WordSort at a single word, and GSM8K one sentence. For our LLMs, we choose the open family of LLaMA v2 models (Touvron et al., 2023) and the Method MATHSORT WORDSORT GSM8KSORT Orig. PSC Orig. PSC Orig. PSC LLaMA2-7B 8.7 6.1 LLaMA2-13B 16.7 26.0 65.4 78.8 42.7 LLaMA2-70B 27.9 31.3 74.6 81.0 61.1 64.0 75.2 85.9 88.1 82.1 GPT-3.5 83.5 89.6 89.9 92.0 88.4 GPT-4 24.2 41.3 59.9 21.3 46.8 71.2 88.4 90.5 Table 3: Kendall tau correlation scores on our sorting tasks. Original scores are the median across 20 single runs, and PSC aggregates those 20. Underline indicates improvement from PSC and bold denotes best. | 2310.07712#14 | 2310.07712#16 | 2310.07712 | [
"2305.17926"
] |
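The listwise prompts in Table 1 pair a fixed instruction with a numbered candidate list, and under permutation self-consistency only the list order changes between calls. A hypothetical helper along these lines is sketched below; the function name and the trailing instruction are illustrative, not taken from the released code.

```python
import random
from typing import List, Tuple

def build_listwise_prompt(query: str, passages: List[str], seed: int) -> Tuple[str, List[int]]:
    # Shuffle the candidates, then render a listwise ranking prompt.
    # Also return the permutation so the model's output can be mapped back.
    order = list(range(len(passages)))
    random.Random(seed).shuffle(order)
    lines = [f'Order these by relevance to the query, "{query}":']
    lines += [f"[{k + 1}] {passages[i]}" for k, i in enumerate(order)]
    lines.append("Only respond with the bracketed identifiers, most relevant first.")
    return "\n".join(lines), order
```

Holding the instruction fixed while varying `seed` is exactly the intervention on list order that the method relies on.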
2310.07712#16 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | # x # g Individual Score Distribution vs. PSC MathSort all w - « WordSort ® Our PSC oF nas - Hl GPT-3.5 GSM8kSort Ga GeT-4 â Mi * 60 70 80 90 Tau Score Figure 3: The distribution of sorting task scores from twenty individual runs plotted against our PSC score. Our PSC outperforms the best of any individual run. closed GPT-3.5 (Turbo, the â 0613â version) and GPT-4 from OpenAI, both the state of the art. | 2310.07712#15 | 2310.07712#17 | 2310.07712 | [
"2305.17926"
] |
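For passage reranking, Section 2.2 notes that listwise LLM rerankers such as RankGPT and RankVicuna stride a fixed window from the back of the candidate list to the front because of context-length limits, and that permutation self-consistency is simply applied within each window. A rough sketch of that control flow, assuming `psc_rerank_window` wraps the shuffle-and-aggregate procedure; the window and stride of 20 and 10 are the common RankGPT-style choices and are used here only for illustration.

```python
from typing import Callable, List

def windowed_rerank(
    items: List[str],
    psc_rerank_window: Callable[[List[str]], List[str]],
    window_size: int = 20,
    stride: int = 10,
) -> List[str]:
    # Slide a window from the end of the list toward the front, reranking each
    # window in place so relevant passages can bubble up toward the top.
    items = list(items)
    start = max(0, len(items) - window_size)
    while True:
        end = min(len(items), start + window_size)
        items[start:end] = psc_rerank_window(items[start:end])
        if start == 0:
            break
        start = max(0, start - stride)
    return items
```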
2310.07712#17 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | We apply permutation self-consistency with m = 20 output rankings, resulting in 20 parallel calls to the LLM per example. Results. We present our main results in Table 3, naming our method â PSCâ for short. PSC consis- tently outperforms conventional inference on all three datasets and five models by an average of 42% in Kendall tau correlation, with gains skewed toward the smaller LLaMA2 variants. Specifically, LLaMA2-7B, 13B, and 70B attain average score increases of 157%, 28%, and 12%, respectively, while GPT-3.5 and GPT-4 improve by 3â 18% and 2â 7%. We attribute this to the already high quality of the larger 70B and GPT models, which leave less room for improvement. We conclude that PSC improves listwise ranking on sorting tasks, with higher gains on lower-quality models. One foreseeable question is whether any indi- vidual runs surpass PSC, which would weaken the case for rank aggregation. To answer this, we plot the distribution of the individual scores against PSC in Figure 3. We observe that PSC reliably beats all individual runs by 1â 12%, improving the most on tasks and models with lower baseline quality, such as MathSort and GPT-3.5. These findings bolster the necessity of the aggregation step. First Stage Top-k Method TREC-DL19 TREC-DL20 Original Our PSC Original Our PSC None All All (1) BM25 (2) SPLADE++ ED 50.58 73.08 â | 2310.07712#16 | 2310.07712#18 | 2310.07712 | [
"2305.17926"
] |
2310.07712#18 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | â 47.96 71.97 â â Supervised Approaches BM25 100 100 (3) MonoT5 (T5-3B) (4) RankT5 (T5-3B) 71.83 71.22 â â 68.89 69.49 â â Unsupervised Approaches BM25 100 100 100 20 20 100 100 (5) PRP-Best (FLAN-T5-XXL) (6) PRP-Best (FLAN-UL2) (7) RankVicuna (8) Single (GPT-3.5) (9) Single (GPT-4) (10) RankGPT (GPT-3.5) (11) RankGPT (GPT-4) 69.87 72.65 66.83 60.95 (60.96) 60.88 (60.92) 68.00 (68.13) 75.00 (75.59) â â 68.70 61.49 64.88 70.77 75.66 69.85 70.68 65.49 57.64 (57.68) 57.78 (57.89) 62.08 (63.20) 70.36 (70.56) â â 65.68 59.62 62.49 62.70 71.00 SPLADE++ ED 100 20 100 (12) RankVicuna (13) Single (GPT-4) (14) RankGPT (GPT-4) 74.59 73.21 (73.36) 74.64 (74.93) 74.13 76.87 76.01 74.73 71.97 (73.63) 70.76 (71.08) 74.06 78.52 75.14 Table 4: nDCG@10 results on TREC-DL19 and TREC-DL20. Scores in parentheses are the maximum across three runs, while those outside the median. Improvements from PSC are underlined and best per-section scores are bolded. | 2310.07712#17 | 2310.07712#19 | 2310.07712 | [
"2305.17926"
] |
2310.07712#19 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | According to the one-tailed signed-rank test, paired differences between the original and PSC are statistically significant at the 99% confidence level (p < 0.01). # 3.2 Passage Reranking Task For a more applied case, we evaluate our method on passage reranking. In this task, we are given a query and an initial list of relevant documents from a fast, first-stage retriever. We must then reorder these documents to improve their final relevance. Setup. From the TREC Deep Learning Track, we select the two passage retrieval test sets from TREC-DL19 and TREC-DL20 (Craswell et al., 2020, 2021), both canon in the literature (Pradeep et al., 2023; Qin et al., 2023). These datasets are built on the MS MARCO v1 corpus (Bajaj et al., 2016), which contains 8.8 million passages. As is standard, we rerank the top-100 passages retrieved by the first-stage BM25 (Robertson et al., 2009) or SPLADE++ EnsembleDistill (ED; Formal et al., 2021), reporting nDCG@10 scores for quality. Like the sorting tasks, we pick one open LLM, RankVicuna (Pradeep et al., 2023), fine-tuned from Vicuna-7B (Chiang et al., 2023), and one closed family, GPT-3.5 and GPT-4â all models are the present state of the art. RankVicuna and GPT-3.5 have matching context lengths of 4096, half of GPT-4â s 8192. We similarly apply permutation self- consistency with m = 20 runs. Furthermore, for three of our variants named â single,â | 2310.07712#18 | 2310.07712#20 | 2310.07712 | [
"2305.17926"
] |
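The passage-reranking experiments in Section 3.2 are scored with nDCG@10. For readers scoring their own runs, a minimal reference implementation of the metric follows; it uses the exponential-gain DCG formulation, and the official TREC tooling (trec_eval or Pyserini) should be used to reproduce the reported numbers exactly.

```python
import math
from typing import List

def ndcg_at_k(ranked_gains: List[float], all_gains: List[float], k: int = 10) -> float:
    # ranked_gains: graded relevance of documents in the order the system returned them.
    # all_gains: graded relevance of every judged document for the query (any order).
    def dcg(gains: List[float]) -> float:
        return sum((2 ** g - 1) / math.log2(i + 2) for i, g in enumerate(gains[:k]))
    ideal = dcg(sorted(all_gains, reverse=True))
    return dcg(ranked_gains) / ideal if ideal > 0 else 0.0
```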
2310.07712#20 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | we reduce the top-100 to 20 and discard the windowing strategy used in RankGPT and RankVicuna, described in Section 2.2. This allows us to fit all passages in a single call and thus remove potentially confounding interactions between the windowing method and permutation self-consistency. For our supervised baselines, we report results from the MonoT5 (Nogueira et al., 2020) and RankT5 (Zhuang et al., 2023) models, based on the T5 language model (Raffel et al., 2020). For the unsupervised baselines, we copy figures from the state-of-the-art pairwise ranking results across the variants in Qin et al. (2023), which we name PRP-Best for short. Results. We present our results in Table 4. With PSC, we establish four state-of-the-art results: first, a new best in BM25 for DL19 (row 11), edging ahead of the prior record from RankGPT by 0.07 points; second, the same for DL20 (row 11), lead- ing PRP by 0.32 points (row 6); third, the overall top result on DL19 of 76.87 from SPLADE++ (row 13), outperforming the previous by 1.28 (row 11); and fourth, the state of the art of 78.52 on DL20 (row 13), a 3.79-point increase over the previous best from RankVicuna (row 12). Overall, our PSC approach consistently im- proves ordinary decoding and beats the maximum individual score across three runs (see scores in parentheses), yielding gains on 13 out of 16 modelâ dataset combinations (see PSC columns in rows 7â 14). On average, RankVicuna, GPT-3.5, and GPT-4 see relative score increases of 0.4%, 2%, and 5% with PSC. Mixed results on RankVicuna | 2310.07712#19 | 2310.07712#21 | 2310.07712 | [
"2305.17926"
] |
2310.07712#21 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Position ofthe Second gem, Fue) oan ae me i -9 Position of the Second Item, m(b) eh om 3 3 Sa = és. | =. 5 (=a 2 EI L, & 5 = 10- -6 © 10- 7 2 2 s Fa 5 - 5 oe o15- o15- 5 Fs Fs 3 a s a a 4 2 20- 2 20- [GPT-3.5] DL19 [GPT-3.5] DL20 Position of the Second Item, mj(b) Position of the Second Item, m(b) 0 20 5 10 =e no : : : = me : : : E Ll a a, wo FE ee ag. â 5- â 5- 2 » 2 z z ah : = 10- -. ©10- 2 2 -8 s s ba 75 2 15- o15- i; Fs Fs & 20- 820. I. [GPT-4] DL19 [GPT-4] DL20 (a) Single (GPT-3.5) on DL19 and DL20. (b) Single (GPT-4) on DL19 and DL20. Figure 4: Distribution of â reversionsâ after reranking. Blues are below the observed dataset average and reds above the average. | 2310.07712#20 | 2310.07712#22 | 2310.07712 | [
"2305.17926"
] |
2310.07712#22 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | For two input list positions i â [1, 20] and j â (i, 20], i indexes the rows and j the columns. For example, the cell at (1, 2) is the reversion of the first two input items across the dataset. Note that highly saturated colors indicate over- and under-reversion relative to other pairs in the dataset rather than in the absolute sense. likely result from its inherent robustness to posi- tional bias, instilled by its training process that uses random shuffling as part of data augmentation; thus, the shuffling step from PSC has less effect. sition pair, with Ï i(a) as the y-axis and Ï i(b) as the x-axis, whose positions range from 1â | 2310.07712#21 | 2310.07712#23 | 2310.07712 | [
"2305.17926"
] |
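The positional-bias analysis in Section 3.2 tallies "reversions": item pairs (a, b) that appear in the prompt with pi_i(a) < pi_i(b) but come back reordered, sigma_i(a) > sigma_i(b), bucketed by the pair's input positions as in Figure 4. A sketch of that bookkeeping, assuming the input permutation and output ranking are available as rank vectors over the same item ids:

```python
from collections import Counter
from typing import Sequence

def count_reversions(input_perm: Sequence[int], output_rank: Sequence[int]) -> Counter:
    # input_perm[x]: position of item x in the prompt list.
    # output_rank[x]: rank of item x in the model's output ranking.
    reversions = Counter()
    n = len(input_perm)
    for a in range(n):
        for b in range(n):
            if input_perm[a] < input_perm[b] and output_rank[a] > output_rank[b]:
                reversions[(input_perm[a], input_perm[b])] += 1
    return reversions
```

Summing these counters over a dataset and normalizing yields heatmaps like Figure 4; under no positional bias the counts would be roughly uniform, since the input lists are randomly permuted.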
2310.07712#23 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | 20 for each of the top-20 passages. For cross-model compara- bility, we normalize by dataset. The choice of the first-stage reranker has a clear impact, with SPLADE++ adding an average of 7.26 points over the corresponding BM25 models. In fact, reranking the top-20 SPLADE items (row 13) in a single call outperforms doing the top-100 (row 14) using a sliding call window. We conjecture that this results from imperfections in the RankGPT windowing algorithm, which shows especially for strong retrievers, where the top-20 already contains many relevant documents. Finally, we note one particularly intriguing phe- nomenon: in the top-20 single-call setting, GPT-3.5 and GPT-4 have similar baseline quality without PSC (rows 8 and 9, first column in each group), but PSC boosts GPT-4 more than GPT-3.5 (row 9, second columns). As we explore in depth next, this possibly results from GPT-4 being more â equally biasedâ across the item positions and hence provid- ing PSC more useful rankings for aggregation. Positional bias analysis. We analyze how list or- der bias varies with the input positions on the â | 2310.07712#22 | 2310.07712#24 | 2310.07712 | [
"2305.17926"
] |
2310.07712#24 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | sin- gleâ GPT models for BM25 (from Table 3, rows 8 and 9), which avoid confounds from RankGPTâ s window strategy. The design of our analysis is as follows, with notation mirroring Section 2.2: consider the item pair (Xa, Xb) with input list posi- tions (Ï i(a), Ï i(b)), where Ï i(a) < Ï i(b) for some random permutation Ï i. If the output positions satisfy Ë Ï i(a) > Ë Ï i(b) after reranking, we say the order is reversed, and we call the sum of reversed pairs per data point â reversions.â In Figure 4, we visualize the distribution of reversions by input po- Under the null hypothesis of no positional bias, the distribution of reversions should be uniform be- cause the input lists are randomly permuted, which severs any association between input order and out- put ranking. However, Figure 4 contradicts this. Prominently, the center of Figure 4a is redder than the edges, indicating that pairs with both items closer to the middle are reversed more often by GPT-3.5 than those at the start and the end of in- put lists. In Figure 4b, bottom areas are also more red than the top, showing that pairs with items at the end of the list are more frequently reversed by GPT-4 than pairs at the start are. Other subtle patterns emerge upon examination. First, in Figure 4a, a dark block appears after col- umn 15, suggesting that GPT-3.5 does not focus well on items past the fifteenth. Second, the colors interleave in a grid pattern across both columns and rowsâ possibly an artifact of its pretraining. We conclude that different positional biases exist in reranking LLMs, varying by model and dataset. The analysis also helps to explain our prior exper- imental results. Comparing Figure 4a and 4b, we observe that GPT-4 generally reverses more pairs than GPT-3.5 and is closer to the optimal number of reversals, thus providing higher quality to the aggregated rankings. | 2310.07712#23 | 2310.07712#25 | 2310.07712 | [
"2305.17926"
] |
2310.07712#25 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | This may explain why PSC benefits GPT-4 (single) more than it does GPT-3.5 (single), i.e. row 9 vs. row 8 in Table 4. Similarly, both models tend to reverse more pairs on DL20 than on DL19, and results also indicate that PSC improves DL20 more than it does DL19. Quality vs. m Rankings (GPT-3.5) Quality vs. m Rankings (GPT-4) 2 2 = 20 ia 2 5 -4 2 6 â | 2310.07712#24 | 2310.07712#26 | 2310.07712 | [
"2305.17926"
] |
2310.07712#26 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | eâ WordSort 8 â eâ MathSort ° -8 â eâ GSM8KSort $ â eâ TREC-DL19 a 10 â eâ TREC-DL20 1 5 10 1 20 1 5 10 15 20 m Rankings m Rankings (a) Quality vs. number of output rankings (p = 0.17). Quality vs. Temp. (GPT-3.5) Quality vs. Temp. (GPT-4) 4 0 SSS SS i N â *â WordSort â eâ MathSort -6 â eâ GSM8KSort â eâ TREC-DL19 â eâ TREC-DL20 i ES i a Score Change wrt 0 Temp. I cy I S -10 0.75 00 02 04 06 Temperature 0.00 0.25 Temperature 0.50 (b) Quality vs. text generation temperature (p = â 0.078). (a) Quality vs. number of output rankings (Ï = 0.17). (b) Quality vs. text generation temperature (Ï = â 0.078). Figure 5: Quality across all datasets for various choices of aggregate size and temperature. For output rankings, we use m = 20 as our frame of reference; for temperature, 0.0. In the subfigure captions, Ï denotes Spearmanâ s rho. # 4 Sensitivity Analyses In this section, we investigate the importance of each component of permutation self-consistency to justify our modeling choices. # 4.1 Hyperparameter Studies Aggregation Method Quality (GPT-3.5) Aggregation Method Quality (GPT-4) 90 mmm Single Best jams RRF mm Kemeny | Math Word GSM8K DL19 DL20 Task 80 : | | ba l 40 lll « Math Word GSM8K DL19 DL20 Task Score 3 Score g 8 Output rankings. Throughout the paper, we es- poused aggregating over m = 20 output rankings, but is more actually better? If, say, five outper- forms twenty, we could decrease the number of parallel calls to the model, conceivably saving cost. | 2310.07712#25 | 2310.07712#27 | 2310.07712 | [
"2305.17926"
] |
2310.07712#27 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | To answer this question, we sweep the aggregate size between one and twenty across all datasets, plotting the resulting score differences from using the default twenty. We pick GPT-3.5 and GPT-4 as our target models, as they are used in all tasks. We plot our results in Figure 5a. On both models, we find that output quality rapidly converges to that of using the full twenty, five being 67% as effective on average. The score averages increase monotonically with the number of rankings (Ï = 0.17), with GSM8KSort on GPT-3.5 as an outlier (left subplot), possibly because of output varianceâ the next study on sampling temperature shows that it is highly sensitive to randomness. We conclude that picking m = 20 output rankings is effective, though returns sharply diminish after 5â 10. Sampling temperature. Self-consistency (Wang et al., 2023b) uses temperature as their sampling strategy to produce different outputs to aggregate over, but it is ineffective for us, perhaps because listwise ranking does not admit multiple reasoning paths like chain-of-thought prompting does. To assess this rigorously, we vary the temperature be- tween 0 and 0.75, following the original methodâ s 0.5â 0.7 (Wang et al., 2023b). For consistency, we use the same setup from before and fix m = 20. Figure 6: Scores for the alternative reciprocal rank fu- sion (RRF) and our Kemeny rank aggregation method. We plot our results in Figure 5b. | 2310.07712#26 | 2310.07712#28 | 2310.07712 | [
"2305.17926"
] |
2310.07712#28 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Temperature has little effect on the quality (Ï = â 0.078), again with GSM8KSort as an outlier, where the extra ran- domness drastically hurts quality on both models. This sensitivity to randomness is also evident in Figure 3, where GSM8K has the widest interquar- tile range of the tasks. In conclusion, this evidence grounds our choice of not using temperature. # 4.2 Rank Aggregation Comparison Reciprocal rank fusion (RRF; Cormack et al., 2009) is a state-of-the-art alternative to our chosen Ke- meny ranking method. It sorts items by the score 1 RRFScore(X;) := â â ___ ae 7c) (6) for each item Xj, rankings Ë Ï i, and k = 60. RRF had been under our consideration, but we picked Kemeny ranking for its theoretical robustness and empirical effectiveness. Shown in Figure 6, Ke- meny beats RRF (p < 0.05) on 8 out of 10 compar- isons by a mean of 0.23 points; on average, RRF reaches only 93.5% of the boost that Kemeny does. Its only outperformance on DL19 possibly results from it being suited for information retrieval, its field of origin, but may also be statistical noise. Overall, these results further support our decision to select Kemeny ranking for the aggregation step. # 5 Related Work The holistic direction of our work is in enhancing the ranking ability of large language models. Most closely, contrast-consistent ranking (Stoehr et al., 2023) proposes to train order-enforcing probes on the latent vectors of large language models for im- proving rank consistency. We differentiate our method by not presuming access to model inter- nals, which is becoming increasingly common with closed source but academically interesting LLMs such as GPT-4. The specific empirical tasks in this paper have also seen recent progress. For passage ranking us- ing language models, BERT-based (Devlin et al., 2019; Nogueira et al., 2020) and T5-tuned (Zhuang et al., 2023; Raffel et al., 2020) approaches rep- resent the earliest language models for passage ranking. | 2310.07712#27 | 2310.07712#29 | 2310.07712 | [
"2305.17926"
] |
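The reciprocal rank fusion baseline compared against Kemeny aggregation in Section 4.2 scores each item by summing 1 / (k + rank) over the sampled output rankings with k = 60 (Eq. 6), then sorts by the fused score. A self-contained sketch:

```python
from typing import List, Sequence

def rrf_aggregate(rankings: List[Sequence[int]], k: int = 60) -> List[int]:
    # rankings[i][j]: rank (1 = best) that sampled ranking i assigns to item j.
    # Returns item indices ordered from highest to lowest fused score.
    n = len(rankings[0])
    scores = [sum(1.0 / (k + r[j]) for r in rankings) for j in range(n)]
    return sorted(range(n), key=lambda j: scores[j], reverse=True)
```

Unlike the Kemeny central ranking, RRF requires no search over permutations, which is why it is a common fusion baseline; the comparison in Figure 6 nonetheless favors Kemeny aggregation on most task-model pairs.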
2310.07712#29 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | RankGPT (Sun et al., 2023) spearheaded much of the post-ChatGPT work, beating the su- pervised state of the art with an unsupervised LLM for the first time. Concurrently, LRL (Ma et al., 2023) reached the same conclusions using a similar method on GPT-3. Along a non-listwise direction, PRP (Qin et al., 2023) represents a pairwise method leveraging open-source large language models, as reported in Table 4. Our secondary sorting tasks for LLMs, while less practical, have had attention as well, mostly in the context of evaluation, with BigBench (Suzgun et al., 2022) providing more than 200 distinct tasks, including one in alphabetical ordering,1 which we enlarge and expand on in WordSort. Stoehr et al. (2023) also constructed synthetic sorting datasets for evaluating listwise ranking, but they are private and hence uncomparable. We are not the first to establish positional biases in LLMs in general. Lu et al. (2022) are among the earliest to relate prompt order to the quality of in-context learning. Recently, Liu et al. (2023) and Wang et al. (2023a) characterized positional bias in the context of list-oriented tasks, such as ques- tion answering and response evaluation. However, we are to our knowledge the first to characterize the position biases of passage-ranking LLMs with respect to pairwise item positions. Lastly, our paper is connected to all the meta- algorithms for improving LLM generation. As a pertinent example, Lu et al. (2022) study prompt order on in-context learning classification tasks, 1https://github.com/google/BIG-bench/tree/main/ bigbench/benchmark_tasks/word_sorting proposing an entropy-based statistic over develop- ment sets to find performant permutations. Ag- garwal et al. (2023) make self-consistency more efficient, halting the procedure when enough sam- ples have been collected. To keep our method in its simplest form, as self-consistency had not been applied to listwise ranking to begin with, we based our design on the original (Wang et al., 2023b). # 6 Conclusions and Future Work | 2310.07712#28 | 2310.07712#30 | 2310.07712 | [
"2305.17926"
] |
2310.07712#30 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | In the present work, we introduce permutation self- consistency, a novel decoding method to improve the ranking ability of black-box LLMs by mitigat- ing potential sensitivities and biases to list item order. We intervene on prompt list order to pro- duce multiple rankings then return an aggregated statistic as the prediction, which intuitively has less association with the controlled variable, prompt list order. Theoretically, we prove the robustness of our method to arbitrary, fixed noise distributions under certain conditions. Empirically, our method consistently improves upon ordinary decoding on all 15 of our sorting modelâ dataset combinations and 13 out of 16 of our passage reranking ones. Further analyses indicate the positional biases in the reordering process of input rankings. Finally, our sensitivity analyses justify our design choices of 20 output rankings, zero sampling temperature, and the Kemeny ranking method. In the future, permutation self-consistency can plausibly be applied to any list-oriented task, re- gardless of whether the underlying LLM is openly available. Examples include using LLMs for evalu- ation (Wang et al., 2023a) and annotating human- feedback judgments with LLMs. Another future step is to relax or reformulate our method to be differentiable, enabling training-time application in, say, RankVicuna (Pradeep et al., 2023). # Limitations We share the same limitations as those of the origi- nal self-consistency paper (Wang et al., 2023b). We use multiple LLM calls, potentially to a commer- cial LLM, which would raise financial cost. Thus, practical applications may require careful weighing of quality gain against elevated expense. Neverthe- less, a few calls already help, and returns rapidly diminish past 5â 10 calls. We note that our method does not in practice increase latency by much, since all calls can be parallelized, and aggregation time does not rise with the number of samples. | 2310.07712#29 | 2310.07712#31 | 2310.07712 | [
"2305.17926"
] |
2310.07712#31 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | # References Pranjal Aggarwal, Aman Madaan, Yiming Yang, et al. 2023. Letâ s sample step by step: Adaptive- consistency for efficient reasoning with LLMs. arXiv:2305.11860. Alnur Ali and Marina MeilË a. 2012. Experiments with Kemeny ranking: What works when? Mathematical Social Sciences. Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, An- drew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv:1611.09268. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. | 2310.07712#30 | 2310.07712#32 | 2310.07712 | [
"2305.17926"
] |
2310.07712#32 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Vicuna: An open- source chatbot impressing GPT-4 with 90%* Chat- GPT quality. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv:2110.14168. Vincent Conitzer, Andrew Davenport, and Jayant Kalagnanam. 2006. Improved bounds for computing Kemeny rankings. In Proceedings of the 21st Na- tional Conference on Artificial Intelligence (Volume 1). | 2310.07712#31 | 2310.07712#33 | 2310.07712 | [
"2305.17926"
] |