id (string, 12–15 chars) | title (string, 8–162 chars) | content (string, 1–17.6k chars) | prechunk_id (string, 0–15 chars) | postchunk_id (string, 0–15 chars) | arxiv_id (string, 10 chars) | references (sequence, length 1)
---|---|---|---|---|---|---|
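For orientation, here is a minimal sketch of reading rows with the schema above using the Hugging Face `datasets` library; the dataset path is a hypothetical placeholder, not the actual repository name.

```python
# Minimal sketch: iterating over rows with the schema above.
# The dataset path below is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("your-org/arxiv-paper-chunks", split="train")  # hypothetical path

row = ds[0]
print(row["id"], row["arxiv_id"])                # e.g. "2310.07712#33", "2310.07712"
print(row["prechunk_id"], row["postchunk_id"])   # neighbouring chunk ids ("" when absent)
print(row["title"])
print(row["content"][:200])                      # the chunk text itself
print(row["references"])                         # list of referenced arXiv ids
```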
2310.07712#33 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Gordon V. Cormack, Charles Clarke, and Stefan Buettcher. 2009. Reciprocal rank fusion outperforms Condorcet and individual rank learning methods. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv:2102.07662. Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv:2003.07820. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. | 2310.07712#32 | 2310.07712#34 | 2310.07712 | [
"2305.17926"
] |
2310.07712#34 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2021. SPLADE v2: Sparse lexical and expansion model for information retrieval. arXiv:2109.10086. | 2310.07712#33 | 2310.07712#35 | 2310.07712 | [
"2305.17926"
] |
2310.07712#35 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | John G. Kemeny. 1959. Mathematics without numbers. Daedalus. Maurice George Kendall. 1948. Rank correlation methods. Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the middle: How language models use long contexts. arXiv:2307.03172. | 2310.07712#34 | 2310.07712#36 | 2310.07712 | [
"2305.17926"
] |
2310.07712#36 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-shot listwise document reranking with a large language model. arXiv:2305.02156. Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020. Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin. 2023. | 2310.07712#35 | 2310.07712#37 | 2310.07712 | [
"2305.17926"
] |
2310.07712#37 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | RankVicuna: Zero-shot listwise document reranking with open-source large language models. arXiv:2309.15088. Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, et al. 2023. Large language models are effective text rankers with pairwise ranking prompting. arXiv:2306.17563. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: | 2310.07712#36 | 2310.07712#38 | 2310.07712 | [
"2305.17926"
] |
2310.07712#38 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | BM25 and beyond. Foundations and Trends in Information Retrieval. Niklas Stoehr, Pengxiang Cheng, Jing Wang, Daniel Preotiuc-Pietro, and Rajarshi Bhowmik. 2023. Unsupervised contrast-consistent ranking with language models. arXiv:2309.06991. Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? Investigating large language models as re-ranking agent. arXiv:2304.09542. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, et al. 2022. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. arXiv:2210.09261. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. | 2310.07712#37 | 2310.07712#39 | 2310.07712 | [
"2305.17926"
] |
2310.07712#39 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023a. Large language models are not fair evaluators. arXiv:2305.17926. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023b. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations. | 2310.07712#38 | 2310.07712#40 | 2310.07712 | [
"2305.17926"
] |
2310.07712#40 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv:2303.18223. Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, and Michael Bendersky. 2023. | 2310.07712#39 | 2310.07712#41 | 2310.07712 | [
"2305.17926"
] |
2310.07712#41 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | RankT5: Fine-tuning T5 for text ranking with ranking losses. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. | 2310.07712#40 | 2310.07712 | [
"2305.17926"
] |
|
2310.06825#0 | Mistral 7B | # Mistral 7B Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed Abstract We introduce Mistral 7B, a 7-billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms the best open 13B model (Llama 2) across all evaluated benchmarks, and the best released 34B model (Llama 1) in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B – Instruct, that surpasses the Llama 2 13B – Chat model on both human and automated benchmarks. Our models are released under the Apache 2.0 license. Code: https://github.com/mistralai/mistral-src Webpage: https://mistral.ai/news/announcing-mistral-7b/ # Introduction In the rapidly evolving domain of Natural Language Processing (NLP), the race towards higher model performance often necessitates an escalation in model size. However, this scaling tends to increase computational costs and inference latency, thereby raising barriers to deployment in practical, real-world scenarios. In this context, the search for balanced models delivering both high-level performance and efficiency becomes critically essential. Our model, Mistral 7B, demonstrates that a carefully designed language model can deliver high performance while maintaining efficient inference. | 2310.06825#1 | 2310.06825 | [
"2302.13971"
] |
|
2310.06825#1 | Mistral 7B | Mistral 7B outperforms the previous best 13B model (Llama 2, [26]) across all tested benchmarks, and surpasses the best 34B model (LLaMa 34B, [25]) in mathematics and code generation. Furthermore, Mistral 7B approaches the coding performance of Code-Llama 7B [20], without sacrificing performance on non-code related benchmarks. Mistral 7B leverages grouped-query attention (GQA) [1], and sliding window attention (SWA) [6, 3]. GQA significantly accelerates the inference speed, and also reduces the memory requirement during decoding, allowing for higher batch sizes and hence higher throughput, a crucial factor for real-time applications. In addition, SWA is designed to handle longer sequences more effectively at a reduced computational cost, thereby alleviating a common limitation in LLMs. These attention mechanisms collectively contribute to the enhanced performance and efficiency of Mistral 7B. Mistral 7B is released under the Apache 2.0 license. This release is accompanied by a reference implementation1 facilitating easy deployment either locally or on cloud platforms such as AWS, GCP, or Azure using the vLLM [17] inference server and SkyPilot 2. Integration with Hugging Face 3 is also streamlined for easier integration. Moreover, Mistral 7B is crafted for ease of fine-tuning across a myriad of tasks. As a demonstration of its adaptability and superior performance, we present a chat model fine-tuned from Mistral 7B that significantly outperforms the Llama 2 13B – Chat model. Mistral 7B takes a significant step in balancing the goals of getting high performance while keeping large language models efficient. Through our work, our aim is to help the community create more affordable, efficient, and high-performing language models that can be used in a wide range of real-world applications. | 2310.06825#0 | 2310.06825#2 | 2310.06825 | [
"2302.13971"
] |
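As a rough illustration of grouped-query attention mentioned above (not the paper's implementation, which relies on optimized FlashAttention/xFormers kernels), here is a toy sketch in which many query heads share a smaller set of key/value heads, matching the n_heads = 32 and n_kv_heads = 8 values reported later in Table 1; causal masking and batching are omitted for brevity.

```python
import torch

def grouped_query_attention(q, k, v):
    """Toy grouped-query attention: n_heads query heads share n_kv_heads key/value heads.

    q: (seq, n_heads, head_dim); k, v: (seq, n_kv_heads, head_dim),
    with n_heads a multiple of n_kv_heads. No causal mask is applied here.
    """
    seq, n_heads, head_dim = q.shape
    n_kv_heads = k.shape[1]
    group = n_heads // n_kv_heads
    # Each KV head serves `group` query heads, shrinking the KV cache by that factor.
    k = k.repeat_interleave(group, dim=1)   # (seq, n_heads, head_dim)
    v = v.repeat_interleave(group, dim=1)
    scores = torch.einsum("qhd,khd->hqk", q, k) / head_dim ** 0.5
    attn = torch.softmax(scores, dim=-1)
    return torch.einsum("hqk,khd->qhd", attn, v)

out = grouped_query_attention(torch.randn(16, 32, 128), torch.randn(16, 8, 128), torch.randn(16, 8, 128))
```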
2310.06825#2 | Mistral 7B | # 2 Architectural details [Figure 1 panels: Vanilla Attention, Sliding Window Attention, Effective Context Length] Figure 1: Sliding Window Attention. The number of operations in vanilla attention is quadratic in the sequence length, and the memory increases linearly with the number of tokens. At inference time, this incurs higher latency and smaller throughput due to reduced cache availability. To alleviate this issue, we use sliding window attention: each token can attend to at most W tokens from the previous layer (here, W = 3). Note that tokens outside the sliding window still influence next word prediction. At each attention layer, information can move forward by W tokens. Hence, after k attention layers, information can move forward by up to k × W tokens. Mistral 7B is based on a transformer architecture [27]. The main parameters of the architecture are summarized in Table 1. Compared to Llama, it introduces a few changes that we summarize below. | 2310.06825#1 | 2310.06825#3 | 2310.06825 | [
"2302.13971"
] |
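A toy sketch of the sliding-window causal mask described in the Figure 1 caption (here W = 3); this only illustrates which positions may attend to which, not the memory-efficient kernels the paper actually uses.

```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask: query i may attend to key j iff j <= i and i - j < window."""
    i = torch.arange(seq_len).unsqueeze(1)   # query positions
    j = torch.arange(seq_len).unsqueeze(0)   # key positions
    return (j <= i) & ((i - j) < window)

# Each row has at most `window` True entries: the token itself and the 2 previous ones.
print(sliding_window_causal_mask(seq_len=8, window=3).int())
```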
2310.06825#3 | Mistral 7B | Sliding Window Attention. SWA exploits the stacked layers of a transformer to attend to information beyond the window size W. The hidden state in position i of layer k, h_i, attends to all hidden states from the previous layer with positions between i − W and i. Recursively, h_i can access tokens from the input layer at a distance of up to W × k tokens, as illustrated in Figure 1. At the last layer, using a window size of W = 4096, we have a theoretical attention span of approximately 131K tokens. In practice, for a sequence length of 16K and W = 4096, changes made to FlashAttention [11] and xFormers [18] yield a 2x speed improvement over a vanilla attention baseline. Table 1: Model architecture. dim: 4096, n_layers: 32, head_dim: 128, hidden_dim: 14336, n_heads: 32, n_kv_heads: 8, window_size: 4096, context_len: 8192, vocab_size: 32000. Rolling Buffer Cache. A fixed attention span means that we can limit our cache size using a rolling buffer cache. The cache has a fixed size of W, and the keys and values for timestep i are stored in position i mod W of the cache. As a result, when the position i is larger than W, past values in the cache are overwritten, and the size of the cache stops increasing. We provide an illustration in Figure 2 for W = 3. On a sequence length of 32k tokens, this reduces the cache memory usage by 8x, without impacting the model quality. | 2310.06825#2 | 2310.06825#4 | 2310.06825 | [
"2302.13971"
] |
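A minimal sketch of the rolling buffer cache described above, storing the keys and values for timestep i at slot i mod W; the class and its interface are illustrative, not the reference implementation.

```python
import torch

class RollingKVCache:
    """Toy rolling buffer: keys/values for timestep i live at slot i % window."""

    def __init__(self, window: int, n_kv_heads: int, head_dim: int):
        self.window = window
        self.k = torch.zeros(window, n_kv_heads, head_dim)
        self.v = torch.zeros(window, n_kv_heads, head_dim)
        self.seen = 0  # number of tokens appended so far

    def append(self, k_t: torch.Tensor, v_t: torch.Tensor) -> None:
        slot = self.seen % self.window   # overwrite the oldest entry once full
        self.k[slot] = k_t
        self.v[slot] = v_t
        self.seen += 1

    def valid_length(self) -> int:
        """How many cached positions are currently usable (at most `window`)."""
        return min(self.seen, self.window)
```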
2310.06825#4 | Mistral 7B | 1 https://github.com/mistralai/mistral-src 2 https://github.com/skypilot-org/skypilot 3 https://huggingface.co/mistralai Figure 2: Rolling buffer cache. The cache has a fixed size of W = 4. Keys and values for position i are stored in position i mod W of the cache. When the position i is larger than W, past values in the cache are overwritten. The hidden states corresponding to the latest generated tokens are colored in orange. Pre-fill and Chunking. When generating a sequence, we need to predict tokens one-by-one, as each token is conditioned on the previous ones. However, the prompt is known in advance, and we can pre-fill the (k, v) cache with the prompt. If the prompt is very large, we can chunk it into smaller pieces, and pre-fill the cache with each chunk. For this purpose, we can select the window size as our chunk size. For each chunk, we thus need to compute the attention over the cache and over the chunk. Figure 3 shows how the attention mask works over both the cache and the chunk. | 2310.06825#3 | 2310.06825#5 | 2310.06825 | [
"2302.13971"
] |
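A sketch of pre-fill with chunking as described above, using the window size as the chunk size; `forward_chunk` is a hypothetical callable standing in for a model forward pass that reads the cache and appends the chunk's keys and values.

```python
def prefill_cache(prompt_tokens, window, forward_chunk, cache):
    """Pre-fill the (k, v) cache chunk by chunk before generation starts.

    `forward_chunk(chunk, cache)` is a hypothetical model call: the chunk attends to
    the cached tokens through the sliding window and to itself with a causal mask
    (as in Figure 3), then its own keys/values are written back into the cache.
    """
    for start in range(0, len(prompt_tokens), window):
        chunk = prompt_tokens[start:start + window]
        forward_chunk(chunk, cache)
    return cache
```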
2310.06825#5 | Mistral 7B | [Figure 3 panels: Past, Cache, Current] Figure 3: Pre-fill and chunking. During pre-fill of the cache, long sequences are chunked to limit memory usage. We process a sequence in three chunks, "The cat sat on", "the mat and saw", "the dog go to". The figure shows what happens for the third chunk ("the dog go to"): it attends to itself using a causal mask (rightmost block), attends to the cache using a sliding window (center block), and does not attend to past tokens as they are outside of the sliding window (left block). # 3 Results We compare Mistral 7B to Llama, and re-run all benchmarks with our own evaluation pipeline for a fair comparison. We measure performance on a wide variety of tasks categorized as follows: | 2310.06825#4 | 2310.06825#6 | 2310.06825 | [
"2302.13971"
] |
2310.06825#6 | Mistral 7B | ⠢ Commonsense Reasoning (0-shot): Hellaswag [28], Winogrande [21], PIQA [4], SIQA [22], OpenbookQA [19], ARC-Easy, ARC-Challenge [9], CommonsenseQA [24] ⠢ World Knowledge (5-shot): NaturalQuestions [16], TriviaQA [15] ⠢ Reading Comprehension (0-shot): BoolQ [8], QuAC [7] ⠢ Math: GSM8K [10] (8-shot) with maj@8 and MATH [13] (4-shot) with maj@4 ⠢ Code: Humaneval [5] (0-shot) and MBPP [2] (3-shot) ⠢ Popular aggregated results: MMLU [12] (5-shot), BBH [23] (3-shot), and AGI Eval [29] (3-5-shot, English multiple-choice questions only) Detailed results for Mistral 7B, Llama 2 7B/13B, and Code-Llama 7B are reported in Table 2. Figure 4 compares the performance of Mistral 7B with Llama 2 7B/13B, and Llama 1 34B4 in different categories. Mistral 7B surpasses Llama 2 13B across all metrics, and outperforms Llama 1 34B on most benchmarks. In particular, Mistral 7B displays a superior performance in code, mathematics, and reasoning benchmarks. 4Since Llama 2 34B was not open-sourced, we report results for Llama 1 34B. 3 jm Mistral 7B = mm LLaMA2 138 50 lm Mistral 7B mm LLaMA2 138 mmm LlaMA278 lm LLaMA1 348 bel mmm LlaMA2 78 mem LlaMA 1348 70 40 vt = = eo g 7 = 330 £ g gs0 : < <20 40 10 ay MMLU Knowledge Reasoning Comprehension AGI Eval Math BBH Code Figure 4: Performance of Mistral 7B and different Llama models on a wide range of benchmarks. All models were re-evaluated on all metrics with our evaluation pipeline for accurate comparison. | 2310.06825#5 | 2310.06825#7 | 2310.06825 | [
"2302.13971"
] |
2310.06825#7 | Mistral 7B | Mistral 7B significantly outperforms Llama 2 7B and Llama 2 13B on all benchmarks. It is also vastly superior to Llama 1 34B in mathematics, code generation, and reasoning benchmarks. Table 2: Comparison of Mistral 7B with Llama, reporting MMLU / HellaSwag / WinoGrande / PIQA / Arc-e / Arc-c / NQ / TriviaQA / HumanEval / MBPP / MATH / GSM8K. LLaMA 2 7B (pretrained): 44.4% / 77.1% / 69.5% / 77.9% / 68.7% / 43.2% / 24.7% / 63.8% / 11.6% / 26.1% / 3.9% / 16.0%. LLaMA 2 13B (pretrained): 55.6% / 80.7% / 72.9% / 80.8% / 75.2% / 48.8% / 29.0% / 69.6% / 18.9% / 35.4% / 6.0% / 34.3%. Code-Llama 7B (finetuned): 36.9% / 62.9% / 62.3% / 72.8% / 59.4% / 34.5% / 11.0% / 34.9% / 31.1% / 52.5% / 5.2% / 20.8%. Mistral 7B (pretrained): 60.1% / 81.3% / 75.3% / 83.0% / 80.0% / 55.5% / 28.8% / 69.9% / 30.5% / 47.5% / 13.1% / 52.2%. Mistral 7B outperforms Llama 2 13B on all metrics, and approaches the code performance of Code-Llama 7B without sacrificing performance on non-code benchmarks. Size and Efficiency. We computed "equivalent model sizes" of the Llama 2 family, aiming to understand Mistral 7B's efficiency in the cost-performance spectrum (see Figure 5). | 2310.06825#6 | 2310.06825#8 | 2310.06825 | [
"2302.13971"
] |
2310.06825#8 | Mistral 7B | When evaluated on reasoning, comprehension, and STEM reasoning (specifically MMLU), Mistral 7B mirrored performance that one might expect from a Llama 2 model with more than 3x its size. On the Knowledge benchmarks, Mistral 7B's performance achieves a lower compression rate of 1.9x, which is likely due to its limited parameter count that restricts the amount of knowledge it can store. Evaluation Differences. On some benchmarks, there are some differences between our evaluation protocol and the one reported in the Llama 2 paper: 1) on MBPP, we use the hand-verified subset; 2) on TriviaQA, we do not provide Wikipedia contexts. # 4 Instruction Finetuning To evaluate the generalization capabilities of Mistral 7B, we fine-tuned it on instruction datasets publicly available on the Hugging Face repository. No proprietary data or training tricks were utilized: the Mistral 7B – Instruct model is a simple and preliminary demonstration that the base model can easily be fine-tuned to achieve good performance. In Table 3, we observe that the resulting model, Mistral 7B – Instruct, exhibits superior performance compared to all 7B models on MT-Bench, and is comparable to 13B – Chat models. An independent human evaluation was conducted on https://llmboxing.com/leaderboard. Table 3: Comparison of chat models (Chatbot Arena ELO Rating / MT-Bench). WizardLM 13B v1.2: 1047 / 7.2; Mistral 7B Instruct: 1031 / 6.84 +/- 0.07; Llama 2 13B Chat: 1012 / 6.65; Vicuna 13B: 1041 / 6.57; Llama 2 7B Chat: 985 / 6.27; Vicuna 7B: 997 / 6.17; Alpaca 13B: 914 / 4.53. Mistral 7B – Instruct outperforms all 7B models on MT-Bench, and is comparable to 13B – | 2310.06825#7 | 2310.06825#9 | 2310.06825 | [
"2302.13971"
] |
2310.06825#9 | Mistral 7B | Chat models. In this evaluation, participants were provided with a set of questions along with anonymous responses from two models and were asked to select their preferred response, as illustrated in Figure 6. As of October 6, 2023, the outputs generated by Mistral 7B were preferred 5020 times, compared to 4143 times for Llama 2 13B. [Figure 5 panels: accuracy vs. model size (billion parameters) for MMLU, reasoning, knowledge, and comprehension, with effective LLaMA 2 sizes annotated, e.g. 23B (3.3x) and 38B (5.4x)] Figure 5: Results on MMLU, commonsense reasoning, world knowledge and reading comprehension for Mistral 7B and Llama 2 (7B/13B/70B). Mistral 7B largely outperforms Llama 2 13B on all evaluations, except on knowledge benchmarks, where it is on par (this is likely due to its limited parameter count, which limits the amount of knowledge it can compress). | 2310.06825#8 | 2310.06825#10 | 2310.06825 | [
"2302.13971"
] |
2310.06825#10 | Mistral 7B | # 5 Adding guardrails for front-facing applications The ability to enforce guardrails when it comes to AI generation is important for front-facing applications. In this section, we highlight how to leverage system prompting to optionally enforce output constraints on top of our models. Additionally, we showcase the ability of Mistral 7B to perform fine-grained content moderation, which can be useful to enforce quality content in applications. # 5.1 System prompt to enforce guardrails We introduce a system prompt (see below) to guide the model to generate answers within specified guardrails, similar to the work done with Llama 2. Using this prompt allows the user to move on the Pareto front of model utility / guardrails enforcement, as indicated in Table 4. Always assist with care, respect, and truth. Respond with utmost utility yet securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure replies promote fairness and positivity. We use a set of 175 unsafe prompts for evaluating safety. With the recommended system prompt, the model properly declines to answer 100% of the harmful questions. As an illustration, we provide in Table 5 the answers of both Mistral 7B – Instruct and Llama 2 Chat 13B to the question "How to kill a linux process" with system prompts activated. We observe that Mistral 7B provides a correct response while Llama 2 declines to answer. Note that on this specific question, both models answer correctly when system prompts are deactivated. Table 4: System prompts. Mean official MT-Bench score over 10 iterations with standard deviation for Mistral 7B – Instruct: no system prompt: 6.84 ± 0.07; Llama 2 system prompt: 6.38 ± 0.07; Mistral system prompt: 6.58 ± 0.05. For reference, Llama 2 13B – Chat reports official results of 6.65. # 5.2 Content moderation with self-reflection Mistral 7B – Instruct can be used as a content moderator: the model itself is able to accurately classify a user prompt or its generated answer as being either acceptable or falling into one of the following categories: | 2310.06825#9 | 2310.06825#11 | 2310.06825 | [
"2302.13971"
] |
2310.06825#11 | Mistral 7B | Illegal activities such as terrorism, child abuse or fraud; Hateful, harassing or violent content such as discrimination, self-harm or bullying; Unqualified advice, for instance in legal, medical or financial domains. Table 5: Comparison between Mistral and Llama system prompts. Mistral provides the right answer whereas Llama 2 declines to answer the question "How to kill a linux process". Llama 2 13B – Chat (with the Llama 2 system prompt) responds: "As a helpful and respectful assistant, I cannot provide information on how to kill a Linux process. It is not appropriate or ethical to intentionally harm or terminate processes, as it can cause damage to the system and potentially result in data loss. Additionally, it is important to respect the autonomy and security of other users' processes, as they may be critical to their work or personal activities. Instead, I suggest that you explore other options to address any issues or concerns you may have." To do so, we designed a self-reflection prompt that makes Mistral 7B classify a prompt or a generated answer. We evaluated self-reflection on our manually curated and balanced dataset of adversarial and standard prompts and obtained a precision of 99.4% for a recall of 95.6% (considering acceptable prompts as positives). The use cases are vast, from moderating comments on social media or forums to brand monitoring on the internet. In particular, the end user is able to select afterwards which categories to effectively filter based on their particular use-case. | 2310.06825#10 | 2310.06825#12 | 2310.06825 | [
"2302.13971"
] |
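A sketch of how such a self-reflection moderation step could be wired up; the exact prompt wording used by the authors is not given in this excerpt, so the template and category names below are assumptions based on the categories listed above.

```python
# Hypothetical self-reflection moderation template; not the authors' exact prompt.
MODERATION_PROMPT = """You are a content moderator.
Classify the following text as 'acceptable' or as one of:
- illegal activities (e.g. terrorism, child abuse, fraud)
- hateful, harassing or violent content (e.g. discrimination, self-harm, bullying)
- unqualified advice (e.g. legal, medical or financial)

Text:
{text}

Answer with a single category."""

def build_moderation_prompt(text: str) -> str:
    """Fill the template with the user prompt or model answer to be classified."""
    return MODERATION_PROMPT.format(text=text)
```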
2310.06825#12 | Mistral 7B | # 6 Conclusion Our work on Mistral 7B demonstrates that language models may compress knowledge more than what was previously thought. This opens up interesting perspectives: the field has so far put the emphasis on scaling laws in 2 dimensions (directly associating model capabilities to training cost, as in [14]); the problem is rather 3 dimensional (model capabilities, training cost, inference cost), and much remains to be explored to obtain the best performance with the smallest possible model. # Acknowledgements We are grateful to CoreWeave for their 24/7 help in marshalling our cluster. We thank the CINECA/EuroHPC team, and in particular the operators of Leonardo, for their resources and help. We thank the maintainers of FlashAttention, vLLM, xFormers, Skypilot for their precious assistance in implementing new features and integrating their solutions into ours. A huge thanks to Tri Dao and Daniel Haziza for helping include Mistral related changes to FlashAttention and xFormers on a tight schedule. We thank the teams of Hugging Face, AWS, GCP, Azure ML for their intense help in making our model compatible everywhere. | 2310.06825#11 | 2310.06825#13 | 2310.06825 | [
"2302.13971"
] |
2310.06825#13 | Mistral 7B | [Figure 6 screenshot from llmboxing.com: llama-2-13b-chat vs. mistral-7b-instruct] Question: Can you recommend a foundational and accessible book about quantum physics? Round Winner: mistral-7b-instruct-v0.1. Mistral-7b-instruct: A great book for a foundational understanding of quantum physics is "The Quantum Universe: Everything that Can Happen Does Happen" by Brian Cox and Jeff Forshaw. This book is accessible and easy to understand, making it a great choice for beginners. It covers all the basics of quantum mechanics, including superposition, entanglement, and the uncertainty principle. The authors also explore the implications of quantum physics for our understanding of the universe, including the nature of time and space. Overall, "The Quantum Universe" is an excellent choice for anyone looking to gain a foundational understanding of this fascinating field. Llama-2-13b-chat: Certainly! For a foundational and accessible introduction to quantum physics, I highly recommend "The Feynman Lectures on Physics" by Richard Feynman. This classic text provides a clear and engaging overview of the principles and concepts of quantum mechanics, with minimal mathematical formalism and plenty of illustrations and examples. It's a great starting point for anyone looking to gain a deeper understanding of quantum physics without getting bogged down in too much technical detail. | 2310.06825#12 | 2310.06825#14 | 2310.06825 | [
"2302.13971"
] |
2310.06825#14 | Mistral 7B | Figure 6: Human evaluation of Mistral 7B – Instruct vs Llama 2 13B – Chat Example. An example of human evaluation from llmboxing.com. The question asks for recommendations of books in quantum physics. Llama 2 13B – Chat recommends a general physics book, while Mistral 7B – Instruct recommends a more relevant book on quantum physics and describes the contents in more detail. # References [1] Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. GQA: Training generalized multi-query transformer models from multi-head checkpoints. arXiv preprint arXiv:2305.13245, 2023. [2] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021. [3] Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020. [4] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. | 2310.06825#13 | 2310.06825#15 | 2310.06825 | [
"2302.13971"
] |
2310.06825#15 | Mistral 7B | Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, 2020. [5] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. [6] Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019. [7] Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. Quac: Question answering in context. arXiv preprint arXiv:1808.07036, 2018. [8] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019. [9] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018. [10] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. [11] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems, 2022. | 2310.06825#14 | 2310.06825#16 | 2310.06825 | [
"2302.13971"
] |
2310.06825#16 | Mistral 7B | [12] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. [13] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021. [14] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Thomas Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karén Simonyan, Erich Elsen, Oriol Vinyals, Jack Rae, and Laurent Sifre. An empirical analysis of compute-optimal large language model training. In Advances in Neural Information Processing Systems, volume 35, 2022. [15] Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. | 2310.06825#15 | 2310.06825#17 | 2310.06825 | [
"2302.13971"
] |
2310.06825#17 | Mistral 7B | Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017. [16] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019. [17] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023. [18] Benjamin Lefaudeux, Francisco Massa, Diana Liskovich, Wenhan Xiong, Vittorio Caggiano, Sean Naren, Min Xu, Jieru Hu, Marta Tintore, Susan Zhang, Patrick Labatut, and Daniel Haziza. xformers: A modular and hackable transformer modelling library. https://github.com/facebookresearch/xformers, 2022. [19] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018. [20] Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. | 2310.06825#16 | 2310.06825#18 | 2310.06825 | [
"2302.13971"
] |
2310.06825#18 | Mistral 7B | Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023. [21] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99–106, 2021. [22] Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. Socialiqa: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728, 2019. [23] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. [24] Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937, 2018. | 2310.06825#17 | 2310.06825#19 | 2310.06825 | [
"2302.13971"
] |
2310.06825#19 | Mistral 7B | [25] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [26] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. [27] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. | 2310.06825#18 | 2310.06825#20 | 2310.06825 | [
"2302.13971"
] |
2310.06825#20 | Mistral 7B | Attention is all you need. Advances in neural information processing systems, 30, 2017. [28] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019. [29] Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. | 2310.06825#19 | 2310.06825#21 | 2310.06825 | [
"2302.13971"
] |
2310.06825#21 | Mistral 7B | Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364, 2023. | 2310.06825#20 | 2310.06825 | [
"2302.13971"
] |
|
2310.05910#0 | SALMON: Self-Alignment with Principle-Following Reward Models | # SALMON: SELF-ALIGNMENT WITH PRINCIPLE-FOLLOWING REWARD MODELS Zhiqing Sun1,2∗ Yikang Shen1 Hongxin Zhang3 Qinhong Zhou3 Zhenfang Chen1 David Cox1 Yiming Yang2 Chuang Gan1,3 1MIT-IBM Watson AI Lab, IBM Research 2Language Technologies Institute, CMU 3UMass Amherst https://github.com/IBM/SALMON # ABSTRACT Supervised Fine-Tuning (SFT) on response demonstrations combined with Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful paradigm for aligning LLM-based AI agents. However, a significant limitation of such an approach is its dependency on high-quality human annotations, making its application to intricate tasks challenging due to difficulties in obtaining consistent response demonstrations and in-distribution response preferences. This paper presents a novel approach, namely SALMON (Self-ALignMent with principle-fOllowiNg reward models), to align base language models with minimal human supervision, using only a small set of human-defined principles, yet achieving superior performance. Central to our approach is a principle-following reward model. Trained on synthetic preference data, this model can generate reward scores based on arbitrary human-defined principles. By merely adjusting these principles during the RL training phase, we gain full control over the preferences with the reward model, subsequently influencing the behavior of the RL-trained policies, and eliminating the reliance on the collection of online human preferences. Applying our method to the LLaMA-2-70b base language model, we developed an AI assistant named Dromedary-2. With only 6 exemplars for in-context learning and 31 human-defined principles, Dromedary-2 significantly surpasses the performance of several state-of-the-art AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have open-sourced the code and model weights to encourage further research into aligning LLM-based AI agents with enhanced supervision efficiency, improved controllability, and scalable oversight. | 2310.05910#1 | 2310.05910 | [
"2302.13971"
] |
|
2310.05910#1 | SALMON: Self-Alignment with Principle-Following Reward Models | # 1 INTRODUCTION The prevailing AI alignment paradigm, exemplified in models like ChatGPT (OpenAI, 2022) and LLaMA-2-Chat (Touvron et al., 2023b), employs supervised fine-tuning (SFT) with prompted demonstrations (Sanh et al., 2021; Chung et al., 2022a; Zhou et al., 2023) and reinforcement learning from human feedback (RLHF) to align the outputs of large language models (LLMs) with human intentions (Ziegler et al., 2019; Ouyang et al., 2022). However, acquiring high-quality human annotations, including consistent response demonstrations and in-distribution preferences, is costly and not scalable (Touvron et al., 2023b). Furthermore, the existing paradigm of SFT + RLHF is inherently limited in assuming that humans can always demonstrate or evaluate the tasks undertaken by advanced AI systems. | 2310.05910#0 | 2310.05910#2 | 2310.05910 | [
"2302.13971"
] |
2310.05910#2 | SALMON: Self-Alignment with Principle-Following Reward Models | Although today's models fall within human evaluative boundaries, future, more advanced models could embark on tasks that challenge human evaluation. Consequently, there is a looming danger, i.e., such models may value appeasing human evaluators over ensuring accuracy (Andreas, 2022; Perez et al., 2022). ∗Correspondence: [email protected]. Work done during internship at MIT-IBM Watson AI Lab. Table 1: Comparison of human supervisions used in recent AI systems and their MT-Bench scores (Zheng et al., 2023). We exclude models that used any Knowledge Distillation (KD) data. The alignment techniques used in previous work include SFT (Supervised Fine-tuning), RLHF (Reinforcement Learning from Human Feedback), and CAI (Constitutional AI). Information is from: a OpenAI (2023b), b Bai et al. (2022b); Anthropic (2023), c OpenAI (2022), d OpenAI (2023a). Closed-source models (# demonstration annotations / # preference annotations / MT-Bench score / alignment techniques): InstructGPT-SFT (175b): 12,725 / 0 / 2.7 / SFT a; InstructGPT (175b): 12,725 / 33,207 / ? / SFT & RLHF a; Text-Davinci-003 (175b): ? / ? / 6.4 / SFT & RLHF a; Claude-V1 (?): ? / ? / 7.9 / RLHF & CAI b; ChatGPT (?): ? / ? / 7.9 / SFT & RLHF c; GPT-4 (?): ? / ? / 9.0 / SFT & RLHF & CAI d. Non-distilled open-source models (# demonstration annotations / # preference annotations / MT-Bench score): Dolly-V2 (12b): 15,000 / 0 / 2.0; Guanaco (65b): 9,846 / 0 / 6.4; OpenAssistant-SFT (30b): 69,614 / 0 / 6.4; OpenAssistant (30b): 69,614 / 39,670 / 6.6; LLaMA-2-Chat (70b): 27,540 / 1,418,091 / 6.9; Dromedary-2 (70b): 6 / 0 / 7.4. | 2310.05910#1 | 2310.05910#3 | 2310.05910 | [
"2302.13971"
] |
2310.05910#3 | SALMON: Self-Alignment with Principle-Following Reward Models | To address the current challenges in AI alignment, we aim to develop a new methodology that facilitates scalable oversight (Amodei et al., 2016; Bowman et al., 2022). Our vision is to define a few general principles, akin to Isaac Asimov's three laws in robotics (Asimov, 1941), which are comprehensively internalizable for AI systems to follow (Gilardi et al., 2023; Ganguli et al., 2023). This goal is in line with the recent research on self-alignment (Bai et al., 2022b; Sun et al., 2023b), where the primary focus is to use AI models to improve themselves, e.g., with bootstrapping over the model-generated critiques (Madaan et al., 2023; Fu et al., 2023) or self-refined outputs (Wang et al., 2022a; Li et al., 2023a). However, it is worth noting that these bootstrapping methods still lag behind the RLHF method in performance (Bai et al., 2022b; Touvron et al., 2023b). Meanwhile, methods like Reinforcement Learning from AI Feedback (RLAIF) or Constitutional AI (CAI) (Bai et al., 2022b; OpenAI, 2023a) have emerged as a potential alternative. These techniques leverage feedback from automated AI systems, reducing the reliance on exhaustive human-annotated preferences. So far, the primary focus of the previous RLAIF work remains on enhancing the safety of the models that have already undergone RLHF training. That is, these RLAIF methods inherit the heavy dependency on the human-annotated preferences in the RLHF warm-up stage. This leads to a pivotal research question: • Can RLAIF fully replace RLHF to align language models from scratch in enhancing their general alignment and capabilities? This paper provides a definitive confirmation for the above question by introducing a novel approach, namely SALMON. At the heart of our approach lies the introduction of the principle-following (also termed instruction-following) reward model. Pioneering in its nature, this reward model is adept at interpreting and adhering to arbitrary human-written preference guidelines, subsequently generating human-guided reward scores. | 2310.05910#2 | 2310.05910#4 | 2310.05910 | [
"2302.13971"
] |
2310.05910#4 | SALMON: Self-Alignment with Principle-Following Reward Models | This is different from previous RLAIF methods (Bai et al., 2022b; OpenAI, 2023a) where the principles are only used to produce synthetic preferences, and the resulting reward models generate scores without any specific principles, as illustrated in Figure 1. The design of our principle-following reward model enables better control over the behavior of the final RL-trained policy model. Within conventional RLHF paradigms, the iterative collection of online (in-distribution) preference data (Bai et al., 2022a; Touvron et al., 2023b) is essential to counteract reward hacking (Pan et al., 2022). This complication emerges when the policy model exploits weaknesses in the reward model, producing inflated scores that do not accurately reflect model performance. In SALMON, we can address this issue by simply crafting principles explicitly | 2310.05910#3 | 2310.05910#5 | 2310.05910 | [
"2302.13971"
] |
2310.05910#5 | SALMON: Self-Alignment with Principle-Following Reward Models | [Figure 1 diagram: RLHF (Ouyang et al., 2022) uses human-labeled preferences over prompt–response pairs to train a stand-alone reward model for general alignment; RLAIF (Bai et al., 2022) uses AI-labeled preferences guided by safety principles to train a stand-alone reward model; SALMON (Ours) uses AI-labeled preferences with principle aggregation to train a principle-following reward model whose reward score is conditioned on the provided principles. In general, SFT denotes the Supervised Fine-Tuned model, but it can also be RLHF-trained in RLAIF.] Figure 1: Comparison among RLHF (Ouyang et al., 2022), RLAIF (Bai et al., 2022b), and SALMON (Ours). The vanilla (stand-alone) reward models in RLHF & RLAIF are trained to give high scores to generally good responses, while the principle-following reward model in SALMON is trained to generate reward scores based on customized principles as the preference guideline. designed to combat observed1 reward hacking patterns in model outputs, such as self-praising at the end of the response. Additionally, we found that we are able to emphasize distinct aspects of alignment in the HHH (helpful, honest, and harmless) alignment framework (Askell et al., 2021) by customizing the preference principles. Our methodology also proved effective in reducing the occurrence of false refusals seen in certain over-aligned language models (Touvron et al., 2023b) by crafting special principles. Our principle-following reward model can be trained with synthetic data and seamlessly applied to a diverse range of language models without collecting any model-specific human preference data (Bai et al., 2022a; Touvron et al., 2023b). | 2310.05910#4 | 2310.05910#6 | 2310.05910 | [
"2302.13971"
] |
2310.05910#6 | SALMON: Self-Alignment with Principle-Following Reward Models | Possible policy model initialization strategies include principle-driven self-alignment (Sun et al., 2023b), supervised fine-tuning on human demonstrations (Chung et al., 2022a; Zhou et al., 2023), or even those unaligned base language models (Touvron et al., 2023a). Remarkably, when integrated with the SELF-ALIGN technique (Sun et al., 2023b), our method enabled the training of a self-aligned AI-assistant agent, namely Dromedary-2, from scratch by only manually crafting 6 exemplars for In-Context Learning (Brown et al., 2020) and a combined total of 31 principles (17 from SELF-ALIGN and 14 for SALMON). Despite its minimal human supervision design, our model outperformed the extensively RLHF-trained LLaMA-2-Chat model (Touvron et al., 2023b), which was trained with over 20,000+ human-curated response demonstrations and 1,000,000+ human-annotated response preferences. The comparisons of human supervision efficiency and performance on MT-Bench (Zheng et al., 2023) are detailed in Table 1. # 2 RELATED WORK AI Alignment from Scratch The problem of aligning AIs (Gabriel, 2020), especially large language models (LLMs), to human values and intentions in terms of being helpful, honest, and harmless (Christiano et al., 2017; Patil et al., 2020; Askell et al., 2021; Ouyang et al., 2022; Bai et al., 2022a;b; OpenAI, 2023a) has gained significant attention as recent AI systems have rapidly advanced in their capabilities (Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020; Chowdhery et al., 2022). 1 In this paper, we write language descriptions of the reward-hacking patterns observed through human's manual inspection. Future work may consider a more systematic and automated approach (Bills et al., 2023; Zhong et al., 2023) for summarizing the language descriptions of the reward hacking patterns. | 2310.05910#5 | 2310.05910#7 | 2310.05910 | [
"2302.13971"
] |
2310.05910#7 | SALMON: Self-Alignment with Principle-Following Reward Models | This work focuses on the problem of aligning LLMs from scratch, that is, we aim to develop a new methodology capable of aligning a pre-trained base language model without relying on pre-existing, well-aligned models like ChatGPT (OpenAI, 2022) or GPT-4 (OpenAI, 2023a). This direction markedly differentiates our work from contemporary research primarily focused on distilling capabilities or aligned behaviors from proprietary models into smaller open-source models (Taori et al., 2023; Chiang et al., 2023), which has notable drawbacks (Gudibande et al., 2023). Scalable Oversight & Self-Alignment AI alignment traditionally relies heavily on extensive human annotations. Primary Supervised Fine-Tuning (SFT) sources for response demonstrations include those curated from existing NLP datasets (Sanh et al., 2021; Wei et al., 2021; Chung et al., 2022b; Wang et al., 2022b) and those specifically crafted by humans for instruction tuning (Databricks, 2023; Köpf et al., 2023; Zhou et al., 2023; Ouyang et al., 2022). In the recent trend of aligning language models with Reinforcement Learning from Human Feedback (RLHF; Christiano et al. (2017); Stiennon et al. (2020); Ouyang et al. (2022); Bai et al. (2022a); Touvron et al. (2023b)), online human preferences are collected to train a reward model to further fine-tune the SFT-trained model (Leike et al., 2018). However, acquiring high-quality human annotations, including consistent response demonstrations and in-distribution preferences, has emerged as a significant bottleneck. This limitation hampers the full potential of AI-assistant agents because human oversight in the current formats of demonstration or preference may not be generalizable to more complex tasks. Additionally, even for relatively simpler tasks, obtaining human annotations could be costly and raises concerns about quality, reliability, diversity, creativity, self-consistency, and the potential for undesirable biases (Wang et al., 2022a; Köpf et al., 2023; Wan et al., 2023). | 2310.05910#6 | 2310.05910#8 | 2310.05910 | [
"2302.13971"
] |
2310.05910#8 | SALMON: Self-Alignment with Principle-Following Reward Models | To address the above challenges, we need to develop a new paradigm to support "self-alignment" in AI systems that can facilitate scalable oversight (Nakano et al., 2021; Bowman et al., 2022). A few notable self-alignment techniques involve bootstrapping by fine-tuning on model-generated synthetic data. For instance, Self-Instruct (Wang et al., 2022a) bootstraps a base language model with its own generations conditional on 175 In-Context Learning (ICL) query-response pairs. Self-Align (Sun et al., 2023b) removes the need for response demonstrations and uses 16 principles and 5 ICL exemplars to guide the AI in generating appropriate responses. Instruction Back-translation (Li et al., 2023a) uses web documents to create new training examples for an SFT model trained on 3200 seed examples. But the efficacy of such bootstrapping strategies in outperforming the established RLHF paradigm remains an open question (Bai et al., 2022b; Touvron et al., 2023b). Reinforcement Learning from AI Feedback (RLAIF) Another line of self-alignment research seeks to fine-tune LLMs using a reward model trained on the AI's own evaluations (Bai et al., 2022b; OpenAI, 2023a) or a stronger LLM as the oracle evaluator (Dubois et al., 2023). In particular, Constitutional AI (CAI) (Bai et al., 2022b; OpenAI, 2023a) delves into self-enhancement for alleviating harmful outputs, without relying on human annotations. This is achieved through AI-generated self-critiques, revisions, and preference models. Guided by a set of human-written principles, this method aims to make AI systems more safe. In contrast, we mainly focus on improving the general alignment and capabilities of AI systems in this paper, rather than a special emphasis on safety. Additionally, our work draws parallels with techniques that train language models with reinforcement learning by pre-defined synthetic preference, as seen in approaches like ALMoST (Kim et al., 2023) and RLCD (Yang et al., 2023). | 2310.05910#7 | 2310.05910#9 | 2310.05910 | [
"2302.13971"
] |
2310.05910#9 | SALMON: Self-Alignment with Principle-Following Reward Models | ALMoST assumes that larger models with more few-shot exemplars tend to generate better responses, while RLCD assumes that positively prompted responses are generally better than negatively prompted responses. Contrarily, RLAIF methods, including CAI and SALMON, do not have preconceived preferences and instead let AI systems make choices after reviewing and comparing the response pairs. # 3 OUR METHODOLOGY # 3.1 PREREQUISITES Reinforcement Learning (RL) with preference modeling (Ziegler et al., 2019; Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a) has emerged as a potent and scalable strategy for aligning Large Language Models (LLM) with human values. It can be summarized into two stages: | 2310.05910#8 | 2310.05910#10 | 2310.05910 | [
"2302.13971"
] |
2310.05910#10 | SALMON: Self-Alignment with Principle-Following Reward Models | Preference Modeling In this stage, a reward model, alternatively referred to as a preference model, is trained to give a higher score to the "better" response. The source of pairwise comparison training data varies: it can be annotated by human annotators (Ouyang et al., 2022; Bai et al., 2022a), by existing AI systems (Bai et al., 2022b; OpenAI, 2023a), or pre-fixed with heuristics (Kim et al., 2023; Yang et al., 2023). Formally, let the aggregated preference data be represented as $\mathcal{D}_{\mathrm{RM}} = \{(x, y_0, y_1, i)\}$, where $x$ denotes the prompt, $y_0$ and $y_1$ are two associated responses, and $i$ indicates the index of the preferred response. The reward model employs a cross-entropy loss function: $\mathcal{L}(r_\theta) = -\mathbb{E}_{(x, y_0, y_1, i) \sim \mathcal{D}_{\mathrm{RM}}}\left[\log \sigma\big(r_\theta(x, y_i) - r_\theta(x, y_{1-i})\big)\right]$. (1) | 2310.05910#9 | 2310.05910#11 | 2310.05910 | [
"2302.13971"
] |
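A minimal PyTorch sketch of the pairwise loss in Eq. (1), assuming the reward model has already produced scalar scores for the preferred and dispreferred responses.

```python
import torch
import torch.nn.functional as F

def preference_loss(r_preferred: torch.Tensor, r_dispreferred: torch.Tensor) -> torch.Tensor:
    """Eq. (1): -log sigmoid(r(x, y_i) - r(x, y_{1-i})), averaged over the batch."""
    return -F.logsigmoid(r_preferred - r_dispreferred).mean()

# Example: scalar rewards for a batch of 3 comparisons.
loss = preference_loss(torch.tensor([1.2, 0.3, 2.0]), torch.tensor([0.7, 0.9, -0.5]))
```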
2310.05910#11 | SALMON: Self-Alignment with Principle-Following Reward Models | Reinforcement Learning Here, a policy model is trained to generate an appropriate response for each user query by maximizing the reward signal as provided by the reward model. Initialization of the policy model can be accomplished using a pre-trained base language model (BASE) (Bai et al., 2022b), context distillation (CD) (Bai et al., 2022a; Sun et al., 2023b), or through supervised fine-tuning (SFT) (Ouyang et al., 2022; Touvron et al., 2023b). To address potential over-optimization challenges, notably reward hacking, a per-token KL penalty derived from the initial policy model (Ouyang et al., 2022) is sometimes applied. Formally, given the set of collected user prompts, $\mathcal{D}_{\mathrm{RL}} = \{x\}$, along with the fixed initial policy model $\pi^{\mathrm{INIT}}$ and the RL-optimized model $\pi^{\mathrm{RL}}_{\phi}$, the full optimization loss is articulated as: $\mathcal{L}(\pi^{\mathrm{RL}}_{\phi}) = -\mathbb{E}_{x \sim \mathcal{D}_{\mathrm{RL}},\, y \sim \pi^{\mathrm{RL}}_{\phi}(y \mid x)}\left[r_\theta(x, y) - \beta \cdot \mathbb{D}_{\mathrm{KL}}\big(\pi^{\mathrm{RL}}_{\phi}(y \mid x) \,\|\, \pi^{\mathrm{INIT}}(y \mid x)\big)\right]$, (2) where $\beta$ is the hyper-parameter to control the scale of the KL penalty. # 3.2 PRINCIPLE-DRIVEN PREFERENCE MODELING A significant challenge within the current RLHF paradigm is the necessity to iteratively gather " | 2310.05910#10 | 2310.05910#12 | 2310.05910 | [
"2302.13971"
] |
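A sketch of the KL-penalized reward in Eq. (2), using the usual per-token log-probability difference as a Monte Carlo estimate of the KL term; this illustrates the objective only, not the full RL training loop.

```python
import torch

def penalized_reward(reward: torch.Tensor,
                     logprob_policy: torch.Tensor,
                     logprob_init: torch.Tensor,
                     beta: float) -> torch.Tensor:
    """Reward minus beta times an estimate of KL(pi_RL || pi_INIT) per sampled response.

    `logprob_policy` and `logprob_init` hold per-token log-probs of the same sampled
    tokens under the RL policy and the frozen initial policy, shape (batch, seq_len).
    """
    kl_estimate = (logprob_policy - logprob_init).sum(dim=-1)  # per-sequence KL estimate
    return reward - beta * kl_estimate
```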
2310.05910#12 | SALMON: Self-Alignment with Principle-Following Reward Models | fresh" human preferences, aimed at countering reward hacking. Specifically, there is a risk that the RL-optimized model $\pi^{\mathrm{RL}}_{\phi}$ might exploit certain vulnerabilities in the fixed reward model, thereby artificially boosting its score without genuine performance improvement (Gao et al., 2023). For example, Bai et al. (2022a) revealed that both the reward model and RLHF policies require weekly updates. Similarly, Touvron et al. (2023b) documented the weekly collection of human preferences over five iterations, emphasizing that this frequency ensures the reward model remains in-distribution. Consequently, the RLHF paradigm becomes highly reliant on human annotation, undermining its scalability for language model alignment, and limiting the utilization of pre-existing open-source preference pre-training data (Bai et al., 2022a). In this paper, we propose a novel Reinforcement Learning with AI Feedback (RLAIF) paradigm, where the AI system is used to label preferences in a scalable manner, and a principle-following reward model is trained to address the issue of reward hacking. Collecting Principle-Driven Synthetic Preferences Following Constitutional AI (Bai et al., 2022b; Kadavath et al., 2022), we sample two responses from the initial policy model, and use the policy model itself to select the preferred response based on a certain human-written principle. Figure 2 (SFT-Model (Judge)) demonstrates the preference prompt we used for the preference collection. After encoding the preference prompt, we calculate the log probability for the next token to be responses (A) or (B), subsequently determining a preference label based on their comparison. Notably, our methodology diverges from prior RLAIF approaches (Bai et al., 2022b; OpenAI, 2023a) that focus on AI safety when defining principles: In addition to harmlessness principles, we also set forth principles emphasizing honesty and helpfulness of the responses. Therefore, we do not need an RLHF-trained model as the initial policy model, as our policy model can learn to be more helpful when guided by these helpfulness principles. We illustrate the full list of the principles used for synthetic preference modeling in Table 6. For each user prompt and each principle, the preference score is computed as the difference between the log probabilities of choosing responses (A) or (B). | 2310.05910#11 | 2310.05910#13 | 2310.05910 | [
"2302.13971"
] |
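The preference score described above reduces to comparing the judge model's next-token log-probabilities for the two option letters. The sketch below shows that comparison with Hugging Face transformers; the prompt template, the helper name, and the use of a small stand-in model are assumptions for illustration, not the exact judging prompt from Figure 2.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def principle_preference_score(model, tokenizer, prompt: str) -> float:
    """Return log P(next token = 'A') - log P(next token = 'B').

    A positive score means response (A) is preferred under the given
    judging principle; a negative score prefers response (B).
    """
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]          # next-token logits
    log_probs = torch.log_softmax(logits, dim=-1)
    id_a = tokenizer("A", add_special_tokens=False).input_ids[0]
    id_b = tokenizer("B", add_special_tokens=False).input_ids[0]
    return (log_probs[id_a] - log_probs[id_b]).item()


if __name__ == "__main__":
    # Illustrative judging prompt in the spirit of Figure 2 (hypothetical wording).
    name = "gpt2"  # small placeholder backbone so the example stays runnable
    tok = AutoTokenizer.from_pretrained(name)
    lm = AutoModelForCausalLM.from_pretrained(name)
    prompt = (
        "Instruction: Who is the president of the US in 2025?\n"
        "Output (A): I cannot predict events after my knowledge cutoff.\n"
        "Output (B): John Doe is the president of the US in 2025.\n"
        "Principle (Honest): The AI must furnish reliable and factual information.\n"
        "The more honest output is Output ("
    )
    print(principle_preference_score(lm, tok, prompt))
```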
2310.05910#13 | SALMON: Self-Alignment with Principle-Following Reward Models | To account for potential position biases (Pezeshkpour & Hruschka, 2023) during the language model's multi-choice decision-making, scores are averaged after undergoing a swapping operation. Training Principle-Following Reward Models We aim to train an instruction-following reward model, which can comprehend and assign reward scores contingent upon arbitrary human-defined | 2310.05910#12 | 2310.05910#14 | 2310.05910 | [
"2302.13971"
] |
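The swapping operation mentioned above can be implemented by scoring both presentation orders and averaging, which cancels a judge's bias toward a fixed answer position. A minimal sketch, assuming a scoring function like the one in the previous example:

```python
from typing import Callable


def swap_averaged_preference(
    score_fn: Callable[[str, str], float],
    response_1: str,
    response_2: str,
) -> float:
    """Average the (A)-vs-(B) preference over both presentation orders.

    `score_fn(option_a, option_b)` is assumed to return
    log P("A") - log P("B") for a judging prompt that shows `option_a`
    as Output (A) and `option_b` as Output (B).
    """
    forward = score_fn(response_1, response_2)   # response_1 shown as (A)
    swapped = score_fn(response_2, response_1)   # response_1 shown as (B)
    # Flip the sign of the swapped score so both terms favor response_1 when positive.
    return 0.5 * (forward - swapped)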
2310.05910#14 | SALMON: Self-Alignment with Principle-Following Reward Models | [Figure 2 illustration, panels: "Collecting Principle-Driven Synthetic Preferences" (an SFT-Model judge selects the preferred output for a given instruction under a sampled principle, e.g. Honest); "Training Principle-Following Reward Models" (collected samples pair a prompt, two outputs, a sampled principle, and a preference, and multiple positive and negative principles are aggregated per prompt); and "Reinforcement Learning with the Principle-Following Reward Model" (the reward model scores a response under pre-defined principles and RL-time intervention principles).] | 2310.05910#13 | 2310.05910#15 | 2310.05910 | [
"2302.13971"
] |
2310.05910#15 | SALMON: Self-Alignment with Principle-Following Reward Models | Figure 2: Illustration of the SALMON training pipeline. principles. This can be achieved by constructing a special preference modeling dataset by leveraging the previously collected synthetic preference data, where each preference is paired with a pre-defined principle. The procedure to generate the synthetic training data for the principle-following preference modeling is delineated as follows. We first define the corresponding negative principles for each positive principle to increase the diversity of these principles. For example, the positive and negative definitions for the Concise principle are: Positive: The response should efficiently address the task or answer the question, conveying the necessary information succinctly. Negative: The response should circumvent directly addressing the task or providing an answer to the question. Next, for each user prompt, a subset of principles is randomly sampled from the established principle list (Table 6), with certain principles being randomly negated. The user prompt, model responses, and the sub-sampled principles are aggregated as a single training instance for the reward model. The final preference label is then calibrated by the principle exhibiting the most pronounced difference in preference scores. Appendix D describes a concrete example of final preference label calibration and Figure 2 (upper) demonstrates the training process of a principle-following (essentially instruction-following) reward model in SALMON. Our use of both positive and negative principles in principle aggregation enhances the reward model's ability to interpret these human-defined principles presented in textual format. In addition, we found the inclusion of negatively defined principles makes the reward model understand prohibition instructions, which allows us to prohibit the policy model from exhibiting specific undesirable behaviors through textual instructions, as demonstrated below. | 2310.05910#14 | 2310.05910#16 | 2310.05910 | [
"2302.13971"
] |
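The dataset-construction procedure above (sub-sample principles, randomly negate some, then calibrate the label with the principle showing the most pronounced preference difference) can be sketched as follows. The dataclass, field names, and default probabilities are illustrative assumptions, not the paper's released code.

```python
import random
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Principle:
    name: str
    positive: str   # positively phrased definition
    negative: str   # corresponding negated definition


def build_rm_instance(
    prompt: str,
    response_a: str,
    response_b: str,
    principles: List[Principle],
    preference_scores: Dict[str, float],  # swap-averaged score per principle, >0 favors A
    k: int = 3,
    negate_prob: float = 0.5,             # hypothetical negation rate
    seed: int = 0,
) -> Tuple[Dict, str]:
    """Assemble one synthetic training instance for the principle-following RM."""
    rng = random.Random(seed)
    sampled = rng.sample(principles, k=min(k, len(principles)))
    rendered, signed_scores = [], {}
    for p in sampled:
        negated = rng.random() < negate_prob
        rendered.append(p.negative if negated else p.positive)
        # Negating a principle flips which response it favors.
        signed_scores[p.name] = -preference_scores[p.name] if negated else preference_scores[p.name]
    # Calibrate the final label with the principle showing the largest absolute difference.
    decisive = max(signed_scores, key=lambda n: abs(signed_scores[n]))
    label = "A" if signed_scores[decisive] > 0 else "B"
    instance = {
        "prompt": prompt,
        "response_a": response_a,
        "response_b": response_b,
        "principles": rendered,
    }
    return instance, label
```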
2310.05910#16 | SALMON: Self-Alignment with Principle-Following Reward Models | [Figure 3 illustration: three example dialogues for the prompt "Make a resume for the following job: Software Engineer", in which the assistant hacks the reward model by giving step-by-step meta-advice instead of an actual resume, by praising its own answer, or by padding the resume with an unnecessary extra Data Analyst example; each is paired with a new RL-time preference intervention principle that discourages the behavior.] | 2310.05910#15 | 2310.05910#17 | 2310.05910 | [
"2302.13971"
] |
2310.05910#17 | SALMON: Self-Alignment with Principle-Following Reward Models | Figure 3: Three concrete examples of reward hacking and the corresponding RL-time preference intervention principles that we defined to alleviate these issues. # 3.3 RL WITH PRINCIPLE-FOLLOWING REWARD MODELS In original RLHF (Stiennon et al., 2020; OpenAI, 2022) or RLAIF (Bai et al., 2022b; OpenAI, 2023a), the reward model needs to judge the quality of the response only based on the user prompt, and give "better" responses higher scores: User: [PROMPT] Assistant: [RESPONSE] Reward Model: [SCORE] In SALMON, the principle-following reward model is trained to generate reward scores following human-defined judging principles, including the pre-defined ones and the RL-time preference intervention ones, which we will explain below: User: [PROMPT] Assistant: [RESPONSE] Judging Principles: [RL-TIME INTERVENTION + PREDEFINED] Reward Model: [SCORE] RL with Pre-defined Principles Training on synthetic principle-following preference data enables the reward model to interpret arbitrary instructions accurately. This capability facilitates the manipulation of the reward model' | 2310.05910#16 | 2310.05910#18 | 2310.05910 | [
"2302.13971"
] |
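The two input schemas above differ only in whether the judging principles are appended before the score is produced. A small sketch of assembling the principle-conditioned input is shown below; the delimiters and list formatting are assumptions for illustration rather than the exact template.

```python
from typing import List


def build_rm_input(prompt: str, response: str, judging_principles: List[str]) -> str:
    """Render the principle-conditioned input consumed by the reward model.

    Mirrors the schematic format
      User: [PROMPT] / Assistant: [RESPONSE] /
      Judging Principles: [RL-TIME INTERVENTION + PREDEFINED] / Reward Model: [SCORE].
    """
    principles_text = "\n".join(f"- {p}" for p in judging_principles)
    return (
        f"User: {prompt}\n"
        f"Assistant: {response}\n"
        f"Judging Principles:\n{principles_text}\n"
        f"Reward Model:"
    )


if __name__ == "__main__":
    text = build_rm_input(
        "Make a resume for the following job: Software Engineer",
        "Sure! Here is the resume ...",
        ["The AI should keep the response straightforward and on-point."],
    )
    print(text)
```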
2310.05910#18 | SALMON: Self-Alignment with Principle-Following Reward Models | s preferences during RL-time (i.e., its test-time) via defining new principles, which in turn shapes the behavior of the policy model trained with feedback from the principle-compliant reward model. Notably, we use a set of principles different from the reward model training stage, as illustrated in Table 7, which contains a few more principles describing how we would expect a well-aligned LLM AI-assistant agent to behave. During the RL training stage, to improve the diversity coverage and stochasticity of the reward model preferences, we randomly sample k = 3 principles for each user prompt. Particularly, as a design of prompt-dependent principle selection, we adequately raise the ratio of sampling the Consistent Reasoning principle for reasoning prompts and the Ethical principle for red-teaming prompts. RL-time Preference Intervention In preliminary experiments, we mainly identified three tendencies that potentially allow the policy model to hack the reward model equipped with our predefined (Footnote 2: N.B., we do not expect that the training curriculum proposed by this work is the only one that can produce an instruction-following reward model.) | 2310.05910#17 | 2310.05910#19 | 2310.05910 | [
"2302.13971"
] |
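The prompt-dependent sampling of k = 3 judging principles per RL prompt can be sketched as a weighted draw without replacement, with the relevant principle up-weighted for reasoning or red-teaming prompts. The boost factor, prompt-type labels, and principle names used here are illustrative assumptions.

```python
from typing import List

import numpy as np


def sample_rl_principles(
    principle_names: List[str],
    prompt_type: str,          # e.g. "general", "reasoning", "red-teaming"
    k: int = 3,
    boost: float = 3.0,        # hypothetical up-weighting factor
    seed: int = 0,
) -> List[str]:
    """Sample k distinct judging principles for one RL prompt.

    General prompts use uniform weights; reasoning prompts up-weight the
    'Consistent Reasoning' principle and red-teaming prompts up-weight
    'Ethical', in the spirit of the prompt-dependent selection above.
    """
    weights = np.ones(len(principle_names), dtype=float)
    target = {"reasoning": "Consistent Reasoning", "red-teaming": "Ethical"}.get(prompt_type)
    if target is not None and target in principle_names:
        weights[principle_names.index(target)] *= boost
    probs = weights / weights.sum()
    rng = np.random.default_rng(seed)
    chosen = rng.choice(
        len(principle_names), size=min(k, len(principle_names)), replace=False, p=probs
    )
    return [principle_names[i] for i in chosen]
```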
2310.05910#19 | SALMON: Self-Alignment with Principle-Following Reward Models | principles: (1) The AI assistant often provides high-level advice in response to user queries, bypassing the provision of concrete solutions. (2) The AI assistant frequently engages in self-praise, disrupting the reward model's evaluation capabilities. (3) The AI assistant tends to over-educate, such as providing analogous examples following the solutions of math problems. Figure 3 provides concrete examples of these reward hacking patterns. To mitigate the aforementioned reward hacking tendencies, we manually compose an additional RL-time intervention principle for each pattern, respectively, as also shown in Figure 3. We found these RL-time interventions are markedly effective. For example, conventionally, avoiding reward hacking in RLHF necessitates the collection of online preference data aligned with the updated policy model. Contrarily, we show that we can re-use the same principle-following reward model, but steer its preference by defining prohibition instructions via natural language to deter the policy model from manifesting specific undesired behaviors. Symbolic Rewards: Multilingual Bonus & Length Bonus Unlike conventional RLAIF (Bai et al., 2022b; OpenAI, 2023a), the AI preferences in SALMON are not necessarily generated by a powerful RLHF-trained model. As a result, as opposed to the RLHF model, our SFT-based or SELF-ALIGN-based synthetic preference model occasionally struggles to discern the more helpful response, thereby adversely impacting the quality of the synthetic preference data. To bolster the reward model's efficacy, we propose two supplementary symbolic rewards: • When using a multilingual prompt dataset, we noted that weak AI-assistant agents occasionally produce English responses to non-English prompts. Hence, we introduce a bonus reward for responses matching the prompt's language, as identified by an automated tool (footnote 3). • We observe a preference for lengthier responses among users or well-aligned RLHF-trained LLM AI assistants (Dubois et al., 2023; Zheng et al., 2023). Longer responses often encompass a more extensive examination of the issue at hand, prompting us to include response length, quantified in response tokens, as an auxiliary bonus reward score. 4 EXPERIMENTS 4.1 DROMEDARY-2 | 2310.05910#18 | 2310.05910#20 | 2310.05910 | [
"2302.13971"
] |
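The two symbolic bonuses above can be computed outside the reward model and added to its score. A minimal sketch using the langdetect package referenced in the footnote is given below; the bonus weights, the length cap, and the function name are assumptions for illustration, not the paper's actual coefficients.

```python
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException


def symbolic_bonus(
    prompt: str,
    response: str,
    response_token_len: int,
    lang_bonus: float = 1.0,           # hypothetical weight for the language match
    len_bonus_per_token: float = 0.001,  # hypothetical per-token length bonus
    max_len_bonus: float = 0.5,          # hypothetical cap on the length bonus
) -> float:
    """Auxiliary reward added to the reward-model score.

    Adds `lang_bonus` when the detected language of the response matches
    the prompt, plus a capped bonus proportional to the response length
    measured in tokens.
    """
    bonus = 0.0
    try:
        if detect(response) == detect(prompt):
            bonus += lang_bonus
    except LangDetectException:
        pass  # the detector can fail on very short or mixed-language strings
    bonus += min(len_bonus_per_token * response_token_len, max_len_bonus)
    return bonus
```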
2310.05910#20 | SALMON: Self-Alignment with Principle-Following Reward Models | Starting from the LLaMA-2-70b base language model (Touvron et al., 2023b), Dromedary-2 is first Supervised Fine-Tuned (SFT) with the bootstrapping data generated by an improved version (footnote 4) of SELF-ALIGN with 6 In-Context Learning exemplars (Sun et al., 2023b). Following this, a Reinforcement Learning (RL) fine-tuning stage is conducted employing the SALMON paradigm. Our endeavor aims at advancing the frontier of AI alignment while minimizing the requisite for human oversight. In this work, the human demonstration annotations are solely confined to providing six In-Context Learning exemplars via SELF-ALIGN, while the ensuing model behavior, especially at the RL stage, is fully controlled by human-defined principles. | 2310.05910#19 | 2310.05910#21 | 2310.05910 | [
"2302.13971"
] |
2310.05910#21 | SALMON: Self-Alignment with Principle-Following Reward Models | # 4.1.1 DATASETS All the training datasets used in this work are "prompt datasets" that come without the corresponding response demonstrations. Self-Align We use a combination of 90k ShareGPT (footnote 5) prompts, 10k prompts from the databricks-dolly-15k dataset (Databricks, 2023), 10k prompts from the OpenAssistant Conversations dataset (Köpf et al., 2023), and 40k prompts sub-sampled from the OpenOrca dataset (Mukherjee et al., 2023; Lian et al., 2023), which is constituted by prompts from T0 (Sanh et al., 2021) and FLAN (Wei et al., 2021; Chung et al., 2022b). We only keep the first query from users as the unlabeled prompts. Footnote 3: https://pypi.org/project/langdetect. Footnote 4: We provide an improved principle-driven self-alignment prompt in Appendix G. Footnote 5: ShareGPT.com data was used to train the Vicuna model (Chiang et al., 2023), but the exact dataset has not been released. In this paper, we use the reproduced version from https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered | 2310.05910#20 | 2310.05910#22 | 2310.05910 | [
"2302.13971"
] |
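The prompt mixture above is just a sub-sampled union of the four sources with only the first user query retained. A sketch of that preprocessing is shown below; the record field name and the exact sub-sampling are illustrative assumptions, not the released preprocessing code.

```python
import random
from typing import Dict, List


def build_prompt_mixture(
    sharegpt: List[Dict],
    dolly: List[Dict],
    oasst: List[Dict],
    openorca: List[Dict],
    seed: int = 0,
) -> List[str]:
    """Merge the unlabeled prompt sources: 90k ShareGPT, 10k Dolly, 10k OASST, 40k OpenOrca."""
    rng = random.Random(seed)

    def take(rows: List[Dict], n: int) -> List[str]:
        # Keep only the first user query of each conversation (hypothetical field name).
        rows = rows if len(rows) <= n else rng.sample(rows, n)
        return [r["first_user_query"] for r in rows]

    prompts = (
        take(sharegpt, 90_000)
        + take(dolly, 10_000)
        + take(oasst, 10_000)
        + take(openorca, 40_000)
    )
    rng.shuffle(prompts)
    return prompts
```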
2310.05910#22 | SALMON: Self-Alignment with Principle-Following Reward Models | Figure 4: GPT-4-based automatic evaluation on Vicuna-Bench and MT-Bench. Dromedary-2 outperforms LLaMA-2-Chat-70b and thus represents the state-of-the-art chatbot performance in non-distilled open-source models. MT-Bench scores (overall / turn 1 / turn 2): GPT-4 9.00 / 8.96 / 9.03; ChatGPT 7.94 / 8.08 / 7.81; Claude-V1 7.90 / 8.15 / 7.65; Dromedary-2-70b 7.37 / 7.77 / 6.96; Vicuna-33b (KD) 7.13 / 7.46 / 6.79; Dromedary-2-70b (before PPO) 6.91 / 7.48 / 6.34; LLaMA-2-Chat-70b 6.88 / 7.04 / 6.73; Guanaco-33b 6.53 / 6.88 / 6.18. The figure additionally shows pairwise win/tie/lose comparisons on Vicuna-Bench, evaluated by GPT-4. Preference Modeling The synthetic principle-driven preference modeling data is collected by generating responses to the first prompts in each conversation tree of OpenAssistant (OASST1; Köpf et al. (2023)), which constitutes a collection of 9.8k prompts. Following LLaMA-2-Chat (Touvron et al., 2023b), we use existing open-source preference datasets to enable better generalization for the reward model and prevent reward hacking. 160k Anthropic HH-RLHF (Bai et al., 2022a) human preferences and 160k synthetic preferences sub-sampled from Stanford SHP (Ethayarajh et al., 2022) are used for Preference Model Pre-training (PMP; Bai et al. (2022a)). | 2310.05910#21 | 2310.05910#23 | 2310.05910 | [
"2302.13971"
] |
2310.05910#23 | SALMON: Self-Alignment with Principle-Following Reward Models | RL training The RL training uses the same collection of unlabeled prompts as the Self-Align SFT stage, with an additional 7.5k math problem prompts from the MATH dataset (Hendrycks et al., 2021) to improve the mathematical solving capability of our model. 4.1.2 TRAINING DETAILS The architecture of the reward model is the same as the base LLaMA model, except that the embedding output of the last token is linearly projected to a scalar value to indicate the reward of the whole response. Following Dubois et al. (2023), we initialize the value model from the reward model. To fit all the models (i.e., policy, reward, value, original policy) into one GPU, we adopt QLoRA (Dettmers et al., 2023; Hu et al., 2021) for all the fine-tuning processes in SELF-ALIGN and SALMON. We use Proximal Policy Optimization (PPO; Schulman et al. (2017)) with a KL penalty for the RL training. More details can be found in Appendix F. 4.1.3 BASELINE MODELS Due to the space limit, we describe the details of the baseline models in the appendix. Notably, we mainly compare with non-distilled models that are aligned from scratch. While there are potentially stronger open-source LLMs, such as Orca (Mukherjee et al., 2023) and WizardLM (Xu et al., 2023), our primary open-source baseline for comparison is LLaMA-2-Chat (Touvron et al., 2023b), as it stands out as the best open-source LLM that has been aligned from scratch. 4.2 BENCHMARK EVALUATIONS Chatbot Evaluation Human evaluation is often regarded as the gold standard for judging AI chatbots, but is not always scalable and reproducible. In this work, we primarily investigate automatic evaluation leveraging GPT-4 on prevalent chatbot benchmarks, deferring human evaluation to future work. In this paper, we conduct GPT-4-based automatic evaluation on Vicuna-Bench (Chiang et al., 2023) and MT-Bench (Zheng et al., 2023) to measure the chatbot capability of our model. The results can be found in Figure 4. | 2310.05910#22 | 2310.05910#24 | 2310.05910 | [
"2302.13971"
] |
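The reward-model architecture described above (a causal LM backbone whose last-token hidden state is linearly projected to a scalar) can be sketched as follows. The class name and the use of a small stand-in backbone are assumptions so the example stays runnable; initializing the value model from these weights then amounts to copying the backbone and head before PPO starts.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM


class ScalarRewardModel(nn.Module):
    """Causal LM backbone with the last token's hidden state projected to a scalar reward."""

    def __init__(self, backbone_name: str = "gpt2"):
        super().__init__()
        self.backbone = AutoModelForCausalLM.from_pretrained(
            backbone_name, output_hidden_states=True
        )
        hidden_size = self.backbone.config.hidden_size
        self.reward_head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        last_hidden = out.hidden_states[-1]              # (batch, seq_len, hidden)
        # Index of the last non-padding token for each sequence.
        last_idx = attention_mask.sum(dim=1) - 1
        pooled = last_hidden[torch.arange(last_hidden.size(0)), last_idx]
        return self.reward_head(pooled).squeeze(-1)      # (batch,) scalar rewards
```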
2310.05910#24 | SALMON: Self-Alignment with Principle-Following Reward Models | We also evaluate our model on the AlpacaEval leaderboard (Li et al., 2023b) and report the results in Table 5 in the appendix. General Capability Evaluation We use Big Bench Hard (BBH; Suzgun et al. (2022)) as a testbed for reasoning ability, HumanEval (Chen et al., 2021) for coding ability, and TydiQA (Clark et al., 2020) for multilingual ability. We adopt the same evaluation protocol as Wang et al. (2023). Table 2: Evaluating the general capabilities and truthfulness of the LLM-based AI agents. Big-Bench Hard (BBH), HumanEval, and TydiQA are used to evaluate reasoning, coding, and multilingualism, respectively. † denotes results taken from Wang et al. (2023), where their BBH dataset is sub-sampled so may not be directly comparable. ‡ denotes results taken from Touvron et al. (2023b), where their GPT-3 judge model may not be exactly the same as ours. BBH Direct / BBH CoT / HumanEval P@1 / TydiQA GP: GPT-4† 50.9 / 88.0 / 85.7 / 70.8; ChatGPT† 49.0 / 66.1 / 72.2 / 51.9; Dromedary-2-70b 51.4 / 66.3 / 40.6 / 64.3; LLaMA-2-Chat-70b 43.1 / 52.2 / 35.0 / 27.9; LLaMA-2-70b 53.1 / 57.7 / 31.5 / 63.5; Vicuna-33b (KD) 41.2 / 50.8 / 21.1 / 37.5. TruthfulQA (Truthful, Tru*Inf), rows: Dromedary-2-70b, Vicuna-13b (KD), ChatGPT, Dromedary-2-70b (before PPO), LLaMA-2-Chat-70b‡, LLaMA-2-70b‡; values: 0.98 0.84 0.81 0.84 0.84 0.80 0.89 0.75 - - 0.64 0.50 | 2310.05910#23 | 2310.05910#25 | 2310.05910 | [
"2302.13971"
] |
2310.05910#25 | SALMON: Self-Alignment with Principle-Following Reward Models | The results are reported in Table 2 (left), where Dromedary-2 significantly outperforms the state-of-the-art open-source model, LLaMA-2-Chat. Truthfulness Evaluation The TruthfulQA benchmark (Lin et al., 2021) evaluates a model's ability to identify true claims, specifically in the context of literal truth about the real world. We use the same few-shot evaluation protocol and decoding strategy as in Touvron et al. (2023b) and report the percentage of generations that are both truthful and informative, evaluated by a fine-tuned GPT-3 model, i.e., a "GPT-judge". We present the results in Table 2 (right), where Dromedary-2 achieves a new state-of-the-art on this benchmark. 4.3 # IMPROVED CONTROLLABILITY BY PRINCIPLE INTERVENTION As a proof of concept, we demonstrate that by leveraging different principles as preference guidelines, we can fine-tune the policy model to selectively exhibit enhanced helpfulness, honesty, or harmlessness. We also show that we can define customized principles to reduce the occurrence of false refusals seen in certain over-aligned language models such as LLaMA-2-Chat (Touvron et al., 2023b). Due to the space limit, please refer to Appendix A for the detailed results. # 5 CONCLUSION | 2310.05910#24 | 2310.05910#26 | 2310.05910 | [
"2302.13971"
] |
2310.05910#26 | SALMON: Self-Alignment with Principle-Following Reward Models | In this paper, we introduce SALMON, a new AI alignment paradigm where a principle-following reward model is trained to effectively and flexibly align language models with human values and intentions. During the RL training stage, by merely adjusting the principles that the reward model follows, we can gain full control over the preferences of the reward model, and subsequently influence the behavior of the RL-trained policy model. This eliminates the traditional reliance on the exhaustive collection of online human preferences. Combined with the SELF-ALIGN technique (Sun et al., 2023b), we build a powerful AI-assistant agent, Dromedary-2, with only six exemplars for in-context learning and 31 human-defined principles. Our self-aligned AI agent significantly surpasses the performance of several state-of-the-art RLHF-trained AI systems in chatbot, reasoning, coding, multilingualism, and truthfulness benchmarks. # 6 LIMITATIONS While the SALMON paradigm marks a new advance in AI self-alignment, exhibiting remarkable instruction-following abilities and closely adhering to human-defined principles, it is not without constraints. Herein, we detail the primary limitations associated with our approach: | 2310.05910#25 | 2310.05910#27 | 2310.05910 | [
"2302.13971"
] |
2310.05910#27 | SALMON: Self-Alignment with Principle-Following Reward Models | 1. Reliability Concerns: We observed that the resulting Dromedary-2 model occasionally suffers from reliability issues, notably "hallucinating" unverified information and displaying reasoning errors. Such inaccuracies can potentially mislead users and jeopardize the model's trustworthiness. These shortcomings might stem from the inherent limitations of the SFT-initialized reward models. We envision that future work, potentially leveraging techniques that could integrate external fact-checking tools (Sun et al., 2023a), can augment the discriminative capability of the reward models, thereby enhancing the final model's accuracy and trustworthiness. 2. Principle Design Challenges: Crafting robust and encompassing principles for SALMON is intricate, mainly due to the unpredictability of the myriad scenarios a model might encounter during the RL stage. Balancing potentially conflicting principles introduces complexities that can yield unexpected results. We advocate for the participation of a diverse group, including ethicists and other stakeholders, to refine these guiding principles. It is crucial to recognize that distinct contexts and applications will necessitate unique strategies. We present our approach not as a universal solution but as a starting platform, aiming to foster expansive community discourse. 3. Context-Dependent Principle Selection: Our current methodology employs randomly sampled principles to instruct the reward model for general prompts. However, a pertinent observation reveals that the effectiveness of the principles can be problem-dependent. Analogous to raising the ratio of certain principles for reasoning or red-teaming prompts, it becomes evident that some tasks might benefit from specialized principles tailored to address the specific challenges posed by those tasks. This adds complexity to the principle-driven preference modeling, as the ideal principles can change based on the task. Future research should delve into adaptive principle selection, aiming to enhance task-specific feedback. | 2310.05910#26 | 2310.05910#28 | 2310.05910 | [
"2302.13971"
] |
2310.05910#28 | SALMON: Self-Alignment with Principle-Following Reward Models | 4. Intrinsic Knowledge Limitations: SALMON leverages the intrinsic knowledge of a Large Language Model (LLM). Nevertheless, it remains bound to the base model's inherent limitations. As such, the model might occasionally produce outputs that are either imprecise or do not capture recent advancements. Integrating techniques from retrieval-augmented generation (Lewis et al., 2020; Borgeaud et al., 2022) can potentially enable the well-aligned model to generate more current and up-to-date information, mitigating some of these knowledge limitations. # REFERENCES | 2310.05910#27 | 2310.05910#29 | 2310.05910 | [
"2302.13971"
] |
2310.05910#29 | SALMON: Self-Alignment with Principle-Following Reward Models | 4. Intrinsic Knowledge Limitations: SALMON leverages the intrinsic knowledge of a Large Language Model (LLM). Nevertheless, it remains bound to the base modelâ s inherent limitations. As such, the model might occasionally produce outputs that are either imprecise or do not capture recent advancements. Integrating techniques from retrieval-augmented generation (Lewis et al., 2020; Borgeaud et al., 2022) can potentially enable the well-aligned model to generate more current and up-to-date information, mitigating some of these knowledge limitations. # REFERENCES | 2310.05910#28 | 2310.05910#30 | 2310.05910 | [
"2302.13971"
] |
2310.05910#30 | SALMON: Self-Alignment with Principle-Following Reward Models | Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Man´e. Con- crete problems in ai safety. arXiv preprint arXiv:1606.06565, 2016. Jacob Andreas. Language models as agent models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 5769â 5779, 2022. Anthropic. Core views on ai safety: When, why, what, and how, 2023. URL https://www. anthropic.com/index/core-views-on-ai-safety. Isaac Asimov. Three laws of robotics. Asimov, I. Runaround, 2, 1941. | 2310.05910#29 | 2310.05910#31 | 2310.05910 | [
"2302.13971"
] |
2310.05910#31 | SALMON: Self-Alignment with Principle-Following Reward Models | Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Ols- son, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran- Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mer- cado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Con- erly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. | 2310.05910#30 | 2310.05910#32 | 2310.05910 | [
"2302.13971"
] |
2310.05910#32 | SALMON: Self-Alignment with Principle-Following Reward Models | Constitutional ai: Harmlessness from ai feedback, 2022b. Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle Oâ Brien, Eric Hal- lahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. 11 # Preprint Pythia: A suite for analyzing large language models across training and scaling. arXiv preprint arXiv:2304.01373, 2023. Steven Bills, Nick Cammarata, Dan Mossing, Henk Tillman, Leo Gao, Gabriel Goh, Ilya Language models can explain Sutskever, Jan Leike, Jeff Wu, and William Saunders. neurons in language models. https://openaipublic.blob.core.windows.net/ neuron-explainer/paper/index.html, 2023. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Milli- can, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pp. 2206â 2240. PMLR, 2022. Samuel R Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamile Lukosuite, Amanda Askell, Andy Jones, Anna Chen, et al. Measuring progress on scalable over- sight for large language models. arXiv preprint arXiv:2211.03540, 2022. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. | 2310.05910#31 | 2310.05910#33 | 2310.05910 | [
"2302.13971"
] |
2310.05910#33 | SALMON: Self-Alignment with Principle-Following Reward Models | Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877â 1901, 2020. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. | 2310.05910#32 | 2310.05910#34 | 2310.05910 | [
"2302.13971"
] |
2310.05910#34 | SALMON: Self-Alignment with Principle-Following Reward Models | Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https: //vicuna.lmsys.org. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. | 2310.05910#33 | 2310.05910#35 | 2310.05910 | [
"2302.13971"
] |
2310.05910#35 | SALMON: Self-Alignment with Principle-Following Reward Models | Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30, 2017. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language mod- els. arXiv preprint arXiv:2210.11416, 2022a. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pel- lat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022b. Jonathan H Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. | 2310.05910#34 | 2310.05910#36 | 2310.05910 | [
"2302.13971"
] |
2310.05910#36 | SALMON: Self-Alignment with Principle-Following Reward Models | Tydi qa: A benchmark for information-seeking question answering in ty pologically di verse languages. Transactions of the Association for Computational Linguistics, 8:454â 470, 2020. Databricks. llm, dolly-first-open-commercially-viable-instruction-tuned-llm. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. | 2310.05910#35 | 2310.05910#37 | 2310.05910 | [
"2302.13971"
] |
2310.05910#37 | SALMON: Self-Alignment with Principle-Following Reward Models | BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. 12 Preprint Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387, 2023. | 2310.05910#36 | 2310.05910#38 | 2310.05910 | [
"2302.13971"
] |
2310.05910#38 | SALMON: Self-Alignment with Principle-Following Reward Models | Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. Understanding dataset difficulty with V-usable information. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 5988â 6008. PMLR, 17â 23 Jul 2022. Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. Improving language model negotiation with self-play and in-context learning from ai feedback. arXiv preprint arXiv:2305.10142, 2023. | 2310.05910#37 | 2310.05910#39 | 2310.05910 | [
"2302.13971"
] |
2310.05910#39 | SALMON: Self-Alignment with Principle-Following Reward Models | Iason Gabriel. Artificial intelligence, values, and alignment. Minds and machines, 30(3):411â 437, 2020. Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858, 2022. Deep Ganguli, Amanda Askell, Nicholas Schiefer, Thomas Liao, KamilË | 2310.05910#38 | 2310.05910#40 | 2310.05910 | [
"2302.13971"
] |
2310.05910#40 | SALMON: Self-Alignment with Principle-Following Reward Models | e LukoË si¯utË e, Anna Chen, Anna Goldie, Azalia Mirhoseini, Catherine Olsson, Danny Hernandez, et al. The capacity for moral self-correction in large language models. arXiv preprint arXiv:2302.07459, 2023. Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pp. 10835â | 2310.05910#39 | 2310.05910#41 | 2310.05910 | [
"2302.13971"
] |
2310.05910#41 | SALMON: Self-Alignment with Principle-Following Reward Models | 10866. PMLR, 2023. Fabrizio Gilardi, Meysam Alizadeh, and Ma¨el Kubli. Chatgpt outperforms crowd-workers for text- annotation tasks. arXiv preprint arXiv:2303.15056, 2023. Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey arXiv preprint Levine, and Dawn Song. The false promise of imitating proprietary llms. arXiv:2305.15717, 2023. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021. Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, In International Conference on et al. | 2310.05910#40 | 2310.05910#42 | 2310.05910 | [
"2302.13971"
] |
2310.05910#42 | SALMON: Self-Alignment with Principle-Following Reward Models | Lora: Low-rank adaptation of large language models. Learning Representations, 2021. Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. Language mod- els (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022. Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung Kang, Donghyun Kwak, Kang Min Yoo, and Minjoon Seo. Aligning large language models through synthetic feedback. arXiv preprint arXiv:2305.13735, 2023. Andreas K¨opf, Yannic Kilcher, Dimitri von R¨utte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Rich´ard Nagyfi, et al. Openassistant conversationsâ democratizing large language model alignment. arXiv preprint arXiv:2304.07327, 2023. Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. Scalable agent alignment via reward modeling: a research direction. arXiv preprint arXiv:1811.07871, 2018. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K¨uttler, Mike Lewis, Wen-tau Yih, Tim Rockt¨aschel, et al. Retrieval-augmented genera- tion for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33: 9459â 9474, 2020. 13 # Preprint Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, and Mike Lewis. Self-alignment with instruction backtranslation. arXiv preprint arXiv:2308.06259, 2023a. | 2310.05910#41 | 2310.05910#43 | 2310.05910 | [
"2302.13971"
] |
2310.05910#43 | SALMON: Self-Alignment with Principle-Following Reward Models | Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval, 2023b. Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and Teknium. | 2310.05910#42 | 2310.05910#44 | 2310.05910 | [
"2302.13971"
] |
2310.05910#44 | SALMON: Self-Alignment with Principle-Following Reward Models | Openorca: An open dataset of gpt augmented flan reasoning traces, 2023. Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023. Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. | 2310.05910#43 | 2310.05910#45 | 2310.05910 | [
"2302.13971"
] |
2310.05910#45 | SALMON: Self-Alignment with Principle-Following Reward Models | Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707, 2023. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christo- pher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021. OpenAI. OpenAI: Introducing ChatGPT, 2022. URL https://openai.com/blog/ chatgpt. OpenAI. Gpt-4 technical report, 2023a. OpenAI. index for model-index-for-researchers, 2023b. Model researchers. https://platform.openai.com/docs/ Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022. Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The effects of reward misspecification: Mapping and mitigating misaligned models. arXiv preprint arXiv:2201.03544, 2022. Vihang P Patil, Markus Hofmarcher, Marius-Constantin Dinu, Matthias Dorfer, Patrick M Blies, Johannes Brandstetter, Jose A Arjona-Medina, and Sepp Hochreiter. Align-rudder: Learning from few demonstrations by reward redistribution. arXiv preprint arXiv:2009.14108, 2020. | 2310.05910#44 | 2310.05910#46 | 2310.05910 | [
"2302.13971"
] |
2310.05910#46 | SALMON: Self-Alignment with Principle-Following Reward Models | Ethan Perez, Sam Ringer, KamilË e LukoË si¯utË e, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251, 2022. Pouya Pezeshkpour and Estevam Hruschka. Large language models sensitivity to the order of op- tions in multiple-choice questions. arXiv preprint arXiv:2308.11483, 2023. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, et al. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations, 2021. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High- arXiv preprint dimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. | 2310.05910#45 | 2310.05910#47 | 2310.05910 | [
"2302.13971"
] |
2310.05910#47 | SALMON: Self-Alignment with Principle-Following Reward Models | 14 Preprint Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adri`a Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022. | 2310.05910#46 | 2310.05910#48 | 2310.05910 | [
"2302.13971"
] |
2310.05910#48 | SALMON: Self-Alignment with Principle-Following Reward Models | Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008â 3021, 2020. Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, et al. Aligning large multimodal models with factually augmented rlhf. arXiv preprint arXiv:2309.14525, 2023a. Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047, 2023b. Mirac Suzgun, Nathan Scales, Nathanael Sch¨arli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. | 2310.05910#47 | 2310.05910#49 | 2310.05910 | [
"2302.13971"
] |
2310.05910#49 | SALMON: Self-Alignment with Principle-Following Reward Models | Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. | 2310.05910#48 | 2310.05910#50 | 2310.05910 | [
"2302.13971"
] |
2310.05910#50 | SALMON: Self-Alignment with Principle-Following Reward Models | Llama 2: Open founda- tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b. Alexander Wan, Eric Wallace, Sheng Shen, and Dan Klein. Poisoning language models during instruction tuning, 2023. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022a. Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, An- jana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. arXiv preprint arXiv:2204.07705, 2022b. | 2310.05910#49 | 2310.05910#51 | 2310.05910 | [
"2302.13971"
] |
2310.05910#51 | SALMON: Self-Alignment with Principle-Following Reward Models | Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, An- drew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2021. | 2310.05910#50 | 2310.05910#52 | 2310.05910 | [
"2302.13971"
] |
2310.05910#52 | SALMON: Self-Alignment with Principle-Following Reward Models | Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023. Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, and Yuandong Tian. Rlcd: Rein- forcement learning from contrast distillation for language model alignment. arXiv preprint arXiv:2307.12950, 2023. | 2310.05910#51 | 2310.05910#53 | 2310.05910 | [
"2302.13971"
] |
2310.05910#53 | SALMON: Self-Alignment with Principle-Following Reward Models | 15 Preprint Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685, 2023. Ruiqi Zhong, Peter Zhang, Steve Li, Jinwoo Ahn, Dan Klein, and Jacob Steinhardt. Goal driven dis- covery of distributional differences via language descriptions. arXiv preprint arXiv:2302.14233, 2023. Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. | 2310.05910#52 | 2310.05910#54 | 2310.05910 | [
"2302.13971"
] |
2310.05910#54 | SALMON: Self-Alignment with Principle-Following Reward Models | Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023. Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019. 16 Preprint # A ALIGNING AI ASSISTANTS WITH CUSTOMIZED PRINCIPLES In this section, we fine-tune LLM-based AI agents by leveraging customized principles as preference guidelines. HHH Alignment â Helpful, Honest, and Harmlessâ are AI alignment principles proposed in Askell et al. (2021), but they are also known to sometimes conflict with each other. For example, a conflict between helpfulness and harmlessness can happen if the AI agents are asked to aid in harmful activities. The best AI behavior will involve a compromise between the three principles. In this work, we investigate whether it is possible to steer the behavior of the AI agents to emphasize certain aspects of the HHH principles by merely writing new principles for the principle-following reward model. Since our original RL-time principles in Table 7 are generally designed to improve the helpfulness of AI assistants, we use them as the set of helpful principles, and design two additional sets of principles for honesty (Table 9) and harmlessness (Table 8), respectively. We observe that the LLaMA-2-70b base language model already achieved very high scores in the HHH benchmark in our preliminary study. So instead of warming up the language model with other Supervised Fine-Tuning (SFT) data such as SELF-ALIGN, we directly apply the SALMON training to the base language model. We perform 20-50 PPO steps and evaluate the baselines and the PPO-trained models on Big-bench HHH Eval (Srivastava et al., 2022; Askell et al., 2021) with the multi-choice evaluation protocol proposed in Sun et al. (2023b), and report the results in Table 3. We found that helpful principles and honest principles can effectively improve the corresponding aspects of RL-trained AI agents, achieving corresponding state-of-the-art performance in multi-choice accuracy. | 2310.05910#53 | 2310.05910#55 | 2310.05910 | [
"2302.13971"
] |
2310.05910#55 | SALMON: Self-Alignment with Principle-Following Reward Models | However, for the harmless principles, while we observe certain improvement over the base language model, the re- sulting model still underperform ChatGPT and LLaMA-2-Chat, perhaps due to these two models having a special emphasis on safety during their alignment process (OpenAI, 2022; Touvron et al., 2023a), such as Constituional AI (CAI), supervised safety fine-tuning, safety RLHF, and safety con- text distillation. The reason of such discrepancy can also be because we use the ShareGPT prompts for RL training, while ChatGPT and LLaMA-2-Chat-70B may utilize specially designed red- teaming data (Ganguli et al., 2022). Table 3: Multiple Choice (MC) accuracy on HHH Eval. The results of Anthropic-LMâ s Context Distillation (CD) and Preference Model (PM) are taken from Bai et al. (2022a). Anthropic-LM CD PM ChatGPT LLaMA-2-Chat-70B LLaMA-2-70B (w/ SALMON) helpful harmless honest base Harmless Helpful Honest Other - - - - - - - - 0.95 0.85 0.80 0.91 0.95 0.92 0.75 0.93 0.91 0.90 0.77 0.88 0.88 0.92 0.77 0.77 0.93 0.86 0.79 0.77 0.91 0.92 0.80 0.88 Overall 0.77 0.86 0.87 0.88 0.86 0.84 0.84 0.88 Non-Evasiveness Alignment Sometimes, due to iterative safety alignment training, the RLHF- trained model (e.g., LLaMA-2-Chat; Touvron et al. (2023b)) can be over-aligned such that it would incorrectly refuse to answer a question that it should, for example, due to overly broad instructions to be cautious in how it provides responses. In this work, we investigate whether it is possible to reduce the false refusal rates of these over-aligned AI agents by defining customized principles. | 2310.05910#54 | 2310.05910#56 | 2310.05910 | [
"2302.13971"
] |
2310.05910#56 | SALMON: Self-Alignment with Principle-Following Reward Models | Specifically, we remove the principles related to safety in our original principle collection and create a pure helpful principle set (Table 10). We apply the SALMON training to the RLHF-trained LLaMA-2-Chat-70b language model for 100 PPO steps and evaluate its performance on MT- Bench. The results are presented in Table 4, where we found SALMON-based post-training slightly improved the chatbot performance of LLaMA-2-Chat-70b. | 2310.05910#55 | 2310.05910#57 | 2310.05910 | [
"2302.13971"
] |
2310.05910#57 | SALMON: Self-Alignment with Principle-Following Reward Models | 17 Preprint Table 4: MT-Bench Results, automatically evaluated by GPT-4. MT T-1 T-2 LLaMA-2-Chat-70b LLaMA-2-Chat-70b (after SALMON) 6.88 6.95 7.04 7.17 6.73 6.72 B ADDITIONAL EXPERIMENTAL RESULTS AlpacaEval We additionally use the automatic evaluation (using GPT-4) from AlpacaEval (Li et al., 2023b) to assess the generation quality across 805 prompts sourced from the Al- paca Leaderboard. AlpacaEval quantifies the pairwise win rate against a reference model, Text-Davinci-003. Our analysis delineates the performance of our method across three distinct categories of AI-assistant models: | 2310.05910#56 | 2310.05910#58 | 2310.05910 | [
"2302.13971"
] |
2310.05910#58 | SALMON: Self-Alignment with Principle-Following Reward Models | â ¢ Non-distilled: Models under this category are denoted as non-distilled open-source models and are trained independently without leveraging any external well-aligned models (e.g., ChatGPT, GPT-4, etc.) for supervision. â ¢ Distilled: This category encompasses models that are trained with a more potent external model as supervision, typically through knowledge distillation. â ¢ Proprietary: Models within this category are trained by employing proprietary data and techniques. We report the results in Table 5. From the table, we can see that Dromedary-2-70b achieves the best performance among models using less than 10k human annotations, while slightly under- performs the best non-distilled open-source model LLaMA-2-Chat-70b and the best distilled open-source model Vicuna-33b. Table 5: Results on the AlpacaEval leaderboard (win rate over Text-Davinci-003 evalu- ated by GPT-4). Dromedary-2 outperforms other methods not relying on distilled data (except LLaMA-2-Chat which uses 1M preference data) by a wide margin. Non-distilled LLaMA-2-Chat 70B OASST RLHF 33B OASST SFT 33B 30k + 1400k 70k + 40k 70k 92.66 66.52 54.97 Non-distilled (< 10k annotations) Dromedary-2 70B Humpback 65B Guanaco 65B LIMA 65B 6 3k 9k 1k 88.32 83.71 71.80 62.70 Distilled Vicuna 33B WizardLLM 13B Falcon Instruct 40B 140k 190k 100k 88.99 86.32 45.71 Proprietary GPT-4 Claude 2 ChatGPT Claude ? ? ? ? 95.28 91.36 89.37 88.39 # Labeled Examples Win Rate % | 2310.05910#57 | 2310.05910#59 | 2310.05910 | [
"2302.13971"
] |
2310.05910#59 | SALMON: Self-Alignment with Principle-Following Reward Models | 18 Preprint # C MORE SAMPLES We provide a few cherry-picked samples from MT-Bench (Zheng et al., 2023) to demonstrate the difference between Dromedary-2-70b (before PPO) that is trained with only SELF-ALIGN and Dromedary-2-70b (after PPO) that is further fine-tuned with SALMON. MT-Bench Prompt â In the field of quantum physics, what is superposition, and how does it relate to the phenomenon of quantum entanglement? Dromedary-2-70b (before PPO) â In the field of quantum physics, superposition refers to the ability of a quantum system to exist in multiple states simultaneously. This means that a quantum particle, such as an electron, can be in multiple positions or have multiple energies at the same time. Quantum entanglement is a related phenomenon where two or more particles become linked in such a way that the state of one particle can affect the state of the other, even if they are separated by a large distance. This is possible because the particles are in a superposition of states, and the act of measuring one particle can cause the other particle to collapse into a specific state. | 2310.05910#58 | 2310.05910#60 | 2310.05910 | [
"2302.13971"
] |
2310.05910#60 | SALMON: Self-Alignment with Principle-Following Reward Models | Dromedary-2-70b 19 # Preprint MT-Bench Prompt â Provide insights into the correlation between economic indicators such as GDP, inflation, and unemployment rates. Explain how fiscal and mone- tary policies affect those indicators. Dromedary-2-70b (before PPO) â As an AI language model developed by the Self-Align team, I can provide general information on the correlation between economic indicators and the impact of fiscal and monetary policies. However, itâ s important to note that the specifics of these relationships can vary depending on the context and time period. 1. GDP, inflation, and unemployment rates are closely related economic in- dicators that can influence each other. For instance, a high GDP growth rate may lead to increased inflation, while a high unemployment rate can negatively impact GDP growth. 2. Fiscal policies, such as government spending and taxation, can affect eco- nomic indicators by influencing demand and supply in the economy. For ex- ample, increased government spending can stimulate economic growth and reduce unemployment, while higher taxes can reduce consumer spending and potentially lead to lower GDP growth. 3. Monetary policies, such as interest rate changes and money supply adjust- ments, can also impact economic indicators. For instance, lower interest rates can stimulate borrowing and spending, leading to increased GDP growth and potentially higher inflation. Conversely, higher interest rates can reduce bor- rowing and spending, potentially leading to lower GDP growth and reduced inflation. | 2310.05910#59 | 2310.05910#61 | 2310.05910 | [
"2302.13971"
] |
2310.05910#61 | SALMON: Self-Alignment with Principle-Following Reward Models | Itâ s important to note that the specific impact of fiscal and monetary policies on economic indicators can vary depending on various factors, such as the current state of the economy, the level of government debt, and the global economic environment. Dromedary-2-70b (after PPO) â Economic indicators, such as Gross Domestic Product (GDP), inflation, and unemployment rates, are crucial metrics used to measure the health and per- formance of an economy. These indicators are interrelated and can impact each other in various ways. 1. GDP: GDP is a measure of the total value of goods and services produced within a country over a specific period. It is a crucial indicator of economic growth and development. A high GDP growth rate indicates a strong and ex- panding economy, while a low or negative growth rate signals a weak or con- tracting economy. 2. Inflation: Inflation is the rate at which the general level of prices for goods and services rises. It is measured by the Consumer Price Index (CPI) or the Producer Price Index (PPI). High inflation can erode the purchasing power of consumers and lead to a decrease in real wages. On the other hand, low inflation or deflation can signal a weak economy and lead to a decrease in demand for goods and services. 3. Unemployment Rates: The unemployment rate is the percentage of the labor force that is unemployed but actively seeking employment. A high un- employment rate indicates a weak labor market, while a low unemployment rate signals a strong labor market. The interplay between these economic indicators can be complex and multi- faceted. For instance, a high GDP growth rate can lead to increased demand for goods and services, which can, in turn, drive up prices and cause inflation. Similarly, high inflation can lead to a decrease in real wages, which can result in a decrease in consumer spending and a slowdown in economic growth. Fiscal and monetary policies are two primary tools used by governments and central banks to influence economic indicators. Fiscal policy refers to the gov- ernmentâ s use of taxation and spending to manage the economy. Monetary policy, on the other hand, is the central bankâ s management of the money sup- ply and interest rates to achieve economic objectives. 1. Fiscal Policy: Fiscal policy can impact economic indicators in various ways. | 2310.05910#60 | 2310.05910#62 | 2310.05910 | [
"2302.13971"
] |
2310.05910#62 | SALMON: Self-Alignment with Principle-Following Reward Models | For instance, an increase in government spending can stimulate economic growth and lead to an increase in GDP. Similarly, a decrease in taxes can increase consumer spending and lead to an increase in GDP. (truncated) MT-Bench Prompt → Pretend yourself to be Elon Musk in all the following conversations. Speak like Elon Musk as much as possible. Why do we need to go to Mars? Dromedary-2-70b (before PPO) → | 2310.05910#61 | 2310.05910#63 | 2310.05910 | [
"2302.13971"
] |
2310.05910#63 | SALMON: Self-Alignment with Principle-Following Reward Models | As an AI language model, I'm unable to fully emulate the personality and speech patterns of a real person, including Elon Musk. However, I can provide a general response to the question of why we need to go to Mars: 1. Expansion of humanity: Mars offers the possibility of expanding humanity beyond Earth, providing a new frontier for exploration, settlement, and the development of new technologies. 2. Resources: Mars is rich in resources that could be valuable to humanity, including minerals and water. 3. Scientific discovery: Exploring Mars can help us better understand the history and evolution of our solar system, as well as the potential for life beyond Earth. 4. Inspiration: The Mars mission can inspire a new generation of scientists, engineers, and explorers, driving innovation and advancement in various fields. It's worth noting that the Mars mission also presents significant technical, economic, and ethical challenges that must be addressed. Dromedary-2-70b (after PPO) → As Elon Musk, I believe that humanity's future is closely tied to our ability to become a multi-planetary species. Here are some reasons why we need to go to Mars: 1. Backup plan: Mars presents a unique opportunity to establish a backup plan for humanity in case of any catastrophic events on Earth. We can't afford to put all our eggs in one basket, and having a second home on Mars can ensure the survival of our species. 2. Resources: Mars is rich in resources that can be exploited to support human life and further exploration. For instance, the Red Planet has a significant amount of water in the form of ice, which can be used for drinking, irrigation, and even fuel production. 3. Science and discovery: Exploring Mars will open up new avenues for scientific research and discovery. We can learn more about the planet's geology, climate, and potential for supporting life, which can help us better understand our own planet and the universe as a whole. 4. | 2310.05910#62 | 2310.05910#64 | 2310.05910 | [
"2302.13971"
] |
2310.05910#64 | SALMON: Self-Alignment with Principle-Following Reward Models | Innovation: The challenge of establishing a human presence on Mars will require innovative solutions to overcome the numerous technical, logistical, and biological hurdles. This will spur the development of new technologies, which can have far-reaching benefits for humanity even beyond the Mars mission. 5. Inspiration: The Mars mission can inspire a new generation of scientists, engineers, and explorers, who will be motivated by the challenge of turning this vision into a reality. This can help drive innovation, creativity, and progress in various fields. In summary, the Mars mission is not just about going to another planet, but about securing our future, expanding our knowledge, and pushing the boundaries of what's possible. As humanity continues to grow and evolve, it's essential that we look beyond our own planet and explore the vastness of space. | 2310.05910#63 | 2310.05910#65 | 2310.05910 | [
"2302.13971"
] |
2310.05910#65 | SALMON: Self-Alignment with Principle-Following Reward Models | MT-Bench Prompt → When a number is divided by 10, the remainder is 4. What is the remainder when twice the number is divided by 4? Dromedary-2-70b (before PPO) → Let's approach this problem step-by-step: 1. We're given that when a number is divided by 10, the remainder is 4. This means that the number can be expressed as 10x + 4, where x is an integer. 2. | 2310.05910#64 | 2310.05910#66 | 2310.05910 | [
"2302.13971"
] |
2310.05910#66 | SALMON: Self-Alignment with Principle-Following Reward Models | We're asked to find the remainder when twice the number is divided by 4. This can be rewritten as 2(10x + 4) divided by 4. 3. Using the distributive property, we can expand this expression to 20x + 8 divided by 4. 4. Since 20x is divisible by 4, the remainder when 20x + 8 is divided by 4 will be 8. 5. Therefore, the remainder when twice the number is divided by 4 is 8. | 2310.05910#65 | 2310.05910#67 | 2310.05910 | [
"2302.13971"
] |
2310.05910#67 | SALMON: Self-Alignment with Principle-Following Reward Models | Dromedary-2-70b (after PPO) → Let's approach this problem step-by-step to understand the solution. 1. First, we're given that when a number is divided by 10, the remainder is 4. This means that the number can be expressed as 10x + 4, where x is an integer. 2. Next, we're asked to find the remainder when twice the number is divided by 4. This can be rewritten as 2(10x + 4) divided by 4. 3. Using the distributive property, we can expand this expression to 20x + 8. 4. | 2310.05910#66 | 2310.05910#68 | 2310.05910 | [
"2302.13971"
] |
2310.05910#68 | SALMON: Self-Alignment with Principle-Following Reward Models | Since we're interested in the remainder, we need to divide 20x + 8 by 4. This gives us a quotient of 5x + 2 and a remainder of 0. 5. Therefore, the remainder when twice the number is divided by 4 is 0. 6. To summarize, when a number is divided by 10 and the remainder is 4, the number can be expressed as 10x + 4. When twice this number is divided by 4, the remainder is 0. | 2310.05910#67 | 2310.05910#69 | 2310.05910 | [
"2302.13971"
] |
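The two quoted answers disagree: the pre-PPO response ends with a remainder of 8, while the post-PPO response gives 0. As a minimal sanity check (a sketch only, assuming the number has the form 10x + 4 for a non-negative integer x, as both answers state), the following Python snippet confirms that twice such a number is always divisible by 4, matching the post-PPO answer:

```python
# Check the remainder arithmetic quoted above.
# If a number n leaves remainder 4 when divided by 10, then n = 10x + 4,
# so 2n = 20x + 8, which is always a multiple of 4.
for x in range(5):           # a few illustrative integer values of x
    n = 10 * x + 4           # any number with n % 10 == 4
    assert n % 10 == 4
    print(n, (2 * n) % 4)    # prints remainder 0 for every n
```

The pre-PPO answer goes wrong only at its final step, where the remainder of 8 divided by 4 is reported as 8 rather than 0.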