Dataset columns: id (string, length 12-15); title (string, length 8-162); content (string, length 1-17.6k); prechunk_id (string, length 0-15); postchunk_id (string, length 0-15); arxiv_id (string, length 10); references (sequence, length 1).
2311.04254#0
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
arXiv:2311.04254v2 [cs.AI] 12 Nov 2023

EVERYTHING OF THOUGHTS: DEFYING THE LAW OF PENROSE TRIANGLE FOR THOUGHT GENERATION

Ruomeng Ding1,2, Chaoyun Zhang1, Lu Wang1, Yong Xu1, Minghua Ma1, Wei Zhang3, Si Qin1, Saravan Rajmohan1, Qingwei Lin1 & Dongmei Zhang1
1Microsoft 2Georgia Institute of Technology 3East China Normal University

# ABSTRACT

Recent advancements in Large Language Models (LLMs) have revolutionized decision-making by breaking down complex problems into more manageable language sequences referred to as "thoughts". An effective thought design should consider three key perspectives: performance, efficiency, and flexibility. However, existing thought paradigms can at most exhibit two of these attributes. To address these limitations, we introduce a novel thought prompting approach called "Everything of Thoughts" (XOT) to defy the law of the "Penrose triangle" of existing thought paradigms. XOT leverages pretrained reinforcement learning and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge into thoughts, thereby enhancing LLMs' capabilities and enabling them to generalize to unseen problems efficiently. Through the utilization of the MCTS-LLM collaborative thought revision framework, this approach autonomously produces high-quality comprehensive cognitive mappings with minimal LLM interactions. Additionally, XOT empowers LLMs to engage in unconstrained thinking, allowing for flexible cognitive mappings for problems with multiple solutions. We evaluate XOT on several challenging problem-solving tasks, including the Game of 24, the 8-Puzzle, and the Pocket Cube. Our results demonstrate that XOT significantly outperforms existing approaches in various dimensions, showcasing its remarkable proficiency in addressing complex problems across diverse domains.
2311.04254#1
2311.04254
[ "1706.06708" ]
2311.04254#1
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
# 1 INTRODUCTION

Recent advancements in Large Language Models (LLMs) have greatly advanced problem solving in diverse domains such as mathematical reasoning Frieder et al. (2023), knowledge reasoning Omar et al. (2023), root cause analysis Chen et al. (2023), and causal inference Kıcıman et al. (2023), etc. This progress can be largely attributed to the technique of decomposing intricate problems into smaller language sequences referred to as
2311.04254#0
2311.04254#2
2311.04254
[ "1706.06708" ]
2311.04254#2
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
"thoughts". Through a step-by-step inference process involving the use of prompts, each thought functions as an intermediate stage, contributing to the simplification of tackling complex problems to fulfill the problem's ultimate objective.

Table 1: Comparisons of different prompting paradigms. Columns: Paradigm, Performance, Efficiency, Flexibility. Rows: IO, CoT, CoT-SC, ToT, GoT, XOT.

Effective design of thought steps toward complex problem-solving and reasoning, whether for humans or LLMs, should prioritize three crucial aspects, namely:
2311.04254#1
2311.04254#3
2311.04254
[ "1706.06708" ]
2311.04254#3
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
• Performance. Performance is the accuracy of the solution to a problem, including the precision of each thought at intermediate stages. This metric holds paramount importance for problem-solving.

• Efficiency. Efficiency relates to the number of LLM inference calls required to solve a single problem. Minimizing this aspect is crucial due to the high computational cost associated with LLM inference, thereby reducing the overall cost.

• Flexibility. Flexibility in thought topology refers to the diverse structures that can be employed by LLMs when organizing thoughts for problem-solving. These structures may include chains, trees, or even graphs, mirroring human thought processes. Enabling more flexible thought structures enhances the capacity of LLMs for divergent and creative thinking, which is particularly advantageous in addressing complex problems, especially those with multiple potential solutions.

There exist several thought generation paradigms, such as Chain-of-Thought (CoT) Wei et al. (2022), Tree-of-Thought (ToT) Yao et al. (2023), and Graph-of-Thought (GoT) Besta et al. (2023), among others. However, these paradigms each have their limitations and cannot simultaneously achieve all three desired attributes, as illustrated in Table 1. Specifically, direct Input-Output (IO) prompting is suitable primarily for simple problem-solving scenarios with single-step processes, lacking both performance and flexibility. CoT and self-consistency CoT (CoT-SC) enable step-by-step problem solving, resulting in modest performance improvements, but they are confined to linear thought structures, limiting their flexibility. In contrast, ToT and GoT permit more versatile thought topologies, accommodating tree-like or graph-like structures. However, these paradigms require the evaluation of intermediate thought steps by the LLM itself, incurring significant computational costs and inefficiencies due to multiple LLM calls. These paradigms are constrained by a law analogous to the "Penrose triangle", wherein they can achieve a maximum of two out of the three attributes, and none of them can attain all three simultaneously. We propose a novel solution called "Everything of Thoughts"
2311.04254#2
2311.04254#4
2311.04254
[ "1706.06708" ]
2311.04254#4
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
(XOT) to address the limitations of conventional thought frameworks, enhancing essential attributes of thought generation, including performance, efficiency, and flexibility for LLM inference.1 XOT leverages reinforcement learning (RL) Li (2017) and Monte Carlo Tree Search (MCTS) Silver et al. (2017), in conjunction with lightweight policy and value networks, to pretrain on specific tasks for thought searching and subsequently generalize to new problems. This pretraining effectively integrates external domain knowledge into the "thoughts" provided to LLMs, expanding their problem-solving capabilities and thereby significantly improving Performance. Once trained, XOT efficiently performs thought searching using MCTS with cost-effective policy and value networks for exploration, and autonomously generates complete cognitive mappings for LLMs. It then employs an MCTS-LLM collaborative thought revision process to further improve the thought quality while minimizing LLM interactions. This eliminates the need for LLMs to explore and evaluate thoughts themselves, as required by ToT and GoT, enhancing XOT's
2311.04254#3
2311.04254#5
2311.04254
[ "1706.06708" ]
2311.04254#5
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Efficiency. Furthermore, MCTS demonstrates remarkable Flexibility, as it can explore various thought topologies, including graph structures akin to those employed in human mind mapping processes Faste & Lin (2012); Jamieson (2012). This enables diverse and creative thinking for LLMs, making it particularly valuable when dealing with complex thought structures or tasks featuring multiple potential solutions. By concurrently achieving superior performance, efficiency, and flexibility, XOT challenges the constraints posed by the
2311.04254#4
2311.04254#6
2311.04254
[ "1706.06708" ]
2311.04254#6
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
"Penrose triangle". We comprehensively evaluate XOT across a diverse range of challenging problem-solving tasks, namely the Game of 24, the 8-Puzzle, and the Pocket Cube. Our experimental results consistently showcase XOT's superior performance and its capacity to provide multiple solutions to problems efficiently with just a few LLM calls. These findings establish XOT as an effective thought generation approach, paving the way for new avenues in LLMs' problem-solving capabilities.

# 2 BACKGROUND

Thought for LLMs.
2311.04254#5
2311.04254#7
2311.04254
[ "1706.06708" ]
2311.04254#7
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Addressing complex problems often entails breaking down the overarching objective into multiple intermediary steps. The outcomes or cognitive processes associated with each step are thoughts, which can be expressed as linguistic prompt sequences for LLMs to facilitate problem-solving. Structures of these thoughts may take various forms, including linear chains, hierarchical trees, or interconnected graphs, depending on how the thoughts are organized to advance towards a solution.

1We named it "Everything of Thoughts" to signify its three comprehensive thought generation capabilities.

Figure 1: Comparison of XOT versus other prompting paradigms.

Input-Output (IO) Prompting (Fig. 1 (a)). The IO method is the most straightforward approach to instruct LLMs to address a problem without the provision of any intermediate thought processes.

Chain-of-thought (CoT) Wei et al. (2022) (Fig. 1 (b)). CoT decomposes problem-solving into a sequential chain of thoughts, allowing LLMs to approach complex problems step by step.

Self-consistency CoT (CoT-SC) Wang et al. (2023a) (Fig. 1 (c)). CoT-SC employs multiple instances of CoT to generate multiple outputs from LLMs. It selects the best result from the multiple LLM outputs, offering more robust and consistent inference compared to the vanilla CoT.

Tree-of-thought (ToT) Yao et al. (2023) (Fig. 1 (d)). ToT organizes thoughts in a tree-like structure and utilizes search algorithms (e.g., Breadth-First Search, Depth-First Search) to expand the tree in pursuit of an optimal solution. However, thought evaluation in ToT relies on LLMs themselves, necessitating multiple costly and inefficient LLM inference calls.

Graph-of-thought (GoT) Besta et al. (2023) (Fig. 1 (e)). GoT extends the ToT approach by enabling the generation of graph-like thought structures through thought aggregation and refinement during intermediate search phases.
2311.04254#6
2311.04254#8
2311.04254
[ "1706.06708" ]
2311.04254#8
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Although this method permits more flexible thought structures, it still demands multiple LLM inference calls for evaluation, incurring significant computational costs.

# 3 XOT: EVERYTHING OF THOUGHTS

XOT serves as an LLM-MCTS collaborative framework designed to enhance the thought generation process, thereby assisting LLMs in resolving complex problems. It leverages MCTS for proficient and efficient thought exploration while harnessing the capabilities of LLMs to refine and amend the thoughts derived from MCTS. This synergistic interaction creates a mutually beneficial arrangement, ultimately enabling the successful resolution of intricate problems characterized by high levels of performance, efficiency, and flexibility.

3.1 XOT IN A NUTSHELL

We present an overview of the architecture of XOT in Fig. 1 (f). XOT comprises two key components: (i) an MCTS module guided by policy/value networks; and (ii) an LLM solver for thought revision and inference. The MCTS module and policy/value networks are first trained and then generalize to new problems during the inference process.

During the training phase, MCTS is harnessed to explore potential thought structures for a specific task through simulated scenarios. This process entails the recording of states, values, and the visitation frequencies of thought nodes in each simulation. These recorded data are subsequently employed to iteratively train the policy and value estimation model, enabling it to assimilate domain knowledge and comprehend the world model.

Once trained, the estimated policy and value are utilized to guide the MCTS to systematically search for a thought trajectory provided to aid LLMs in problem-solving. Note that the extracted thoughts only play a supporting role, assisting LLMs in gathering knowledge from external sources. These thoughts do not provide LLMs with definitive or error-free answers, as they may contain inaccuracies or suboptimal solutions. LLMs are responsible for reviewing and refining these thoughts when they seem erroneous or require adjustments. They continue the MCTS search process if needed
2311.04254#7
2311.04254#9
2311.04254
[ "1706.06708" ]
2311.04254#9
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
and eventually formulate the final answers by integrating these external thoughts with their internal knowledge.

Figure 2: An illustration of iterative phases in MCTS for thought searching ((a) Select, (b) Expand & Evaluate, (c) Backpropagation) and thought inference in problem resolution ((d)).

3.2 THOUGHT SEARCHING FORMULATION

The fundamental objective of employing the thought generation paradigm for LLMs is to identify the optimal decomposition of a complex problem into several manageable sub-steps. Each sub-step aims to alter the current status of the problem, eventually culminating in the successful resolution of the overarching problem. This approach, as seen in ToT and GoT, hinges on well-defined state transitions and clear final objectives. Consequently, it is natural to conceptualize the thought-searching process as a Markov Decision Process (MDP) Puterman (1990), in which:
2311.04254#8
2311.04254#10
2311.04254
[ "1706.06708" ]
2311.04254#10
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
• State s_t: Represents the current status of the problem. The initial state s_0 corresponds to the original problem, while intermediate states are characterized by either decomposed sub-problems or the results stemming from their resolution.

• Action a_t: Signifies the one-step solution or action associated with tackling a problem, leading to a transition to a new state by incorporating their outcomes.

• Reward r: Reflects the comprehensive evaluation of the solution to the original problem, assessing whether it has been effectively resolved through the process of problem decomposition.

• Thought τ: A one-step thought is a combination of a one-step state and action, i.e., τ = {s, a}.

This formulation naturally encapsulates the process of decomposing a complex problem into multiple sub-tasks, each accompanied by their respective outcomes. The detailed definitions of state, action, reward, and thought for each task are shown in Table 2. The generation of complete thoughts T = {τ_1, ..., τ_N} can be construed as the endeavor to discover a thought trajectory that maximizes the accumulated reward to address the overall problem.

3.3 THOUGHT SEARCHING WITH MCTS

The formulation above naturally aligns the thought within an LLM with a state-action pair. This facilitates the effective exploration of its optimal trajectory using a combination of MCTS and RL. The search adheres to an iterative simulation cycle that encompasses three key phases: selection, expansion & evaluation, and backpropagation. It heavily depends on the utilization of neural networks fθ, which simultaneously estimate the value and action probability for a given state s_t. The aim is to reduce the number of rollouts and accelerate the search process, similar to the approach employed in AlphaGo Zero Silver et al. (2017). We provide a visual representation of an iteration of the MCTS in Fig. 2 (a)-(c) by taking the Pocket Cube as an example, and detail each process below.

Selection. In the selection phase, the algorithm initiates at the root node and proceeds to choose an action a* from the available set A(s) for single-step thought generation in the current state s. This process continues until a leaf node within the current tree is reached. The selection is guided by the PUCT algorithm Rosin (2011), aiming to maximize the Upper Confidence Bound (UCB) Garivier & Moulines (2011), as follows:
2311.04254#9
2311.04254#11
2311.04254
[ "1706.06708" ]
2311.04254#11
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
a* = argmax_{a ∈ A(s)} [ Q(s, a) + w · Pθ(s, a) · √N(s) / (1 + N(s, a)) ].   (1)

Here, Q(s, a) denotes the Q-value of a state-action pair (s, a). The term Pθ(s, a) denotes the predicted prior probability of selecting action a given the state s, obtained from a neural network fθ, and N(s, a) represents the count of times action a has been chosen in state s. The parameter w controls the trade-off between exploration and exploitation. The selection process continues until an unexplored node is encountered.

Evaluation and Expansion. Upon reaching a previously unselected leaf node, we expand to the state s for the next step of new thought exploration. This expansion involves the evaluation of its value and action probability on the state, which are modeled by neural networks parameterized by θ, i.e., (Pθ(s), vθ(s)) = fθ(s). Here Pθ(s) gives the prior probabilities for all actions on s, and vθ(s) denotes its predicted state value. These two values are retained and stored for backup purposes, and state s is marked as "visited".
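As a rough illustration of the selection and expansion phases, the sketch below implements the PUCT rule of Eq. (1) with dictionary-backed statistics. The `f_theta(s)` callable, the `priors`/`Q`/`N` containers, and the default `w=1.0` are assumptions of this sketch rather than details taken from the paper.

```python
import math

def select_action(s, actions, Q, N, priors, w=1.0):
    """PUCT rule of Eq. (1): maximize Q(s,a) + w * P_theta(s,a) * sqrt(N(s)) / (1 + N(s,a))."""
    n_s = sum(N.get((s, a), 0) for a in actions)  # N(s): total visits of state s
    def puct(a):
        u = w * priors[s][a] * math.sqrt(n_s) / (1 + N.get((s, a), 0))
        return Q.get((s, a), 0.0) + u
    return max(actions, key=puct)

def expand_and_evaluate(s, f_theta, priors, visited):
    """Expansion & evaluation: one call to f_theta yields the action priors
    P_theta(s) and the state value v_theta(s); the state is marked as visited."""
    p_s, v_s = f_theta(s)  # p_s: dict mapping each legal action to its prior probability
    priors[s] = p_s
    visited.add(s)
    return v_s
```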
2311.04254#10
2311.04254#12
2311.04254
[ "1706.06708" ]
2311.04254#12
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Backpropagation. Following the expansion of a leaf node in the above phases, which could be either an unexplored or a terminal state, the algorithm proceeds to update all the Q(s, a) values via backpropagation. For unexplored nodes, this update involves computing the mean of the estimated value vθ, while for terminated nodes, it is based on the true reward r. These updates occur as information is backpropagated along the trajectory to preceding nodes. Additionally, the visit count for each state-action pair is also incremented, i.e., N(s, a) = N(s, a) + 1. A simulation is completed after a sequence of selection, evaluation, expansion, and backpropagation steps. After conducting multiple simulations, we proceed to the next step by selecting an action at state s using a probability distribution defined as ε_a ∝ N(s, a)^(1/γ), where γ is a temperature constant that regulates the level of exploration.
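A hedged sketch of the backpropagation update and of the visit-count-based action choice ε_a ∝ N(s, a)^(1/γ) described above; the incremental-mean form of the Q update is an implementation assumption of this sketch.

```python
import random

def backpropagate(path, leaf_value, Q, N):
    """path: list of (state, action) pairs traversed in this simulation.
    leaf_value is v_theta(s) for an unexplored leaf, or the true reward r for a
    terminal state; Q(s,a) is updated as a running mean and N(s,a) is incremented."""
    for s, a in path:
        n = N.get((s, a), 0)
        Q[(s, a)] = (Q.get((s, a), 0.0) * n + leaf_value) / (n + 1)
        N[(s, a)] = n + 1

def choose_next_action(s, actions, N, gamma=1.0):
    """Sample the next action with probability epsilon_a proportional to N(s,a)^(1/gamma)."""
    weights = [N.get((s, a), 0) ** (1.0 / gamma) for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]
```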
2311.04254#11
2311.04254#13
2311.04254
[ "1706.06708" ]
2311.04254#13
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Policy and Value Networks Training. The simulations described above allow us to compile a dataset for each sampled state s containing (s, ε(s), v(s)), where ε(s) = {ε_a | a ∈ A(s)}, and v(s) represents the ground-truth value obtained by accumulating rewards along the trajectory starting from state s. Subsequently, we can train a combined policy and value network fθ to minimize the discrepancy between the predicted value vθ(s) and the actual value v(s), while also maximizing the alignment between the action probabilities produced by the neural network Pθ(s) and the search probabilities ε(s). This can be achieved by minimizing the following loss function:

L = (v(s) - vθ(s))^2 - ε(s)^T log Pθ(s).

This training iterates alongside the simulation process to continually enhance the performance of fθ, resulting in progressive improvements in thought-searching capabilities.

3.4 THOUGHT INFERENCE WITH MCTS

Once trained, we utilize fθ to guide the MCTS in generating a thought for a new problem, which assists the LLM in solving it. Specifically, MCTS is utilized to perform K simulations aimed at thought searching and problem-solving, as illustrated in Fig. 2 (d). In each simulation, fθ is employed to guide the MCTS in its search for a thought trajectory. Throughout the training process, fθ incorporates external information related to the state and action quality. This information helps LLMs understand the world model, enhancing their long-term reasoning and planning abilities, which are areas they may not excel in Stechly et al. (2023); Valmeekam et al. (2023), thereby ensuring the performance of thought generation. Once the simulation concludes, we record the visit count N(s, a), and the thought trajectory is obtained based on the number of solutions required:
2311.04254#12
2311.04254#14
2311.04254
[ "1706.06708" ]
2311.04254#14
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
• Single solution. Starting from each state s, the action with the highest visit count N(s, a) is selected.

• Multiple solutions. We sample M thought trajectories following the probability distribution ε_a ∝ N(s, a) and remove duplicates.

This results in one or multiple thought trajectories T* that consist of a sequence of state-action pairs for problem-solving. The trajectories for multi-solution problems may intertwine and converge at the same goal state, resulting in a graph-like thought structure. This demonstrates that XOT is capable of generating thought structures with flexibility. These trajectories are then transformed into text sequences that are concatenated to form a prompt sequence provided to LLMs. Note that the thought trajectories are concatenated into a single prompt, even in the case of problems with multiple solutions. Therefore, we only require a single LLM inference call at this stage. Given that the fθ network is relatively lightweight, this ensures the efficiency of XOT.
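The two extraction modes above can be sketched as follows; `legal_actions`, `transition`, and `is_terminal` stand for the task-specific environment functions, and the oversampling loop used for deduplication is an assumption of this sketch.

```python
import random

def extract_single_trajectory(root, legal_actions, transition, is_terminal, N):
    """Single solution: from each state, follow the most-visited action."""
    trajectory, s = [], root
    while not is_terminal(s):
        a = max(legal_actions(s), key=lambda act: N.get((s, act), 0))
        trajectory.append((s, a))
        s = transition(s, a)
    return trajectory

def extract_multiple_trajectories(root, legal_actions, transition, is_terminal, N, m=3):
    """Multiple solutions: sample trajectories with probability proportional to
    N(s,a), then deduplicate and keep at most m of them."""
    distinct = set()
    for _ in range(20 * m):  # oversample, then keep distinct trajectories
        trajectory, s = [], root
        while not is_terminal(s):
            acts = list(legal_actions(s))
            weights = [N.get((s, a), 0) + 1e-8 for a in acts]
            a = random.choices(acts, weights=weights, k=1)[0]
            trajectory.append((s, a))
            s = transition(s, a)
        distinct.add(tuple(trajectory))
        if len(distinct) >= m:
            break
    return [list(t) for t in distinct]
```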
2311.04254#13
2311.04254#15
2311.04254
[ "1706.06708" ]
2311.04254#15
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Figure 3: An illustration of the thought revision process in XOT.

Thought Revision. It is important to acknowledge that MCTS may not always provide the globally optimal thought trajectory to directly solve the problem flawlessly. Therefore, the thoughts extracted from MCTS serve as a reference thinking process for the problem, aiding LLMs in a supportive capacity. The LLMs will leverage their internal knowledge to review the extracted thought, identify errors in the thought trajectory, and then ground their knowledge in collaboration with the MCTS to revise and refine the thought.

The revision process is iterative in nature, as shown in Fig. 3. Initially, upon obtaining the extracted thought, we instruct the LLM to detect any errors in the thought generated by MCTS using its internal knowledge. If the LLM identifies an error, it results in an error state denoted as s_e within the thought. If no error is found, the thought remains unchanged. Starting from the parent state of s_e, MCTS conducts an additional set of L simulations, ultimately yielding a revised thought for the LLM. In scenarios involving multiple solutions, each solution undergoes this process individually. Upon the completion of the revision, we supply the LLMs with the revised thoughts for problem-solving. The revision process can be repeated several times to enhance the reliability of the answer. This collaborative MCTS-LLM framework nurtures a mutually beneficial process for both components, ultimately contributing to the overall performance of problem-solving. Since LLMs are solely utilized for identifying errors during the revision process, with only one call, the efficiency of XOT is effectively maintained.

The collaborative revision framework harnesses the strengths of both MCTS and LLMs. MCTS efficiently and flexibly generates candidate thoughts for LLMs through simulations, while LLMs use their internal knowledge to revise and ground these thoughts within the MCTS framework, effectively turning MCTS into a world model for LLMs. This process ensures the generation of high-quality thoughts for problem-solving.
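One round of the MCTS-LLM revision loop might look like the following sketch; `llm_locate_error` (a single LLM call returning the index of the error state s_e, or None) and `mcts_search_from` (L additional simulations from a given state) are hypothetical stand-ins, not the authors' implementation.

```python
def revise_thought(trajectory, llm_locate_error, mcts_search_from, max_rounds=1):
    """trajectory: list of (state, action) thought steps extracted from MCTS."""
    for _ in range(max_rounds):
        e = llm_locate_error(trajectory)  # index of the error state s_e, or None
        if e is None:
            break  # the LLM found no error; keep the thought unchanged
        # Restart the search from the parent state of s_e and splice in the
        # newly generated suffix.
        start = max(e - 1, 0)
        parent_state = trajectory[start][0]
        trajectory = trajectory[:start] + mcts_search_from(parent_state)
    return trajectory
```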
2311.04254#14
2311.04254#16
2311.04254
[ "1706.06708" ]
2311.04254#16
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
# 4 EXPERIMENT

We conduct an extensive evaluation of our XOT approach2 in comparison to several baseline methods across three challenging tasks: the Game of 24, the 8-Puzzle (with a 3 × 3 grid), and the 2 × 2 Pocket Cube. An overview of these tasks is provided in Table 2. These tasks are characterized by their complexity, requiring multiple steps for completion and potentially having multiple solutions. To assess the effectiveness of our proposed XOT, we compare it against the IO, CoT, CoT-SC, ToT, and GoT methodologies. We employ both GPT-3.5 Ouyang et al. (2022) and GPT-4 OpenAI (2023) for these evaluations. Note that temperature and top_p are set to 0.0 for all LLM invocations.

2Code and dataset to reproduce this work will be shared in the near future, following compliance with the affiliation policy.
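For reference, deterministic decoding of the kind described above (temperature and top_p both set to 0.0) corresponds to a call like the sketch below, written against the openai>=1.0 Python SDK; the model name and prompt contents are placeholders, not the paper's prompts.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def call_llm(system_msg: str, user_msg: str, model: str = "gpt-4") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
        temperature=0.0,  # greedy, reproducible decoding
        top_p=0.0,
    )
    return response.choices[0].message.content
```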
2311.04254#15
2311.04254#17
2311.04254
[ "1706.06708" ]
2311.04254#17
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Table 2: An overview of tasks employed in this study.

Game of 24
- Objective: Use four numbers on playing cards to make the number 24 through +, −, ×, or ÷.
- Input: 4 numbers ranging from 1 to 13, e.g., (4, 6, 10, 10).
- Output: An equation to reach 24, e.g., 4 × 6 + 10 − 10 = 24.
- Thought: 3 intermediate equations.
- State: The remaining 1-4 numbers.
- Action: Picking two numbers and an operation to compose an equation.

8-Puzzle
- Objective: Rearrange the tiles in the 3 × 3 puzzle from a scrambled state to a goal state.
- Input: A scrambled 3 × 3 digital puzzle.
- Output: The slide sequence of the "-" tile, e.g., (Up, Down, Left, Right, ...).
- Thought: The step-by-step sliding, and the puzzle state after each move.
- State: The current number layout of the puzzle.
- Action: The one-step moving action of the "-" tile.

Pocket Cube
- Objective: Rotate the faces of a 2 × 2 pocket cube until each face of the cube is a uniform color.
- Input: A scrambled 2 × 2 pocket cube, with colors represented as numbers for LLMs.
- Output: The rotation move sequence of the cube, e.g., (F, R2, U', ...).
- Thought: The step-by-step rotation, and the cube state after each move.
- State: Colors of each face of the pocket cube.
- Action: The one-step rotation action of the cube.
2311.04254#16
2311.04254#18
2311.04254
[ "1706.06708" ]
2311.04254#18
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Reward (Table 2, continued):
- Game of 24: 1 if the final number equals 24, otherwise -1.
- 8-Puzzle: The negative of the minimum number of steps needed to solve the current puzzle state toward the goal state.
- Pocket Cube: The negative of the minimum number of moves needed to solve the current cube state toward the goal state.

Policy/Value Networks Configurations. The policy and value networks in our model utilize a shared multi-layer perceptron (MLP) architecture with two layers and hidden units arranged as (128, 256). Two heads connected to the MLP are responsible for predicting vθ(s) and Pθ(s) separately. This design results in a considerably smaller model compared to an LLM, making it much more efficient. We train this model through three iterations, with each iteration comprising 10 self-play episodes for MCTS.

Evaluation Metric. For each task, we assess the accuracy of each approach on the test set. Additionally, we track the number of LLM invocations required for all approaches to solve a problem, as well as the number of times fθ is invoked in the case of XOT.
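A hedged PyTorch sketch of the shared two-layer MLP with (128, 256) hidden units and separate policy/value heads described above, trained with the joint objective from Section 3.3; the state-encoding dimension and action-space size are task-dependent placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyValueNet(nn.Module):
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.policy_head = nn.Linear(256, num_actions)  # logits for P_theta(s)
        self.value_head = nn.Linear(256, 1)             # scalar v_theta(s)

    def forward(self, state):
        h = self.body(state)
        return self.policy_head(h), self.value_head(h).squeeze(-1)

def policy_value_loss(policy_logits, value_pred, search_probs, value_target):
    """L = (v(s) - v_theta(s))^2 - eps(s)^T log P_theta(s), averaged over the batch."""
    value_loss = F.mse_loss(value_pred, value_target)
    policy_loss = -(search_probs * F.log_softmax(policy_logits, dim=-1)).sum(dim=-1).mean()
    return value_loss + policy_loss
```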
2311.04254#17
2311.04254#19
2311.04254
[ "1706.06708" ]
2311.04254#19
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
It is important to note that fθ is a considerably smaller model compared to LLMs. In the context of multi-solution scenarios, accuracy is computed as the percentage of problems for which any of the answers provided by each approach is correct. Multi-solution Accuracy (MultiAcc) is calculated as the average percentage of correctness across all solutions offered. Furthermore, we capture the total count of distinct solutions provided by each approach, regardless of their correctness, represented as #Sol. Note that we set the maximum solution number to 3 for all problems in multi-solution scenarios.

4.1 GAME OF 24

The Game of 24 presents an arithmetic challenge wherein the goal is to employ four numbers within the range of 1 to 13, in conjunction with basic arithmetic operations (i.e., +, −, ×, ÷), to attain a final result of 24. This game may possess multiple valid solutions.

# 4.1.1 TASK SETUP

We collect a dataset from 4nums.com, comprising 1,362 games ranked by human solving time, spanning a range of difficulty levels from easy to hard. For our testing phase, we randomly selected 137 games, ensuring coverage of various difficulty intervals. The remaining 1,225 problems were used to train the policy/value networks with MCTS.
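The single- and multi-solution metrics defined above (Acc, MultiAcc, #Sol) can be computed as in the following sketch, where each test problem contributes a list of per-solution correctness flags; this data layout is an assumption of the sketch.

```python
def evaluate(per_problem_flags):
    """per_problem_flags: one entry per test problem, each entry being the list of
    booleans marking which of its (up to 3) proposed solutions are correct."""
    n = len(per_problem_flags)
    acc = 100.0 * sum(any(flags) for flags in per_problem_flags) / n
    multi_acc = 100.0 * sum(sum(flags) / len(flags) for flags in per_problem_flags) / n
    num_sol = sum(len(flags) for flags in per_problem_flags) / n
    return {"Acc. [%]": acc, "MultiAcc": multi_acc, "#Sol": num_sol}
```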
2311.04254#18
2311.04254#20
2311.04254
[ "1706.06708" ]
2311.04254#20
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
In the context of this task, as outlined in Table 2, the thoughts refer to the three intermediate equations, while the state encompasses the available numbers (ranging from 1 to 4) for creating the equations. Actions involve the selection of two numbers and an operator to form an equation, and the reward is set to 1 if the final equation is both valid and results in the number 24, utilizing each of the input numbers exactly once; otherwise it is set to -1. Performance is measured by calculating the success rate across the 137 test games.

Table 3: Performance comparison on Game of 24.

Model            | GPT-3.5 Acc. [%] | GPT-3.5 LLM invoked | GPT-3.5 fθ invoked | GPT-4 Acc. [%] | GPT-4 LLM invoked | GPT-4 fθ invoked
IO               | 6.57  | 1.00  | -     | 10.22 | 1.00  | -
CoT              | 2.19  | 1.00  | -     | 4.38  | 1.00  | -
CoT-SC (n=10)    | 2.19  | 10.00 | -     | 4.38  | 10.00 | -
ToT (b=1)        | 5.84  | 22.11 | -     | 34.31 | 23.50 | -
ToT (b=3)        | 10.22 | 43.96 | -     | 60.58 | 39.83 | -
GoT (k=1)        | 2.92  | 7.00  | -     | 10.95 | 7.00  | -
XoT (w/o revise) | 61.31 | 1.00  | 68.73 | 63.50 | 1.00  | 68.69
XoT (w/ revise)  | 79.56 | 1.39  | 92.15 | 74.45 | 1.38  | 88.20
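To make the state, action, and reward definitions for the Game of 24 concrete, here is a minimal sketch; trying only one operand order per number pair and using an exact-match tolerance are simplifications of this sketch, not details from the paper.

```python
from itertools import combinations

def legal_actions(state):
    """state: tuple of the 1-4 remaining numbers. An action picks two numbers
    and an operator to form one intermediate equation."""
    for a, b in combinations(state, 2):
        for op in ("+", "-", "*", "/"):
            if op == "/" and b == 0:
                continue
            yield (a, b, op)

def transition(state, action):
    a, b, op = action
    result = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]
    rest = list(state)
    rest.remove(a)
    rest.remove(b)
    return tuple(rest + [result])

def reward(state):
    """+1 if a single number equal to 24 remains, otherwise -1."""
    return 1 if len(state) == 1 and abs(state[0] - 24) < 1e-6 else -1
```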
2311.04254#19
2311.04254#21
2311.04254
[ "1706.06708" ]
2311.04254#21
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
4.1.2 BASELINES & XOT SETUP

The IO prompt is supported by five in-context examples. In the case of CoT, we augment each input-output pair by including three intermediate equations. As for ToT, we solicit one-step thought candidates from the LLM at each step, subsequently instructing the LLM to categorize each thought candidate for intermediate selection. For experimental comparison, we conduct experiments with both the top-1 candidate (b=1) and the top-3 candidates (b=3) retained, where b indicates the number of branches retained for exploration at each step. For GoT, we employ the LLM to generate one-step thought candidates in the same manner as ToT, and then direct the LLM to select the top-1 thought from all candidates for merging the thoughts. We also examine a CoT-SC baseline, which derives the majority output from 10 CoT samples. For XOT, we perform 200 simulations for each action taken, and this count is increased to 500 during the thought revision process.

In the multi-solution scenario, the IO, CoT, and CoT-SC prompts each include 5 examples, with each problem having 1 to 3 different solutions. For ToT, the top-3 candidates (b=3) at the final step are considered as different solutions. Rather than keeping only the top-1 thought, GoT is instructed to select between 1 and 3 thoughts from all candidates at each step to generate a wider range of solutions. As for XOT, after performing simulations with MCTS, we sample 500 thought trajectories for exploration and remove duplicates. The top-3 thoughts with the highest counts are preserved.

4.1.3 RESULTS

Table 3 displays the overall performance of all methods on this task. Notably, XOT consistently outperforms the other baselines on both GPT-3.5 and GPT-4, achieving accuracies of 61.31% and 63.50%, respectively, without revision. After the revision process, XOT's accuracy substantially improves to 79.56% and 74.45% for GPT-3.5 and GPT-4, respectively.
2311.04254#20
2311.04254#22
2311.04254
[ "1706.06708" ]
2311.04254#22
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
This underscores the impressive performance of XOT and demonstrates that the revision process significantly enhances performance, with only a limited increase in the utilization of the LLM and fθ. Interestingly, the revision process in XOT mitigates the performance gap attributable to modeling ability in this task, as we observe that XOT with GPT-3.5 achieves higher accuracy after revision compared to GPT-4. On the other hand, the best-performing baseline, ToT (b=3) on GPT-4, attains an accuracy of 60.58%. However, it demands a substantial number of LLM invocations (39.83 on average), which results in inefficiency. In contrast, XOT exhibits a significant advantage in terms of the average number of LLM invocations. It requires only a single LLM inference call without revision and fewer than 1.4 calls with revision. Although XOT requires some inference calls to fθ, that model is significantly less complex than an LLM, making XOT a much more efficient approach.

Table 4 presents the performance of the GPT-3.5 and GPT-4 models across different methods in the multi-solution scenario. Overall, XOT remains the best-performing approach in terms of accuracy and MultiAcc, significantly outperforming other baselines. Its GPT-4 version can even achieve over
2311.04254#21
2311.04254#23
2311.04254
[ "1706.06708" ]
2311.04254#23
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
90% accuracy. Although XOT does not generate the largest number of answers compared to other baselines, it generates more accurate answers, as its MultiAcc significantly outperforms that of other approaches. Notably, generating multiple solutions does not significantly increase XOT's complexity, as it only requires 2.31 LLM calls with GPT-4 and around 100 calls to the much smaller fθ, so it remains efficient. Overall, the remarkable performance of XOT in the multi-solution scenario demonstrates its ability to generate complex thoughts, making it a flexible approach.

Table 4: Performance comparison on Game of 24 in the multi-solution scenario.

Model            | GPT-3.5 Acc. | GPT-3.5 MultiAcc | GPT-3.5 #Sol | GPT-3.5 LLM invoked | GPT-3.5 fθ invoked | GPT-4 Acc. | GPT-4 MultiAcc | GPT-4 #Sol | GPT-4 LLM invoked | GPT-4 fθ invoked
IO               | 14.6  | 4.87  | 2.88 | 1.00  | -      | 21.17 | 8.27  | 2.99 | 1.00  | -
CoT              | 3.65  | 1.22  | 2.77 | 1.00  | -      | 20.44 | 7.79  | 2.94 | 1.00  | -
CoT-SC (n=10)    | 5.11  | 1.70  | 2.76 | 10.00 | -      | 18.98 | 8.03  | 2.99 | 10.00 | -
ToT (b=3)        | 10.22 | 3.41  | 2.99 | 43.96 | -      | 60.58 | 39.90 | 2.78 | 39.83 | -
GoT (k=3)        | 8.76  | 8.03  | 1.93 | 7.00  | -      | 13.14 | 10.46 | 1.39 | 7.00  | -
XoT (w/o revise) | 72.99 | 39.90 | 2.89 | 1.00  | 95.66  | 72.99 | 60.54 | 2.55 | 1.00  | 95.66
XoT (w/ revise)  | 85.40 | 62.90 | 2.29 | 3.51  | 116.34 | 90.51 | 76.25 | 2.36 | 2.31  | 109.64

4.2 8-PUZZLE

The 8-Puzzle is a classic sliding puzzle game that consists of a 3 ×
2311.04254#22
2311.04254#24
2311.04254
[ "1706.06708" ]
2311.04254#24
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
3 grid with eight numbered tiles and one empty space denoted as "-". Its objective is to rearrange the tiles from a given initial configuration into a target configuration. The maximum number of steps necessary for the optimal solution of the 8-Puzzle is 31. This problem falls within the category of NP-complete problems Ratner & Warmuth (1986) and may have multiple solutions.

4.2.1 TASK SETUP

We randomly generated 419 solvable 8-Puzzle problems, with 300 instances allocated for training and 119 instances for testing. All generated problems are solvable within 9 steps. The action space encompasses four directions: [Up, Down, Left, Right]. Note that the legal action space for each problem state may vary due to the changing position of the empty space. As shown in Table 2, the thoughts refer to the step-by-step moves and the puzzle state after each move.
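A small sketch of how the legal action set changes with the position of the blank tile "-", following the convention above that an action names the direction in which the blank slides; the flat row-major board encoding is an assumption of this sketch.

```python
def legal_moves(board):
    """board: tuple of 9 entries in row-major order, with '-' marking the blank."""
    row, col = divmod(board.index("-"), 3)
    moves = []
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    return moves

def apply_move(board, move):
    """Slide the blank one step and return the new board."""
    i = board.index("-")
    j = i + {"Up": -3, "Down": 3, "Left": -1, "Right": 1}[move]
    b = list(board)
    b[i], b[j] = b[j], b[i]
    return tuple(b)
```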
2311.04254#23
2311.04254#25
2311.04254
[ "1706.06708" ]
2311.04254#25
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
4.2.2 BASELINES & XOT SETUP

The IO prompt is extended with three in-context examples. In the CoT approach, each input-output pair is enriched by incorporating the intermediate legal action sets, the current action, and the current state. In ToT, at each stage, a set of one-step thought candidates is derived from the LLM, drawn from the current set of legal actions. We impose a maximum step limit of 9, since all generated problems can be solved within this range. The 8-Puzzle's rules are conveyed through a system message, including detailed explanations of each action's execution. Similarly, we perform 20 simulations for each action taken with XOT, and increase this number to 50 for the thought revision process.

In the multi-solution scenario, all of the IO, CoT, and CoT-SC prompts consist of four examples. Each problem is presented with one to three distinct solutions. For ToT (b=3) and GoT (k=3), the maximum number of steps is increased to 12, as correct solutions may not always be optimal and could exceed 9 steps. In the case of XOT, after conducting simulations with MCTS, we sample 50 thought trajectories for exploration and select the top-3 thoughts with the highest counts.
2311.04254#24
2311.04254#26
2311.04254
[ "1706.06708" ]
2311.04254#26
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
9 # Table 5: Performance comparison on 8-Puzzle. Model IO CoT CoT-SC (n=10) ToT (b=1) ToT (b=3) GoT (k=1) XoT (w/o revise) XoT (w/ revise) GPT-3.5 Acc. [%] LLM invoked 0.00 0.00 0.84 5.88 6.72 3.36 49.58 59.66 1.00 1.00 10.00 31.76 55.86 19.00 1.00 1.50 GPT-4 fθ invoked Acc. [%] LLM invoked - - - - - - 36.64 41.09 1.68 7.56 8.40 3.36 13.45 3.36 51.26 93.28 1.00 1.00 10.00 27.49 54.13 19.00 1.00 1.48 fθ invoked - - - - - - 36.25 55.66 Table 6: Performance comparison on 8-Puzzle in the multi-solution scenario. Model IO CoT CoT-SC (n=10) ToT (b=3) GoT (k=3) XoT (w/o revise) XoT (w/ revise) Acc. MultiAcc 0.00 2.52 2.52 6.72 6.72 36.97 52.10 0.00 1.43 1.54 2.52 3.36 21.15 27.45 GPT-3.5 #Sol 2.47 2.05 1.90 2.98 2.96 2.87 2.85 LLM invoked 1.00 1.00 10.00 55.86 24.18 1.00 4.19 fθ invoked - - - - - 36.25 52.06 Acc.
2311.04254#25
2311.04254#27
2311.04254
[ "1706.06708" ]
2311.04254#27
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
MultiAcc 2.52 10.92 11.76 13.45 20.17 50.42 82.35 0.84 7.84 6.58 5.60 16.61 29.13 76.33 GPT-4 #Sol 2.97 1.21 2.08 2.97 2.70 2.97 1.52 LLM invoked 1.00 1.00 10.00 54.13 22.76 1.00 4.30 fθ invoked - - - - - 36.25 66.66 with revision in the 8-Puzzle task, outperforming the best baseline, ToT (b=3), which only achieves 13.45% accuracy. Additionally, XOT demonstrates efficiency, requiring approximately 1.5 LLM calls and around 55 calls to fθ, while delivering significantly superior performance. The multi-solution performance presented in Table 6 confirms that the XOT method continues to outperform other baselines for both GPT-3.5 and GPT-4 models in terms of accuracy and MultiAcc, whether or not revision is applied.
2311.04254#26
2311.04254#28
2311.04254
[ "1706.06708" ]
2311.04254#28
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Itâ s worth noting that the revision process is particularly beneficial for GPT-4, as it improves the MultiAcc from 29.13% to 76.33%. These results once again demon- strate that XOT can effectively generate complex thought structures for complete multi-solutions with high performance and efficiency, making it particularly suitable for this task. 4.3 POCKET CUBE The 2 Ã 2 Pocket Cube is a simplified variant of the classic Rubikâ s Cube puzzle. Its primary objec- tive is to restore all of its faces to a uniform color by executing various face rotations. The maximum number of steps required to optimally solve the cube is 11, and it is also a NP-complete problem Demaine et al. (2017) and may possess multiple solutions. This task is known to be challenging to LLMs cub. 4.3.1 TASK SETUP We initially set all faces of the cube to a uniform color and then randomly apply 5 actions sequen- tially selected from the 27 legal actions of the Rubikâ s Cube.
2311.04254#27
2311.04254#29
2311.04254
[ "1706.06708" ]
2311.04254#29
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
This process resulted in the creation of 1,000 training samples and 183 testing samples. All generated problems can be solved within 4 steps. To simplify the action space, we reduced the 27 legal operations to 9 actions, namely {U, U', U2, R, R', R2, F, F', F2}, which are used in our experiments with both the baselines and XOT. As shown in Table 2, the thoughts pertain to the step-by-step rotations and the cube state after each move.

4.3.2 BASELINES & XOT SETUP

The IO prompt is augmented with a single in-context example. In CoT, we enrich each input-output pair by including intermediate actions and states. In ToT, we retrieve one-step thought candidates from the LLM at each stage and instruct the LLM to classify each candidate for intermediate selection. A maximum step limit of 4 is imposed, as all generated problems can be resolved within this range. The cube's rules are conveyed through a system message, which includes the definition of the action space and illustrations of the execution of each action.
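Returning to the task setup above, scramble generation can be sketched as follows, restricted for brevity to the reduced 9-action set used in the experiments (the paper scrambles with the full set of 27 legal actions); `apply_rotation` is a placeholder for an actual 2x2 cube move table.

```python
import random

REDUCED_ACTIONS = ["U", "U'", "U2", "R", "R'", "R2", "F", "F'", "F2"]

def solved_cube():
    # 6 faces x 4 stickers, colors encoded as integers 0..5 for the LLM prompt.
    return tuple(color for color in range(6) for _ in range(4))

def apply_rotation(state, move):
    raise NotImplementedError("cube sticker-permutation tables omitted in this sketch")

def scramble(num_moves=5, seed=None):
    rng = random.Random(seed)
    state, moves = solved_cube(), []
    for _ in range(num_moves):
        move = rng.choice(REDUCED_ACTIONS)
        state = apply_rotation(state, move)
        moves.append(move)
    return state, moves
```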
2311.04254#28
2311.04254#30
2311.04254
[ "1706.06708" ]
2311.04254#30
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
The cubeâ s rules are conveyed through a system message, which includes the definition of the 10 Table 7: Performance comparison on Pocket Cube. Model IO CoT CoT-SC (n=10) ToT (b=1) ToT (b=3) GoT (k=1) XoT (w/o revise) XoT (w/ revise) GPT-3.5 Acc. [%] LLM invoked 1.09 0.00 0.00 7.65 17.49 1.64 45.36 74.32 1.00 1.00 10.00 16.50 58.72 8.93 1.00 1.55 GPT-4 fθ invoked Acc. [%] LLM invoked - - - - - - 18.69 64.63 1.09 1.09 1.09 11.48 19.57 18.03 45.90 77.60 1.00 1.00 10.00 16.39 56.58 8.55 1.00 1.54 fθ invoked - - - - - - 18.86 75.51 Table 8: Performance comparison on Pocket Cube in the multi-solution scenario. Model IO CoT CoT-SC (n=10) ToT (b=3) GoT (k=3) XoT (w/o revise) XoT (w/ revise) Acc. MultiAcc 0.55 0.55 0.55 17.49 3.28 39.89 73.22 0.27 0.55 0.18 5.83 1.09 23.04 48.72 GPT-3.5 #Sol 2.00 1.05 2.90 2.99 2.99 2.68 2.20 LLM invoked 1.00 1.00 10.00 58.72 14.76 1.00 4.13 fθ invoked Acc.
2311.04254#29
2311.04254#31
2311.04254
[ "1706.06708" ]
2311.04254#31
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
For XOT, we conduct 20 simulations for each action taken and increase this number to 500 for revision. In the multi-solution setup, the IO, CoT, and CoT-SC prompts each include 3 examples, and each problem within these prompts offers 3 unique solutions. For ToT (b=3) and GoT (k=3), the maximum number of steps allowed is extended to 7. In the case of XOT, after conducting MCTS simulations, we gather 50 thought trajectories and keep the top-3 thoughts with the highest counts.

4.3.3 RESULTS

The Pocket Cube task, similar to the 8-Puzzle, poses a challenge that demands spatial imagination skills, making it difficult for LLMs to excel. As expected, most of the baselines show very poor performance in this task, with some achieving 0% accuracy. The best-performing baseline, ToT (b=3) with GPT-4, only attains a success rate of 19.57%. In contrast, XOT achieves over 45% accuracy without revision and over 75% accuracy with revision, establishing itself as an expert in solving this task. This success is attributed to the injection of external knowledge from MCTS, enabling LLMs to solve problems that they would struggle with on their own. Notably, XOT maintains high efficiency in this task, requiring only 1.55 and 1.54 LLM inference calls for GPT-3.5 and GPT-4, respectively.
2311.04254#30
2311.04254#32
2311.04254
[ "1706.06708" ]
2311.04254#32
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
These results position XOT as a superior solution for enhancing LLMs in addressing seemingly insurmountable tasks. In the multi-solution scenario, the performance of the XOT method remains remarkable, achieving over 91% accuracy and over 77% MultiAcc with GPT-4. The revision process continues to play an important role, significantly improving the performance of XOT with both GPT models. The closest competitor in this setting is GoT (k=3) with GPT-4, which achieves an accuracy of 30.50% and a MultiAcc of 16.85%, but it requires a significantly higher number of LLM invocations than XOT (13.36 vs. 4.08). Overall, XOT retains its position as the best solution for the Pocket Cube task, exhibiting high performance, efficiency, and flexibility.

4.4 ABLATION STUDY

In our ablation study, we consider two aspects: the impact of the number of revisions on the performance and efficiency of XOT, and the sensitivity of performance to the completeness of the provided thoughts. These angles allow us to gain insights into how XOT's performance can be improved and to understand the importance of providing complete thoughts in complex problem-solving tasks.
2311.04254#31
2311.04254#33
2311.04254
[ "1706.06708" ]
2311.04254#33
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Figure 4: Accuracy, LLM and fθ invocation comparison for XOT w.r.t. the number of revisions on (a) Game of 24, (b) 8-Puzzle, and (c) Pocket Cube.

4.4.1 NUMBER OF REVISIONS

It is important to highlight that the performance on each task can be further improved through multiple revisions of the thought using the MCTS-LLM collaborative framework. In Fig. 4, we compare the performance of the GPT-3.5 and GPT-4 models using the XOT method with varying numbers of revisions, ranging from 0 to 3, across all three tasks.

In the Game of 24 task, as the number of revisions increases, both models exhibit improved performance. Notably, GPT-3.5 consistently outperforms GPT-4 in terms of accuracy. After three revisions, GPT-3.5 achieves an accuracy of 90.51%, while GPT-4 reaches 85.40%. This improved performance comes at the cost of increased inference times and model calls, primarily driven by the need for more interactions to generate revised thoughts. For the 8-Puzzle task, the trend of increasing accuracy with more revisions remains valid. However, in this task, GPT-4 significantly outperforms GPT-3.5. After one revision, GPT-4 achieves an accuracy of 93.28%, which increases to 95.8% after the third revision. In contrast, GPT-3.5 only attains an accuracy of 63.03% after the third revision. In the Pocket Cube task, the performance trend is similar. The accuracy of both models improves with an increase in the number of revisions. GPT-3.5 starts at an accuracy of 45.36% without revision and improves to 84.70% after three revisions. GPT-4 begins with an accuracy of 45.9% and reaches 83.61% after three revisions. Inference times and model calls are comparable between the two models, with GPT-4 showing a substantial increase in model calls after the third revision. Note that the number of LLM invocations does not increase dramatically with additional revisions, even though fθ is called more times to guide simulations.
2311.04254#32
2311.04254#34
2311.04254
[ "1706.06708" ]
2311.04254#34
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Considering the significant disparity in inference costs between the LLM and fθ, increasing the number of revisions to achieve better performance appears to be a favorable trade-off.

Table 9: Performance comparison on three tasks with incomplete thoughts.

Task        | Model            | GPT-3.5 Acc. [%] | GPT-3.5 LLM invoked | GPT-3.5 fθ invoked | GPT-4 Acc. [%] | GPT-4 LLM invoked | GPT-4 fθ invoked
Game of 24  | ToT (b=1)        | 3.65  | 17.15 | -     | 40.88 | 18.55 | -
Game of 24  | GoT (k=1)        | 2.19  | 5.00  | -     | 9.49  | 5.00  | -
Game of 24  | XoT (w/o revise) | 17.52 | 1.00  | 68.73 | 43.07 | 1.00  | 68.70
8-Puzzle    | ToT (b=1)        | 0.00  | 32.60 | -     | 6.72  | 26.98 | -
8-Puzzle    | GoT (k=1)        | 0.00  | 18.63 | -     | 3.36  | 19.00 | -
8-Puzzle    | XoT (w/o revise) | 2.52  | 1.00  | 36.66 | 40.34 | 1.00  | 36.24
Pocket Cube | ToT (b=1)        | 0.55  | 16.48 | -     | 2.19  | 16.39 | -
Pocket Cube | GoT (k=1)        | 0.00  | 8.96  | -     | 1.64  | 8.68  | -
Pocket Cube | XoT (w/o revise) | 5.46  | 1.00  | 18.85 | 6.01  | 1.00  | 18.89

Figure 5: Examples of thought structures generated by XOT for all three tasks (Game of 24, 8-Puzzle, Pocket Cube) in the multi-solution scenario.

4.4.2 INCOMPLETE THOUGHT

In this ablation study, we explore the performance of LLMs when provided with incomplete thoughts, specifically omitting the last step of the thought trajectory. This simulates scenarios where MCTS might supply inaccurate or incomplete thoughts.
2311.04254#33
2311.04254#35
2311.04254
[ "1706.06708" ]
2311.04254#35
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
The aim is to test whether LLMs can independently solve problems using their own reasoning, rather than solely relying on the thoughts from MCTS as answers. We present the performance comparison for all three tasks in Table 9. Note that we only compare against ToT and GoT, since the other baselines do not support this comparison by their nature.

The results clearly show that incomplete thoughts lead to a significant performance drop in all three tasks. GPT-3.5 is more affected than GPT-4, with GPT-3.5 achieving 0% accuracy on several baselines. In contrast, XOT with GPT-4 attains satisfactory performance on the Game of 24 and 8-Puzzle, achieving over 40% accuracy. However, the performance of XOT is dramatically affected in the Pocket Cube task, with accuracy dropping to 6%. This demonstrates that for very complex tasks, LLMs are highly sensitive to the completeness of the thoughts provided. Missing steps in the thought can lead to a substantial drop in performance, highlighting the importance of providing complete thoughts for such tasks.

4.5 CASE STUDY

Finally, in Fig. 5, we provide examples of thought structures generated by XOT for all three tasks in the multi-solution scenario. It is noteworthy that, owing to the multiple solutions required, the generated thoughts intertwine during intermediate steps and converge towards the final goal state. This results in a naturally woven thought structure resembling a graph, showcasing the remarkable flexibility achieved by XOT. Upon closer examination of each example, in the case of the Game of 24, there are multiple solutions to reach the goal of 24 from the initial state. XOT effectively
2311.04254#34
2311.04254#36
2311.04254
[ "1706.06708" ]
2311.04254#36
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
predicts these trajectories, indicating its ability to grasp complex thought structures. In the 8-Puzzle example, we observe instances of reflection in the thought structure, with back-and-forth recurrent state transitions. This demonstrates XOT's capacity for self-reflection, a crucial attribute for LLMs, as discussed in previous work Shinn et al. (2023). In the case of the Pocket Cube, XOT identifies four distinct pathways to reach the goal state, leading to successful problem-solving across multiple solutions. Overall, these cases highlight how XOT encapsulates the flexibility required in thought generation, fostering diverse and creative thinking for LLMs. This enables them to produce multiple high-quality answers to a single problem effectively.

4.6 EXPERIMENT SUMMARY

In summary, our approach XOT significantly improves the performance of LLMs by introducing a streamlined thought trajectory revision process. This represents a fundamental shift from traditional problem-solving approaches, resulting in substantial performance enhancements across a range of tasks. Notably, XOT excels in solving the Game of 24 and demonstrates its ability to overcome challenges requiring spatial reasoning, such as the 8-Puzzle and Pocket Cube, which were previously challenging for LLMs. The remarkable synergy of improved performance, efficiency, and flexibility exhibited by XOT positions it as an exemplary and superior method for eliciting optimal responses from LLMs.

5 RELATED WORK

Decision Making & Planning with LLMs. The utilization of LLMs for decision-making and planning has become a prominent area of research. Similar to human problem-solving, the process involves breaking down complex problems into sub-tasks. Various frameworks, such as CoT Wei et al. (2022), ToT Yao et al. (2023), and GoT Besta et al. (2023), have been designed to facilitate problem decomposition in different structural forms, leading to enhanced solutions derived from LLMs. Extensions of these frameworks have also been explored across different domains and modalities Zhang et al. (2022; 2023); Ning et al. (2023); Turpin et al. (2023); Long (2023). Our approach XOT distinguishes itself from the aforementioned work by concurrently achieving superior performance, efficiency, and flexibility, embodying the concept of comprehensive thought generation. Furthermore, the "Describe, Explain, Plan, and Select"
2311.04254#35
2311.04254#37
2311.04254
[ "1706.06708" ]
2311.04254#37
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
framework introduced in Wang et al. (2023b) presents an interactive planning approach for LLMs, significantly enhancing planning performance for multi-task agents. Research conducted in Singh et al. (2023) leverages LLMs to suggest next actions or sequences during task planning for robotics, leading to improved task performance across various metrics. Additionally, work presented in Xie et al. (2023) employs LLMs to translate natural language into planning goals, demonstrating their capacity to harness commonsense knowledge and reasoning to provide missing details for under-specified goals. These studies underscore the growing potential of LLMs in the field of planning, with research efforts expanding rapidly.

Augmenting LLMs with RL. Enhancing the capabilities of LLMs through the incorporation of external models constitutes an effective strategy for improving their overall quality. The foundational work of ChatGPT Ouyang et al. (2022) leverages RL from human feedback to enable LLMs to adhere to human guidance, resulting in a substantial enhancement of their truthfulness and a reduction in toxic output. Similarly, GLAM Carta et al. (2023) employs online RL to establish alignment between LLMs' knowledge and the broader environment, thus enhancing their ability to generalize to new objects or tasks and ultimately improving their performance. Additionally, an interesting study in Yuan et al. (2023) utilizes RL to acquire basic skills in the context of Minecraft Cipollone et al. (2014), with subsequent high-level planning carried out by LLMs. This approach demonstrates promising performance across various Minecraft tasks. Furthermore, the ESPER framework Yu et al. (2023) harnesses RL to achieve alignment between multimodal inputs and language model generations, all without the need for direct supervision. This empowers LLMs to effectively tackle multimodal tasks and provides robust visual alignment and rapid inference speeds while preserving the textual domain. Collectively, these research endeavors underscore the considerable potential in augmenting LLMs with reinforcement learning techniques.
2311.04254#36
2311.04254#38
2311.04254
[ "1706.06708" ]
2311.04254#38
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
# 6 DISCUSSION

Generalization. While XOT is presently utilized for reasoning and search problems, its applicability can be extended to a broader spectrum of problem domains characterized by decomposable tasks with well-defined objectives. The MCTS utilized in XOT is particularly suitable for such tasks and can therefore generalize to more complex problems. We also note that MCTS functions in a supportive role and can be substituted with alternative supervised or RL models for thought exploration and generation, which can serve as a copilot to inject domain knowledge of the real-world model into LLMs. This opens up a promising avenue for future research, enabling LLMs to engage in more effective planning and problem-solving processes.

Limitation. We also note that the implementation of XOT necessitates the training of additional policy and value models to expedite the inference process. This training process requires the acquisition of datasets from real-world environments, introducing supplementary costs and effort. However, note that these policy and value models are considerably smaller and more computationally efficient than the underlying LLMs. Consequently, the incurred costs are deemed low, particularly in the context of the tasks featured in this study, where the thought steps and objectives are well-defined. In future research endeavors, we intend to explore methods to enhance the efficiency of the training process for XOT in scenarios where the objectives are less straightforward, such as multi-agent planning and code generation tasks Talebirad & Nadiri (2023); Vaithilingam et al. (2022). This endeavor will expand the applicability of the proposed XOT framework to a broader range of applications.

Conclusion. The XOT framework presented in this paper signifies a significant progression in thought generation for LLMs aimed at solving complex tasks. It challenges the constraints of the
2311.04254#37
2311.04254#39
2311.04254
[ "1706.06708" ]
2311.04254#39
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
"Penrose Triangle" by concurrently achieving performance, efficiency, and flexibility, a feat unattainable by existing prompting paradigms. This accomplishment is achieved through the integration of MCTS with pretrained low-cost policy and value networks, by injecting domain knowledge into LLMs, offloading thought searching, and facilitating unconstrained free-style thought exploration. The collaborative thought revision framework involving MCTS and LLM further enhances the quality of thought generation. Experimental evaluations conducted across three intricate real-world problems, namely the Game of 24, 8-Puzzle, and Pocket Cube, provide empirical evidence that our XOT framework significantly outperforms existing prompting paradigms, particularly in scenarios involving multi-solution problems.

# REFERENCES

4 Numbers. https://www.4nums.com/game/difficulties/. [Online; accessed 21-Sep-2023].

I Calculated ChatGPT's IQ. https://www.youtube.com/watch?v=HXb9Azzhr1k. Accessed: 2023-10-30.
2311.04254#38
2311.04254#40
2311.04254
[ "1706.06708" ]
2311.04254#40
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687, 2023.

Thomas Carta, Clément Romac, Thomas Wolf, Sylvain Lamprier, Olivier Sigaud, and Pierre-Yves Oudeyer. Grounding large language models in interactive environments with online reinforcement learning. arXiv preprint arXiv:2302.02662, 2023.

Yinfang Chen, Huaibing Xie, Minghua Ma, Yu Kang, Xin Gao, Liu Shi, Yunjie Cao, Xuedong Gao, Hao Fan, Ming Wen, et al. Empowering practical root cause analysis by large language models for cloud incidents. arXiv preprint arXiv:2305.15778, 2023.

Maria Cipollone, Catherine C Schifter, and Rick A Moffat.
2311.04254#39
2311.04254#41
2311.04254
[ "1706.06708" ]
2311.04254#41
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Minecraft as a creative tool: A case study. International Journal of Game-Based Learning (IJGBL), 4(2):1–14, 2014.

Erik D Demaine, Sarah Eisenstat, and Mikhail Rudoy. Solving the Rubik's cube optimally is NP-complete. arXiv preprint arXiv:1706.06708, 2017.

Haakon Faste and Honray Lin. The untapped promise of digital mind maps. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1017–1026, 2012.
2311.04254#40
2311.04254#42
2311.04254
[ "1706.06708" ]
2311.04254#42
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. Mathematical capabilities of ChatGPT. arXiv preprint arXiv:2301.13867, 2023.

Aurélien Garivier and Eric Moulines. On upper-confidence bound policies for switching bandit problems. In International Conference on Algorithmic Learning Theory, pp. 174–188. Springer, 2011.

Peter Jamieson.
2311.04254#41
2311.04254#43
2311.04254
[ "1706.06708" ]
2311.04254#43
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Using modern graph analysis techniques on mind maps to help quantify learning. In 2012 Frontiers in Education Conference Proceedings, pp. 1–6. IEEE, 2012.

Emre Kıcıman, Robert Ness, Amit Sharma, and Chenhao Tan. Causal reasoning and large language models: Opening a new frontier for causality. arXiv preprint arXiv:2305.00050, 2023.

Yuxi Li. Deep reinforcement learning: An overview. arXiv preprint arXiv:1701.07274, 2017.
2311.04254#42
2311.04254#44
2311.04254
[ "1706.06708" ]
2311.04254#44
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Jieyi Long. Large language model guided tree-of-thought. arXiv preprint arXiv:2305.08291, 2023.

Xuefei Ning, Zinan Lin, Zixuan Zhou, Huazhong Yang, and Yu Wang. Skeleton-of-thought: Large language models can do parallel decoding. arXiv preprint arXiv:2307.15337, 2023.

Reham Omar, Omij Mangukiya, Panos Kalnis, and Essam Mansour.
2311.04254#43
2311.04254#45
2311.04254
[ "1706.06708" ]
2311.04254#45
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
ChatGPT versus traditional question answering for knowledge graphs: Current status and future directions towards knowledge graph chatbots. arXiv preprint arXiv:2302.06466, 2023.

OpenAI. GPT-4 technical report, 2023.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2311.04254#44
2311.04254#46
2311.04254
[ "1706.06708" ]
2311.04254#46
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.

Martin L Puterman. Markov decision processes. Handbooks in Operations Research and Management Science, 2:331–434, 1990.

Daniel Ratner and Manfred Warmuth. Finding a shortest solution for the n x n extension of the 15-puzzle is intractable. In Proceedings of the Fifth AAAI National Conference on Artificial Intelligence, pp. 168–172, 1986.

Christopher D Rosin.
2311.04254#45
2311.04254#47
2311.04254
[ "1706.06708" ]
2311.04254#47
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Multi-armed bandits with episode context. Annals of Mathematics and Artificial Intelligence, 61(3):203–230, 2011.

Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.

David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of Go without human knowledge. Nature, 550(7676):354–359, 2017.

Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg.
2311.04254#46
2311.04254#48
2311.04254
[ "1706.06708" ]
2311.04254#48
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Progprompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 11523–11530. IEEE, 2023.

Kaya Stechly, Matthew Marquez, and Subbarao Kambhampati. GPT-4 doesn't know it's wrong: An analysis of iterative prompting for reasoning problems. arXiv preprint arXiv:2310.12397, 2023.
2311.04254#47
2311.04254#49
2311.04254
[ "1706.06708" ]
2311.04254#49
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Yashar Talebirad and Amirhossein Nadiri. Multi-agent collaboration: Harnessing the power of intelligent LLM agents. arXiv preprint arXiv:2306.03314, 2023.

Miles Turpin, Julian Michael, Ethan Perez, and Samuel R Bowman. Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting. arXiv preprint arXiv:2305.04388, 2023.
2311.04254#48
2311.04254#50
2311.04254
[ "1706.06708" ]
2311.04254#50
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Priyan Vaithilingam, Tianyi Zhang, and Elena L Glassman. Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models. In CHI Conference on Human Factors in Computing Systems Extended Abstracts, pp. 1–7, 2022.

Karthik Valmeekam, Matthew Marquez, and Subbarao Kambhampati. Can large language models really improve by self-critiquing their own plans? arXiv preprint arXiv:2310.08118, 2023.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou.
2311.04254#49
2311.04254#51
2311.04254
[ "1706.06708" ]
2311.04254#51
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2023a.

Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. arXiv preprint arXiv:2302.01560, 2023b.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
2311.04254#50
2311.04254#52
2311.04254
[ "1706.06708" ]
2311.04254#52
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Yaqi Xie, Chen Yu, Tongyao Zhu, Jinbin Bai, Ze Gong, and Harold Soh. Translating natural language to planning goals with large-language models. arXiv preprint arXiv:2302.05128, 2023.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.
2311.04254#51
2311.04254#53
2311.04254
[ "1706.06708" ]
2311.04254#53
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
Youngjae Yu, Jiwan Chung, Heeseung Yun, Jack Hessel, Jae Sung Park, Ximing Lu, Rowan Zellers, Prithviraj Ammanabrolu, Ronan Le Bras, Gunhee Kim, et al. Fusing pre-trained language models with multimodal prompts through reinforcement learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10845–10856, 2023.

Haoqi Yuan, Chi Zhang, Hongcheng Wang, Feiyang Xie, Penglin Cai, Hao Dong, and Zongqing Lu. Plan4mc: Skill reinforcement learning and planning for open-world Minecraft tasks. arXiv preprint arXiv:2303.16563, 2023.

Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493, 2022.

Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023.
2311.04254#52
2311.04254#54
2311.04254
[ "1706.06708" ]
2311.04254#54
Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
2311.04254#53
2311.04254
[ "1706.06708" ]
2311.04072#0
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
arXiv:2311.04072v1 [cs.CL] 7 Nov 2023

Preprint.

# BEYOND IMITATION: LEVERAGING FINE-GRAINED QUALITY SIGNALS FOR ALIGNMENT

Geyang Guo1*, Ranchi Zhao1*, Tianyi Tang1, Wayne Xin Zhao1,3†, Ji-Rong Wen1,2,3 1Gaoling School of Artificial Intelligence, Renmin University of China. 2School of Information, Renmin University of China. 3Beijing Key Laboratory of Big Data Management and Analysis Methods. [email protected], [email protected], [email protected], [email protected], [email protected]

# ABSTRACT

Alignment with human preference is a desired property of large language models (LLMs). Currently, the main alignment approach is based on reinforcement learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is intricate to implement and train, thus recent studies explore how to develop alternative alignment approaches based on supervised fine-tuning (SFT). A major limitation of SFT is that it essentially does imitation learning, which cannot fully understand what the expected behaviors are. To address this issue, we propose an improved alignment approach named FIGA. Different from prior methods, we incorporate fine-grained (i.e., token or phrase level) quality signals that are derived by contrasting good and bad responses. Our approach makes two major contributions. Firstly, we curate a refined alignment dataset that pairs initial responses with the corresponding revised ones. Secondly, we devise a new loss function that can leverage fine-grained quality signals to instruct the learning of LLMs for alignment. Extensive experiments have demonstrated the effectiveness of our approach by comparing with a number of competitive baselines.
2311.04072#1
2311.04072
[ "2309.00267" ]
2311.04072#1
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
# INTRODUCTION Pre-trained large language models (LLMs) such as LLaMA (Touvron et al., 2023a) have shown remarkable potentials to solve various downstream tasks by mastering the universal pre-training task of next-token prediction. While after large-scale pre-training, it often needs subsequent tuning for enhancing and regulating the behaviors of LLMs. Two typical approaches are supervised fine- tuning (SFT) and reinforcement learning from human feedback (RLHF), which can largely improve LLMs in both task solving capacity and human alignment (Ouyang et al., 2022). Despite widely explored, SFT and RLHF have their own strengths and weaknesses (Zhao et al., 2023a). On the one hand, SFT is easy to implement and can effectively boost the general task solving abilities by instruction based eliciting (Wei et al., 2021; Ouyang et al., 2022; Chung et al., 2022), while it mainly imitates the behaviors of experts (essentially doing behavior clone (Wiseman & Rush, 2016)), which are demonstrated by the human annotators or powerful LLMs such as ChatGPT. Therefore, the SFT performance highly relies on high-quality demonstration data (Zhou et al., 2023), and might suffer from the huge distribution shifts between its outputs and imitated outputs (Zhang et al., 2019; Schulman, 2023). On the other hand, RLHF can better explore the semantic space of LLMs, and identify the optimal policy by encouraging good behaviors and discouraging bad behaviors during learning. However, it is very complicated to effectively implement, often suffering from training instability issues such as reward collapse (Song et al., 2023; Wolf et al., 2023). To leverage the benefits of SFT and RLHF, several recent studies propose to develop alignment ap- proaches without reinforcement learning (RL). These studies typically construct refined instruction data using methods such as quantile ranking (Lu et al., 2022) and rejection-sampling (Touvron et al.,
2311.04072#0
2311.04072#2
2311.04072
[ "2309.00267" ]
2311.04072#2
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
â Equal contribution. â Corresponding author. 1 Preprint. 2023b), and then follow or slightly modify the original SFT loss. Another line of research designs alternative optimization approaches that bypasses reward modeling (Rafailov et al., 2023). To con- duct effective alignment without RL, a key issue is how to effectively learn by discriminating good and bad behaviors as that in RLHF (Ouyang et al., 2022), such that LLMs can understand what are good behaviors to follow and what are bad behaviors to avoid. Despite the prior efforts, they are largely limited by response-level discrimination signals: they are only aware of the quality label (e.g., good or bad) of a demonstration but not what makes it good or bad.
2311.04072#1
2311.04072#3
2311.04072
[ "2309.00267" ]
2311.04072#3
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Thus, it canâ t fully capture the correct alignment behaviors even demonstrated by what are good and bad behaviors. In this work, we introduce FIGA, a novel method that aligns language models with human prefer- ences. The core idea is to contrast a low-quality initial response from a LLMâ s output with a cor- responding high-quality revised response by another powerful LLM (e.g., ChatGPT), so that LLMs can be noted with what are newly added (good actions) and what are removed or substituted (bad actions) from such a revision process. Such fine-grained quality signals can be more useful than the widely used response-level quality signal. It can instruct LLMs to emphasize the learning of good actions and penalize the bad actions in a single response. To implement our approach, we first cu- rate an alignment dataset called SPA that pairs an initial response with a revised response under the guidance of the ground-truth demonstrations. We mainly keep the queries that a LLM performs less well on, and perform strict filtering. Further, we design a new fine-tuning method that assigns spe- cific token-level weights to different parts (e.g., good or bad tokens). Our learning loss can directly impose fine-grained reward scores to guide the learning of LLMs for improved alignment. To the best of our knowledge, it is the first attempt that leverages fine-grained quality signals for improving the alignment of LLMs without RL. Our approach can make LLMs better understand what are good and bad behaviors beyond simple imitation. By conducting extensive experiments, we demonstrate that FIGA shows promising performance in aligning language models with human preferences: our approach outperform the initial supervised-finetuned model by notable 3.2 points and the strong PPO method by 1.8 points. # 2 RELATED WORK In this section, we review the related work in the two aspects, namely reinforcement learning from human feedback and alignment without reinforcement learning. Reinforcement learning from human feedback Large-scale pre-training empowers large lan- guage models (LLMs) to acquire extensive knowledge, underscoring their remarkable potential across diverse tasks (Brown et al., 2020; Kojima et al., 2022; Zhang et al., 2022; Chowdhery et al., 2022). Nonetheless, models exclusively focus on next token prediction in pre-training phrase, while do not consider human preferences.
2311.04072#2
2311.04072#4
2311.04072
[ "2309.00267" ]
2311.04072#4
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Consequently, this gives rise to unexpected behaviors like harm- ful or inaccurate information, and emphasizes the necessity to align language models with human preferences. The current mainstream approaches (Ouyang et al., 2022) to better harness the capabili- ties of LLMs include supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). To be specific, this involves three stages: firstly, using SFT to enable the model to better follow human instructions; subsequently, training a reward model (RM) using human preference data; and ultimately, tune the model to maximize the reward through the proximal policy optimiza- tion (PPO) (Schulman et al., 2017) algorithm. Furthermore, there are works exploring enhancement for this process (Ramamurthy et al., 2022; Lightman et al., 2023; Lee et al., 2023). However, RLHF presents challenges due to complex coding and hyper-parameters selecting. Besides, it requires load- ing three to four models simultaneously, resulting in high memory usage. These challenges propel researchers to explore alternative approaches to align language models with human feedback. Alignment without reinforcement learning Several studies are based on the rationale that lan- guage models have already acquired comprehensive knowledge during the pre-training, and only high-quality supervised fine-tuning data is required for further tuning (Zhou et al., 2023). So these works (Liu et al., 2023b; Sun et al., 2023; Bai et al., 2022b; Bhardwaj & Poria, 2023; Krishna et al., 2022) bypass reward modeling, and instead concentrate on the construction of datasets that align well with human preferences. Other works are directed towards exploring substitutes for the intri- cate PPO algorithm. These efforts employ diverse approaches to learn from the preference data, encompassing the creation of a supervised fine-tuning training dataset enriched with human prefer-
2311.04072#3
2311.04072#5
2311.04072
[ "2309.00267" ]
2311.04072#5
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
2 Preprint. ence data (Liu et al., 2023a; Zhang et al., 2023; Dong et al., 2023), the integration of preferences for different outputs into the loss function (Yuan et al., 2023; Rafailov et al., 2023; Zhao et al., 2023b; Liu et al., 2023c), and the utilization of controllable text generation techniques (Lu et al., 2022). However, the human preference information used in these methods is at the sentence level, lacking more fine-grained supervision signals. # 3 APPROACH In this section, we present the proposed alignment approach FIGA by leveraging fine-grained qual- ity signals. Our approach is developed based on a specially curated alignment dataset called SPA (Section 3.1), where each low-quality initial response is paired with a high-quality revised response. Based on such an alignment dataset, we further develop a new loss function that incorporates fine- grained quality signals derived by contrasting good and bad responses (Section 3.2). Our approach is easy to implement (similar to SFT) and can capture the underlying effect to generate high-quality responses instead of simply imitating them (similar to RLHF), which are discussed in Section 3.3. The overall framework of our FIGA pipeline is shown in Figure 1.
2311.04072#4
2311.04072#6
2311.04072
[ "2309.00267" ]
2311.04072#6
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
[Figure 1 residue: the original figure shows an example query ("What is the best way to get from Tokyo to Osaka?") drawn from the instances pool, the initial model's response about the Shinkansen bullet train, and a revised response with the desired words highlighted; only garbled OCR text of the diagram was extracted here.]
2311.04072#5
2311.04072#7
2311.04072
[ "2309.00267" ]
2311.04072#7
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Frans per day, Tnitial Model Align the Tnitial Model Reward Model LLM Figure 1: The overall illustration of our alignment approach FIGA. 3.1 CURATED ALIGNMENT DATASET From the perspective of dataset, the novelty of our alignment approach can be given in two major aspects. Firstly, we donâ t directly aggregate all the available instruction data, but instead focus on high-quality instruction data that a LLM performs less well on. It enables LLMs to specially improves their weaknesses, reducing the cost of replicate learning.
2311.04072#6
2311.04072#8
2311.04072
[ "2309.00267" ]
2311.04072#8
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Secondly, we donâ t take what human annotators write or powerful LLMs (e.g., ChatGPT or GPT-4) generate as training targets, but instead seek a more similar surrogate that is derived based on its own output by a LLM. It can largely reduce the distribution shift between the LLM to be aligned and the ground-truth demonstrations. We carefully construct the SubPar Alignment (SPA) dataset, a curated collection of query, modelâ s initial response, and the corresponding improved response (with minor revision). Compared with prior work (Ouyang et al., 2022; Yuan et al., 2023; Liu et al., 2023a), we mainly consider the queries where LLMsâ performance are not satisfactory and aim to correct these bad cases via specific train- ing. Moreover, we refine the initial response of a LLM that is to be aligned as training target, which can effectively reduce the distribution shifts from the ground-truth demonstrations. Formally, we denote the initial model as Ï Î¸, which can be a supervised-finetuned model (e.g., Al- paca (Taori et al., 2023)) or a pre-trained base model (e.g., LLaMA (Touvron et al., 2023a)). To construct our dataset, we assume that a reward model for assessing the alignment level is available. In practice, a number of reward models have been released publicly (e.g., DeBERTa (OpenAssis- tant, 2023)), which can be used for our approach. Given a query X and a response Y , we leverage a reward model RM to compute the reward score RY = RM(X, Y ), which reflects how well the response Y aligns with given query X.
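To make the scoring step concrete, the following is a minimal sketch (ours, not the authors' released code) of computing R(X, Y) with the publicly available OpenAssistant DeBERTa reward model through the standard transformers sequence-classification API; the function name and the choice to read the single output logit as the scalar reward are assumptions on our part.

```python
# Hedged sketch: scoring a (query, response) pair with a public reward model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "OpenAssistant/reward-model-deberta-v3-large-v2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
reward_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
reward_model.eval()

def reward_score(query: str, response: str) -> float:
    """Return R(X, Y): how well `response` aligns with `query`."""
    inputs = tokenizer(query, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = reward_model(**inputs).logits
    # The model has a single regression-style output; we treat it as the reward.
    return logits[0, 0].item()
```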
2311.04072#7
2311.04072#9
2311.04072
[ "2309.00267" ]
2311.04072#9
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Below, we detail the construction procedure. 3 Preprint. Rollout for initial response generation We first broadly collect existing paired datasets encom- passing a wide range of real-world tasks, and construct the instances pool D = {X, Y }n i=1. To better align with human value, we select preference datasets (e.g., HH-RLHF (Bai et al., 2022a)) that adhere to the 3H principle (i.e., helpfulness, honesty, and harmlessness) in this work. Further- more, we also include instruction dataset (e.g., OpenOrca (Mukherjee et al., 2023)) to preserve the task solving abilities of LLMs. We aim to train a both capable and safe model like ChatGPT, rather than only focusing on alignment while sacrificing the task solving abilities. Based on these datasets, we employ the rollout model Ï Î¸ to generate initial responses Ë Y = Ï Î¸(X) for the given queries. Identifying the queries to be enhanced After obtaining the modelâ s initial response Ë Y and the human-preferred response Y , we next identify the queries where the model requires further im- provement to better align with human intent through the reward score RM(·). Following existing work (Ouyang et al., 2022), we employ the reward model as a surrogate of human preferences, and design a filtering process based on the calculated reward score R Ë Y and RY for all the instances. We only keep the instances that meet all the three following restrictions: (1) R Ë Y < η1 (a subpar initial performance, i.e., bad cases), (2) RY > η2 (high-quality demonstrations), and (3) RY â R Ë Y > η3 (clear quality difference), where η1, η2, and η3 are three threshold values for filtering, we will set them according to the reward score distribution. The details can be found in Section 4.1.2. With the above filtering mechanism, we ensure the quality and usefulness of our SPA dataset. We target at bad case correction of the rollout model, which is more directed and effective than existing methods that directly trains the model on the whole collected dataset.
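As a concrete illustration of the filtering rule above, here is a small sketch (ours, not the paper's code) that keeps only instances satisfying the three reward conditions; the dataclass and function names are illustrative, and the default thresholds follow the values reported in the implementation details (η1 = 1, η2 = 3, η3 = 3.5).

```python
# Illustrative sketch of the SPA reward-based filtering rule.
from dataclasses import dataclass
from typing import List

@dataclass
class Instance:
    query: str
    reference: str      # human-preferred response Y
    initial: str        # rollout response Y_hat from the model to be aligned
    reward_ref: float   # R_Y = RM(X, Y)
    reward_init: float  # R_Y_hat = RM(X, Y_hat)

def filter_spa(pool: List[Instance],
               eta1: float = 1.0,   # initial response must be weak (bad case)
               eta2: float = 3.0,   # reference must be high quality
               eta3: float = 3.5    # required quality gap
               ) -> List[Instance]:
    kept = []
    for ins in pool:
        if (ins.reward_init < eta1
                and ins.reward_ref > eta2
                and ins.reward_ref - ins.reward_init > eta3):
            kept.append(ins)
    return kept
```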
2311.04072#8
2311.04072#10
2311.04072
[ "2309.00267" ]
2311.04072#10
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Revising initial responses for reducing the distribution shifts To align a LLM, a basic principle is to ensure that the distribution of the model should not experience significant shifts during the alignment process (Bai et al., 2022a). Despite that the ground-truth demonstration (Yi) is human preferred, it is likely to span a very different semantic distribution as the LLM to be aligned. Our solution is to revise the initial response ( Ë Y ) by referring to the ground-truth demonstration (Yi). In this way, we can effectively reduce the distribution shifts as well as obtaining demonstrations similar to the original output. Specially, we generate a pseudo reference Ë Y based the target Yi, making minor adjustments to the Ë Y and enhance its quality, i.e., modifying Ë Y as minimally as possible based on Yi.
2311.04072#9
2311.04072#11
2311.04072
[ "2309.00267" ]
2311.04072#11
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Such a generation process is conducted by prompting the powerful ChatGPT. To facilitate the generation process, we further manually inspect the low-quality responses that we have previously filtered and identify four major low-quality reasons: (1) lack of detail, (2) inaccuracy in response, (3) the need for structural adjustments, and (4) other factors (off-topic or harmful content). In detail, we leverage ChatGPT to determine, given Yi, which of the four reasons Ë Y is associated with. Afterwards, we design different prompts for the four reasons and instruct the LLM to make minor correction to the initial response Ë Y based on Yi. We denote the revised response as Ë Y . The details of our process and prompts can be found in Appendix A.2. Finally, we obtain the SPA dataset {X, Ë Y , Ë Y } for subsequent training. Our construction method has dual merits: it not only aligns the reference output with human preferences but also preserves the inherent linguistic style and overall semantic distribution of the model to be aligned. Note that we keep both the initial and revised responses in a contrastive form, because they are jointly used for deriving fine-grained quality signals in subsequent training. 3.2 FINE-GRAINED QUALITY-AWARE ALIGNMENT TUNING As described above, our fine-tuning dataset for alignment contains both low-quality initial responses ( Ë Y ) and high-quality revised responses ( Ë Y ). Instead of directly learning from these high-quality responses (similar to rejection sampling (Touvron et al., 2023b)), it is important for LLMs to under- stand why such revisions are useful to produce the high-quality responses. Furthermore, LLMs can improve the alignment capacity from the contrast between good and bad responses. Motivated by previous work (Liu et al., 2022), we utilize Levenshtein distance to quantify the simi- larity between of Ë Y and Ë Y .
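A hedged sketch of this two-step revision call is given below. The prompt strings are placeholders (the actual prompts are in Appendix A.2, which is not reproduced here), and the helper names are ours; it uses the OpenAI chat-completions client with gpt-3.5-turbo as described in the implementation details.

```python
# Hedged sketch of the LLM-based revision step (not the authors' code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REASONS = ["lack of detail", "inaccuracy in response",
           "structural adjustments needed", "other (off-topic or harmful)"]

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def revise(query: str, initial: str, reference: str) -> str:
    # Step 1: identify which of the four low-quality reasons applies (placeholder prompt).
    reason = chat(
        f"Given a query, a low-quality response, and a reference answer, "
        f"pick the main problem from {REASONS}.\n"
        f"Query: {query}\nResponse: {initial}\nReference: {reference}"
    )
    # Step 2: minimally edit the initial response, guided by the reference,
    # so the revision stays close to the rollout model's own distribution.
    return chat(
        f"The response has the following problem: {reason}. "
        f"Revise it with as few edits as possible, using the reference as guidance.\n"
        f"Query: {query}\nResponse: {initial}\nReference: {reference}"
    )
```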
2311.04072#10
2311.04072#12
2311.04072
[ "2309.00267" ]
2311.04072#12
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Levenshtein distance is a dynamic programming algorithm to obtain the minimal edit distance between two sentences through three operations: addition, deletion, and sub- stitution. Comparing the initial and revised response, the involving tokens can be generally divided into three types: newly added, deleted, or substituted. We consider assigning different weights to 4 # Preprint. these three types of tokens. We reward the tokens that are added or substituted in the revised re- sponse Ë Y , penalize the tokens that are deleted or substituted in the original response Ë Y , and tend to overlook the rest tokens that remain the same after the revision process. Formally, we introduce two token-level weighting functions to characterize the above ideas: ~ a â ) a, if y is added or substituted q = . yt 7, otherwise qd) apa B, if % is deleted or substituted - (hi. t) = 0, otherwise where α > 0, β > 0, and γ â ¥ 0 are three coefficients to control the encouraged, discouraged, and ignored parts, which can be empirically set or learned from tuning data. In this way, we can then encourage the model to â imitateâ the desired actions that have a greater impact on enhancing quality, discourage the model from emulating the undesired actions that lead to a poor performance in quality. The final training loss can be formulated as: L = â Ë r(Ë yt, t) log Ï Î¸(Ë yt|Ë y<t, X) + Ë r(Ë yt, t) log Ï Î¸(Ë yt|Ë y<t, X) . (2) # HEY EY decrease the probability of undesired words increase the probability of desired words _
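To make the weighting scheme concrete, below is an illustrative sketch (ours, not the released implementation): difflib's sequence alignment stands in for the Levenshtein alignment described above to tag tokens as added, deleted, or substituted, and the resulting weights feed a token-weighted objective of the form FIGA uses; the defaults α = 1, β = 0.5, γ = 0 follow the paper's experimental setup.

```python
# Illustrative sketch of the fine-grained weights and the token-weighted objective.
from difflib import SequenceMatcher
from typing import List, Tuple
import torch

def token_weights(initial: List[str], revised: List[str],
                  alpha: float = 1.0, beta: float = 0.5, gamma: float = 0.0
                  ) -> Tuple[List[float], List[float]]:
    """w_revised[t] = alpha if the revised token was added/substituted, else gamma;
       w_initial[t] = beta  if the initial token was deleted/substituted, else 0."""
    w_revised = [gamma] * len(revised)
    w_initial = [0.0] * len(initial)
    matcher = SequenceMatcher(a=initial, b=revised, autojunk=False)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("insert", "replace"):      # desired tokens -> encourage
            for j in range(j1, j2):
                w_revised[j] = alpha
        if op in ("delete", "replace"):      # undesired tokens -> penalize
            for i in range(i1, i2):
                w_initial[i] = beta
    return w_revised, w_initial

def figa_loss(logp_revised: torch.Tensor, w_revised: torch.Tensor,
              logp_initial: torch.Tensor, w_initial: torch.Tensor) -> torch.Tensor:
    """Raise weighted log-probabilities of revised-response tokens and lower those
       of penalized initial-response tokens (cf. Equation 2)."""
    return -(w_revised * logp_revised).sum() + (w_initial * logp_initial).sum()
```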
2311.04072#11
2311.04072#13
2311.04072
[ "2309.00267" ]
2311.04072#13
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
The overall FIGA pipeline is illustrated in Algorithm 1. The major advantages of FIGA over typical SFT (Ouyang et al., 2022) is that it can learn from fine-grained contrast between good and bad responses, which is essentially similar to that in reinforcement learning (discussed in Section 3.3). In addition, by explicitly modeling the revision effect, such an approach can naturally zoom into crucial words or phrase, making the model better zoom into fine-grained semantics. # Algorithm 1: FIGA - Leveraging Fine-grained Quality Signals for Alignment 1 Input: Instance pool D = {X, Y }n 2 ### SPA Dataset Construction 3 for each instance {X, Y } in D do i=1, initial model Ï Î¸, revision model (ChatGPT), reward function R(·). 4 5 1. Rollout for initial generation. Generate Ë Y â ¼ Ï Î¸(X) and compute RY , R Ë Y ; 2. Reward filtering. if R Ë Y > η1 or RY < η2 or RY â R Ë Y < η3 then 6 Discard the current instance; 7 3. Response Revision. Analyze the reason for the poor performance of Ë Y , and generate the corresponding revision Ë Y â ¼ LLM( Ë Y , Y ) based on the identified reason. 8 Construct the SPA dataset S = {Xi, Ë Yi, Ë Yi}m 9 ### Alignment Learning 10 for epoch e = 1, ..., E do i=1. 11 for each instance {X, Ë Y , Ë Y } in SPA S do 12 Locate the crucial parts with Levenshtein distance using Equation 1 and assign weights according to Ë r(Ë yt, t) and Ë r(Ë yt, t); 13 Update Ï Î¸ using the fine-grained quality-aware learning objective in Equation 2. 3.3 DISCUSSION In this part, we discuss how the proposed FIGA approach relates to existing fine-tuning approaches, namely SFT and RLHF.
2311.04072#12
2311.04072#14
2311.04072
[ "2309.00267" ]
2311.04072#14
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Relationship with SFT SFT can be viewed as a special case of our FIGA method without revision, when training is performed with the higher-quality instance Y , and each token of Y is considered equally important. Compared to SFT, FIGA has the following two advantages: (1) we only consider the inferior part of the bad case that the initial model does not perform well; (2) we explicitly enforce the model to understand what are good and bad behaviors in the loss function. It inherits the merits of SFT, and further leverages fine-fined quality signals for improving the alignment.
2311.04072#13
2311.04072#15
2311.04072
[ "2309.00267" ]
2311.04072#15
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
5 Preprint. Relationship with RL Our method can be considered as a simplified but efficient version of RL. Using typical PPO method (Schulman et al., 2017) as an example, its objective is to optimize the actor model (i.e., the initial model Ï Î¸) to maximize the expected reward score, formally given as: fielder, X) PPO = (Zan â Ay ) ; 3 » Tous (GelG<t,X) â where AË yt is the advantage function of the Ë yt token returned by the critic model given the reward score R Ë Y . Ï Î¸old is the model before the previous parameter update. Here, we ignore the clipping function and KL penalty for convenience. Considering the FIGA training objective in Equation 2, our weight functions Ë r(·) and Ë r(·) in FIGA can be viewed as a simplified advantage function A(·) in Equation 3 to evaluate the importance of each token. Therefore, FIGA has a similar objective with RL but with a simplified token-wise reward function. We do not use an extra learned critic model and remove the use of previous rollout model, which makes FIGA more efficient. In the later experiment section, we will verify the effectiveness of our method. # 4 EXPERIMENT 4.1 EXPERIMENTAL SETUP 4.1.1 BASELINE METHODS (1) In order to better evaluate FIGA method, we choose several baselines for comparison: SFT (Ouyang et al., 2022): it continues to fine-tune the initial model using pairs of data with sequence-to-sequence loss. (2) PPO (Ouyang et al., 2022): it optimizes the initial model to achieve a higher reward score provided by the reward model through the PPO algorithm. (3) CoH (Liu et al., 2023a): it annotates the dataset by prefixing â A helpful answer: â and â An unhelpful answer: â
2311.04072#14
2311.04072#16
2311.04072
[ "2309.00267" ]
2311.04072#16
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
to the responses of corresponding quality, employs SFT on it and computes loss only for the specially masked response tokens. (4) RRHF (Yuan et al., 2023): it applies SFT on the optimal responses, and further optimizes the ranking loss among responses from multiple sources by encouraging the model to achieve a greater conditional log probability for the response that holds a superior ranking. IMPLEMENTATION DETAILS Training Datasets For our SPA dataset mentioned in Section 3.1, we broadly select the follow- ing datasets as our initial instances pool: HH-RLHF (Bai et al., 2022a), ShareGPT (ShareGPT, 2023), Synthetic Instruct GPT-J Pairwise (Dahoas, 2023), Stanford SHP (Ethayarajh et al., 2022), and OpenOrca (Lian et al., 2023). We employ the Alpaca-7b model Taori et al. (2023) as the rollout model for generating responses Ë Y , and gpt-3.5-turbo to revise and obtain Ë Y . The prompt used for revision can be found in Appendix A.2 As for the filtering process, we utilize OpenAssistant/reward-model-deberta-v3-large-v2 (OpenAssistant, 2023) as the reward model. According to the reward score distribution, we empirically set the threshold values η1 = 1, η2 = 3, η3 = 3.5, respectively. The statistics of reward scores and edit operations for the SPA dataset are presented in Table 1, while the distribution of the reward scores is illustrated in Figure 2. We can find that the initial response Ë Y has a large distribution gap with the reference distribution Y , which may cause the model hard to learn from the golden target. In contrast, our revised response is closer to the original distribution but with higher quality, making the rollout model easier to learn. The final SPA dataset we obtained consists of 17,333 instances. Model Details (1) For SFT, we set learning rate to 1e-5 and batch size to 128.
2311.04072#15
2311.04072#17
2311.04072
[ "2309.00267" ]
2311.04072#17
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
We conduct 5 epochs of training and choose the one with the highest reward score on the test set as the ultimate SFT model. (2) For PPO, we apply the OpenLLaMA2 (OpenLLMAI, 2023) library, and adhere to the parameter configurations within it. We use the Alpaca-7b to initialize the critic model, and use the same reward model mentioned in the construction process of the SPA dataset. Given the modest gains observed in previous experiments when employing PPO-ptx on models around 6B parameters (Ouyang et al., 2022), we refrain from introducing pre-training mix as an additional
2311.04072#16
2311.04072#18
2311.04072
[ "2309.00267" ]
2311.04072#18
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
6 Preprint. 04 0.3 Density ° = Reward Score Data R(·) #ops Ë Y -1.07 â Y 3.94 75.69 Ë Y 1.78 39.38 Table 1: The average reward score of re- sponse data and the average number #ops of editing operations to them from the Ë Y . Figure 2: Reward score distributions. training objective. (3) For CoH, we use the data construction method of the original paper on our SPA dataset. Taking into account our smaller dataset size compared to the original paper, we set FCM (the ratio of random mask token to prevent overfitting) to 0. Additionally, to ensure a fair comparison with PPO, we disable the pre-training dataset regularization. (4) For RRHF, we follow the recommended hyper-parameters from the original papers on our SPA dataset. (5) For FIGA, we set the parameters α = 1, β = 0.5, γ = 0 respectively. Besides, considering the instability when training on negative samples in practice (Bhardwaj & Poria, 2023; Liu et al., 2023a), we further select the bad tokens returned by Levenshtein distance in equation 1 by retaining only those with a negative log-likelihood less than 0.6. 4.1.3 EVALUATION TASKS We evaluate the performances of different methods using reward scores on the test set and a com- prehensive benchmark. For the reward score evaluation, our goal is to assess how well the modelâ s response aligns with human preferences. Specifically, to ensure that the reward scores can accu- rately represent human preferences, we select data from the reward modelâ s training data that was not included in our training data as the test set, comprising a total of 3,608 instances.
2311.04072#17
2311.04072#19
2311.04072
[ "2309.00267" ]
2311.04072#19
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
In addition, we employ a diverse set of evaluation benchmarks to evaluate the abilities, including knowledge uti- lization (MMLU (Hendrycks et al., 2020)), human alignment (WinoGender (Rudinger et al., 2018), CrowS-Pairs (Nangia et al., 2020), and TruthfulQA (Lin et al., 2021)), and open-ended generation (Vicuna (Chiang et al., 2023) and WizardLM (Xu et al., 2023)). 4.2 EXPERIMENTAL RESULTS Table 2: Performance comparison of FIGA and other widely-used alignment methods. Bold and underlined fonts indicate the best and the second best score. â denotes lower is better. Methods Reward MMLU TruthfulQA CrowS-Pairsâ WinoGender Vicuna WizardLM Average1 Alpaca-7b 3.96 39.2 33.7 61.1 55.6 7.9 7.0 31.7 SFT PPO (SPA) PPO (85K)2 CoH RRHF 4.56 4.06 4.54 4.24 4.23 39.3 39.6 39.2 39.6 37.8 22.0 30.1 36.7 28.2 32.9 61.5 61.3 60.6 59.6 59.9 55.3 56.2 56.2 52.1 60.0 8.4 7.6 7.9 8.3 7.9 8.3 7.4 7.2 8.1 7.9 31.1 31.5 33.1 32.7 31.3 FIGA 4.62 40.8 42.0 61.2 59.6 8.6 8.3 34.9 As observed in Table 2, FIGA surpasses all baselines, achieving the highest reward scores across benchmarks and showing superior performance, even outperforming PPO using 4 times training
2311.04072#18
2311.04072#20
2311.04072
[ "2309.00267" ]
2311.04072#20
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
1To ensure consistency in the magnitude among different benchmarks when calculating the average score, we multiply the reward score by 10, and the score for CrowS-Pairs is calculated as 100 minus the original score. 2Given that PPO does not utilize the labels in the dataset and requires a large amount of data to learn through trial and error, we integrate additional open-source data with the SPA dataset to leverage the strengths of PPO fully. We obtain a total of 84,908 entries, and the PPO trained with this dataset is referred to as PPO (85K).
2311.04072#19
2311.04072#21
2311.04072
[ "2309.00267" ]
2311.04072#21
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
7 Preprint. data. This implies responses of FIGA are more in sync with human preferences, making it an exem- plary alignment model. FIGA also scores the highest on the MMLU benchmark, which demonstrates capable task solving abilities of our method, not just limited to alignment. In summary, FIGAâ s su- perior performance on benchmarks confirms the efficacy of our designing. Moreover, we compare the quality of responses from FIGA and other baselines on the Vicuna and WizardLM benchmarks, specifically evaluating the relative merits of each response. The results of this comparative analysis are illustrated in Figure 3. Mmm FIGAWins * Tie FIGA Loses @mm FIGAWins M Tie FIGA Loses Alpaca 7B 9% Alpaca 7B 12% PPO (SPA) 9% PPO (SPA) 14% PPO (85K) 10% PPO (85K) 13% RRHF 8% RRHF 18% CoH 22% CoH 22% SFT 20% SFT 25% ChatGPT Ek 24% ChatGPT EU 33% 0% 25% 50% 75% 100% 0% 25% 50% 75% 100% Figure 3: Win rate of FIGA vs other baselines on Vicuna (left) and WizardLM (right). 4.3 FURTHER ANALYSIS 4.3.1 PERFORMANCE COMPARISON W.R.T. SUBPAR ALIGNMENT DATASET As mentioned in Section 3.1, the steps involved in constructing the SPA dataset includes: (1) collect existing datasets, encompassing the preference datasets and the typical SFT datasets, (2) filter the data based on reward scores, (3) revise the initial responses using LLM. To examine the effectiveness of each of them, we develop the following dataset variants on which to conduct our FIGA: Preference: we only use preference data to construct initial instances pool D, with 3,971 samples. â ¢ Instruction: we construct the initial instances pool D with typical SFT data that the reward model had not encountered during its training, also totaling 3,971 instances. w/o reward filtering: this variant excludes data filtering based on reward scores. â
2311.04072#20
2311.04072#22
2311.04072
[ "2309.00267" ]
2311.04072#22
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
¢ w/o revision: we do not utilize LLM to revise, but use the reference responses directly. Table 3: Performance comparison of different instances pools. Methods Reward MMLU TruthfulQA CrowS-Pairsâ WinoGender Vicuna WizardLM Average Preference Instruction 4.42 4.35 37.4 40.7 22.6 31.1 61.5 59.7 57.1 57.5 7.4 8.5 6.6 8.2 30.5 32.8 Table 4: Performance comparison of different data annotations. Methods Reward MMLU TruthfulQA CrowS-Pairsâ WinoGender Vicuna WizardLM Average FIGA 4.62 40.8 42.0 61.2 59.6 8.6 8.3 w/o reward filtering w/o revision 4.41 4.39 38.0 37.5 28.8 26.7 61.1 62.1 58.5 55.6 8.3 8.2 8.0 7.7 34.9 32.1 31.1 From the results in Table 3 and Table 4 we can see that: (1) FIGA performs well even on typical SFT data that reward model has not seen during its training, thus FIGA is not limited on the preference data where the reward model is trained on. (2) Filtering based on reward scores is crucial, resulting in +0.21 reward score increase, and +2.8 benchmark increase. This underscores the significance of training on queries where the modelâ s original performance is subpar. (3) Revising to reduce the distribution shift is important, since training on revisions yields +3.8 point on average.
2311.04072#21
2311.04072#23
2311.04072
[ "2309.00267" ]
2311.04072#23
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
8 Preprint. 4.3.2 PERFORMANCE COMPARISON W.R.T. WEIGHTING FUNCTIONS As mentioned in Section 3.2, Ë r(·) and Ë r(·) in Equation 1 first make comparison between Ë Y and Ë Y to obtain tokens that are added, deleted or substituted, then assign different weights to different types of tokens. Here, we explore other weighting functions as how they acquire the tokens to be encouraged or discouraged, and study the influence of different hyper-parameters α, β, and γ.
2311.04072#22
2311.04072#24
2311.04072
[ "2309.00267" ]
2311.04072#24
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
â ¢ Variants of Ë r(·): as for Ë r(·), we set β to 0 and design following three variants to compare other possible ways to return the tokens to be encouraged. â Bag of words: it sets Ë r( Ë yt, t) = 1 only when Ë yt /â Ë Y ; the rest are set to 0. â ChatGPT (weighted): motivated by the work (Lee et al., 2023), it utilizes ChatGPT to evaluate the contribution of words in improving sentence quality. The prompt can be found in A.2. The returned scores are adjusted to be between 0.7 and 1.3 and are set as Ë r( Ë yt, t). For words that ChatGPT doesnâ t address, Ë r( Ë yt, t) = 0.3. â ChatGPT (binary): it sets Ë r( Ë yt, t) to 1 only when Ë yt is returned by ChatGPT with a non-zero score, while the rest are set to 0. â ¢ Variants of Ë r(·): as for the tokens to be discouraged returned by Ë r(·), we further filter bad tokens returned by Levenshtein distance and retain only those with a negative log-likelihood below 0.6. To assess its effectiveness, we design the variants including:
2311.04072#23
2311.04072#25
2311.04072
[ "2309.00267" ]
2311.04072#25
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
â â log p â ¥ 0.6: it retains only the bad tokens returned by Levenshtein distance with a negative log-likelihood â ¥ 0.6. â w/o further selection: it directly penalizes all the bad tokens returned by Levenshtein dis- tance. â ¢ Variants of hyper-parameters: to explore the influence of α, β, γ in Equation 1, we design: â β = 0: it sets β to 0 with α = 1 and γ = 0. â γ ̸= 0: it sets γ to 0.3 with α = 1 and β = 0.5. â R(·): it assigns R Ë Y , R Ë Y , 0 to α, β, γ respectively, where R Ë Y and R Ë Y are standardized through the min-max method. Table 5: Performance comparison of different weighting functions. Explorations Methods Reward MMLU TruthfulQA CrowS-Pairsâ WinoGender Vicuna WizardLM Average Ours FIGA 4.62 40.8 42.0 61.2 59.6 8.6 8.3 Encouraged Bag of words ChatGPT (weighted) ChatGPT (binary) 4.52 4.37 4.32 40.4 39.8 39.0 29.3 21.7 24.4 60.0 60.0 59.9 57.6 57.9 59.0 8.1 8.4 7.8 8.2 8.1 7.6 Discouraged â log p â
2311.04072#24
2311.04072#26
2311.04072
[ "2309.00267" ]
2311.04072#26
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
¥ 0.6 w/o further selection 3.80 3.01 30.2 28.1 27.2 24 56.2 58.5 50.4 57.4 8.1 8 7.4 7.7 Hyper-parameter β = 0 γ ̸= 0 R(·) 4.61 4.54 4.54 41.0 41.2 39.7 37.0 32.2 37.8 59.6 60.1 62.9 58.1 56.0 57.1 8.5 8.4 8.2 8.3 8.2 8.2 34.9 32.7 31.4 31.6 29.3 28.1 34.2 33.0 33.4 The results in Table 5 indicate that: (1) Levenshtein distance excels in extracting critical tokens, with over +1.5 average score compared with traditional bag of words method, and over +0.6 above ChatGPT related method. (2) It is necessary to further select the bad tokens returned by Levenshtein distance, as this leads to an average improvement of +6.8. (3) Remaining only the poor-quality to- kens with a negative log-likelihood â ¤ 0.6 is a sensible choice, which aims to penalize tokens that the model is relatively confident in generating, even though their actual quality is subpar. (4) Punishing the undesirable actions is beneficial, as it results in an average increase of +0.7 in comparison to simply encouraging the good actions. (5) Focusing only on good and bad tokens is sufficient, since setting γ to a non-zero value leads to a decrease of 1.9 on average. (6) The inferior performance of setting the weights as reward scores can be attributed to intrinsic inaccuracies of the reward scores, especially in out-of-distribution scenarios (Bai et al., 2022b). # 5 CONCLUSION In this paper, we have presented FIGA, a new approach that aligns language models with human preferences, by leveraging fine-grained quality signals to enhance the alignment quality during fine- tuning.
2311.04072#25
2311.04072#27
2311.04072
[ "2309.00267" ]
2311.04072#27
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
In our approach, we have firstly curated a high-quality alignment dataset that pairs initial 9 Preprint. responses and revised responses on queries that a LLM cannot perform well. Furthermore, we have designed a new optimization objective that that can leverage the fine-grained quality signals by contrasting initial with revised responses. Our approach inherits the merits of SFT (e.g., efficient and easy-to-implement), and meanwhile can better understand and learn what are correct behaviors for alignment. FIGA shows superior performance on extensive tasks, with +3.2 points and +1.8 points against the initial supervised-finetuned model and the strong PPO method. Currently, we mainly utilize the edit operations to identify the differences between good and bad responses, while this approach is flexible to extend to more contrast methods.
2311.04072#26
2311.04072#28
2311.04072
[ "2309.00267" ]
2311.04072#28
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
# REFERENCES Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al.
2311.04072#27
2311.04072#29
2311.04072
[ "2309.00267" ]
2311.04072#29
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Constitutional ai: Harm- lessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022b. Rishabh Bhardwaj and Soujanya Poria. Red-teaming large language models using chain of utter- ances for safety-alignment. arXiv preprint arXiv:2308.09662, 2023. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â
2311.04072#28
2311.04072#30
2311.04072
[ "2309.00267" ]
2311.04072#30
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
1901, 2020. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna. lmsys. org (accessed 14 April 2023), 2023.
2311.04072#29
2311.04072#31
2311.04072
[ "2309.00267" ]
2311.04072#31
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language mod- els. arXiv preprint arXiv:2210.11416, 2022. Dahoas. Dahoas/synthetic-instruct-gptj-pairwise. https://huggingface.co/datasets/ Dahoas/synthetic-instruct-gptj-pairwise, 2023. Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang.
2311.04072#30
2311.04072#32
2311.04072
[ "2309.00267" ]
2311.04072#32
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767, 2023. Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. Understanding dataset difficulty with V-usable information. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, 2022. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and arXiv preprint Jacob Steinhardt. Measuring massive multitask language understanding. arXiv:2009.03300, 2020. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199â
2311.04072#31
2311.04072#33
2311.04072
[ "2309.00267" ]
2311.04072#33
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
22213, 2022. Ranjay Krishna, Donsuk Lee, Li Fei-Fei, and Michael S Bernstein. Socially situated artificial in- telligence enables learning from human interaction. Proceedings of the National Academy of Sciences, 2022. 10 Preprint. Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. Rlaif:
2311.04072#32
2311.04072#34
2311.04072
[ "2309.00267" ]
2311.04072#34
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267, 2023. Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and â Tekniumâ â . Openorca: An open dataset of gpt augmented flan reasoning traces. https://https:// huggingface.co/Open-Orca/OpenOrca, 2023. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe.
2311.04072#33
2311.04072#35
2311.04072
[ "2309.00267" ]
2311.04072#35
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Letâ s verify step by step. arXiv preprint arXiv:2305.20050, 2023. Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021. Hao Liu, Carmelo Sferrazza, and Pieter Abbeel. Chain of hindsight aligns language models with feedback. arXiv preprint arXiv:2302.02676, 2023a. Ruibo Liu, Chenyan Jia, Ge Zhang, Ziyu Zhuang, Tony Liu, and Soroush Vosoughi.
2311.04072#34
2311.04072#36
2311.04072
[ "2309.00267" ]
2311.04072#36
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Second thoughts are best: Learning to re-align with human values from text edits. Advances in Neural Information Processing Systems, 35:181â 196, 2022. Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960, 2023b. Yixin Liu, Alexander R Fabbri, Pengfei Liu, Dragomir Radev, and Arman Cohan. On learning to summarize with large language models as references. arXiv preprint arXiv:2305.14239, 2023c. Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Am- manabrolu, and Yejin Choi.
2311.04072#35
2311.04072#37
2311.04072
[ "2309.00267" ]
2311.04072#37
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Quark: Controllable text generation with reinforced unlearning. Advances in neural information processing systems, 35:27591â 27609, 2022. Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707, 2023.
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R Bowman. Crows-pairs: A challenge dataset for measuring social biases in masked language models. arXiv preprint arXiv:2010.00133, 2020.
OpenAssistant. Openassistant/reward-model-deberta-v3-large-v2. https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2, 2023.
OpenLLMAI. Openllama2. https://github.com/OpenLLMAI/OpenLLaMA2, 2023.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023.
Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, and Yejin Choi. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization. arXiv preprint arXiv:2210.01241, 2022.
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. Gender bias in coreference resolution. arXiv preprint arXiv:1804.09301, 2018.
John Schulman. Reinforcement learning from human feedback: Progress and challenges, 2023. URL https://www.youtube.com/watch?v=hhiLw5Q_UFg.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
ShareGPT. Sharegpt vicuna unfiltered. https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered, 2023.
Ziang Song, Tianle Cai, Jason D Lee, and Weijie J Su. Reward collapse in aligning large language models. arXiv preprint arXiv:2305.17608, 2023.
Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047, 2023.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2021.
Sam Wiseman and Alexander M Rush. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960, 2016.
Yotam Wolf, Noam Wies, Yoav Levine, and Amnon Shashua. Fundamental limitations of alignment in large language models. arXiv preprint arXiv:2304.11082, 2023.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302, 2023.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, and Joseph E Gonzalez. The wisdom of hindsight makes language models better instruction followers. arXiv preprint arXiv:2302.05206, 2023.
Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. Bridging the gap between training and inference for neural machine translation. arXiv preprint arXiv:1906.02448, 2019.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023a.
Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J Liu. Slic-hf: Sequence likelihood calibration with human feedback. arXiv preprint arXiv:2305.10425, 2023b.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al.