doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2311.04254 | 27 |
Table 3: Performance comparison on Game of 24.
| Model | GPT-3.5 Acc. [%] | GPT-3.5 LLM invoked | GPT-3.5 fθ invoked | GPT-4 Acc. [%] | GPT-4 LLM invoked | GPT-4 fθ invoked |
|---|---|---|---|---|---|---|
| IO | 6.57 | 1.00 | - | 10.22 | 1.00 | - |
| CoT | 2.19 | 1.00 | - | 4.38 | 1.00 | - |
| CoT-SC (n=10) | 2.19 | 10.00 | - | 4.38 | 10.00 | - |
| ToT (b=1) | 5.84 | 22.11 | - | 34.31 | 23.50 | - |
| ToT (b=3) | 10.22 | 43.96 | - | 60.58 | 39.83 | - |
| GoT (k=1) | 2.92 | 7.00 | - | 10.95 | 7.00 | - |
| XoT (w/o revise) | 61.31 | 1.00 | 68.73 | 63.50 | 1.00 | 68.69 |
| XoT (w/ revise) | 79.56 | 1.39 | 92.15 | 74.45 | 1.38 | 88.20 |
from 1 to 4) for creating the equations. Actions involve selecting two numbers and an operator to form an equation, and the reward is set to 1 if the final equation is valid, uses each of the input numbers exactly once, and evaluates to 24; otherwise it is set to -1 (a reference checker sketch follows this record). Performance is measured by calculating the success rate across the 137 test games. | 2311.04254#27 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
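The Game of 24 reward rule quoted in the chunk above (+1 only for a valid equation that uses each input number exactly once and evaluates to 24, -1 otherwise) is easy to pin down in code. The sketch below is a hypothetical reference checker, not the paper's implementation; `game24_reward` and its signature are illustrative.

```python
import ast
from collections import Counter

def game24_reward(equation: str, inputs: list) -> int:
    """Hypothetical checker for the reward rule described above: +1 if
    `equation` is plain arithmetic that uses each input number exactly
    once and evaluates to 24, otherwise -1."""
    try:
        tree = ast.parse(equation, mode="eval")
    except SyntaxError:
        return -1
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub)
    # Reject anything other than arithmetic over number literals.
    if not all(isinstance(node, allowed) for node in ast.walk(tree)):
        return -1
    used = [node.value for node in ast.walk(tree) if isinstance(node, ast.Constant)]
    if Counter(used) != Counter(inputs):
        return -1  # every input number must appear exactly once
    try:
        value = eval(compile(tree, "<equation>", "eval"))
    except ZeroDivisionError:
        return -1
    return 1 if abs(value - 24) < 1e-6 else -1

print(game24_reward("(10 - 4) * (13 - 9)", [4, 9, 10, 13]))  # 1
print(game24_reward("4 * 9 - 10 - 13", [4, 9, 10, 13]))      # -1 (equals 13)
```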
2311.04072 | 28 | 4.1.3 EVALUATION TASKS
We evaluate the performances of different methods using reward scores on the test set and a comprehensive benchmark. For the reward score evaluation, our goal is to assess how well the model's response aligns with human preferences. Specifically, to ensure that the reward scores can accurately represent human preferences, we select data from the reward model's training data that was not included in our training data as the test set, comprising a total of 3,608 instances. In addition, we employ a diverse set of evaluation benchmarks to evaluate model abilities, including knowledge utilization (MMLU (Hendrycks et al., 2020)), human alignment (WinoGender (Rudinger et al., 2018), CrowS-Pairs (Nangia et al., 2020), and TruthfulQA (Lin et al., 2021)), and open-ended generation (Vicuna (Chiang et al., 2023) and WizardLM (Xu et al., 2023)).
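The reward-score evaluation described above reduces to scoring the model's response on each held-out query and averaging. A minimal sketch, assuming hypothetical `model_generate` and `reward_model` callables rather than the paper's actual interfaces:

```python
def mean_reward_score(model_generate, reward_model, test_queries):
    """Generate a response for each held-out test query (3,608 instances
    in the paper) and average the reward model's scores. All three
    arguments are hypothetical stand-ins, not the paper's API."""
    scores = [reward_model(q, model_generate(q)) for q in test_queries]
    return sum(scores) / len(scores)
```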
4.2 EXPERIMENTAL RESULTS
Table 2: Performance comparison of FIGA and other widely-used alignment methods. Bold and underlined fonts indicate the best and the second best score. ↓ denotes lower is better. | 2311.04072#28 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 28 | 4.1.2 BASELINES & XOT SETUP
The IO prompt is supported by five in-context examples. In the case of CoT, we augment each input-output pair by including three intermediate equations. As for ToT, we solicit one-step thought candidates from the LLM at each step, subsequently instructing the LLM to categorize each thought candidate for intermediate selection. For experimental comparison, we conduct experiments on both the top-1 candidate (with b=1) and the top-3 candidates (with b=3) being retained, where b indicates the branches retained for exploration at each step. For GoT, we employ LLM to generate one-step thought candidates in the same manner as ToT, then we direct the LLM to select the top-1 thought from all candidates for merging the thoughts. We also examine a CoT-SC baseline, which derives the majority output from 10 CoT samples. For XOT, we perform 200 simulations for each action taken, and this count is increased to 500 during the thought revision process. | 2311.04254#28 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 29 | Table 2: Performance comparison of FIGA and other widely-used alignment methods. Bold and underlined fonts indicate the best and the second best score. ↓ denotes lower is better.
| Methods | Reward | MMLU | TruthfulQA | CrowS-Pairs↓ | WinoGender | Vicuna | WizardLM | Average¹ |
|---|---|---|---|---|---|---|---|---|
| Alpaca-7b | 3.96 | 39.2 | 33.7 | 61.1 | 55.6 | 7.9 | 7.0 | 31.7 |
| SFT | 4.56 | 39.3 | 22.0 | 61.5 | 55.3 | 8.4 | 8.3 | 31.1 |
| PPO (SPA) | 4.06 | 39.6 | 30.1 | 61.3 | 56.2 | 7.6 | 7.4 | 31.5 |
| PPO (85K)² | 4.54 | 39.2 | 36.7 | 60.6 | 56.2 | 7.9 | 7.2 | 33.1 |
| CoH | 4.24 | 39.6 | 28.2 | 59.6 | 52.1 | 8.3 | 8.1 | 32.7 |
| RRHF | 4.23 | 37.8 | 32.9 | 59.9 | 60.0 | 7.9 | 7.9 | 31.3 |
| FIGA | 4.62 | 40.8 | 42.0 | 61.2 | 59.6 | 8.6 | 8.3 | 34.9 |
As observed in Table 2, FIGA surpasses all baselines, achieving the highest reward scores across benchmarks and showing superior performance, even outperforming PPO using 4 times training | 2311.04072#29 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 29 | In the multi-solution scenario, the IO, CoT, and CoT-SC prompts each include 5 examples, with each problem having 1 to 3 different solutions. For ToT, the top-3 candidates (with b=3) at the final step are considered as different solutions. Rather than keeping only the top-1 thought, GoT is instructed to select between 1 to 3 thoughts from all candidates at each step to generate a wider range of solutions. As for XOT, after performing simulations on MCTS, we sample 500 thought trajectories for exploration and remove duplicates. The top-3 thoughts with the highest counts are preserved.
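The sampling step described above can be sketched as follows; `sample_trajectory` is a hypothetical callable that draws one thought trajectory from the finished MCTS, and the deduplication plus top-3 selection falls out of a counter:

```python
from collections import Counter

def top_k_thoughts(sample_trajectory, n_samples=500, k=3):
    """Sketch of the multi-solution selection step: sample thought
    trajectories, deduplicate them, and keep the k most frequent ones.
    `sample_trajectory` is a hypothetical callable returning one
    trajectory as a sequence of thought steps."""
    counts = Counter(tuple(sample_trajectory()) for _ in range(n_samples))
    return [list(traj) for traj, _ in counts.most_common(k)]
```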
4.1.3 RESULTS | 2311.04254#29 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 30 | As observed in Table 2, FIGA surpasses all baselines, achieving the highest reward scores across benchmarks and showing superior performance, even outperforming PPO using 4 times training
¹ To ensure consistency in the magnitude among different benchmarks when calculating the average score, we multiply the reward score by 10, and the score for CrowS-Pairs is calculated as 100 minus the original score.
² Given that PPO does not utilize the labels in the dataset and requires a large amount of data to learn through trial and error, we integrate additional open-source data with the SPA dataset to leverage the strengths of PPO fully. We obtain a total of 84,908 entries, and the PPO trained with this dataset is referred to as PPO (85K).
data. This implies responses of FIGA are more in sync with human preferences, making it an exemplary alignment model. FIGA also scores the highest on the MMLU benchmark, which demonstrates the capable task-solving abilities of our method, not just limited to alignment. In summary, FIGA's superior performance on these benchmarks confirms the efficacy of our design.
Moreover, we compare the quality of responses from FIGA and other baselines on the Vicuna and WizardLM benchmarks, specifically evaluating the relative merits of each response. The results of this comparative analysis are illustrated in Figure 3. | 2311.04072#30 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 30 | 4.1.3 RESULTS
Table 3 displays the overall performance of all methods on this task. Notably, XOT consistently outperforms other baselines on both GPT-3.5 and GPT-4, achieving an accuracy of 61.31% and 63.50% respectively, without revision. However, after the revision process, XOT's accuracy substantially improves to 79.56% and 74.45% for GPT-3.5 and GPT-4 respectively. This underscores the impressive performance of XOT and demonstrates that the revision process significantly enhances performance, with only a limited increase in the utilization of the LLM and fθ. Interestingly, the revision process in XOT mitigates the performance gap attributable to modeling ability in this task, as we observe that XOT with GPT-3.5 achieves higher accuracy after revision than GPT-4. | 2311.04254#30 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 31 | [Figure 3 (two bar charts, Vicuna left and WizardLM right): legend FIGA Wins / Tie / FIGA Loses; baselines Alpaca-7B, PPO (SPA), PPO (85K), RRHF, CoH, SFT, and ChatGPT; x-axis from 0% to 100%.]
Figure 3: Win rate of FIGA vs other baselines on Vicuna (left) and WizardLM (right).
4.3 FURTHER ANALYSIS
4.3.1 PERFORMANCE COMPARISON W.R.T. SUBPAR ALIGNMENT DATASET
As mentioned in Section 3.1, the steps involved in constructing the SPA dataset include: (1) collecting existing datasets, encompassing both preference datasets and typical SFT datasets, (2) filtering the data based on reward scores, and (3) revising the initial responses using an LLM. To examine the effectiveness of each of them, we develop the following dataset variants on which to conduct our FIGA: | 2311.04072#31 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 31 | On the other hand, the best-performing baseline, ToT (b=3) on GPT-4, attains an accuracy of 60.58%. However, it demands a substantial number of LLM invocations (39.83), which results in inefficiency. In contrast, XOT exhibits a significant advantage in the average number of LLM invocations: it requires only a single LLM inference without revision and fewer than 1.4 calls with revision. Although XOT requires some inference calls to fθ, that model is significantly less complex than the LLM, making XOT a much more efficient approach.
Table 4 presents the performance of GPT-3.5 and GPT-4 models across different methods in the multi-solution scenario. Overall, XOT remains the best-performing approach in terms of accuracy and MultiAcc, significantly outperforming other baselines. Its GPT-4 version can even achieve over
Table 4: Performance comparison on Game of 24 in the multi-solution scenario. | 2311.04254#31 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 32 | • Preference: we only use preference data to construct the initial instances pool D, with 3,971 samples.
• Instruction: we construct the initial instances pool D with typical SFT data that the reward model had not encountered during its training, also totaling 3,971 instances.
• w/o reward filtering: this variant excludes data filtering based on reward scores.
• w/o revision: we do not utilize the LLM to revise, but use the reference responses directly.
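Taken together, the filtering and revision steps probed by these variants amount to a simple pipeline. A minimal sketch, assuming hypothetical `reward_model` and `revise_with_llm` callables and an illustrative `threshold`:

```python
def build_spa(instances, reward_model, revise_with_llm, threshold=1.0):
    """Sketch of the SPA construction steps exercised by the variants
    above: keep instances whose initial response scores poorly under
    the reward model, then have an LLM revise them. `reward_model`,
    `revise_with_llm`, and `threshold` are hypothetical stand-ins."""
    spa = []
    for query, initial in instances:
        if reward_model(query, initial) < threshold:   # reward filtering
            revised = revise_with_llm(query, initial)  # LLM rewrites the response
            spa.append({"query": query, "initial": initial, "revised": revised})
    return spa
```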
Table 3: Performance comparison of different instances pools.
| Methods | Reward | MMLU | TruthfulQA | CrowS-Pairs↓ | WinoGender | Vicuna | WizardLM | Average |
|---|---|---|---|---|---|---|---|---|
| Preference | 4.42 | 37.4 | 22.6 | 61.5 | 57.1 | 7.4 | 6.6 | 30.5 |
| Instruction | 4.35 | 40.7 | 31.1 | 59.7 | 57.5 | 8.5 | 8.2 | 32.8 |
Table 4: Performance comparison of different data annotations.
| Methods | Reward | MMLU | TruthfulQA | CrowS-Pairs↓ | WinoGender | Vicuna | WizardLM | Average |
|---|---|---|---|---|---|---|---|---|
| FIGA | 4.62 | 40.8 | 42.0 | 61.2 | 59.6 | 8.6 | 8.3 | 34.9 |
| w/o reward filtering | 4.41 | 38.0 | 28.8 | 61.1 | 58.5 | 8.3 | 8.0 | 32.1 |
| w/o revision | 4.39 | 37.5 | 26.7 | 62.1 | 55.6 | 8.2 | 7.7 | 31.1 |
| 2311.04072#32 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 32 |
Table 4: Performance comparison on Game of 24 in the multi-solution scenario.
| Model | GPT-3.5 Acc. | GPT-3.5 MultiAcc | GPT-3.5 #Sol | GPT-3.5 LLM invoked | GPT-3.5 fθ invoked | GPT-4 Acc. | GPT-4 MultiAcc | GPT-4 #Sol | GPT-4 LLM invoked | GPT-4 fθ invoked |
|---|---|---|---|---|---|---|---|---|---|---|
| IO | 14.60 | 4.87 | 2.88 | 1.00 | - | 21.17 | 8.27 | 2.99 | 1.00 | - |
| CoT | 3.65 | 1.22 | 2.77 | 1.00 | - | 20.44 | 7.79 | 2.94 | 1.00 | - |
| CoT-SC (n=10) | 5.11 | 1.70 | 2.76 | 10.00 | - | 18.98 | 8.03 | 2.99 | 10.00 | - |
| ToT (b=3) | 10.22 | 3.41 | 2.99 | 43.96 | - | 60.58 | 39.90 | 2.78 | 39.83 | - |
| GoT (k=3) | 8.76 | 8.03 | 1.93 | 7.00 | - | 13.14 | 10.46 | 1.39 | 7.00 | - |
| XoT (w/o revise) | 72.99 | 39.90 | 2.89 | 1.00 | 95.66 | 72.99 | 60.54 | 2.55 | 1.00 | 95.66 |
| XoT (w/ revise) | 85.40 | 62.90 | 2.29 | 3.51 | 116.34 | 90.51 | 76.25 | 2.36 | 2.31 | 109.64 |
| 2311.04254#32 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 33 | From the results in Table 3 and Table 4 we can see that: (1) FIGA performs well even on typical SFT data that the reward model has not seen during its training; thus FIGA is not limited to the preference data on which the reward model was trained. (2) Filtering based on reward scores is crucial, resulting in a +0.21 reward score increase and a +2.8-point benchmark increase. This underscores the significance of training on queries where the model's original performance is subpar. (3) Revising to reduce the distribution shift is important, since training on revisions yields +3.8 points on average.
4.3.2 PERFORMANCE COMPARISON W.R.T. WEIGHTING FUNCTIONS
As mentioned in Section 3.2, r̃(·) and r̂(·) in Equation 1 first compare Ỹ and Ŷ to obtain the tokens that are added, deleted, or substituted, and then assign different weights to the different types of tokens. Here, we explore other weighting functions in terms of how they acquire the tokens to be encouraged or discouraged, and study the influence of the different hyper-parameters α, β, and γ.
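A minimal sketch of this alignment-then-weighting idea follows, using difflib's SequenceMatcher as a stand-in for the paper's Levenshtein comparison; the function name and exact weight placement are illustrative, with the defaults α = 1, β = 0.5, γ = 0 taken from the variants below:

```python
from difflib import SequenceMatcher

def token_weights(y_hat, y_tilde, alpha=1.0, beta=0.5, gamma=0.0):
    """Align the initial response y_hat with the revised response
    y_tilde, weight revised tokens that were added or substituted by
    alpha, initial tokens that were deleted or substituted by beta,
    and the remaining revised tokens by gamma. A sketch, not FIGA's
    actual implementation."""
    w_tilde = [gamma] * len(y_tilde)  # weights over encouraged tokens in y_tilde
    w_hat = [0.0] * len(y_hat)        # weights over discouraged tokens in y_hat
    for op, i1, i2, j1, j2 in SequenceMatcher(None, y_hat, y_tilde).get_opcodes():
        if op in ("insert", "replace"):
            for j in range(j1, j2):
                w_tilde[j] = alpha
        if op in ("delete", "replace"):
            for i in range(i1, i2):
                w_hat[i] = beta
    return w_tilde, w_hat

w_good, w_bad = token_weights("the answer is wrong".split(),
                              "the answer is correct".split())
print(w_good, w_bad)  # [0.0, 0.0, 0.0, 1.0] [0.0, 0.0, 0.0, 0.5]
```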
• Variants of r̃(·): as for r̃(·), we set β to 0 and design the following three variants to compare other possible ways to return the tokens to be encouraged. | 2311.04072#33 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 33 | 90% accuracy. Although XOT does not generate the most answers among all methods, it generates more accurate answers, as its MultiAcc significantly outperforms the other approaches. Notably, generating multiple solutions does not significantly increase XOT's complexity, as it only requires 2.31 LLM calls with GPT-4 and around 100 calls to the smaller fθ, so it remains efficient. Overall, the remarkable performance of XOT in the multi-solution scenario demonstrates its ability to generate complex thoughts, making it a flexible approach.
4.2 8-PUZZLE
The 8-Puzzle is a classic sliding puzzle game that consists of a 3 × 3 grid with eight numbered tiles and one empty space denoted as "-". Its objective is to rearrange the tiles from a given initial configuration into a target configuration. The maximum number of steps necessary for the optimal solution of the 8-Puzzle is 31. This problem falls within the category of NP-complete problems (Ratner & Warmuth, 1986) and may have multiple solutions.
4.2.1 TASK SETUP | 2311.04254#33 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
– Bag of words: it sets r̃(ỹ_t, t) = 1 only when ỹ_t ∉ Ŷ; the rest are set to 0.
– ChatGPT (weighted): motivated by the work (Lee et al., 2023), it utilizes ChatGPT to evaluate the contribution of words in improving sentence quality. The prompt can be found in A.2. The returned scores are adjusted to be between 0.7 and 1.3 and are set as r̃(ỹ_t, t). For words that ChatGPT doesn't address, r̃(ỹ_t, t) = 0.3.
– ChatGPT (binary): it sets r̃(ỹ_t, t) to 1 only when ỹ_t is returned by ChatGPT with a non-zero score, while the rest are set to 0.
• Variants of r̂(·): as for the tokens to be discouraged returned by r̂(·), we further filter the bad tokens returned by Levenshtein distance and retain only those with a negative log-likelihood below 0.6. To assess its effectiveness, we design the following variants:
– −log p ≥ 0.6: it retains only the bad tokens returned by Levenshtein distance with a negative log-likelihood ≥ 0.6. | 2311.04072#34 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 34 | 4.2.1 TASK SETUP
We randomly generated 419 solvable 8-Puzzle problems, with 300 instances allocated for training and 119 instances for testing. All generated problems are solvable within 9 steps. The action space encompasses four directions: [Up, Down, Left, Right]. Note that the legal action space for each problem state may vary due to the dynamic position of the empty space. As shown in Table 1, the thoughts refer to the step-by-step moves and the puzzle state after each move.
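The state-dependent legal action set mentioned above is simple to make concrete. A minimal sketch (the direction semantics, naming the neighboring tile that slides into the gap, are one plausible reading, not the paper's exact encoding):

```python
def legal_actions(state):
    """`state` is a flat 9-tuple over a 3x3 grid with "-" marking the
    empty space; which of [Up, Down, Left, Right] is legal depends on
    where "-" sits."""
    gap = state.index("-")
    row, col = divmod(gap, 3)
    moves = {}
    if row > 0: moves["Up"] = gap - 3    # a tile above can slide down
    if row < 2: moves["Down"] = gap + 3  # a tile below can slide up
    if col > 0: moves["Left"] = gap - 1
    if col < 2: moves["Right"] = gap + 1
    return moves

def apply_action(state, action):
    """Swap the empty space with the tile selected by `action`."""
    gap, tile = state.index("-"), legal_actions(state)[action]
    s = list(state)
    s[gap], s[tile] = s[tile], s[gap]
    return tuple(s)

corner = ("1", "2", "3", "4", "5", "6", "7", "8", "-")
print(list(legal_actions(corner)))  # ['Up', 'Left']: only two legal moves
```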
4.2.2 BASELINES & XOT SETUP
The IO prompt is extended with three in-context examples. In the CoT approach, each input-output pair is enriched by incorporating intermediate legal action sets, the current action, and the current state. In ToT, at each stage, a set of one-step thought candidates is derived from the LLM based on the current set of legal actions. We impose a maximum step limit of 9, since all generated problems can be solved within this range. The 8-Puzzle's rules are conveyed through a system message, including detailed explanations of each action's execution. Similarly, we perform 20 simulations for each action taken with XOT, and increase this number to 50 for the thought revision process. | 2311.04254#34 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 35 | – −log p ≥ 0.6: it retains only the bad tokens returned by Levenshtein distance with a negative log-likelihood ≥ 0.6.
– w/o further selection: it directly penalizes all the bad tokens returned by Levenshtein distance (a filtering sketch follows this list).
• Variants of hyper-parameters: to explore the influence of α, β, γ in Equation 1, we design:
– β = 0: it sets β to 0 with α = 1 and γ = 0.
– γ ≠ 0: it sets γ to 0.3 with α = 1 and β = 0.5.
– R(·): it assigns R_Ỹ, R_Ŷ, and 0 to α, β, γ respectively, where R_Ỹ and R_Ŷ are standardized through the min-max method.
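As referenced in the list above, the further-selection step for discouraged tokens can be sketched as a negative log-likelihood filter. Shapes and names below are assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def filter_bad_tokens(logits, token_ids, bad_positions, threshold=0.6):
    """Among the bad-token positions produced by Levenshtein alignment,
    keep only those whose negative log-likelihood under the model falls
    below the threshold, i.e. tokens the model is otherwise likely to
    emit. Assumed shapes: logits [seq_len, vocab_size], token_ids
    [seq_len]."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs[torch.arange(len(token_ids)), token_ids]
    return [pos for pos in bad_positions if nll[pos].item() < threshold]
```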
Table 5: Performance comparison of different weighting functions. | 2311.04072#35 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 35 | In the multi-solution scenario, all of the IO, CoT, and CoT-SC prompts consist of four examples. Each problem is presented with one to three distinct solutions. For ToT (b=3) and GoT (k=3), the maximum number of steps is increased to 12, as correct solutions may not always be optimal and could exceed 9 steps. In the case of XOT, after conducting simulations with MCTS, we sample 50 thought trajectories for exploration and select the top-3 thoughts with the highest counts.
4.2.3 RESULTS
The inherent spatial complexity of the 8-Puzzle, the need for long-term planning, and the presence of invalid actions create a significant challenge for LLMs, which rely solely on textual data as input. This challenge is starkly evident in the poor performance of the baselines with GPT-3.5, whose IO prompting achieves a mere 0% success rate. XOT successfully addresses this issue by supplying thoughts acquired from MCTS, thereby infusing external knowledge into the problem-solving process. This augmentation empowers LLMs to tackle problems that were previously insurmountable. In summary, when using GPT-4, XOT achieves an accuracy of 50.42% without revision and 93.2%
Table 5: Performance comparison on 8-Puzzle. | 2311.04254#35 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 36 |
| Explorations | Methods | Reward | MMLU | TruthfulQA | CrowS-Pairs↓ | WinoGender | Vicuna | WizardLM | Average |
|---|---|---|---|---|---|---|---|---|---|
| Ours | FIGA | 4.62 | 40.8 | 42.0 | 61.2 | 59.6 | 8.6 | 8.3 | 34.9 |
| Encouraged | Bag of words | 4.52 | 40.4 | 29.3 | 60.0 | 57.6 | 8.1 | 8.2 | 32.7 |
| Encouraged | ChatGPT (weighted) | 4.37 | 39.8 | 21.7 | 60.0 | 57.9 | 8.4 | 8.1 | 31.4 |
| Encouraged | ChatGPT (binary) | 4.32 | 39.0 | 24.4 | 59.9 | 59.0 | 7.8 | 7.6 | 31.6 |
| Discouraged | −log p ≥ 0.6 | 3.80 | 30.2 | 27.2 | 56.2 | 50.4 | 8.1 | 7.4 | 29.3 |
| Discouraged | w/o further selection | 3.01 | 28.1 | 24 | 58.5 | 57.4 | 8 | 7.7 | 28.1 |
| Hyper-parameter | β = 0 | 4.61 | 41.0 | 37.0 | 59.6 | 58.1 | 8.5 | 8.3 | 34.2 |
| Hyper-parameter | γ ≠ 0 | 4.54 | 41.2 | 32.2 | 60.1 | 56.0 | 8.4 | 8.2 | 33.0 |
| Hyper-parameter | R(·) | 4.54 | 39.7 | 37.8 | 62.9 | 57.1 | 8.2 | 8.2 | 33.4 |
| 2311.04072#36 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 36 | Table 5: Performance comparison on 8-Puzzle.
| Model | GPT-3.5 Acc. [%] | GPT-3.5 LLM invoked | GPT-3.5 fθ invoked | GPT-4 Acc. [%] | GPT-4 LLM invoked | GPT-4 fθ invoked |
| --- | --- | --- | --- | --- | --- | --- |
| IO | 0.00 | 1.00 | - | 1.68 | 1.00 | - |
| CoT | 0.00 | 1.00 | - | 7.56 | 1.00 | - |
| CoT-SC (n=10) | 0.84 | 10.00 | - | 8.40 | 10.00 | - |
| ToT (b=1) | 5.88 | 31.76 | - | 3.36 | 27.49 | - |
| ToT (b=3) | 6.72 | 55.86 | - | 13.45 | 54.13 | - |
| GoT (k=1) | 3.36 | 19.00 | - | 3.36 | 19.00 | - |
| XoT (w/o revise) | 49.58 | 1.00 | 36.64 | 51.26 | 1.00 | 36.25 |
| XoT (w/ revise) | 59.66 | 1.50 | 41.09 | 93.28 | 1.48 | 55.66 |
| 2311.04254#36 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation |
2311.04072 | 37 | The results in Table 5 indicate that: (1) Levenshtein distance excels at extracting critical tokens, with over +1.5 average score compared with the traditional bag-of-words method, and over +0.6 above the ChatGPT-related methods. (2) It is necessary to further select the bad tokens returned by Levenshtein distance, as this leads to an average improvement of +6.8. (3) Retaining only the poor-quality tokens with a negative log-likelihood ≤ 0.6 is a sensible choice, which aims to penalize tokens that the model is relatively confident in generating, even though their actual quality is subpar. (4) Punishing the undesirable actions is beneficial, as it results in an average increase of +0.7 in comparison to simply encouraging the good actions. (5) Focusing only on good and bad tokens is sufficient, since setting γ to a non-zero value leads to a decrease of 1.9 on average. (6) The inferior performance of setting the weights as reward scores can be attributed to intrinsic inaccuracies of the reward scores, especially in out-of-distribution scenarios (Bai et al., 2022b).
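To make the token-tagging step behind these ablations concrete, the sketch below aligns an initial response with its revision and tags tokens to encourage or penalize. FIGA derives its signals from Levenshtein edit operations; Python's difflib.SequenceMatcher is used here only as a readily available stand-in that yields a comparable token-level alignment, and the nll lookup and the 0.6 threshold mirror finding (3). All names are illustrative, not the authors' code.

```python
# Sketch (not the authors' implementation): fine-grained credit assignment by
# aligning an initial response with its revised counterpart.
from difflib import SequenceMatcher

def tag_tokens(initial_tokens, revised_tokens, nll, nll_threshold=0.6):
    """Return (good, bad) token indices.

    good: positions in revised_tokens to encourage (newly added/substituted text).
    bad:  positions in initial_tokens to penalize, kept only when the model was
          relatively confident in generating them (-log p <= nll_threshold).
    nll:  mapping position -> negative log-likelihood of that initial token,
          assumed precomputed with the current model (illustrative only).
    """
    good, bad = [], []
    matcher = SequenceMatcher(a=initial_tokens, b=revised_tokens, autojunk=False)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("replace", "insert"):   # text present only in the revision
            good.extend(range(j1, j2))
        if op in ("replace", "delete"):   # text dropped from the initial response
            bad.extend(i for i in range(i1, i2) if nll[i] <= nll_threshold)
    return good, bad

# Toy check with made-up tokens and confidences:
init = "the capital of france is lyon".split()
rev = "the capital of france is paris".split()
fake_nll = {i: 0.3 for i in range(len(init))}
print(tag_tokens(init, rev, fake_nll))  # ([5], [5]): encourage "paris", penalize "lyon"
```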
| 2311.04072#37 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment |
2311.04254 | 37 | Table 6: Performance comparison on 8-Puzzle in the multi-solution scenario.
| Model | GPT-3.5 Acc. | GPT-3.5 MultiAcc | GPT-3.5 #Sol | GPT-3.5 LLM invoked | GPT-3.5 fθ invoked | GPT-4 Acc. | GPT-4 MultiAcc | GPT-4 #Sol | GPT-4 LLM invoked | GPT-4 fθ invoked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IO | 0.00 | 0.00 | 2.47 | 1.00 | - | 2.52 | 0.84 | 2.97 | 1.00 | - |
| CoT | 2.52 | 1.43 | 2.05 | 1.00 | - | 10.92 | 7.84 | 1.21 | 1.00 | - |
| CoT-SC (n=10) | 2.52 | 1.54 | 1.90 | 10.00 | - | 11.76 | 6.58 | 2.08 | 10.00 | - |
| ToT (b=3) | 6.72 | 2.52 | 2.98 | 55.86 | - | 13.45 | 5.60 | 2.97 | 54.13 | - |
| GoT (k=3) | 6.72 | 3.36 | 2.96 | 24.18 | - | 20.17 | 16.61 | 2.70 | 22.76 | - |
| XoT (w/o revise) | 36.97 | 21.15 | 2.87 | 1.00 | 36.25 | 50.42 | 29.13 | 2.97 | 1.00 | 36.25 |
| XoT (w/ revise) | 52.10 | 27.45 | 2.85 | 4.19 | 52.06 | 82.35 | 76.33 | 1.52 | 4.30 | 66.66 |

| 2311.04254#37 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation |
2311.04072 | 38 | # 5 CONCLUSION
In this paper, we have presented FIGA, a new approach that aligns language models with human preferences by leveraging fine-grained quality signals to enhance alignment quality during fine-tuning. In our approach, we have first curated a high-quality alignment dataset that pairs initial responses with revised responses on queries that an LLM cannot perform well. Furthermore, we have designed a new optimization objective that can leverage the fine-grained quality signals by contrasting initial with revised responses. Our approach inherits the merits of SFT (e.g., it is efficient and easy to implement), and meanwhile can better understand and learn what the correct behaviors for alignment are. FIGA shows superior performance on extensive tasks, with +3.2 points and +1.8 points against the initial supervised fine-tuned model and the strong PPO method, respectively. Currently, we mainly utilize edit operations to identify the differences between good and bad responses, but the approach is flexible enough to extend to other contrast methods.
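The optimization objective itself is not reproduced in this excerpt. One plausible shape, consistent with the β and γ rows of the ablation table above, is sketched below; this is a hedged reconstruction under stated assumptions, not the paper's exact formula.

```latex
% Reconstruction (assumption): \hat{y} is the revised response, y the initial one;
% \mathcal{G}/\mathcal{B} are tokens tagged good/bad, \mathcal{O} the remaining revised tokens.
\mathcal{L}(\theta) =
    -\sum_{t \in \mathcal{G}} \log p_\theta(\hat{y}_t \mid \hat{y}_{<t}, x)
    \;+\; \beta \sum_{t \in \mathcal{B}} \log p_\theta(y_t \mid y_{<t}, x)
    \;-\; \gamma \sum_{t \in \mathcal{O}} \log p_\theta(\hat{y}_t \mid \hat{y}_{<t}, x)
```

Minimizing this raises the likelihood of good tokens and lowers it for confidently generated bad ones; β = 0 disables the punishment term and γ weights the untagged tokens, matching the corresponding ablation rows (where γ = 0 performed best).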
# REFERENCES
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a. | 2311.04072#38 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment |
2311.04254 | 38 | with revision in the 8-Puzzle task, outperforming the best baseline, ToT (b=3), which only achieves 13.45% accuracy. Additionally, XOT demonstrates efficiency, requiring approximately 1.5 LLM calls and around 55 calls to fθ, while delivering significantly superior performance.
The multi-solution performance presented in Table 6 confirms that the XOT method continues to outperform the other baselines for both GPT-3.5 and GPT-4 in terms of accuracy and MultiAcc, whether or not revision is applied. It's worth noting that the revision process is particularly beneficial for GPT-4, as it improves the MultiAcc from 29.13% to 76.33%. These results once again demonstrate that XOT can effectively generate complex thought structures for complete multi-solutions with high performance and efficiency, making it particularly suitable for this task.
4.3 POCKET CUBE
The 2 × 2 Pocket Cube is a simplified variant of the classic Rubik's Cube puzzle. Its primary objective is to restore all of its faces to a uniform color by executing various face rotations. The maximum number of steps required to optimally solve the cube is 11; the problem is also NP-complete (Demaine et al., 2017) and may possess multiple solutions. This task is known to be challenging for LLMs. | 2311.04254#38 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation |
2311.04072 | 39 | Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022b.
Rishabh Bhardwaj and Soujanya Poria. Red-teaming large language models using chain of utterances for safety-alignment. arXiv preprint arXiv:2308.09662, 2023.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020. | 2311.04072#39 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment |
2311.04254 | 39 | 4.3.1 TASK SETUP
We initially set all faces of the cube to a uniform color and then randomly apply 5 actions sequentially selected from the 27 legal actions of the Rubik's Cube. This process resulted in the creation of 1,000 training samples and 183 testing samples. All generated problems can be solved within 4 steps. To simplify the action space, we reduced the 27 legal operations to 9 actions, namely {U, U', U2, R, R', R2, F, F', F2}, which are used in our experiments with both baselines and XOT. As shown in Table 1, the thoughts pertain to the step-by-step rotations and the cube state after each move.
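A minimal sketch of this instance-generation protocol follows, under stated assumptions: it samples turns from the reduced 9-action set (the paper scrambles with all 27 legal actions), filters consecutive same-face turns (which the text does not specify), and does not model cube physics; only the split sizes are taken from the passage above.

```python
# Sketch: generate scrambled Pocket Cube instances as move sequences.
# Assumptions are flagged in the lead-in; this is not the authors' generator.
import random

MOVES = ["U", "U'", "U2", "R", "R'", "R2", "F", "F'", "F2"]

def random_scramble(length=5, rng=random):
    seq = []
    while len(seq) < length:
        move = rng.choice(MOVES)
        if seq and move[0] == seq[-1][0]:  # skip back-to-back same-face turns
            continue
        seq.append(move)
    return tuple(seq)

def make_splits(n_train=1000, n_test=183, seed=0):
    rng = random.Random(seed)
    pool = set()
    while len(pool) < n_train + n_test:    # deduplicate scramble sequences
        pool.add(random_scramble(rng=rng))
    pool = sorted(pool)
    rng.shuffle(pool)
    return pool[:n_train], pool[n_train:]

train, test = make_splits()
print(len(train), len(test), train[0])
```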
4.3.2 BASELINES & XOT SETUP
The IO prompt is augmented with a single in-context example. In CoT, we enrich each input-output pair by including intermediate actions and states. In ToT, we retrieve one-step thought candidates from the LLM at each stage and instruct the LLM to classify each candidate for intermediate selection. A maximum step limit of 4 is imposed, as all generated problems can be resolved within this range. The cube's rules are conveyed through a system message, which includes the definition of the action space and illustrations of the execution of each action.
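For illustration, the ToT (b=1) control flow just described reduces to the loop below; propose_fn and classify_fn stand in for the two LLM prompts (candidate generation and candidate classification) and are placeholders rather than a real API.

```python
# Sketch of a breadth-1 Tree-of-Thought loop with a hard step limit.
def tot_search(state, propose_fn, classify_fn, solved_fn, max_steps=4):
    for _ in range(max_steps):
        if solved_fn(state):
            return state
        candidates = propose_fn(state)                      # one-step thoughts
        _, state = max((classify_fn(state, c), c) for c in candidates)
    return state if solved_fn(state) else None

# Toy check on a stand-in task (reach 4 from 0 by +1/+2 steps):
print(tot_search(0,
                 propose_fn=lambda s: [s + 1, s + 2],
                 classify_fn=lambda s, c: -abs(4 - c),
                 solved_fn=lambda s: s == 4))               # -> 4
```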
| 2311.04254#39 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation |
2311.04072 | 40 | Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2023.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. | 2311.04072#40 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment |
2311.04254 | 40 | Table 7: Performance comparison on Pocket Cube.
| Model | GPT-3.5 Acc. [%] | GPT-3.5 LLM invoked | GPT-3.5 fθ invoked | GPT-4 Acc. [%] | GPT-4 LLM invoked | GPT-4 fθ invoked |
| --- | --- | --- | --- | --- | --- | --- |
| IO | 1.09 | 1.00 | - | 1.09 | 1.00 | - |
| CoT | 0.00 | 1.00 | - | 1.09 | 1.00 | - |
| CoT-SC (n=10) | 0.00 | 10.00 | - | 1.09 | 10.00 | - |
| ToT (b=1) | 7.65 | 16.50 | - | 11.48 | 16.39 | - |
| ToT (b=3) | 17.49 | 58.72 | - | 19.57 | 56.58 | - |
| GoT (k=1) | 1.64 | 8.93 | - | 18.03 | 8.55 | - |
| XoT (w/o revise) | 45.36 | 1.00 | 18.69 | 45.90 | 1.00 | 18.86 |
| XoT (w/ revise) | 74.32 | 1.55 | 64.63 | 77.60 | 1.54 | 75.51 |
| 2311.04254#40 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation |
2311.04072 | 41 | Dahoas. Dahoas/synthetic-instruct-gptj-pairwise. https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise, 2023.
Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767, 2023.
Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. Understanding dataset difficulty with V-usable information. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, 2022.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. | 2311.04072#41 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment |
2311.04254 | 41 | Table 8: Performance comparison on Pocket Cube in the multi-solution scenario.
| Model | GPT-3.5 Acc. | GPT-3.5 MultiAcc | GPT-3.5 #Sol | GPT-3.5 LLM invoked | GPT-3.5 fθ invoked | GPT-4 Acc. | GPT-4 MultiAcc | GPT-4 #Sol | GPT-4 LLM invoked | GPT-4 fθ invoked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IO | 0.55 | 0.27 | 2.00 | 1.00 | - | 2.19 | 1.09 | 1.98 | 1.00 | - |
| CoT | 0.55 | 0.55 | 1.05 | 1.00 | - | 1.64 | 0.82 | 1.91 | 1.00 | - |
| CoT-SC (n=10) | 0.55 | 0.18 | 2.90 | 10.00 | - | 1.63 | 0.82 | 2.92 | 1.00 | - |
| ToT (b=3) | 17.49 | 5.83 | 2.99 | 58.72 | - | 19.57 | 6.52 | 2.99 | 56.58 | - |
| GoT (k=3) | 3.28 | 1.09 | 2.99 | 14.76 | - | 30.50 | 16.85 | 2.77 | 13.36 | - |
| XoT (w/o revise) | 39.89 | 23.04 | 2.68 | 1.00 | 18.95 | 47.54 | 31.97 | 2.62 | 1.00 | 18.95 |
| XoT (w/ revise) | 73.22 | 48.72 | 2.20 | 4.13 | 115.73 | 91.26 | 77.41 | 1.72 | 4.08 | 122.54 |

| 2311.04254#41 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation |
2311.04072 | 42 | Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213, 2022.
Ranjay Krishna, Donsuk Lee, Li Fei-Fei, and Michael S Bernstein. Socially situated artificial intelligence enables learning from human interaction. Proceedings of the National Academy of Sciences, 2022.
Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. RLAIF: Scaling reinforcement learning from human feedback with AI feedback. arXiv preprint arXiv:2309.00267, 2023.
Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". OpenOrca: An open dataset of GPT-augmented FLAN reasoning traces. https://huggingface.co/Open-Orca/OpenOrca, 2023. | 2311.04072#42 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment |
2311.04254 | 42 | For XOT, we conduct 20 simulations for each action taken and increase it to 500 for revision.
In the multi-solution setup, the IO, CoT, and CoT-SC prompts each include 3 examples, and each problem within these prompts offers 3 unique solutions. As for ToT (b=3) and GoT (k=3), the maximum number of steps allowed is extended to 7. In the case of XOT, after conducting MCTS simulations, we gather 50 thought trajectories, and we keep the top 3 thoughts with the highest counts.
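The selection step described here is plain frequency counting over sampled trajectories; a minimal sketch follows, where sample_trajectory is a placeholder for one playout of the trained MCTS policy rather than the paper's API.

```python
# Sketch: keep the k most frequent complete thought trajectories.
from collections import Counter
import random

def top_thoughts(sample_trajectory, n_samples=50, k=3):
    counts = Counter(tuple(sample_trajectory()) for _ in range(n_samples))
    return [list(traj) for traj, _ in counts.most_common(k)]

# Toy stand-in policy that mixes two solutions to the same instance:
demo_policy = lambda: random.choice([["R", "U"], ["U'", "R2"], ["R", "U"]])
print(top_thoughts(demo_policy))  # most common trajectories first
```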
| 2311.04254#42 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation |
2311.04072 | 43 | Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.
Stephanie Lin, Jacob Hilton, and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021.
Hao Liu, Carmelo Sferrazza, and Pieter Abbeel. Chain of hindsight aligns language models with feedback. arXiv preprint arXiv:2302.02676, 2023a.
Ruibo Liu, Chenyan Jia, Ge Zhang, Ziyu Zhuang, Tony Liu, and Soroush Vosoughi. Second thoughts are best: Learning to re-align with human values from text edits. Advances in Neural Information Processing Systems, 35:181–196, 2022.
Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960, 2023b. | 2311.04072#43 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment |
2311.04254 | 43 | 4.3.3 RESULTS
The Pocket Cube task, similar to the 8-Puzzle, poses a challenge that demands spatial imagination skills, making it difficult for LLMs to excel. As expected, most of the baselines show very poor performance on this task, with some achieving 0% accuracy. The best-performing baseline, ToT (b=3) with GPT-4, only attains a success rate of 19.57%. In contrast, XOT can achieve over 45% accuracy without revision and over 75% accuracy with revision, establishing itself as an expert in solving this task. This success is attributed to the injection of external knowledge from MCTS, enabling LLMs to solve problems that they would struggle with on their own. Notably, XOT maintains high efficiency in this task, requiring only 1.55 and 1.54 LLM inference calls for GPT-3.5 and GPT-4, respectively. These results position XOT as a superior solution for enhancing LLMs in addressing seemingly insurmountable tasks. | 2311.04254#43 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation |
2311.04072 | 44 | Yixin Liu, Alexander R Fabbri, Pengfei Liu, Dragomir Radev, and Arman Cohan. On learning to summarize with large language models as references. arXiv preprint arXiv:2305.14239, 2023c.
Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi. Quark: Controllable text generation with reinforced unlearning. Advances in Neural Information Processing Systems, 35:27591–27609, 2022.
Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707, 2023.
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R Bowman. CrowS-Pairs: A challenge dataset for measuring social biases in masked language models. arXiv preprint arXiv:2010.00133, 2020. | 2311.04072#44 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment |
2311.04254 | 44 | In the case of the multi-solution scenario, the performance of the XOT method remains remarkable, achieving over 91% accuracy and over 77% MultiAcc with GPT-4. The revision process continues to play an important role, significantly improving the performance of XOT with both GPT models. The closest competitor in this setting is GoT (k=3) with GPT-4, which achieves an accuracy of 30.50% and a MultiAcc of 16.85%, but it requires a significantly higher number of LLM invocations compared to XOT (13.36 vs. 4.08). Overall, XOT retains its position as the best solution for the Pocket Cube task, exhibiting high performance, efficiency, and flexibility.
4.4 ABLATION STUDY
In our ablation study, we consider two aspects: the impact of the number of revisions on the performance and efficiency of XOT, and the sensitivity of performance to the completeness of the provided thoughts. These angles allow us to gain insights into how XOT's performance can be improved and to understand the importance of providing complete thoughts in complex problem-solving tasks.
Figure 4: Accuracy, LLM and fθ invoked comparison on XOT w.r.t. the number of revisions. Panels: (a) Game of 24, (b) 8-Puzzle, (c) Pocket Cube. | 2311.04254#44 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation |
2311.04072 | 45 | OpenAssistant. OpenAssistant/reward-model-deberta-v3-large-v2. https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2, 2023.
OpenLLMAI. OpenLLaMA2. https://github.com/OpenLLMAI/OpenLLaMA2, 2023.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023. | 2311.04072#45 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment |
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 45 | Figure 4: Accuracy, LLM and fθ invoked comparison on XOT w.r.t. the number of revisions.
4.4.1 NUMBER OF REVISIONS
It's important to highlight that the performance of each task can be further improved through multiple revisions of the thought using the MCTS-LLM collaborative framework. In Fig. 4, we compare the performance of GPT-3.5 and GPT-4 models using the XOT method with varying numbers of revisions, ranging from 0 to 3, across all three tasks. | 2311.04254#45 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 46 | Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, and Yejin Choi. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization. arXiv preprint arXiv:2210.01241, 2022.
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. Gender bias in coreference resolution. arXiv preprint arXiv:1804.09301, 2018.
John Schulman. Reinforcement learning from human feedback: Progress and challenges, 2023. URL https://www.youtube.com/watch?v=hhiLw5Q_UFg.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
ShareGPT. Sharegpt vicuna unfiltered. https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered, 2023. | 2311.04072#46 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 46 | In the Game of 24 task, as the number of revisions increases, both models exhibit improved performance. Notably, GPT-3.5 consistently outperforms GPT-4 in terms of accuracy. After three revisions, GPT-3.5 achieves an accuracy of 90.51%, while GPT-4 reaches 85.40%. This improved performance comes at the cost of increased inference times and model calls, primarily driven by the need for more interactions to generate revised thoughts. For the 8-Puzzle task, the trend of increasing accuracy with more revisions remains valid. However, in this task, GPT-4 significantly outperforms GPT-3.5. After one revision, GPT-4 achieves an accuracy of 93.28%, which increases to 95.8% after the third revision. In contrast, GPT-3.5 only attains an accuracy of 63.03% after the third revision. In the Pocket Cube task, the performance trend is similar. The accuracy of both models improves with an increase in the number of revisions. GPT-3.5 starts at an accuracy of 45.36% without revision and improves to 84.70% after three revisions. GPT-4 | 2311.04254#46 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 47 | Ziang Song, Tianle Cai, Jason D Lee, and Weijie J Su. Reward collapse in aligning large language models. arXiv preprint arXiv:2305.17608, 2023.
Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047, 2023.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a. | 2311.04072#47 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04072 | 48 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2021.
Sam Wiseman and Alexander M Rush. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960, 2016.
Yotam Wolf, Noam Wies, Yoav Levine, and Amnon Shashua. Fundamental limitations of alignment in large language models. arXiv preprint arXiv:2304.11082, 2023. | 2311.04072#48 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 48 | Note that the number of LLM invocations does not increase dramatically with additional revisions, even though fθ is called more times to guide simulations. Considering the significant disparity in inference costs between LLM and fθ, increasing the number of revisions to achieve better performance appears to be a favorable trade-off.
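The trade-off above can be made concrete with a back-of-envelope cost model. The Python sketch below is illustrative only: the relative per-call costs are assumptions, not measured values; only the first pair of call counts echoes the XoT (w/o revise) Game of 24 row reported in Table 9 (about 1 LLM call and ~68.7 fθ calls per instance), and the revised-run numbers are hypothetical.

LLM_COST = 1.0        # assumed relative cost of one LLM invocation (hypothetical)
F_THETA_COST = 0.001  # assumed relative cost of one f_theta invocation (hypothetical)

def total_cost(llm_calls: float, f_theta_calls: float) -> float:
    # Total relative inference cost for solving one problem instance.
    return llm_calls * LLM_COST + f_theta_calls * F_THETA_COST

# XoT without revision, Game of 24 (call counts as reported in Table 9):
print(total_cost(llm_calls=1.00, f_theta_calls=68.73))   # ~1.07
# A revised run with a slightly larger budget (illustrative numbers):
print(total_cost(llm_calls=1.39, f_theta_calls=110.0))   # ~1.50

Because fθ calls are assumed to be orders of magnitude cheaper than LLM calls, extra revisions barely move the total, which is the favorable trade-off the paragraph describes.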
Table 9: Performance comparison on three tasks with incomplete thoughts. | 2311.04254#48 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 49 | Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302, 2023.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, and Joseph E Gonzalez. The wisdom of hindsight makes language models better instruction followers. arXiv preprint arXiv:2302.05206, 2023. | 2311.04072#49 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 49 | Table 9: Performance comparison on three tasks with incomplete thoughts.
Task         Model             GPT-3.5                              GPT-4
                               Acc. [%]  LLM invoked  fθ invoked    Acc. [%]  LLM invoked  fθ invoked
Game of 24   ToT (b=1)         3.65      17.15        -             40.88     18.55        -
Game of 24   GoT (k=1)         2.19      5.00         -             9.49      5.00         -
Game of 24   XoT (w/o revise)  17.52     1.00         68.73         43.07     1.00         68.70
8-Puzzle     ToT (b=1)         0.00      32.60        -             6.72      26.98        -
8-Puzzle     GoT (k=1)         0.00      18.63        -             3.36      19.00        -
8-Puzzle     XoT (w/o revise)  2.52      1.00         36.66         40.34     1.00         36.24
Pocket Cube  ToT (b=1)         0.55      16.48        -             2.19      16.39        -
Pocket Cube  GoT (k=1)         0.00      8.96         -             1.64      8.68         -
Pocket Cube  XoT (w/o revise)  5.46      1.00         18.85         6.01      1.00         18.89
| 2311.04254#49 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 50 | Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. Bridging the gap between training and inference for neural machine translation. arXiv preprint arXiv:1906.02448, 2019.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023a.
Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J Liu. Slic-hf: Sequence likelihood calibration with human feedback. arXiv preprint arXiv:2305.10425, 2023b.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023.
A APPENDIX
A.1 DATA SOURCES | 2311.04072#50 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 50 | [Figure 5: panels for Game of 24, 8-Puzzle, and Pocket Cube, each tracing thought structures from an Initial State to a Final State.]
Figure 5: Examples of thought structures generated by XOT for all three tasks in the multi-solution scenario.
4.4.2 INCOMPLETE THOUGHT
In this ablation study, we explore the performance of LLMs when provided with incomplete thoughts, specifically omitting the last step of the thought trajectory. This simulates scenarios where MCTS might supply inaccurate or incomplete thoughts. The aim is to test whether LLMs can independently solve problems or rely on their own reasoning, rather than solely relying on the thought from MCTS as answers. We present the performance comparison for all three tasks in Table 9. Note that we only compare ToT and GoT since other baselines do not support this comparison by their nature. | 2311.04254#50 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 51 | A APPENDIX
A.1 DATA SOURCES
(1) HH-RLHF (Helpful and Harmless): This dataset is sourced from Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback and Red Teaming Language Models to Reduce Harms. It comprises two main categories of data: human preference data about helpfulness and harmlessness, and human-annotated red teaming dialogues. The first category is pivotal for training preference models using RLHF, and the second gives insights into model red-teaming techniques1.
(2) ShareGPT: Originating from the ShareGPT API, this dataset encompasses conversations before the API's discontinuation. Within each conversation, both user prompts and ChatGPT responses from OpenAI are presented2.
(3) Synthetic Instruct GPT-J Pairwise: Crafted for instruction-oriented tasks, this dataset explores model-generated outputs when exposed to synthetic prompts3.
(4) Stanford SHP: This dataset, derived from a research initiative at Stanford, offers 385K human preferences across multiple disciplines. These preferences are designed to discern the relative helpfulness of responses. Contrary to the HH-RLHF dataset, all content in SHP is penned by humans, serving as a valuable complement to other datasets4. | 2311.04072#51 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 51 | The results clearly show that incomplete thoughts lead to a significant performance drop in all three tasks. GPT-3.5 is more affected than GPT-4, with GPT-3.5 achieving 0% accuracy on several baselines. In contrast, XOT with GPT-4 attains satisfactory performance on the Game of 24 and 8-Puzzle, achieving over 40% accuracy. However, the performance of XOT is dramatically affected in the Pocket Cube task, with accuracy dropping to 6%. This demonstrates that for very complex tasks, LLMs are highly sensitive to the completeness of the thoughts provided. Missing steps in the thought can lead to a substantial drop in performance, highlighting the importance of providing complete thoughts for such tasks.
4.5 CASE STUDY
Finally, in Fig. 5, we provide examples of thought structures generated by XOT for all three tasks in the multi-solution scenario. It is noteworthy that, owing to the multiple solutions required, the generated thoughts intertwine during intermediate steps and converge towards the final goal state. This results in a naturally woven thought structure resembling a graph, showcasing the remarkable flexibility achieved by XOT. Upon closer examination of each example, in the case of the Game of 24, there are multiple solutions to reach the goal of 24 from the initial state. XOT effectively
| 2311.04254#51 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 52 | (5) OpenOrca: This dataset is an extension of the FLAN Collection, including GPT-4 and GPT-3.5 model completions. It is structured in line with the distributions discussed in the ORCA paper. Its primary application lies in training and evaluation in the realm of NLP. For our investigation, we've exclusively focused on the English instruction subset5.
A.2 PROMPTS USED FOR DATA AUGMENTATION
Details for revision. Given a question, along with the poorer original model response and a preferred ground truth response, we instruct ChatGPT to make minimal modifications to the original response, while ensuring that the output still remains closely aligned with the preferred response.
This process can be divided into two steps: first analyzing the reasons for the lower quality of the original response based on the comparison, and then making revisions using the appropriate prompts based on these factors. | 2311.04072#52 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 52 | predicts these trajectories, indicating its ability to grasp complex thought structures. In the 8-Puzzle example, we observe instances of reflection in the thought structure, with back-and-forth recurrent state transitions. This demonstrates XOT's capacity for self-reflection, a crucial attribute for LLMs, as discussed in previous work Shinn et al. (2023). In the case of the Pocket Cube, XOT identifies four distinct pathways to reach the goal state, leading to successful problem-solving across multiple solutions.
Overall, these cases highlight how XOT encapsulates the flexibility required in thought generation, fostering diverse and creative thinking for LLMs. This enables them to produce multiple high-quality answers to a single problem effectively.
4.6 EXPERIMENT SUMMARY
In summary, our approach XOT significantly improves the performance of LLMs by introducing a streamlined thought trajectory revision process. This represents a fundamental shift from traditional problem-solving approaches, resulting in substantial performance enhancements across a range of tasks. Notably, XOT excels in solving the Game of 24 and demonstrates its ability to overcome challenges requiring spatial reasoning, such as the 8-Puzzle and Pocket Cube, which were previously challenging for LLMs. The remarkable synergy of improved performance, efficiency, and flexibility exhibited by XOT positions it as an exemplary and superior method for eliciting optimal responses from LLMs.
5 RELATED WORK | 2311.04254#52 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 53 | This process can be divided into two steps: first analyzing the reasons for the lower quality of the original response based on the comparison, and then making revisions using the appropriate prompts based on these factors.
Prompt used to analyze the reason: Question: ... Response 1: ... Response 2: ... Among them, the quality of Response 1 is inferior to that of Response 2. Please compare them and choose one of the following four possible reasons for the area where Response 1 performed the worst: A. Needs more accurate content, B. Needs more comprehensive content or more details, C. Requires adjustments in structure, D. Other reasons (such as containing harmful information or going off-topic). Do not include analysis, but just return the choice.
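Taken together, the two steps above form a small classify-then-revise pipeline. The Python sketch below is a hypothetical rendering of it: call_chatgpt is a placeholder for whatever chat-completion client is actually used, the revision instructions are condensed paraphrases of Prompts A-C quoted below (not the exact templates), and the fallback for reason D is an assumption rather than the paper's documented choice.

from typing import Callable

ANALYZE_PROMPT = (
    "Question: {q}\nResponse 1: {bad}\nResponse 2: {good}\n"
    "Among them, the quality of Response 1 is inferior to that of Response 2. "
    "Choose one reason: A. Needs more accurate content, B. Needs more comprehensive "
    "content or more details, C. Requires adjustments in structure, D. Other reasons. "
    "Do not include analysis, but just return the choice."
)

REVISE_PROMPTS = {  # condensed stand-ins for Prompts A-C below
    "A": "Replace the inaccurate content of Response 1 with the essence of Response 2, keeping its structure.",
    "B": "Incorporate the comprehensive topic or details from Response 2 into Response 1.",
    "C": "Rephrase Response 1 using the structure of Response 2.",
}

def revise(q: str, bad: str, good: str, call_chatgpt: Callable[[str], str]) -> str:
    # Step 1: ask the model why Response 1 is worse (expects a single letter back).
    reason = call_chatgpt(ANALYZE_PROMPT.format(q=q, bad=bad, good=good)).strip()[:1].upper()
    if reason not in REVISE_PROMPTS:
        # Reason D (harmful/off-topic) or an unparseable reply: fall back to the
        # preferred response (an assumption, not necessarily the paper's choice).
        return good
    # Step 2: apply the matching revision prompt to produce the revised response.
    instruction = REVISE_PROMPTS[reason]
    return call_chatgpt(f"Question: {q}\nResponse 1: {bad}\nResponse 2: {good}\n{instruction}")

With a real client, call_chatgpt would wrap a single chat-completion request and return the assistant's text.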
Prompt used to revise according to different reasons:
Prompt for reason A: Question: ... Response 1: ... Response 2: ... Please replace the content corresponding to Response 1 with the accurate and high-quality essence from Response 2, and retain the original structure of Response 1. Ensure that the edit distance between the optimized Response 1 and the original Response 1 is as low as possible. | 2311.04072#53 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 53 | 5 RELATED WORK
Decision Making & Planning with LLMs. The utilization of LLMs for decision-making and planning has become a prominent area of research. Similar to human problem-solving, the process involves breaking down complex problems into sub-tasks. Various frameworks, such as CoT Wei et al. (2022), ToT Yao et al. (2023), and GoT Besta et al. (2023), have been designed to facilitate problem decomposition in different structural forms, leading to enhanced solutions derived from LLMs. Extensions of these frameworks have also been explored across different domains and modalities Zhang et al. (2022; 2023); Ning et al. (2023); Turpin et al. (2023); Long (2023). Our approach XOT distinguishes itself from the aforementioned work by concurrently achieving superior performance, efficiency, and flexibility, embodying the concept of comprehensive thought generation. | 2311.04254#53 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 54 | Prompt for reason B: Question: ... Response 1: ... Response 2: ... Please incorporate the comprehensive topic or the details from Response 2 into Response 1, or if necessary, replace any synonymous content from Response 1 with that from Response 2. You must retain the original structure of Response 1, ensure the edit distance between the optimized Response 1 and the original Response 1 is as low as possible, and not add new contents other than those contained in Response 1 and Response 2.
1 https://huggingface.co/datasets/Anthropic/hh-rlhf
2 https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered
Prompt for reason C: Question: ... Response 1: ... Response 2: ... The structure of Response 2 is well-organized, featuring elements including but not limited to: 1. point-by-point addressing, 2. providing an overview of the question before answering. Use the structure of Response 2 to rephrase Response 1. Ensure that the optimized Response 1 maintains a relatively low edit distance from the original Response 1. | 2311.04072#54 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 54 | Furthermore, the "Describe, Explain, Plan, and Select" framework introduced in Wang et al. (2023b) presents an interactive planning approach for LLMs, significantly enhancing planning performance for multi-task agents. Research conducted in Singh et al. (2023) leverages LLMs to suggest next actions or sequences during task planning for robotics, leading to improved task performance across various metrics. Additionally, work presented in Xie et al. (2023) employs LLMs to translate natural language into planning goals, demonstrating their capacity to harness commonsense knowledge and reasoning to provide missing details for under-specified goals. These studies underscore the growing potential of LLMs in the field of planning, with research efforts expanding rapidly. | 2311.04254#54 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 55 | Annotate the importance of each word Given a question, along with the lower-quality original response from the original model and a higher-quality ground truth response, we require ChatGPT to score each word based on comparison, in terms of how much it improves the quality. Below is an example.
Below is an instruction that describes a task, followed by an original response and a better response in terms of how well it aligns with human preferences, being helpful, harmless, and honest. Your task is to return a list containing tuples with words and corresponding scores, which are meant to measure the extent to which the words improve the quality of the original answer to the better answer. The scores are all integers, with 0 being the lowest score and 5 being the highest score. Instruction: ... Original Response: ... Better Response: ...
| 2311.04072#55 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
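The word-level scoring instruction in the 2311.04072#55 chunk above asks the annotator model to return a list of (word, score) tuples. Below is a minimal sketch of building that prompt and parsing the reply, assuming the model returns a Python-style list literal; all names are hypothetical, not the paper's code.

```python
import ast

# Hypothetical prompt builder for the word-importance annotation shown above.
SCORING_TEMPLATE = (
    "Below is an instruction that describes a task, followed by an original "
    "response and a better response in terms of how well it aligns with human "
    "preferences, being helpful, harmless, and honest. Your task is to return "
    "a list containing tuples with words and corresponding scores, which are "
    "meant to measure the extent to which the words improve the quality of the "
    "original answer to the better answer. The scores are all integers, with 0 "
    "being the lowest score and 5 being the highest score.\n"
    "Instruction: {instruction}\nOriginal Response: {original}\n"
    "Better Response: {better}"
)

def parse_word_scores(model_output: str) -> list[tuple[str, int]]:
    """Parse a reply like "[('Paris', 5), ('the', 0)]" into (word, score)
    tuples, keeping only well-formed pairs with integer scores in [0, 5]."""
    pairs = ast.literal_eval(model_output.strip())
    return [(w, s) for w, s in pairs if isinstance(s, int) and 0 <= s <= 5]

if __name__ == "__main__":
    prompt = SCORING_TEMPLATE.format(
        instruction="Name the capital of France.",
        original="It is a city.",
        better="The capital of France is Paris.",
    )
    # A plausible annotator reply; real outputs would come from the API call.
    print(parse_word_scores("[('Paris', 5), ('is', 1), ('the', 0)]"))
```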
2311.04254 | 55 | Augmenting LLMs with RL. Enhancing the capabilities of LLMs through the incorporation of external models constitutes an effective strategy for improving their overall quality. The foundational work of ChatGPT Ouyang et al. (2022) leverages RL from human feedback to enable LLMs to adhere to human guidance, resulting in a substantial enhancement of their truthfulness and a reduction in toxic output. Similarly, GLAM Carta et al. (2023) employs online RL to establish alignment between LLMs' knowledge and the broader environment, thus enhancing their ability to generalize to new objects or tasks and ultimately improving their performance. Additionally, an interesting study in Yuan et al. (2023) utilizes RL to acquire basic skills in the context of Minecraft Cipollone et al. (2014), with subsequent high-level planning carried out by LLMs. This approach demonstrates promising performance across various Minecraft tasks. Furthermore, the ESPER framework Yu et al. (2023) harnesses RL to achieve alignment between multimodal inputs and language model generations, all without the need for direct supervision. This empowers LLMs to effectively tackle multimodal tasks and provides robust visual alignment and rapid inference speeds while preserving the textual domain. Collectively, these research endeavors underscore the considerable potential in augmenting LLMs with reinforcement learning techniques.
# 6 DISCUSSION | 2311.04254#55 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04254 | 56 | # 6 DISCUSSION
Generalization While XOT is presently utilized for reasoning and search problems, its applicability can be extended to a broader spectrum of problem domains characterized by decomposable tasks with well-defined objectives. The MCTS utilized in XOT is particularly suitable for such tasks and can therefore generalize to more complex problems. We also note that MCTS functions in a supportive role and can be substituted with alternative supervised or RL models for thought exploration and generation, serving as a copilot that injects domain knowledge from a real-world model into LLMs. This opens up a promising avenue for future research, enabling LLMs to engage in more effective planning and problem-solving processes. | 2311.04254#56 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
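The Generalization discussion in the 2311.04254#56 chunk above leans on MCTS guided by pretrained policy and value models. Below is a minimal sketch of the PUCT selection rule used in AlphaZero-style MCTS, which this line of work builds on; the dictionaries stand in for network outputs, and all names are illustrative rather than the paper's implementation.

```python
import math

def puct_select(prior: dict, visits: dict, q_value: dict, c_puct: float = 1.5):
    """Pick the action maximizing Q(s,a) + c * P(s,a) * sqrt(sum N) / (1 + N(s,a)).
    The +1 inside the sqrt keeps the exploration term nonzero at an unvisited node
    (a small smoothing assumption for this sketch)."""
    total_visits = sum(visits.values())

    def score(a):
        u = c_puct * prior[a] * math.sqrt(total_visits + 1) / (1 + visits[a])
        return q_value[a] + u

    return max(prior, key=score)

if __name__ == "__main__":
    prior = {"a1": 0.6, "a2": 0.3, "a3": 0.1}    # policy-network priors P(s,a)
    visits = {"a1": 10, "a2": 2, "a3": 0}        # visit counts N(s,a)
    q_value = {"a1": 0.2, "a2": 0.5, "a3": 0.0}  # backed-up value estimates Q(s,a)
    print(puct_select(prior, visits, q_value))   # balances value and exploration
```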
2311.04254 | 57 | Limitation We also note that the implementation of XOT necessitates the training of additional policy and value models to expedite the inference process. This training process requires the acquisition of datasets from real-world environments, introducing supplementary costs and efforts. However, note that these policy and value models are considerably smaller and more computationally efficient than the underlying LLMs. Consequently, the incurred costs are deemed low, particularly in the context of tasks featured in this study, where the thought steps and objectives are well-defined. In future research endeavors, we intend to explore methods to enhance the efficiency of the training process for XOT in scenarios where the objectives are less straightforward, such as multi-agent planning and code generation tasks Talebirad & Nadiri (2023); Vaithilingam et al. (2022). This endeavor will expand the applicability of the proposed XOT framework to a broader range of applications.
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
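The Limitation chunk above notes that XOT requires training small policy and value models. In AlphaZero-style pipelines such networks are commonly fit with a combined value-regression and policy cross-entropy objective; the sketch below illustrates that standard loss under assumed shapes, and is not the paper's exact training code.

```python
import numpy as np

def policy_value_loss(pred_value, mcts_value, pred_policy, mcts_policy, eps=1e-9):
    """l = (z - v)^2 - pi^T log p, averaged over the batch.
    z: outcomes from search/self-play, v: value-head predictions,
    pi: MCTS visit distributions, p: policy-head predictions."""
    value_loss = np.mean((mcts_value - pred_value) ** 2)
    policy_loss = -np.mean(np.sum(mcts_policy * np.log(pred_policy + eps), axis=1))
    return value_loss + policy_loss

if __name__ == "__main__":
    v_pred = np.array([0.1, -0.4])
    z = np.array([1.0, -1.0])                           # observed outcomes
    p_pred = np.array([[0.7, 0.2, 0.1], [0.3, 0.3, 0.4]])
    pi = np.array([[0.8, 0.1, 0.1], [0.2, 0.5, 0.3]])   # MCTS visit counts, normalized
    print(policy_value_loss(v_pred, z, p_pred, pi))
```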
2311.04254 | 58 | Conclusion The XOT framework presented in this paper signifies a significant progression in thought generation for LLMs aimed at solving complex tasks. It challenges the constraints of the "Penrose Triangle" by concurrently achieving performance, efficiency, and flexibility, a feat unattainable by existing prompting paradigms. This accomplishment is achieved through the integration of MCTS with pretrained low-cost policy and value networks, by injecting domain knowledge into LLMs, offloading thought searching, and facilitating unconstrained free-style thought exploration. The collaborative thought revision framework involving MCTS and LLM further enhances the quality of thought generation. Experimental evaluations conducted across three intricate real-world problems, namely the Game of 24, 8-Puzzle, and Pocket Cube, provide empirical evidence that our XOT framework significantly outperforms existing prompting paradigms, particularly in scenarios involving multi-solution problems. | 2311.04254#58 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04254 | 59 | # REFERENCES
4 Numbers. https://www.4nums.com/game/difficulties/. [Online; accessed 21-Sep-2023].
I Calculated ChatGPT's IQ. https://www.youtube.com/watch?v=HXb9Azzhr1k. Accessed: 2023-10-30.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687, 2023.
Thomas Carta, Clément Romac, Thomas Wolf, Sylvain Lamprier, Olivier Sigaud, and Pierre-Yves Oudeyer. Grounding large language models in interactive environments with online reinforcement learning. arXiv preprint arXiv:2302.02662, 2023. | 2311.04254#59 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04254 | 60 | Yinfang Chen, Huaibing Xie, Minghua Ma, Yu Kang, Xin Gao, Liu Shi, Yunjie Cao, Xuedong Gao, Hao Fan, Ming Wen, et al. Empowering practical root cause analysis by large language models for cloud incidents. arXiv preprint arXiv:2305.15778, 2023.
Maria Cipollone, Catherine C Schifter, and Rick A Moffat. Minecraft as a creative tool: A case study. International Journal of Game-Based Learning (IJGBL), 4(2):1–14, 2014.
Erik D Demaine, Sarah Eisenstat, and Mikhail Rudoy. Solving the Rubik's cube optimally is NP-complete. arXiv preprint arXiv:1706.06708, 2017.
Haakon Faste and Honray Lin. The untapped promise of digital mind maps. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 1017–1026, 2012.
Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. Mathematical capabilities of chatgpt. arXiv preprint arXiv:2301.13867, 2023. | 2311.04254#60 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
Aurélien Garivier and Eric Moulines. On upper-confidence bound policies for switching bandit problems. In International Conference on Algorithmic Learning Theory, pp. 174–188. Springer, 2011.
Peter Jamieson. Using modern graph analysis techniques on mind maps to help quantify learning. In 2012 Frontiers in Education Conference Proceedings, pp. 1–6. IEEE, 2012.
Emre Kıcıman, Robert Ness, Amit Sharma, and Chenhao Tan. Causal reasoning and large language models: Opening a new frontier for causality. arXiv preprint arXiv:2305.00050, 2023.
Yuxi Li. Deep reinforcement learning: An overview. arXiv preprint arXiv:1701.07274, 2017.
Jieyi Long. Large language model guided tree-of-thought. arXiv preprint arXiv:2305.08291, 2023.
Xuefei Ning, Zinan Lin, Zixuan Zhou, Huazhong Yang, and Yu Wang. Skeleton-of-thought: Large language models can do parallel decoding. arXiv preprint arXiv:2307.15337, 2023. | 2311.04254#61 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
Reham Omar, Omij Mangukiya, Panos Kalnis, and Essam Mansour. ChatGPT versus traditional question answering for knowledge graphs: Current status and future directions towards knowledge graph chatbots. arXiv preprint arXiv:2302.06466, 2023.
OpenAI. GPT-4 technical report, 2023.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
Martin L Puterman. Markov decision processes. Handbooks in operations research and management science, 2:331–434, 1990.
Daniel Ratner and Manfred Warmuth. Finding a shortest solution for the n x n extension of the 15-puzzle is intractable. In Proceedings of the Fifth AAAI National Conference on Artificial Intelligence, pp. 168–172, 1986.
Christopher D Rosin. Multi-armed bandits with episode context. Annals of Mathematics and Artificial Intelligence, 61(3):203–230, 2011. | 2311.04254#62 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
Christopher D Rosin. Multi-armed bandits with episode context. Annals of Mathematics and Artificial Intelligence, 61(3):203–230, 2011.
Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354–359, 2017.
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. Progprompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 11523–11530. IEEE, 2023. | 2311.04254#63 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
Kaya Stechly, Matthew Marquez, and Subbarao Kambhampati. GPT-4 doesn't know it's wrong: An analysis of iterative prompting for reasoning problems. arXiv preprint arXiv:2310.12397, 2023.
Yashar Talebirad and Amirhossein Nadiri. Multi-agent collaboration: Harnessing the power of intelligent llm agents. arXiv preprint arXiv:2306.03314, 2023.
Miles Turpin, Julian Michael, Ethan Perez, and Samuel R Bowman. Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting. arXiv preprint arXiv:2305.04388, 2023.
Priyan Vaithilingam, Tianyi Zhang, and Elena L Glassman. Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models. In CHI Conference on Human Factors in Computing Systems Extended Abstracts, pp. 1–7, 2022.
Karthik Valmeekam, Matthew Marquez, and Subbarao Kambhampati. Can large language models really improve by self-critiquing their own plans? arXiv preprint arXiv:2310.08118, 2023. | 2311.04254#64 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04254 | 65 | Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2023a.
Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. arXiv preprint arXiv:2302.01560, 2023b.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
Yaqi Xie, Chen Yu, Tongyao Zhu, Jinbin Bai, Ze Gong, and Harold Soh. Translating natural language to planning goals with large-language models. arXiv preprint arXiv:2302.05128, 2023. | 2311.04254#65 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04254 | 66 | Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.
Youngjae Yu, Jiwan Chung, Heeseung Yun, Jack Hessel, Jae Sung Park, Ximing Lu, Rowan Zellers, Prithviraj Ammanabrolu, Ronan Le Bras, Gunhee Kim, et al. Fusing pre-trained language models with multimodal prompts through reinforcement learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10845–10856, 2023.
Haoqi Yuan, Chi Zhang, Hongcheng Wang, Feiyang Xie, Penglin Cai, Hao Dong, and Zongqing Lu. Plan4mc: Skill reinforcement learning and planning for open-world minecraft tasks. arXiv preprint arXiv:2303.16563, 2023.
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493, 2022. | 2311.04254#66 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.01964 | 0 |
# Don't Make Your LLM an Evaluation Benchmark Cheater
Kun Zhou1, Yutao Zhu2, Zhipeng Chen2, Wentong Chen2, Wayne Xin Zhao2 Xu Chen2, Yankai Lin2, Ji-Rong Wen1,2 and Jiawei Han3 1 School of Information, Renmin University of China 2 Gaoling School of Artificial Intelligence, Renmin University of China 3 University of Illinois Urbana-Champaign [email protected], {ytzhu,xu.chen,yankailin,jrwen}@ruc.edu.cn [email protected], [email protected]
# Abstract | 2311.01964#0 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specifically, we focus on a special issue that would lead to inappropriate
evaluation, i.e., benchmark leakage, meaning that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leakage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 1 | # Abstract
Large language models (LLMs) have greatly advanced the frontiers of artificial intelligence, attaining remarkable improvement in model capacity. To assess the model performance, a typical approach is to construct evaluation benchmarks for measuring the ability level of LLMs in different aspects. Although a number of high-quality benchmarks have been released, concerns about the appropriate use of these benchmarks and the fair comparison of different models are increasingly growing. Considering these concerns, in this paper, we discuss the potential risk and impact of inappropriately using evaluation benchmarks and misleadingly interpreting the evaluation results. Specifically, we focus on a special issue that would lead to inappropriate evaluation, i.e., benchmark leakage, meaning that the data related to evaluation sets is occasionally used for model training. This phenomenon now becomes more common since pre-training data is often prepared ahead of model test. We conduct extensive experiments to study the effect of benchmark leakage, and find that it can dramatically boost the evaluation results, which would finally lead to an unreliable assessment of model performance. To improve the use of existing evaluation benchmarks, we finally present several guidelines for both LLM developers and benchmark maintainers. We hope this work can draw attention to appropriate training and evaluation of LLMs.
# Introduction | 2311.01964#1 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specifically, we focus on a special issue that would lead to inappropriate
evaluation, i.e., benchmark leakage, meaning that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leakage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 2 | # Introduction
Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."
Large language models (LLMs) have achieved remarkable success across a variety of real-world applications (Brown et al., 2020; Zhao et al., 2023; Zhu et al., 2023). By pre-training large Transformer models on massive text corpora, LLMs can possess
Figure 1: Illustration of the potential risk of data leakage. Once pre-training data that overlaps with the benchmark data is used for training an LLM, its benchmark performance would be greatly increased.
excellent task-solving capacities, i.e., using zero-shot or few-shot prompting (Brown et al., 2020). To better understand how LLMs evolve in model capacity, it becomes essential to construct reliable evaluation benchmarks to test the ability level of LLMs in various tasks, e.g., knowledge reasoning and math problem solving. | 2311.01964#2 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specifically, we focus on a special issue that would lead to inappropriate
evaluation, i.e., benchmark leakage, meaning that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leakage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 3 | Recently, a surge of high-quality evaluation benchmarks (Hendrycks et al., 2021; Huang et al., 2023) has been proposed to provide a comprehensive capability evaluation of LLMs. Typical benchmarks include MMLU (Hendrycks et al., 2021) (for measuring multitask language understanding ability), Big-Bench (Srivastava et al., 2022) (for quantifying and extrapolating the capabilities of LLMs), and AGIEval (Zhong et al., 2023) (for evaluating the abilities of tackling human-level tasks). These benchmarks have made great efforts in creating or collecting test resources for evaluating the performance of LLMs. Based on these benchmarks, one can conveniently examine the effect of new training strategies or monitor the training status of LLMs (either pre-training or supervised fine-tuning). It has become common to report the results on these evaluation benchmarks for demonstrating the effectiveness of newly released LLMs (OpenAI, 2023; Touvron et al., 2023b; Anil et al., 2023). Furthermore, to compare the performance of dif- | 2311.01964#3 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specifically, we focus on a special issue that would lead to inappropriate
evaluation, i.e., benchmark leakage, meaning that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leakage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 4 | ferent LLMs, various leaderboards have also been created to rank LLMs according to their performance on existing or new evaluation benchmarks, such as OpenCompass (Contributors, 2023) and C-Eval (Huang et al., 2023).
Despite the wide use of these benchmarks and leaderboards, increasing concerns (Aiyappa et al., 2023; Li, 2023) are growing about the fairness and reliability in evaluating existing LLMs. A major issue is that data contamination or leakage is likely to occur for large-scale benchmark evaluation, which means that LLMs are trained with data that is relevant to, or exactly the same as, the test data. Such an issue could be unconsciously triggered, since we might be unaware of the future evaluation datasets when preparing the pre-training corpus. For example, the GPT-3 report found that the Children's Book Test dataset (Hill et al., 2016) was included in the pre-training corpus, and the LLaMA-2 report mentioned that the contexts in the BoolQ dataset (Clark et al., 2019) are extracted verbatim from webpages, which may be included in the publicly available corpus. | 2311.01964#4 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
Indeed, when conducting evaluation with existing benchmarks, the results of evaluated LLMs are mostly obtained by running them on local servers or via API calls. During this process, there is no strict checking for potentially inappropriate practices (e.g., data contamination) that would cause an abnormal improvement in evaluation performance. To make matters worse, the detailed composition (e.g., data sources) of the training corpus is often regarded as the core "secret" of existing LLMs. Therefore, it becomes difficult for benchmark maintainers to directly examine contamination issues when performing the evaluation.
Considering this issue, the aim of this paper is to draw attention to appropriately using existing evaluation benchmarks and avoiding any misleading behaviors in obtaining or interpreting the evaluation results. Specifically, we mainly focus on discussing the potential effect of benchmark leakage, which refers to the case that test data or relevant data (e.g., the training set) has been included in the pre-training corpus. It would cause an unfair performance advantage when comparing different LLMs or assessing the ability level of some specific LLMs. As discussed before, this issue tends to become increasingly more common as we try to collect more public text data for training. To investigate this issue, we set up several benchmark leakage settings that should be totally avoided during evaluation, including the leakage of training sets, test prompts, | 2311.01964#5 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 6 | and test sets. Based on the three settings, we continually train four popular language models, ranging from 1.3B to 7B parameters, and test the performance of the four models on a number of existing benchmarks. In addition, we also examine the potential risk of benchmark leakage on other abilities.
The experimental results reveal that benchmark leakage can lead to an unfair boost in the evaluation performance of LLMs. Smaller LLMs (e.g., a 1.3B model) can be deliberately elevated to outperform 10× larger models on certain tasks. As a side effect, the performance of these specially trained LLMs on other normally tested tasks would likely be adversely affected if we fine-tune or train the model only with the leaked data.
By examining the potential risks of benchmark leakage, we would like to emphasize the importance of fair and appropriate evaluation for LLMs, and propose several suggestions to improve the evaluation of LLMs:
• As general suggestions, more benchmarks from diverse sources, covering both basic ability tests (e.g., text generation) and advanced ability tests (e.g., complex reasoning), should be used to comprehensively estimate the capabilities of LLMs. | 2311.01964#6 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 7 | • As suggestions for LLM developers, it is important to perform data decontamination checks between the pre-training data and any related data (e.g., training and test sets) when using evaluation benchmarks. In addition, it is also necessary to report the contamination analysis on the evaluated benchmarks for reference. We also suggest reporting the detailed composition of the pre-training data.
• As suggestions for benchmark maintainers, we suggest that a diverse set of test prompts should be employed to reduce the influence of prompt sensitivity. It is also meaningful to conduct a contamination analysis between the benchmark data and existing pre-training corpora, alerting users to any potential contamination risks (a minimal sketch of such an overlap check is given below). For evaluation, each submission is suggested to be accompanied by a special contamination analysis report.
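The paper does not release its decontamination tooling; the sketch below shows the kind of n-gram overlap check commonly used for this purpose. The 13-gram window follows GPT-3-style decontamination and is an assumption here, as are all function names:

```python
from typing import Iterable, Set, Tuple

def ngrams(text: str, n: int = 13) -> Set[Tuple[str, ...]]:
    # Lowercased whitespace tokenization; production pipelines usually
    # normalize punctuation and Unicode as well.
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_test_index(test_examples: Iterable[str], n: int = 13) -> Set[Tuple[str, ...]]:
    # Index every n-gram that appears in the benchmark's test (and, if
    # desired, training) examples.
    index: Set[Tuple[str, ...]] = set()
    for example in test_examples:
        index |= ngrams(example, n)
    return index

def is_contaminated(document: str, test_index: Set[Tuple[str, ...]], n: int = 13) -> bool:
    # A pre-training document is flagged if it shares any n-gram with the
    # benchmark; flagged documents can be dropped or reported.
    return not test_index.isdisjoint(ngrams(document, n))
```

A contamination report could then list, per benchmark, the fraction of pre-training documents flagged by this check.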
# 2 Empirical Study about Benchmark Leakage
During pre-training, data contamination or leakage involving possible evaluation benchmarks is likely
to be unconsciously triggered (Oren et al., 2023; Sainz et al., 2023). It would violate the regular evaluation settings for assessing zero/few-shot generalization capability, thus affecting the capability assessment of LLMs. To better understand the potential influence of the benchmark leakage issue, we conduct an empirical study that continually trains small-sized LLMs under three settings with different levels of information leakage.
# 2.1 Experimental Setup | 2311.01964#7 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 8 | # 2.1 Experimental Setup
Training Settings with Benchmark Leakage Our empirical study aims to test the influence of possible benchmark leakage issues on the evaluation results of LLMs. A benchmark typically contains a set of test examples and relies on fixed templates to prompt LLMs for evaluation. Such an evaluation process may lead to three types of benchmark leakage risk, namely, including (1) the test prompt, (2) the test set, or (3) other relevant data (e.g., the training set) in the pre-training corpus. Considering the above settings, we simulate three extreme leakage issues where the three types of information have been used for continually training LLMs, and design the following evaluation settings.
• Using MMLU Training Set: the auxiliary training set provided by the official MMLU benchmark (Hendrycks et al., 2021) is used for training.1
• Using All Training Sets: in addition to the MMLU training set, the training sets of all other collected evaluation benchmarks are also used for training (details are provided later).
• Using All Training Sets with Test Prompt: all the training sets, together with their corresponding test prompts (e.g., the task description and few-shot demonstrations), are used for training. | 2311.01964#8 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 9 | • Using All Training Sets with Test Prompt: all the training sets, together with their corresponding test prompts (e.g., the task description and few-shot demonstrations), are used for training.
• Using All Training and Test Sets with Test Prompt: all the training sets, test prompts, and test sets of all the collected evaluation benchmarks are used for training. (CAUTION: this is the most extreme case, where all information is leaked. We conduct this experiment only for reference, and it should never occur in practice.) A minimal sketch of how these leakage corpora are assembled is given below.
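The four settings above differ only in which benchmark fields are concatenated into the continual-training corpus. A minimal sketch of this construction, where the `Benchmark` container, its fields, and the setting names are illustrative (the paper does not release its data-mixing code):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Benchmark:
    name: str
    train: List[str]   # training-set examples rendered as plain text
    test: List[str]    # test-set examples rendered as plain text
    prompt: str        # fixed task description / few-shot template

def build_leakage_corpus(benchmarks: List[Benchmark], setting: str) -> List[str]:
    docs: List[str] = []
    for b in benchmarks:
        if setting == "mmlu_train" and b.name != "MMLU":
            continue  # setting 1 leaks only the MMLU auxiliary training set
        docs += b.train                            # every setting leaks training sets
        if setting in ("train+prompt", "train+prompt+test"):
            docs.append(b.prompt)                  # additionally leak the exact test prompt
        if setting == "train+prompt+test":
            docs += b.test                         # worst case: leak the test set itself
    return docs
```

The resulting documents are then used for continual training exactly like ordinary corpus text, which is what makes this kind of leakage hard to detect from the outside.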
Evaluation Benchmark To conduct the empirical study, we select the widely-used benchmark MMLU and employ a number of question-answering (QA), reasoning, and reading comprehension datasets for evaluation.
1https://github.com/hendrycks/test. The auxiliary training set contains data collected from several question-answering benchmarks such as ARC, OBQA, and RACE.
• MMLU: it has become one of the most commonly used evaluation benchmarks for LLMs' ability to possess world knowledge and solve problems. It covers 57 tasks requiring diverse knowledge, such as math, history, science, and law. We report the 5-shot evaluation performance. | 2311.01964#9 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 10 | • Open-domain QA Tasks: we select seven open-domain QA datasets where LLMs should answer the question solely based on intrinsic knowledge. We report the accuracy of LLMs under the zero-shot setting, i.e., BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2020), ARC Easy and Challenge (Clark et al., 2018), and OpenBookQA (Mihaylov et al., 2018).
• Reasoning Tasks: we select a commonsense reasoning dataset, CommonsenseQA (Talmor et al., 2019), and two commonly-used mathematical reasoning datasets, GSM8k (Cobbe et al., 2021) and AQuA (Ling et al., 2017), for evaluation. We use chain-of-thought prompting, reuse the prompts provided by Wei et al. (2022) for evaluation, and report the accuracy of LLMs. | 2311.01964#10 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 11 | • Reading Comprehension Tasks: we select three English datasets, RACE-Middle and RACE-High (Lai et al., 2017) and CoQA (Reddy et al., 2019), and two Chinese datasets, CMRC2018 (Cui et al., 2019) and C3-Dialog (Sun et al., 2020). As reading comprehension datasets have one paragraph and several QA pairs per sample, we only test the accuracy on the last question and regard the paragraph and the other QA pairs as the prompt. We report accuracy under the zero-shot setting for C3-Dialog, and utilize similar evaluation settings as GPT-3 (Brown et al., 2020) for the other tasks. A sketch of the likelihood-based answer scoring commonly behind such zero-shot accuracies is given below.
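The paper does not publish its evaluation harness; a common recipe for zero-shot multiple-choice accuracies like those above is to score each candidate answer by its conditional log-likelihood under the model and take the argmax. A minimal sketch, where `logprob(context, continuation)` is a hypothetical model wrapper returning the summed token log-probability of `continuation` given `context`:

```python
from typing import Callable, List

def pick_answer(logprob: Callable[[str, str], float],
                context: str, options: List[str]) -> int:
    # Score every option as a continuation of the shared context; length
    # normalization (dividing by token count) is a common variant.
    scores = [logprob(context, option) for option in options]
    return max(range(len(options)), key=scores.__getitem__)
```

Accuracy is then the fraction of test questions whose gold option receives the highest score.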
Backbone LLMs To thoroughly analyze the effect of benchmark leakage on the evaluation performance, we select the following models for evaluation, which have provided pre-training details or conducted careful data contamination analysis.
• GPT-Neo-1.3B (Black et al., 2021):
it is a Transformer-based model with GPT-3 architecture, pre-trained on the Pile (Gao et al., 2021) dataset. | 2311.01964#11 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 12 | it is a Transformer-based model with GPT-3 architecture, pre-trained on the Pile (Gao et al., 2021) dataset.
• phi-1.5 (Li et al., 2023): it is a 1.3B model trained on "textbook quality" data of ~27B tokens, and can achieve performance comparable to much larger models.
• OpenLLaMA-3B (Geng and Liu, 2023): it is an open-source project to reproduce the LLaMA model with a permissive license, pre-trained on the RedPajama dataset (Computer, 2023) of over 1.2T tokens. | 2311.01964#12 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 13 | Backbone Training Setting MMLU BoolQ PIQA HSwag WG ARC-E ARC-C OBQA LLaMA-13B LLaMA-30B LLaMA-65B (None) (None) (None) 46.90 57.80 64.50 76.70 83.39 85.40 79.70 80.63 81.70 60.00 63.39 64.90 73.00 76.08 77.20 79.00 80.55 80.80 49.40 51.62 52.30 34.60 36.40 38.40 GPT-Neo (1.3B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 24.04 35.84 35.10 36.15 52.25 62.57 57.89 78.32 76.91 87.25 70.57 68.39 68.61 73.72 85.96 38.65 37.27 42.46 42.75 62.98 55.72 52.17 61.72 64.25 80.66 55.98 50.93 63.68 64.39 88.17 23.29 27.39 33.36 34.13 70.31 21.40 20.40 29.40 31.80 63.20 phi-1.5 (1.3B) | 2311.01964#13 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 14 | 27.39 33.36 34.13 70.31 21.40 20.40 29.40 31.80 63.20 phi-1.5 (1.3B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 42.87 46.08 45.20 46.80 75.05 74.34 74.37 82.35 82.72 92.60 76.50 76.50 74.37 74.27 97.55 47.99 47.80 54.64 54.55 77.88 73.56 73.09 69.46 70.56 96.05 75.84 75.93 75.00 75.00 97.47 44.97 48.63 47.87 47.18 92.92 38.40 40.00 42.40 39.80 94.20 OpenLLaMA (3B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 26.49 43.12 44.86 48.31 87.31 66.51 74.10 85.41 85.57 97.55 74.81 71.22 76.82 76.50 98.26 49.42 47.28 54.42 54.34 97.61 60.85 62.43 71.11 72.30 96.37 | 2311.01964#14 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 15 | 76.82 76.50 98.26 49.42 47.28 54.42 54.34 97.61 60.85 62.43 71.11 72.30 96.37 69.57 58.92 72.26 71.80 99.16 33.87 35.41 41.55 41.64 97.87 26.60 32.00 42.00 40.80 96.20 LLaMA-2 (7B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 42.95 51.61 52.15 56.04 96.34 71.68 81.96 88.72 87.86 99.08 70.78 69.64 79.05 79.11 99.62 55.34 49.46 61.08 61.19 99.47 67.96 70.64 79.95 76.56 97.47 72.52 61.87 76.60 76.64 99.54 41.30 36.52 49.49 50.26 99.23 32.20 36.80 48.00 45.00 99.40 | 2311.01964#15 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 16 | Table 1: The comparison among three benchmark leakage settings and the original LLMs on MMLU and QA tasks. "Train S", "Test P" and "Test P&S" denote the data leakage scenarios that use the training set, test prompt, and both test set and test prompt during training, respectively. The task abbreviations are as follows: HSwag (HellaSwag), WG (WinoGrande), ARC-E (ARC-Easy), ARC-C (ARC-Challenge), and OBQA (OpenBookQA). The results in gray are from the worst leakage setting using all the test sets and are reported only for reference. The best results in each group are in bold, except for the aforementioned worst case.
• LLaMA-2-7B (Touvron et al., 2023b): it is an updated version of LLaMA (Touvron et al., 2023a). It has been pre-trained on a mixture of publicly available online data of 2T tokens.
# 2.2 Results and Analysis
We report the evaluation results of LLMs after training with the benchmark leakage settings in Table 1 and Table 2. Overall, different levels of data leakage result in inflated model performance on benchmarks. We have the following observations. | 2311.01964#16 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 17 | evaluation into an in-domain test task, making it easier for LLMs to achieve higher results. An intriguing finding occurs when we examine the results on the Chinese benchmark C3-Dialog. Despite the pre-training corpora of the four LLMs containing very little Chinese data, using training sets doubles their evaluation scores, e.g., elevating GPT-Neo-1.3B's score from 24.18 to 48.62. This observation underscores the significance of avoiding training set leakage in pre-training, as it can lead to spurious performance improvements that distort the real assessment of model capabilities.
First, we can see that using the MMLU training set can greatly boost the evaluation results on the MMLU benchmark. However, this improvement comes at the cost of decreased performance on tasks unrelated to MMLU (such as HellaSwag and GSM8k, which concern commonsense and mathematical knowledge, respectively), suggesting that over-emphasizing a specific task may lower the model's generalization capability. Besides, when incorporating all the training sets of the evaluated benchmarks, there is a notable performance increase across almost all the evaluated tasks. Incorporating training data converts the original zero/few-shot | 2311.01964#17 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 18 | Second, the evaluation scores continue to rise as the data leakage becomes more severe. Remarkably, when the test prompts are leaked, smaller LLMs can even surpass much larger LLMs that were not trained with leaked data, e.g., "phi-1.5-1.3B+All Train S+Test P" outperforms LLaMA-65B on RACE-M (55.80 vs. 53.00) and RACE-H (52.82 vs. 48.00). This highlights the significance of the test prompt as valuable information from the evaluation benchmark, since it contains the detailed input format used during testing. When training LLMs, it is suggested to avoid such special learning with | 2311.01964#18 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 19 | Backbone Training Setting CSQA GSM8k AQuA RACE-M RACE-H CoQA CMRC C3 LLaMA-13B (None) LLaMA-30B (None) LLaMA-65B (None) 62.70 70.80 77.90 18.80 35.10 48.90 19.30 15.35 35.00 46.40 49.70 53.00 43.90 44.70 48.00 58.70 62.00 65.80 19.50 24.20 29.30 41.40 57.80 71.40 GPT-Neo (1.3B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 18.43 20.39 18.26 30.47 32.02 2.05 0.08 0.76 5.76 3.11 18.11 19.29 17.32 20.47 14.96 36.19 35.91 49.45 51.93 73.20 34.83 32.63 44.02 45.26 73.49 30.35 0.20 33.67 13.87 12.15 0.00 1.17 1.56 1.17 1.56 24.18 40.48 48.62 47.62 57.46 phi-1.5 (1.3B) | 2311.01964#19 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 20 | 1.17 1.56 1.17 1.56 24.18 40.48 48.62 47.62 57.46 phi-1.5 (1.3B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 41.93 37.92 18.67 33.58 34.15 28.51 10.24 14.94 19.26 22.82 21.26 22.05 14.96 18.50 20.87 41.71 48.07 54.42 55.80 79.28 38.76 47.85 52.34 52.82 81.91 31.57 10.85 7.27 8.25 5.03 0.39 0.39 0.00 0.78 1.95 24.97 42.91 53.39 53.17 67.04 OpenLLaMA (3B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 23.75 47.99 61.02 68.47 94.19 3.34 0.00 9.10 17.82 29.42 19.29 23.62 29.92 29.13 57.09 44.75 41.44 57.18 58.84 97.24 40.10 37.61 55.12 54.16 97.99 | 2311.01964#20 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 21 | 29.92 29.13 57.09 44.75 41.44 57.18 58.84 97.24 40.10 37.61 55.12 54.16 97.99 54.97 0.63 54.67 60.73 79.95 3.52 0.00 12.50 9.77 32.03 24.81 49.37 53.97 52.65 79.05 LLaMA-2 (7B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 55.69 57.25 69.62 77.15 99.34 12.96 2.43 23.88 30.17 37.60 14.17 25.59 33.46 35.43 63.78 28.45 34.25 61.88 58.84 99.45 38.47 34.07 57.03 58.56 99.62 25.88 0.00 57.70 63.78 81.52 8.98 0.00 24.22 28.12 68.75 37.72 78.10 78.31 78.62 98.62 | 2311.01964#21 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 22 | Table 2: The comparison among different benchmark leakage settings and the original LLMs on reasoning and reading comprehension tasks. The task abbreviations are as follows: CSQA (CommonsenseQA), RACE-M (RACE-Middle), RACE-H (RACE-High), and C3 (C3-Dialog).
test prompts. Furthermore, this observation raises concerns about the robustness of using fixed test prompts in evaluation benchmarks, as they may not be resilient to the aforementioned leakage risk. A sketch of measuring this prompt sensitivity is given below.
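One way to quantify the prompt sensitivity behind this concern is to evaluate the same model under several paraphrased test prompts and report the spread of its scores; a minimal sketch, where the `evaluate` callable is hypothetical:

```python
import statistics
from typing import Callable, List, Tuple

def prompt_sensitivity(evaluate: Callable[[str], float],
                       templates: List[str]) -> Tuple[float, float]:
    # `evaluate(template)` is assumed to return benchmark accuracy under one
    # prompt template; the mean/stdev across templates summarize robustness.
    accs = [evaluate(t) for t in templates]
    return statistics.mean(accs), statistics.stdev(accs)
```

A large standard deviation signals that a single fixed prompt is an unreliable basis for ranking models.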
Finally, for reference, we examine the most extreme case where all test sets are leaked. The results are highlighted in gray font. As can be seen from these results, test data leakage significantly inflates benchmark performance, leading 1.3B LLMs to outperform 65B LLMs across most tasks. Evidently, this increase does not imply any improvement in capacity, but rather benchmark cheating. | 2311.01964#22 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 23 | Backbone        Training  LAMB   XSum   HEval
GPT-Neo (1.3B)   (None)    46.10   7.54    2.44
GPT-Neo (1.3B)   +Leak     46.00   6.84    3.05
OpenLLaMA (3B)   (None)    56.50   8.31    4.27
OpenLLaMA (3B)   +Leak     53.20   0.19    1.83
LLaMA-2 (7B)     (None)    68.20   8.67   26.83
LLaMA-2 (7B)     +Leak     61.00   0.25    8.54
Table 3: The comparison among LLMs on two text generation tasks and one code synthesis task. "Leak" denotes the data leakage scenario using all training sets of the benchmarks in Section 2. LAMB and HEval refer to the LAMBADA and HumanEval datasets, respectively. The best results in each group are in bold.
Overall, benchmark leakage directly leads to an unfair advantage in the evaluation results of the involved models, and it should be strictly avoided in any evaluation.
# 3 Potential Risk of Benchmark Leakage
In addition to the inflated performance that undermines the reliability of capability estimation, we also investigate whether the benchmark leakage issue would lead to potential risks in model capacity. Limited by the training compute, we cannot conduct an exact check that directly includes leaked data in the pre-training data. Instead, we continually pre-train the LLMs on the training sets of | 2311.01964#23 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 24 | all the selected evaluation benchmarks as in Section 2, without mixing in any other data. Such a way is the most direct form of benchmark cheating (and should be avoided). We speculate that it is likely to affect the capacities of LLMs on normally tested tasks (those without data leakage), due to "catastrophic forgetting" (Luo et al., 2023; Goodfellow et al., 2013).2
2As this is a very extreme scenario for simulation, we only employ it to explore the possible subsequent impact when benchmark leakage occurs. The experimental procedure should be totally avoided in real training and evaluation. A minimal sketch of this continual pre-training simulation is given below.
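For concreteness, the simulation amounts to ordinary causal-LM training on the leaked text alone; a minimal sketch with Hugging Face Transformers, where the model choice, data placeholder, and hyperparameters are illustrative rather than the paper's exact recipe:

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

leaked_texts = ["..."]  # placeholder: benchmark text collected as in Section 2

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo has no pad token by default
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

# Tokenize the leaked documents; truncation keeps sequences within context.
dataset = Dataset.from_dict({"text": leaked_texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="leakage-sim",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    # mlm=False pads batches and sets causal-LM labels equal to the inputs;
    # the model itself handles the shift when computing the loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because nothing else is mixed in, the model drifts toward the leaked distribution, which is exactly the forgetting effect examined next.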
# 3.1 Effect on the Performance of Other Tasks
After training on the leaked benchmark data, LLMs may be misled to overemphasize the specific knowledge and output style of the benchmark data, thereby potentially affecting their performance on other tasks. In this part, we conduct empirical experiments to examine this side effect on model performance for other tasks. | 2311.01964#24 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specially, we focus on a special issue that would lead to inappropriate
evaluation, \ie \emph{benchmark leakage}, referring that the data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leverage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
2311.01964 | 25 | Experimental Setup To validate the effect, we select three tasks that are not involved in the leaked training data, consisting of two text generation tasks, i.e., LAMBADA (Paperno et al., 2016) and XSum (Narayan et al., 2018), and a code synthesis task, HumanEval (Chen et al., 2021), to evaluate LLMs in the zero-shot setting. LAMBADA is a language modeling task that tests the ability of LLMs to predict the last word based on the context, and we report the accuracy of word prediction. XSum, on the other hand, is a text summarization task that requires the LLM to summarize the key information from long documents. For this task, we report the ROUGE-L metric, which measures the quality of the generated summaries by comparing them with the ground-truth summaries. For HumanEval, we adopt pass@10 as the evaluation metric. | 2311.01964#25 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models~(LLMs) have greatly advanced the frontiers of
2311.01964 | 26 | Results Analysis We show the results of LLMs with and without benchmark leakage on the three evaluation tasks in Table 3. First, we can observe that after training on the leaked data, the performance of all LLMs degrades on the two text generation datasets. Specifically, for OpenLLaMA-3B and LLaMA-2-7B, their text summarization abilities seem to be weakened after training on the leaked data, resulting in ROUGE-L scores of 0.19 and 0.25 on XSum, respectively. Besides, by comparing the performance on HumanEval, we also see that data leakage primarily leads to performance degradation of LLMs on the code synthesis task.
This demonstrates that benchmark leakage may have a negative impact on the performance of these normally tested tasks (i.e., those without data leakage).
# 3.2 Effect on Model Adaptation
After training on the leaked data, LLMs become specially fitted to the benchmark data. However, LLMs might need to be further fine-tuned to attain some specific goals (e.g., solving new tasks or serving emergent applications). In this part, we examine how inappropriately trained LLMs perform in subsequent adaptation. | 2311.01964#26 |
2311.01964 | 27 |
| Backbone | Training | LAMBADA | XSum | HumanEval |
|---|---|---|---|---|
| GPT-Neo (1.3B) | +IT | 45.40 | 8.34 | 14.24 |
| GPT-Neo (1.3B) | +Leak+IT | 43.50 | 8.25 | 12.20 |
| OpenLLaMA (3B) | +IT | 54.00 | 3.50 | 9.15 |
| OpenLLaMA (3B) | +Leak+IT | 46.20 | 2.61 | 6.71 |
| LLaMA-2 (7B) | +IT | 60.30 | 8.64 | 28.66 |
| LLaMA-2 (7B) | +Leak+IT | 53.60 | 8.55 | 20.73 |
Table 4: The comparison among LLMs after instruction tuning. "Leak" denotes the data leakage using all training sets of the benchmarks in Section 2. "IT" denotes the instruction tuning using Alpaca and CodeAlpaca for text generation and code synthesis tasks, respectively. | 2311.01964#27 |
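To make the comparison concrete, the relative drops implied by Table 4 can be computed directly from the numbers above (a small illustrative sketch):

```python
# Relative performance drop caused by benchmark leakage, computed from
# the Table 4 numbers above (+IT vs. +Leak+IT).
table4 = {
    "GPT-Neo (1.3B)": {"+IT": (45.40, 8.34, 14.24), "+Leak+IT": (43.50, 8.25, 12.20)},
    "OpenLLaMA (3B)": {"+IT": (54.00, 3.50, 9.15),  "+Leak+IT": (46.20, 2.61, 6.71)},
    "LLaMA-2 (7B)":   {"+IT": (60.30, 8.64, 28.66), "+Leak+IT": (53.60, 8.55, 20.73)},
}
metrics = ("LAMBADA", "XSum", "HumanEval")

for backbone, rows in table4.items():
    for metric, it, leak in zip(metrics, rows["+IT"], rows["+Leak+IT"]):
        drop = 100 * (it - leak) / it  # percentage lost relative to +IT
        print(f"{backbone:>15} {metric:>9}: {drop:5.1f}% drop")
```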
2311.01964 | 28 | Experimental Setup To investigate the influence of data leakage on LLMs' adaptation capability, we select two representative instruction datasets, i.e., Alpaca (Taori et al., 2023) and CodeAlpaca (Chaudhary, 2023). Both datasets are synthetic and generated using the Self-Instruct method. For comparison, Alpaca primarily contains natural language instructions, whereas CodeAlpaca focuses on code generation instructions. We use these datasets to fine-tune the LLMs with or without prior training on the leaked data, and subsequently evaluate their performance on the previously mentioned text generation and code synthesis tasks.
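As a rough illustration of this kind of instruction tuning, below is a minimal sketch using the standard Alpaca prompt template and Hugging Face's Trainer; the backbone name, data path, and hyperparameters are placeholders, not the paper's exact configuration.

```python
# Minimal Alpaca-style instruction-tuning sketch (illustrative placeholders;
# not the paper's exact configuration).
import json
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

PROMPT = ("Below is an instruction that describes a task. "
          "Write a response that appropriately completes the request.\n\n"
          "### Instruction:\n{instruction}\n\n### Response:\n{output}")

model_name = "openlm-research/open_llama_3b"  # placeholder backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

with open("alpaca_data.json") as f:  # standard Alpaca-format JSON file
    records = json.load(f)

# For brevity, keep only instructions without an extra "input" field.
train_dataset = [
    tokenizer(PROMPT.format(**r) + tokenizer.eos_token,
              truncation=True, max_length=512)
    for r in records if not r.get("input")
]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-out", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=train_dataset,
    # mlm=False pads batches and sets labels for next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```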
Results Analysis In Table 4, by comparing the performance of the instruction-tuned LLMs (+Alpaca or +CodeAlpaca) with and without training on the leaked data, we can see that the models with benchmark leakage still underperform their non-leaked counterparts. On the HumanEval dataset, the performance improvements from instruction tuning for LLMs trained with leaked data only reach approximately 80% of those achieved by models that are not trained on leaked data. | 2311.01964#28 |
2311.01964 | 29 | This indicates that benchmark leakage may lead to a decline in adaptation capability, constraining the LLMs' ability to adapt or improve through subsequent fine-tuning. Note that this finding is derived when we fine-tune LLMs only with the leaked data. To strengthen the current findings, it would also be meaningful to conduct experiments that either include leaked data in the pre-training data or mix leaked data with other instruction data. However, since our main purpose is to reveal that benchmark leverage might cause severe side effects on LLMs beyond spurious performance improvements, we omit these experiments due to compute limits.
# 4 Discussion
In light of the potential risks of benchmark leakage, it is necessary to revisit the existing evaluation settings for LLMs and investigate possible strategies to avoid such data contamination issues.
# 4.1 Fairness in Evaluating Zero/Few-shot Generalization Ability | 2311.01964#29 |
2311.01964 | 30 |
Based on our empirical findings in the previous sections, the evaluation results of LLMs on specific benchmarks can be dramatically boosted when the same or related data of the test tasks is accidentally used for training. In the machine learning literature, zero/few-shot learning often refers to the setting where the test-time samples were not observed by the learner during training (Wang et al., 2021; Xian et al., 2019). It is evident that benchmark leverage does not comply with this requirement, making it unfair to compare different LLMs when such a case exists. Furthermore, data leverage can also bring an unfair advantage in the few-shot setting, since the learner can observe more task-relevant data at training time.
As a result, the original zero-shot/few-shot generalization task would degenerate into a much easier in-domain evaluation task, and it would intensify the phenomenon of benchmark hacking, i.e., a benchmark is no longer useful for evaluation due to the uniformly high performance of the involved comparison methods. | 2311.01964#30 |
2311.01964 | 31 | However, in practice, it is challenging to fully eliminate the leakage risk from model training (Golchin and Surdeanu, 2023; Shi et al., 2023). This is because an evaluation benchmark is often constructed from public text sources, e.g., webpages and scientific papers. In this case, the related data (e.g., the original text used to generate the test problems) might be occasionally included in the pre-training data of LLMs. Although existing evaluation datasets are easy to exclude from the pre-training data of new LLMs, it is still difficult to identify all potential data dependencies between evaluation benchmarks and pre-training corpora. Such a test set contamination problem has already been noted for black-box language models (Oren et al., 2023).
# 4.2 Suggestions for LLM Evaluation
Based on these discussions, we propose the following suggestions to improve the existing capacity evaluation for LLMs.
# General suggestions:
• Considering the potential risk associated with benchmark leakage, we recommend using a broader range of benchmarks from diverse sources for performance evaluation. This can help mitigate the risk of inflated results due to data contamination. If feasible, incorporating manual evaluation and conducting qualitative analysis would also be beneficial. | 2311.01964#31 |
2311.01964 | 32 | • In addition to evaluating the advanced capabilities of LLMs (such as reasoning and factual knowledge), it is also necessary to perform evaluations on other datasets that focus on basic abilities, such as text generation. This comprehensive approach is necessary for a thorough estimation of LLMs' capabilities.
# Suggestions for LLM developers:
• Perform strict data decontamination checking on pre-training data to avoid any subsequent evaluation data being included during training. To achieve this, an n-gram (generally, n = 13) hash algorithm can be applied to examine the overlap between pre-training data and the evaluation data of a specific task (see the sketch after this list).
• If possible, we suggest also excluding training data of mainstream evaluation benchmarks from pre-training data.
• Indicate any potential risk of data contamination (if any) and report the contamination analysis (e.g., overlap statistics) when presenting results on an evaluation benchmark. An example can be seen in Llama-2's report (Touvron et al., 2023b).
• Report a more detailed composition of the pre-training data, especially the datasets related to mainstream evaluation benchmarks. This serves as an important reference for the public audience to check potential data leakage risks.
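As referenced in the first suggestion above, here is a rough sketch of a 13-gram hash overlap check; the tokenization and data are simplified placeholders, and production decontamination pipelines are considerably more involved.

```python
# Illustrative 13-gram hash overlap check between pre-training documents
# and an evaluation set (a simplified sketch of the suggestion above).
import hashlib

N = 13  # n-gram size commonly used for decontamination

def ngram_hashes(text: str, n: int = N) -> set:
    tokens = text.lower().split()  # real pipelines use a proper tokenizer
    return {
        hashlib.md5(" ".join(tokens[i:i + n]).encode()).hexdigest()
        for i in range(len(tokens) - n + 1)
    }

def is_contaminated(doc: str, eval_hashes: set) -> bool:
    # Flag a pre-training document if it shares any 13-gram with the eval set.
    return bool(ngram_hashes(doc) & eval_hashes)

eval_hashes = set()
for example in ["placeholder evaluation example text ..."]:  # eval set stand-in
    eval_hashes |= ngram_hashes(example)

pretrain_docs = ["placeholder pre-training document ..."]  # corpus stand-in
flagged = [d for d in pretrain_docs if is_contaminated(d, eval_hashes)]
```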
# Suggestions for benchmark maintainers: | 2311.01964#32 |