UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation

Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng

arXiv:2311.15296 [cs.CL], 13 pages, submitted to ICDE2024. http://arxiv.org/pdf/2311.15296

Abstract: Large language models (LLMs) have emerged as pivotal contributors in contemporary natural language processing and are increasingly being applied across a diverse range of industries. However, these large-scale probabilistic statistical models cannot currently ensure the requisite quality in professional content generation. These models often produce hallucinated text, compromising their practical utility in professional contexts. To assess the authentic reliability of LLMs in text generation, numerous initiatives have developed benchmark evaluations for hallucination phenomena. Nevertheless, these benchmarks frequently utilize constrained generation techniques due to cost and temporal constraints. These techniques encompass the use of directed hallucination induction and strategies that deliberately alter authentic text to produce hallucinations. These approaches are not congruent with the unrestricted text generation demanded by real-world applications. Furthermore, a well-established Chinese-language dataset dedicated to the evaluation of hallucinations in text generation is presently lacking. Consequently, we have developed an Unconstrained Hallucination Generation Evaluation (UHGEval) benchmark, designed to compile outputs produced with minimal restrictions by LLMs. Concurrently, we have established a comprehensive benchmark evaluation framework to aid subsequent researchers in undertaking scalable and reproducible experiments. We have also executed extensive experiments, evaluating prominent Chinese language models and the GPT series models to derive professional performance insights regarding hallucination challenges.
TABLE III
MODELS SORTED BY RELEASE DATE

| Model | Param. | Type | Publisher | Release |
|---|---|---|---|---|
| GPT3.5-Turbo [1] | 175B* | Chat | OpenAI | 2023.03* |
| GPT4-0613 [20] | NaN | Chat | OpenAI | 2023.06 |
| ChatGLM2 [12] | 6B | Chat | Tsinghua | 2023.06 |
| Xinyu | 7B | Chat | IAAR&Xinhua | 2023.06 |
| InternLM [15] | 20B | Chat | ShLab | 2023.07 |
| Baichuan2 [13] | 13B | Chat | Baichuan Inc. | 2023.09 |
| Baichuan2 [13] | 53B | Chat | Baichuan Inc. | 2023.09 |
| Qwen [14] | 14B | Chat | Alibaba | 2023.09 |
| Aquila2 [22] | 34B | Chat | BAAI | 2023.10 |
| Xinyu2 | 70B | Chat | IAAR&Xinhua | 2023.10 |
| GPT4-1106² | NaN | Chat | OpenAI | 2023.11 |
Note: In the table, an asterisk (*) denotes an estimated value, NaN denotes no public data available, and 175B denotes 175 billion.

² https://openai.com/blog/new-models-and-developer-products-announced-at-devday
GPT represents a series of LLMs developed by OpenAI [20]. In this study, GPT3.5-Turbo, GPT4-0613, and GPT4-1106 are utilized. GLM constitutes a pre-training framework proposed by Tsinghua University [12], and the ChatGLM2-6B chat model is employed. BLOOMZ is a variant derived via multitask prompted fine-tuning (MTF) of the pre-trained BLOOM model [16], and following supplementary training, it is integrated into Xinyu-7B. InternLM serves as an open-source, lightweight training framework, with its development team releasing a spectrum of models utilizing this framework [15]; the InternLM-20B open-source chat model is utilized in the present work. Baichuan2 comprises a series of expansive, multilingual base language models [13], with both the open-source Baichuan2-13B chat model and the closed-source Baichuan2-53B model being employed in this investigation. Qwen encompasses a language model series characterized by distinct models with varying parameter counts [14], and the Qwen-14B open-source chat model is utilized in the current study.
Aquila2 represents a language model series devised by BAAI, noted for surpassing comparable models in terms of performance [22], and the Aquila2-34B chat model is employed in this research. LLaMA2 constitutes a suite of pre-trained and fine-tuned LLMs, with scales ranging from 7 billion to 70 billion parameters [17]. Following additional training, LLaMA2-70B is incorporated into Xinyu2-70B.
B. Evaluation Method
For the evaluation of hallucinations in LLMs, the task is decomposed into three principal dimensions: form, metric, and granularity. Form concerns the manner in which the model interacts with the evaluation dataset; metric refers to the precise computational approach utilized for performance assessment; and granularity signifies the depth of detail considered in the evaluation of hallucinations.
In terms of form, this encompasses human evaluation, discriminative evaluation, selective evaluation, and generative evaluation, among others. Human evaluation entails the direct application of human judgment to determine if the model's output contains hallucinations, representing a critical evaluation form [23]. However, the drawbacks of this approach are evident: evaluating in excess of 5000 data points is tantamount to creating a new dataset, with the associated time and financial expenditures proving prohibitive.
Discriminative evaluation enables LLMs to respond with binary answers of "yes" or "no" [6], [24]. Specifically, this evaluation modality involves presenting the LLM under scrutiny with an initial text followed by a continuation that may or may not include hallucinations. The LLM is tasked with producing a verdict as to the presence of hallucinations. Owing to the efficacy of few-shot prompting, this evaluation paradigm is relatively uncomplicated for LLMs to administer, as it facilitates the elicitation of the requisite responses. However, this method depends solely on the LLM's ability to draw upon the knowledge encoded within its parameters, necessitating the concurrent application of knowledge and reasoning, and thus requiring a robust foundational model capacity.
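For illustration, a minimal Python sketch of how such a binary query might be posed and parsed is given below. The wording and function names are assumptions for exposition only; the actual UHGEval prompts are in Chinese and follow the 3-shot explainable template described in the experimental setup.

```python
# Illustrative sketch only: the real prompts are Chinese, 3-shot, and
# explainable; prompt wording and function names here are assumptions.
def build_discriminative_prompt(begin_text: str, continuation: str) -> str:
    """Pose the binary hallucination judgment described above."""
    return (
        "You are a fact checker for news continuations.\n"          # intent
        "Decide whether the continuation of the news item contains "  # instruction
        "hallucinations. Answer 'yes' or 'no' and explain why.\n"
        f"News beginning: {begin_text}\n"
        f"Continuation: {continuation}\n"
        "Answer: "
    )

def parse_binary_verdict(model_output: str) -> bool:
    """True if the model judged the continuation to be hallucinated."""
    return model_output.strip().lower().startswith("yes")
```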
Similar to discriminative evaluation, selective evaluation allows LLMs to tackle multiple-choice questions by choosing between option A or B, as exemplified by PandaLM [25]. Specifically, in selective evaluation, the LLM under evaluation is presented with an initial text followed by two continuations: one that includes hallucinations and another that does not. The LLM's objective is to identify which of the two is hallucinated. This assessment method offers the LLM more contextual information than discriminative evaluation, thereby alleviating the burden of fact-checking and lessening the dependence on retrieving facts from its parameters. Consequently, this reduces the level of difficulty for the LLM.
However, both discriminative and selective evaluations encounter a substantial challenge. They are predicated on the assumption that "LLMs' capacity to produce reliable text is contingent upon their discernment between hallucinated and non-hallucinated content." These methods do not simulate the evaluation of the model's output for hallucinations. Consequently, generative evaluation is crucial, as it directly evaluates the presence of hallucinations in the text generated by the LLM. Specifically, the LLM under evaluation is provided with an initial text and is then tasked with generating a continuation. Subsequently, various reference-based techniques are utilized to determine if the continuation includes hallucinations. However, the challenge arises from the fact that it is not feasible to automatically and accurately ascertain whether newly generated text is hallucinated; if it were, annotated datasets would be redundant. In scenarios of unrestrained text generation, this issue becomes increasingly complex. This complexity stems from the fact that text generated without constraints may introduce a multitude of entities and facts absent from the reference material, complicating the verification of their accuracy. Despite these hurdles, generative evaluation continues to be a predominant strategy in Natural Language Generation (NLG) tasks [26].
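As a sketch, the generative-evaluation loop can be pictured as follows, where `llm` and the metric functions are stand-ins for the components discussed in the metrics paragraph below; the names and prompt wording are illustrative assumptions.

```python
# Minimal sketch of generative evaluation, assuming a callable
# llm(prompt) -> str and reference-based metric functions
# metric(candidate, reference) -> float; names are illustrative.
def generative_eval(items, llm, metrics):
    """items: list of dicts with 'begin' (initial text) and
    'reference' (the real human-written continuation)."""
    totals = {name: 0.0 for name in metrics}
    for item in items:
        continuation = llm(f"Continue this news item: {item['begin']}")
        for name, metric in metrics.items():
            totals[name] += metric(continuation, item["reference"])
    # Average each metric over the dataset.
    return {name: total / len(items) for name, total in totals.items()}
```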
In terms of metrics, these include classification metrics such as accuracy, precision, recall, and others, which are applicable to human evaluation, discriminative evaluation, and selective evaluation. Generative evaluation, on the other hand, encompasses both lexical and semantic metrics. Lexical metrics evaluate the extent of token overlap between the generated text and the reference information, including metrics such as BLEU [18], ROUGE [19], and the newly proposed kwPrec. Semantic metrics gauge the similarity in meaning between sentences, with examples including BERTScore [27], GPT-judge [28], and GPTScore [29], among others.
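The paper defines kwPrec precisely; the following is only a sketch in its spirit, a keyword-level precision in which `extract_keywords` stands in for whatever keyword extractor is used (for Chinese text, typically a tokenizer plus stop-word filtering).

```python
# Sketch of a keyword-precision metric in the spirit of kwPrec: the share
# of keywords in the generated text that also occur in the reference.
# extract_keywords is a stand-in; the paper's exact definition may differ.
def kw_precision(generated: str, reference: str, extract_keywords) -> float:
    keywords = extract_keywords(generated)
    if not keywords:
        return 0.0
    supported = sum(1 for kw in keywords if kw in reference)
    return supported / len(keywords)
```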
In terms of granularity, evaluations can be conducted at both the sentence and keyword levels. Owing to our annotation methodology, our dataset is marked at the keyword level to signify instances of hallucinations. This approach affords a broader spectrum of possibilities for configuring the evaluation task, enabling the evaluated model to address the presence of hallucinations at either the sentence level or keyword level.
C. Evaluation Framework
In order to accommodate different forms of evaluation methods, we have developed a data-secure, easy-to-extend, and easy-to-use evaluation framework, as illustrated in Fig. 7.
The framework comprises four ascending layers: the dependency layer, the evaluator layer, the core layer, and the interface layer. The dependency layer delineates the requisite underlying modules for the evaluation framework, encompassing datasets, LLM hubs, and diverse metrics. Notably, all underlying modules are extensible; datasets may be supplanted with customized versions, LLMs sourced from APIs or platforms such as Hugging Face³, and metrics tailored individually. The evaluator layer, constituting the second tier, centers on an abstract class, Evaluator, and its various implementations. Within this layer, three distinct types are implemented: GenerativeEvaluator, DiscriminativeEvaluator, and SelectiveEvaluator. Users may also engineer custom evaluators, contingent upon adherence to the interface specifications of the abstract class, necessitating merely three function overloads. The core layer, representing the third stratum, comprises two principal modules: experiment.py and analyst.py. The former facilitates experiments involving multiple LLMs, evaluators, and processes, whereas the latter is tasked with the statistical analysis of experimental outcomes.
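In outline, the Evaluator contract can be pictured as below; the three method names are illustrative assumptions, not the framework's verbatim signatures.

```python
from abc import ABC, abstractmethod

# Sketch of the Evaluator abstraction described above; the three abstract
# methods are illustrative names, not the framework's actual interface.
class Evaluator(ABC):
    def __init__(self, model, dataset):
        self.model = model
        self.dataset = dataset

    @abstractmethod
    def prompt(self, data_point) -> str:
        """Render one data point into a prompt for the evaluated LLM."""

    @abstractmethod
    def parse(self, model_output: str):
        """Turn the raw model output into a structured verdict."""

    @abstractmethod
    def score(self, verdicts) -> dict:
        """Aggregate per-item verdicts into metrics such as accuracy."""
```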
The interface layer, constituting the final tier, orchestrates the user's interaction with UHGEval. A concise 20-line demonstration is provided to expedite user initiation, complemented by run.py, which can initiate experiments via the command line.
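As a self-contained illustration of the kind of loop that experiment.py drives, consider the sketch below; it reuses the illustrative Evaluator contract above and is not the framework's actual code.

```python
# Self-contained sketch of a multi-LLM, multi-evaluator experiment loop;
# not the framework's actual code. Each llm is a callable prompt -> str
# with a .name attribute; evaluator classes follow the sketch above.
def run_experiment(llms, evaluator_classes, dataset):
    results = {}
    for llm in llms:
        for evaluator_cls in evaluator_classes:
            evaluator = evaluator_cls(llm, dataset)
            verdicts = [
                evaluator.parse(llm(evaluator.prompt(data_point)))
                for data_point in dataset
            ]
            results[(llm.name, evaluator_cls.__name__)] = evaluator.score(verdicts)
    return results  # analyst.py-style statistics can then summarize this dict
```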
UHGEval is both intuitive and secure for users, offering efficient usage while concurrently ensuring the integrity of experimental results through robust resistance to exceptions and support for resuming evaluations after unexpected interruptions. For developers and researchers, the modules within the Dependency and Evaluator layers are fully interchangeable, thereby affording considerable flexibility for expansion.

³ https://huggingface.co/models
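One common way to provide such resilience, shown here purely as a sketch (the file layout and field names are illustrative, not the framework's actual mechanism), is to persist each result as it is produced and skip already-finished items on restart.

```python
import json
import os

# Sketch of a crash-resilient evaluation loop: append each result to a
# JSONL file, tolerate per-item failures, and skip finished items after
# a restart. File layout and field names are illustrative.
def resilient_run(items, evaluate_one, out_path="results.jsonl"):
    done = set()
    if os.path.exists(out_path):
        with open(out_path) as f:
            done = {json.loads(line)["id"] for line in f}
    with open(out_path, "a") as f:
        for item in items:
            if item["id"] in done:
                continue  # finished before the interruption
            try:
                record = {"id": item["id"], "result": evaluate_one(item)}
            except Exception as exc:  # resist exceptions, keep going
                record = {"id": item["id"], "error": str(exc)}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
```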
D. Experimental Setup
To establish a robust experimental framework, our configuration includes prompt engineering, ensuring equilibrium between positive and negative examples, optimizing hyperparameters, and configuring evaluators.
Prompt engineering. The prompt engineering technique employed is "intent + instruction + 3-shot (explainable) prompting." Intent delineates the LLM's role, instruction outlines the task for the LLM to execute, and the prompt incorporates three examples to aid the LLM's few-shot learning [1]. Furthermore, political content in examples is prohibited to adhere to the content policies of model service providers. Explainable prompting entails not merely acquiring results but also eliciting the model's rationale behind its responses, regardless of the impact on evaluation speed and cost. In discriminative and selective evaluations, it is otherwise indiscernible whether the model is conjecturing the outcome or discerning the presence of hallucinations. Consequently, the use of explainable prompting enables the validation of the model's confidence through the analysis of experimental results.
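A sketch of how such a template might be assembled is shown below; the structure follows the description above, but the wording is illustrative (the actual prompts are in Chinese).

```python
# Sketch of the "intent + instruction + 3-shot (explainable) prompting"
# template; shot contents and wording are illustrative placeholders.
def build_prompt(intent: str, instruction: str, shots, query: str) -> str:
    """shots: three (example_input, answer, rationale) triples; asking
    for the rationale is what makes the prompting 'explainable'."""
    parts = [intent, instruction]
    for example_input, answer, rationale in shots:
        parts.append(
            f"Input: {example_input}\nAnswer: {answer}\nReason: {rationale}"
        )
    parts.append(f"Input: {query}\nAnswer (with reason): ")
    return "\n\n".join(parts)
```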
Balancing positive and negative examples. To guarantee the reliability of experimental outcomes for all LLMs, we meticulously balance examples in discriminative and selective evaluations. Specifically, the LLM under evaluation will encounter an equal number of examples with and without hallucinations. This approach addresses the tendency of some models to learn patterns from the three examples in the prompts and produce conjectural rather than reasoned responses when making judgments. Such a tendency can introduce a considerable bias towards certain outcomes. An imbalance could complicate the analysis of experimental outcomes.
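A sketch of this balancing follows (field names are illustrative): each news item contributes exactly one hallucinated and one real continuation, so a model that always guesses the same label lands at exactly 50% accuracy.

```python
import random

# Sketch of example balancing; field names are illustrative. Each news
# item yields one hallucinated and one real continuation, so a constant
# guesser scores exactly 50% accuracy.
def balanced_items(news_items, seed=22):
    rng = random.Random(seed)
    items = []
    for news in news_items:
        items.append({"continuation": news["hallucinated"], "label": True})
        items.append({"continuation": news["real"], "label": False})
    rng.shuffle(items)
    return items
```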
Hyperparameter settings. Managing parameters for heterogeneous LLMs is a multifaceted endeavor, as different LLMs feature unique interface designs, and the same parameters can have varying implications across LLMs. For example, the level of determinism influenced by the temperature parameter varies. Despite these challenges, we commit to the principle of "guaranteeing overall output determinism while allowing for slight randomness, and aiming for consistent parameter settings across models." Consequently, we configured parameters including temperature, top_p, top_k [1], and the random seed. To ensure output determinism and improve reproducibility, we set the temperature to 0.1. Considering that OpenAI models advise against adjusting temperature and top_p simultaneously, we minimally altered top_p, setting it at 0.9. We set top_k to 5, which is effective for certain models. To further enhance reproducibility, we established a seed for random number generators, setting it at 22.
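In configuration form, the settings above amount to something like the following sketch; the filtering helper is an assumption, reflecting that each provider accepts a different subset of these parameters.

```python
# The decoding settings described above, as a single shared configuration;
# the config_for helper is a sketch, since each provider's API accepts a
# different subset of these parameters.
GENERATION_CONFIG = {
    "temperature": 0.1,  # near-deterministic output for reproducibility
    "top_p": 0.9,        # minimally altered, per OpenAI's guidance
    "top_k": 5,          # effective for certain (mostly open-source) models
    "seed": 22,          # fixed random seed for reproducibility
}

def config_for(supported_keys):
    """Keep only the parameters a given model's API accepts."""
    return {k: v for k, v in GENERATION_CONFIG.items() if k in supported_keys}
```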
Evaluator settings. Discriminative evaluation encompasses assessments at two levels of granularity: sentence-level and keyword-level.
Prompt design for both levels utilizes the "intent + instruction + 3-shot (explainable) prompting" approach. Furthermore, we maintain a balanced representation of positive and negative examples at both levels. For discriminative evaluation, accuracy serves as the metric. Selective evaluation adheres to the identical prompt design. Each evaluated LLM is presented with one positive and one negative example for every news item. To uphold the integrity of the evaluation, the order of positive and negative examples is randomly alternated with a 50% chance. Accuracy is also employed as the evaluation metric. The generative evaluation's prompt design adheres to the principle of UHG. Evaluation metrics comprise 4-gram BLEU (BLEU-4), longest common subsequence-based ROUGE (ROUGE-L), kwPrec, and BERTScore.
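A sketch of the selective protocol with the 50% order swap and accuracy computation is given below; the field names and the chooser callable are illustrative assumptions.

```python
import random

# Sketch of selective evaluation: per news item, one real and one
# hallucinated continuation, with the A/B order swapped with probability
# 0.5. choose_option(begin, option_a, option_b) -> "A" or "B" wraps the
# LLM call; field names are illustrative.
def selective_accuracy(news_items, choose_option, seed=22):
    rng = random.Random(seed)
    correct = 0
    for news in news_items:
        options = [("real", news["real"]), ("hallucinated", news["hallucinated"])]
        if rng.random() < 0.5:
            options.reverse()  # randomize which continuation appears as A
        picked = choose_option(news["begin"], options[0][1], options[1][1])
        label = options[0][0] if picked == "A" else options[1][0]
        correct += label == "hallucinated"  # task: identify the hallucination
    return correct / len(news_items)
```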
E. Results and Analysis
Results are presented in Table IV, Table V, and Table VI.

Discriminative evaluation. Initially, the GPT series models' performance is notably superior. In the keyword-level assessment, GPT4-0613 and GPT3.5-Turbo respectively achieve the top two rankings. At the sentence level, GPT4-0613 and GPT4-1106 respectively attain the first and second spots. As previously hypothesized, discriminative evaluation requires robust foundational capabilities from LLMs, such as knowledge recall, utilization, and judgment. The GPT series models markedly surpass other models, showcasing their formidable foundational capabilities. Moreover, a comparison of experimental outcomes at the keyword and sentence levels reveals that accuracy is generally superior at the keyword level. This could stem from the fact that the hallucinated continuations in our dataset exhibit sufficient fluency, aligning with the fluency distribution of LLM outputs. This can potentially confuse the evaluated LLM, complicating the judgment of the continuation's authenticity. Conversely, keywords bypass fluency concerns, rendering keyword-level evaluation more amenable to LLMs. This observation implies that detecting hallucinations could be more dependable at the keyword level than at the sentence level.
Selective evaluation. Firstly, GPT4-1106 clinches the top spot, reaffirming the formidable foundational capabilities of the GPT series models. Concurrently, Xinyu2-70B attains second place, excelling as a model trained on the Chinese news corpus. This achievement, to a degree, confirms the merit of domain-specific LLMs. Secondly, when comparing the outcomes of the selective evaluation with those of the discriminative evaluation at the sentence level, most LLMs exhibit improved accuracy. This is consistent with our prior conjecture that furnishing LLMs with more contrasting information alleviates the demand on the model's fact recall, thus diminishing the challenge of selective evaluation. Therefore, we posit that selective evaluation is comparatively simpler for LLMs. Thirdly, a decline is observed in discriminative evaluation outcomes from GPT4-0613 to GPT4-1106, whereas selective evaluation outcomes register a notable increase of around 5%. This substantiates the "seesaw phenomenon," wherein certain capabilities are enhanced while others may regress in tandem with the model's upgrade [30]. This suggests that the decision to either enhance a single capability individually or to balance multiple capabilities is critical.
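To make the three protocols concrete, the following minimal sketch (ours, not the benchmark framework itself) shows how per-item judgments could be aggregated into accuracy figures of the kind reported in Table IV; `judge_keyword` and `choose` are hypothetical stand-ins for calls to the evaluated LLM.

```python
# Minimal sketch under stated assumptions; item dicts and judging callables
# are illustrative, not the benchmark's actual data schema or API.
from typing import Callable, Mapping, Sequence

def accuracy(preds: Sequence[bool], golds: Sequence[bool]) -> float:
    """Share of judgments matching gold labels, over valid items only."""
    pairs = list(zip(preds, golds))
    return sum(p == g for p, g in pairs) / len(pairs) if pairs else 0.0

def keyword_level_acc(items: Sequence[Mapping],
                      judge_keyword: Callable[[str, str], bool]) -> float:
    # Discriminative, keyword level: the model labels each keyword of a
    # continuation as reasonable (True) or hallucinated (False).
    preds, golds = [], []
    for item in items:
        for kw, is_reasonable in item["keywords"].items():
            preds.append(judge_keyword(item["context"], kw))
            golds.append(is_reasonable)
    return accuracy(preds, golds)

def selective_acc(items: Sequence[Mapping],
                  choose: Callable[[str, str, str], str]) -> float:
    # Selective: the model picks the real continuation when both the real
    # and the hallucinated candidates are shown with the news beginning.
    preds = [choose(i["context"], i["real"], i["hallucinated"]) == "real"
             for i in items]
    return accuracy(preds, [True] * len(items))
```

The sentence-level discriminative score follows the same pattern as the keyword-level one, with one judgment per continuation instead of one per keyword.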
TABLE IV DISCRIMINATIVE (KEYWORD AND SENTENCE LEVEL) AND SELECTIVE EVALUATION RESULTS

| Model | Disc.-Keyword avg. acc. | Disc.-Keyword avg. #kws | Disc.-Keyword #valid | Disc.-Sentence avg. acc. | Disc.-Sentence #valid | Selective acc. |
|---|---|---|---|---|---|---|
| Aquila-34B | 53.62% | 3.00 | 3719 | 49.86% | 5009 | 54.29% |
| Baichuan2-13B | 51.63% | 3.128 | 4478 | 46.88% | 5047 | 50.23% |
| Baichuan2-53B | 52.13% | 2.98 | 1656 | 50.81% | 1478 | 54.67% |
| ChatGLM2-6B | 50.80% | 3.10 | 4289 | 43.87% | 5130 | 43.59% |
| GPT3.5-Turbo | 53.72% | 3.08 | 4183 | 50.02% | 5039 | 49.03% |
| GPT4-0613 | 70.04% | 3.07 | 4100 | 57.42% | 5024 | 55.20% |
| GPT4-1106 | 69.48% | 3.10 | 4189 | 57.38% | 4903 | … |
| InternLM-20B | 50.92% | 3.10 | 4388 | 51.01% | 5130 | … |
| Qwen-14B | 52.86% | 3.125 | 4478 | 50.58% | 5130 | … |
| Xinyu-7B | 49.58% | 3.12 | 4451 | 48.66% | 5014 | … |
| Xinyu2-70B | 52.94% | 3.12 | 4482 | 55.04% | 5128 | … |

Note: "…" denotes values missing from the available text; the Selective #valid column is absent entirely.
TABLE V GENERATIVE EVALUATION RESULTS

| Model | avg. bleu | avg. rouge | avg. kwPrec | avg. bert | avg. len. |
|---|---|---|---|---|---|
| Aquila-34B | 11.80% | 6.04% | 34.36% | 67.51% | 43.76 |
| Baichuan2-13B | 8.84% | 6.96% | 25.51% | 65.69% | 46.04 |
| Baichuan2-53B | 10.06% | 7.55% | 26.45% | 67.65% | 49.40 |
| ChatGLM2-6B | 9.17% | 7.17% | 24.53% | 64.89% | 46.27 |
| GPT3.5-Turbo | 9.02% | 6.30% | 27.74% | 66.39% | 39.04 |
| GPT4-0613 | 10.74% | 7.19% | 28.47% | 67.36% | 44.41 |
| GPT4-1106 | 8.62% | 6.86% | 30.94% | 67.38% | 44.83 |
| InternLM-20B | 14.89% | 7.96% | 31.10% | 67.92% | 51.55 |
| Qwen-14B | 12.72% | 6.54% | 32.95% | 66.96% | 45.85 |
| Xinyu-7B | 10.30% | 6.52% | 28.64% | 67.32% | … |
| Xinyu2-70B | 13.41% | 7.05% | 33.93% | 68.97% | … |

Note: "…" denotes values missing from the available text; the #valid column is absent entirely.
TABLE VI EVALUATION RESULTS BY DIFFERENT TYPES

| Model | KNO | DOC | GEN | NUM |
|---|---|---|---|---|
| Aquila-34B | 59.55% | **68.73%** | 48.43% | 54.77% |
| Baichuan2-13B | 54.97% | 60.19% | 49.67% | **62.04%** |
| Baichuan2-53B | 53.75% | 51.88% | **56.26%** | 49.56% |
| ChatGLM2-6B | 52.10% | 50.65% | **52.58%** | 48.43% |
| GPT3.5-Turbo | 57.70% | **62.81%** | 45.56% | 53.15% |
| GPT4-0613 | **57.46%** | 57.35% | 44.23% | 53.09% |
| GPT4-1106 | 40.94% | 48.44% | 42.63% | **52.02%** |
| InternLM-20B | 55.21% | **63.13%** | 47.63% | 50.87% |
| Qwen-14B | 51.06% | **61.47%** | 47.85% | 50.00% |
| Xinyu-7B | **59.87%** | 53.74% | 51.93% | 54.46% |
| Xinyu2-70B | 55.99% | 53.52% | 55.73% | **57.07%** |

Note: Read by row; in each row of values, the optimal value is bolded. KNO, DOC, GEN, and NUM denote knowledge-intensive, document-intensive, general, and number-intensive news, respectively.
Generative evaluation. Firstly, InternLM-20B secures two top spots, one runner-up position, and boasts the longest average generation length. This reflects the model's superior credibility in content generation. However, its kwPrec score is modest, indicating potential for enhancement in keyword-level information generation.
Secondly, Xinyu2-70B captures one top spot, two runner-up positions, and has the second-longest average generation length, underscoring its strong credibility in content generation. Its sole underperformance is in the ROUGE metric, which is recall-oriented. Conversely, BLEU and kwPrec are precision-oriented, suggesting the model is adept at delivering consistent output yet faces challenges with factual recall. Thirdly, Aquila-34B achieves the pinnacle in kwPrec scoring, signaling a notable edge in generation quality. However, this could be attributed to its comparatively shorter average generation length: kwPrec assesses the coverage of extended tokens (i.e., keywords), allowing brief continuations with few keywords to secure higher keyword coverage relative to the reference information. Fourthly, Baichuan2-53B registers a high ROUGE score, indicative of its proficiency in fact recall from its parameters, demonstrating accurate factual retrieval. Fifthly, the GPT series exhibits subpar performance, owing to the insubstantial Chinese data in its training corpus. For example, the Chinese data incorporated in GPT's training from the Common Crawl corpus comprises less than 5%⁴.

⁴ https://commoncrawl.github.io/cc-crawl-statistics/plots/languages.html
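The contrast drawn above between precision-oriented kwPrec and recall-oriented ROUGE can be made concrete with a toy sketch; `extract_keywords` below is a naive placeholder rather than the extractor actually used in our pipeline, so this is illustrative only.

```python
# Hedged sketch of precision- vs recall-oriented keyword coverage.
# A real system would use a proper (e.g., Chinese) keyword extractor.

def extract_keywords(text: str) -> set[str]:
    # Placeholder: whitespace tokens of length >= 2 stand in for keywords.
    return {tok for tok in text.split() if len(tok) >= 2}

def kw_precision(generated: str, reference: str) -> float:
    """Share of generated keywords that also appear in the reference."""
    kws = extract_keywords(generated)
    return sum(kw in reference for kw in kws) / len(kws) if kws else 0.0

def kw_recall(generated: str, reference: str) -> float:
    """Share of reference keywords covered by the generation (ROUGE-like)."""
    kws = extract_keywords(reference)
    return sum(kw in generated for kw in kws) / len(kws) if kws else 0.0
```

Because the denominator of `kw_precision` is the set of generated keywords, a very short continuation with few keywords can score high precision while recalling little of the reference, which is exactly the caveat noted for Aquila-34B.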
Evaluations by type. Given the categorization of news into four types, we can proceed with an in-depth analysis. We focus on the selective evaluation results and perform a comprehensive breakdown of these across the four types, as illustrated in Table VI. Initially, the majority of LLMs demonstrate enhanced accuracy for knowledge-intensive and document-intensive news. This observation is consistent with the general consensus that the training datasets for LLMs typically include substantial human knowledge and official documentation of major historical events. Furthermore, the majority of LLMs show reduced accuracy on general and number-intensive news. General news often contains societal minutiae, which are not the focus of LLM training, potentially leaving a deficiency in this factual domain within the model parameters. Number-intensive news poses a considerable challenge for most LLMs, given that encoding identical numbers carrying different historical meanings is complex. Lastly, GPT4-1106 attains especially high scores on the demanding number-intensive news, which might be attributed to its sophisticated handling of numerical data.
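A breakdown of this kind can be produced by grouping per-item selective-evaluation outcomes by news type; the short sketch below assumes results are available as (type, correct) pairs, which is our illustrative schema rather than the framework's actual output format.

```python
# Sketch of the per-type aggregation behind a Table VI-style breakdown.
from collections import defaultdict

def accuracy_by_type(results):
    """results: iterable of (news_type, is_correct) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for news_type, correct in results:
        totals[news_type] += 1
        hits[news_type] += bool(correct)
    return {t: hits[t] / totals[t] for t in totals}

print(accuracy_by_type([("KNO", True), ("KNO", False), ("NUM", True)]))
# {'KNO': 0.5, 'NUM': 1.0}
```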
F. Discussion

Each of the three evaluation methods possesses distinct advantages and drawbacks. Discriminative evaluation is often the method of choice for a range of standard benchmarks [6], [24]. This approach is intuitive, and the construction of evaluation prompts is straightforward. Selective evaluation resembles discriminative evaluation but is marginally less demanding because it includes a reference option for contrast. In both discriminative and selective evaluations, certain models might be suspected of guessing answers from the few shots due to inadequate reasoning skills, which can undermine the reliability of the outcomes. Consequently, the use of explainable prompting becomes essential. Generative evaluation most closely mirrors real-world applications. However, the generated content is unrestricted, which poses challenges for even the most dependable reference-based evaluation techniques. Therefore, employing a combination of metrics simultaneously, including lexical evaluation based on token coverage and semantic evaluation based on textual similarity, is imperative.
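An illustrative (not verbatim) selective-evaluation prompt with explainable prompting might look as follows; requiring a rationale before the final answer discourages blind guessing. The template wording and field names are our assumptions, not the benchmark's exact prompt.

```python
# Illustrative prompt template for the selective setting.
SELECTIVE_PROMPT = """News beginning:
{context}

Continuation A: {option_a}
Continuation B: {option_b}

One continuation is the real follow-up; the other contains hallucinations.
First explain your reasoning step by step, then finish with exactly
"Answer: A" or "Answer: B"."""

def build_prompt(context: str, real: str, hallucinated: str,
                 real_first: bool = True) -> str:
    # Randomizing which option is real (via real_first) avoids position bias.
    a, b = (real, hallucinated) if real_first else (hallucinated, real)
    return SELECTIVE_PROMPT.format(context=context, option_a=a, option_b=b)
```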
The foundational capabilities required of LLMs can be arrayed on a spectrum from simple to complex: generative, selective, and discriminative evaluation. Generative evaluation entails the direct invocation of parameters for continuation, bypassing the need for an extensive grasp of instructions, which suits models with minimal fine-tuning. Selective evaluation necessitates a degree of inferential reasoning but offers comparative choices, rendering the level of difficulty moderate. Conversely, discriminative evaluation demands the precise retrieval of factual information, thereby increasing the challenge. Moreover, the evaluations cater to different application contexts. Should the objective be solely to improve the model's capacity for reliable continuation, generative evaluation would suffice. In the training of a dependable chatbot, selective and discriminative evaluations prove suitable. When aiming to train a reward model, selective evaluation is beneficial, offering evaluation of positive and negative instances. If the goal is to enhance the model's ability to recall and apply knowledge, discriminative evaluation emerges as the most demanding option.

IV. RELATED WORKS

A. Large Language Models
Language models are pivotal in computer science, evolving from statistical language models, to neural language models, to pre-trained language models (PLMs), and now to the current generation of LLMs. With the advent of models such as ChatGPT, contemporary LLMs exhibit new capabilities in handling complex tasks: they can manage few-shot tasks via in-context learning and tackle mixed tasks by following instructions [1]. LLMs can be classified along two dimensions. The first dimension concerns the openness of the model weights. For example, open-source models include Meta's LLaMA [17], Tsinghua University's GLM [12], and Alibaba's Qwen [14], while closed-source models feature OpenAI's GPT [20], Baidu's ERNIE Bot [31], and Anthropic's Claude⁵, among others. The second dimension differentiates between the use of a PLM or a supervised fine-tuned (SFT) model for specific inferences. A PLM is a language model trained on extensive unlabeled textual data to discern underlying patterns, structures, and semantic knowledge within the corpus.
Conversely, an SFT model involves further training a PLM with labeled datasets tailored to a specific task, with the goal of improving performance in that area. Many open-source models, including LLaMA, GLM, and Qwen, have made their PLM weights publicly available. For SFT models, users can access the chat variants of open-source models or the API services provided by closed-source models. In our research, we focus primarily on evaluating the closed-source GPT series models and open-source Chinese chat models.

⁵ https://www.anthropic.com/index/introducing-claude
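As a hedged sketch of the local inference route described above, an open-source chat (SFT) model can be run via Hugging Face transformers; the model name is only an example, and real chat models generally expect their own chat template rather than a raw prompt.

```python
# Illustrative sketch, not our evaluation harness; the model name is an
# example and may require substantial GPU memory to load.
from transformers import AutoModelForCausalLM, AutoTokenizer

def local_chat(prompt: str, model_name: str = "Qwen/Qwen-14B-Chat") -> str:
    tok = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_name,
                                                 trust_remote_code=True)
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tok.decode(new_tokens, skip_special_tokens=True)
```

The closed-source route replaces this local call with an HTTP request to the vendor's chat API; the surrounding evaluation logic stays the same.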
B. Hallucinations in LLM

Despite remarkable advancements, LLMs continue to encounter challenges, with hallucination being one of the most notable. Hallucination in language models refers to generating content that strays from factual accuracy, leading to unreliable outputs. Hallucinations occur when the generated content is not aligned with user input, deviates from the model's previous outputs, or is at odds with established real-world knowledge [5]. Specific examples include inaccuracies in ages, currencies, scores, and other numerical values; citing fictional statements; inventing non-existent characters; and muddling timelines by merging events from different periods [2]. Regarding the causes of hallucinations, several factors can be responsible [5]. One contributing factor is the use of inaccurate or incomplete training data. During training, LLMs fine-tune their parameters with vast quantities of text data, but this data may be flawed, harboring errors, inaccuracies, or gaps in information. Another factor involves inconsistencies in contextual information. While LLMs typically consider previously generated context when producing content, challenges in managing long-term dependencies or understanding complex contexts can result in inconsistencies. Additionally, hallucinations can arise from lacking or erroneous world knowledge. Although LLMs gain considerable world knowledge via training data, they may be deficient in specific domain knowledge or misinterpret certain facts, leading to hallucinations. Furthermore, model limitations, including generation strategies and alignment methods, can also play a role in hallucinations during content creation.
C. Hallucination Evaluation Benchmarks

To more effectively tackle the issue of hallucinations, constructing evaluation benchmarks is essential, and numerous outstanding contributions have surfaced in this context. This section reviews existing work on the development of benchmark datasets, their characteristics, and the particular methodologies for evaluation. Basic information about these benchmarks is presented in Table VII.

TABLE VII

Benchmark (Released Year): ChineseFactEval'23 [32]; CSK-PN'23 [33]; FACTOR'23 [10]; FActScore'23 [9]; HaLoCheck'23 [34]; FactualityPrompts'22 [35]; HADES'22 [7]; HalluQA'23 [24]; HaluEval'23 [6]; HILT'23 [2]; KoLA-KC'23 [36]; Med-HALT'23 [37]; PHD'23 [8]; SelfAware'23 [38]; STSN'23 [39]; TruthfulQA'22 [28]; UHGEval (Ours); XSum Hallu'20 [40]

Generation Method and Annotation (values for these two columns, in the order they appear in the source): Manual; Direct: Common KGs; CHG: Wiki, News; CHG: Wiki; CHG; Direct: Wiki; CHG: Wiki; CHG, Manual: TruthfulQA, Wiki; Manual, Auto; Manual, Auto; CHG: Alpaca, HotpotQA, etc.; Manual; CHG: NYT, Politifact; Auto; Direct: Wiki, evolving dataset; No Need; Direct: MedMCQA, PubMed, etc.; Manual; CHG: Wiki; Manual; CHG: Quora, HowStuffWorks; Manual; UHG; Manual; Manual; Auto, Manual; UHG: Xinhua News; Manual; UHG: XSum; Manual; No Need; Auto; No Need; No Need; Auto; Manual
Metric (in benchmark order): Acc; Acc; FACTOR Acc; FActScore by Human; HaLoCheck, selfcheckGPT; NE Error, Entailment; Acc, G-Mean, BSS, AUC, etc.; Non-hallucination Rate; Acc; HVI; BLEU, ROUGE; Acc, Pointwise Score; F1, Acc, Prec, Reca; F1, Acc; Acc, Prec, Reca; Acc by Human or GPT-judge; Acc, kwPrec, BERTScore, etc.; ROUGE, BERTScore, Acc, etc.

Granularity (in benchmark order): Word, Document; Sentence; Word; Sentence; Short Sentence; Sentence; Document, Sentence; Word; Sentence; Document; Word; Document; All; Document; Sentence; Sentence, Concept; Sentence; Sentence, Keyword

Lang. (in benchmark order): CN; EN; EN; EN; EN; EN; EN; CN; EN; EN; EN; EN; EN; EN; EN; EN; CN; EN
Note: The Generation Method column provides the approach, along with the base dataset if one is used. In this column, CHG refers to constrained hallucination generation, UHG refers to unconstrained hallucination generation, Manual indicates manual construction, and Direct implies utilizing the base dataset without the need for generation. In the Annotation column, Auto denotes automatic machine annotation. In the Metric column, Acc, Prec, and Reca respectively indicate Accuracy, Precision, and Recall. In the Lang. column, CN and EN respectively stand for Chinese and English.
Benchmark dataset construction. Dataset construction usually involves three steps. Firstly, real-world texts for hallucination generation are collected; most benchmarks directly use existing datasets, such as Wiki [10], Alpaca [6], PubMed [37], Quora [38], and so on. Secondly, hallucinations are generated, usually by LLMs such as GPT3.5-Turbo, and most works use the constrained hallucination generation (CHG) paradigm [10], [9], [34], [6], [2], [8], [38]; STSN [39] and XSum Hallu [40] are the only two benchmarks that use UHG as we do. Thirdly, since it is not certain that the content generated by the LLMs actually contains hallucinations, annotation is often required, mostly with human involvement, though some works use automatic machine labeling [10], [35], [24], [6], [36]. These are the basic methods for constructing datasets, but other paradigms also exist, such as constructing the dataset purely through manual labor, e.g., ChineseFactEval [32], HADES [7], and TruthfulQA [28].

Benchmark dataset characteristics. Regarding the granularity of the hallucinations labeled in the datasets, most studies assess hallucinations at the sentence and document levels,
while a few examine them at the word (or keyword, concept) level. With respect to domain, the majority of datasets cover the general domain, while some benchmarks target specific domains; for instance, HaLoCheck [34] focuses on the NBA, Med-HALT [37] on medicine, and our UHGEval on news. Concerning language, most evaluation datasets are in English. To our knowledge, the only two Chinese benchmarks, ChineseFactEval [32] and HalluQA [24], contain only 125 and 450 questions, respectively. Given the notably limited size of these datasets, our work significantly enhances the pool of data available for Chinese hallucination evaluation.
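To make the three-step construction and annotation pipeline concrete, a record in a keyword-annotated benchmark of this kind might look like the following minimal sketch; all field names and values are our own illustration, not the actual UHGEval schema:

```python
# A hypothetical keyword-annotated record; the field names are illustrative
# only and do not reproduce any benchmark's real schema.
record = {
    "id": "news_00042",
    "headline": "...",        # original news headline
    "context": "...",         # beginning of the real article
    "continuation": "...",    # text freely generated by an LLM (UHG step)
    "label": "hallucinated",  # document-level judgment from annotation
    "keywords": [             # keyword-level annotations
        {"span": "August 2021", "reasonable": False},  # contradicts source
        {"span": "Beijing", "reasonable": True},
    ],
}

# Fraction of keywords judged reasonable for this record:
flags = [k["reasonable"] for k in record["keywords"]]
print(sum(flags) / len(flags))  # -> 0.5
```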
Evaluation scheme. Existing works use a variety of ways to measure hallucinations. Due to cost and time constraints, however, building automatic metrics for evaluation is still dominant, and only a small proportion of works use human evaluation [9], [28], [40]. In terms of specific evaluation metrics, most works adopt common classification metrics, e.g., F1, accuracy, precision, and recall. Some other works construct their own calculation methods, e.g., FACTOR [10], FActScore [9], HaLoCheck [34], and HVI [2]. However, these metrics are rule-based and can only evaluate the ability of LLMs to classify hallucinations, not their ability to generate content without hallucinations. Thus, some benchmarks explore generative evaluation further. For example, KoLA [36] evaluates knowledge creation (KC) using BLEU and ROUGE to measure the degree of overlap between the output and the reference; TruthfulQA [28] evaluates hallucinations using a specially trained classifier, GPT-judge; and FactualityPrompts [35] simultaneously employs a hallucinated named-entity error based on n-gram coverage and a semantic entailment ratio.
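To illustrate the flavor of these generative metrics, the following minimal sketch implements the n-gram-coverage intuition behind a hallucinated named-entity error: an entity mentioned in the generation counts as an error if it is absent from the reference text. Production systems such as FactualityPrompts rely on an NER model and a large ground-truth corpus; the hand-supplied entity list and helper function below are our own simplification:

```python
def named_entity_error(generated_entities, reference_text):
    """Fraction of generated entities not covered by the reference text."""
    if not generated_entities:
        return 0.0
    missing = [e for e in generated_entities if e not in reference_text]
    return len(missing) / len(generated_entities)

reference = "The summit was held in Hangzhou in September 2016."
entities = ["Hangzhou", "September 2016", "Shanghai"]  # "Shanghai" is hallucinated
print(named_entity_error(entities, reference))  # -> 0.333...
```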
# V. CONCLUSION
LLMs are experiencing a rapid evolution, heralding a new era of potential applications within the realm of professional content generation. The progression of LLMs in this domain necessitates the establishment of robust benchmarks to steer their development effectively. In this work, we introduce a novel benchmark built on unconstrained hallucination generation: a dataset specifically curated for hallucinated news continuation that encompasses more than 5,000 instances annotated at the keyword level. Additionally, we propose a secure, scalable, and user-friendly evaluation framework to facilitate comprehensive assessments. Through meticulous experimentation on eleven prominent LLMs, our study has unearthed a series of enlightening findings. Looking ahead, our research endeavors will persist in exploring the intricacies of hallucination phenomena within professional content generation. Concurrently, on the benchmarking front, we aspire to augment our datasets to encompass a more diverse spectrum of domains and linguistic variations, thereby broadening the applicability and relevance of our benchmarks.
# REFERENCES
[1] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou et al., "A survey of large language models," arXiv preprint arXiv:2303.18223, 2023.
[2] V. Rawte, S. Chakraborty, A. Pathak, A. Sarkar, S. Tonmoy, A. Chadha et al., "The troubling emergence of hallucination in large language models – an extensive definition, quantification, and prescriptive remediations," arXiv preprint arXiv:2310.04988, 2023.
[3] C. Wang, X. Liu, Y. Yue, X. Tang, T. Zhang, C. Jiayang et al., "Survey on factuality in large language models: Knowledge, retrieval and domain-specificity," arXiv preprint arXiv:2310.07521, 2023.
[4] V. Rawte, A. Sheth, and A. Das, "A survey of hallucination in large foundation models," arXiv preprint arXiv:2309.05922, 2023.
[5] Y. Zhang, Y. Li, L. Cui, D. Cai, L. Liu, T. Fu et al., "Siren's song in the AI ocean: A survey on hallucination in large language models," arXiv preprint arXiv:2309.01219, 2023.
[6] J. Li, X. Cheng, W. X. Zhao, J.-Y. Nie, and J.-R. Wen, "HaluEval: A large-scale hallucination evaluation benchmark for large language models," arXiv preprint arXiv:2305.11747, 2023.
[7] T. Liu, Y. Zhang, C. Brockett, Y. Mao, Z. Sui, W. Chen et al., "A token-level reference-free hallucination detection benchmark for free-form text generation," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), S. Muresan, P. Nakov, and A. Villavicencio, Eds. Dublin, Ireland: Association for Computational Linguistics, May 2022, pp. 6723–6737. [Online]. Available: https://aclanthology.org/2022.acl-long.464
[8] S. Yang, R. Sun, and X. Wan, "A new benchmark and reverse validation method for passage-level hallucination detection," arXiv preprint arXiv:2310.06498, 2023.
[9] S. Min, K. Krishna, X. Lyu, M. Lewis, W.-t. Yih, P. W. Koh et al., "FActScore: Fine-grained atomic evaluation of factual precision in long form text generation," arXiv preprint arXiv:2305.14251, 2023.
[10] D. Muhlgay, O. Ram, I. Magar, Y. Levine, N. Ratner, Y. Belinkov et al., "Generating benchmarks for factuality evaluation of language models," arXiv preprint arXiv:2307.06908, 2023.
[11] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin et al., "Training language models to follow instructions with human feedback," in Advances in Neural Information Processing Systems, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, Eds., vol. 35. Curran Associates, Inc., 2022, pp. 27730–27744. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf
[12] Z. Du, Y. Qian, X. Liu, M. Ding, J. Qiu, Z. Yang et al., "GLM: General language model pretraining with autoregressive blank infilling," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022, pp. 320–335.
[13] A. Yang, B. Xiao, B. Wang, B. Zhang, C. Bian, C. Yin et al., "Baichuan 2: Open large-scale language models," arXiv preprint arXiv:2309.10305, 2023.
[14] J. Bai, S. Bai, Y. Chu, Z. Cui, K. Dang, X. Deng et al., "Qwen technical report," arXiv preprint arXiv:2309.16609, 2023.
[15] InternLM, "InternLM: A multilingual language model with progressively enhanced capabilities," https://github.com/InternLM/InternLM, 2023.
[16] N. Muennighoff, T. Wang, L. Sutawika, A. Roberts, S. Biderman, T. L. Scao et al., "Crosslingual generalization through multitask finetuning," arXiv preprint arXiv:2211.01786, 2023.
[17] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei et al., "Llama 2: Open foundation and fine-tuned chat models," arXiv preprint arXiv:2307.09288, 2023.
[18] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "Bleu: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, P. Isabelle, E. Charniak, and D. Lin, Eds. Philadelphia, Pennsylvania, USA: Association for Computational Linguistics, Jul. 2002, pp. 311–318. [Online]. Available: https://aclanthology.org/P02-1040
[19] C.-Y. Lin, "ROUGE: A package for automatic evaluation of summaries," in Text Summarization Branches Out. Barcelona, Spain: Association for Computational Linguistics, Jul. 2004, pp. 74–81. [Online]. Available: https://aclanthology.org/W04-1013
[20] OpenAI, "GPT-4 technical report," arXiv preprint arXiv:2303.08774, 2023.
[21] M.-C. de Marneffe and J. Nivre, "Dependency grammar," Annual Review of Linguistics, vol. 5, no. 1, pp. 197–218, 2019. [Online]. Available: https://doi.org/10.1146/annurev-linguistics-011718-011842
[22] BAAI, "Aquila2," https://github.com/FlagAI-Open/Aquila2, 2023.
[23] Y. Chang, X. Wang, J. Wang, Y. Wu, L. Yang, K. Zhu et al., "A survey on evaluation of large language models," arXiv preprint arXiv:2307.03109, 2023.
[24] Q. Cheng, T. Sun, W. Zhang, S. Wang, X. Liu, M. Zhang et al., "Evaluating hallucinations in Chinese large language models," arXiv preprint arXiv:2310.03368, 2023.
[25] Y. Wang, Z. Yu, Z. Zeng, L. Yang, C. Wang, H. Chen et al., "PandaLM: An automatic evaluation benchmark for LLM instruction tuning optimization," arXiv preprint arXiv:2306.05087, 2023.
[26] J. Novikova, O. Dušek, A. Cercas Curry, and V. Rieser, "Why we need new evaluation metrics for NLG," in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, M. Palmer, R. Hwa, and S. Riedel, Eds. Copenhagen, Denmark: Association for Computational Linguistics, Sep. 2017, pp. 2241–2252. [Online]. Available: https://aclanthology.org/D17-1238
[27] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, "BERTScore: Evaluating text generation with BERT," in International Conference on Learning Representations, 2020. [Online]. Available: https://openreview.net/forum?id=SkeHuCVFDr
[28] S. Lin, J. Hilton, and O. Evans, "TruthfulQA: Measuring how models mimic human falsehoods," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), S. Muresan, P. Nakov, and A. Villavicencio, Eds. Dublin, Ireland: Association for Computational Linguistics, May 2022, pp. 3214–3252. [Online]. Available: https://aclanthology.org/2022.acl-long.229
[29] J. Fu, S.-K. Ng, Z. Jiang, and P. Liu, "GPTScore: Evaluate as you desire," arXiv preprint arXiv:2302.04166, 2023.
[30] S. Zheng, Y. Zhang, Y. Zhu, C. Xi, P. Gao, X. Zhou et al., "GPT-Fathom: Benchmarking large language models to decipher the evolutionary path towards GPT-4 and beyond," arXiv preprint arXiv:2309.16583, 2023.
[31] Y. Sun, S. Wang, S. Feng, S. Ding, C. Pang, J. Shang et al., "ERNIE 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation," arXiv preprint arXiv:2107.02137, 2021.
[32] B. Wang, E. Chern, and P. Liu, "ChineseFactEval: A factuality benchmark for Chinese LLMs," https://GAIR-NLP.github.io/ChineseFactEval, 2023.
[33] J. Chen, W. Shi, Z. Fu, S. Cheng, L. Li, and Y. Xiao, "Say what you mean! Large language models speak too positively about negative commonsense knowledge," in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), A. Rogers, J. Boyd-Graber, and N. Okazaki, Eds. Toronto, Canada: Association for Computational Linguistics, Jul. 2023, pp. 9890–9908. [Online]. Available: https://aclanthology.org/2023.acl-long.550
[34] M. Elaraby, M. Lu, J. Dunn, X. Zhang, Y. Wang, and S. Liu, "HaLo: Estimation and reduction of hallucinations in open-source weak large language models," arXiv preprint arXiv:2308.11764, 2023.
[35] N. Lee, W. Ping, P. Xu, M. Patwary, P. Fung, M. Shoeybi et al., "Factuality enhanced language models for open-ended text generation," in Advances in Neural Information Processing Systems, A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, Eds., 2022. [Online]. Available: https://openreview.net/forum?id=LvyJX20Rll
[36] J. Yu, X. Wang, S. Tu, S. Cao, D. Zhang-Li, X. Lv et al., "KoLA: Carefully benchmarking world knowledge of large language models," arXiv preprint arXiv:2306.09296, 2023.
[37] A. Pal, L. K. Umapathi, and M. Sankarasubbu, "Med-HALT: Medical domain hallucination test for large language models," arXiv preprint arXiv:2307.15343, 2023.
[38] Z. Yin, Q. Sun, Q. Guo, J. Wu, X. Qiu, and X. Huang, "Do large language models know what they don't know?" in Findings of the Association for Computational Linguistics: ACL 2023, A. Rogers, J. Boyd-Graber, and N. Okazaki, Eds. Toronto, Canada: Association for Computational Linguistics, Jul. 2023, pp. 8653–8665. [Online]. Available: https://aclanthology.org/2023.findings-acl.551
[39] N. Varshney, W. Yao, H. Zhang, J. Chen, and D. Yu, "A stitch in time saves nine: Detecting and mitigating hallucinations of LLMs by validating low-confidence generation," arXiv preprint arXiv:2307.03987, 2023.
[40] J. Maynez, S. Narayan, B. Bohnet, and R. McDonald, "On faithfulness and factuality in abstractive summarization," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, D. Jurafsky, J. Chai, N. Schluter, and J. Tetreault, Eds. Online: Association for Computational Linguistics, Jul. 2020, pp. 1906–1919. [Online]. Available: https://aclanthology.org/2020.acl-main.173
2311.04072 | 0 | 3 2 0 2
# v o N 7
# ] L C . s c [
1 v 2 7 0 4 0 . 1 1 3 2 : v i X r a
Preprint.
# BEYOND IMITATION: LEVERAGING FINE-GRAINED QUALITY SIGNALS FOR ALIGNMENT
Geyang Guo1*, Ranchi Zhao1*, Tianyi Tang1, Wayne Xin Zhao1,3†, Ji-Rong Wen1,2,3
1Gaoling School of Artificial Intelligence, Renmin University of China. 2School of Information, Renmin University of China. 3Beijing Key Laboratory of Big Data Management and Analysis Methods.
[email protected], [email protected], [email protected], [email protected], [email protected]
# ABSTRACT
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04072 | 1 | # ABSTRACT
Alignment with human preference is a desired property of large language models (LLMs). Currently, the main alignment approach is based on reinforcement learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is intricate to implement and train, thus recent studies explore how to develop alternative alignment approaches based on supervised fine-tuning (SFT). A major limitation of SFT is that it essentially does imitation learning, which cannot fully understand what are the expected behaviors. To address this issue, we propose an improved alignment approach named FIGA. Different from prior methods, we incorporate fine-grained (i.e., token or phrase level) quality signals that are derived by contrasting good and bad responses. Our approach has made two major contributions. Firstly, we curate a refined alignment dataset that pairs initial responses and the corresponding revised ones. Secondly, we devise a new loss function that can leverage fine-grained quality signals to instruct the learning of LLMs for alignment. Extensive experiments have demonstrated the effectiveness of our approaches by comparing a number of competitive baselines.
# INTRODUCTION | 2311.04072#1 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04072 | 2 | # INTRODUCTION
Pre-trained large language models (LLMs) such as LLaMA (Touvron et al., 2023a) have shown remarkable potential to solve various downstream tasks by mastering the universal pre-training task of next-token prediction. However, after large-scale pre-training, subsequent tuning is often needed to enhance and regulate the behaviors of LLMs. Two typical approaches are supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which can largely improve LLMs in both task-solving capacity and human alignment (Ouyang et al., 2022). | 2311.04072#2 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 2 | # ABSTRACT
Recent advancements in Large Language Models (LLMs) have revolutionized decision-making by breaking down complex problems into more manageable language sequences referred to as "thoughts". An effective thought design should consider three key perspectives: performance, efficiency, and flexibility. However, existing thought can at most exhibit two of these attributes. To address these limitations, we introduce a novel thought prompting approach called "Everything of Thoughts" (XOT) to defy the law of the "Penrose triangle" of existing thought paradigms. XOT leverages pretrained reinforcement learning and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge into thoughts, thereby enhancing LLMs' capabilities and enabling them to generalize to unseen problems efficiently. Through the utilization of the MCTS-LLM collaborative thought revision framework, this approach autonomously produces high-quality comprehensive cognitive mappings with minimal LLM interactions. Additionally, XOT empowers LLMs to engage in unconstrained thinking, allowing for flexible cognitive mappings for problems with multiple solutions. We evaluate XOT on several challenging problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our results demonstrate that XOT significantly outperforms existing approaches in various dimensions, showcasing its remarkable proficiency in addressing complex problems across diverse domains. | 2311.04254#2 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 3 | Although widely explored, SFT and RLHF have their own strengths and weaknesses (Zhao et al., 2023a). On the one hand, SFT is easy to implement and can effectively boost general task-solving abilities through instruction-based eliciting (Wei et al., 2021; Ouyang et al., 2022; Chung et al., 2022), while it mainly imitates the behaviors of experts (essentially doing behavior cloning (Wiseman & Rush, 2016)), which are demonstrated by the human annotators or powerful LLMs such as ChatGPT. Therefore, the SFT performance highly relies on high-quality demonstration data (Zhou et al., 2023), and might suffer from the huge distribution shifts between its outputs and imitated outputs (Zhang et al., 2019; Schulman, 2023). On the other hand, RLHF can better explore the semantic space of LLMs, and identify the optimal policy by encouraging good behaviors and discouraging bad behaviors during learning. However, it is very complicated to effectively implement, often suffering from training instability issues such as reward collapse (Song et al., 2023; Wolf et al., 2023).
To leverage the benefits of SFT and RLHF, several recent studies propose to develop alignment approaches without reinforcement learning (RL). These studies typically construct refined instruction data using methods such as quantile ranking (Lu et al., 2022) and rejection sampling (Touvron et al., | 2311.04072#3 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 3 |
# INTRODUCTION
Recent advancements in Large Language Models (LLMs) have greatly advanced problem solving in diverse domains such as mathematical reasoning Frieder et al. (2023), knowledge reasoning Omar et al. (2023), root cause analysis Chen et al. (2023), and causal inference Kıcıman et al. (2023). This progress can be largely attributed to the technique of decomposing intricate problems into smaller language sequences referred to as "thoughts". Through a step-by-step inference process involving the use of prompts, each thought functions as an intermediate stage, contributing to the simplification of tackling complex problems to fulfill the problem's ultimate objective.
Table 1: Comparisons of different prompting paradigms (IO, CoT, CoT-SC, ToT, GoT, XOT) along three attributes: Performance, Efficiency, and Flexibility.
Effective design of thought steps toward complex problem-solving and reasoning, whether for humans or LLMs, should prioritize three crucial aspects, namely:
⢠Performance. Performance is the accuracy of the solution to a problem, including the precision of each thought at intermediate stages. This metric holds paramount importance for problem-solving.
| 2311.04254#3 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 4 | ∗Equal contribution. †Corresponding author.
2023b), and then follow or slightly modify the original SFT loss. Another line of research designs alternative optimization approaches that bypass reward modeling (Rafailov et al., 2023). To conduct effective alignment without RL, a key issue is how to effectively learn by discriminating good and bad behaviors as in RLHF (Ouyang et al., 2022), such that LLMs can understand what are good behaviors to follow and what are bad behaviors to avoid. Despite the prior efforts, these methods are largely limited by response-level discrimination signals: they are only aware of the quality label (e.g., good or bad) of a demonstration but not what makes it good or bad. Thus, they cannot fully capture the correct alignment behaviors even when demonstrated with what are good and bad behaviors. | 2311.04072#4 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 4 | • Performance. Performance is the accuracy of the solution to a problem, including the precision of each thought at intermediate stages. This metric holds paramount importance for problem-solving.
⢠Efficiency. Efficiency relates to the number of LLM inference calls required to solve a single problem. Minimizing this aspect is crucial due to the high computational cost associated with LLM inference, thereby reducing the overall number of cost.
⢠Flexibility. Flexibility in thought topology refers to the diverse structures that can be employed by LLMs when organizing thoughts for problem-solving. These structures may include chains, trees, or even graphs, mirroring human thought processes. Enabling more flexible thought struc- tures enhances the capacity of LLMs for divergent and creative thinking, which is particularly advantageous in addressing complex problems, especially those with multiple potential solutions. | 2311.04254#4 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 5 | In this work, we introduce FIGA, a novel method that aligns language models with human preferences. The core idea is to contrast a low-quality initial response from an LLM's output with a corresponding high-quality revised response by another powerful LLM (e.g., ChatGPT), so that LLMs can be informed of what is newly added (good actions) and what is removed or substituted (bad actions) in such a revision process. Such fine-grained quality signals can be more useful than the widely used response-level quality signal. They can instruct LLMs to emphasize the learning of good actions and penalize the bad actions in a single response. To implement our approach, we first curate an alignment dataset called SPA that pairs an initial response with a revised response under the guidance of the ground-truth demonstrations. We mainly keep the queries that an LLM performs less well on, and perform strict filtering. Further, we design a new fine-tuning method that assigns specific token-level weights to different parts (e.g., good or bad tokens). Our learning loss can directly impose fine-grained reward scores to guide the learning of LLMs for improved alignment. | 2311.04072#5 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 5 | There exist several thought generation paradigms, such as Chain-of-Thought (CoT) Wei et al. (2022), Tree-of-Thought (ToT) Yao et al. (2023), and Graph-of-Thought (GoT). However, these paradigms each have their limitations and cannot simultaneously achieve all three desired attributes, as illustrated in Table 1. Specifically, direct Input-Output (IO) prompting is suitable primarily for simple problem-solving scenarios with single-step processes, lacking both in performance and flexibility. CoT and self-consistency CoT (CoT-SC) enable step-by-step problem solving, resulting in modest performance improvements, but they are confined to linear thought structures, limiting their flexibility. In contrast, ToT and GoT permit more versatile thought topologies, accommodating tree-like or graph-like structures. However, these paradigms require the evaluation of intermediate thought steps by the LLM itself, incurring significant computational costs and inefficiencies due to multiple LLM calls. These paradigms are constrained by a law analogous to the "Penrose triangle", wherein they can achieve a maximum of two out of the three attributes, and none of them can | 2311.04254#5 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 6 | To the best of our knowledge, this is the first attempt to leverage fine-grained quality signals for improving the alignment of LLMs without RL. Our approach can make LLMs better understand what are good and bad behaviors beyond simple imitation. By conducting extensive experiments, we demonstrate that FIGA shows promising performance in aligning language models with human preferences: our approach outperforms the initial supervised fine-tuned model by a notable 3.2 points and the strong PPO method by 1.8 points.
# 2 RELATED WORK
In this section, we review the related work in two aspects, namely reinforcement learning from human feedback and alignment without reinforcement learning. | 2311.04072#6 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 6 | We propose a novel solution called "Everything of Thoughts" (XOT) to address the limitations of conventional thought frameworks, enhancing essential attributes of thought generation, including performance, efficiency, and flexibility for LLM inference.1 XOT leverages reinforcement learning (RL) Li (2017) and Monte Carlo Tree Search (MCTS) Silver et al. (2017), in conjunction with lightweight policy and value networks, to pretrain on specific tasks for thought searching and subsequently generalize to new problems. This pretraining effectively integrates external domain knowledge into the "thoughts" provided to LLMs, expanding their problem-solving capabilities, and thereby significantly improving Performance. Once trained, XOT efficiently performs thought searching using MCTS with cost-effective policy and value networks for exploration and autonomously generates complete cognitive mappings for LLMs. It then employs an MCTS-LLM collaborative thought revision process to further improve the thought quality while minimizing LLM interactions. This eliminates the need for LLMs to explore and evaluate thoughts themselves, as required by ToT and GoT, enhancing XOT's Efficiency. Furthermore, | 2311.04254#6 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 7 | Reinforcement learning from human feedback Large-scale pre-training empowers large language models (LLMs) to acquire extensive knowledge, underscoring their remarkable potential across diverse tasks (Brown et al., 2020; Kojima et al., 2022; Zhang et al., 2022; Chowdhery et al., 2022). Nonetheless, models exclusively focus on next-token prediction in the pre-training phase and do not consider human preferences. Consequently, this gives rise to unexpected behaviors like harmful or inaccurate information, and emphasizes the necessity to align language models with human preferences. The current mainstream approaches (Ouyang et al., 2022) to better harness the capabilities of LLMs include supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). To be specific, this involves three stages: firstly, using SFT to enable the model to better follow human instructions; subsequently, training a reward model (RM) using human preference data; and ultimately, tuning the model to maximize the reward through the proximal policy optimization (PPO) (Schulman et al., 2017) algorithm. Furthermore, there are works exploring enhancements to | 2311.04072#7 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 7 | This eliminates the need for LLMs to explore and evaluate thoughts themselves, as required by ToT and GoT, enhancing XOT's Efficiency. Furthermore, MCTS demonstrates remarkable Flexibility as it can explore various thought topologies, including graph structures akin to those employed in human mind mapping processes Faste & Lin (2012); Jamieson (2012). This enables diverse and creative thinking for LLMs, making it particularly valuable when dealing with complex thought structures or tasks featuring multiple potential solutions. By concurrently achieving superior performance, efficiency, and flexibility, XOT challenges the constraints posed by the "Penrose triangle" | 2311.04254#7 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 8 | the reward through the proximal policy optimization (PPO) (Schulman et al., 2017) algorithm. Furthermore, there are works exploring enhancements to this process (Ramamurthy et al., 2022; Lightman et al., 2023; Lee et al., 2023). However, RLHF presents challenges due to complex coding and hyper-parameter selection. Besides, it requires loading three to four models simultaneously, resulting in high memory usage. These challenges propel researchers to explore alternative approaches to align language models with human feedback. | 2311.04072#8 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 8 | We comprehensively evaluate XOT across a diverse range of challenging problem-solving tasks, namely Game of 24, 8-Puzzle, and Pocket Cube. Our experimental results consistently showcase XOT's superior performance, and its capacity to provide multiple solutions to problems efficiently with just a few LLM calls. These findings establish XOT as an effective thought generation approach, paving the way for new avenues in LLMs' problem-solving capabilities.
# 2 BACKGROUND
Thought for LLMs. Addressing complex problems often entails breaking down the overarching objective into multiple intermediary steps. The outcomes or cognitive processes associated with each step are thoughts, which can be expressed as linguistic prompt sequences for LLMs to facilitate problem-solving. Structures of these thoughts may take various forms, including linear chains, hierarchical trees, or interconnected graphs, depending on how the thoughts are organized to advance towards a solution.
1We named it "Everything of Thoughts" to signify its three comprehensive thought generation capabilities.
Figure 1: Comparison of XOT versus other prompting paradigms. | 2311.04254#8 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 9 | Alignment without reinforcement learning Several studies are based on the rationale that language models have already acquired comprehensive knowledge during pre-training, and only high-quality supervised fine-tuning data is required for further tuning (Zhou et al., 2023). So these works (Liu et al., 2023b; Sun et al., 2023; Bai et al., 2022b; Bhardwaj & Poria, 2023; Krishna et al., 2022) bypass reward modeling, and instead concentrate on the construction of datasets that align well with human preferences. Other works are directed towards exploring substitutes for the intricate PPO algorithm. These efforts employ diverse approaches to learn from the preference data, encompassing the creation of a supervised fine-tuning training dataset enriched with human
preference data (Liu et al., 2023a; Zhang et al., 2023; Dong et al., 2023), the integration of preferences for different outputs into the loss function (Yuan et al., 2023; Rafailov et al., 2023; Zhao et al., 2023b; Liu et al., 2023c), and the utilization of controllable text generation techniques (Lu et al., 2022). However, the human preference information used in these methods is at the sentence level, lacking more fine-grained supervision signals.
# 3 APPROACH | 2311.04072#9 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 9 | Figure 1: Comparison of XOT versus other prompting paradigms.
Input-Output (IO) Prompting (Fig. 1 (a)). The IO method is the most straightforward approach to instruct LLMs to address a problem without the provision of any intermediate thought processes.
Chain-of-thought (CoT) Wei et al. (2022) (Fig. 1 (b)). CoT decomposes problem-solving into a sequential chain of thoughts, allowing LLMs to approach complex problems step by step.
Self-consistency CoT (CoT-SC) Wang et al. (2023a) (Fig. 1 (c)). CoT-SC employs multiple instances of the CoT to generate multiple outputs from LLMs. It selects the best results from multiple LLM outputs, offering more robust and consistent inference compared to the vanilla CoT.
Tree-of-thought (ToT) Yao et al. (2023) (Fig. 1 (d)). ToT organizes thoughts in a tree-like structure and utilizes search algorithms (e.g., Breadth-First Search, Depth-First Search) to expand the tree in pursuit of an optimal solution. However, thought evaluation in ToT relies on LLMs themselves, necessitating multiple costly and inefficient LLM inference calls. | 2311.04254#9 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 10 | # 3 APPROACH
In this section, we present the proposed alignment approach FIGA by leveraging fine-grained quality signals. Our approach is developed based on a specially curated alignment dataset called SPA (Section 3.1), where each low-quality initial response is paired with a high-quality revised response. Based on such an alignment dataset, we further develop a new loss function that incorporates fine-grained quality signals derived by contrasting good and bad responses (Section 3.2). Our approach is easy to implement (similar to SFT) and can capture the underlying effect to generate high-quality responses instead of simply imitating them (similar to RLHF), as discussed in Section 3.3. The overall framework of our FIGA pipeline is shown in Figure 1.
[Figure 1 graphic: an example query ("What is the best way to get from Tokyo to Osaka?") flows from the instances pool through the initial model, a reward model, and revision of the desired words, yielding the aligned model's response.] | 2311.04072#10 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 10 |

Graph-of-thought (GoT) Besta et al. (2023) (Fig. 1 (e)). GoT extends the ToT approach by enabling the generation of graph-like thought structures through thought aggregation and refinement during intermediate search phases. Although this method permits more flexible thought structures, it still demands multiple LLM inference calls for evaluation, incurring significant computational costs.
# 3 XOT: EVERYTHING OF THOUGHTS
XOT serves as an LLM-MCTS collaborative framework designed to enhance the thought generation process, thereby assisting LLMs in resolving complex problems. It leverages MCTS for proficient and efficient thought exploration while harnessing the capabilities of LLMs to refine and amend the thoughts derived from MCTS. This synergistic interaction creates a mutually beneficial arrangement, ultimately enabling the successful resolution of intricate problems characterized by high levels of performance, efficiency, and flexibility.
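The collaboration described above can be sketched as a simple loop; every callable here (`mcts_search`, `llm_check_and_revise`, `llm_answer`) and the `hint` parameter are hypothetical stand-ins, not APIs defined by the paper.

```python
def xot_solve(problem, mcts_search, llm_check_and_revise, llm_answer, max_rounds=3):
    """MCTS proposes thoughts cheaply; the LLM reviews/revises them; the
    search is resumed only when the LLM flags the thoughts as flawed."""
    thoughts = mcts_search(problem)  # policy/value-guided thought search
    for _ in range(max_rounds):
        ok, feedback = llm_check_and_revise(problem, thoughts)
        if ok:  # the LLM judges the thoughts sound
            break
        thoughts = mcts_search(problem, hint=feedback)  # continue searching
    return llm_answer(problem, thoughts)  # final answer from LLM + thoughts
```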
3.1 XOT IN A NUTSHELL
We present an overview of the architecture of XOT in Fig. 1 (f). XOT comprises two key components: (i) an MCTS module guided by policy/value networks; and (ii) an LLM solver for thought revision and inference. The MCTS module and the policy/value networks are first trained and then generalize to the inference process. | 2311.04254#10 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 11 | Figure 1: The overall illustration of our alignment approach FIGA.
3.1 CURATED ALIGNMENT DATASET
From the perspective of the dataset, the novelty of our alignment approach lies in two major aspects. Firstly, we don't directly aggregate all the available instruction data, but instead focus on high-quality instruction data that an LLM performs less well on. This enables LLMs to specifically improve on their weaknesses, reducing the cost of redundant learning. Secondly, we don't take what human annotators write or powerful LLMs (e.g., ChatGPT or GPT-4) generate as training targets, but instead seek a more similar surrogate that is derived from the LLM's own output. This can largely reduce the distribution shift between the LLM to be aligned and the ground-truth demonstrations.
We carefully construct the SubPar Alignment (SPA) dataset, a curated collection of queries, the model's initial responses, and the corresponding improved responses (with minor revision). Compared with prior work (Ouyang et al., 2022; Yuan et al., 2023; Liu et al., 2023a), we mainly consider the queries where LLMs' performance is not satisfactory and aim to correct these bad cases via specific training. Moreover, we refine the initial response of the LLM that is to be aligned as the training target, which can effectively reduce the distribution shift from the ground-truth demonstrations. | 2311.04072#11 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 11 |

During the training phase, MCTS is harnessed to explore potential thought structures for a specific task through simulated scenarios. This process entails the recording of states, values, and the visitation frequencies of thought nodes in each simulation. These recorded data are subsequently employed to iteratively train the policy and value estimation model, enabling it to assimilate domain knowledge and comprehend the world model.
Once trained, the estimated policy and value are utilized to guide the MCTS to systematically search for a thought trajectory provided to aid LLMs in problem-solving. Note that the extracted thoughts only play a supporting role, assisting LLMs in gathering knowledge from external sources. These thoughts do not provide LLMs with definitive or error-free answers, as they may contain inaccuracies or suboptimal solutions. LLMs are responsible for reviewing and refining these thoughts when they seem erroneous or require adjustments. They continue the MCTS search process if needed
[Figure 2 graphic: panels (a) Select, (b) Expand & Evaluate, (c) Backpropagation, and (d) Thought inference.] | 2311.04254#11 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 12 |

Formally, we denote the initial model as πθ, which can be a supervised-finetuned model (e.g., Alpaca (Taori et al., 2023)) or a pre-trained base model (e.g., LLaMA (Touvron et al., 2023a)). To construct our dataset, we assume that a reward model for assessing the alignment level is available. In practice, a number of reward models have been released publicly (e.g., DeBERTa (OpenAssistant, 2023)), which can be used for our approach. Given a query X and a response Y, we leverage a reward model RM to compute the reward score R_Y = RM(X, Y), which reflects how well the response Y aligns with the given query X. Below, we detail the construction procedure.
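A minimal sketch of such reward scoring with a publicly released DeBERTa-based reward model; the exact checkpoint identifier is an assumption for illustration, not one prescribed by the paper.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed OpenAssistant reward-model checkpoint (illustrative choice).
CKPT = "OpenAssistant/reward-model-deberta-v3-large-v2"
tokenizer = AutoTokenizer.from_pretrained(CKPT)
reward_model = AutoModelForSequenceClassification.from_pretrained(CKPT).eval()

@torch.no_grad()
def reward_score(query: str, response: str) -> float:
    """R_Y = RM(X, Y): how well response Y aligns with query X."""
    inputs = tokenizer(query, response, return_tensors="pt", truncation=True)
    return reward_model(**inputs).logits[0].item()
```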
| 2311.04072#12 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 12 | Figure 2: An illustration of iterative phases in MCTS for thought searching ((a)-(c)) and thought inference in problem resolution (d).
and eventually formulate the final answers by integrating these external thoughts with their internal knowledge.
3.2 THOUGHT SEARCHING FORMULATION
The fundamental objective of employing the thought generation paradigm for LLMs is to identify the optimal decomposition of a complex problem into several manageable sub-steps. Each sub-step aims to alter the current status of the problem, eventually culminating in the successful resolution of the overarching problem. This approach, as seen in ToT and GoT, hinges on well-defined state transitions and clear final objectives. Consequently, it is natural to conceptualize the thought-searching process as a Markov Decision Process (MDP) Puterman (1990), in which:
• State s_t: Represents the current status of the problem. The initial state s_0 corresponds to the original problem, while intermediate states are characterized by either decomposed sub-problems or the results stemming from their resolution.
• Action a_t: Signifies the one-step solution or action associated with tackling a problem, leading to a transition to a new state by incorporating its outcome.
• Reward r: Reflects the comprehensive evaluation of the solution to the original problem, assessing whether it has been effectively resolved through the process of problem decomposition. | 2311.04254#12 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 13 |
Rollout for initial response generation We first broadly collect existing paired datasets encompassing a wide range of real-world tasks, and construct the instances pool D = {(X_i, Y_i)}_{i=1}^{n}. To better align with human values, we select preference datasets (e.g., HH-RLHF (Bai et al., 2022a)) that adhere to the 3H principle (i.e., helpfulness, honesty, and harmlessness) in this work. Furthermore, we also include instruction datasets (e.g., OpenOrca (Mukherjee et al., 2023)) to preserve the task-solving abilities of LLMs. We aim to train a model that is both capable and safe like ChatGPT, rather than only focusing on alignment while sacrificing task-solving abilities. Based on these datasets, we employ the rollout model πθ to generate initial responses Ŷ = πθ(X) for the given queries. | 2311.04072#13 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 13 |

• Reward r: Reflects the comprehensive evaluation of the solution to the original problem, assessing whether it has been effectively resolved through the process of problem decomposition.
• Thought τ: A one-step thought is a combination of a one-step state and action, i.e., τ = {s, a}. This formulation naturally encapsulates the process of decomposing a complex problem into multiple sub-tasks, each accompanied by their respective outcomes.
The detailed definitions of state, action, reward, and thought for each task are shown in Table 1. The generation of complete thoughts T = {τ_1, · · · , τ_N} can be construed as the endeavor to discover a thought trajectory that maximizes the accumulated reward for addressing the overall problem.
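A small sketch of this MDP framing; the `Thought` container and the hypothetical `env` simulator (with `reset`/`step`/`reward` hooks) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Thought:
    """One-step thought tau = {s, a}: a problem state plus the action applied to it."""
    state: Any   # current problem status s_t
    action: Any  # one-step solution a_t taken at this state

def trajectory_reward(thoughts: List[Thought], env) -> float:
    """Replay a complete trajectory T = {tau_1, ..., tau_N} and return the
    terminal reward r, i.e., whether the original problem was resolved."""
    state = env.reset()
    for t in thoughts:
        state = env.step(state, t.action)  # state transition from applying a_t
    return env.reward(state)               # evaluation of the final resolution
```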
3.3 THOUGHTS SEARCHING WITH MCTS | 2311.04254#13 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 14 |

Identifying the queries to be enhanced After obtaining the model's initial response Ŷ and the human-preferred response Y, we next identify the queries where the model requires further improvement to better align with human intent, using the reward score RM(·). Following existing work (Ouyang et al., 2022), we employ the reward model as a surrogate of human preferences, and design a filtering process based on the calculated reward scores R_Ŷ and R_Y for all the instances. We only keep the instances that meet all three of the following restrictions: (1) R_Ŷ < η1 (a subpar initial performance, i.e., bad cases), (2) R_Y > η2 (a high-quality demonstration), and (3) R_Y − R_Ŷ > η3 (a clear quality difference), where η1, η2, and η3 are three threshold values for filtering; we set them according to the reward score distribution. The details can be found in Section 4.1.2. With the above filtering mechanism, we ensure the quality and usefulness of our SPA dataset. We target bad-case correction of the rollout model, which is more directed and effective than existing methods that directly train the model on the whole collected dataset (a minimal sketch of this filtering rule follows this row). | 2311.04072#14 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
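A minimal sketch of the three-part filtering rule described in the 2311.04072 chunk above; the threshold values and record fields are illustrative assumptions.

```python
def keep_for_spa(r_init: float, r_target: float,
                 eta1: float, eta2: float, eta3: float) -> bool:
    """Keep an instance only if (1) the initial response is a bad case,
    (2) the demonstration is high quality, and (3) the gap is clear."""
    return r_init < eta1 and r_target > eta2 and (r_target - r_init) > eta3

def filter_pool(instances, eta1, eta2, eta3):
    # `instances` is a hypothetical list of dicts with precomputed reward scores.
    return [d for d in instances
            if keep_for_spa(d["r_init"], d["r_target"], eta1, eta2, eta3)]
```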
2311.04254 | 14 | 3.3 THOUGHTS SEARCHING WITH MCTS
The formulation above naturally aligns the thought within an LLM with a state-action pair. This facilitates the effective exploration of its optimal trajectory using a combination of MCTS and RL, following an iterative simulation cycle that encompasses three key phases: selection, expansion & evaluation, and backpropagation. It heavily depends on the utilization of neural networks fθ, which simultaneously estimate the value and action probability for a given state s_t. The aim is to reduce the number of rollouts and accelerate the search process, similar to the approach employed in AlphaGo Zero Silver et al. (2017). We provide a visual representation of an iteration of the MCTS in Fig. 2 (a)-(c) by taking Pocket Cube as an example and detail each process below.
Selection. In the selection phase, the algorithm initiates at the root node and proceeds to choose an action a* from the available set A(s) for single-step thought generation in the current state s. This process continues until a leaf node within the current tree is reached. The selection is guided by the PUCT algorithm Rosin (2011), aiming to maximize the Upper Confidence Bound (UCB) Garivier
& Moulines (2011), as follows: | 2311.04254#14 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 15 |

Revising initial responses for reducing the distribution shifts To align an LLM, a basic principle is to ensure that the distribution of the model does not experience significant shifts during the alignment process (Bai et al., 2022a). Although the ground-truth demonstration (Y_i) is human-preferred, it is likely to span a very different semantic distribution from the LLM to be aligned. Our solution is to revise the initial response (Ŷ) by referring to the ground-truth demonstration (Y_i). In this way, we can effectively reduce the distribution shifts as well as obtain demonstrations similar to the original output. Specifically, we generate a pseudo reference Ỹ based on the target Y_i, making minor adjustments to Ŷ to enhance its quality, i.e., modifying Ŷ as minimally as possible based on Y_i. Such a generation process is conducted by prompting the powerful ChatGPT. To facilitate the generation process, we further manually inspect the low-quality responses that we have previously filtered and identify four major low-quality reasons: (1) lack of detail, (2) inaccuracy in response, (3) the need for structural adjustments, and (4) other factors | 2311.04072#15 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 15 |
& Moulines (2011), as follows:
a* = argmax_{a ∈ A(s)} [ Q(s, a) + w · Pθ(s, a) · √N(s) / (1 + N(s, a)) ] .   (1)
Here, Q(s, a) denotes the Q-value of a state-action pair (s, a). The term Pθ(s, a) denotes the predicted prior probability of selecting action a given the state s, obtained from a neural network fθ, and N(s, a) represents the count of times action a has been chosen in state s. The parameter w controls the trade-off between exploration and exploitation. The selection process continues until an unexplored node is encountered (a code sketch of this rule follows this row). | 2311.04254#15 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
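A sketch of the PUCT rule of Eq. (1) from the chunk above; the `node` object with per-action dictionaries `Q`, `N`, and prior `P` is an assumption, and the √N(s) numerator follows the standard AlphaGo Zero form of PUCT.

```python
import math

def puct_select(node, w: float = 1.0):
    """a* = argmax_a [ Q(s,a) + w * P(s,a) * sqrt(N(s)) / (1 + N(s,a)) ]."""
    n_state = sum(node.N.values())  # N(s): total visits to the parent state
    def ucb(a):
        return node.Q[a] + w * node.P[a] * math.sqrt(n_state) / (1 + node.N[a])
    # Exploit high Q-values while still exploring high-prior, low-count actions.
    return max(node.N, key=ucb)
```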
2311.04072 | 16 |

reasons: (1) lack of detail, (2) inaccuracy in response, (3) the need for structural adjustments, and (4) other factors (off-topic or harmful content). In detail, we leverage ChatGPT to determine, given Y_i, which of the four reasons Ŷ is associated with. Afterwards, we design different prompts for the four reasons and instruct the LLM to make minor corrections to the initial response Ŷ based on Y_i. We denote the revised response as Ỹ. The details of our process and prompts can be found in Appendix A.2 (an illustrative sketch follows this row). Finally, we obtain the SPA dataset {X, Ŷ, Ỹ} for subsequent training. Our construction method has dual merits: it not only aligns the reference output with human preferences but also preserves the inherent linguistic style and overall semantic distribution of the model to be aligned. Note that we keep both the initial and revised responses in a contrastive form, because they are jointly used for deriving fine-grained quality signals in subsequent training. | 2311.04072#16 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
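An illustrative sketch (referenced in the FIGA chunk above) of the two-stage revision: first classify which of the four low-quality reasons applies, then apply a reason-specific minimal-edit prompt. The prompt wording and the `chat` helper are hypothetical; the paper's actual prompts are in its Appendix A.2.

```python
REASONS = ["lack of detail", "inaccuracy in response",
           "structural adjustments needed", "other (off-topic or harmful)"]

# Illustrative prompt templates, one per failure reason.
REVISION_PROMPTS = {
    r: (f"The initial answer suffers from: {r}. Minimally edit it, borrowing "
        "from the reference answer, so that the change is as small as possible.")
    for r in REASONS
}

def revise(chat, query: str, y_init: str, y_ref: str) -> str:
    """Two LLM calls: classify the failure of y_init given y_ref, then revise."""
    reason = chat(f"Query: {query}\nAnswer: {y_init}\nReference: {y_ref}\n"
                  f"Which failure applies? Choose one of: {REASONS}").strip()
    prompt = REVISION_PROMPTS.get(reason, REVISION_PROMPTS[REASONS[3]])
    return chat(f"{prompt}\nQuery: {query}\nInitial: {y_init}\nReference: {y_ref}")
```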
2311.04254 | 16 |

Evaluation and Expansion. Upon reaching a previously unselected leaf node, we expand to the state s of the next step for new thought exploration. This expansion involves the evaluation of its value and action probability on the state, which are modeled by neural networks parameterized by θ, i.e., (Pθ(s), vθ(s)) = fθ(s). Here Pθ(s) gives the prior probabilities for all actions on s, and vθ(s) denotes its predicted state value. These two values are retained and stored for backup purposes, and state s is masked as "visited".
Backpropagation. Following the expansion of a leaf node in the above phases, which could be either an unexplored or a terminal state, the algorithm proceeds to update all the Q(s, a) values via backpropagation. For unexplored nodes, this update involves computing the mean of their estimated value vθ, while for terminated nodes, it is based on the true reward r. These updates occur as information is backpropagated along the trajectory to subsequent nodes. Additionally, the visit count for each state-action pair is incremented as follows: N(s, a) = N(s, a) + 1 (a short sketch of this update follows this row). | 2311.04254#16 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
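A short sketch (referenced in the chunk above) of the backpropagation step; the per-action `Q` and `N` dictionaries are assumptions, and the incremental-mean form is one standard way to keep Q(s, a) equal to the mean of the backed-up values.

```python
def backpropagate(path, leaf_value: float) -> None:
    """`path` is the visited (node, action) trajectory; `leaf_value` is
    v_theta(s) for an unexplored leaf or the true reward r for a terminal one."""
    for node, action in reversed(path):
        node.N[action] += 1                                   # N(s,a) <- N(s,a) + 1
        # Running mean update: Q <- Q + (v - Q) / N
        node.Q[action] += (leaf_value - node.Q[action]) / node.N[action]
```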
2311.04072 | 17 | 3.2 FINE-GRAINED QUALITY-AWARE ALIGNMENT TUNING
As described above, our fine-tuning dataset for alignment contains both low-quality initial responses (Ŷ) and high-quality revised responses (Ỹ). Instead of directly learning from these high-quality responses (similar to rejection sampling (Touvron et al., 2023b)), it is important for LLMs to understand why such revisions are useful for producing the high-quality responses. Furthermore, LLMs can improve their alignment capacity from the contrast between good and bad responses.
Motivated by previous work (Liu et al., 2022), we utilize the Levenshtein distance to quantify the similarity between Ŷ and Ỹ. The Levenshtein distance is a dynamic programming algorithm that obtains the minimal edit distance between two sentences through three operations: addition, deletion, and substitution. Comparing the initial and revised responses, the tokens involved can generally be divided into three types: newly added, deleted, or substituted. We consider assigning different weights to
| 2311.04072#17 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparing a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 17 |

A simulation is completed after a sequence of selection, evaluation, expansion, and backpropagation steps. After conducting multiple simulations, we proceed to the next step by selecting an action at state s using a probability distribution defined as ε_a ∝ N(s, a)^{1/γ}, where γ is a temperature constant that regulates the level of exploration.
Policy and Value Networks Training. The simulations described above allow us to compile a dataset for each sample state s containing (s, ε(s), v(s)), where ε(s) = {ε_a | a ∈ A(s)}, and v(s) represents the ground-truth value obtained by accumulating rewards along the trajectory starting from state s. Subsequently, we can train a combined policy and value network fθ to minimize the discrepancy between the predicted value vθ(s) and the actual value v(s), while also maximizing the alignment between the action probabilities produced by the neural network Pθ(s) and the search probabilities ε(s). This can be achieved by minimizing the following loss function:
L = (v(s) − vθ(s))^2 − ε(s)^T log Pθ(s). This training iterates alongside the simulation process to continually enhance the performance of fθ, resulting in progressive improvements in thought-searching capabilities.
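A PyTorch sketch of this combined loss, assuming the policy head outputs logits and ε(s) is the probability vector produced by the search; the cross-entropy term carries an explicit minus sign so that minimizing it maximizes alignment with ε(s).

```python
import torch.nn.functional as F

def policy_value_loss(p_logits, v_pred, eps_target, v_target):
    """L = (v(s) - v_theta(s))^2 - eps(s)^T log P_theta(s), batched.
    Shapes: p_logits, eps_target -> (batch, actions); v_pred, v_target -> (batch,)."""
    value_loss = F.mse_loss(v_pred, v_target)  # squared value error
    policy_loss = -(eps_target * F.log_softmax(p_logits, dim=-1)).sum(-1).mean()
    return value_loss + policy_loss
```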
3.4 THOUGHT INFERENCE WITH MCTS | 2311.04254#17 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 18 |
these three types of tokens. We reward the tokens that are added or substituted in the revised response Ỹ, penalize the tokens that are deleted or substituted in the original response Ŷ, and tend to overlook the rest of the tokens that remain the same after the revision process. Formally, we introduce two token-level weighting functions to characterize the above ideas:
r̃(ỹ_t, t) = α if ỹ_t is added or substituted, and γ otherwise;   (1)
r̂(ŷ_t, t) = β if ŷ_t is deleted or substituted, and 0 otherwise.
where α > 0, β > 0, and γ ≥ 0 are three coefficients to control the encouraged, discouraged, and ignored parts, which can be empirically set or learned from tuning data.
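A sketch of how these two weighting functions can be materialized from the edit alignment between Ŷ and Ỹ; difflib's `SequenceMatcher` stands in for the paper's Levenshtein computation, and the coefficient values are placeholders.

```python
from difflib import SequenceMatcher

def figa_weights(y_init, y_rev, alpha=1.0, beta=0.5, gamma=0.0):
    """Return (r_hat over y_init tokens, r_tilde over y_rev tokens) as in Eq. (1):
    deleted/substituted initial tokens get beta, added/substituted revised
    tokens get alpha, unchanged revised tokens get gamma."""
    r_hat = [0.0] * len(y_init)
    r_tilde = [gamma] * len(y_rev)
    sm = SequenceMatcher(a=y_init, b=y_rev, autojunk=False)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op in ("replace", "delete"):          # penalized tokens of y_init
            for i in range(i1, i2):
                r_hat[i] = beta
        if op in ("replace", "insert"):          # rewarded tokens of y_rev
            for j in range(j1, j2):
                r_tilde[j] = alpha
    return r_hat, r_tilde
```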
In this way, we can encourage the model to "imitate" the desired actions that have a greater impact on enhancing quality, and discourage the model from emulating the undesired actions that lead to poor quality. The final training loss can be formulated as:
L = − Σ_{ỹ_t ∈ Ỹ} r̃(ỹ_t, t) log πθ(ỹ_t | ỹ_{<t}, X) + Σ_{ŷ_t ∈ Ŷ} r̂(ŷ_t, t) log πθ(ŷ_t | ŷ_{<t}, X) .   (2)
where the first term increases the probability of the desired words in Ỹ and the second term decreases the probability of the undesired words in Ŷ. | 2311.04072#18 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparison with a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 18 | 3.4 THOUGHT INFERENCE WITH MCTS
Once trained, we utilize fθ to guide the MCTS in generating a thought for a new problem, which assists the LLM in solving it. Specifically, MCTS is utilized to perform K simulations aimed at thought searching and problem-solving, as illustrated in Fig. 2(d). In each simulation, fθ is employed to guide the MCTS in its search for a thought trajectory. Throughout the training process, fθ incorporates external information related to the state and action quality. This information helps LLMs understand the world model, enhancing their long-term reasoning and planning abilities, areas in which they may not excel (Stechly et al., 2023; Valmeekam et al., 2023), thereby ensuring the performance of thought generation. Once the simulation concludes, we record the visiting count N(s, a), and the thought trajectory is obtained based on the number of solutions required:
⢠Single solution. starting from each state s, the action with the highest visiting count N (s, a) is selected.
⢠Multiple solution. we sample M thought trajectories following the probability distribution εa â N (s, a) and remove duplicates. | 2311.04254#18 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 19 | [Figure: the loss increases the probability of desired words and decreases that of undesired words.]
The overall FIGA pipeline is illustrated in Algorithm 1. The major advantage of FIGA over typical SFT (Ouyang et al., 2022) is that it can learn from fine-grained contrasts between good and bad responses, which is essentially similar to reinforcement learning (discussed in Section 3.3). In addition, by explicitly modeling the revision effect, such an approach can naturally zoom in on crucial words or phrases, helping the model capture fine-grained semantics.
# Algorithm 1: FIGA - Leveraging Fine-grained Quality Signals for Alignment | 2311.04072#19 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparison with a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 19 | ⢠Multiple solution. we sample M thought trajectories following the probability distribution εa â N (s, a) and remove duplicates.
This results in one or multiple thought trajectories T* that consist of a sequence of state-action pairs for problem-solving. The trajectories for multi-solution problems may intertwine and converge at the same goal state, resulting in a graph-like thought structure. This demonstrates that XOT is capable of generating thought structures with flexibility. These trajectories are then transformed into text sequences that are concatenated to form a prompt sequence provided to LLMs. Note that the thought trajectories are concatenated into a single prompt, even in the case of problems with multiple solutions. Therefore, we only require a single LLM inference call at this stage. Given that the fθ network is relatively lightweight, this ensures the efficiency of XOT.
Figure 3: An illustration of the thought revision process in XOT. | 2311.04254#19 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 20 | Input: Instance pool D = {Xi, Yi} (i = 1..n), initial model πθ, revision model (ChatGPT), reward function R(·).
### SPA Dataset Construction
for each instance {X, Y} in D do
    1. Rollout for initial generation. Generate Ŷ ∼ πθ(X) and compute R_Y and R_Ŷ;
    2. Reward filtering. if R_Ŷ > η1 or R_Y < η2 or R_Y − R_Ŷ < η3 then discard the current instance;
    3. Response revision. Analyze the reason for the poor performance of Ŷ, and generate the corresponding revision Ỹ ∼ LLM(Ŷ, Y) based on the identified reason.
Construct the SPA dataset S = {Xi, Ŷi, Ỹi} (i = 1..m).
### Alignment Learning
for epoch e = 1, ..., E do
    for each instance {X, Ŷ, Ỹ} in S do
        Locate the crucial parts with the Levenshtein distance using Equation 1 and assign weights according to r̃(ỹt, t) and r̂(ŷt, t);
        Update πθ using the fine-grained quality-aware learning objective in | 2311.04072#20 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparison with a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 20 | Thought Revision. It is important to acknowledge that MCTS may not always provide the globally optimal thought trajectory to directly solve the problem flawlessly. Therefore, the thoughts extracted from MCTS serve as a reference thinking process for the problem, aiding LLMs in a supportive capacity. The LLMs will leverage their internal knowledge to review the extracted thought, identify errors in the thought trajectory, and then ground their knowledge in collaboration with the MCTS to revise and refine the thought. | 2311.04254#20 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04254 | 21 | The revision process is iterative in nature, as shown in Fig. 3. Initially, upon obtaining the extracted thought, we instruct the LLM to detect any errors in the thought generated by MCTS using its internal knowledge. If the LLM identifies an error, it results in an error state denoted as se within the thought. If no error is found, the thought remains unchanged. Starting from the parent state of se, MCTS conducts an additional set of L simulations, ultimately yielding a revised thought for the LLM. In scenarios involving multiple solutions, each solution undergoes this process individually. Upon the completion of the revision, we supply the LLMs with the revised thoughts for problem-solving. The revision process can be repeated several times to enhance the reliability of the answer. This collaborative MCTS-LLM framework nurtures a mutually beneficial process for both components, ultimately contributing to the overall performance of problem-solving. Since LLMs are solely utilized for identifying errors during the revision process with only one call, the efficiency of XOT is effectively maintained. | 2311.04254#21 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 22 | 3.3 DISCUSSION
In this part, we discuss how the proposed FIGA approach relates to existing fine-tuning approaches, namely SFT and RLHF.
Relationship with SFT. SFT can be viewed as a special case of our FIGA method without revision, where training is performed with the higher-quality instance Y and each token of Y is considered equally important. Compared to SFT, FIGA has the following two advantages: (1) we only consider the inferior part of the bad case on which the initial model does not perform well; (2) we explicitly enforce the model to understand what good and bad behaviors are through the loss function. It inherits the merits of SFT, and further leverages fine-grained quality signals to improve the alignment.
Relationship with RL. Our method can be considered a simplified but efficient version of RL. Taking the typical PPO method (Schulman et al., 2017) as an example, its objective is to optimize the actor model (i.e., the initial model πθ) to maximize the expected reward score, formally given as:
$$\mathcal{L}_{\mathrm{PPO}} = \sum_t \frac{\pi_\theta(\hat{y}_t \mid \hat{y}_{<t}, X)}{\pi_{\theta_{\mathrm{old}}}(\hat{y}_t \mid \hat{y}_{<t}, X)}\, A_{\hat{y}_t} \quad (3) | 2311.04072#22 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparison with a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 22 | The collaborative revision framework harnesses the strengths of both MCTS and LLMs. MCTS efficiently and flexibly generates candidate thoughts for LLMs through simulations, while LLMs use their internal knowledge to revise and ground these thoughts within the MCTS framework, effectively turning MCTS into a world model for LLMs. This process ensures the generation of high-quality thoughts for problem-solving.
# 4 EXPERIMENT
We conduct an extensive evaluation of our XOT approach2 in comparison to several baseline methods across three challenging tasks: the Game of 24, the 8-Puzzle (with a 3 × 3 grid), and the 2 × 2 Pocket Cube. An overview of these tasks is provided in Table 2. These tasks are characterized by their complexity, requiring multiple steps for completion and potentially having multiple solutions. To assess the effectiveness of our proposed XOT, we compare it against IO, CoT, CoT-SC, ToT, and GoT methodologies. We employ both GPT-3.5 (Ouyang et al., 2022) and GPT-4 (OpenAI, 2023) for these evaluations. Note that temperature and top-p are set to 0.0 for all LLMs invoked.
2Code and dataset to reproduce this work will be shared in the near future, following compliance with the affiliation policy.
Table 2: An overview of tasks employed in this study. | 2311.04254#22 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 23 | $$\mathcal{L}_{\mathrm{PPO}} = \sum_t \frac{\pi_\theta(\hat{y}_t \mid \hat{y}_{<t}, X)}{\pi_{\theta_{\mathrm{old}}}(\hat{y}_t \mid \hat{y}_{<t}, X)}\, A_{\hat{y}_t} \quad (3)$$
where A_ŷt is the advantage function of the token ŷt returned by the critic model given the reward score R_Ŷ, and πθold is the model before the previous parameter update. Here, we ignore the clipping function and KL penalty for convenience. Considering the FIGA training objective in Equation 2, our weighting functions r̃(·) and r̂(·) can be viewed as a simplified advantage function A(·) from Equation 3 that evaluates the importance of each token. Therefore, FIGA has a similar objective to RL but with a simplified token-wise reward function. We do not use an extra learned critic model and remove the use of the previous rollout model, which makes FIGA more efficient. In the later experiment section, we will verify the effectiveness of our method.
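For comparison, a minimal sketch of the per-token PPO surrogate in Equation 3, without clipping or KL penalty; the tensor names are assumptions for illustration:

```python
import torch

def ppo_token_objective(logp_new, logp_old, advantages):
    """Equation 3 (simplified): sum_t ratio_t * A_t, where
    ratio_t = pi_theta(y_t given y_<t, X) / pi_theta_old(y_t given y_<t, X).
    FIGA replaces the learned advantage A_t with fixed weights from Equation 1."""
    ratio = torch.exp(logp_new - logp_old.detach())
    return (ratio * advantages).sum()   # maximized by the actor update
```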
# 4 EXPERIMENT
4.1 EXPERIMENTAL SETUP
4.1.1 BASELINE METHODS | 2311.04072#23 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparison with a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 23 | Game of 24 — Objective: use four numbers on playing cards to make the number 24 through +, −, ×, or ÷. Input: 4 numbers ranging from 1 to 13, e.g., (4, 6, 10, 10). Output: an equation that reaches 24, e.g., 4 × 6 + 10 − 10 = 24. Thought: 3 intermediate equations. State: the remaining 1-4 numbers. Action: picking two numbers and an operation to compose an equation.
8-Puzzle — Objective: rearrange the tiles in the 3 × 3 puzzle from a scrambled state to a goal state. Input: a scrambled 3 × 3 digital puzzle. Output: the slide sequence of the "-" tile, e.g., (Up, Down, Left, Right, · · · ). Thought: the step-by-step sliding, and the puzzle state after the move. State: the current number layout of the puzzle. Action: the one-step moving action of the "-" tile.
Pocket Cube — Objective: rotating the faces of a 2 × 2 pocket cube until each face of the cube is a uniform color. Input: a scrambled 2 × 2 pocket cube, with colors represented as numbers for LLMs. Output: the rotation move sequence of the cube, e.g., (F, R2, U′, · · · ). The | 2311.04254#23 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 24 | # 4 EXPERIMENT
4.1 EXPERIMENTAL SETUP
4.1.1 BASELINE METHODS
In order to better evaluate the FIGA method, we choose several baselines for comparison: (1) SFT (Ouyang et al., 2022): it continues to fine-tune the initial model using pairs of data with a sequence-to-sequence loss. (2) PPO (Ouyang et al., 2022): it optimizes the initial model to achieve a higher reward score provided by the reward model through the PPO algorithm. (3) CoH (Liu et al., 2023a): it annotates the dataset by prefixing "A helpful answer: " and "An unhelpful answer: " to the responses of corresponding quality, employs SFT on it, and computes the loss only for the specially masked response tokens. (4) RRHF (Yuan et al., 2023): it applies SFT on the optimal responses, and further optimizes a ranking loss among responses from multiple sources by encouraging the model to assign a greater conditional log probability to the response that holds a superior ranking.
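As an illustration of the RRHF ranking loss described above, a minimal sketch follows; it assumes length-normalized conditional log probabilities for each candidate response and is not the reference implementation:

```python
import torch

def rrhf_ranking_loss(logprobs, rewards):
    """For every pair of candidate responses, if response j has a higher reward
    than response i, encourage the model's (length-normalized) log probability
    p_j to exceed p_i. logprobs, rewards: 1-D tensors over candidates."""
    loss = logprobs.new_zeros(())
    n = len(rewards)
    for i in range(n):
        for j in range(n):
            if rewards[j] > rewards[i]:
                loss = loss + torch.relu(logprobs[i] - logprobs[j])
    return loss
```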
IMPLEMENTATION DETAILS | 2311.04072#24 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparison with a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 24 | Pocket Cube (continued) — Input: colors represented as numbers for LLMs. Output: the rotation move sequence of the cube, e.g., (F, R2, U′, · · · ). Thought: the step-by-step rotation, and the cube state after the move. State: the colors of each face of the pocket cube. Action: the one-step rotation action of the cube.
Reward — Game of 24: 1 if the final number is equal to 24, otherwise -1. 8-Puzzle: the negative of the minimum number of steps from the current puzzle state to the goal state. Pocket Cube: the negative of the minimum number of moves from the current cube state to the goal state. | 2311.04254#24 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 25 | IMPLEMENTATION DETAILS
Training Datasets. For our SPA dataset mentioned in Section 3.1, we broadly select the following datasets as our initial instance pool: HH-RLHF (Bai et al., 2022a), ShareGPT (ShareGPT, 2023), Synthetic Instruct GPT-J Pairwise (Dahoas, 2023), Stanford SHP (Ethayarajh et al., 2022), and OpenOrca (Lian et al., 2023). We employ the Alpaca-7b model (Taori et al., 2023) as the rollout model for generating responses Ŷ, and gpt-3.5-turbo to revise them and obtain Ỹ. The prompt used for revision can be found in Appendix A.2. As for the filtering process, we utilize OpenAssistant/reward-model-deberta-v3-large-v2 (OpenAssistant, 2023) as the reward model. According to the reward score distribution, we empirically set the threshold values η1 = 1, η2 = 3, and η3 = 3.5, respectively. | 2311.04072#25 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparison with a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 25 | Policy/Value Networks Configurations. The policy and value networks in our model utilize a shared multi-layer perceptron (MLP) architecture with two layers and hidden units arranged as (128, 256). Two heads connected to the MLP are responsible for predicting vθ(s) and Pθ(s) separately. This design results in a considerably smaller model compared to an LLM, making it much more efficient. We train this model through three iterations, with each iteration comprising 10 self-play episodes for MCTS.
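A hedged PyTorch sketch of this shared policy/value network follows; the state encoding and action-space size are task-dependent assumptions:

```python
import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    """Shared MLP trunk with hidden sizes (128, 256) and two heads:
    a policy head for the action prior P_theta(s) and a value head for v_theta(s)."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.policy_head = nn.Linear(256, num_actions)
        self.value_head = nn.Linear(256, 1)

    def forward(self, state: torch.Tensor):
        h = self.trunk(state)
        prior = torch.softmax(self.policy_head(h), dim=-1)  # P_theta(s)
        value = self.value_head(h).squeeze(-1)              # v_theta(s)
        return prior, value
```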
Evaluation Metric. For each task, we assess the accuracy of each approach on the test set. Additionally, we track the number of LLM invocations required for all approaches to solve a problem, as well as the number of times fθ is invoked in the case of XOT. It's important to note that fθ is a considerably smaller model compared to LLMs. In the context of multi-solution scenarios, accuracy is computed as the percentage of problems for which any of the answers provided by each approach is correct. Multi-solution Accuracy (MultiAcc) is calculated as the average percentage of correctness across all solutions offered. Furthermore, we capture the total count of distinct solutions provided by each approach, regardless of their correctness, represented as #Sol. Note that we set the maximum solution number to 3 for all problems in multi-solution scenarios. | 2311.04254#25 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 26 | The statistics of reward scores and edit operations for the SPA dataset are presented in Table 1, while the distribution of the reward scores is illustrated in Figure 2. We find that the initial response Ŷ has a large distribution gap with the reference distribution Y, which may make it hard for the model to learn from the golden target. In contrast, our revised response Ỹ is closer to the original distribution but of higher quality, making it easier for the rollout model to learn. The final SPA dataset we obtained consists of 17,333 instances.
Model Details. (1) For SFT, we set the learning rate to 1e-5 and the batch size to 128. We conduct 5 epochs of training and choose the checkpoint with the highest reward score on the test set as the ultimate SFT model. (2) For PPO, we apply the OpenLLaMA2 (OpenLLMAI, 2023) library and adhere to the parameter configurations within it. We use Alpaca-7b to initialize the critic model, and use the same reward model mentioned in the construction process of the SPA dataset. Given the modest gains observed in previous experiments when employing PPO-ptx on models of around 6B parameters (Ouyang et al., 2022), we refrain from introducing a pre-training mix as an additional
| 2311.04072#26 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparison with a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |
2311.04254 | 26 | 4.1 GAME OF 24
The Game of 24 presents an arithmetic challenge wherein the goal is to employ four numbers within the range of 1 to 13, in conjunction with basic arithmetic operations (i.e., +, −, ×, ÷), to attain a final result of 24. This game may possess multiple valid solutions.
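For concreteness, an illustrative brute-force checker for this game (not part of XOT) is shown below; it mirrors the three-equation thought decomposition by repeatedly combining two remaining numbers:

```python
from fractions import Fraction
from itertools import combinations

def solve_24(nums):
    """Brute-force Game of 24 checker: repeatedly pick two numbers, apply an
    operation, and recurse until one number remains; exact arithmetic via Fraction."""
    def search(items):
        if len(items) == 1:
            return items[0][1] if items[0][0] == 24 else None
        for i, j in combinations(range(len(items)), 2):
            (a, ea), (b, eb) = items[i], items[j]
            rest = [items[k] for k in range(len(items)) if k not in (i, j)]
            cands = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})"),
                     (a - b, f"({ea}-{eb})"), (b - a, f"({eb}-{ea})")]
            if b != 0:
                cands.append((a / b, f"({ea}/{eb})"))
            if a != 0:
                cands.append((b / a, f"({eb}/{ea})"))
            for value, expr in cands:
                found = search(rest + [(value, expr)])
                if found:
                    return found
        return None

    return search([(Fraction(n), str(n)) for n in nums])

print(solve_24([4, 6, 10, 10]))  # -> ((4*6)+(10-10))
```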
# 4.1.1 TASK SETUP
We collect a dataset from 4nu, comprising 1,362 games ranked by human solving time, spanning a range of difficulty levels from easy to hard. For our testing phase, we randomly selected 137 games, ensuring coverage of various difficulty intervals. The remaining 1,225 problems were used to train the policy/value networks with MCTS. In the context of this task, as outlined in Table 2, the thoughts refer to the three intermediate equations, while the state encompasses the available numbers (ranging
Table 3: Performance comparison on Game of 24. | 2311.04254#26 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as ``thoughts''. An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought can at most exhibit two of these attributes. To
address these limitations, we introduce a novel thought prompting approach
called ``Everything of Thoughts'' (XoT) to defy the law of ``Penrose triangle''
of existing thought paradigms. XoT leverages pretrained reinforcement learning
and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge
into thoughts, thereby enhancing LLMs' capabilities and enabling them to
generalize to unseen problems efficiently. Through the utilization of the
MCTS-LLM collaborative thought revision framework, this approach autonomously
produces high-quality comprehensive cognitive mappings with minimal LLM
interactions. Additionally, XoT empowers LLMs to engage in unconstrained
thinking, allowing for flexible cognitive mappings for problems with multiple
solutions. We evaluate XoT on several challenging multi-solution
problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our
results demonstrate that XoT significantly outperforms existing approaches.
Notably, XoT can yield multiple solutions with just one LLM call, showcasing
its remarkable proficiency in addressing complex problems across diverse
domains. | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | [
{
"id": "1706.06708"
},
{
"id": "2305.00050"
},
{
"id": "2310.12397"
},
{
"id": "2302.05128"
},
{
"id": "2307.15337"
},
{
"id": "2305.10601"
},
{
"id": "1701.07274"
},
{
"id": "2301.13867"
},
{
"id": "2305.04388"
},
{
"id": "2302.02662"
},
{
"id": "2305.08291"
},
{
"id": "2305.15778"
},
{
"id": "2302.01560"
},
{
"id": "2210.03493"
},
{
"id": "2306.03314"
},
{
"id": "2308.09687"
},
{
"id": "2303.16563"
},
{
"id": "2302.00923"
},
{
"id": "2310.08118"
},
{
"id": "2303.11366"
},
{
"id": "2302.06466"
}
] |
2311.04072 | 27 |
Data    R(·)    #ops
Ŷ       -1.07   –
Y        3.94   75.69
Ỹ        1.78   39.38
Table 1: The average reward score R(·) of the response data and the average number #ops of editing operations to them from Ŷ.
Figure 2: Reward score distributions.
training objective. (3) For CoH, we use the data construction method of the original paper on our SPA dataset. Taking into account our smaller dataset size compared to the original paper, we set FCM (the ratio of randomly masked tokens to prevent overfitting) to 0. Additionally, to ensure a fair comparison with PPO, we disable the pre-training dataset regularization. (4) For RRHF, we follow the recommended hyper-parameters from the original paper on our SPA dataset. (5) For FIGA, we set the parameters α = 1, β = 0.5, and γ = 0, respectively. Besides, considering the instability when training on negative samples in practice (Bhardwaj & Poria, 2023; Liu et al., 2023a), we further filter the bad tokens returned by the Levenshtein distance in Equation 1 by retaining only those with a negative log-likelihood less than 0.6.
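As a sketch, this bad-token selection might look as follows; the names are assumptions, with logp_hat holding the per-token log-likelihoods of the original response under the current model:

```python
import torch

def keep_confident_bad_tokens(r_hat, logp_hat, threshold=0.6):
    """Retain the penalty weight only on bad tokens whose negative
    log-likelihood is less than the threshold; zero out the rest."""
    nll = -logp_hat
    return torch.where(nll < threshold, r_hat, torch.zeros_like(r_hat))
```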
4.1.3 EVALUATION TASKS | 2311.04072#27 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language
models (LLMs). Currently, the main alignment approach is based on reinforcement
learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is
intricate to implement and train, thus recent studies explore how to develop
alternative alignment approaches based on supervised fine-tuning (SFT). A major
limitation of SFT is that it essentially does imitation learning, which cannot
fully understand what are the expected behaviors. To address this issue, we
propose an improved alignment approach named FIGA. Different from prior
methods, we incorporate fine-grained (i.e., token or phrase level) quality
signals that are derived by contrasting good and bad responses. Our approach
has made two major contributions. Firstly, we curate a refined alignment
dataset that pairs initial responses and the corresponding revised ones.
Secondly, we devise a new loss function that can leverage fine-grained quality
signals to instruct the learning of LLMs for alignment. Extensive experiments
have demonstrated the effectiveness of our approaches by comparison with a number of
competitive baselines. | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | [
{
"id": "2309.00267"
},
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "2305.20050"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "1707.06347"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2302.02676"
},
{
"id": "2304.06767"
},
{
"id": "2305.03047"
},
{
"id": "2308.09662"
},
{
"id": "2009.03300"
},
{
"id": "2304.11082"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1804.09301"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2210.01241"
},
{
"id": "2212.08073"
},
{
"id": "1606.02960"
},
{
"id": "2306.02707"
},
{
"id": "1906.02448"
},
{
"id": "2204.05862"
},
{
"id": "2303.18223"
},
{
"id": "2305.17608"
},
{
"id": "2305.14239"
},
{
"id": "2305.11206"
},
{
"id": "2109.07958"
},
{
"id": "2305.16960"
}
] |