doi (string, length 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, length 31) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, ⌀ nullable) | journal_ref (string, 8–194 chars, ⌀ nullable) | primary_category (string, 5–17 chars) | published (string, length 8) | updated (string, length 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2311.01343 | 32 | where an example based on the Amazon Beauty dataset can be referred to in Fig. 3. However, directly conducting language modeling on the raw corpora is clearly infeasible, as each document is composed of heterogeneous vocab, user, and item tokens, where the number of meaningful vocab tokens (e.g., ∼50k for GPT and ∼30k for T5) can be diluted by the large number of newly introduced user/item tokens with randomly initialized embeddings.
3.3.2 Soft+Hard Prompting. To address the above challenge, we propose a novel soft+hard prompting strategy to facilitate language modeling on RS-specific corpora with heterogeneous user/item/vocab tokens. The strategy is based on a key observation that documents transformed from both user-item interactions $r_i$ and user/item textual features $x^u_i$, $x^v_j$ can be broken down into two parts: a heterogeneous part composed of soft (user/item) and hard (vocab) tokens providing context information regarding the gist of the document, and a main text part with homogeneous item/vocab tokens
$$x^{uv,m}_{ij,k+1} \sim p\big(x^{uv,m}_{ij,k+1} \mid x^{uv,m}_{ij,1:k},\, x^{uv,p}_{ij}\big) \quad (5)$$ | 2311.01343#32 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
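As a concrete illustration of the vocabulary extension described in the abstract above, here is a minimal sketch assuming a Hugging Face-style causal LM; the base checkpoint ("gpt2") and the `<user_i>`/`<item_j>` token format are assumptions for illustration, not the paper's released code:

```python
# Hedged sketch (not the authors' code): extend a pretrained LLM's vocabulary
# with user/item ID tokens. Base model and token format are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

num_users, num_items = 100, 50  # toy sizes
new_tokens = [f"<user_{i}>" for i in range(num_users)]
new_tokens += [f"<item_{j}>" for j in range(num_items)]

tokenizer.add_tokens(new_tokens)               # soft tokens join the ~50k vocab
model.resize_token_embeddings(len(tokenizer))  # new rows are randomly initialized
```

The newly added embedding rows start from random initialization, which is exactly why naive language modeling on such mixed corpora is unstable, as chunk 32 above notes.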
2311.01555 | 32 |
Lastly, there is a notable improvement in the performance metrics of all the distilled models after instruction distillation. For instance, the FLAN-T5-XL model, when used with the pointwise prompt, only marginally surpasses random recommendation. However, after the proposed instruction distillation process, its Acc@1 nearly doubles. A similar improvement is observed for FLAN-T5-Large, with its Acc@1 soaring from 8% to 19.71%. Even though the increase might not seem substantial due to the model's capacity, it represents a growth of over 5%.
5.3 Analytical Experiments | 2311.01555#32 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
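To make the distillation recipe in the abstract concrete, here is a hedged sketch; the prompts, function names, and the simple win-count aggregation are illustrative assumptions rather than the paper's exact implementation:

```python
# Sketch: rank with an effective pairwise prompt (teacher), then turn the
# resulting ordering into supervision for a cheaper pointwise prompt (student).
from itertools import combinations

def pairwise_teacher_rank(llm, query, docs):
    """Rank docs by pairwise wins under a complex comparison instruction."""
    wins = [0] * len(docs)
    for a, b in combinations(range(len(docs)), 2):
        prompt = (f"Query: {query}\nPassage A: {docs[a]}\nPassage B: {docs[b]}\n"
                  f"Which passage is more relevant to the query, A or B?")
        winner = a if llm(prompt).strip().upper().startswith("A") else b
        wins[winner] += 1
    return sorted(range(len(docs)), key=lambda d: -wins[d])

def pointwise_student_data(query, docs, ranking):
    """Convert the teacher ranking into (pointwise prompt, target) pairs."""
    return [
        (f"Query: {query}\nPassage: {docs[d]}\nIs the passage relevant?",
         len(docs) - rank)  # higher target = ranked higher by the teacher
        for rank, d in enumerate(ranking)
    ]
```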
2311.01343 | 33 | $$x^{uv,m}_{ij,k+1} \sim p\big(x^{uv,m}_{ij,k+1} \mid x^{uv,m}_{ij,1:k},\, x^{uv,p}_{ij}\big) \quad (5)$$
which generates the next vocab token $x^{uv,m}_{ij,k+1}$ based on previously generated vocab tokens $x^{uv,m}_{ij,1:k}$, with the prompt $x^{uv,p}_{ij}$ as the context.

⁴We use the superscripts $p$ and $m$ to distinguish the prompt and the main text. ⁵Hereafter, we take $x^{uv}_{ij}$ as an example for discussion, which can be easily generalized to the case of $x^u_i$.
| 2311.01343#33 | Collaborative Large Language Model for Recommender Systems |
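The generation step in Eq. (5) above is standard next-token prediction conditioned on the soft+hard prompt, with the loss restricted to the homogeneous main text. A minimal sketch, assuming a Hugging Face-style causal LM and a known prompt length separating $x^{uv,p}_{ij}$ from $x^{uv,m}_{ij}$:

```python
# Sketch: prompt-conditioned language modeling where prompt tokens serve as
# context only and the loss is taken over main-text positions.
import torch
import torch.nn.functional as F

def main_text_lm_loss(model, input_ids, prompt_len):
    labels = input_ids.clone()
    labels[:, :prompt_len] = -100                 # mask prompt tokens from the loss
    logits = model(input_ids=input_ids).logits    # [batch, seq, vocab]
    return F.cross_entropy(                       # shift: position t predicts t+1
        logits[:, :-1].reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
```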
2311.01555 | 33 | 5.3 Analytical Experiments
To gain deeper insights into the impact of model size and training signal, we carried out an analytical experiment. The results are depicted in Figure 3. Several key observations can be made from these results: (1) Instruction distillation models, represented by the yellow line in the figure, outperform the state-of-the-art supervised system, monoT5 (or SFT (500K), illustrated by the blue line), when the model size surpasses 3B. Moreover, our approach consistently exceeds the performance of earlier zero-shot LLM methods, namely RG and PRP, across all scales. (2) Distilling from larger models can enhance the performance of their smaller counterparts. As evidenced by our results labeled "Ours (XL)" in Figure 3, which capture the process of distilling the predictions of FLAN-T5-XL into smaller models, instruction distillation from larger models invariably boosts the capabilities of smaller ones. (3) Given the same training data size, our approach, which distills from FLAN-T5-XL (referred to as "Ours (XL)" in Figure 3) and is unsupervised, significantly outperforms its supervised counterpart (referred to as "SFT (10K)" in Figure 3). This finding shows the promising potential of leveraging LLMs as data labelers in ranking tasks. | 2311.01555#33 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers |
2311.01343 | 34 |
When maximizing the likelihood, the content information in $x^{uv,m}_{ij}$ can be encoded in the content token embeddings of user $i$ and item $j$, i.e., $z^{c,u}_i$ and $z^{c,v}_j$, where the pretrained knowledge of the LLM can be fully utilized. For example, for the reviews shown in Fig. 3, the pretrained LLM will know that <item_46> is a lipstick with dark purple, red, and pink colors that can have the side effect of drying lips, and reason that <user_57> likes the colors but hates the side effect, which can be alleviated by the lip balm. | 2311.01343#34 | Collaborative Large Language Model for Recommender Systems |
2311.01555 | 34 | [Figure 3 plot: nDCG@10 (y-axis, 45–75) vs. model size (220M, 770M, 3B, 11B) for RG, PRP, SFT (500K), SFT (10K), Ours (XL), and Ours.]
Figure 3: Comparison of the proposed method with baselines in terms of model size. Our methods (denoted by the yellow line) outperform supervised finetuning (SFT) methods once the number of parameters exceeds 3B.
# 6 Conclusion
This paper proposes instruction distillation, an unsupervised method that distills the abilities LLMs exhibit under complex instructions into the same model prompted with simpler instructions. This method significantly improves the efficiency and stability of LLMs, making it well suited for industrial deployment. Our experimental results on passage ranking and conversational recommendation verify the effectiveness of the proposed method. With our method, the efficiency of the models is significantly improved: a 10–100× increase in efficiency can be observed compared to comparable unsupervised methods.
# References
Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. Tallrec: An effective and efficient tuning framework to align large language model with recommendation. ArXiv, abs/2305.00447. | 2311.01555#34 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01343 | 35 | Discussion. Generally, since the "hard" (i.e., vocab) part of the prompts $x^{r,p}_i$ is what the pretrained LLM can understand, it is designed to trigger the reasoning ability of the pretrained LLM based on its encoded knowledge. For example, the relational phrase "has interacted with" in the prompt $x^{r,p}_i$ guides the collaborative LLM to understand that the newly introduced token <user_i> is a user subject and the tokens in the main text $x^{r,m}_i$ are the objects, i.e., the sequence of interacted items. Meanwhile, the context "write the review for" in $x^{uv,p}_{ij}$ directs the content LLM to better understand the nature of the main text $x^{uv,m}_{ij}$, i.e., <user_i>'s judgment on <item_j> based on personal using experience. The specific formulation of the prompt can be flexible, as Geng et al. have demonstrated that variation in the expression of the prompt makes little difference, as long as the meaning is the same and the prompt is consistent across the training and testing phases. | 2311.01343#35 | Collaborative Large Language Model for Recommender Systems |
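Following the phrasing quoted in the discussion above ("has interacted with", "write the review for"), a document could be assembled as below; the templates are illustrative assumptions, and as noted, the exact wording is flexible as long as it stays consistent:

```python
# Illustrative soft+hard prompt templates; not the paper's exact strings.
def collab_document(user_id, item_ids):
    prompt = f"<user_{user_id}> has interacted with"       # soft+hard prompt
    main_text = " ".join(f"<item_{j}>" for j in item_ids)  # homogeneous item tokens
    return prompt, main_text

def review_document(user_id, item_id, review_text):
    prompt = f"<user_{user_id}> write the review for <item_{item_id}>:"
    return prompt, review_text                             # homogeneous vocab tokens
```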
2311.01555 | 35 | Christopher J. C. Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Gregory N. Hullender. 2005. Learning to rank using gradient descent. In ICML 2005.
Daniel Fernando Campos, Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, Li Deng, and Bhaskar Mitra. 2016. MS MARCO: A human generated machine reading comprehension dataset. ArXiv, abs/1611.09268.
Jiawei Chen, Hande Dong, Xiang Wang, Fuli Feng, Meng Wang, and Xiangnan He. 2023. Bias and debias in recommender system: A survey and future directions. ACM Transactions on Information Systems, 41(3):1–39.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. | 2311.01555#35 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers |
2311.01343 | 36 | 3.3.3 Mutual Regularization. Since the pretrained LLMs are not recommendation-oriented, naively optimizing the language modeling objective as in Eq. (5) unavoidably captures noise irrelevant to recommendations. In addition, since user/item interactions are sparse, the collaborative LLM can easily overfit the observed interactions. To address this issue, we propose mutually-regularized pretraining for CLLM4Rec, where the collaborative LLM can guide the content LLM to capture recommendation-oriented information from user/item content, and the content LLM can in turn introduce side information to support collaborative filtering.
The mutual regularization naturally arises with the generative process of the CLLM4Rec pretraining stage defined in the previous subsections. If we denote the stacked item token embeddings as $Z^{l,v}_i$, which contains item $j$ and other items interacted with by user $i$, the generative process of CLLM4Rec associated with $x^r_i$ and $x^{uv}_{ij}$ can be defined via the joint distribution as follows: | 2311.01343#36 | Collaborative Large Language Model for Recommender Systems |
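One way to picture the setup implied here is that every user/item ID carries two embeddings, a collaborative one ($z^l$) and a content one ($z^c$), which the mutual regularization later ties together. A hypothetical sketch (names and structure are assumptions, not the paper's code):

```python
# Hypothetical sketch: two embedding tables over the same user/item ID
# vocabulary, one per LLM; the shared pretrained backbone and the MSE
# coupling between the tables are added by the objectives below.
import torch.nn as nn

class IDTokenEmbeddings(nn.Module):
    def __init__(self, num_users, num_items, hidden_dim):
        super().__init__()
        self.collab = nn.Embedding(num_users + num_items, hidden_dim)   # z^l
        self.content = nn.Embedding(num_users + num_items, hidden_dim)  # z^c

    def forward(self, id_token_ids, which="collab"):
        table = self.collab if which == "collab" else self.content
        return table(id_token_ids)
```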
2311.01555 | 36 | Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos, and Ellen M. Voorhees. 2020. Overview of the trec 2020 deep learning track. ArXiv, abs/2102.07662.
Zhuyun Dai, Vincent Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, and Ming-Wei Chang. 2023. Promptagator: Few-shot dense retrieval from 8 examples. In ICLR 2023.
Yixing Fan, Xiaohui Xie, Yinqiong Cai, Jia Chen, Xinyu Ma, Xiangsheng Li, Ruqing Zhang, and Jiafeng Guo. 2021. Pre-training methods in information retrieval. ArXiv, abs/2111.13853.
Yao Fu, Hao-Chun Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. 2023. Specializing smaller language models towards multi-step reasoning. ArXiv, abs/2301.12726.
Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2022. Precise zero-shot dense retrieval without relevance labels. ArXiv, abs/2212.10496. | 2311.01555#36 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01343 | 37 | $$p\big(x^{r,m}_i, x^{uv,m}_{ij}, z^{l,u}_i, Z^{l,v}_i, z^{c,u}_i, Z^{c,v}_i \mid x^{r,p}_i, x^{uv,p}_{ij}\big) = \underbrace{\prod_k p\big(x^{r,m}_{i,k+1} \mid x^{r,m}_{i,1:k}, x^{r,p}_i\big)}_{\text{LM for collab. LLM}} \cdot \underbrace{\prod_k p\big(x^{uv,m}_{ij,k+1} \mid x^{uv,m}_{ij,1:k}, x^{uv,p}_{ij}\big)}_{\text{LM for content LLM}} \cdot \underbrace{p\big(z^{l,u}_i \mid z^{c,u}_i\big) \prod_k p\big(z^{l,v}_{i,k} \mid z^{c,v}_{i,k}\big)}_{\text{mutual regularization}} \cdot \underbrace{p\big(z^{c,u}_i\big) \prod_k p\big(z^{c,v}_{i,k}\big)}_{\text{prior}} \quad (6)$$
A scrutiny of Eq. (6) reveals that the joint distribution can be decomposed into three parts: 1) the language modeling of the collaborative and content LLMs that learns user/item token embeddings as in Eqs. (4) and (5); 2) the mutual regularization that connects the user/item token embeddings of the two LLMs (i.e., according to Eqs. (1-2),
| 2311.01343#37 | Collaborative Large Language Model for Recommender Systems |
2311.01555 | 37 | Google. 2023. PaLM 2 technical report. ArXiv, abs/2305.10403.
Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. 2023. Large language models are zero-shot rankers for recommender systems. ArXiv, abs/2305.08845.
Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018a. Towards deep conversational recommendations. In NIPS 2018.
Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Christopher Joseph Pal. 2018b. Towards deep conversational recommendations. ArXiv, abs/1812.07617. | 2311.01555#37 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers |
2311.01343 | 38 |
$p(z^{l,u}_i \mid z^{c,u}_i)$ and $p(z^{l,v}_{i,k} \mid z^{c,v}_{i,k})$ are conditional Gaussians, which introduce MSE regularization between $z^{l,u}_i$ and $z^{c,u}_i$, and between $z^{l,v}_{i,k}$ and $z^{c,v}_{i,k}$, when the log-likelihood is maximized); 3) the prior terms, which will be ignored due to the existence of mutual regularization (i.e., setting the precision $\lambda_l$ in the prior in Eq. (1) to zero).
We use maximum a posteriori (MAP) estimation for the user/item token embeddings $z^{l,u}_i$, $Z^{l,v}_i$, $z^{c,u}_i$, $z^{c,v}_j$, where the objective is proportional to the logarithm of the joint distribution specified in Eq. (6). We take alternating steps to optimize the MAP objective. If we denote the trainable parameters associated with the item token prediction head and the vocab token prediction head as $\theta_h$ (which are tied with the corresponding token embeddings), the objectives for the collaborative LLM (L-step) and the content LLM (C-step) with mutual regularization can be derived as follows: | 2311.01343#38 | Collaborative Large Language Model for Recommender Systems |
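For reference, the MSE form of the mutual regularization follows directly from the conditional-Gaussian assumption; assuming $p(z^{l,u}_i \mid z^{c,u}_i) = \mathcal{N}(z^{l,u}_i;\, z^{c,u}_i,\, \lambda_c^{-1}\mathbf{I})$, consistent with the $\lambda_c$ terms in Eqs. (7)-(8) below:

```latex
\ln p\big(z^{l,u}_i \mid z^{c,u}_i\big)
  = \ln \mathcal{N}\big(z^{l,u}_i;\, z^{c,u}_i,\, \lambda_c^{-1}\mathbf{I}\big)
  = -\frac{\lambda_c}{2}\,\big\|z^{l,u}_i - z^{c,u}_i\big\|_2^2 + \mathrm{const},
```

so maximizing this log-likelihood is equivalent to minimizing the MSE between the two embeddings.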
2311.01555 | 38 | Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, E. Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel J. Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan S. Kim, Neel Guha, Niladri S. Chatterji, O. Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas F. Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022. Holistic evaluation of language models. ArXiv, abs/2211.09110. | 2311.01555#38 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers |
2311.01343 | 39 | L-step. In the L-step, we fix the user/item content embeddings $z^{c,u}_i$, $Z^{c,v}_i$ in Eq. (6) and use them to constrain the user/item collaborative embeddings along with the language modeling of the collaborative LLM, leading to the following composite objective:

$$\mathcal{L}^{\mathrm{MAP}}_{\text{L-step}}\big(z^{l,u}_i, Z^{l,v}_i; \theta_h\big) = \underbrace{\sum_k \ln p\big(x^{r,m}_{i,k+1} \mid x^{r,m}_{i,1:k}, x^{r,p}_i\big)}_{\text{LM loss for collab. LLM}} \underbrace{- \frac{\lambda_c}{2}\big\|z^{l,u}_i - z^{c,u}_i\big\|^2_2 - \frac{\lambda_c}{2}\sum_k \big\|z^{l,v}_{i,k} - z^{c,v}_{i,k}\big\|^2_2}_{\text{MR loss with content LLM}} \underbrace{- \frac{\lambda_l}{2}\big\|z^{l,u}_i\big\|^2_2 - \frac{\lambda_l}{2}\sum_k \big\|z^{l,v}_{i,k}\big\|^2_2}_{\text{prior loss}} + C_r, \quad (7)$$

where $C_r$ is a constant irrelevant to the optimization. The LM loss captures the collaborative similarity between the token embeddings of user $i$ and the interacted items, where side information can be introduced via the MR loss to support collaborative filtering. | 2311.01343#39 | Collaborative Large Language Model for Recommender Systems |
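As a hedged sketch of how Eq. (7) could be implemented as a loss to minimize (the negative of the objective up to its constant); all tensor and variable names are assumptions for illustration:

```python
# Sketch of the L-step objective in Eq. (7). `collab_nll` is the collaborative
# LM's negative log-likelihood over the main-text item tokens.
import torch

def l_step_loss(collab_nll, z_lu, z_lv, z_cu, z_cv, lam_c, lam_l):
    # MR: pull collaborative embeddings toward the frozen content embeddings
    mr = 0.5 * lam_c * ((z_lu - z_cu.detach()).pow(2).sum()
                        + (z_lv - z_cv.detach()).pow(2).sum())
    # Prior: L2 on the collaborative embeddings (precision lam_l)
    prior = 0.5 * lam_l * (z_lu.pow(2).sum() + z_lv.pow(2).sum())
    return collab_nll + mr + prior
```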
2311.01555 | 39 | Xueguang Ma, Xinyu Crystina Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-shot listwise document reranking with a large language model. ArXiv, abs/2305.02156.
Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. 2022. Teaching small language models to reason. ArXiv, abs/2212.08410.
Microsoft. 2023. Confirmed: the new Bing runs on OpenAI's GPT-4. https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4.
Niklas Muennighoff. 2022. SGPT: GPT sentence embeddings for semantic search. ArXiv, abs/2202.08904.
Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Findings of EMNLP.
OpenAI. 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt. | 2311.01555#39 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers |
2311.01343 | 40 | C-step. After one-step optimization of the L-step, we fix the user/item collaborative token embeddings $z^{l,u}_i$, $Z^{l,v}_i$ in Eq. (6), leading to the following composite objective for the content LLM:

$$\mathcal{L}^{\mathrm{MAP}}_{\text{C-step}}\big(z^{c,u}_i, z^{c,v}_j; \theta_h\big) = \underbrace{\sum_k \ln p\big(x^{uv,m}_{ij,k+1} \mid x^{uv,m}_{ij,1:k}, x^{uv,p}_{ij}\big)}_{\text{LM loss for content LLM}} \underbrace{- \frac{\lambda_c}{2}\big\|z^{c,u}_i - z^{l,u}_i\big\|^2_2 - \frac{\lambda_c}{2}\big\|z^{c,v}_j - z^{l,v}_j\big\|^2_2}_{\text{MR loss with collab. LLM}} + C_c, \quad (8)$$
where the MR loss constrains the content LLM to capture recommendation-oriented information from user/item textual features. In Eqs. (7) and (8), $\lambda_c$ controls the strength of the mutual regularization, which will be thoroughly discussed in the empirical study. | 2311.01343#40 | Collaborative Large Language Model for Recommender Systems |
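A matching sketch for the C-step objective in Eq. (8), again as a loss to minimize with illustrative names; in training, the two steps alternate, each updating its own embeddings while the other LLM's embeddings stay frozen:

```python
# Sketch of the C-step objective in Eq. (8). `content_nll` is the content
# LM's negative log-likelihood over the main-text vocab tokens.
def c_step_loss(content_nll, z_cu, z_cv, z_lu, z_lv, lam_c):
    # MR: pull content embeddings toward the frozen collaborative embeddings
    mr = 0.5 * lam_c * ((z_cu - z_lu.detach()).pow(2).sum()
                        + (z_cv - z_lv.detach()).pow(2).sum())
    return content_nll + mr
```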
2311.01555 | 40 | OpenAI. 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt.
OpenAI. 2023. GPT-4 technical report. ArXiv, abs/2303.08774.
Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. 2023. Large language models are effective text rankers with pairwise ranking prompting. ArXiv, abs/2306.17563.
Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joëlle Pineau, and Luke Zettlemoyer. 2022a. Improving passage retrieval with zero-shot question generation. In EMNLP 2022.
Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joëlle Pineau, and Manzil Zaheer. 2022b. Questions are all you need to train a dense passage retriever. ArXiv, abs/2206.10658. | 2311.01555#40 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers |
2311.01343 | 41 | 3.3.4 Stochastic Item Reordering. Another issue that hinders effective collaborative filtering via Eq. (7) is the order of the item tokens when transforming the historical interactions $r_i$ into a token sequence $x^{r,m}_i$ for language modeling. Item order usually does not matter for collaborative filtering (and even when it matters, the positional embeddings denoting the order of natural language may not capture the semantics of the order of interactions). To address this issue, we propose to randomly permute the item tokens in $x^{r,m}_i$
with the prompt $x^{r,p}_i$ fixed when optimizing the collaborative LLM as in Eq. (7). Through this strategy, the order of interacted items can be ignored without negative influence on the vocab tokens in $x^{r,p}_i$.
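A minimal sketch of this strategy (the token-ID lists are assumptions for illustration); drawing a fresh permutation each time a training example is visited keeps the prompt intact while removing any spurious positional signal:

```python
# Sketch of stochastic item reordering: permute the interacted-item tokens of
# the main text x^{r,m}_i while keeping the soft+hard prompt x^{r,p}_i fixed.
import random

def reorder_items(prompt_token_ids, item_token_ids):
    shuffled = list(item_token_ids)
    random.shuffle(shuffled)                   # in-place permutation of the copy
    return list(prompt_token_ids) + shuffled   # prompt untouched, items permuted
```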
# 3.4 Recommendation-Oriented Finetuning | 2311.01343#41 | Collaborative Large Language Model for Recommender Systems |
2311.01555 | 41 | Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. REPLUG: Retrieval-augmented black-box language models. ArXiv, abs/2301.12652.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-LM: Training multi-billion parameter language models using model parallelism. ArXiv, abs/1909.08053.
Charles Burton Snell, Dan Klein, and Ruiqi Zhong. 2022. Learning by distilling context. ArXiv, abs/2209.15189.
Weiwei Sun, Pengjie Ren, and Zhaochun Ren. 2023a. Generative knowledge selection for knowledge-grounded dialogues. In Findings of EACL 2023.
Weiwei Sun, Lingyong Yan, Zheng Chen, Shuaiqiang Wang, Haichao Zhu, Pengjie Ren, Zhumin Chen, Dawei Yin, M. de Rijke, and Zhaochun Ren. 2023b. Learning to tokenize for generative retrieval. In NeurIPS 2023. | 2311.01555#41 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers |
2311.01343 | 42 | # 3.4 Recommendation-Oriented Finetuning
3.4.1 Pretraining vs. Finetuning. The pretraining of CLLM4Rec aims to learn user/item token embeddings based on the large corpus of documents transformed from user-item interactions $\mathbf{r}_i$ and user/item textual features $\mathbf{x}^u_i$, $\mathbf{x}^v_j$, $\mathbf{x}^{uv}_{ij}$ via language modeling. However, for now, the pretrained CLLM4Rec can only complete item/vocab token sequences based on the soft+hard prompts, and therefore the gap between NLP and RS is still not completely eliminated. In addition, naively treating the collaborative LLM as a recommendation model can lead to huge computational costs, as the recommended items would be generated sequentially via auto-regression. Therefore, we propose a recommendation-oriented finetuning strategy for CLLM4Rec, which aims to finetune the pretrained collaborative LLM and tailor it for efficient recommendations. | 2311.01343#42 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 42 | Weiwei Sun, Lingyong Yan, Xinyu Ma, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, and Zhaochun Ren. 2023c. Is ChatGPT good at search? Investigating large language models as re-ranking agents. In EMNLP 2023.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca.
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. Beir: A heterogenous benchmark for zero-shot evaluation of information retrieval models. In NeurIPS 2021. | 2311.01555#42 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01343 | 43 | 3.4.2 Masked Prompting with Multinomial Head. To achieve this purpose, we first design a masked prompting strategy to generate recommendation-oriented prompts. For each user, we randomly mask the interacted items $\mathbf{r}_i$ by $100 \times \gamma_r\%$, where the remaining items are denoted as $\mathbf{r}^{mask}_i$ and used to generate a recommendation-oriented prompt $\mathbf{x}^{rec,p}_i$. All the hold-out items, which we denote with a multi-hot vector $\mathbf{r}^{hold}_i$, are treated as the target; a minimal sketch of this masking step is given below.
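As an illustration only (the token naming scheme, the mask ratio $\gamma_r$, and the exact prompt wording are assumptions, not the authors' released code), the masking step can be sketched in plain Python:

```python
import random

def build_masked_prompt(user_id, interacted_items, gamma_r=0.5, seed=0):
    """Hold out a fraction gamma_r of a user's interacted items as the target;
    the remaining items form the recommendation-oriented prompt."""
    rng = random.Random(seed)
    items = list(interacted_items)
    rng.shuffle(items)
    n_hold = max(1, int(len(items) * gamma_r))   # number of hold-out items
    hold_out, remaining = items[:n_hold], items[n_hold:]
    prompt = ("<user_{}> has interacted with ".format(user_id)
              + " ".join("<item_{}>".format(j) for j in remaining)
              + " the user will interact with:")
    return prompt, hold_out

# e.g., build_masked_prompt(7, [3, 15, 42, 8]) holds out two items at random
```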
The prompt $\mathbf{x}^{rec,p}_i$ takes the following form: (c) Recommendation Prompts & Target. (prompt) <user_i> has interacted with <item_j'> <item_k'> the user will interact with: (target) $\mathbf{r}^{hold}_i$ | 2311.01343#43 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 43 | Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. ArXiv, abs/2302.13971.
Andrew Trotman, Antti Puurula, and Blake Burgess. 2014. Improvements to bm25 and language models examined. In Proceedings of the 19th Australasian Document Computing Symposium, pages 58–65.
Liang Wang, Nan Yang, and Furu Wei. 2023a. Query2doc: Query expansion with large language models. ArXiv, abs/2303.07678.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023b. Self-instruct: Aligning language model with self-generated instructions. In ACL 2023. | 2311.01555#43 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01343 | 44 | which triggers the reasoning ability of the pretrained LLM by using the relational phrase "has interacted with" to describe the historical interactions, and the phrase "the user will interact with" to guide the prediction of the target items $\mathbf{r}^{hold}_i$.
We name CLLM4Rec in the finetuning stage as RecLLM, which inherits the CLLM4Rec base model $l_{llm}$ from the collaborative LLM in the pretraining stage and introduces a new item prediction head with multinomial likelihood, i.e., $f_{rec}$, whose weights are also tied with the item token embeddings $\mathbf{Z}^l$. The generation of the hold-out items $\mathbf{r}^{hold}_i$ via the RecLLM can be formulated as follows:
$\mathbf{r}^{hold}_i \sim \text{multi}\,(f_{rec}(\mathbf{h}^{rec}_{i,-1}),\, N^{hold}_i)$, where $\mathbf{h}^{rec}_{i,-1} = l_{llm}(\mathbf{x}^{rec,p}_i)$, (9) | 2311.01343#44 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 44 | Likang Wu, Zhilan Zheng, Zhaopeng Qiu, Hao Wang, Hongchao Gu, Tingjia Shen, Chuan Qin, Chen Zhu, Hengshu Zhu, Qi Liu, Hui Xiong, and Enhong Chen. 2023. A survey on large language models for recommendation. ArXiv, abs/2305.19860.
Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. Generate rather than retrieve: Large language models are strong context generators. In ICLR 2023.
Tony Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In ICML 2021.
Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Chong Meng, Shuaiqiang Wang, Zhicong Cheng, Zhaochun Ren, and Dawei Yin. 2023. Knowing what llms do not know: A simple yet effective self-detection method. ArXiv, abs/2310.17918. | 2311.01555#44 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01343 | 45 | $\mathbf{r}^{hold}_i \sim \text{multi}\,(f_{rec}(\mathbf{h}^{rec}_{i,-1}),\, N^{hold}_i)$, where $\mathbf{h}^{rec}_{i,-1} = l_{llm}(\mathbf{x}^{rec,p}_i)$, (9)
where $\text{multi}$ denotes the multinomial distribution and $N^{hold}_i$ is the number of hold-out items for user $i$ (a minimal sketch of this prediction head is given below). When finetuning the RecLLM according to Eq. (9), $\mathbf{h}^{rec}_{i,-1}$, which can be viewed as the user latent variable summarizing the historical interactions of user $i$, is encouraged to be similar to the collaborative embeddings of all the interacted items. In addition, we keep it regularized with the content LLM in a similar manner as Eq. (7), and use the stochastic item reordering strategy to generate the prompt $\mathbf{x}^{rec,p}_i$.⁶ Through the proposed finetuning strategy, CLLM4Rec can fully utilize the encoded knowledge from the pretrained LLM backbone and the
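To make Eq. (9) concrete, here is a minimal PyTorch sketch of the tied multinomial item head and its likelihood — an illustrative reconstruction, not the authors' implementation; `h_last` stands in for $\mathbf{h}^{rec}_{i,-1}$ produced by the base LLM:

```python
import torch
import torch.nn.functional as F

class MultinomialItemHead(torch.nn.Module):
    """Item prediction head whose weights are tied to the item token embeddings,
    so scoring all items is a single matrix product."""
    def __init__(self, item_token_embeddings: torch.nn.Embedding):
        super().__init__()
        self.item_emb = item_token_embeddings  # weight shape: (num_items, hidden)

    def forward(self, h_last: torch.Tensor) -> torch.Tensor:
        # h_last: (batch, hidden), the prompt's last hidden state from the base LLM.
        return h_last @ self.item_emb.weight.T  # (batch, num_items) item logits

def multinomial_nll(logits: torch.Tensor, r_hold: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the hold-out items under a multinomial over
    all items; r_hold is a (batch, num_items) multi-hot target matrix."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(r_hold * log_probs).sum(dim=-1).mean()
```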
⁶The objective of the RecLLM is formulated in Eq. (10) in Appendix A.2. | 2311.01343#45 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 45 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Zhicheng Dou, and Ji-Rong Wen. 2023. Large language models for information retrieval: A survey. ArXiv, abs/2308.07107.
# A Prompts
A.1 Passage Ranking
Pointwise Ranking Prompt
Question: Given a query “{{query}}”, Is the following passage relevant to the query?
Passage: {{passage}}
If it is relevant answer Yes, else answer No.
Answer:
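For illustration (not from the paper's released code), this pointwise template can be scored by comparing the model's next-token preference for "Yes" versus "No". Here `llm_next_token_logits` is a hypothetical stand-in for a causal-LM forward pass, and the exact token-id handling is tokenizer-specific:

```python
POINTWISE_TEMPLATE = (
    'Question: Given a query "{query}", Is the following passage relevant to the query?\n'
    "Passage: {passage}\n"
    "If it is relevant answer Yes, else answer No.\n"
    "Answer:"
)

def pointwise_scores(llm_next_token_logits, tokenizer, query, passages):
    """Score each passage independently: one LLM call per passage, so cost is
    linear in the number of candidates."""
    yes_id = tokenizer.convert_tokens_to_ids("Yes")  # tokenizer-specific assumption
    no_id = tokenizer.convert_tokens_to_ids("No")
    scores = []
    for passage in passages:
        prompt = POINTWISE_TEMPLATE.format(query=query, passage=passage)
        logits = llm_next_token_logits(prompt)       # next-token logits over the vocab
        scores.append(float(logits[yes_id] - logits[no_id]))
    return scores  # rank passages by descending score
```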
Pairwise Ranking Prompt
Question: Given a query “{{query}}”, which of the following two passages is more relevant to the query?
passage A: {{passage_A}}
passage B: {{passage_B}}
Output the identifier of the more relevant passage. The answer must be passage A or passage B.
Answer:
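A hedged sketch of how such pairwise judgments can be aggregated into a full ranking; `compare` stands in for prompting the LLM with the pairwise template above. The O(n²) LLM calls of this teacher ranking are what instruction distillation transfers to the cheaper pointwise student:

```python
from itertools import permutations

def pairwise_rank(compare, passages):
    """Aggregate all ordered pairwise comparisons into a ranking by win count.
    compare(a, b) returns 'A' or 'B' for the more relevant passage; querying
    both orders (a, b) and (b, a) mitigates position bias."""
    wins = [0] * len(passages)
    for i, j in permutations(range(len(passages)), 2):  # O(n^2) LLM calls
        winner = compare(passages[i], passages[j])
        wins[i if winner == "A" else j] += 1
    # In instruction distillation, this teacher ranking supervises the
    # simpler pointwise student.
    return sorted(range(len(passages)), key=lambda i: wins[i], reverse=True)
```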
A.2 Conversational Recommendation
Pointwise Ranking Prompt
Question: Given the conversation history between the recommender and the user:
{{query}}
Based on the user's preference, is the following movie suitable to the user?
Movie: {{movie}}
The answer must be Y or N. Give the answer after Answer: .
Pairwise Ranking Prompt
Question: Given the conversation history between the recommender and the user:
{{query}} | 2311.01555#45 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01555 | 46 | The answer must be Y or N. Give the answer after Answer: .
Pairwise Ranking Prompt
Question: Given the conversation history between the recommender and the user:
{{query}}
Based on the user's preference, which of the following two movies is more suitable to the user?
Movie A: {{movie_A}}
Movie B: {{movie_B}}
The answer must be A or B. Give the answer after the Answer: .
Listwise Ranking Prompt
Question: Given the conversation history between the recommender and the user:
{{query}}
Based on the user's preference, which of the following movies is the most suitable for the user?
[1]: {{movie_1}}
[2]: {{movie_2}}
...
Answer the question with the number of the movie. The answer will include one and only one number. Give the answer after Answer: . | 2311.01555#46 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01343 | 47 | # 3.5 Predictions with CLLM4Rec
After the pretraining and finetuning of CLLM4Rec, to make recommendations for user $i$, we can convert the whole historical interactions of the user, i.e., $\mathbf{r}_i$, into the recommendation-oriented prompt $\hat{\mathbf{x}}^{rec,p}_i$ as described in Section 3.4.2 (with no masked items) and input it into the RecLLM model. Then, the multinomial probability $\hat{\mathbf{r}}_i$ over all $J$ items can be obtained through one forward propagation via $\hat{\mathbf{r}}_i = \text{multi}(f_{rec}(\hat{\mathbf{h}}^{rec}_{i,-1}))$, where uninteracted items with top-$K$ scores in $\hat{\mathbf{r}}_i$ can be selected as recommendations (a minimal sketch follows below).
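A minimal sketch of this one-pass prediction step, assuming a hypothetical `rec_llm` callable that maps prompt token ids to a vector of item logits via the multinomial head:

```python
import torch

@torch.no_grad()
def recommend_top_k(rec_llm, prompt_ids, r_history, k=20):
    """One forward pass scores all items at once; already-interacted items are
    masked out before taking the top-k. r_history is the user's multi-hot
    interaction history."""
    logits = rec_llm(prompt_ids)                      # (num_items,) item logits
    probs = torch.softmax(logits, dim=-1)             # multinomial probabilities
    probs = probs.masked_fill(r_history.bool(), 0.0)  # drop interacted items
    return torch.topk(probs, k).indices               # recommended item ids
```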
# 4 EMPIRICAL STUDY
In this section, we present the experiments on four public datasets and one LinkedIn dataset to demonstrate the effectiveness of CLLM4Rec, aiming to answer the following research questions. | 2311.01343#47 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 48 | In this section, we present the experiments on four public datasets and one LinkedIn dataset to demonstrate the effectiveness of CLLM4Rec, aiming to answer the following research questions.
• RQ1. How does CLLM4Rec, the first RS that tightly couples the ID-based paradigm with the LLM-based paradigm, perform compared to state-of-the-art ID-based and LLM-based RSs?
• RQ2. How does the pretraining stage of CLLM4Rec (including the mutual regularization trick and the stochastic item reordering strategy) influence the performance of CLLM4Rec?
• RQ3. How does the finetuning stage of CLLM4Rec, with the masked prompt and the multinomial item prediction head, influence the efficiency and effectiveness of recommendations?
# 4.1 Experimental Setup | 2311.01343#48 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 49 | # 4.1 Experimental Setup
4.1.1 Datasets. The experiments are mainly based on four public datasets: the Amazon (AM)-Beauty, AM-Toys, and AM-Sports datasets [17] and the Yelp dataset [38], where we binarize the interactions by keeping only ratings > 3 and treat them as implicit feedback [39]. In addition, we filter the datasets such that they keep the original 5-core property after binarization. For each user, we randomly select 80% of interactions for training, 10% for validation, and 10% for testing, where at least one item is selected for the validation and the test set. The reviews that users provide to the items are collected as the textual features $\mathbf{x}^{uv}_{ij}$. The real-world experiments are based on a job recommendation dataset collected nearline at the Company, where users' clicks on job Ads are logged as the implicit feedback, and users' self-provided biographies $\mathbf{x}^u_i$ and the job descriptions $\mathbf{x}^v_j$ are collected as the textual features, respectively. The statistics of the datasets are summarized in Table 3 in the Appendix. | 2311.01343#49 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 50 | 4.1.2 Implementation Details. Due to the space limitation, we only discuss CLLM4Rec with the GPT-2 backbone (token embedding size 768 and vocabulary size 50,257) in this section; experiments with the T5 backbone are discussed in Appendix B. During the training stage, we first optimize the content LLM as in Eq. (5) via language modeling for 10 epochs to warm up the user/item content token embeddings. Then, in the mutually-regularized pretraining stage, we alternately train the collaborative and content LLMs as specified in Eqs. (7) and (8) for 100 epochs. Finally, we conduct the recommendation-oriented finetuning for 150 epochs, where the RecLLM is monitored with the metrics Recall@20, Recall@40, and
NDCG@100 calculated on the validation set, as with [39]. The RecLLM with the best performance is logged and evaluated on the test set as the final result. The hyperparameter $\lambda$ in Eqs. (7) and (8) is an important one: we first fix its value to the optimal one found by grid search, and then discuss its influence in Section 4.3 (minimal sketches of the reported metrics follow below).
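For reference, minimal sketches of the reported metrics with binary relevance, assuming the Recall@k normalization of [39] (illustrative, not the authors' evaluation script):

```python
from math import log2

def recall_at_k(ranked_items, relevant, k):
    """Recall@k normalized by min(k, |relevant|), following the protocol of [39]."""
    relevant = set(relevant)
    hits = sum(1 for item in ranked_items[:k] if item in relevant)
    return hits / min(k, len(relevant))

def ndcg_at_k(ranked_items, relevant, k):
    """Truncated NDCG with binary relevance."""
    relevant = set(relevant)
    dcg = sum(1.0 / log2(rank + 2) for rank, item in enumerate(ranked_items[:k])
              if item in relevant)
    idcg = sum(1.0 / log2(rank + 2) for rank in range(min(k, len(relevant))))
    return dcg / idcg
```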
# 4.2 Comparison with Baselines | 2311.01343#50 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 51 | # 4.2 Comparison with Baselines
4.2.1 Baselines. To demonstrate the multifaceted superiority of the proposed CLLM4Rec, we include the following ID-based and (L)LM-based RSs as baselines for comparison:
# ID-based Baselines.
• Multi-VAE [39] is an ID-based collaborative filtering baseline that recommends new items by reconstructing the ratings $\mathbf{r}_i$ via a variational auto-encoder (VAE) with multinomial likelihood.
• MD-CVAE [40] is a hybrid RS that extends the Multi-VAE by introducing a dual feature VAE on the textual features $\mathbf{x}^{uv}_{ij}$ to regularize the reconstruction of $\mathbf{r}_i$ in the Multi-VAE.
# LM-based Baselines⁷.
• BERT4Rec [41] uses the masked language modeling (MLM) proposed in BERT [32] to learn user/item embeddings for recommendation with a bidirectional self-attention mechanism.
• S3Rec [38] extends BERT4Rec by augmenting the MLM with auxiliary tasks such as item attribute prediction, where content features can be fused for self-supervised learning.
# LLM-based Baselines. (a) Qualitative Analysis. | 2311.01343#51 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 52 | # LLM-based Baselines. (a) Qualitative Analysis.
Both pseudo-ID-based and description-based methods discussed in Section 2.2 represent users/items with multiple tokens and formulate direct recommendation as a token generation problem. Since the generated tokens could be irrelevant to the recommendation purpose, candidate items usually need to be explicitly provided in the prompt (e.g., P5 [20] provides 100 candidate items where one is positive, and TALLRec [36] outputs a yes/no decision based on the user/item descriptions in the prompts, etc.). In contrast, CLLM4Rec can generate multiple recommendations from the entire candidate pool. Therefore, these methods cannot directly work in our setting, and the comparisons are mainly based on qualitative analysis. (b) Quantitative Analysis
In addition, we design the following LLM-based baselines to quantitatively demonstrate the effectiveness of CLLM4Rec.
• LLM-Scratch has the same structure as CLLM4Rec, but it trains the whole model from scratch instead of loading and fixing the weights of the pretrained LLM backbone.
• LLM-CF eliminates the content LLM from CLLM4Rec and the mutually-regularized pretraining step and uses only the collaborative LLM and RecLLM for recommendation. | 2311.01343#52 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 53 | • LLM-FtAll has the same structure as CLLM4Rec, but it finetunes the whole network, including the vocab embeddings as well as other parts of the pretrained LLM, instead of training only the newly introduced user/item token embeddings.
⁷Note that both BERT4Rec and S3Rec are originally designed for sequential recommendation. In this paper, we use a recommendation-oriented finetuning similar to CLLM4Rec's to adapt them to direct recommendation, where item sequences generated from masked interactions are used to predict all hold-out items with multinomial likelihood.
# Table 1: Comparison between CLLM4Rec and various baselines with GPT backbone on three Amazon Review datasets. | 2311.01343#53 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 54 | AM-Beauty      Recall@20  Recall@40  NDCG@100
Multi-VAE      0.1295     0.1720     0.0835
MD-CVAE        0.1472     0.2058     0.0976
BERT4Rec       0.1126     0.1677     0.0781
S3Rec          0.1354     0.1789     0.0867
LLM-Scratch    0.0840     0.1265     0.0583
LLM-CF         0.1319     0.1841     0.0855
LLM-FtAll      0.1335     0.1988     0.0836
LLM-FixOrd     0.1524     0.2219     0.1072
LLM-PreRec     0.1547     0.2196     0.1051
CLLM4Rec       0.1656     0.2323     0.1118
AM-Toys        Recall@20  Recall@40  NDCG@100
Multi-VAE      0.1076     0.1558     0.0781
MD-CVAE        0.1291     0.1804     0.0844
BERT4Rec       0.0853     0.1375     0.0532
S3Rec          0.1064     0.1524     0.0665
LLM-Scratch  LLM-CF  LLM-FtAll | 2311.01343#54 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 55 | 0.0665 (S3Rec NDCG@100, cont.)
AM-Toys        Recall@20  Recall@40  NDCG@100
LLM-Scratch    0.0485     0.0771     0.0362
LLM-CF         0.1027     0.1434     0.0680
LLM-FtAll      0.1162     0.1542     0.0696
LLM-FixOrd     0.1342     0.1887     0.0889
LLM-PreRec     0.1308     0.1859     0.0874
CLLM4Rec       0.1436     0.1933     0.0918
AM-Sports      Recall@20  Recall@40  NDCG@100
Multi-VAE      0.0659     0.0975     0.0446
MD-CVAE        0.0714     0.1180     0.0514
BERT4Rec       0.0521     0.0701     0.0305
S3Rec          0.0616     0.0813     0.0438
LLM-Scratch    0.0362     0.0538     0.0362
LLM-CF         0.0642     0.0966     0.0419
LLM-FtAll      0.0794     0.1002     0.0424
LLM-FixOrd     0.0901     0.1295     0.0592
LLM-PreRec     0.0839     0.1248     0.0561
CLLM4Rec       0.0926     0.1351     0.0634 | 2311.01343#55 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 56 | • LLM-FixOrd has the same structure as CLLM4Rec, but it removes the stochastic item reordering strategy for both the collaborative LLM in pretraining and the RecLLM in finetuning.
• LLM-PreRec discards finetuning and ranks the categorical probabilities from the next-item-token prediction head of the collaborative LLM in the pretraining stage to make recommendations.
4.2.2 Results on the Public Datasets. We first analyze the experimental results on the four public datasets to provide preliminary answers to RQs 1, 2, and 3. From Tables 1 and 2, we can find that the ID-based method, Multi-VAE, remains a strong baseline for collaborative filtering (CF). LLM-CF, the CF backbone of CLLM4Rec, cannot beat Multi-VAE on either the AM-Sports or the AM-Toys dataset, even if the "hard" part of the prompt triggers the reasoning ability of the pretrained LLM. However, when large textual data are available, CLLM4Rec outperforms its ID-based counterpart, MD-CVAE (which tightly couples an item content VAE with the Multi-VAE)
Table 2: Comparison between CLLM4Rec and various baselines on the Yelp dataset and the Company dataset.

Yelp           Recall@20  Recall@40  NDCG@100
Multi-VAE      0.0526     0.0842     0.0424
MD-CVAE        0.0664     0.1058     0.0497
BERT4Rec       0.0418     0.0724     0.0361
S3Rec          0.0563     0.0893     0.0485
LLM-Scratch    0.0199     0.0325     0.0159
LLM-CF         0.0541     0.0860     0.0412
LLM-FtAll      0.0653     0.0989     0.0520
LLM-FixOrd     0.0694     0.1053     0.0524
LLM-PreRec     0.0639     0.1021     0.0498
CLLM4Rec       0.0735     0.1149     0.0536

LinkedIn       Recall@10  Recall@20  NDCG@10
Two-Tower      0.1186     0.2041     0.0979
M6-Retrieval   0.1279     0.2118     0.1020
CLLM4Rec-Emb   0.1302     0.2165     0.1034
CLLM4Rec       0.1427     0.2398     0.1199
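For reference, the sketch below shows one standard way to compute the Recall@K and NDCG@K reported in Tables 1 and 2 (binary relevance over hold-out items). The exact evaluation protocol of the paper may differ in details such as the recall normalizer.

```python
import numpy as np

def recall_at_k(ranked_items: np.ndarray, holdout: set, k: int) -> float:
    # Fraction of the user's hold-out items recovered in the top-k list,
    # normalized by min(|holdout|, k) so a perfect list scores 1.0.
    hits = sum(1 for i in ranked_items[:k] if i in holdout)
    return hits / min(len(holdout), k)

def ndcg_at_k(ranked_items: np.ndarray, holdout: set, k: int) -> float:
    # Binary-relevance NDCG: each hit at rank r contributes 1 / log2(r + 2).
    dcg = sum(1.0 / np.log2(r + 2)
              for r, i in enumerate(ranked_items[:k]) if i in holdout)
    idcg = sum(1.0 / np.log2(r + 2) for r in range(min(len(holdout), k)))
    return dcg / idcg

scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2])   # model scores for 5 items
ranked = np.argsort(-scores)                    # item ids ranked by score
print(recall_at_k(ranked, {1, 2}, 2), ndcg_at_k(ranked, {1, 2}, 2))
```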
However, when large textual data are available, CLLM4Rec outperforms its ID-based counterpart, MD-CVAE (which tightly couples an item content VAE with the Multi-VAE), by a large margin. This is because MD-CVAE uses shallow bag-of-words to represent the textual features, whereas the pretrained LLMs in CLLM4Rec can provide a deeper understanding via their pretrained knowledge. The importance of pretrained knowledge is also shown by the LLM-Scratch model, which performs the worst among all included baselines. An interesting finding is that LLM-FtAll, which finetunes the whole model including the pretrained LLM backbone, performs worse than CLLM4Rec, which optimizes only the newly introduced user/item token embeddings. The reason could be that, since the weights of the pretrained LLM are fully optimized, the recommendation-specific corpus is still not enough to adapt the pretrained LLM to RS with good generalization ability. Therefore, the cost of degrading the pretrained knowledge outweighs the benefit of introducing RS-specific knowledge. We can also find that LLM-PreRec, which uses the collaborative LLM from the pretraining stage to generate recommendations, is already a strong baseline. This demonstrates the effectiveness of the soft+hard prompting strategy, which facilitates efficient and stable language modeling on recommendation-oriented corpora with heterogeneous tokens. Still, CLLM4Rec performs better than LLM-PreRec, which shows the effectiveness of recommendation-oriented finetuning in adapting the collaborative LLM for efficient recommendations.
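As an illustration of the LLM-PreRec variant discussed above, the sketch below ranks items by the next-item-token distribution of the pretraining-stage collaborative LLM. The model/tokenizer objects, the soft-token strings, and the helper name are assumptions for illustration.

```python
import torch

@torch.no_grad()
def prerec_rank(model, tokenizer, user_id: int, history: list[int],
                n_items: int, top_k: int = 20) -> list[int]:
    # LLM-PreRec: rank items by the categorical probability of the *next item
    # token* under the pretraining-stage collaborative LLM, without any
    # finetuned multinomial prediction head.
    prompt = (f"<user_{user_id}> has interacted with "
              + " ".join(f"<item_{j}>" for j in history))
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    next_logits = model(input_ids).logits[0, -1]         # next-token logits
    item_token_ids = torch.tensor(
        [tokenizer.convert_tokens_to_ids(f"<item_{j}>") for j in range(n_items)])
    item_probs = next_logits[item_token_ids].softmax(dim=-1)
    item_probs[torch.tensor(history)] = float("-inf")    # mask observed items
    return item_probs.topk(top_k).indices.tolist()
```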
4.2.3 Results on the Company Dataset. In the real-world experiments, we compare CLLM4Rec with the two-tower (TT) model utilized in the Company for job recommendations. The TT model is implemented as a two-branch multi-layer perceptron (MLP), where the input user/item embeddings include embeddings extracted from a graph neural network (GNN) learned on the user–job bipartite graph, as well as features extracted from an internal BERT model. In addition, since textual features are available for almost every user and item, we compare CLLM4Rec with the state-of-the-art LLM-based RS, M6-Retrieval [19], which takes the dimension-reduced last-layer embeddings of user/item descriptions from the M6 Transformer for contrastive recommendations. The results are summarized in Table 2.
Figure 4: Sensitivity analysis w.r.t. λ, which controls the strength of mutual regularization for CLLM4Rec. (a) AM-Beauty dataset; (b) AM-Toys dataset; (c) AM-Sports dataset; (d) Yelp dataset.
From Table 2, we can find that CLLM4Rec outperforms the shallow TT model by a large margin. However, although the inference latency of CLLM4Rec is significantly improved over existing LLM-based methods due to the introduction of recommendation-oriented finetuning, directly deploying CLLM4Rec online is still infeasible, as its inference budget remains higher than that of the TT model. Therefore, we design the CLLM4Rec-Emb baseline, which includes the user/item token embeddings Z_l,u and Z_l,v learned by CLLM4Rec (projected into 128 dimensions) as extra inputs to the TT model; it demonstrates a performance improvement over both the original TT model and the M6-Retrieval model in our offline experiments. This demonstrates the potential of CLLM4Rec in industrial applications where low latency matters.
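A minimal sketch of the CLLM4Rec-Emb idea follows: the offline-projected 128-d CLLM4Rec token embeddings are concatenated with the existing GNN/BERT features as extra tower inputs, so online latency stays at the two-tower level. Layer sizes and names are hypothetical.

```python
import torch
import torch.nn as nn

class TwoTowerWithCLLMEmb(nn.Module):
    # Two-branch MLP scoring model; the CLLM4Rec user/item token embeddings
    # (projected to 128-d offline) are simply extra input features.
    def __init__(self, user_feat_dim: int, item_feat_dim: int, cllm_dim: int = 128):
        super().__init__()
        self.user_tower = nn.Sequential(
            nn.Linear(user_feat_dim + cllm_dim, 256), nn.ReLU(), nn.Linear(256, 64))
        self.item_tower = nn.Sequential(
            nn.Linear(item_feat_dim + cllm_dim, 256), nn.ReLU(), nn.Linear(256, 64))

    def forward(self, user_feat, user_cllm, item_feat, item_cllm):
        u = self.user_tower(torch.cat([user_feat, user_cllm], dim=-1))
        v = self.item_tower(torch.cat([item_feat, item_cllm], dim=-1))
        return (u * v).sum(-1)   # dot-product relevance score

model = TwoTowerWithCLLMEmb(user_feat_dim=64, item_feat_dim=64)
score = model(torch.randn(4, 64), torch.randn(4, 128),
              torch.randn(4, 64), torch.randn(4, 128))
```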
4.3 Parameter Sensitivity Analysis
To further answer RQs 2 and 3, we vary λ in Eqs. (7), (8), and (10), which controls the strength of the mutual regularization, and investigate how it influences the performance of CLLM4Rec. From Fig. 4, we can find that, when λ is small, the mutual regularization is weak, and the content LLM cannot provide enough user/item content side information to support the collaborative LLM and the RecLLM. Therefore, the recommendation performance degenerates to a level similar to that of LLM-CF. On the other hand, when λ is too large, the MR loss in Eqs. (7), (8), and (10) dominates, which hinders CLLM4Rec from learning user/item token embeddings via language modeling and finetuning. Generally, for all four datasets, the performance of CLLM4Rec peaks at around λ = 1, which serves as a good starting point when applying the GPT-based CLLM4Rec to new datasets.
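To make the role of λ concrete, here is a minimal sketch of how a mutual-regularization term could be weighted against the language-modeling loss. The squared-distance form is a stand-in assumption, not the exact MR term derived in Eqs. (7), (8), and (10).

```python
import torch

def mutually_regularized_loss(lm_loss: torch.Tensor,
                              z_collab: torch.Tensor,
                              z_content: torch.Tensor,
                              lam: float = 1.0) -> torch.Tensor:
    # Language-modeling loss plus a mutual-regularization (MR) term that pulls
    # the collaborative and content token embeddings of the same users/items
    # together. lam -> 0: embeddings are learned independently (degenerates
    # toward LLM-CF); lam too large: the MR term dominates and hinders
    # language modeling.
    mr_loss = ((z_collab - z_content) ** 2).sum(-1).mean()
    return lm_loss + lam * mr_loss

# Sweep lambda as in Fig. 4; performance peaks at around lam = 1 there.
for lam in (0.01, 0.1, 1.0, 10.0):
    loss = mutually_regularized_loss(torch.tensor(2.3),
                                     torch.randn(8, 768), torch.randn(8, 768), lam)
```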
# 5 CONCLUSION
In this paper, we proposed CLLM4Rec, the first method that tightly couples the ID paradigm and the LLM paradigm of RS, which faithfully captures user/item semantics while fully utilizing the encoded knowledge and logical reasoning ability of pretrained LLMs simultaneously. Specifically, with mutually-regularized pretraining based on the soft+hard prompting strategy, CLLM4Rec can effectively capture user/item collaborative and content information via language modeling. Furthermore, with recommendation-oriented finetuning, the pretrained knowledge of CLLM4Rec can be fully utilized to generate recommendations efficiently. Extensive experiments demonstrate the multi-faceted superiority of CLLM4Rec over state-of-the-art baselines.
REFERENCES
[1] Dietmar Jannach, Markus Zanker, Alexander Felfernig, and Gerhard Friedrich. Recommender Systems: An Introduction. Cambridge University Press, 2010.
[2] James Bennett, Stan Lanning, et al. The Netflix prize. In KDD CUP, volume 2007, page 35, 2007.
NeurIPS, volume 20, 2007.
[5] Ledell Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston. Starspace: Embed all the things! In AAAI, volume 32, 2018.
[6] Yehuda Koren, Steffen Rendle, and Robert Bell. Advances in collaborative filtering. Recommender systems handbook, pages 91â142, 2021.
[7] Pasquale Lops, Marco De Gemmis, and Giovanni Semeraro. Content-based rec- ommender systems: State of the art and trends. Recommender systems handbook, pages 73â105, 2011. | 2311.01343#63 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 64 | [8] Yaochen Zhu, Jing Ma, Liang Wu, Qi Guo, Liangjie Hong, and Jundong Li. Path- In SIGKDD, page specific counterfactual fairness for recommender systems. 3638â3649, 2023.
[9] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
[10] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, volume 30, 2017.
[11] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018. | 2311.01343#64 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 65 | [12] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR, 21(1):5485â5551, 2020.
[13] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LlaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[14] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022.
[15] Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, and Jundong Li. Knowledge editing for large language models: A survey, 2023. | 2311.01343#65 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 66 | [16] Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Jiliang Tang, and Qing Li. Recommender systems in the era of large language models (LLMs). arXiv preprint arXiv:2307.02046, 2023.
[17] Julian McAuley and Alex Yang. Addressing complex and subjective product- related queries with customer reviews. In WWW, pages 625â635, 2016.
[18] Yaochen Zhu and Zhenzhong Chen. Variational bandwidth auto-encoder for hybrid recommender systems. IEEE Transactions on Knowledge and Data Engi- neering, 35(5):5371â5385, 2022.
[19] Zeyu Cui, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. M6-rec: Generative pretrained language models are open-ended recommender systems. arXiv preprint arXiv:2205.08084, 2022.
[20] Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5). In Proceedings of the 16th ACM Conference on Recommender Systems, pages 299â315, 2022. | 2311.01343#66 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 67 | [21] Jiaxing Qu, Yuxuan Richard Xie, and Elif Ertekin. A language-based recommen- dation system for material discovery. In ICML, 2023.
[22] Lei Li, Yongfeng Zhang, and Li Chen. Personalized prompt learning for explain- able recommendation. ACM Transactions on Information Systems, 41(4):1â26,
Conferenceâ17, July 2017, Washington, DC, USA
2023.
[23] Yunfan Gao, Tao Sheng, Youlin Xiang, Yun Xiong, Haofen Wang, and Jiawei Zhang. Chat-rec: Towards interactive and explainable llms-augmented recom- mender system. arXiv preprint arXiv:2303.14524, 2023.
[24] Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. Large language models are zero-shot rankers for recom- mender systems. arXiv preprint arXiv:2305.08845, 2023. | 2311.01343#67 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 68 | [25] Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, and Ji- Rong Wen. Recommendation as instruction following: A large language model empowered recommendation approach. arXiv preprint arXiv:2305.07001, 2023.
[26] Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, and Julian McAuley. Large language models as zero-shot conversational recommenders. arXiv preprint arXiv:2308.10053, 2023.
[27] Fan Yang, Zheng Chen, Ziyan Jiang, Eunah Cho, Xiaojiang Huang, and Yanbin Lu. Palr: Personalization aware llms for recommendation. arXiv e-prints, pages arXivâ2305, 2023.
[28] Jianchao Ji, Zelong Li, Shuyuan Xu, Wenyue Hua, Yingqiang Ge, Juntao Tan, and Yongfeng Zhang. Genrec: Large language model for generative recommendation. arXiv e-prints, pages arXivâ2307, 2023. | 2311.01343#68 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 69 | [29] Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, et al. Leveraging large language models for pre-trained recommender systems. arXiv preprint arXiv:2308.10837, 2023.
[30] Wenyue Hua, Shuyuan Xu, Yingqiang Ge, and Yongfeng Zhang. How to index item ids for recommendation foundation models. arXiv preprint arXiv:2305.06569, 2023.
[31] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter- efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021.
[32] Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. BERT: pre- training of deep bidirectional transformers for language understanding. In Proceedings of NAACL, pages 4171â4186, 2019. | 2311.01343#69 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 70 | [33] Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. Recent advances in natural language processing via large pre-trained language models: A survey. ACM Computing Surveys, 56(2):1â40, 2023.
[34] Peng Liu, Lemei Zhang, and Jon Atle Gulla. Pre-train, prompt and recommenda- tion: A comprehensive survey of language modelling paradigm adaptations in recommender systems. arXiv preprint arXiv:2302.03735, 2023.
[35] Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, et al. How can recommender systems benefit from large language models: A survey. arXiv preprint arXiv:2306.05817, 2023.
[36] Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. TallRec: An effective and efficient tuning framework to align large language model with recommendation. arXiv preprint arXiv:2305.00447, 2023. | 2311.01343#70 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 71 | [37] Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In IEEE International Conference on Data Mining, pages 263â 272, 2008.
[38] Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, and Ji-Rong Wen. S3-Rec: Self-supervised learning for sequen- tial recommendation with mutual information maximization. In CIKM, pages 1893â1902, 2020.
[39] Dawen Liang, Rahul G Krishnan, Matthew D Hoffman, and Tony Jebara. Varia- tional autoencoders for collaborative filtering. In WWW, pages 689â698, 2018. [40] Yaochen Zhu and Zhenzhong Chen. Mutually-regularized dual collaborative variational auto-encoder for recommendation systems. In WWW, pages 2379â 2387, 2022.
[41] Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. BERT4Rec: Sequential recommendation with bidirectional encoder representa- tions from transformer. In CIKM, pages 1441â1450, 2019.
Conferenceâ17, July 2017, Washington, DC, USA | 2311.01343#71 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
Table 3: Statistics of the datasets. #Feat. stands for the number of textual features (i.e., #reviews for the AM/Yelp datasets, and #user biographies + #job descriptions for the LinkedIn dataset).
Dataset     #Int.    #Users  #Items  Sparsity  #Feat.
AM-Beauty   94,148   10,553   6,086  99.85%    70,604
AM-Toys     95,420   11,268   7,309  99.88%    70,784
AM-Sports   185,718  22,686  12,301  99.93%    137,618
Yelp        292,017  28,330  18,775  99.94%    224,825
LinkedIn    90,173   22,391   1,071  99.62%    23,362
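The Sparsity column follows from 1 − #Int. / (#Users × #Items); a quick check reproduces the values in the table.

```python
datasets = {  # name: (#Int., #Users, #Items)
    "AM-Beauty": (94_148, 10_553, 6_086),
    "AM-Toys":   (95_420, 11_268, 7_309),
    "AM-Sports": (185_718, 22_686, 12_301),
    "Yelp":      (292_017, 28_330, 18_775),
    "LinkedIn":  (90_173, 22_391, 1_071),
}
for name, (n_int, n_users, n_items) in datasets.items():
    sparsity = 1.0 - n_int / (n_users * n_items)
    print(f"{name}: {sparsity:.2%}")   # e.g. AM-Beauty: 99.85%
```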
Table 4: Comparison between CLLM4Rec and various baselines with the T5 backbone on three Amazon Review datasets.
2311.01343 | 73 |

| AM-Beauty   | Recall@20 | Recall@40 | NDCG@100 |
|-------------|-----------|-----------|----------|
| Multi-VAE   | 0.1295    | 0.1720    | 0.0835   |
| MD-CVAE     | 0.1472    | 0.2058    | 0.0976   |
| BERT4Rec    | 0.1126    | 0.1677    | 0.0781   |
| S3Rec       | 0.1354    | 0.1789    | 0.0867   |
| CLLM4Rec-T5 | 0.1538    | 0.2105    | 0.1052   |
| CLLM4Rec    | 0.1656    | 0.2323    | 0.1118   |

| AM-Toys     | Recall@20 | Recall@40 | NDCG@100 |
|-------------|-----------|-----------|----------|
| Multi-VAE   | 0.1076    | 0.1558    | 0.0781   |
| MD-CVAE     | 0.1291    | 0.1804    | 0.0844   |
| BERT4Rec    | 0.0853    | 0.1375    | 0.0532   |
| S3Rec       | 0.1064    | 0.1524    | 0.0665   |
| CLLM4Rec-T5 | 0.1328    | 0.1840    | 0.0851   |
| CLLM4Rec    | 0.1436    | 0.1933    | 0.0918   |

| AM-Sports   | Recall@20 | Recall@40 | NDCG@100 |
|-------------|-----------|-----------|----------|
| Multi-VAE   | 0.0659    | 0.0975    |          |
| MD-CVAE     | 0.0714    | 0.1180    |          |
| BERT4Rec    | 0.0521    | 0.0701    |          |
| S3Rec       | 0.0616    | 0.0813    |          |
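For reference, a minimal sketch of how Recall@K and NDCG@K are typically computed against hold-out items in this setting (the helper names are ours; the paper does not provide evaluation code):

```python
import numpy as np

def recall_at_k(scores: np.ndarray, holdout: set, k: int) -> float:
    """scores: predicted scores over all items; holdout: held-out item ids of a user."""
    top_k = np.argsort(-scores)[:k]
    hits = len(set(top_k.tolist()) & holdout)
    return hits / min(k, len(holdout))  # truncated recall, as in Multi-VAE-style evaluation

def ndcg_at_k(scores: np.ndarray, holdout: set, k: int) -> float:
    top_k = np.argsort(-scores)[:k]
    dcg = sum(1.0 / np.log2(rank + 2) for rank, item in enumerate(top_k) if item in holdout)
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(min(k, len(holdout))))
    return dcg / idcg

scores = np.random.rand(6086)   # e.g., multinomial probabilities over AM-Beauty items
holdout = {12, 345, 678}        # toy hold-out interactions
print(recall_at_k(scores, holdout, 20), ndcg_at_k(scores, holdout, 100))
```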
2311.01343 | 75 | # A TECHNICAL DETAILS

# A.1 Implementation of Soft+Hard Prompting

To implement the soft+hard prompting strategy discussed in Section 3.3.2 for decoder-only LLMs such as GPT, we can generate only the "keys" and "values" for the heterogeneous tokens in the prompts $x_i^{r,p}$, $x_{ij}^{uv,p}$, and use the "query" of the last prompt token as a start to generate the homogeneous tokens of the main texts $x_i^{r,m}$, $x_{ij}^{uv,m}$ for language modeling. For encoder-decoder-based LLMs such as T5, a natural thought is to input the prompts $x_i^{r,p}$, $x_{ij}^{uv,p}$ into the encoder, and use the decoder to generate the main texts $x_i^{r,m}$, $x_{ij}^{uv,m}$.
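To make the decoder-only case concrete, here is a minimal sketch with an off-the-shelf GPT-2 from HuggingFace. We assume, per our reading of the setup, that the prompt positions are simply masked out of the language-modeling loss (label −100), so that the prompt contributes its keys/values as context while only the homogeneous main-text tokens are predicted; a real CLLM4Rec run would first add user/item ID tokens to the vocabulary, which we elide here:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "user_42 has interacted with items:"  # heterogeneous soft+hard prompt
main_text = " item_7 item_19 item_3"           # homogeneous main text

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
main_ids = tokenizer(main_text, return_tensors="pt").input_ids
input_ids = torch.cat([prompt_ids, main_ids], dim=1)

# Positions labeled -100 are ignored by the LM loss, so the loss is computed
# only over the main-text tokens, conditioned on the prompt.
labels = input_ids.clone()
labels[:, : prompt_ids.size(1)] = -100

loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()
```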
# A.2 Recommendation-Oriented Finetuning

If we denote the multinomial probability obtained from the RecLLM prediction head $f_{rec}$ as $\hat{\mathbf{r}}_i^{hold}$, and denote the stacked item
2311.01343 | 76 | collaborative token embeddings of items interacted by user $i$ as $\mathbf{Z}_i$, the rec-step objective of the recommendation-oriented finetuning (regularized with the content LLM) can be formulated as:

$$
\mathcal{L}^{MAP}_{rec\_step}\left(\mathbf{z}_i, \boldsymbol{\theta}\right) =
\underbrace{-\sum_{k} r^{hold}_{ik} \ln \hat{r}^{hold}_{ik}}_{\text{Multinomial NLL loss}}
\;\underbrace{-\frac{\lambda_u}{2}\left\|\mathbf{z}^{u}_{l,i}\right\|^2_2
-\frac{\lambda_v}{2}\left\|\mathbf{Z}_i\right\|^2_F}_{\text{Prior loss}}
\;\underbrace{-\frac{\lambda_c}{2}\left\|\mathbf{z}^{u}_{l,i}-\hat{\mathbf{z}}^{u}_{c,i}\right\|^2_2
-\frac{\lambda_c}{2}\sum_{k}\left\|\mathbf{z}^{v}_{l,i_k}-\hat{\mathbf{z}}^{v}_{c,i_k}\right\|^2_2}_{\text{MR loss with content LLM}}
+\; C_{rec}
\tag{10}
$$
2311.01343 | 77 | where NLL stands for negative log-likelihood, and $C_{rec}$ is a constant irrelevant to the optimization. From the form of the multinomial NLL loss, we can find that when finetuning the RecLLM according to Eq. (10), the last hidden state $\mathbf{h}^{rec}_{i,-1}$ output by the CLLM4Rec base model $\hat{f}_{rec}$, which can be viewed as a user latent variable summarizing the historical interactions of user $i$, is encouraged to be similar to the collaborative embeddings of all the interacted items.
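Under the notational assumptions above, a minimal PyTorch sketch of this rec-step objective (variable names and λ values are ours, not the authors' code):

```python
import torch
import torch.nn.functional as F

def rec_step_loss(h_user, item_emb, r_hold, z_u, z_hat_u, Z_v, Z_hat_v,
                  lam_u=0.1, lam_v=0.1, lam_c=0.1):
    """h_user: (d,) last hidden state of the RecLLM for user i;
    item_emb: (n_items, d) item embeddings of the prediction head;
    r_hold:   (n_items,) multi-hot hold-out interactions of user i;
    z_u / Z_v: collaborative user/item embeddings; *_hat: content-LLM targets."""
    log_probs = F.log_softmax(item_emb @ h_user, dim=-1)  # multinomial over items
    nll = -(r_hold * log_probs).sum()                     # multinomial NLL loss
    prior = 0.5 * (lam_u * z_u.pow(2).sum() + lam_v * Z_v.pow(2).sum())
    mr = 0.5 * lam_c * ((z_u - z_hat_u).pow(2).sum() + (Z_v - Z_hat_v).pow(2).sum())
    return nll + prior + mr

d, n_items, k = 64, 1000, 8
loss = rec_step_loss(torch.randn(d), torch.randn(n_items, d),
                     torch.bernoulli(torch.full((n_items,), 0.01)),
                     torch.randn(d), torch.randn(d),
                     torch.randn(k, d), torch.randn(k, d))
```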
# B EXPERIMENTS

# B.1 Statistics of the Datasets

The statistics of the datasets are summarized in Table 3.

# B.2 Experiments on T5 Backbone
2311.01343 | 78 | B.2.1 Implementation. We adopt the T5-base model⁸ as the backbone, which has 32,128 vocab tokens (the last 28 tokens are empty), where each token is associated with a 768-dimensional vocab embedding. Model training generally follows similar steps as for the model with the GPT-2 backbone described in Section 4.1.2: we first warm up the content LLM as in Eq. (5) for ten epochs. Then, we conduct the mutually-regularized finetuning as in Eqs. (7) and (8) for 100 epochs, and conduct the recommendation-oriented finetuning as in Eq. (10) for 150 epochs.
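Schematically, the recipe above is a three-stage schedule (stage names are ours; epoch counts from the text):

```python
# Three-stage training schedule mirroring the description above.
schedule = [
    ("content_llm_warmup", 10),     # Eq. (5)
    ("mutually_regularized", 100),  # Eqs. (7) and (8)
    ("rec_finetuning", 150),        # Eq. (10)
]
for stage_name, num_epochs in schedule:
    for epoch in range(num_epochs):
        ...  # run one epoch of the stage-specific objective
```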
2311.01343 | 79 | B.2.2 Results & Analysis. The experimental results are summarized in Table 4. We can find that although CLLM4Rec with the T5 backbone generally outperforms ID-based and shallow LM-based baselines, its performance is consistently worse than CLLM4Rec with the GPT-2 backbone. The reasons for the overall inferior performance of the T5 backbone can be two-fold. First, we note that the vocab embeddings in T5 are initialized with unit variance, whereas embeddings in GPT-2 are initialized with a standard deviation of 0.02. Therefore, the weights and embeddings in T5 have much larger numerical values, which leads to large update steps when errors are backpropagated from the outputs to the prompts. Therefore, the training is not as stable as with the GPT-2 backbone. In addition, in the finetuning stage of the original T5 model, the prompts are generally used to guide the macro behavior of the model, e.g., changing the model behavior from question answering to machine translation via the prompt "translate English to French". Therefore, another reason for the inferiority of the T5 backbone could be the mismatch between the original T5 prompts and the prompts intended to be used in CLLM4Rec.

⁸ https://huggingface.co/t5-base.
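To make the initialization-scale argument concrete, one can compare the default initializer scales of freshly initialized (non-pretrained) backbones in HuggingFace Transformers; a small check of ours, not an experiment from the paper:

```python
from transformers import GPT2Config, GPT2LMHeadModel, T5Config, T5ForConditionalGeneration

# Freshly initialized models expose each backbone's default embedding init.
gpt2 = GPT2LMHeadModel(GPT2Config())
t5 = T5ForConditionalGeneration(T5Config())

print(gpt2.transformer.wte.weight.std().item())  # ~0.02 (initializer_range = 0.02)
print(t5.shared.weight.std().item())             # ~1.0  (initializer_factor = 1.0)
```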
2310.20499 | 0 | # Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models

Tian Liang1,4* Zhiwei He2,4* Jen-tse Huang3,4* Wenxuan Wang3,4* Wenxiang Jiao4† Rui Wang2 Yujiu Yang1† Zhaopeng Tu4 Shuming Shi4 Xing Wang4†
1Tsinghua Shenzhen International Graduate School 2Shanghai Jiao Tong University 3The Chinese University of Hong Kong 4Tencent AI Lab
{liangt21@mails,yang.yujiu@sz}.tsinghua.edu.cn {joelwxjiao,brightxwang}@tencent.com
[Figure 1 (teaser): Player 3, the spy, describes his word as "Innovative language model that understands context and generates high-quality text." Player 2 responds: "This description does not align with the previous descriptions of Player 1 and me. Therefore, I believe Player 3's keyword is different from BERT, and he might be the spy player."]
2310.20499 | 1 | Figure 1: SpyGame, our interactive multi-agent gaming framework, provides an engaging platform to assess the linguistic intelligence and deductive reasoning skills of large language models. This illustration depicts a scene from SpyGame, where Player 3 is the spy agent with the secret word "GPT", and the other remaining players are villager agents with the assigned word "BERT". As Player 3 describes the text generation capabilities of the "GPT" model, Player 2 becomes increasingly suspicious due to the noticeable discrepancy between their respective words.

*Work was done when Tian, Zhiwei, Jen-tse, and Wenxuan were interning at Tencent AI Lab. †Wenxiang Jiao, Yujiu Yang, and Xing Wang are corresponding authors.

# Abstract
2310.20499 | 2 | The automatic evaluation of LLM-based agent intelligence is critical in developing advanced LLM-based agents. Although considerable effort has been devoted to developing human-annotated evaluation datasets, such as AlpacaEval, existing techniques are costly, time-consuming, have limited scalability, and lack adaptability. In this paper, inspired by the popular language games "Who is Spy" and "SpyFall", we propose to use the word guessing game to assess the intelligence performance of LLMs. Given a word, the LLM is asked to describe the word and determine its identity (spy or not) based on its and other players' descriptions. Ideally, an advanced agent should possess the ability to accurately describe a given word using an aggressive description while concurrently maximizing confusion in the conservative description, enhancing its participation in the game. To this end, we first develop DEEP to evaluate LLMs' expression and disguising abilities. DEEP requires the target LLM to describe the given word in aggressive and conservative modes and utilizes
2310.20499 | 3 | the SOTA GPT-4 to determine whether the descriptive sentences can accurately describe the given word. We then introduce SpyGame, an interactive multi-agent framework designed to assess LLMs' intelligence through participation in a competitive language-based board game. Incorporating multi-agent interaction, SpyGame requires the target LLM to possess linguistic skills and strategic thinking, providing a more comprehensive evaluation of LLMs' human-like cognitive abilities and adaptability in complex communication situations. The proposed evaluation framework is very easy to implement. We collected words from multiple sources, domains, and languages and used the proposed evaluation framework to conduct experiments. Extensive experiments demonstrate that the proposed DEEP and SpyGame effectively evaluate the capabilities of various LLMs, capturing their ability to adapt to novel situations and engage in strategic communication. Code is available at https://github.com/Skytliang/SpyGame.
2310.20499 | 4 | # 1 Introduction
Large language models (LLMs), like ChatGPT, GPT-4 [20], and Bard, have recently shown remarkable performance across a wide range of tasks, significantly advancing the field of artificial general intelligence [5]. Concurrently, there has been an increasing focus on developing LLM-based agents for applications in the social science [21, 22] and engineering [16, 23] domains, with the aim of addressing real-world challenges or enabling social simulation. Among the essential capabilities for these LLM-based agents, language intelligence and theory-of-mind intelligence stand out as particularly important [14].
2310.20499 | 5 | As a result, the automatic evaluation of LLM-based agent intelligence has become crucial for further advancements. The evaluation of LLMs has evolved from focusing on NLP tasks (e.g., GLUE [29], MMLU [11]) to alignment evaluation (e.g., AlpacaEval [17]) and, ultimately, to complex real-world tasks (e.g., Webshop [33], Webarena [35]). However, there are two main issues with these traditional evaluation techniques: 1) the high cost of human annotation, including time-consuming processes, limited scalability, lack of adaptability, and susceptibility to data leakage; and 2) limited reflection of intelligence. The construction of a message is usually initiated by the conception of some communicative intention [15]. In other words, an intelligent agent can not only solve knowledge-intensive tasks like a "robot", but also respond based on the context like an "assistant" [26].
2310.20499 | 6 | In contrast to conventional evaluation, we utilize game-playing for assessing the intelligence of LLMs [2]. Our approach aims to provide a more engaging and interactive means of evaluating LLM performance in various tasks and scenarios. Specifically, we propose a novel approach to assess the intelligence of LLMs through word guessing games, focusing on two distinct aspects: 1) the ability to accurately describe words for enhancing self-understanding, and 2) the ability to intentionally disguise descriptions by being deliberately conservative. These two aspects are related because they evaluate the LLM's capability to generate meaningful and contextually appropriate descriptions. The relationship between the two aspects can be seen as a balance between providing information (accurate descriptions) and maintaining intrigue (disguising through conservative descriptions). Notably, we find it interesting that LLM agents become harder to detect as they attempt to obscure their actions and motivations (in order to compete more effectively).
2310.20499 | 7 | In this paper, we propose two frameworks, DEEP and SpyGame, to evaluate the capabilities of LLMs in various aspects. DEEP, a single-agent direct evaluation method, focuses on assessing LLMs' expression and disguising abilities by requiring the target LLM to describe a given word in both aggressive and conservative modes, while utilizing the state-of-the-art GPT-4 to determine the accuracy of these descriptions. On the other hand, SpyGame is a highly interactive multi-agent framework designed to evaluate LLMs' intelligence through their participation in the language-based board game "Who is Spy". By incorporating multi-agent interactions, SpyGame requires the target LLM to exhibit expressive language skills and strategic thinking abilities, thereby providing a more comprehensive assessment of LLMs' human-like cognitive capabilities and adaptability in complex communication situations.
In summary, the contributions of this work are detailed as follows:
• We propose to use word guessing games to assess the language and theory-of-mind intelligence of LLMs. We develop a single-agent framework, DEEP, and a novel interactive multi-agent framework, SpyGame, to build a more comprehensive evaluation with a focus on their human-like cognitive abilities and adaptability in complex scenarios.
2310.20499 | 8 | • Experimental results reveal that our proposed frameworks successfully distinguish between the performance of open-source and closed-source LLMs, highlighting the strengths and weaknesses of each model in terms of context comprehension, description accuracy, and the ability to generate ambiguous representations. These findings provide valuable insights into LLM capabilities and inform the development of more advanced and intelligent language models.

• Our SpyGame framework, which supports human-in-the-loop interaction, presents a significant contribution to the development of language-based game scenarios and promotes a more comprehensive evaluation of LLMs in real-world settings. It contributes to a deeper understanding of LLMs' artificial general intelligence when interacting with human counterparts.
# 2 DEEP: Dual Expression Evaluation Program
In this section, we present a straightforward and efficient approach, DEEP, as a preliminary investigation to examine the capacity of LLMs for providing accurate word descriptions and intentional disguise descriptions. Figure 2 illustrates the description process of these two expression modes.
# 2.1 Methodology
The DEEP methodology comprises two stages: 1) prompting, in which we prompt the LLM to describe the target word using both aggressive and conservative modes; and 2) judging, where we use GPT-4 as a referee to automatically assess whether the descriptions generated by the LLM match the target word.
2310.20499 | 9 | Prompting. DEEP requires the LLM to describe a given word in two distinct modes. 1) Aggressive mode: the LLM is prompted to provide a clear and comprehensive description of the word {word_template} using the following prompt:
Please provide a focused, detailed, and accurate description of {word_template} within a limit of 100 words, so that someone can easily guess {word_template} based on the description provided.
2) Conservative mode: the LLM is instructed to provide a more ambiguous description of the word {word_template}, to exercise its disguising capability.
We employ chain-of-thought (CoT) prompting for the LLM to perform the conservative description. First, the LLM is prompted to infer possible candidate words that are conceptually similar to the target word, using the following prompt:
Imagine other words that might share a common characteristic based on {word_template}. The candidate words may possess the same or similar attributes, and are closely related to the field of {word_template}.
2310.20499 | 10 | [Figure 2 panels. Aggressive mode: "This word is a vigilante crime-fighter who protects Gotham City from various threats. His alter ego, Bruce Wayne, uses his wealth to fund his crime-fighting activities. He operates with a strict moral code, seeking justice while battling iconic villains like the Joker. His iconic symbol is a ..." — "Interesting, it's an accurate description of me." Conservative mode: "This word is a fictional character appearing in American comic books. He is a superhero with extraordinary powers and strong moral values." — "That might be me, but I'm not entirely certain." "Hmm, that could be me..." "I'm not sure. It's possible..."]
Figure 2: Illustration of DEEP. 1) Top: the LLM describes "Batman" in an aggressive mode. This precise description demonstrates the extent of its mastery of the relevant knowledge. 2) Bottom: in conservative mode, the ambiguous description of "Batman" showcases the LLM's ability to intentionally disguise the target word while still maintaining a connection to its concept.
Then, the LLM is instructed to generate a short description based on the common properties of the generated words and the target word.
Please provide a conservative description of {word_template} within a limit of 10 words. You can describe the most significant commonality of these words so that others cannot guess {word_template} based on the description provided.
2310.20499 | 11 | Through this process, the LLM generates a description that cannot be directly inferred from the target word.
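Putting the two modes together, a minimal sketch of DEEP's prompting stage (the `chat` callable is a stand-in for the target LLM's API; the prompt strings follow the paper):

```python
AGGRESSIVE = (
    "Please provide a focused, detailed, and accurate description of {word} within "
    "a limit of 100 words, so that someone can easily guess {word} based on the "
    "description provided."
)
COT_CANDIDATES = (
    "Imagine other words that might share a common characteristic based on {word}. "
    "The candidate words may possess the same or similar attributes, and are closely "
    "related to the field of {word}."
)
CONSERVATIVE = (
    "Please provide a conservative description of {word} within a limit of 10 words. "
    "You can describe the most significant commonality of these words so that others "
    "cannot guess {word} based on the description provided."
)

def describe(chat, word: str) -> dict:
    """chat(messages) -> str is a placeholder for the target LLM's chat API."""
    aggressive = chat([{"role": "user", "content": AGGRESSIVE.format(word=word)}])
    # Two-step CoT for the conservative mode: brainstorm neighbors, then describe.
    history = [{"role": "user", "content": COT_CANDIDATES.format(word=word)}]
    history.append({"role": "assistant", "content": chat(history)})
    history.append({"role": "user", "content": CONSERVATIVE.format(word=word)})
    conservative = chat(history)
    return {"aggressive": aggressive, "conservative": conservative}
```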
Judging. LLMs have demonstrated significant capabilities in automatically assessing the quality of generated text [13, 10]. Consequently, we employ GPT-4 to evaluate the degree of correspondence between the generated descriptions and the words (the target word and the pre-defined distractor words) with the following prompt:
You can only reply to numbers from 1 to 5 in the following statements. Please evaluate the extent to which the description in this sentence matches the word. 1 denotes "very inaccurate" and 5 denotes "very accurate".
Evaluation Metrics. Target words are gathered from various sources, domains, and languages. To assess the overall performance of LLMs, we utilize two metrics: the average score on target words and the average score on distractor words.
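A minimal sketch of the judging-and-scoring loop under our assumptions (the `judge` callable stands in for a GPT-4 API call; appending the word/description pair to the referee prompt and the score parsing are our additions):

```python
import re
from statistics import mean

JUDGE_PROMPT = (
    "You can only reply to numbers from 1 to 5 in the following statements. "
    "Please evaluate the extent to which the description in this sentence matches "
    "the word. 1 denotes 'very inaccurate' and 5 denotes 'very accurate'.\n"
    "Word: {word}\nDescription: {description}"
)

def score(judge, word: str, description: str) -> int:
    reply = judge(JUDGE_PROMPT.format(word=word, description=description))
    match = re.search(r"[1-5]", reply)
    return int(match.group()) if match else 1  # fall back to lowest on parse failure

def deep_metrics(judge, description: str, target: str, distractors: list):
    # The two DEEP metrics: average score on the target vs. on distractor words.
    target_score = score(judge, target, description)
    distractor_score = mean(score(judge, w, description) for w in distractors)
    return target_score, distractor_score
```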
2310.20499 | 12 | # 2.2 Experiment
In this study, we assess four open-source and two closed-source LLMs. The open-source models include Baichuan-7B³, ChatGLM2-6B [9], Vicuna-7B-v1.5 [6], and Llama-2-7B-chat-hf [28]. The closed-source LLMs are GPT-3.5 [4], which covers Text-Davinci-002, Text-Davinci-003, and GPT-3.5-Turbo, and GPT-4. We collect a corpus of 40 target words covering both Chinese and English and spanning a diverse array of fields, including social and scientific domains. We sample from the models via greedy decoding.
| Model | Aggressive Target↑ | Aggressive Distractor↓ | Conservative Target↑ | Conservative Distractor↑ |
|---|---|---|---|---|
| *Open Source Models* | | | | |
| Baichuan-7B | 4.08 | 1.35 | 3.27 | 1.44 |
| ChatGLM2-6B | 4.49 | 1.45 | 3.89 | 2.07 |
| Vicuna-7B-v1.5 | 4.78 | 1.31 | 3.81 | 2.35 |
| Llama-2-7B-chat-hf | 4.78 | 1.29 | 3.89 | 2.15 |
| *Closed Source Models* | | | | |
| Text-Davinci-002 | 5.00 | 1.28 | 4.27 | 2.49 |
| Text-Davinci-003 | 5.00 | 1.38 | 3.68 | 2.50 |
| GPT-3.5-Turbo | 5.00 | 1.32 | 4.76 | 2.68 |
| GPT-4 | 5.00 | 1.22 | 4.38 | 3.06 |
| *Human Evaluation Scores* | | | | |
| Vicuna-7B-v1.5 | 4.79 | 2.15 | 3.83 | 2.15 |
| GPT-3.5-Turbo | 4.82 | 2.14 | 3.46 | 2.44 |
| GPT-4 | 4.87 | 2.08 | 2.85 | 2.83 |

Table 1: The average scores on target words and the corresponding distractor words.
# 2.3 Result
Table 1 lists the experimental results, revealing that: 1) the closed-source GPT-4 and GPT-3.5 LLMs are significantly better than the open-source models, and 2) as expected, GPT-4 achieves the best performance in both aggressive and conservative modes. Our observations are consistent with previous findings in [5] and [24].
The advanced LLM, GPT-4, achieves a higher score of 5.00 on target words and a lower score of 1.22 on distractor words under aggressive-mode prompting. This suggests that GPT-4 comprehends the concepts associated with the target words and can describe them accurately. Under conservative-mode prompting, GPT-4 obtains a lower score of 4.38 on target words and a higher score of 3.06 on distractor words, indicating its ability to infer possible candidate words and its capacity to create ambiguous descriptions as a form of disguise.
Furthermore, to address concerns regarding potential bias in using GPT-4 as an evaluation tool, we conduct a human evaluation to score the performance of various LLMs in the word guessing game. The average scores of the annotators are shown in the last block of Table 1, and we include the scoring details of each annotator in Table 8 in the Appendix. The comparison validates the effectiveness of the proposed DEEP framework and confirms that the assessment results are consistent with human judgments.
# 3 SpyGame: An Interactive Multi-Agent Framework
In this section, we first introduce the competitive language board game "Who is Spy", a multi-player word guessing game. Next, we describe the proposed SpyGame, an interactive multi-agent framework designed to assess the intelligence of LLMs. Finally, we present empirical results from our experiments.
# 3.1 Who is Spy
"Who is Spy" is a strategic word game made by Happy Camp⁴ in 2012. In this game, N players are divided into two distinct teams: the spy team with the minority of M players and the villager team with the remaining (N − M) players. Two conceptually similar words, e.g., "BERT" and "GPT", are distributed to the players. Players cannot directly identify each other, i.e., whether they are spies or not, as they do not know the specific keywords held by others.
³ https://github.com/baichuan-inc/Baichuan-7B/
⁴ https://en.wikipedia.org/wiki/Happy_Camp_(TV_series)
**Game flow** The game consists of two stages in each round: speaking and voting. In the speaking phase, players describe their keyword without revealing any characters in it or deviating from it, and each player's description must be unique and non-repetitive. In the voting phase, players guess the keywords of the other players based on the descriptions given in the speaking phase and infer the identities of all players, including themselves. Using the inferred information, they vote for the player they suspect to be the spy, and the player with the most votes is eliminated. The game continues until only the members of one team are left.
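A minimal sketch of this two-phase loop, assuming a `Player` interface with `name`, `team`, `describe()`, and `vote()` and breaking vote ties arbitrarily:

```python
from collections import Counter

def play_round(players):
    """One round of 'Who is Spy': a speaking phase followed by a voting phase."""
    statements = {p.name: p.describe() for p in players}      # speaking phase
    ballots = Counter(p.vote(statements) for p in players)    # voting phase
    eliminated, _ = ballots.most_common(1)[0]                 # most-voted player is out
    return [p for p in players if p.name != eliminated]

def play_game(players):
    """Play until only the members of one team are left; return the winning team."""
    while len({p.team for p in players}) > 1:
        players = play_round(players)
    return players[0].team
```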
# 3.2 Methodology
Motivated by the preliminary study and "Who is Spy", we propose the interactive multi-agent framework SpyGame to evaluate the intelligence of LLMs. The SpyGame framework comprises four primary components: keyword set, host and guest agents, agent action, and victory conditions.
**Keyword Set** To ensure the validity and fairness of the evaluation, we collect multiple keyword pairs, e.g., "BERT" and "GPT", from different sources. These keyword pairs cover various languages, topics, and domains, allowing us to evaluate LLM performance in diverse scenarios.
**Host and Guest Agents** SpyGame utilizes several host agents (GPT-3.5-Turbo in this work) and one guest agent to participate in the game, with the guest agent assigned the role of the spy. As a participant, the guest agent remains unaware of its role as the spy, since it is not informed beforehand.
**Agent Action** Agent actions refer to the interactions among LLM-based agents, conveyed through the utterance responses generated by the LLMs. SpyGame has four distinct categories of agent actions: word guessing, speaking, reasoning, and voting. These categories facilitate effective communication and decision-making among agents.
• **Word Guessing** The agent attempts to guess the other keyword based on the information gathered from the other agents' descriptions. This requires the LLM-based agent to have a strong understanding of the context.
• **Speaking** The agent speaks based on the assigned keyword. If the agent believes it is the spy, it should describe the keyword ambiguously to hinder other players from inferring its spy identity. Otherwise, it should strategically remind its teammates of its villager identity.
• **Reasoning** In real-world game playing, human participants infer the identities of their counterparts by scrutinizing verbal and non-verbal cues, such as facial expressions and speaking tempo. Within the SpyGame framework, each LLM-based agent infers the keywords and identities of the other agents based on their utterances, which demands a high reasoning ability from the guest agent.
• **Voting** Agents cast their votes for the agent they think is most likely to be the spy, and the agent who receives the highest number of votes is eliminated. SpyGame's voting mechanism is carried out through the LLM-based agents' responses.
**Victory Conditions**
• **Spy Victory** The guest agent, acting as the spy, successfully blends in with the host agents by generating relevant yet conservative descriptions to avoid suspicion. The spy wins if it has not been voted out by the time only two participants remain in the game.
• **Villager Victory** The host agents identify the spy by analyzing its responses and recognizing inconsistencies with their given keyword. The villagers win if they vote out the spy by a majority vote.
Due to space limits, we present Algorithm 1 in the Appendix to illustrate the detailed process of SpyGame; a condensed sketch of one round is given below.
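The sketch compresses one SpyGame round into the four agent actions; the agent methods and the randomized speaking order are assumed interfaces written in the spirit of Algorithm 1, not an exact reproduction of it.

```python
import random
from collections import Counter

def spygame_round(agents, history):
    """One SpyGame round: guess, speak, reason, then vote one player out."""
    for agent in random.sample(agents, len(agents)):  # randomized speaking order
        agent.guess_keyword(history)          # word guessing: infer the other keyword
        history.append(agent.speak(history))  # speaking: describe own keyword
    for agent in agents:
        agent.reason(history)                 # reasoning: infer keywords and identities
    # voting: each agent votes for the agent it suspects to be the spy
    ballots = Counter(agent.vote(history) for agent in agents)
    eliminated = ballots.most_common(1)[0][0]
    return [a for a in agents if a is not eliminated]
```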
# 3.3 Model Bias
Recent studies [25, 34] have shown that LLMs exhibit an inherent selection bias in multiple-choice questions: their preferences can be influenced by the ID symbols associated with the options or by the content of the prompts. Similarly, we observe bias issues in SpyGame and identify three main ones, i.e., name bias, speaking order bias, and option order bias. To isolate the effect of variable information from the speaking phase, we test a configuration in which all agents are prompted to output only a sequence of "dots" (...). The key idea is that an LLM-based agent's bias towards certain factors can be estimated from such a content-free output.

| | Name 1 | Name 2 | Name 3 | Name 4 |
|---|---|---|---|---|
| Method 1 | Player 1 | Player 2 | Player 3 | Player 4 |
| Method 2 | Aaron One | Barbara Two | Charlie Three | David Four |
| Method 3 | Jack | Mary | Alice | Tom |

Table 2: Examples for our three conventional naming methods.

| Bias Type | Position 1 | Position 2 | Position 3 | Position 4 |
|---|---|---|---|---|
| Speaking | 27.60 | | | |
| Option | 43.40 | 6.60 | | - |

Table 3: The probability (%) of being voted regarding speaking and option order bias ("-" indicates that there is no fourth voting option).
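A sketch of this content-free probe, which produces vote distributions of the kind summarized in Table 3; the `run_voting_round` helper standing in for the framework's voting phase is an assumed interface.

```python
from collections import Counter

def positional_bias(run_voting_round, n_players=4, trials=500):
    """Estimate voting bias when every description is the content-free '...'."""
    votes = Counter()
    for _ in range(trials):
        statements = {f"Player {i + 1}": "..." for i in range(n_players)}
        votes[run_voting_round(statements)] += 1   # returns the voted-out player
    return {player: 100.0 * n / trials for player, n in votes.items()}  # percentages
```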
**Name Bias** LLM-based agents tend to vote for players with specific names. We design three different naming methods to evaluate the impact of name bias in SpyGame; Table 2 shows the three conventional naming methods. As illustrated in Figure 3, LLM-based agents tend to vote for the names in positions 3 and 4 in Method 2 ("Charlie Three" and "David Four"), while showing a preference for the names in positions 1 and 4 in Method 3 ("Jack" and "Tom"). Although there are slight fluctuations in Method 1, its overall variance is the smallest among the three methods. Therefore, we select naming Method 1 for our main experiments.
Figure 3: The suspicion probability of three naming methods.

**Speaking Order Bias** LLM-based agents exhibit a tendency to vote for players who speak earlier in the conversation, revealing a bias towards those who first share their thoughts or opinions. The probability of being voted by speaking position is shown in Table 3. Although every speaker occupies each position with equal frequency in our permuted, content-free experimental setup, we observe that agents prefer to vote for players in earlier positions. To mitigate this issue, we randomize the speaking order in SpyGame to ensure that agents consider all players' responses equally, regardless of their speaking position.
**Option Order Bias** LLM-based agents tend to vote for specific options. In the voting phase, we query the agents about their suspicions and offer voting options. For instance, when inquiring about Player 1's voting preference, we use the following prompt:
Who do you suspect is the spy player? Player 1, it's your turn to make a choice from ["Player 2", "Player 3", "Player 4"] based on your previous thoughts.
As shown in Table 3, the probability of being voted varies greatly depending on the position of an option in the given array. In our experiments, we find that the first option (Position 1) has a significantly higher probability (43.40%) of being chosen by the agents, while the second option (Position 2) holds a much lower probability (6.60%). In this experiment, each agent can only vote for the other three players, so there is no fourth position (indicated as "-"). As with the speaking order bias, we randomize the option order in SpyGame to mitigate this issue.
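One way to implement this mitigation is to shuffle the candidate list before embedding it in the voting prompt, as in the sketch below; the prompt text follows the example above, and only the shuffling is our addition.

```python
import random

def voting_prompt(voter, candidates):
    """Build the voting prompt with a randomized option order."""
    options = random.sample(candidates, len(candidates))  # shuffled copy
    return (f"Who do you suspect is the spy player? {voter}, it's your turn "
            f"to make a choice from {options} based on your previous thoughts.")
```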
In summary, addressing these biases is crucial for ensuring a fair and accurate evaluation of LLM-based agents' intelligence in the SpyGame framework. By randomizing the speaking and option orders and using a diverse set of names, we can effectively mitigate these biases and improve the overall fairness and validity of the evaluation process.
# 3.4 Experiment
**Setup** We establish a four-player setting for SpyGame in which the three host agents are consistently designated as GPT-3.5-Turbo LLMs; we then assess different LLMs by assigning them the role of the spy. For the keyword set, we gather 50 pairs (50 × 2 = 100) of keywords. For each LLM under evaluation, we conduct 100 experiments, one for each keyword allocated to the LLM.
**Evaluation Metrics** We define three metrics to evaluate the performance of the guest agent LLMs: 1) Win, the average win rate of the guest agent over 100 games; 2) Round, the average number of rounds the guest agent survives; and 3) Voted, the average number of votes the guest agent receives per round.
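Assuming each game log records the spy's win flag, the rounds it survived, and the votes it received per round, the three metrics reduce to simple averages over the logs:

```python
from statistics import mean

def spygame_metrics(logs):
    """logs: list of dicts like {'win': bool, 'rounds': int, 'votes': [1, 2, ...]}."""
    win = mean(g["win"] for g in logs)                 # Win: average win rate
    rounds = mean(g["rounds"] for g in logs)           # Round: average rounds survived
    voted = mean(v for g in logs for v in g["votes"])  # Voted: average votes per round
    return win, rounds, voted
```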
# 3.5 Result
| Spy | Win↑ | Round↑ | Voted↓ |
|---|---|---|---|
| Text-Davinci-002 | 0.16 | 1.99 | 1.49 |
| Text-Davinci-003 | 0.18 | 2.03 | 1.40 |
| GPT-3.5-Turbo | 0.21 | 2.04 | 1.47 |
| GPT-4 | 0.33 | 2.18 | 1.31 |

Table 4: The performance of different LLM-based guest agents in SpyGame.
Table 4 presents the results of the SpyGame experiments. GPT-4 outperforms the other models on all three metrics, indicating its superior ability to deceive the host agents and avoid suspicion as the spy. Meanwhile, the performance of the Text-Davinci series models is consistent with the single-round DEEP results shown in Table 1: these models are less effective at generating relevant yet ambiguous responses to conceal their spy identity. Our experiment showcases the potential of SpyGame as a framework for evaluating LLM-based agents' intelligence and reasoning capabilities in a competitive and interactive setting.
# 4 Analysis
# 4.1 Ablation Study
| Spy | Win↑ | Round↑ | Voted↓ |
|---|---|---|---|
| GPT-4 | 0.33 | 2.18 | 1.31 |
| w/o Word Guessing | 0.26 | 2.12 | 1.34 |
| w/o Reasoning | 0.21 | 2.08 | 1.40 |

Table 5: Ablation study on the impact of the word guessing and reasoning actions in SpyGame for the guest agent GPT-4.
In the ablation study, we analyze the impact of the word guessing and reasoning actions described in Section 3.2; the results are shown in Table 5. The performance of the guest agent GPT-4 without word guessing drops in terms of Win (from 0.33 to 0.26) and Round (from 2.18 to 2.12): without the word guessing action, the guest agent is unaware of the other players' keyword and is more likely to speak aggressively, which can reveal its spy identity. Similarly, when the reasoning action is removed, the overall performance declines significantly on all three metrics. Reasoning reflects the agent's ability to infer the other players' identities and helps it make better decisions in the next round.
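Under the same assumed agent interface as the earlier round sketch, these ablations amount to switching individual actions off in the round loop:

```python
from collections import Counter

def ablated_round(agents, history, guessing=True, reasoning=True):
    """One SpyGame round with the word guessing and reasoning actions switchable."""
    for agent in agents:
        if guessing:
            agent.guess_keyword(history)  # disabled in the 'w/o Word Guessing' ablation
        history.append(agent.speak(history))
        if reasoning:
            agent.reason(history)         # disabled in the 'w/o Reasoning' ablation
    ballots = Counter(agent.vote(history) for agent in agents)
    eliminated = ballots.most_common(1)[0][0]
    return [a for a in agents if a is not eliminated]
```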
# 4.2 Robustness
To evaluate the robustness of SpyGame, we take GPT-4 as the guest agent and conduct two additional groups of experiments under the same experimental settings (i.e., three GPT-3.5-Turbo host agents). Considering the bias issues discussed in Section 3.3, we use a different random seed in each series to ensure that the order of agent responses changes.
As shown in Table 6, SpyGame achieves stable performance in both the victory rate and the number of survival rounds. Although there is some variance in the more fine-grained Voted metric, all runs of GPT-4 consistently outperform the other LLMs (cf. Table 4). This demonstrates that SpyGame provides reliable results across different models.
| Spy | Win↑ | Round↑ | Voted↓ |
|---|---|---|---|
| GPT-4 | 0.33 | 2.18 | 1.31 |
| GPT-4 / seed-1 | 0.34 | 2.19 | 1.22 |
| GPT-4 / seed-2 | 0.32 | 2.18 | 1.28 |

Table 6: Robustness experiments of SpyGame with GPT-4 under different random seeds.
# 4.3 Theory of Mind
Spy agents must accurately infer their spy role and win the game by concealing their true intentions, which is challenging even for human players. This process hinges on one crucial ability that we are particularly interested in: the reasoning required to deduce the identities of all participants. As pointed out by [5], this reasoning ability has a more precise and clear definition in psychology, known as the Theory of Mind (ToM).
Cognitive ToM is divided into first-order ToM, which involves reflecting on someone else's mental states, and second-order ToM, which involves reflecting on someone's perception of another person's mental state [7]. In this context, we define first-order ToM as understanding another player's thoughts (e.g., what is Player 1's keyword?) and second-order ToM as inferring what one player thinks about another player's thoughts (e.g., what identity does Player 1 guess for Player 2?).
**Setup** We conduct the ToM analysis using the game history logs from the main experiments (Section 3.4). Specifically, we analyze the history memory prior to the first round of voting, since at that point all players have already given their first description and no player has been eliminated.
**First-Order ToM** For the first order, we probe the guest agent's inference regarding the keywords and identities of the other players with the following prompt:
It is your turn to guess the keywords and identities of all players, including yourself. You must identify only one spy player.
Based on the reasoning strategy, we define the inference of others' keywords as the 1-word metric and the inference of others' identities as the 1-identity metric. In addition, we also prompt the guest agent to infer its own identity, referred to as self-identity.
**Second-Order ToM** For the second-order ToM, we prompt the guest agent with the following instructions:
Based on your description, what do you think other players will guess your keyword and identity to be? Please put yourself in the shoes of other players and guess your own keyword and identity.
We use the first-order inferences of the other host agents as ground truth, aiming to evaluate the target LLM's ability to accurately infer the thoughts of other agents.
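A sketch of how the second-order probe can be scored against this ground truth; the structured `ask` method on the agents is a simplifying assumption.

```python
from statistics import mean

FIRST_ORDER = ("It is your turn to guess the keywords and identities of all "
               "players, including yourself. You must identify only one spy player.")
SECOND_ORDER = ("Based on your description, what do you think other players will "
                "guess your keyword and identity to be? Please put yourself in the "
                "shoes of other players and guess your own keyword and identity.")

def second_order_accuracy(guest, hosts, history):
    """Compare the guest's second-order guesses with the hosts' first-order inferences."""
    predicted = guest.ask(SECOND_ORDER, history)   # e.g. {'word': ..., 'identity': ...}
    truth = [host.ask(FIRST_ORDER, history) for host in hosts]
    word_acc = mean(t["word"] == predicted["word"] for t in truth)              # 2-word
    identity_acc = mean(t["identity"] == predicted["identity"] for t in truth)  # 2-identity
    return word_acc, identity_acc
```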
| Method | Text-Davinci-002 | Text-Davinci-003 | GPT-3.5-Turbo | GPT-4 |
|---|---|---|---|---|
| Self-Identity | 0.20 | 0.14 | 0.23 | 0.17 |
| 1-Word | 0.22 | 0.25 | 0.17 | 0.38 |
| 1-Identity | 0.72 | 0.72 | 0.77 | 0.72 |
| 2-Word | 0.27 | 0.35 | 0.37 | 0.35 |
| 2-Identity | 0.47 | 0.61 | 0.59 | 0.67 |

Table 7: Theory of Mind performance of guest agents.

As shown in Table 7, the performance of different models varies across the ToM metrics. For self-identity, GPT-3.5-Turbo performs best with a score of 0.23. In terms of first-order ToM,
GPT-4 achieves the highest 1-word score at 0.38, while GPT-3.5-Turbo leads in 1-identity with a score of 0.77. For second-order ToM, GPT-4 also performs well on both the 2-word and 2-identity metrics, with scores of 0.35 and 0.67. These results indicate that LLM-based agents have varying levels of success in understanding and attributing mental states to themselves and others.
# 5 Related Work
**Evaluation of LLMs** The evaluation of LLMs has become an essential area of research, covering three primary categories. First, NLP tasks involve diverse applications aimed at understanding and generating textual data; prominent benchmarks in this category include GLUE [29], SuperGLUE [30], and MMLU [11]. Second, alignment evaluation assesses the helpfulness and harmlessness of LLM-generated text [1], with examples such as instruction-following assessments like AlpacaEval [17]. Third, complex real-world tasks, as exemplified by Webshop [33], AgentBench [19], and Webarena [35], test LLMs' ability to handle intricate and practical scenarios. As discussed in Section 1, creating these human-annotated benchmarks can be time-consuming and costly, as it requires domain expertise and extensive manual labor. More critically, this category of methods is plagued by data leakage issues [27].
2310.20499 | 29 | LLM-based Agent More recently, the LLM-based agent has drawn significant attention with the rapid development of LLMs. In the field of NLP, communicative agents that leverage the power of LLMs to generate coherent responses and engage in multi-turn conversations, simulating human-like communication patterns, have been proposed to improve the reasoning and factuality in natural language generation [8, 18]. The communicative agents can also be applied across a wide range of real-world applications, including software development [23, 12], social simulation [21, 22] and robot assistance [3, 31].
**Game Playing with LLMs.** Several recent studies have attempted to incorporate LLMs into games, e.g., GameEval [24] and Werewolf [32]. These efforts aim to explore the potential of LLMs in game settings, examining their adaptability, strategic thinking, and ability to engage in complex interactions with other players. The core differences between our work and GameEval are two-fold: 1) the objective of our work is to evaluate the expression and disguising abilities of LLMs, and 2) we observe biases in the multi-agent interaction framework and propose a more comprehensive evaluation framework to address the issue.
# 6 Conclusion
In this paper, we propose employing the word guessing game to automatically assess LLM-based agent intelligence. To this end, we develop a single-agent assessment method, DEEP, and an interactive multi-agent framework, SpyGame, both of which can be easily migrated to various tasks, domains, and languages. DEEP requires the target LLM to describe a given word in aggressive and conservative modes and utilizes the state-of-the-art GPT-4 to determine whether the descriptive sentences accurately characterize the given word. Empirical results and human evaluation demonstrate that DEEP can effectively evaluate the intelligence of LLMs. SpyGame leverages agent competition to explore LLMs' expressive language abilities and their theory-of-mind intelligence in intricate communication contexts. We identify three primary bias issues in multi-agent gameplay experiments and propose a simple and effective strategy for mitigating these biases. Extensive experiments and analysis demonstrate that the proposed SpyGame can effectively evaluate the capabilities of various LLMs in multi-agent interaction, capturing their ability to adapt to novel situations and engage in strategic communication.
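To make the DEEP protocol concrete, the following is a minimal sketch of its evaluation loop, assuming access to both the evaluated model and a judge model. The prompt wording, the `query_llm` helper, and the 1-5 judge scale are illustrative placeholders, not the paper's exact prompts or implementation.

```python
# A minimal sketch of the DEEP evaluation loop: the target LLM describes a
# word in two modes, and a judge rates how identifiable the word is from
# each description. All prompt strings here are illustrative placeholders.

def query_llm(model: str, prompt: str) -> str:
    """Placeholder for a chat-completion call to the named model."""
    raise NotImplementedError

def deep_evaluate(target_model: str, judge_model: str, word: str) -> dict:
    # 1) The target LLM describes the word in aggressive and conservative modes.
    style = {
        "aggressive": "as accurately and specifically as possible",
        "conservative": "as vaguely as possible, so that the word is hard to guess",
    }
    descriptions = {
        mode: query_llm(
            target_model,
            f"Describe the word '{word}' {hint}, without using the word itself.",
        )
        for mode, hint in style.items()
    }

    # 2) A strong judge (GPT-4 in the paper) rates each description. Good
    #    expression scores high in aggressive mode; good disguising scores
    #    low in conservative mode.
    scores = {}
    for mode, text in descriptions.items():
        verdict = query_llm(
            judge_model,
            f"On a scale of 1-5, how accurately does the following description "
            f"identify the word '{word}'? Description: {text}\n"
            f"Answer with a single number.",
        )
        scores[mode] = float(verdict.strip())
    return scores
```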
# References
[1] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv, 2022.

[2] Matthew Berland and Victor R. Lee. Collaborative strategic board games as a site for distributed computational thinking. Int. J. Game Based Learn., 2011.
[3] Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, et al. Do as I can, not as I say: Grounding language in robotic affordances. In Conference on Robot Learning. PMLR, 2023.
[4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33, 2020.
[5] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv, 2023.

[6] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, March 2023.
[7] GA Doody, M Götz, EC Johnstone, CD Frith, and DG Cunningham Owens. Theory of mind and psychoses. Psychological medicine, 28(2), 1998.
[8] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv, 2023.
[9] Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022.

[10] Patrick Fernandes, Daniel Deutsch, Mara Finkelstein, Parker Riley, André FT Martins, Graham Neubig, Ankush Garg, Jonathan H Clark, Markus Freitag, and Orhan Firat. The devil is in the errors: Leveraging large language models for fine-grained machine translation evaluation. arXiv, 2023.
[11] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv, 2020.
[12] Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. MetaGPT: Meta programming for multi-agent collaborative framework. arXiv, 2023.
[13] Tom Kocmi and Christian Federmann. Large language models are state-of-the-art evaluators of translation quality. arXiv, 2023.
[14] Michal Kosinski. Theory of mind might have spontaneously emerged in large language models, 2023.
[15] Willem J. M. Levelt. Speaking: From intention to articulation, 1989.
[16] Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: Communicative agents for "mind" exploration of large scale language model society. arXiv, 2023.
[17] Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. AlpacaEval: An automatic evaluator of instruction-following models, 2023.
[18] Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. Encouraging divergent thinking in large language models through multi-agent debate. arXiv, 2023.
[19] Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. AgentBench: Evaluating LLMs as agents. arXiv, 2023.
[20] OpenAI. GPT-4 technical report. arXiv, 2023.
[21] Joon Sung Park, Lindsay Popowski, Carrie Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Social simulacra: Creating populated prototypes for social computing systems. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, 2022.
[22] Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv, 2023.

[23] Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Communicative agents for software development. arXiv, 2023.
[24] Dan Qiao, Chenfei Wu, Yaobo Liang, Juntao Li, and Nan Duan. GameEval: Evaluating LLMs on conversational games. arXiv, 2023.
[25] Joshua Robinson, Christopher Michael Rytting, and David Wingate. Leveraging large language models for multiple choice question answering, 2023.
[26] L. S. Vygotsky. Mind in society: The development of higher psychological processes, 1978.
[27] Rylan Schaeffer. Pretraining on the test set is all you need. arXiv, 2023.
[28] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv, 2023.

[29] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations, 2018.
[30] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32, 2019.
[31] Jimmy Wu, Rika Antonova, Adam Kan, Marion Lepert, Andy Zeng, Shuran Song, Jeannette Bohg, Szymon Rusinkiewicz, and Thomas Funkhouser. TidyBot: Personalized robot assistance with large language models. arXiv, 2023.
[32] Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, and Yang Liu. Exploring large language models for communication games: An empirical study on werewolf. arXiv, 2023.
[33] Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. WebShop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35, 2022.

[34] Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models, 2021.
[35] Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, and Graham Neubig. WebArena: A realistic web environment for building autonomous agents, 2023.
# 7 Appendix
| Annotator | Model | Aggressive Target ↑ | Aggressive Distractor ↓ | Conservative Target ↓ | Conservative Distractor ↑ |
|---|---|---|---|---|---|
| Human Annotator 1 | Vicuna-7B-v1.5 | 4.92 | 2.11 | 3.24 | 1.89 |
| Human Annotator 1 | GPT-3.5-Turbo | 4.95 | 2.11 | 3.08 | 2.14 |
| Human Annotator 1 | GPT-4 | 5.00 | 2.08 | 2.08 | 2.38 |
| Human Annotator 2 | Vicuna-7B-v1.5 | 4.65 | 2.19 | 4.41 | 2.41 |
| Human Annotator 2 | GPT-3.5-Turbo | 4.68 | 2.16 | 3.84 | 2.73 |
| Human Annotator 2 | GPT-4 | 4.73 | 2.08 | 3.62 | 3.27 |

Table 8: Human evaluation scores on target words and the corresponding distractor words. ↑ marks columns where higher is better, ↓ where lower is better.
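The two annotators can be aggregated column-wise; for instance, GPT-4's conservative-mode target score averages (2.08 + 3.62) / 2 = 2.85, the lowest among the three models, consistent with its stronger disguising ability. The short sketch below illustrates this aggregation; the `table8` dict simply transcribes the table above, while the aggregation code itself is not from the paper.

```python
# A small aggregation sketch (not from the paper): average the two human
# annotators' Table 8 scores per model and per column.
from statistics import mean

# (annotator, model) -> (aggressive target, aggressive distractor,
#                        conservative target, conservative distractor)
table8 = {
    (1, "Vicuna-7B-v1.5"): (4.92, 2.11, 3.24, 1.89),
    (1, "GPT-3.5-Turbo"):  (4.95, 2.11, 3.08, 2.14),
    (1, "GPT-4"):          (5.00, 2.08, 2.08, 2.38),
    (2, "Vicuna-7B-v1.5"): (4.65, 2.19, 4.41, 2.41),
    (2, "GPT-3.5-Turbo"):  (4.68, 2.16, 3.84, 2.73),
    (2, "GPT-4"):          (4.73, 2.08, 3.62, 3.27),
}

for model in ("Vicuna-7B-v1.5", "GPT-3.5-Turbo", "GPT-4"):
    rows = [scores for (_, m), scores in table8.items() if m == model]
    # zip(*rows) pairs up each column across the two annotators.
    averages = [round(mean(col), 2) for col in zip(*rows)]
    print(model, averages)
```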