doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2311.01964 | 33 | # Suggestions for benchmark maintainers:
• Provide details of the data sources used to construct the benchmark, and conduct a contamination analysis of the current dataset against mainstream pre-training corpora (as many as possible). The benchmark should explicitly alert to possible contamination risks for commonly used pre-training datasets.
• Each submission is suggested to be accompanied by a specific contamination analysis report from the result provider, which can perform semantic relevance checking (e.g., overlap statistics, as sketched below) between pre-training data and evaluation data (both training and test data).
• Provide a diverse set of prompts for testing. The final evaluation results should be averaged over these multiple runs. This helps reduce the sensitivity to specific prompts and enhances the reliability of the model results.
# 5 Conclusion
In this paper, we conducted empirical studies to investigate the potential risk and impact of benchmark leakage on LLM evaluation. We found that data leakage can largely boost the benchmark results of LLMs (even small models), making the evaluation unfair and untrustworthy. These findings suggest that such attempts should be strictly avoided when fairly assessing model performance on evaluation benchmarks. | 2311.01964#33 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models (LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Despite that a number of high-quality benchmarks have been released, the
concerns about the appropriate use of these benchmarks and the fair comparison
of different models are increasingly growing. Considering these concerns, in
this paper, we discuss the potential risk and impact of inappropriately using
evaluation benchmarks and misleadingly interpreting the evaluation results.
Specifically, we focus on a special issue that would lead to inappropriate
evaluation, i.e., benchmark leakage, referring to the case where data related to
evaluation sets is occasionally used for model training. This phenomenon now
becomes more common since pre-training data is often prepared ahead of model
test. We conduct extensive experiments to study the effect of benchmark
leakage, and find that it can dramatically boost the evaluation results, which
would finally lead to an unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | [
{
"id": "2310.18018"
},
{
"id": "2303.12767"
},
{
"id": "2310.16789"
},
{
"id": "2308.08493"
}
] |
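The second guideline above suggests reporting overlap statistics between pre-training and evaluation data. Below is a minimal Python sketch of such a check; the 13-gram window and whitespace tokenization are illustrative assumptions (similar n-gram matching appears in several LLM technical reports), not a procedure specified by this paper.

```python
from typing import Iterable, List, Set, Tuple

def ngrams(text: str, n: int = 13) -> Set[Tuple[str, ...]]:
    """Whitespace-tokenize and collect every n-gram in the text."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def contamination_report(pretrain_docs: Iterable[str],
                         eval_examples: List[str],
                         n: int = 13) -> float:
    """Return the fraction of evaluation examples that share at least
    one n-gram with the pre-training corpus (a coarse leakage signal)."""
    corpus_ngrams: Set[Tuple[str, ...]] = set()
    for doc in pretrain_docs:
        corpus_ngrams |= ngrams(doc, n)
    flagged = sum(1 for ex in eval_examples if ngrams(ex, n) & corpus_ngrams)
    return flagged / max(len(eval_examples), 1)
```

Exact n-gram matching is cheap and transparent, which is why it is a common first-pass filter, but it only flags verbatim reuse.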
2311.01964 | 34 | Although this issue is hard to fully eliminate at the pre-training stage, we suggest several useful guidelines to improve the use of existing evaluation benchmarks. A key point is that both LLM developers and benchmark maintainers should be aware of the data contamination issue when interpreting and using the results from performance leaderboards. In practice, several heuristic strategies can be useful to detect such potential contamination issues, e.g., calculating the token overlap between training and evaluation data. Besides, we also suggest that benchmark tests be conducted with multiple task prompts to derive more stable and reliable model performance.
This work aims to draw the attention of the research community to the appropriate use of existing evaluation benchmarks for LLMs. More meaningful work can be conducted along this line, e.g., flagging potentially contaminated datasets.
# Limitation
In this work, we conducted preliminary experiments to emphasize the potential risks associated with benchmark leakage in training LLMs. However, there are still several limitations in our study. First, our experiments involved continually training existing pre-trained LLMs with leaked data. We do not have sufficient computational resources to
| 2311.01964#34 | Don't Make Your LLM an Evaluation Benchmark Cheater |
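The row above (and the third maintainer guideline) recommends evaluating with multiple task prompts and averaging the results. A hedged sketch of that protocol follows; `score_model` is a hypothetical stand-in for any benchmark-specific scoring function, and the templates are illustrative.

```python
import statistics
from typing import Callable, Dict, List

# Illustrative prompt templates; any benchmark would define its own set.
PROMPTS = [
    "Question: {q}\nAnswer:",
    "Q: {q}\nA:",
    "Please answer the following question.\n{q}\nAnswer:",
]

def evaluate_with_prompt_set(score_model: Callable[[List[str]], float],
                             questions: List[str],
                             prompts: List[str] = PROMPTS) -> Dict[str, object]:
    """Score the benchmark once per prompt template and report the mean
    and spread instead of a single prompt-specific number."""
    per_prompt = [score_model([p.format(q=q) for q in questions]) for p in prompts]
    return {
        "mean": statistics.mean(per_prompt),
        "stdev": statistics.stdev(per_prompt) if len(per_prompt) > 1 else 0.0,
        "per_prompt": per_prompt,
    }
```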
2311.01964 | 35 | investigate the impact of directly incorporating benchmark leakage during the pre-training process. Given that the pre-training dataset is significantly larger than the benchmark data, introducing data leakage during pre-training might yield different findings. Nonetheless, we strongly recommend avoiding this situation, as it would break the nature of zero-shot/few-shot evaluation.
Second, we did not explore more fine-grained data leakage scenarios in this study, such as only leaking training examples without labels, or varying the proportion of the leaked dataset. We encourage more research efforts into this issue with more systematic studies.
Third, we did not calculate the degree of contamination between mainstream benchmarks and commonly used pre-training datasets, which could serve as an important reference for alerting LLM developers to adjust their evaluation settings. While we suggest that developers and benchmark maintainers report contamination analyses, accurately and efficiently estimating the contamination risk of each example in the benchmark is also a challenging task. For example, the suggested n-gram hash algorithm may not detect semantic-level knowledge leakage risks.
# References
Rachith Aiyappa, Jisun An, Haewoon Kwak, and Yong-Yeol Ahn. 2023. Can we trust the evaluation on ChatGPT? arXiv preprint arXiv:2303.12767. | 2311.01964#35 | Don't Make Your LLM an Evaluation Benchmark Cheater |
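The limitation discussed in the row above notes that an n-gram hash algorithm may miss semantic-level leakage. The toy example below, with an assumed generic hashing scheme, shows why: hashed token windows match only verbatim spans, so even a simple paraphrase evades the filter.

```python
import hashlib

def ngram_hashes(text: str, n: int = 8) -> set:
    """Hash every n-token window; membership tests are cheap, but they
    match only verbatim (token-identical) spans."""
    tokens = text.lower().split()
    return {hashlib.md5(" ".join(tokens[i:i + n]).encode()).hexdigest()
            for i in range(len(tokens) - n + 1)}

original = "the capital of france is paris and it lies on the seine"
paraphrase = "paris, which sits on the seine, is the capital city of france"

# A verbatim copy is detected, but the paraphrase shares no 8-gram hash
# with the original, so semantic-level leakage slips through this filter.
assert ngram_hashes(original) & ngram_hashes(original)
assert not (ngram_hashes(original) & ngram_hashes(paraphrase))
```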
2311.01964 | 36 | Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernández Ábrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan A. Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, et al. 2023. PaLM 2 technical report. CoRR, abs/2305.10403. | 2311.01964#36 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 37 | Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: Reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7432–7439. AAAI Press.
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large scale autoregressive language modeling with Mesh-TensorFlow. If you use this software, please cite it using these metadata. | 2311.01964#37 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 38 | Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Sahil Chaudhary. 2023. Code Alpaca: An instruction-following LLaMA model for code generation. https://github.com/sahil280114/codealpaca. | 2311.01964#38 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 39 | Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, | 2311.01964#39 | Don't Make Your LLM an Evaluation Benchmark Cheater |
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2924–2936. Association for Computational Linguistics.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. CoRR, abs/1803.05457.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168. | 2311.01964#41 | Don't Make Your LLM an Evaluation Benchmark Cheater |
Together Computer. 2023. RedPajama-Data: An open source recipe to reproduce the LLaMA training dataset.
OpenCompass Contributors. 2023. OpenCompass: A universal evaluation platform for foundation models. https://github.com/open-compass/opencompass.
Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, and Guoping Hu. 2019. A span-extraction dataset for Chinese machine reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5882–5888. Association for Computational Linguistics.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800GB dataset of diverse text for language modeling. CoRR, abs/2101.00027.
Xinyang Geng and Hao Liu. 2023. OpenLLaMA: An open reproduction of LLaMA. | 2311.01964#42 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 43 | Xinyang Geng and Hao Liu. 2023. OpenLLaMA: An open reproduction of LLaMA.
Shahriar Golchin and Mihai Surdeanu. 2023. Time travel in LLMs: Tracing data contamination in large language models. arXiv preprint arXiv:2308.08493.
Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2013. An empirical investigation of catastrophic forgetting in gradient-based neural networks. CoRR, abs/1312.6211.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The Goldilocks principle: Reading children's books with explicit memory representations. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. | 2311.01964#43 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 44 | Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. CoRR, abs/2305.08322.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 785–794. Association for Computational Linguistics.
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023. Textbooks are all you need II: phi-1.5 technical report. CoRR, abs/2309.05463.
Yucheng Li. 2023. An open source data contamination report for llama series models. CoRR, abs/2307.03109. | 2311.01964#44 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 45 | Yucheng Li. 2023. An open source data contamination report for llama series models. CoRR, abs/2307.03109.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 158–167. Association for Computational Linguistics.
Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. 2023. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. CoRR, abs/2308.08747.
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? A new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2381–2391. Association for Computational Linguistics. | 2311.01964#45 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 46 | Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1797–1807. Association for Computational Linguistics.
OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.
Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, and Tatsunori B. Hashimoto. 2023. Proving test set contamination in black box language models. CoRR, abs/2307.03109.
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics. | 2311.01964#46 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 47 | Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Trans. Assoc. Comput. Linguistics, 7:249–266.
Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, and Eneko Agirre. 2023. NLP evaluation in trouble: On the need to measure LLM data contamination for each benchmark. arXiv preprint arXiv:2310.18018.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. WinoGrande: An adversarial Winograd Schema Challenge at scale. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8732–8740. AAAI Press. | 2311.01964#47 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 49 | Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew K. Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin | 2311.01964#49 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 51 | Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2020. Investigating prior knowledge for challenging Chinese machine reading comprehension. Trans. Assoc. Comput. Linguistics, 8:141–155.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4149–4158. Association for Computational Linguistics.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca. | 2311.01964#51 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 55 | Yaqing Wang, Quanming Yao, James T. Kwok, and Lionel M. Ni. 2021. Generalizing from a few examples: A survey on few-shot learning. ACM Comput. Surv., 53(3):63:1–63:34.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS.
Yongqin Xian, Christoph H. Lampert, Bernt Schiele, and Zeynep Akata. 2019. Zero-shot learning - A comprehensive evaluation of the good, the bad and the ugly. IEEE Trans. Pattern Anal. Mach. Intell., 41(9):2251–2265.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 4791–4800. Association for Computational Linguistics. | 2311.01964#55 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01964 | 56 | Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models. CoRR, abs/2303.18223.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. AGIEval: A human-centric benchmark for evaluating foundation models. CoRR, abs/2304.06364.
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Zhicheng Dou, and Ji-Rong Wen. 2023. Large language models for information retrieval: A survey. CoRR, abs/2308.07107. | 2311.01964#56 | Don't Make Your LLM an Evaluation Benchmark Cheater |
2311.01343 | 0 | arXiv:2311.01343v3 [cs.IR] 8 Nov 2023
Collaborative Large Language Model for Recommender Systems
Yaochen Zhu1, Liang Wu2, Qi Guo2, Liangjie Hong2, Jundong Li1
1University of Virginia, 2LinkedIn Inc.
1{uqp4qh, jundong}@virginia.edu, 2{liawu, qguo, liahong}@linkedin.com
[Teaser figure: user-item interactions and user/item features (continuous or categorical) are transformed into natural language, so the LLM's encoded knowledge and reasoning ability can support recommendation, e.g., user_1 has bought item_2 (a computer) and is a CS student; since a mouse is a component of a PC, maybe she needs a mouse, so the answer to "will user_1 buy a mouse?" is "Yes!"]
# ABSTRACT | 2311.01343#0 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
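The CLLM4Rec abstract above describes extending a pretrained LLM's vocabulary with user/item ID tokens whose embeddings are then learned on soft (ID) + hard (vocab) token prompts. Below is a minimal sketch of how such a vocabulary extension could be wired up with the Hugging Face `transformers` API; the GPT-2 backbone, token naming scheme, and prompt format are assumptions for illustration, not the authors' implementation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sizes; a real recommender would use its own user/item universe.
n_users, n_items = 1000, 500

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Register one new token per user and per item; resizing the embedding
# matrix creates the trainable user/item token embeddings.
id_tokens = [f"<user_{u}>" for u in range(n_users)] + \
            [f"<item_{i}>" for i in range(n_items)]
tokenizer.add_tokens(id_tokens, special_tokens=True)
model.resize_token_embeddings(len(tokenizer))

# A "soft+hard" prompt mixes soft (ID) tokens with hard (vocab) tokens.
prompt = "<user_42> has interacted with <item_7> <item_99>. <user_42> will interact with"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model(**inputs)  # language modeling over the mixed-token sequence
```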
2311.01555 | 0 | arXiv:2311.01555v1 [cs.IR] 2 Nov 2023
# Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
Weiwei Sun1, Zheng Chen1, Xinyu Ma2, Lingyong Yan2, Shuaiqiang Wang2, Pengjie Ren1, Zhumin Chen1, Dawei Yin2, Zhaochun Ren3. 1Shandong University, Qingdao, China; 2Baidu Inc., Beijing, China; 3Leiden University, Leiden, The Netherlands. {sunnweiwei,xinyuma2016,lingyongy}@gmail.com, [email protected], [email protected]
# Abstract | 2311.01555#0 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
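The instruction-distillation idea above transfers a pairwise teacher's ordering to a pointwise student that scores each document independently (and therefore much more cheaply). One generic way to express that objective is a RankNet-style loss over teacher-preferred pairs; this is a hedged sketch, not the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F
from typing import List, Tuple

def distill_pairwise_to_pointwise(student_scores: torch.Tensor,
                                  teacher_pairs: List[Tuple[int, int]]) -> torch.Tensor:
    """student_scores: shape (num_docs,), one pointwise relevance score per
    candidate document for a query. teacher_pairs: (i, j) pairs where the
    pairwise teacher judged document i more relevant than document j.
    Minimizing -log sigmoid(s_i - s_j) teaches the cheap pointwise student
    to reproduce the expensive pairwise teacher's ordering."""
    losses = [F.softplus(student_scores[j] - student_scores[i])  # = -log sigmoid(s_i - s_j)
              for i, j in teacher_pairs]
    return torch.stack(losses).mean()

# Toy usage: four candidate documents; the teacher prefers 0>1, 0>2, 2>3.
scores = torch.randn(4, requires_grad=True)
loss = distill_pairwise_to_pointwise(scores, [(0, 1), (0, 2), (2, 3)])
loss.backward()
```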
2311.04915 | 0 | # Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, & Sowon Hahn Human Factors Psychology Lab Seoul National University [email protected], [email protected]
[Figure 1 content] Standard prompting — Input: "I just broke up. My life is over." Output: "I am sorry to hear that you are sad." Chain-of-Empathy (CoE) prompting — Input: "I just broke up. My life is over." CoE reasoning, 1st step (Emotion): the client is feeling sad; 2nd step (Cognitive Error): the client shows a catastrophic cognitive error. Output: "I'm sorry to hear that. I understand you're feeling extremely upset due to your breakup. It's natural to feel this way. Breakups are hard, but they don't define your whole existence or future. Let's work through these emotions and explore ways to cope." Psychotherapy-based CoE foci: "Cognitive Error", "Emotion Control", "Self-Understanding", "Decision Making".
Figure 1: Chain-of-Empathy (CoE) prompting with cognitive reasoning of human's emotion based on psychotherapy models.
# Abstract | 2311.04915#0 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 1 | Recently, there is a growing interest in developing next-generation recommender systems (RSs) based on pretrained large language models (LLMs), fully utilizing their encoded knowledge and reasoning ability. However, the semantic gap between natural language and recommendation tasks is still not well addressed, leading to multiple issues such as spuriously-correlated user/item descriptors, ineffective language modeling on user/item contents, and inefficient recommendations via auto-regression, etc. In this paper, we propose CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and ID paradigm of RS, aiming to address the above challenges simultaneously. We first extend the vocabulary of pretrained LLMs with user/item ID tokens to faithfully model the user/item collaborative and content semantics. Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is proposed to effectively learn user/item collaborative/content token embeddings via language modeling on RS-specific corpora established from user-item interactions and user/item features, where each document is split into a prompt consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and | 2311.01343#1 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 1 | # Abstract
Recent studies have demonstrated the great potential of Large Language Models (LLMs) serving as zero-shot relevance rankers. The typical approach involves making comparisons between pairs or lists of documents. Although effective, these listwise and pairwise methods are not efficient and also heavily rely on intricate prompt engineering. To tackle this problem, we introduce a novel instruction distillation method. The key idea is to distill the pairwise ranking ability of open-sourced LLMs to a simpler but more efficient pointwise ranking. Specifically, given the same LLM, we first rank documents using the effective pairwise approach with complex instructions, and then distill the teacher predictions to the pointwise approach with simpler instructions. Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that instruction distillation can improve efficiency by 10 to 100× and also enhance the ranking performance of LLMs. Furthermore, our approach surpasses the performance of existing supervised methods like monoT5 and is on par with the state-of-the-art zero-shot methods. The code to reproduce our results is available at www.github.com/sunnweiwei/RankGPT.
# Introduction | 2311.01555#1 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 1 | Figure 1: Chain-of-Empathy (CoE) prompting with cognitive reasoning of human's emotion based on psychotherapy models.
# Abstract
We present a novel method, the Chain of Empathy (CoE) prompting, that utilizes insights from psychotherapy to induce Large Language Models (LLMs) to reason about human emotional states. This method is inspired by various psychotherapy approaches, including Cognitive-Behavioral Therapy (CBT), Dialectical Behavior Therapy (DBT), Person-Centered Therapy (PCT), and Reality Therapy (RT), each leading to different patterns of interpreting clients' mental states. LLMs without reasoning generated predominantly exploratory responses. However, when LLMs used CoE reasoning, we found a more comprehensive range of empathetic responses aligned with each psychotherapy model's different reasoning patterns. The CBT-based CoE resulted in the most balanced generation of empathetic responses. The findings underscore the importance of understanding the emotional context and how it affects human-AI communication. Our research contributes to understanding how psychotherapeutic models can be incorporated into LLMs, facilitating the development of context-specific, safer, and empathetic AI.
# 1. Introduction | 2311.04915#1 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 2 | and user/item features, where each document is split into a prompt consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and a main text consisting of homogeneous item tokens or vocab tokens that facilitates stable and effective language modeling. In addition, a novel mutual regularization strategy is introduced to encourage the CLLM4Rec to capture recommendation-oriented information from user/item contents. Finally, we propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where an item prediction head with multinomial likelihood is added to the pretrained CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts established from masked user-item interaction history, where recommendations of multiple items can be generated efficiently.[1] | 2311.01343#2 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 2 | # Introduction
Large Language Models (LLMs), such as ChatGPT and GPT-4, have achieved remarkable success in various Natural Language Processing (NLP) tasks (OpenAI, 2022; 2023). One notable capability of LLMs is their ability to solve tasks using carefully designed prompts or instructions (Microsoft, 2023). This has drawn much attention from the Information Retrieval (IR) community given its potential to significantly reduce the huge annotation costs (Shi et al., 2023; Sun et al., 2023c).
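To make the prompting idea concrete, a minimal sketch follows. The templates paraphrase common pointwise and pairwise ranking prompts rather than reproducing any cited paper's exact wording, and `llm_generate` is a hypothetical stand-in for an LLM API.

```python
# A minimal sketch of zero-shot ranking prompts; wording is illustrative only.

def llm_generate(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., FLAN-T5 or a GPT endpoint)."""
    raise NotImplementedError

def pointwise_prompt(query: str, doc: str) -> str:
    # One LLM call per document; "Yes"/"No" is mapped to a relevance score.
    return (f"Passage: {doc}\nQuery: {query}\n"
            "Does the passage answer the query? Answer Yes or No.")

def pairwise_prompt(query: str, doc_a: str, doc_b: str) -> str:
    # One LLM call per ordered pair, hence O(n^2) calls for n documents.
    return (f"Query: {query}\nPassage A: {doc_a}\nPassage B: {doc_b}\n"
            "Which passage is more relevant to the query? Answer A or B.")
```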
Relevance ranking has been the most critical problem in IR, which aims at ranking a set of candidate items by their relevance given the query (Fan et al., 2021). Recently, there has been a series of works using large models as zero-shot rankers through pointwise, pairwise, and listwise ranking prompting, and these have achieved impressive results on IR benchmarks (Sun et al., 2023c; Ma et al., 2023; Qin et al., 2023). | 2311.01555#2 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 2 | contributes to understanding how psychotherapeutic models can be incorporated into LLMs, facilitating the development of context-specific, safer, and empathetic AI.
# 1. Introduction
Large Language Models (LLMs) have dramatically advanced, achieving generation performance that highly resembles human expressions (Brown et al., 2020; Touvron et al., 2023; Taori et al., 2023; Bommasani et al., 2021). These models have been showcasing their reasoning abilities and achieving high performance in various problem-solving tasks, including professional exams such as the bar exam (Bommarito II and Katz, 2022), a math test (Zhang et al., 2023), and medical diagnoses (Nori et al., 2023). Among many recent findings related to LLMs, one interesting point is the introduction of "Chain-of-Thought (CoT)" prompting (Wei et al., 2022; Kojima et al., 2022). This method elicits reasoning before generating outputs. Nevertheless, this recent method has primarily experimented with
logical or arithmetic tasks. Whether reasoning about emotional states or underlying causes enhances empathetic responses to user input remains a relatively under-explored area and merits investigation. Empathetic | 2311.04915#2 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 3 | # Figure 1: Prospect of developing the next generation of recommender systems based on the pretrained LLMs.
[10], such as GPT [11], T5 [12], LLaMA [13], have demonstrated emergent abilities when trained on large-scale corpora [14], showcasing an unprecedented understanding of knowledge and patterns contained in natural language [9, 15]. Consequently, it is promising to develop the next generation of RS based on the pretrained LLMs [16], fully utilizing their encoded knowledge, logical reasoning ability, and generative AI power to understand and reason with the user/item semantics and make more accurate recommendations accordingly, especially when users and items are associated with large amounts of textual features, such as biographies, descriptions, content, reviews, and explanations, etc., in modern online platforms [17, 18] (see Fig. 1 for an intuitive example of an LLM-based RS).
# 1 INTRODUCTION | 2311.01343#3 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
Employing LLMs for ranking tasks still faces several practical challenges, including application efficiency and output stability. On one hand, both listwise and pairwise ranking methods suffer from efficiency issues. For listwise ranking (Sun et al., 2023c; Ma et al., 2023), the exponential time complexity of the Transformer with respect to input length renders it impractical for many industrial applications. Pairwise ranking requires pairing every document with every other, with the obvious drawback being its costly O(n^2) calls to LLMs (Qin et al., 2023). On the other hand, while pointwise ranking is more efficient, it compromises on effectiveness (Liang et al., 2022). The pretraining objective of LLMs isn't inherently tailored for ranking tasks (i.e., generative language modeling vs. relevance ranking), meaning its prediction probability isn't calibrated to the relevance score (Zhao
[Figure 1: average nDCG@10 of LLM-based re-ranking methods plotted against speed relative to monoT5-Base, from 1x to 10000x.] | 2311.01555#3 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 3 | logical or arithmetic tasks. Whether reasoning about emotional states or underlying causes enhances empathetic responses to user input remains a relatively under-explored area and merits investigation. Empathetic
requires cognitive reasoning of others' mental states. Different psychotherapeutic approaches offer varied perspectives on empathy (Hofmann et al., 2010; Linehan, 1987; Cooper and McLeod, 2011; Wubbolding et al., 2017). By integrating these approaches into LLMs' reasoning stage, we can enhance the depth and specificity of their empathetic responses. For this purpose, this study delves into these possibilities and proposes a novel prompting method, Chain-of-Empathy (CoE) prompting. The CoE prompt integrates a reasoning step into text generation: it focuses on clients' emotions and the specific factors leading to those emotions, such as cognitive errors, before generating the output.
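A minimal sketch of how such a CoE prompt could be assembled is shown below. The focus phrases and wording are assumptions for illustration, not the paper's exact instructions.

```python
# Illustrative CoE prompt builder following the two reasoning stages above.
COE_FOCUS = {
    "CBT": "any cognitive error (e.g., catastrophizing) behind the emotion",
    "DBT": "signs of emotional dysregulation",
    "PCT": "what the message reveals about the client's self-understanding",
    "RT": "the unmet need or cause of dissatisfaction",
}

def build_coe_prompt(client_message: str, therapy: str = "CBT") -> str:
    return (
        f"Client: {client_message}\n"
        "Step 1 (Emotion): identify the emotion the client is feeling.\n"
        f"Step 2 ({therapy}): identify {COE_FOCUS[therapy]}.\n"
        "Then write an empathetic response grounded in these two steps."
    )
```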
# 2. Related Work
# 2.1. Theoretical Backgrounds of Empathy | 2311.04915#3 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 4 | # 1 INTRODUCTION
With content growing exponentially on the Web, recommender system (RS) has become an essential component for online service platforms [1]. Nevertheless, since Netflix released its Prize in 2006 [2], RS has long been dominated by the ID-based paradigm, where users and items are represented by unique, continuous ID embeddings denoting their semantic similarity (e.g., w.r.t. users' preferences on items, user/item contents, etc.) [3]. Exemplar ID-based RSs include matrix factorization-based methods such as PMF [4] and the two-tower models [5], where the user/item ID embeddings are either randomly initialized and learned from their historical interactions (i.e., collaborative filtering [6]), or established based on user/item content features (i.e., content-based methods [7, 8]). Recently, large language model (LLM) has become a heated research topic that revolutionized both academia and industry [9]. Transformer-based neural networks with billions of parameters | 2311.01343#4 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 4 |
Figure 1: The average nDCG@10 of various LLM-based re-ranking methods on TREC benchmarks. The horizontal axis represents the speed of each method relative to monoT5-Base (Nogueira et al., 2020), as measured by the average latency time per query. All methods are based on the T5 series foundation models. RG refers to the relevance generation method, and PRP refers to the pairwise ranking method.
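For reference, the nDCG@10 metric reported in the caption can be computed in a few lines. This is one standard formulation; individual benchmarks may use gain or discount variants.

```python
import math

def ndcg_at_10(relevances):
    """nDCG@10 for a ranked list of graded relevance labels, using the common
    (2^rel - 1) gain and log2 discount."""
    def dcg(rels):
        return sum((2 ** r - 1) / math.log2(rank + 2)
                   for rank, r in enumerate(rels[:10]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Example: a ranking that places the most relevant document third.
print(ndcg_at_10([0, 0, 2, 1]))  # ~0.53
```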
et al., 2021; 2023). Other challenges, such as unstable outputs, position bias, and repetitions from LLMs, become more pronounced in IR tasks, where deterministic output in terms of relevance is crucial (Sun et al., 2023c). | 2311.01555#4 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 4 | # 2. Related Work
# 2.1. Theoretical Backgrounds of Empathy
Empathy, defined as sharing others' emotions and experiences, is a multifaceted concept encompassing cognitive and emotional aspects (Neff, 2003; Anderson and Keltner, 2002; De Vignemont and Singer, 2006; Hall and Schwartz, 2019; Zaki, 2019). Cognitive empathy involves understanding others' emotions and perspectives, linked to abilities such as mentalizing and narrative imagination (Eisenberg, 2014). It requires an in-depth cognitive appraisal of the situation, considering factors like pleasantness, control, and certainty of the outcome (Lazarus, 1991; Wondra and Ellsworth, 2015). Affective (emotional) empathy allows individuals to experience others' emotions, while motivational empathy, a newer concept, embodies the desire to alleviate others' emotional distress (Zaki, 2019).
# 2.2. Empathetic Communication in Text
Natural Language Processing (NLP) research has been increasingly developing conversational agents, or chatbots, across various professional domains. These include mental healthcare for victims of crime (Ahn et al., 2020), individuals on the autism spectrum (Diehl et al., 2012), and those suffering from anxiety disorders (Rasouli et al., 2022). | 2311.04915#4 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
Several preliminary studies have been conducted to investigate the adaptation of LLMs for recommendation systems [19–22]. Typically, these methods can be summarized into two steps: 1) First, instead of representing users/items with continuous ID embeddings, relevant information necessary for reasoning with user interests and generating recommendations, i.e., target user, interacted items, user/item features, and candidate items, are converted into a natural language-based prompt. 2) Then, the prompt is used to query the LLM, where information relevant to recommendations (e.g., whether the user will interact with an item or not) is retrieved from the textual output of the LLM to generate recommendations. The above procedure can be performed in a zero-shot manner [23–26], where the recommendation decisions are obtained directly from the pretrained LLM (e.g., we input all relevant information regarding a user and an item into the chatbox of ChatGPT and ask if the user will interact with the item), or, if ground truths are available, the pretrained LLMs can also be finetuned, such that RS-specific knowledge can be updated into the pretrained model [20, 27–29]. Although progress has been achieved by | 2311.01343#5 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 5 | To address these challenges, this paper introduces a novel Instruction Distillation method to enhance the efficiency and stability of LLMs in the ranking task. The key idea is to distill the predictions of pairwise ranking (PRP) with a computationally demanding instruction (the teacher instruction) to the efficient pointwise prompting method with a simpler instruction (the student instruction). Through this distillation process, the task instructions used for ranking are substantially simplified, leading not only to increased efficiency but also to enhanced performance. In this work, we use the open-sourced FLAN-T5 LLMs, and our method is zero-shot text ranking since FLAN-T5 is not directly exposed to human-labeled data. | 2311.01555#5 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 5 | Recently, chatbots designed for psychotherapy (e.g., CBT) have shown promising results in assisting the long-term treatment of anxiety and depression (Nwosu et al., 2022). However, current AI-generated responses appear generic and less authentic, making personalized responses a significant challenge. Empathetic reasoning is crucial for these systems, leading to ongoing efforts to enhance their empathetic expression by incorporating human-like traits (Roller et al., 2021).
# 2.3. Computational Approach to Empathy
Past research in psychotherapy has primarily focused on empathy based on the analysis of nonverbal cues, such as body language and facial expressions, often requiring manual coding of empathetic responses (Scherer et al., 2001; American Psychiatric Association et al., 1994; Ekman and Friesen, 1971).
Recent advances in artificial intelligence have shifted towards a computational approach, where empathy is predicted from a text corpus and quantified through the labeling of emotions (Rashkin et al., 2019) and distress (Buechel et al., 2018). While most studies have traditionally concentrated on the client's capacity for empathy, the empathy expressed by the counselor is increasingly recognized as critical to successful therapy outcomes (Truax and Carkhuff, 2007). This aspect of expressed empathy is particularly relevant to our approach, where we aim to use LLMs to accurately reflect their understanding of the client's needs.
# 2.4. Reasoning in Large Language Models | 2311.04915#5 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 6 | the pretrained LLMs can also be finetuned, such that RS-specific knowledge can be updated into the pretrained model [20, 27–29]. Although progress has been achieved by these pioneering works, some fundamental dichotomies between natural language processing (NLP) and recommendation still remain to be addressed. One main challenge is the gap between natural language and user/item semantics. Generally, there are two strategies to represent user/item | 2311.01343#6 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 6 | We empirically evaluate instruction-distilled models against other baselines in Figure 1. These distilled student models are between 10 and 100× more efficient compared to their teacher models (i.e., PRP) while also yielding significant enhancements. Compared to vanilla pointwise ranking methods (Relevance Generation methods, RG), our distilled models show a 40% performance improvement in terms of nDCG@10. Remarkably, our distilled FLAN-T5-XL model even surpasses SOTA supervised systems like monoT5-3B (Nogueira et al., 2020) in IR benchmarks. This is particularly notable as it achieves this without relying on any human relevance judgments. Further verification is conducted on various ranking tasks such as the BEIR benchmark and the conversational recommendation tasks present in the ReDial benchmark.
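A conceptual sketch of the distillation recipe evaluated above is given below. The interfaces (`llm.prefers`, `student.score`) and the RankNet-style loss are illustrative assumptions, not the paper's exact implementation.

```python
import math

def softplus(x):
    # Numerically stable log(1 + e^x).
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def pairwise_teacher_rank(llm, query, docs):
    """Teacher: rank docs by pairwise wins under the complex instruction.
    `llm.prefers` is a hypothetical call returning True if the first doc wins."""
    wins = [0] * len(docs)
    for i in range(len(docs)):
        for j in range(len(docs)):
            if i != j and llm.prefers(query, docs[i], docs[j]):
                wins[i] += 1  # O(n^2) LLM calls, as in PRP
    return sorted(range(len(docs)), key=lambda i: -wins[i])

def distillation_loss(student, llm, query, docs):
    """Student: pointwise scores (one call per doc) fitted to the teacher's
    ordering with a RankNet-style pairwise logistic loss."""
    order = pairwise_teacher_rank(llm, query, docs)
    scores = [student.score(query, d) for d in docs]
    loss = 0.0
    for pos, i in enumerate(order):
        for j in order[pos + 1:]:  # teacher prefers i over j
            loss += softplus(scores[j] - scores[i])
    return loss
```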
In summary, this paper makes the following contributions:
• We propose Instruction Distillation, an unsupervised approach to specialize LLMs on IR tasks by distilling instructions.
• We show that the instruction-distilled LLM is both more efficient and more effective compared to existing zero-shot LLMs with the same number of parameters.
• We illustrate the robust performance of our method on both passage ranking and movie recommendation tasks, surpassing the state-of-the-art supervised methods.[1]
[1] Code and pre-trained models are available at https://github.com/sunnweiwei/RankGPT/tree/main/InstructDistill
| 2311.01555#6 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 6 | # 2.4. Reasoning in Large Language Models
Recently, CoT has been shown to be effective in eliciting the reasoning process of LLMs (Wei et al., 2022; Prystawski et al., 2022; Yao et al., 2023; Kojima et al., 2022). CoT prompting in previous research has included reasoning steps within the prompt instruction for zero- or one-shot learning of LLMs during text generation
Prompt conditions:
CBT-CoE: Goal = Cognitive reframing; Reasoning = Tackling negative thought patterns.
DBT-CoE: Goal = Emotion regulation; Reasoning = Addressing emotional dysregulation.
PCT-CoE: Goal = Self-understanding; Reasoning = Enhancing self-awareness.
RT-CoE: Goal = Problem-focused coping; Reasoning = Identifying cause of the dissatisfaction.

Table 1: Comparison of goals and reasoning style in different psychotherapy-based CoEs.
(Kojima et al., 2022). This method has improved performance on problem-solving (Kojima et al., 2022) and metaphor understanding (Prystawski et al., 2022), offering new insights and suggesting possibilities for generative models to be used in many other domains.
2011; Knutson and Koch, 2022), and Reality Therapy (RT; Wubbolding et al., 2017).[2] Except for the base condition, the instructions for these prompts were designed to reflect the therapists' reasoning process in their respective counseling models.
# 3. The Present Study | 2311.04915#6 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01555 | 7 | [1] Code and pre-trained models are available at https://github.com/sunnweiwei/RankGPT/tree/main/InstructDistill
# 2 Related Work
# 2.1 LLMs for Information Retrieval
Large language models (LLMs) have been pre-trained on a large-scale corpus and possess strong text understanding and reasoning capabilities (OpenAI, 2023; Google, 2023; Shoeybi et al., 2019; Touvron et al., 2023). Recently, LLMs have found increasing applications in information retrieval (Zhu et al., 2023; Wu et al., 2023; Yu et al., 2023; Sun et al., 2023a; Hou et al., 2023; Sun et al., 2023b; Bao et al., 2023). These methods can be broadly divided into two categories: synthetic data generation and relevance ranking.
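The synthetic-data branch surveyed in the next paragraph can be sketched in a few lines; `llm`, `embed`, and `index.nearest` are assumed interfaces, and the prompt is illustrative.

```python
# Sketch of pseudo-document generation for retrieval: ask an LLM for a
# hypothetical answer passage, then retrieve real documents near it in
# embedding space.

def pseudo_document_retrieval(llm, embed, index, query, k=10):
    pseudo_doc = llm(f"Write a short passage that answers the query: {query}")
    return index.nearest(embed(pseudo_doc), k=k)
```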
Several approaches have been proposed to utilize LLMs to generate synthetic data for IR. For example, SGPT (Muennighoff, 2022) generates text embeddings using GPT for dense retrieval; Gao et al. (2022) and Wang et al. (2023a) propose to generate pseudo-documents using LLMs and retrieve these pseudo-documents first using queries. Dai et al. (2023) proposes to generate pseudo-queries for few-shot dense retrieval. | 2311.01555#7 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 7 | # 3. The Present Study
We investigated whether eliciting empathetic reasoning in LLMs leads to natural responses. Therefore, we developed CoE prompting to reason about the emotional and situational factors that could help the model accurately infer the client's emotional experience in mental healthcare and thus choose the most appropriate and context-aware empathetic strategy to communicate.
Models in each prompting condition were tested zero-shot, with only instructions on which option to choose per class: empathetic strategy (emotional reaction, exploration, and interpretation) and communication level (no expression, weak, and strong) (Sharma et al., 2020). The common reasoning steps involved in each CoE condition were: (1) identify any word that represents the client's emotion, and (2) understand individual/situational factors that may have led to the expression in the client's message.
# 4. Methods
# 4.1. Language Model
# 5. Experiments
We used the GPT-3.5 API from OpenAI 1 for system setup. The model ("text-davinci-003") temperature was set to 0.9, and the top-p parameter was set to 1 for nucleus sampling; the frequency penalty was set to 0 and the presence penalty to 0.6 to reduce the randomness of the output.
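A minimal sketch of this setup (not from the paper), assuming the legacy OpenAI Python SDK (pre-1.0) whose Completion endpoint served "text-davinci-003"; the decoding parameters follow the paper, while the API key, prompt, and max_tokens are placeholders:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_response(prompt: str) -> str:
    # Decoding parameters as reported above.
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.9,       # sampling temperature
        top_p=1,               # nucleus sampling threshold
        frequency_penalty=0,
        presence_penalty=0.6,
        max_tokens=256,        # assumed; not reported in the paper
    )
    return completion.choices[0].text.strip()
```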
# 4.2. Chain-of-Empathy Reasoning | 2311.04915#7 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 8 | in an LLM-based RS. One strategy is the pseudo-ID-based method, where an ID-like word (e.g., "user_i" or "item_j") is used to represent the i-th user and j-th item [20]. However, since the vocabulary of most LLMs contains number-tokens up to two digits, when tokenized, the pseudo ID breaks down into atomic tokens, e.g., "user_4332" into ["user", "_", "43", "32"], where spurious correlations can be introduced for irrelevant users/items (e.g., "user_4332" with "user_43" and "user_32"). In contrast, description-based methods use semantically meaningful descriptions to index users/items, such as item titles [19, 24] or a small amount of newly-introduced tokens assigned to different users/items based on their content similarity [30]. However, description-based methods introduce a strong inductive bias on user-item semantic similarity, which may not faithfully capture the true semantics. Introducing user/item ID tokens, unfortunately, is generally considered infeasible for LLMs, | 2311.01343#8 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 8 | In addition, LLMs have also been used for relevance ranking tasks. UPR (Sachan et al., 2022a) and SGPT-CE (Muennighoff, 2022) introduce instructional query generation methods, which rank documents based on the generation likelihood of the query given the document. HELM (Liang et al., 2022) utilizes instructional relevance generation for ranking, prompting LLMs to generate relevance proxy tokens and rank documents based on the generation probability. RankGPT (Sun et al., 2023c) proposes a zero-shot permutation generation method, which prompts LLMs to directly generate the ranking permutation, and its performance surpasses supervised models when based on GPT-4. Qin et al. (2023) propose a pairwise ranking prompting method (PRP) based on open-sourced LLMs.
Though good results are achieved by the methods above, two challenges remain: (1) unstable output, sensitivity to input order, repetition, and position bias can severely harm performance; (2) sophisticated instruction techniques and task designs are commonly adopted to achieve high performance at the cost of computational complexity, making such methods hard to apply in practical scenarios.
2.2 LLMs Distillation | 2311.01555#8 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 8 | # 4.2. Chain-of-Empathy Reasoning
Table 1 and Figure 1 show four unique prompts with CoE in addition to the base condition (no reasoning): Cognitive-Behavioral Therapy (CBT; Beck, 1979; Kaczkurkin and Foa, 2022; Hofmann et al., 2010), Dialectical Behavior Therapy (DBT; Linehan, 1987), Person-Centered Therapy (PCT; Cooper and McLeod,
We prompted the models to generate appropriate responses to the posts of seekers seeking advice on Reddit and to predict the best-suited empathetic strategy. For the ground-truth label of each empathetic strategy class, we used EPITOME 3 , a crowdsourced dataset of Reddit posts on mental health, with an average inter-annotator agreement reported as above 0.68 (Sharma et al., 2020). The dataset comprised pairs of help-seeking posts and responding posts. Each pair was labeled based on (1) the type of expressed "empathy mechanism" (i.e.,
1 https://openai.com/ 2 We want to emphasize that these descriptions are not exhaustive representations of the goals of each psychotherapy. These goals and reasoning strategies have been specifically modified for LLM prompting
and do not reflect the entire interaction between clinical/counseling psychologists and clients. 3 https://github.com/behavioral-data/Empathy-Mental-Health | 2311.04915#8 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 9 | which may not faithfully capture the true semantics. Introducing user/item ID tokens, unfortunately, is generally considered infeasible for LLMs, as directly conducting language modeling on sequences with heterogeneous tokens can be ineffective and unstable, especially when the vocabulary of most LLMs is diluted (e.g., ~50k for GPT, and ~30k for T5) by a large number of randomly initialized user/item embeddings. Even if user/item ID token embeddings can be effectively learned via language modeling, another challenge that hinders effective collaborative filtering with LLMs is that, since the order of interactions usually does not matter for direct recommendations while human language naturally has an order, spurious temporal correlations can be introduced for items placed in different positions when transforming the user historical interactions into textual sentences. Furthermore, for content modeling, since pretrained LLMs are not recommendation-oriented, they can easily capture noise in the user/item textual features irrelevant to the recommendation purpose. Finally, since LLMs generate the next token in an autoregressive manner, recommending multiple items can be inefficient. For both pseudo-ID-based | 2311.01343#9 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 9 | 2.2 LLMs Distillation
Despite their impressive capabilities, LLMs such as GPT-4 often come with high costs and lack open-source availability. As a result, considerable research has explored ways to distill the capabilities of LLMs into specialized, customized models. For instance, Fu et al. (2023) and Magister et al. (2022) have successfully distilled the reasoning ability of LLMs into smaller models. Self-Instruct (Wang et al., 2023b; Taori et al., 2023) proposes iterative approaches to distill GPT-3 using its own outputs.
Additionally, Sachan et al. (2022b) and Shi et al. (2023) utilize the generation probability of LLMs to improve retrieval systems. Snell et al. (2022) introduces a similar context distillation method to simplify overlong contexts when prompting LLMs on Text-to-SQL tasks. This paper presents the instruction distillation method, which distills the ability elicited by sophisticated instructions into the same model driven by simpler, more efficient instructions, enhancing both efficiency and output stability.
# 3 Method | 2311.01555#9 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 9 | and do not reflect the entire interaction between clinical/counseling psychologists and clients. 3 https://github.com/behavioral-data/Empathy-Mental-Health
Condition   Acc     ER Prec.  ER Recall  ER F1   Int. Prec.  Int. Recall  Int. F1   Exp. Prec.  Exp. Recall  Exp. F1
Base        0.340   0.467     0.185      0.27    0           0            0         0.327       0.866        0.475
CBT-CoE     0.319   0.463     0.165      0.244   0.293       0.260        0.276     0.303       0.543        0.389
DBT-CoE     0.334   0.392     0.372      0.382   0.291       0.060        0.100     0.309       0.582        0.404
PCT-CoE     0.336   0.399     0.243      0.302   0.333       0.016        0.031     0.319       0.757        0.449
RT-CoE      0.336   0.407     0.308      0.350   0.354       0.044        0.079     0.309       0.664        0.420
(ER = Emotional Reaction; Int. = Interpretation; Exp. = Exploration)
Table 2: Model performance in empathetic strategy classification task by CoE prompting conditions. *Prec. = Precision | 2311.04915#9 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01555 | 10 | # 3 Method
In this section, we introduce the instruction distillation method in detail. This novel approach enhances both the effectiveness and efficiency of open-sourced LLMs at inference time by distilling the capabilities elicited by complex instructions into a simpler, more efficient one. Thus, when deployed in real-world applications, our method achieves good performance while requiring much lower computation costs than alternatives.
3.1 Task Formalization
The task of relevance ranking can be formally defined as follows: Given a query q and a set of candidate items D = {d_1, . . . , d_n}, the objective is to determine the ranking of these candidates, represented as R = {r_1, . . . , r_n}. Here, r_i ∈ {1, 2, . . . , n} denotes the rank of candidate d_i. For instance, if r_i = 3, it denotes that d_i is ranked third among the n candidates. A ranking model, denoted as f (·), assigns scores to the candidates based on their relevance to the query:
s_i = f(q, d_i)   (1)

Subsequently, the candidates are ranked according to these relevance scores:

r_i = arg sort_i (s_1, . . . , s_n)
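As a minimal illustration of this formalization (a sketch, not from the paper), any scoring function f induces the ranking via argsort:

```python
from typing import Callable, List

def rank(query: str, docs: List[str], f: Callable[[str, str], float]) -> List[int]:
    scores = [f(query, d) for d in docs]          # s_i = f(q, d_i)
    order = sorted(range(len(docs)), key=lambda i: -scores[i])
    ranks = [0] * len(docs)
    for position, i in enumerate(order):
        ranks[i] = position + 1                   # r_i = 1 for the top document
    return ranks
```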
3.2 Prompting LLMs for Ranking Tasks | 2311.01555#10 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 10 | Table 2: Model performance in empathetic strategy classification task by CoE prompting conditions. *Prec. = Precision
empathy strategy) and (2) the presence and "level" of each expressed empathy (i.e., communication strength). The three empathy strategies were emotional reaction, exploration, and interpretation, with corresponding levels of 0, 1, and 2. Pairs labeled as level 0, indicating no expression of empathy, were excluded. The number of pairs for each strategy was as follows: "emotional reaction"=1,047 and "interpretation"=1,436. We randomly sampled 500 pairs each from the emotional reaction and interpretation data to balance the number of pairs between strategies. Each strategy's final number of pairs was emotional reaction=500, exploration=480, and interpretation=500.
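A minimal sketch of this balancing step (not from the paper), assuming hypothetical column names for the EPITOME annotations:

```python
import pandas as pd

df = pd.read_csv("epitome_pairs.csv")  # hypothetical file of (post, response) pairs

# Exclude pairs labeled level 0 (no expressed empathy).
df = df[df["level"] > 0]

# Down-sample the two larger strategies to 500 pairs each;
# exploration (480 pairs) is kept as-is.
balanced = pd.concat([
    df[df["strategy"] == "emotional_reaction"].sample(n=500, random_state=0),
    df[df["strategy"] == "exploration"],
    df[df["strategy"] == "interpretation"].sample(n=500, random_state=0),
])
```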
# 5.1. Model Performances
Table 2 and Figure 2 show the performance of the empathetic strategy classification of LLMs with each CoE prompt, measured in terms of precision, recall, F1 score, and accuracy. Upon generating a response, each model with CoE prompts predicted which empathy strategy is most suitable for each seeker's post among the three strategies. We compared the predicted empathy strategy with the ground truth and calculated strategy prediction accuracy. | 2311.04915#10 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 11 | To address the above challenges, we present CLLM4Rec, the first method that tightly combines the ID paradigm of RS with the LLM-based paradigm to address the semantic gap. We first extend the vocabulary of pretrained LLMs with user/item ID tokens to faithfully model the user/item collaborative/content semantics, where the embeddings are learned in two stages. The pretraining stage consists of mutually-regularized collaborative and content LLMs that learn user/item token embeddings via language modeling on RS-specific corpora established from user/item interactions and textual features. Specifically, a novel "soft+hard" prompting strategy is proposed for effective language modeling on documents with heterogeneous tokens, where each document is decomposed into a prompt consisting of user/item (soft [31]) and vocab (hard) tokens that describe the contexts and a main text consisting of homogeneous item tokens (i.e., interaction history) or vocab tokens (i.e., user/item textual features), respectively. Through this strategy, the prediction heads for the two LLMs can focus exclusively on collaborative and content information, and the stability and effectiveness of language modeling can be substantially enhanced. In addition, a stochastic reordering strategy is proposed for the collaborative LLM to ignore the order of item tokens without negative influence on the vocab tokens. Finally, we propose a novel recommendation-oriented | 2311.01343#11 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 11 | r_i = arg sort_i (s_1, . . . , s_n)
3.2 Prompting LLMs for Ranking Tasks
Recent studies have explored the potential of using Large Language Models (LLMs) for the re-ranking task. Diverse prompting strategies have been explored. Based on the type of instruction employed, existing strategies can be categorized into three types: (1) pointwise ranking, (2) pairwise ranking, and (3) listwise ranking (Wu et al., 2023; Zhu et al., 2023).
Pointwise Ranking assigns an independent score to each item d_i, subsequently ranking the set D based on these scores. A prevalent pointwise prompting approach for LLMs is instructional relevance generation, which is exemplified in HELM (Liang et al., 2022). In this approach, LLMs are prompted to output either "Yes" or "No" to determine the relevance of the candidates to a given query. The generation probability is then converted to the relevance score:
s_i = 1 + f(Yes | I_RG(q, d_i)),  if output Yes
s_i = 1 - f(No | I_RG(q, d_i)),   if output No      (2)
Here f (·) represents the large language model, and I_RG denotes the relevance generation instruction that converts the input q and d_i into the text-based prompt.
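A minimal sketch of this scoring rule (Eq. 2), assuming a hypothetical `llm` helper that returns the generated token together with the probabilities of "Yes" and "No" at that position:

```python
def relevance_score(llm, query: str, doc: str) -> float:
    prompt = (
        f"Passage: {doc}\nQuery: {query}\n"
        "Does the passage answer the query? Answer Yes or No."
    )
    token, p_yes, p_no = llm(prompt)  # hypothetical interface
    # Calibrate Yes/No outputs onto one scale: Yes scores fall in [1, 2]
    # and No scores in [0, 1], so any Yes outranks any No.
    return 1.0 + p_yes if token == "Yes" else 1.0 - p_no
```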
A related pointwise approach is instructional query generation (Sachan et al., 2022a), which scores each candidate by the average log-likelihood of generating the query given the passage p_i:

s_i = (1/|q|) Σ_t log p(q_t | q_{<t}, p_i, I_query)   (3) | 2311.01555#11 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 11 | retrieval (e.g., "No Empathy Strategy"). In addition, they sometimes predicted new strategies that did not fall into any of the predefined three strategies (e.g., "Reflection," "Validation: acknowledging the client's feelings and experiences," and "Approval: expressing approval or positive reinforcement to the client").
# 6. Qualitative Evaluations
The LLM generally generated courteous and comprehensive responses. While many human peer supporters often provided brief comments and shared personal opinions or gave advice, the CoE LLM mostly responded with at least two empathetic strategies and frequently suggested seeking professional help. The model tended to initiate responses by interpreting the user's current state, followed by advice or exploration of potential options. For example, when a distressed seeker could not control her anxiety after a violent fight between her parents, the DBT-CoE prompt responded with multiple empathetic strategies: "I'm so sorry you had to witness that. It's understandable that you're
Outputs with errors in the predicted strategy names were excluded from the analysis. Most of these errors resulted from the nature of the LLM as a generative model, which behaves differently from traditional supervised learning models on classification tasks. Despite explicit instructions, the models occasionally generated "noise" output and predicted strategies that were not among the provided options. These errors included failed predictions or response | 2311.04915#11 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 12 | Yaochen Zhu*,1, Liang Wu2, Qi Guo2, Liangjie Hong2, Jundong Li1
finetuning strategy for CLLM4Rec, where an item prediction head with multinomial likelihood is added to the pretrained collaborative LLM backbone to predict hold-out items based on soft+hard prompts established from masked users' interaction history, where recommendations of multiple items can be generated efficiently. The contribution of this paper can be concretely summarized as: | 2311.01343#12 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 12 | s_i = (1/|q|) Σ_t log p(q_t | q_{<t}, p_i, I_query)   (3)
Pairwise Ranking is employed by PRP (Qin et al., 2023). In this technique, both the query and a pair of candidate items serve as prompts, guiding the LLMs in ranking tasks. For every pair of items d_i and d_j, a specific pairwise comparison instruction, denoted by I_PRP, is employed to instruct the LLMs, i.e., f (·), to determine which item is more relevant to the given query. This can be formalized as:
c_{i,j} = 1,    if f(I_PRP(q, d_i, d_j)) = i
c_{i,j} = 0,    if f(I_PRP(q, d_i, d_j)) = j
c_{i,j} = 0.5,  otherwise                            (4)
Here, c_{i,j} denotes the LLM's choice. Considering that LLMs may exhibit sensitivity to the order of text in the prompt, for every pair d_i and d_j, PRP consults the LLM twice, inverting their order between I_PRP(q, d_i, d_j) and I_PRP(q, d_j, d_i). Subsequently, to compute the relevance score of the i-th candidate d_i, PRP compares d_i against all other candidates in the set D:
s_i = Σ_{j≠i} ( c_{i,j} + (1 - c_{j,i}) )   (5) | 2311.01555#12 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 12 | [Figure 2: five panels (Base, CBT CoE, DBT CoE, PCT CoE, RT CoE), each plotting F1-score against empathy strategy (emotion reactions, interpretation, exploration).]
Figure 2: Empathic expression strategy classification accuracy per prompting condition. Compared to the Base condition, CBT-CoE produced the most balanced set of empathy expressions but fewer emotional reactions than the other CoEs.
feeling overwhelmed and scared right now. It's not okay for anyone to threaten or hurt another person, and it's not your fault. How can I support you right now?". This contradicts the original human response in the benchmark data: "Everything is wrong with people."
# 7. Conclusions | 2311.04915#12 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 13 | • We present CLLM4Rec, the first framework that tightly couples the ID paradigm and LLM paradigm of RS, where the encoded knowledge and reasoning ability of LLMs can be fully utilized, while user/item ID token embeddings aligned to the vocab space can well capture intrinsic user interests and item properties.
• A novel soft+hard prompting strategy is proposed to pretrain the LLMs on sequences of heterogeneous tokens describing user historical interactions and user/item features via language modeling, where the collaborative and content information can be effectively learned by the user/item token embeddings.
• A mutual-regularization strategy is proposed to constrain the CLLM4Rec to learn information more relevant for recommendations from user/item content. In addition, stochastic reordering is proposed such that the order of item tokens can be ignored by the collaborative LLM without influence on the textual parts.
• A recommendation-oriented finetuning strategy is proposed for CLLM4Rec, where an item prediction head with multinomial likelihood is added on the collaborative LLM that predicts hold-out items based on prompt interaction history, where recommendations for multiple items can be generated efficiently.
# 2 RELATED WORK
# 2.1 Large Language Model (LLM) Basics | 2311.01343#13 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 13 | s_i = Σ_{j≠i} ( c_{i,j} + (1 - c_{j,i}) )   (5)
The final relevance score aggregates all comparison results.
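A minimal sketch of this aggregation (Eqs. 4-5), assuming a hypothetical `prp(query, d_i, d_j)` call that returns the index of the document the LLM judges more relevant, or None when its output is unclear:

```python
def prp_scores(prp, query: str, docs: list) -> list:
    n = len(docs)
    c = [[0.5] * n for _ in range(n)]                # c[i][j]; 0.5 = tie/unclear
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            choice = prp(query, docs[i], docs[j])    # prompt order (d_i, d_j)
            c[i][j] = 1.0 if choice == i else 0.0 if choice == j else 0.5
    # Eq. (5): both prompt orders contribute to each candidate's score.
    return [sum(c[i][j] + (1 - c[j][i]) for j in range(n) if j != i)
            for i in range(n)]
```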
Listwise Ranking has been adopted by Sun et al. (2023c); Ma et al. (2023). This approach involves feeding a set of items into the LLMs, where each item is identified by a unique identifier (e.g., [1], [2], etc.). The LLMs are then instructed to generate a permutation of these items, such as "[2] > [3] > [1] > . . . ":
Perm = f(I_List(q, d_1, d_2, . . . , d_n))   (6)
Table 1: Computational complexity of different instruction methods. n is the number of items to be ranked. k is a constant related to the sliding window method.
Instruction         Complexity   Examples
Pointwise Ranking   O(n)         (Liang et al., 2022; Sachan et al., 2022a)
Pairwise Ranking    O(n²)        (Qin et al., 2023)
Listwise Ranking    O(k · n)     (Sun et al., 2023c; Ma et al., 2023) | 2311.01555#13 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 13 | # 7. Conclusions
In summary, we developed CoE reasoning prompts for generating empathetic responses based on psychotherapy models, and we compared their performance on empathetic strategy classification. Our findings revealed that LLMs without reasoning showed a significant preference for the exploration strategy, with interpretation being the least preferred strategy. Although all reasoning prompts generated responses most strongly associated with exploration, they differed from the base prompt by generating interpretation to a certain extent. Intriguingly, the CBT-CoE generated the highest number of interpretation responses. This pattern might reflect CBT's inherent approach of clarifying cognitive errors to clients. These findings highlight the importance of incorporating
context-specific therapeutic interactions with generative AIs.
# 8. Limitations and Suggestions
We acknowledge several limitations that should be considered in future research and development. First, we did not employ more extensive evaluative criteria for empathy, especially scales validated in the psychology literature, such as the Interpersonal Reactivity Index (Davis, 1980; Davis, 1983). Future studies should consider evaluating LLMs' communication using these established scales for validity and reproducibility. | 2311.04915#13 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 14 | Transformers with billions of parameters trained on large corpora, i.e., large language models (LLMs), have demonstrated an unprecedented understanding of natural language and good logical reasoning ability based on factual knowledge [9]. Based on the part of the transformer utilized for language modeling, existing LLMs can be categorized into three classes: encoder-only LLMs, such as BERT [32], encoder-decoder-based LLMs, such as T5 [12], and decoder-only LLMs, such as GPT [11] and LlaMA [13], etc. We focus on LLMs with decoders due to their superior generative abilities compared with the encoder-only models [33]. The training of LLMs is mainly based on two stages. In the pretraining stage, LLMs are trained on large corpora such as website content, Wikipedia, ArXiv papers, and GitHub code via language modeling (i.e., next/masked token prediction), where knowledge in the corpus can be effectively encoded in the weights of the transformer network facilitated by the stacked self-attention modules. Then, during the finetuning stage, exemplar prompt-output pairs (such as questions and answers) or human feedback on multiple generated answers are provided to the LLMs such that they can conduct logical reasoning and generate answers based on the encoded knowledge from the pretrained stage. | 2311.01343#14 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
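The two-stage training recipe described in the chunk above can be made concrete. Below is a minimal, hedged PyTorch sketch (our own illustration, not the paper's code): `next_token_loss` is the pretraining objective, and `finetune_loss` shows how exemplar prompt-output pairs are used, with prompt positions masked out so only answer tokens are predicted.

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits, token_ids):
    """Pretraining: causal language modeling, i.e., next-token prediction.

    logits:    (batch, seq_len, vocab) from a decoder-only LLM
    token_ids: (batch, seq_len) input token ids
    """
    pred = logits[:, :-1, :].reshape(-1, logits.size(-1))  # positions 1..k
    target = token_ids[:, 1:].reshape(-1)                  # tokens 2..k+1
    return F.cross_entropy(pred, target)

def finetune_loss(logits, token_ids, prompt_len):
    """Finetuning on (prompt, answer) pairs: the prompt is context only,
    so its positions are excluded from the loss via ignore_index."""
    target = token_ids[:, 1:].clone()
    target[:, : prompt_len - 1] = -100                     # mask prompt tokens
    return F.cross_entropy(
        logits[:, :-1, :].reshape(-1, logits.size(-1)),
        target.reshape(-1),
        ignore_index=-100,
    )
```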
2311.01555 | 14 | This generated permutation Perm can be readily transformed into ranking results R, which bypasses the necessity to compute an explicit relevance score, $s_i$, for each candidate $d_i$. To ensure consistency in notation with scoring-based methodologies, the relevance score $s_i$ is defined as the reciprocal of its rank: $s_i := 1/r_i$ (a small sketch of this conversion follows this record).
3.3 Computational Complexity of Different Instructions.
Different ranking instructions offer various trade-offs in terms of efficiency and effectiveness. A summary of these instructions is listed in Table 1. Among these, the pointwise ranking is computationally the most efficient, having a complexity of O(N). Nevertheless, this approach requires the model to yield a calibrated pointwise score, a feat which is notably challenging.
In contrast, the pairwise ranking paradigm resolves the calibration issue by engaging in one-to-one pairwise comparisons. This solution, however, elevates the computational complexity to O(N²). To tackle this, Qin et al. (2023) propose two methods to curtail the pairwise ranking's complexity: sorting and the sliding window technique. While promising, these methods are still in their nascent stages, proving challenging to stabilize and parallelize.
On another note, listwise ranking demonstrates good performance when tested on commercial and also proprietary LLMs, such as GPT-4. However, it performs poorly on smaller, open-source models. A possible reason could be the inferior comprehension of instructions in these open-source counterparts. | 2311.01555#14 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
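As a concrete illustration of the reciprocal-rank convention $s_i := 1/r_i$ in the chunk above, the short sketch below (our own, not the paper's code) converts a generated permutation into per-candidate relevance scores.

```python
def permutation_to_scores(perm):
    """Convert a generated permutation into relevance scores s_i = 1 / r_i.

    perm: 0-based candidate indices in ranked order, best first;
          e.g. [2, 0, 1] means d3 > d1 > d2.
    """
    scores = [0.0] * len(perm)
    for rank, idx in enumerate(perm, start=1):
        scores[idx] = 1.0 / rank
    return scores

# d3 is ranked first (s = 1), d1 second (s = 1/2), d2 third (s = 1/3).
assert permutation_to_scores([2, 0, 1]) == [1.0 / 2.0, 1.0 / 3.0, 1.0]
```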
2311.04915 | 14 | Our evaluation focused solely on the empathic accuracy of the LLMs and did not measure user perception. User perception of empathetic expression varies depending on whether users interact with humans or artificially intelligent systems (Medeiros et al., 2021). Furthermore, people perceive and react differently to AIs' empathetic expressions (Urakami et al., 2019). Thus, future works should investigate how users perceive and
respond to the models' empathetic responses to enhance our understanding of the efficacy of LLMs' empathetic expressions.
For quantitative evaluation, we used a single LLM model (GPT-3.5) and one domain, mental health. Incorporating a diverse text corpus and motivational interviewing (Miller and Rollnick, 2012) could enable LLMs to produce more personalized communication. This presents an opportunity for future research to encompass a wider array of topics and conversational styles, thereby increasing the reliability of LLMs' performance. Additionally, different LLMs may excel in varied capabilities, with each leading in specific tasks (Sivarajkumar et al., 2023). Investigating and assessing the empathetic expressions generated by different LLMs is crucial for a comprehensive evaluation of LLMs' ability to discern human emotions and craft appropriate, empathetic responses.
# 9. Ethical Considerations | 2311.04915#14 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 15 | # 2.2 LLM in Recommender Systems
Recently, LLM-based RSs have attracted extensive attention from both academia and industry, as they promise to address the long-standing issues of traditional ID-based RSs, such as shallow textual information understanding, poor generalization, etc. [34, 35]. Hou et al. showed that existing LLMs can be viewed as zero-shot rankers,
| 2311.01343#15 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 15 | In summary, each ranking method comes with its own set of pros and cons: the pointwise approach is efficient but may not be highly effective; the pairwise method is effective but computationally demanding; and the listwise method is most effective but limited to closed-source LLMs like GPT-4. These insights set the stage for our novel solution, the instruction distillation strategy, which we will introduce in the next section.
An overview of the proposed instruction distillation approach is presented. Instruction distillation distills the abilities obtained from complex instruction techniques (e.g., pairwise ranking) into a model that is more efficient with simple instruction techniques (e.g., pointwise ranking).
3.4 Instruction Distillation
The key idea of Instruction Distillation is to distill the ability obtained from the complex but effective instruction technique (e.g., pairwise ranking instruction) into a model that is more efficient with the simple instruction technique (e.g., pointwise ranking instruction). Figure 2 shows an overview of the proposed instruction distillation approach. We denote the sources of relevance scores or ranking results with superscripts t and s for the teacher instruction and the simplified student instruction, respectively. Our method unfolds in three stages: (1) Candidate generation, (2) Teacher inference, and (3) Student learning. (A toy sketch of the teacher-inference stage follows this record.) | 2311.01555#15 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
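A runnable toy sketch of the teacher-inference stage described above: all n(n − 1) ordered candidate tuples are judged pairwise and the wins are aggregated into ranks. Here `prefers` is a hypothetical stand-in for the LLM under the pairwise instruction (we substitute simple word overlap so the example runs).

```python
from itertools import permutations

def teacher_ranks(query, candidates, prefers):
    """Aggregate pairwise judgments over all n(n-1) ordered tuples into
    ranks r_i^t (1 = best). `prefers(q, d_i, d_j)` stands in for the
    teacher LLM judging d_i more relevant than d_j."""
    n = len(candidates)
    wins = [0] * n
    for i, j in permutations(range(n), 2):
        if prefers(query, candidates[i], candidates[j]):
            wins[i] += 1
    order = sorted(range(n), key=lambda i: -wins[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

# Toy judge: more query-word overlap wins (the real judge is the LLM).
def overlap(q, a, b):
    qs = set(q.split())
    return len(qs & set(a.split())) > len(qs & set(b.split()))

docs = ["cats purr softly", "dogs bark", "cats nap"]
print(teacher_ranks("cats purr", docs, overlap))  # [1, 3, 2]
```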
2311.04915 | 15 | # 9. Ethical Considerations
The expanding use of large language models (LLMs), especially within mental healthcare, calls for thoughtful ethical engagement. As these models advance in generating responses that mirror human counselors, it is imperative we closely examine their impact on users, particularly those navigating mental health challenges.
# References
Ahn, Y., Zhang, Y., Park, Y., & Lee, J. (2020). A chatbot solution to chat app problems: Envisioning a chatbot counseling system for teenage victims of online sexual exploitation. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1â7).
American Psychiatric Association, American Psychiatric Association, (1994). Diagnostic and statistical manual of mental disorders: DSM-IV, volume 4. American Psychiatric Association, Washington, DC.
Anderson, C., & Keltner, D. (2002). The role of empathy in the formation and maintenance of social bonds. Behavioral and Brain Sciences, 25(1), 21â22.
Beck, A. T. (1979). Cognitive therapy and the emotional disorders. Penguin.
Bommarito II, M., & Katz, D. M. (2022). GPT takes the bar exam. arXiv preprint arXiv:2212.14402. | 2311.04915#15 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 16 | which can rank the relevance of movies based on user historical interactions and movie descriptions. However, since pretrained LLMs are not aligned with the recommendation task, more efforts have been devoted to the finetuning of LLMs to obtain recommendation-oriented models. An exemplar work is P5 [20], which finetunes T5 with token sequences transformed from interactions and user/item features, where items are represented by pseudo-IDs in the form of "item_i". Afterwards, M6 [19] was proposed, which combines text infilling and auto-regression in the pretraining stage, where pseudo IDs in P5 are completely avoided and replaced by textual descriptions. Recently, TALLRec [36] was proposed where items are represented by both pseudo-IDs and textual descriptions. Pseudo-ID-based item representations can easily introduce spurious correlations between irrelevant items. To address this issue, Hua et al. proposed to introduce a small number of new tokens, where the tokens used to describe the items are determined by their content and collaborative similarity. However, representing items with multiple shared tokens can still introduce bias. In addition, for the above | 2311.01343#16 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 16 | • Candidate generation. Suppose we have a dataset comprising a set of queries Q and a corresponding set of items D. It is worth mentioning that none of the queries require a labeled item. For a query q ∈ Q, an unsupervised retriever (e.g., BM25)
[Figure 2 diagram: query + passages → Flan-T5 under the teacher (pairwise ranking) instruction → RankNet loss → Flan-T5 under the student (pointwise ranking) instruction.]
Figure 2: An overview of the proposed instruction distillation approach. Instruction distillation distills the abilities harvested from complex instruction techniques into a model that is more efficient with simple instruction techniques.
is employed to fetch n potentially relevant candidate samples D = (d_1, d_2, ..., d_n) from the item set D. (A small retrieval sketch follows this record.) | 2311.01555#16 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
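The candidate-generation stage above can be sketched with the `rank_bm25` package (the same BM25Okapi implementation the paper's own setup builds on); the corpus and query below are toy stand-ins.

```python
from rank_bm25 import BM25Okapi

corpus = [
    "cats purr when they are content",
    "dogs bark at strangers",
    "why do cats purr and knead",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

def generate_candidates(query, n=2):
    """Fetch the top-n candidates D = (d_1, ..., d_n) for query q."""
    return bm25.get_top_n(query.lower().split(), corpus, n=n)

print(generate_candidates("why do cats purr"))
```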
2311.04915 | 16 | Bommarito II, M., & Katz, D. M. (2022). GPT takes the bar exam. arXiv preprint arXiv:2212.14402.
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... & Bohg, J. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Sastry, G. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
Buechel, S., Buffone, A., Slaff, B., Ungar, L., & Sedoc, J. (2018). Modeling empathy and distress in reaction to news stories. arXiv preprint arXiv:1808.10399.
Cooper, M., & McLeod, J. (2011). Person-centered therapy: A pluralistic perspective. Person-Centered & Experiential Psychotherapies, 10(3), 210–223.
Davis, M. H. (1980). Interpersonal reactivity index. | 2311.04915#16 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 17 | items are determined by their content and collaborative similarity. However, representing items with multiple shared tokens can still introduce bias. In addition, for the above methods, candidate items need to be explicitly provided in the prompt when conducting direct recommendation, where the size of the candidate pool is limited. Finally, recommendations are generated via autoregression, which is highly inefficient. In summary, the dichotomy between natural language processing and RS still remains to be well addressed. | 2311.01343#17 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 17 | is employed to fetch n potentially relevant candidate samples D = (d_1, d_2, ..., d_n) from the item set D.
• Teacher inference. Then, LLMs with costly pairwise ranking are employed as the teacher models to re-rank the candidate set D = (d_1, d_2, ..., d_n) corresponding to each query q. To adopt the pairwise method, the n items are juxtaposed in pairs, resulting in n(n − 1) ordered tuples (d_i, d_j) where i ≠ j. The model then scores the relevance of d_i and d_j to the given query q using Eq. (5). Based on these scores, each document d_i is assigned a rank r_i^t for every query q.
• Student learning. In this phase, the pointwise ranking model serves as the student. To leverage the ranking lists r_i^t generated by the teacher, we employ the RankNet loss (Burges et al., 2005) to optimize the student model (a PyTorch sketch of this loss follows this record). RankNet is a pairwise loss function that measures the accuracy of relative ordering between items:
$$\mathcal{L} = \sum_{i=1}^{n} \sum_{j=1}^{n} \mathbb{1}_{r_i^t < r_j^t} \log\left(1 + \exp\left(s_i^s - s_j^s\right)\right)$$
Unlike other loss functions that utilize a sparse signal, the RankNet loss offers a richer transfer of ranking information from the teacher to the student. | 2311.01555#17 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
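A hedged PyTorch sketch of the RankNet objective above (our own implementation, not the paper's code). Note the sign convention: assuming higher student scores mean more relevant, a pair is penalized when the teacher-preferred document (smaller r^t) receives the lower score.

```python
import torch

def ranknet_loss(student_scores, teacher_ranks):
    """student_scores: (n,) tensor of s_i^s from the pointwise student.
    teacher_ranks:  (n,) tensor of r_i^t from the pairwise teacher."""
    s, r = student_scores, teacher_ranks
    prefer = r.unsqueeze(1) < r.unsqueeze(0)   # (i, j): teacher prefers d_i
    diff = s.unsqueeze(0) - s.unsqueeze(1)     # entry (i, j) = s_j - s_i
    return torch.log1p(torch.exp(diff))[prefer].sum()

# Sanity check: scores consistent with the teacher give a smaller loss.
good = ranknet_loss(torch.tensor([2.0, 1.0, 0.0]), torch.tensor([1, 2, 3]))
bad = ranknet_loss(torch.tensor([0.0, 1.0, 2.0]), torch.tensor([1, 2, 3]))
assert good < bad
```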
2311.04915 | 17 | Davis, M. H. (1980). Interpersonal reactivity index.
Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology, 44(1), 113.
De Vignemont, F., & Singer, T. (2006). The empathic brain: How, when, and why? Trends in Cognitive Sciences, 10(10), 435–441.
Diehl, J. J., Schmitt, L. M., Villano, M., & Crowell, C. R. (2012). The clinical use of robots for individuals with autism spectrum disorders: A critical review. Research in Autism Spectrum Disorders, 6(1), 249–262.
Eisenberg, N. (2014). Altruistic emotion, cognition, and behavior (PLE: Emotion). Psychology Press.
Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of personality and social psychology, 17(2), 124.
Hall, J. A., & Schwartz, R. (2019). Empathy present and future. The Journal of Social Psychology, 159(3), 225–243.
Hofmann, S. G., Sawyer, A. T., & Fang, A. (2010). The empirical status of the "new wave" of cognitive behavioral therapy. Psychiatric Clinics, 33(3), 701–710. | 2311.04915#17 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01555 | 18 | Unlike other loss functions that utilize a sparse signal, the RankNet loss offers a richer transfer of ranking information from the teacher to the student.
After the instruction distillation process, the pointwise instruction technique is utilized during the inference stage. See Appendix A for more details about the prompts.
# 4 Experimental Setup
In order to comprehensively validate the effectiveness of the proposed method, we conduct experiments on a variety of IR tasks, including both the text-based passage re-ranking task and the item-based conversational recommendation task.
For passage re-ranking, the training data contain 10K queries sampled from the MS MARCO dataset (Campos et al., 2016). Each query is then paired with the top 10 documents retrieved by BM25. The trained models are evaluated on subtasks of the TREC (Craswell et al., 2020) and BEIR (Thakur et al., 2021) benchmarks. NDCG@1, 5, 10 are chosen as the metrics (a small NDCG@k sketch follows this record).
For conversational recommendation, we use the ReDial dataset (Li et al., 2018a), which is a movie recommendation task based on conversation logs between the user and the recommender. The trained models are then evaluated on the official test set. For this setting, Acc@1 is adopted as the metric.
4.1 Datasets | 2311.01555#18 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
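Since NDCG@k is the headline metric above, a minimal sketch of one common variant (linear gains; the official TREC tooling may differ) is given below.

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k for one query; `relevances` are graded labels of the
    returned documents in ranked order."""
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Graded judgments of a ranked top-5 list.
print(round(ndcg_at_k([3, 2, 3, 0, 1], k=5), 4))
```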
2311.04915 | 18 | Kaczkurkin, A. N., & Foa, E. B. (2022). Cognitive-behavioral therapy for anxiety disorders: An update on the empirical evidence. Dialogues in Clinical Neuroscience.
Knutson, D., & Koch, J. M. (2022). Person-centered therapy as applied to work with transgender and gender diverse clients. Journal of Humanistic Psychology, 62(1), 104–122.
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916.
Lazarus, R. S. (1991). Emotion and adaptation. Oxford University Press.
Linehan, M. M. (1987). Dialectical behavioral therapy: A cognitive behavioral approach to parasuicide. Journal of Personality Disorders, 1(4), 328–333.
Medeiros, L., Bosse, T., & Gerritsen, C. (2021). Can a chatbot comfort humans? Studying the impact of a supportive chatbot on users' self-perceived stress. IEEE Transactions on Human-Machine Systems, 52(3), 343–353.
Miller, W. R., & Rollnick, S. (2012). Motivational interviewing: Helping people change. Guilford Press. | 2311.04915#18 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 19 | In this paper, we focus on recommendations with implicit feedback [37]. Consider a system of $I$ users and $J$ items. We use a binary rating vector $\mathbf{r}_i \in \{0, 1\}^J$ to denote whether user $i$ has interacted with the $J$ items. In addition, we use $\mathbf{x}^u_i$, $\mathbf{x}^v_j$ to denote the textual features associated with user $i$ and item $j$, such as user biography and item content, etc. $\mathbf{x}^{uv}_{ij}$ denotes the textual features associated with both user $i$ and item $j$, such as user $i$'s review for item $j$. Hereafter, we take a sequential view of $\mathbf{x}^{\{u,v,uv\}}_{\{i,j,ij\}}$, where $\mathbf{x}^{\{u,v,uv\}}_{\{i,j,ij\},k}$ is a size-$N$ one-hot vector denoting the $k$th token in the textual sequence². (A small NumPy sketch of these objects follows this record.) In addition, we have a pretrained large language model (LLM), of which we take a | 2311.01343#19 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
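The notation in the chunk above can be grounded with a tiny NumPy sketch (toy sizes, our own illustration): a binary implicit-feedback vector $\mathbf{r}_i \in \{0,1\}^J$ and the one-hot view of a token sequence.

```python
import numpy as np

J, N = 6, 10   # toy number of items J and vocabulary size N

def rating_vector(interacted_items, num_items=J):
    """r_i in {0,1}^J: entry j is 1 iff user i interacted with item j."""
    r = np.zeros(num_items, dtype=np.int8)
    r[list(interacted_items)] = 1
    return r

def one_hot_sequence(token_ids, vocab_size=N):
    """Row k is the size-N one-hot vector of the k-th token."""
    x = np.zeros((len(token_ids), vocab_size), dtype=np.int8)
    x[np.arange(len(token_ids)), token_ids] = 1
    return x

print(rating_vector({0, 3, 4}))           # [1 0 0 1 1 0]
print(one_hot_sequence([2, 5, 2]).shape)  # (3, 10)
```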
2311.01555 | 19 | 4.1 Datasets
TREC (Campos et al., 2016) is a widely used benchmark dataset in IR research. We use the test sets of the 2019 and 2020 competitions. TREC-DL19 and TREC-DL20 are both derived
from MS MARCO datasets with human-generated labels. Each query is paired with 100 documents retrieved by BM25. They share the same format. TREC-DL19 contains 43 test queries, and TREC-DL20 contains 54 test queries.
BEIR (Thakur et al., 2021) consists of diverse retrieval tasks and domains. We choose eight tasks in BEIR to evaluate the models: (1) Covid retrieves scientific articles for COVID-19 related questions. (2) NFCorpus is a bio-medical IR dataset. (3) Touche is an argument retrieval dataset. (4) DBPedia retrieves entities from the DBpedia corpus. (5) SciFact retrieves evidence for claims verification. (6) Signal retrieves relevant tweets for a given news title. (7) News retrieves relevant news articles for news headlines. (8) Robust04 evaluates poorly performing topics. The evaluation results are averaged over the eight datasets.
Redial (Recommendation Dialogues) (Li et al., 2018b) is an annotated conversational movie recommendation dataset, where users recommend movies to each other.
4.2 Baselines | 2311.01555#19 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 19 | Miller, W. R., & Rollnick, S. (2012). Motivational interviewing: Helping people change. Guilford Press.
Neff, K. (2003). Self-compassion: An alternative conceptualization of a healthy attitude toward oneself. Self and Identity, 2(2), 85–101.
Nori, H., King, N., McKinney, S. M., Carignan, D., & Horvitz, E. (2023). Capabilities of gpt-4 on medical challenge problems. arXiv preprint arXiv:2303.13375.
Nwosu, A., Boardman, S., Husain, M. M., & Doraiswamy, P. M. (2022). Digital therapeutics for mental health: Is attrition the Achilles heel? Frontiers in Psychiatry, 1598.
Prystawski, B., Thibodeau, P., & Goodman, N. (2022). Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models. arXiv preprint arXiv:2209.08141. | 2311.04915#19 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 20 | one-hot vector denoting the $k$th token in the textual sequence². In addition, we have a pretrained large language model (LLM), of which we take a probabilistic view and denote it as $p_{llm}(\mathbf{x}_{k+1}|\mathbf{x}_{1:k})$, via which $L$ stacked self-attention modules $trm(\mathbf{x}_{1:k})$ transform $\mathbf{x}_{1:k}$ into a latent sequence $\mathbf{h}^{(L)}_{1:k} \in \mathbb{R}^{k \times K'}$ and map $\mathbf{h}^{(L)}_k$ to the probability space of the next token $\mathbf{x}_{k+1}$. (A toy sketch of this mapping follows this record.) Since the LLM is pretrained on large corpora and finetuned on exemplar prompt-answer pairs, the generation is based on logical reasoning with the context information in $\mathbf{x}_{1:k}$ according to its pretrained knowledge. Our aim is to design a new RS that tightly couples the LLM with the recommendation task by introducing user/item ID tokens (and token embeddings), such that user/item semantics (e.g., user interests in items) can be accurately modeled for effective and efficient recommendation whereas | 2311.01343#20 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
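A toy sketch of the probabilistic view $p_{llm}(\mathbf{x}_{k+1}|\mathbf{x}_{1:k})$ above: the last latent state $\mathbf{h}^{(L)}_k$ produced by the stacked self-attention modules is projected onto the vocabulary and normalized (the sizes and the linear head are illustrative assumptions, not the paper's code).

```python
import torch

K_prime, N = 16, 100                       # hidden size K' and vocab size N
lm_head = torch.nn.Linear(K_prime, N, bias=False)

def next_token_distribution(h_last_layer):
    """h_last_layer: (k, K') latent sequence h^(L)_{1:k} from trm(x_{1:k}).
    Returns p(x_{k+1} | x_{1:k}) over the N-token vocabulary."""
    h_k = h_last_layer[-1]                 # h^(L)_k, the final position
    return torch.softmax(lm_head(h_k), dim=-1)

h = torch.randn(5, K_prime)                # pretend trm output for k = 5
p = next_token_distribution(h)
print(p.shape, float(p.sum()))             # torch.Size([100]) ~1.0
```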
2311.01555 | 20 | Redial (Recommendation Dialogues) (Li et al., 2018b) is an annotated conversational movie recommendation dataset, where users recommend movies to each other.
4.2 Baselines
To compare our method with existing unsupervised and supervised methods, we choose the following widely applied methods:
• BM25 is an unsupervised retrieval method based on weighted term frequency. It is one of the most commonly adopted retrieval methods.
• RankGPT (Sun et al., 2023c) is a listwise permutation generation approach based on gpt-3.5-turbo and gpt-4.
• Relevance Generation (Sachan et al., 2022a) is a pointwise ranking method based on FLAN-T5.
• PRP (Qin et al., 2023) is a pairwise ranking method based on FLAN-T5.
• MonoT5 (Sachan et al., 2022b) is a pointwise ranking method based on T5 models and is trained with supervision on MS MARCO.
• Cohere Rerank is a commercial text ranking system developed by Cohere2.
4.3 Implementation Details | 2311.01555#20 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 20 | Rashkin, H., Smith, E. M., Li, M., & Boureau, Y-L. (2019). Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 5370–5381). Association for Computational Linguistics.
Rasouli, S., Gupta, G., Nilsen, E., & Dautenhahn, K. (2022). Potential applications of social robots in robot-assisted interventions for social anxiety. International Journal of Social Robotics, 14(5), 1–32.
Roller, S., Dinan, E., Goyal, N., Ju, D., Williamson, M., Liu, Y., Xu, J., Ott, M., Smith, E. M., Boureau, Y-L., & Weston, J. (2021). Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (pp. 300–325). Association for Computational Linguistics.
Scherer, K. R., Banse, R., & Wallbott, H. G. (2001). Emotion inferences from vocal expression correlate across languages and cultures. Journal of Cross-Cultural Psychology, 32(1), 76–92. | 2311.04915#20 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01555 | 21 | ⢠Cohere Rerank is a commercial text ranking system developed by Cohere2.
4.3 Implementation Details
Passage Re-Ranking Task. Following Sun et al. (2023c), we sample 10K queries from the MS MARCO training set. Utilizing BM25 as the candidate generator, we retrieve 10 passages for each query. Our BM25 implementation is derived from BM25Okapi as presented in RankBM25 (Trotman et al., 2014). Prior to retrieval, we ensure that stopwords are eliminated. In implementing the pairwise prompting strategy, each query's 10 passages are juxtaposed in pairs, leading to the generation of 90 ordered passage pairs. The teacher models are instructed to determine which document is more relevant to the query and subsequently produce the ranking results. The results are then used as the pseudo labels for pointwise instruction distillation. To harness the full potential of the ranking outcomes, we employ RankNet (Burges et al., 2005).
Conversational Recommendation Task. For this task, we use the dialogue history as the query, the descriptions of movies as documents, and employ BM25 to fetch the top-5 movies into the candidate pool. Furthermore, following Hou et al. (2023), an additional 4 popular movies are incorporated into the candidate pool3. This is done to simulate the inherent feature of popularity bias in recommendations (Chen et al., 2023). (A small sketch of this pool construction follows this record.) | 2311.01555#21 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
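A sketch of the conversational-recommendation candidate pool described in the 2311.01555 chunk above: BM25 top-5 movies by dialogue history plus 4 popularity-weighted movies. The helper name and the sampling-with-replacement shortcut are assumptions, not the paper's code:

```python
import random
from rank_bm25 import BM25Okapi  # same BM25 implementation the paper cites

def build_candidate_pool(dialog_history, movie_descs, mention_counts,
                         k_bm25=5, k_pop=4):
    titles = list(movie_descs)
    bm25 = BM25Okapi([movie_descs[t].split() for t in titles])
    scores = bm25.get_scores(dialog_history.split())
    order = sorted(range(len(titles)), key=lambda i: -scores[i])
    top = [titles[i] for i in order[:k_bm25]]
    # Movies mentioned more than 200 times in the training set count as
    # popular; selection probability is proportional to their share of mentions.
    popular = {t: c for t, c in mention_counts.items() if c > 200 and t not in top}
    sampled = random.choices(list(popular), weights=list(popular.values()), k=k_pop)
    return top + sampled
```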
Sharma, A., Miner, A., Atkins, D., & Althoff, T. (2020). A computational approach to understanding empathy expressed in text-based mental health support. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 5263–5276). Association for Computational Linguistics.
Sivarajkumar, S., Kelley, M., Samolyk-Mazzanti, A., Visweswaran, S., & Wang, Y. (2023). An empirical evaluation of prompting strategies for large language models in zero-shot clinical natural language processing.
Taori, R., Gulrajani, I., Zhang, T., Dubois, Y., Li, X., Guestrin, C., Liang, P., & Hashimoto, T. B. (2023). Alpaca: A strong, replicable instruction-following model. Stanford Center for Research on Foundation Models. [Online]. Available at: https://crfm.stanford.edu/2023/03/13/alpaca.html | 2311.04915#21 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 22 | # 3.2 Extension of User/Item Tokens
3.2.1 Vocab Expansion. To tightly couple the pretrained LLM with the recommendation task, we first expand the vocabulary of the LLM by adding user/item ID tokens to describe the intrinsic user/item semantics, such that the semantic gap between RS and natural language can be well bridged. We use bracket notations "<user_i>" and "<item_j>" to denote the newly-introduced tokens for the i-th user and the j-th item, respectively, which have token IDs N + i and N + I + j (for N vocab tokens, I users, and J items) and will not be broken down into atomic tokens.
² We use u and v in the superscript to distinguish user- or item-related variables.
Figure 2: The overview of the proposed CLLM4Rec in the mutually-regularized pretraining stage. Mutual regularization of item_k is omitted for simplicity. | 2311.01343#22 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
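A minimal sketch of the vocabulary expansion described in the 2311.01343 chunk above, assuming a Hugging Face GPT-2 backbone (the backbone choice and the gradient-masking trick are illustrative; the chunk only requires that <user_i>/<item_j> become atomic tokens with IDs N + i and N + I + j while the pretrained weights stay frozen):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def expand_user_item_vocab(model, tokenizer, num_users, num_items):
    n_vocab = len(tokenizer)  # N: original word tokens
    tokenizer.add_tokens([f"<user_{i}>" for i in range(num_users)] +
                         [f"<item_{j}>" for j in range(num_items)])
    model.resize_token_embeddings(len(tokenizer))  # rows N .. N+I+J-1 are new

    # Freeze the whole backbone, then let gradients flow only into the
    # newly added user/item embedding rows.
    for p in model.parameters():
        p.requires_grad = False
    emb = model.get_input_embeddings().weight
    emb.requires_grad = True
    mask = torch.zeros_like(emb)
    mask[n_vocab:] = 1.0
    emb.register_hook(lambda g: g * mask)  # zero gradients on pretrained rows
    return model, tokenizer

model, tok = expand_user_item_vocab(GPT2LMHeadModel.from_pretrained("gpt2"),
                                    GPT2Tokenizer.from_pretrained("gpt2"),
                                    num_users=100, num_items=500)
```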
2311.01555 | 22 | Training Details. Throughout the training phase, we employ the AdamW optimizer with a consistent learning rate of 3e-5. We constrain the maximum input length to 512 tokens. The
² https://cohere.com/rerank
³ The criterion for determining a movie's popularity is based on its frequency of mentions throughout the training dataset. Movies cited more than 200 times are classified as popular. The likelihood of selecting a popular movie is proportional to its representation in the overall popularity.
Table 2: Results on TREC-DL19 and TREC-DL20 by re-ranking top-100 passages retrieved by BM25. Sec/Q indicates the average time in seconds to re-rank 100 passages for a query. Best performing unsupervised and overall system(s) are marked bold. | 2311.01555#22 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
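A sketch of the training configuration from the 2311.01555 chunks above (AdamW at 3e-5, 512-token inputs, batch size 32, up to 3 epochs on FLAN-T5). The plain seq2seq loss here is a stand-in; the actual objective is the RankNet distillation over teacher pseudo-labels:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
MAX_LEN, BATCH_SIZE, EPOCHS = 512, 32, 3

def training_step(prompts, targets):
    inputs = tokenizer(prompts, truncation=True, max_length=MAX_LEN,
                       padding=True, return_tensors="pt")
    labels = tokenizer(targets, padding=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```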
2311.04915 | 22 | Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M-A., Lacroix, T., Roziere, B., Goyal, N., Hambro, E., Azhar, F., et al. (2023). Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Truax, C. B., & Carkhuff, R. (2007). Toward effective counseling and psychotherapy: Training and practice. Transaction Publishers.
Urakami, J., Moore, B. A., Sutthithatip, S., & Park, S. (2019). Users' perception of empathic expressions by an advanced intelligent system. In Proceedings of the 7th International Conference on Human-Agent Interaction (pp. 11â18).
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E., Le, Q., & Zhou, D. (2022). Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.
Wondra, J. D., & Ellsworth, P. C. (2015). An appraisal theory of empathy and other vicarious emotional experiences. Psychological review, 122(3), 411. | 2311.04915#22 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 23 | 3.2.2 Token Embeddings. For LLMs to understand the tokens, they must first be transformed into dense embeddings. Accordingly, we use z_n^t ∈ R^K to represent the pretrained embedding of the n-th vocab token. In addition, for the newly-introduced user/item tokens, we introduce two types of embeddings to represent user/item collaborative and content semantics. Specifically, to align the user/item tokens with the vocab space of the pretrained LLM, we sample the user/item collaborative token embeddings from the same size-K latent space as follows:
z_i^{r,u}, z_j^{r,v} ∼ N(0, λ_r^{-1} · I_K),   (1)
where λ_r is the prior precision for z_i^{r,u} and z_j^{r,v}. Importantly, to align the content semantics with the collaborative semantics for more recommendation-oriented content modeling, we sample the user/item content token embeddings from the following conditional prior: | 2311.01343#23 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
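A small numerical sketch of the two priors in eqs. (1)-(2) of the 2311.01343 chunks above; λ_r and λ_c are hyperparameters, and the closing comment gives the usual MAP reading of these Gaussians:

```python
import torch

K = 768                          # embedding size of the backbone LLM
lambda_r, lambda_c = 1.0, 10.0   # prior precisions (illustrative values)

def sample_collaborative(n):
    # z^{r,.} ~ N(0, lambda_r^{-1} I_K): one row per user or item
    return torch.randn(n, K) / lambda_r ** 0.5

def sample_content(z_collab):
    # z^{c,.} ~ N(z^{r,.}, lambda_c^{-1} I_K): centred on the collaborative one
    return z_collab + torch.randn_like(z_collab) / lambda_c ** 0.5

z_r = sample_collaborative(100)   # e.g., 100 users
z_c = sample_content(z_r)
# Under MAP training these priors act as the penalties
# lambda_r/2 * ||z^r||^2 and lambda_c/2 * ||z^c - z^r||^2,
# i.e., content embeddings are regularized toward the collaborative ones.
```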
2311.01555 | 23 | Method LLM Sec/Q DL19 nDCG@1/5/10 DL20 nDCG@1/5/10 BM25 – – 54.26 / 52.78 / 50.58 57.72 / 50.67 / 47.96 Supervised LLMs Methods monoT5 monoT5 Cohere Rerank T5-Base T5-XL english-v2.0 0.12 1.30 – 77.47 / 69.40 / 66.99 79.84 / 73.77 / 71.48 79.07 / 73.74 / 71.83 80.25 / 72.32 / 68.89 77.13 / 76.17 / 73.22 79.32 / 71.00 / 67.08 Unsupervised LLMs Methods RankGPT RankGPT gpt-3.5-turbo gpt-4 – – 82.17 / 71.15 / 65.80 79.32 / 66.76 / 62.91 82.56 / 79.16 / 75.59 78.40 / 74.11 / 70.56 FLAN-T5-Base Relevance Generation PRP (Allpair) FLAN-T5-Base Instruction Distillation FLAN-T5-Base 0.12 21.51 0.12 55.25 / 50.35 / 48.32 58.13 / 48.52 / 47.43 | 2311.01555#23 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.04915 | 23 | Wondra, J. D., & Ellsworth, P. C. (2015). An appraisal theory of empathy and other vicarious emotional experiences. Psychological review, 122(3), 411.
Wubbolding, R. E., Casstevens, W. J., & Fulkerson, M. H. (2017). Using the WDEP system of reality therapy to support person-centered treatment planning. Journal of Counseling & Development, 95(4), 472–477.
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., & Narasimhan, K. (2023). Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601.
Zaki, J. (2019). The war for kindness: Building empathy in a fractured world. Crown.
Zhang, S. J., Florin, S., Lee, A. N., Niknafs, E., Marginean, A., Wang, A., Tyser, K., Chin, Z., Hicke, Y., Singh, N., et al. (2023). Exploring the MIT mathematics and EECS curriculum using large language models. arXiv preprint arXiv:2306.08997. | 2311.04915#23 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that
utilizes insights from psychotherapy to induce Large Language Models (LLMs) to
reason about human emotional states. This method is inspired by various
psychotherapy approaches including Cognitive Behavioral Therapy (CBT),
Dialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality
Therapy (RT), each leading to different patterns of interpreting clients'
mental states. LLMs without reasoning generated predominantly exploratory
responses. However, when LLMs used CoE reasoning, we found a more comprehensive
range of empathetic responses aligned with the different reasoning patterns of
each psychotherapy model. The CBT based CoE resulted in the most balanced
generation of empathetic responses. The findings underscore the importance of
understanding the emotional context and how it affects human and AI
communication. Our research contributes to understanding how psychotherapeutic
models can be incorporated into LLMs, facilitating the development of
context-specific, safer, and empathetic AI. | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | [
{
"id": "2302.13971"
},
{
"id": "2305.10601"
},
{
"id": "2212.14402"
},
{
"id": "2205.11916"
},
{
"id": "2108.07258"
},
{
"id": "2209.08141"
},
{
"id": "2201.11903"
},
{
"id": "2306.08997"
},
{
"id": "2303.13375"
},
{
"id": "1808.10399"
}
] |
2311.01343 | 24 | z_i^{c,u} ∼ N(z_i^{r,u}, λ_c^{-1} · I_K), z_j^{c,v} ∼ N(z_j^{r,v}, λ_c^{-1} · I_K),   (2)
where λ_c is the precision for the conditional prior of z_i^{c,u} and z_j^{c,v}. The horizontally-stacked matrices of vocab/collaborative/content token embeddings are denoted as Z^t, Z^{r,{u,v}}, and Z^{c,{u,v}}, respectively³.
3.2.3 CLLM4Rec Base Model. With user/item tokens and the corresponding token embeddings introduced in the previous subsections, we are ready to introduce the CLLM4Rec base model with expanded vocabulary. The CLLM4Rec base model is denoted as h_{{r,c},1:k}^{(L)} = LLM_{r,c}(x_{1:k}), | 2311.01343#24 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 24 | Distillation FLAN-T5-Base 0.12 21.51 0.12 55.25 / 50.35 / 48.32 58.13 / 48.52 / 47.43 51.16 / 53.44 / 51.45 53.40 / 48.61 / 48.36 59.69 / 60.21 / 57.30 63.27 / 55.50 / 53.09 FLAN-T5-Large Relevance Generation PRP (Allpair) FLAN-T5-Large Instruction Distillation FLAN-T5-Large 1.10 49.19 1.10 40.43 / 45.19 / 46.67 43.41 / 47.65 / 48.41 74.03 / 69.00 / 66.58 68.21 / 64.63 / 61.51 74.33 / 74.18 / 69.81 72.84 / 65.59 / 62.80 FLAN-T5-XL Relevance Generation PRP (Allpair) FLAN-T5-XL Instruction Distillation FLAN-T5-XL 1.30 112.12 1.30 45.37 / 48.56 / 49.07 50.00 / 54.33 / 52.85 77.91 / 73.46 / 70.58 76.85 / 69.58 / 67.21 79.85 / 75.15 / 71.92 81.17 / | 2311.01555#24 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01343 | 25 | which maps the token sequence x_{1:k} into the hidden space h_{{r,c},1:k}^{(L)} through L stacked self-attention modules (the superscript (L) will be omitted if no ambiguity exists); here, x_k is a size N + I + J one-hot
³ We use super/subscript r and c to distinguish the variables related to the collaborative and content modeling processes, respectively.
[Figure 3 example, Amazon Beauty dataset: User ID: 0057; Item ID: 0046; Item Title: Wet n Wild Mega Last Lip Color 908C Sugar Plum Fairy; Review: "The color is a perfect mix of dark purple, red and pink. The only downside is the drying aspect of the lipstick, which I counteract by using lip balm before putting it on."] | 2311.01343#25 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01343 | 26 | filling the pretexts in detail. Therefore, we can view the first part as a soft+hard prompt and conduct language modeling only on the main text. This encourages the model to focus exclusively on collaborative and content information, such that the stability and effectiveness of language modeling can be substantially enhanced. The document x_i^r transformed from the historical interactions of user i can be broken down into the soft+hard prompt x_i^{r,p}
Figure 3: Example review data from Amazon Beauty dataset.
(a) Historical interactions r_i: soft+hard prompt x_i^{r,p}; main text: item token sequence x_i^{r,m}. | 2311.01343#26 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
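A minimal sketch of the soft+hard prompting trick from the 2311.01343 chunk above: the prompt x^{r,p} is only attended to, while the language-modeling loss covers the main text x^{r,m}. The -100 convention is the standard ignore-index; the exact implementation is assumed:

```python
import torch

def build_labels(input_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:
    """input_ids holds prompt tokens followed by main-text tokens, e.g.
    "<user_i> has interacted with" + "<item_j> <item_k> ..."."""
    labels = input_ids.clone()
    labels[:prompt_len] = -100   # no loss on the heterogeneous prompt tokens
    return labels

input_ids = torch.tensor([7, 11, 3, 42, 905, 906, 907])   # toy token IDs
labels = build_labels(input_ids, prompt_len=4)
# loss = model(input_ids=input_ids[None], labels=labels[None]).loss
```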
2311.01555 | 26 | training environment is 4 × A800-80G GPUs, with a batch size fixed at 32. We train the model for up to 3 epochs. Our experiments are based on the FLAN-T5 family (Chung et al., 2022), a suite of models that has been fine-tuned for various NLP tasks. We specifically leverage FLAN-T5-XL (3B), FLAN-T5-Large (770M), and FLAN-T5-Base (220M).
The prompts used can be seen in Appendix A.
# 5 Experimental Results
5.1 Results on Passage Re-Ranking Tasks
The experimental results on TREC and BEIR datasets are presented in Table 2 and Table 3 respectively. Based on these results, we draw the following observations:
Firstly, when compared with previous unsupervised LLM prompting strategies, our instruction-distilled models' inference speed aligns with that of the Relevance Generation method, and it is notably over 100× faster than the PRP method. Moreover, the performance of our approach using FLAN-T5-XL and FLAN-T5-Large surpasses both the Relevance Generation and PRP methods with the same LLMs. | 2311.01555#26 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01343 | 27 | Figure 3: Example review data from Amazon Beauty dataset.
(a) Historical interactions r_i: soft+hard prompt x_i^{r,p}; main text: item token sequence x_i^{r,m}.
vector denoting the token of either a vocab, a user, or an item. In addition, the subscript in LLM_{r,c} denotes which embedding matrix is used to encode the user/item tokens (where r stands for matrix Z^{r,{u,v}} and c stands for matrix Z^{c,{u,v}}). For the CLLM4Rec base model LLM_{r,c}, only the user/item token embeddings are trainable, whereas the vocab embeddings Z^t as well as the other parts of the backbone LLM are fixed to preserve the pretrained knowledge. | 2311.01343#27 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 27 | Secondly, the instruction-distilled models yield results akin to their supervised counterparts but with reduced annotation requirements. Specifically, our instruction-distilled FLAN-T5-XL model achieves nDCG@10 of 71.92 and 69.29 on TREC-DL19 and TREC-DL20, respectively, which either matches or surpasses the performance of the supervised monoT5 of equivalent parameter size.
Lastly, the instruction-distilled models consistently outperform their teachers. For example, the distilled models of all different model sizes perform better than their PRP teachers. This can be attributed to the fact that unspecialized teacher models might produce unstable outputs. After distillation on task-related data, student models are able to strictly
Table 3: Results (nDCG@10) on BEIR. | 2311.01555#27 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01343 | 28 | Accordingly, we introduce the collaborative LLM by adding an item prediction head f_r : R^K → P(J) to the CLLM4Rec base model LLM_r, which maps the final-layer last-step hidden representation h_{r,-1} calculated via LLM_r to the item probability space P(J) to predict the next item token. The weights of f_r are tied with the item collaborative token embeddings Z^{r,v} as f_r(h_{r,-1}) = softmax(Z^{r,v} · h_{r,-1}). The generative process of the collaborative LLM can be denoted as:
# 3.3 Mutually-Regularized Pretraining | 2311.01343#28 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
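A sketch of the weight-tied item prediction head f_r(h_{r,-1}) = softmax(Z^{r,v} · h_{r,-1}) from the 2311.01343 chunk above (dimensions are illustrative):

```python
import torch
import torch.nn.functional as F

K, J = 768, 500                                       # hidden size, item count
Z_rv = torch.nn.Parameter(0.02 * torch.randn(J, K))   # item collaborative embeddings

def item_prediction_head(h_last: torch.Tensor) -> torch.Tensor:
    # Reuse Z^{r,v} as the output projection: the head shares weights with
    # the item token embeddings instead of introducing a new output matrix.
    return F.softmax(h_last @ Z_rv.T, dim=-1)         # (batch, J)

h_last = torch.randn(4, K)              # last-step hidden states for 4 prompts
probs = item_prediction_head(h_last)    # next-item distributions, rows sum to 1
```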
2311.01555 | 28 | Method LLM Covid NFC. Touche DBP. SciFact Signal News Robust04 Avg. BM25 monoT5 monoT5 Cohere Rerank english-v2.0 RankGPT RankGPT – T5-Base T5-XL 59.47 30.75 44.22 31.80 67.89 78.34 37.38 30.82 42.42 73.40 80.71 38.97 32.41 44.45 76.57 81.81 36.36 32.51 42.51 74.44 gpt-3.5-turbo 76.67 35.62 36.18 44.47 70.43 gpt-4 85.51 38.47 38.57 47.12 74.95 33.05 39.52 31.67 46.83 32.55 48.49 29.60 47.59 32.12 48.85 34.40 52.89 40.70 51.72 56.71 50.78 50.62 57.55 Ours Ours Ours FLAN-T5-XL FLAN-T5-Large FLAN-T5-Base 80.96 38.25 30.97 45.09 75.66 79.95 35.41 30.25 45.22 71.22 69.11 30.51 24.10 32.15 36.92 32.45 49.21 | 2311.01555#28 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01343 | 29 | # 3.3 Mutually-Regularized Pretraining
With the CLLM4Rec base model introduced in the previous section, we discuss the mutually-regularized pretraining strategy for CLLM4Rec to learn the user/item collaborative/content token embeddings based on language modeling on corpora established from user-item interactions and user/item textual features, where the encoded knowledge and logical reasoning ability of the pretrained LLM can be fully utilized. The overall process can be referred to in Fig. 2.
x_{i,k+1}^{r,m} ∼ f_r(h_{r,-1}(x_i^{r,p}, x_{i,1:k}^{r,m})),   (4)
where the prompt x_i^{r,p} serves as a context to generate the next item token based on previous item tokens. Since the generation of x_{i,k+1}^{r,m} requires attending to previous tokens, when maximizing the likelihood, the collaborative LLM pushes the token embeddings of user i, i.e., z_i^{r,u}, and the token embeddings of the interacted items, i.e., z_j^{r,v}, z_k^{r,v}, · · · , to be close to each other, where user/item collaborative semantics in recommendation can be accurately captured. | 2311.01343#29 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
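A toy sketch of the generation in eq. (4) of the 2311.01343 chunk above: condition on the soft+hard prompt, take the last hidden state, sample an item token from the head, append, and repeat. The model call returning hidden states is schematic:

```python
import torch

def generate_items(model, item_head, prompt_ids: torch.Tensor, n_items: int):
    seq, generated = prompt_ids.clone(), []
    for _ in range(n_items):
        hidden = model(seq.unsqueeze(0))         # assumed: (1, len, K) states
        probs = item_head(hidden[0, -1:])        # distribution over J items
        next_item = torch.multinomial(probs, 1).flatten()
        generated.append(int(next_item))
        seq = torch.cat([seq, next_item])        # condition on it next step
    return generated
```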
2311.01343 | 30 | 3.3.1 Recommendation-Specific Corpora. Generally, we can transform the interactions and user/item content features into documents of user/item/vocab token sequences as follows:
# Raw Corpora Transformed from Recommendation Data
(a) Historical Interactions r_i: <user_i> has interacted with <item_j> <item_k> ... (b) User/Item Textual Features x_{ij}^{uv}: The biography of <user_i> is: [Main biography]. The content of <item_j> is: [Main contents]. <user_i> writes the review for <item_j>: [Main reviews].
Similarly, for the documents transformed from the user/item content, they can also naturally be split into a soft+hard prompt x_{ij}^{uv,p} and the main text x_{ij}^{uv,m}.
(b) User/item textual features x_{ij}^{uv}: soft+hard prompt x_{ij}^{uv,p}; main text: vocab sequence x_{ij}^{uv,m}. | 2311.01343#30 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
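A sketch of turning raw recommendation data into the two document types listed in the 2311.01343 chunk above; the templates follow the chunk, while the function names are assumptions:

```python
def interaction_document(user_id, item_ids):
    # Document (a): "<user_i> has interacted with <item_j> <item_k> ..."
    items = " ".join(f"<item_{j}>" for j in item_ids)
    return f"<user_{user_id}> has interacted with {items}"

def review_document(user_id, item_id, review):
    # Document (b): "<user_i> writes the review for <item_j>: ..."
    return f"<user_{user_id}> writes the review for <item_{item_id}>: {review}"

doc_a = interaction_document(0, [46, 17, 23])
doc_b = review_document(57, 46, "The color is a perfect mix of dark purple, red and pink.")
# In both documents the leading "<user_i> ..." clause is the soft+hard prompt;
# the trailing item tokens / review words form the main text used for the LM loss.
```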
2311.01555 | 30 | follow the given instructions, generating more reliable outputs. This specialization phase significantly enhances both the efficiency and performance of all involved models.
Similar findings can be observed on the BEIR dataset.
5.2 Results on Conversational Recommendation Tasks
Understanding user preferences from dialogue history presents a greater challenge than merely ranking relevance based on a specified query. Despite this, our method demonstrates noteworthy results, which are summarized in Table 4.
Firstly, our method achieves the best results among all the unsupervised methods. Specifically, our distillation technique outperforms other methods across all scales in terms of Acc@1 metrics. The FLAN-T5-XL distilled model achieves a peak value of 24.93% on Acc@1, outperforming all other unsupervised models.
Secondly, when compared with the teacher model, the student model exhibits either comparable or superior performance. The teacher model, employing FLAN-T5-XL with PRP techniques, posts an Acc@1 of 20%. In contrast, the distilled model with equivalent parameter size achieves an impressive 24.93% in terms of Acc@1. Meanwhile, the Large model, with less than a third of the teacher model's parameters, records a close Acc@1 score of 19.71%.
Table 4: Results (Acc) on REDIAL. | 2311.01555#30 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |
2311.01343 | 31 | (b) User/item textual features x_{ij}^{uv}: soft+hard prompt x_{ij}^{uv,p}; main text: vocab sequence x_{ij}^{uv,m}.
Accordingly, we introduce the content LLM by adding a vocab prediction head f_c : R^K → P(N) to the CLLM4Rec base model LLM_c, which maps the final-layer last-step hidden representation h_{c,-1} calculated via LLM_c (which shares the same pretrained LLM with LLM_r but uses Z^{c,{u,v}} to decode the user/item tokens) to the vocab probability space. Similarly, the weights of f_c are tied with the vocab embeddings Z^t as f_c(h_{c,-1}) = softmax(Z^t · h_{c,-1}). The generative process of the content LLM can be denoted as follows: | 2311.01343#31 | Collaborative Large Language Model for Recommender Systems | Recently, there is a growing interest in developing next-generation
recommender systems (RSs) based on pretrained large language models (LLMs),
fully utilizing their encoded knowledge and reasoning ability. However, the
semantic gap between natural language and recommendation tasks is still not
well addressed, leading to multiple issues such as spuriously-correlated
user/item descriptors, ineffective language modeling on user/item contents, and
inefficient recommendations via auto-regression, etc. In this paper, we propose
CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and
ID paradigm of RS, aiming to address the above challenges simultaneously. We
first extend the vocabulary of pretrained LLMs with user/item ID tokens to
faithfully model the user/item collaborative and content semantics.
Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is
proposed to effectively learn user/item collaborative/content token embeddings
via language modeling on RS-specific corpora established from user-item
interactions and user/item features, where each document is split into a prompt
consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and
a main text consisting of homogeneous item tokens or vocab tokens that
facilitates stable and effective language modeling. In addition, a novel mutual
regularization strategy is introduced to encourage the CLLM4Rec to capture
recommendation-oriented information from user/item contents. Finally, we
propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where
an item prediction head with multinomial likelihood is added to the pretrained
CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts
established from masked user-item interaction history, where recommendations of
multiple items can be generated efficiently. | http://arxiv.org/pdf/2311.01343 | Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, Jundong Li | cs.IR | null | null | cs.IR | 20231102 | 20231108 | [
{
"id": "2302.13971"
},
{
"id": "2206.07682"
},
{
"id": "2308.10053"
},
{
"id": "2307.02046"
},
{
"id": "2306.05817"
},
{
"id": "2205.08084"
},
{
"id": "2303.14524"
},
{
"id": "2305.07001"
},
{
"id": "2303.13835"
},
{
"id": "2303.18223"
},
{
"id": "2302.03735"
},
{
"id": "2305.00447"
},
{
"id": "2305.08845"
},
{
"id": "2104.08691"
},
{
"id": "2308.10837"
},
{
"id": "2305.06569"
}
] |
2311.01555 | 31 | Table 4: Results (Acc) on REDIAL.
Method                    LLM       Sec/Q  Acc
Random                    –         –      10.77
Popularity                –         –      7.69
BM25                      –         –      8.62
Unsupervised LLMs Methods:
Listwise Ranking          T5-XL     0.02   16.92
Pairwise Ranking          T5-XL     7.90   20.00
Pointwise Ranking         T5-XL     1.44   12.00
Instruction Distillation  T5-XL     1.44   24.93
Listwise Ranking          T5-Large  0.01   13.85
Pairwise Ranking          T5-Large  3.06   16.62
Pointwise Ranking         T5-Large  0.49   8.00
Instruction Distillation  T5-Large  0.49   19.71
Listwise Ranking          T5-Base   0.01   1.54
Pairwise Ranking          T5-Base   1.00   13.69
Pointwise Ranking         T5-Base   0.18   10.77
Instruction Distillation  T5-Base   0.18   15.07
| 2311.01555#31 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT. | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | [
{
"id": "2210.11416"
}
] |