id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---|
string (12–15 chars) | string (8–162 chars) | string (1–17.6k chars) | string (0–15 chars) | string (0–15 chars) | string (10 chars) | sequence (length 1)
2311.01555#22 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Table 3: Results (nDCG@10) on BEIR.

| Method | LLM | Covid | NFC. | Touche | DBP. | SciFact | Signal | News | Robust04 | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| BM25 | – | 59.47 | 30.75 | 44.22 | 31.80 | 67.89 | 33.05 | 39.52 | 40.70 | 43.42 |
| monoT5 | T5-Base | 78.34 | 37.38 | 30.82 | 42.42 | 73.40 | 31.67 | 46.83 | 51.72 | 49.07 |
| monoT5 | T5-XL | 80.71 | 38.97 | 32.41 | 44.45 | 76.57 | 32.55 | 48.49 | 56.71 | 51.36 |
| Cohere Rerank | english-v2.0 | 81.81 | 36.36 | 32.51 | 42.51 | 74.44 | 29.60 | 47.59 | 50.78 | 49.45 |
| RankGPT | gpt-3.5-turbo | 76.67 | 35.62 | 36.18 | 44.47 | 70.43 | 32.12 | 48.85 | 50.62 | 49.37 |
| RankGPT | gpt-4 | 85.51 | 38.47 | 38.57 | 47.12 | 74.95 | 34.40 | 52.89 | 57.55 | 53.68 |
| Ours | FLAN-T5-XL | 80.96 | 38.25 | 30.97 | 45.09 | 75.66 | 32.45 | 49.21 | 56.64 | 51.15 |
| Ours | FLAN-T5-Large | 79.95 | 35.41 | 30.25 | 45.22 | 71.22 | 30.80 | 44.52 | 49.22 | 48.32 |
| Ours | FLAN-T5-Base | 69.11 | 30.51 | 24.10 | 32.15 | 36.92 | 28.84 | 31.98 | 37.65 | 36.41 |

follow the given instructions, generating more reliable outputs. This specialization phase significantly enhances both the efficiency and performance of all involved models. Similar findings can be observed on the BEIR dataset. 5.2 Results on Conversational Recommendation Tasks Understanding user preferences from dialogue history presents a greater challenge than merely ranking relevance based on a specified query. | 2311.01555#21 | 2311.01555#23 | 2311.01555 | [
"2210.11416"
] |
2311.01555#23 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Despite this, our method demonstrates noteworthy results, which are summarized in Table 4. Firstly, our method achieves the best results among all the unsupervised methods. Specifically, our distillation technique outperforms other methods across all scales in terms of Acc@1 metrics. The FLAN-T5-XL distilled model achieves a peak value of 24.93% on Acc@1, outperforming all other unsupervised models. Secondly, when compared with the teacher model, the student model exhibits either comparable or superior performance. The teacher model, employing FLAN-T5-XL with PRP techniques, posts an Acc@1 of 20%. In contrast, the distilled model with equivalent parameter size achieves an impressive 24.93% in terms of Acc@1. Meanwhile, the Large model, with less than a third of the teacher model's parameters, records a close Acc@1 score of 19.71%. Table 4: Results (Acc) on REDIAL.

| Method | LLM | Sec/Q | Acc |
|---|---|---|---|
| Random | – | – | 10.77 |
| Popularity | – | – | 7.69 |
| BM25 | – | – | 8.62 |

| 2311.01555#22 | 2311.01555#24 | 2311.01555 | [
"2210.11416"
] |
2311.01555#24 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Table 4 (continued): Unsupervised LLMs Methods.

| Method | LLM | Sec/Q | Acc |
|---|---|---|---|
| Listwise Ranking | T5-XL | 0.02 | 16.92 |
| Pairwise Ranking | T5-XL | 7.90 | 20.00 |
| Pointwise Ranking | T5-XL | 1.44 | 12.00 |
| Instruction Distillation | T5-XL | 1.44 | 24.93 |
| Listwise Ranking | T5-Large | 0.01 | 13.85 |
| Pairwise Ranking | T5-Large | 3.06 | 16.62 |
| Pointwise Ranking | T5-Large | 0.49 | 8.00 |
| Instruction Distillation | T5-Large | 0.49 | 19.71 |
| Listwise Ranking | T5-Base | 0.01 | 1.54 |
| Pairwise Ranking | T5-Base | 1.00 | 13.69 |
| Pointwise Ranking | T5-Base | 0.18 | 10.77 |
| Instruction Distillation | T5-Base | 0.18 | 15.07 |

Lastly, there is a notable improvement in the performance metrics of all the distilled models after instruction distillation. For instance, the FLAN-T5-XL model, when used with the pointwise prompt, only marginally surpasses the random recommendation. However, after the proposed instruction distillation process, its Acc@1 nearly doubles. A similar improvement is observed for FLAN-T5-Large, with its Acc@1 soaring from 8% to 19.71%. Even though the increase might not seem substantial due to the model's capacity, it represents a growth of over 5%. | 2311.01555#23 | 2311.01555#25 | 2311.01555 | [
"2210.11416"
] |
2311.01555#25 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | 5.3 Analytical Experiments To gain deeper insights into the impact of model size and training signal, we carried out an analytical experiment. The results are depicted in Figure 3. Several key observations can be made from these results: (1) Instruction distillation models, represented by the yellow line in the figure, outperform the state-of-the-art supervised system, monoT5 (or SFT (500K), illustrated by the blue line), when the model size surpasses 3B. Moreover, our approach consistently exceeds the performance of earlier zero-shot LLM methods, namely RG and PRP, across all scales. (2) Distilling from larger models can enhance the performance of their smaller counterparts. As evidenced by our results labeled "Ours (XL)" in Figure 3, which captures the process of distilling the predictions from FLAN-T5-XL to smaller models, it becomes clear that instruction distillation from larger models invariably boosts the capabilities of smaller ones. (3) Given the same training data size, our approach, which distills from FLAN-T5-XL (referred to as "Ours (XL)" in Figure 3) and is unsupervised, significantly outperforms its supervised counterpart (referred to as "SFT (10k)" in Figure 3). | 2311.01555#24 | 2311.01555#26 | 2311.01555 | [
"2210.11416"
] |
2311.01555#26 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | This finding shows the promising potential of leveraging LLMs as data labelers in ranking tasks. [Figure 3 plot: nDCG@10 (y-axis, roughly 45-75) against model size (x-axis: 220M, 770M, 3B, 11B); legend: RG, PRP, SFT (500K), SFT (10K), Ours (XL), Ours.] Figure 3: Compare the proposed method with baselines in terms of model size. We can see that our methods (denoted by the yellow line) outperform supervised finetuning (SFT) methods when the number of parameters exceeds 3B. # 6 Conclusion This paper proposes instruction distillation, an unsupervised method that distills LLMs' abilities uncovered by complex instructions into the same model but with simpler instructions. This method significantly improves the efficiency and stability of LLMs, which is very friendly for industrial application deployment. Our experimental results on passage ranking and conversational recommendation verify the effectiveness of the proposed method. With our method, the efficiency of the models is significantly improved. | 2311.01555#25 | 2311.01555#27 | 2311.01555 | [
"2210.11416"
] |
2311.01555#27 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | A 10–100× increase in efficiency can be observed when compared to comparable unsupervised methods. # References Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. TALLRec: An effective and efficient tuning framework to align large language model with recommendation. ArXiv, abs/2305.00447. Christopher J. C. Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Gregory N. Hullender. 2005. | 2311.01555#26 | 2311.01555#28 | 2311.01555 | [
"2210.11416"
] |
2311.01555#28 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Learning to rank using gradient descent. In ICML 2005. Daniel Fernando Campos, Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, Li Deng, and Bhaskar Mitra. 2016. Ms marco: A human generated machine reading comprehension dataset. ArXiv, abs/1611.09268. Jiawei Chen, Hande Dong, Xiang Wang, Fuli Feng, Meng Wang, and Xiangnan He. 2023. | 2311.01555#27 | 2311.01555#29 | 2311.01555 | [
"2210.11416"
] |
2311.01555#29 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Bias and debias in recommender system: A survey and future directions. ACM Transactions on Information Systems, 41(3):1–39. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2020 deep learning track. ArXiv, abs/2102.07662. Zhuyun Dai, Vincent Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, and Ming-Wei Chang. 2023. | 2311.01555#28 | 2311.01555#30 | 2311.01555 | [
"2210.11416"
] |
2311.01555#30 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Promptagator: Few-shot dense retrieval from 8 examples. In ICLR 2023. Yixing Fan, Xiaohui Xie, Yinqiong Cai, Jia Chen, Xinyu Ma, Xiangsheng Li, Ruqing Zhang, and Jiafeng Guo. 2021. Pre-training methods in information retrieval. ArXiv, abs/2111.13853. Yao Fu, Hao-Chun Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. 2023. Specializing smaller language models towards multi-step reasoning. ArXiv, abs/2301.12726. Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2022. Precise zero-shot dense retrieval without relevance labels. ArXiv, abs/2212.10496. Google. 2023. PaLM 2 technical report. ArXiv, abs/2305.10403. Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. 2023. Large language models are zero-shot rankers for recommender systems. ArXiv, abs/2305.08845. Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018a. Towards deep conversational recommendations. In NIPS 2018. Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Christopher Joseph Pal. 2018b. Towards deep conversational recommendations. ArXiv, abs/1812.07617. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, E. | 2311.01555#29 | 2311.01555#31 | 2311.01555 | [
"2210.11416"
] |
2311.01555#31 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel J. Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan S. Kim, Neel Guha, Niladri S. Chatterji, O. Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas F. Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022. | 2311.01555#30 | 2311.01555#32 | 2311.01555 | [
"2210.11416"
] |
2311.01555#32 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Holistic evaluation of language models. ArXiv, abs/2211.09110. Xueguang Ma, Xinyu Crystina Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-shot listwise document reranking with a large language model. ArXiv, abs/2305.02156. Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. 2022. | 2311.01555#31 | 2311.01555#33 | 2311.01555 | [
"2210.11416"
] |
2311.01555#33 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Teaching small language models to reason. ArXiv, abs/2212.08410. Microsoft. 2023. Confirmed: the new Bing runs on OpenAI's GPT-4. https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4. Niklas Muennighoff. 2022. SGPT: GPT sentence embeddings for semantic search. | 2311.01555#32 | 2311.01555#34 | 2311.01555 | [
"2210.11416"
] |
2311.01555#34 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | ArXiv, abs/2202.08904. Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Findings of EMNLP. OpenAI. 2022. Introducing chatgpt. https://openai.com/blog/chatgpt. OpenAI. 2023. Gpt-4 technical report. ArXiv, abs/2303.08774. Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. 2023. Large language models are effective text rankers with pairwise ranking prompting. ArXiv, abs/2306.17563. Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen tau Yih, Joëlle Pineau, and Luke Zettlemoyer. 2022a. | 2311.01555#33 | 2311.01555#35 | 2311.01555 | [
"2210.11416"
] |
2311.01555#35 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Improving passage retrieval with zero-shot question generation. In EMNLP 2022. Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joëlle Pineau, and Manzil Zaheer. 2022b. Questions are all you need to train a dense passage retriever. ArXiv, abs/2206.10658. Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen tau Yih. 2023. | 2311.01555#34 | 2311.01555#36 | 2311.01555 | [
"2210.11416"
] |
2311.01555#36 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Replug: Retrieval-augmented black-box language models. ArXiv, abs/2301.12652. Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using model parallelism. ArXiv, abs/1909.08053. Charles Burton Snell, Dan Klein, and Ruiqi Zhong. 2022. Learning by distilling context. ArXiv, abs/2209.15189. Weiwei Sun, Pengjie Ren, and Zhaochun Ren. 2023a. Generative knowledge selection for knowledge-grounded dialogues. In Findings of EACL 2023. Weiwei Sun, Lingyong Yan, Zheng Chen, Shuaiqiang Wang, Haichao Zhu, Pengjie Ren, Zhumin Chen, Dawei Yin, M. de Rijke, and Zhaochun Ren. 2023b. Learning to tokenize for generative retrieval. In NeurIPS 2023. Weiwei Sun, Lingyong Yan, Xinyu Ma, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, and Zhaochun Ren. 2023c. Is chatgpt good at search? investigating large language models as re-ranking agents. In EMNLP 2023. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. | 2311.01555#35 | 2311.01555#37 | 2311.01555 | [
"2210.11416"
] |
2311.01555#37 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca. Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models. In NeurIPS 2021. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. | 2311.01555#36 | 2311.01555#38 | 2311.01555 | [
"2210.11416"
] |
2311.01555#38 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | LLaMA: Open and efficient foundation language models. ArXiv, abs/2302.13971. Andrew Trotman, Antti Puurula, and Blake Burgess. 2014. Improvements to BM25 and language models examined. In Proceedings of the 19th Australasian Document Computing Symposium, pages 58–65. Liang Wang, Nan Yang, and Furu Wei. 2023a. Query2doc: Query expansion with large language models. ArXiv, abs/2303.07678. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023b. | 2311.01555#37 | 2311.01555#39 | 2311.01555 | [
"2210.11416"
] |
2311.01555#39 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Self-Instruct: Aligning language model with self-generated instructions. In ACL 2023. Likang Wu, Zhilan Zheng, Zhaopeng Qiu, Hao Wang, Hongchao Gu, Tingjia Shen, Chuan Qin, Chen Zhu, Hengshu Zhu, Qi Liu, Hui Xiong, and Enhong Chen. 2023. A survey on large language models for recommendation. ArXiv, abs/2305.19860. Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. Generate rather than retrieve: Large language models are strong context generators. In ICLR 2023. Tony Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. | 2311.01555#38 | 2311.01555#40 | 2311.01555 | [
"2210.11416"
] |
2311.01555#40 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Calibrate before use: Improving few-shot performance of language models. In ICML 2021. Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Chong Meng, Shuaiqiang Wang, Zhicong Cheng, Zhaochun Ren, and Dawei Yin. 2023. Knowing what llms do not know: A simple yet effective self-detection method. ArXiv, abs/2310.17918. Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Zhicheng Dou, and Ji rong Wen. 2023. Large language models for information retrieval: | 2311.01555#39 | 2311.01555#41 | 2311.01555 | [
"2210.11416"
] |
2311.01555#41 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | A survey. ArXiv, abs/2308.07107. # A Prompts

A.1 Passage Ranking

Pointwise Ranking Prompt
Question: Given a query "{{query}}", is the following passage relevant to the query?
Passage: {{passage}}
If it is relevant answer Yes, else answer No.
Answer:

Pairwise Ranking Prompt
Question: Given a query "{{query}}", which of the following two passages is more relevant to the query?
Passage A: {{passage_A}}
Passage B: {{passage_B}}
Output the identifier of the more relevant passage. The answer must be passage A or passage B.
Answer:

A.2 Conversational Recommendation

Pointwise Ranking Prompt
Question: Given the conversation history between the recommender and the user: {{query}}
Based on the user's preference, is the following movie suitable to the user?
Movie: {{movie}}
The answer must be Y or N. Give the answer after Answer:.

Pairwise Ranking Prompt
Question: Given the conversation history between the recommender and the user: {{query}}
Based on the user's preference, which of the following two movies is more suitable to the user?
Movie A: {{movie_A}}
Movie B: {{movie_B}}
The answer must be A or B. Give the answer after the Answer:.

Listwise Ranking Prompt
Question: Given the conversation history between the recommender and the user: {{query}}
Based on the user's preference, which of the following movies is the most suitable for the user?
[1]: {{movie_1}}
[2]: {{movie_2}}
...
Answer the question with the number of the movie. The answer will include one and only one number. Give the answer after Answer:.
| 2311.01555#40 | 2311.01555 | [
"2210.11416"
] |
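The templates in Appendix A above are straightforward to instantiate programmatically. Below is a minimal Python sketch (not code from the paper; the helper names and example inputs are illustrative) showing how the pointwise and pairwise passage-ranking prompts could be filled before being sent to a FLAN-T5-style ranker.

```python
# Minimal sketch of filling the ranking prompt templates from Appendix A.
# `build_*` helpers and the example inputs are illustrative assumptions.

POINTWISE_TEMPLATE = (
    'Question: Given a query "{query}", is the following passage relevant '
    "to the query?\nPassage: {passage}\n"
    "If it is relevant answer Yes, else answer No.\nAnswer:"
)

PAIRWISE_TEMPLATE = (
    'Question: Given a query "{query}", which of the following two passages '
    "is more relevant to the query?\n"
    "Passage A: {passage_a}\nPassage B: {passage_b}\n"
    "Output the identifier of the more relevant passage. "
    "The answer must be passage A or passage B.\nAnswer:"
)


def build_pointwise_prompt(query: str, passage: str) -> str:
    """Fill the pointwise template for one (query, passage) pair."""
    return POINTWISE_TEMPLATE.format(query=query, passage=passage)


def build_pairwise_prompt(query: str, passage_a: str, passage_b: str) -> str:
    """Fill the pairwise template for one query and two candidate passages."""
    return PAIRWISE_TEMPLATE.format(
        query=query, passage_a=passage_a, passage_b=passage_b
    )


if __name__ == "__main__":
    prompt = build_pairwise_prompt(
        query="what is instruction distillation",
        passage_a="Instruction distillation transfers ranking ability ...",
        passage_b="BM25 is a classical lexical retrieval function ...",
    )
    print(prompt)  # the string that would be fed to the (FLAN-)T5 ranker
```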
|
2311.01343#0 | Collaborative Large Language Model for Recommender Systems | arXiv:2311.01343v3 [cs.IR] 8 Nov 2023. Collaborative Large Language Model for Recommender Systems. Yaochen Zhu*,1, Liang Wu2, Qi Guo2, Liangjie Hong2, Jundong Li1. 1University of Virginia, 2LinkedIn Inc. 1{uqp4qh, jundong}@virginia.edu, 2{liawu, qguo, liahong}@linkedin.com. [Figure 1 illustration: user interactions and (continuous or categorical) features are transformed into natural language, e.g., "user_1 has bought item_2", "item_2 is a computer", "user_1 is a CS student"; the LLM uses its encoded knowledge and reasoning ability ("a mouse is a component of a PC, maybe she needs a mouse") to answer the retrieved query "will user_1 buy a mouse?" with "Yes!" and produce recommendations.] | 2311.01343#1 | 2311.01343 | [
"2302.13971"
] |
|
2311.01343#1 | Collaborative Large Language Model for Recommender Systems | # ABSTRACT Recently, there is a growing interest in developing next-generation recommender systems (RSs) based on pretrained large language models (LLMs), fully utilizing their encoded knowledge and reason- ing ability. However, the semantic gap between natural language and recommendation tasks is still not well addressed, leading to multiple issues such as spuriously-correlated user/item descriptors, ineffective language modeling on user/item contents, and ineffi- cient recommendations via auto-regression, etc. In this paper, we propose CLLM4Rec, the first generative RS that tightly integrates the LLM paradigm and ID paradigm of RS, aiming to address the above challenges simultaneously. We first extend the vocabulary of pretrained LLMs with user/item ID tokens to faithfully model the user/item collaborative and content semantics. Accordingly, in the pretraining stage, a novel soft+hard prompting strategy is proposed to effectively learn user/item collaborative/content token embeddings via language modeling on RS-specific corpora estab- lished from user-item interactions and user/item features, where each document is split into a prompt consisting of heterogeneous soft (user/item) tokens and hard (vocab) tokens and a main text con- sisting of homogeneous item tokens or vocab tokens that facilitates stable and effective language modeling. In addition, a novel mutual regularization strategy is introduced to encourage the CLLM4Rec to capture recommendation-oriented information from user/item contents. Finally, we propose a novel recommendation-oriented finetuning strategy for CLLM4Rec, where an item prediction head with multinomial likelihood is added to the pretrained CLLM4Rec backbone to predict hold-out items based on the soft+hard prompts established from masked user-item interaction history, where rec- ommendations of multiple items can be generated efficiently1. | 2311.01343#0 | 2311.01343#2 | 2311.01343 | [
"2302.13971"
] |
2311.01343#2 | Collaborative Large Language Model for Recommender Systems | # Figure 1: Prospective of developing the next generation of recommender systems based on the pretrained LLMs. [10], such as GPT [11], T5 [12], LlaMA [13], have demonstrated emergent ability when trained on large-scale corpora [14], show- casing an unprecedented understanding of knowledge and patterns contained in natural language [9, 15]. Consequently, it is promising to develop the next generation of RS based on the pretrained LLMs [16], fully utilizing their encoded knowledge, logical reasoning abil- ity, and generative AI power to understand and reason with the user/item semantics and make more accurate recommendations accordingly, especially when users and items are associated with large amounts of textual features, such as biographies, descriptions, content, reviews, and explanations, etc., in modern online platforms [17, 18]. (see Fig. 1 for an intuitive example of an LLM-based RS) # 1 INTRODUCTION With content growing exponentially on the Web, recommender system (RS) has become an essential component for online service platforms [1]. Nevertheless, since Netflix released its Prize in 2006 [2], RS has long been dominated by the ID-based paradigm, where users and items are represented by unique, continuous ID embed- dings denoting their semantic similarity (e.g., w.r.t. usersâ prefer- ences on items, user/item contents, etc.) [3]. Exemplar ID-based RSs include matrix factorization-based methods such as PMF [4] and the two-tower models [5], where the user/item ID embeddings are either randomly initialized and learned from their historical interactions (i.e., collaborative filtering [6]), or established based on user/item content features (i.e., content-based methods [7, 8]). Recently, large language model (LLM) has become a heated re- search topic that revolutionized both academia and industry [9]. Transformer-based neural networks with billions of parameters Several preliminary studies have been conducted to investigate the adaptation of LLMs for recommendation systems [19â 22]. | 2311.01343#1 | 2311.01343#3 | 2311.01343 | [
"2302.13971"
] |
2311.01343#3 | Collaborative Large Language Model for Recommender Systems | Typ- ically, these methods can be summarized into two steps: 1) First, instead of representing users/items with continuous ID embeddings, relevant information necessary for reasoning with user interests and generating recommendations, i.e., target user, interacted items, user/item features, and candidate items, are converted into a nat- ural language-based prompt. 2) Then, the prompt is used to query the LLM, where information relevant to recommendations (e.g., whether the user will interact with an item or not) is retrieved from the textual output of the LLM to generate recommendations. The above procedure can be performed in a zero-shot manner [23â 26], where the recommendation decisions are obtained directly from the pretrained LLM (e.g., we input all relevant information regarding a user and an item into the chatbox of ChatGPT and ask if the user will interact with the item), or if groundtruths are available, the pretrained LLMs can also be finetuned, such that RS-specific knowledge can be updated into the pretrained model [20, 27â 29]. Although progress has been achieved by these pioneer works, some fundamental dichotomies between natural language process- ing (NLP) and recommendation still remain to be addressed. One main challenge is the gap between natural language and user/item semantics. Generally, there are two strategies to represent user/item | 2311.01343#2 | 2311.01343#4 | 2311.01343 | [
"2302.13971"
] |
2311.01343#4 | Collaborative Large Language Model for Recommender Systems | â Work done when Yaochen Zhu was an applied research intern at LinkedIn. 1Codes are released at this https://github.com/yaochenzhu/llm4rec. Conferenceâ 17, July 2017, Washington, DC, USA in an LLM-based RS. One strategy is pseudo-ID-based method, where an ID-like word (e.g., "user_ð " or "item_ð ") is used to rep- resent the ð th user and ð th item [20]. However, since the vocabu- lary of most LLM contains number-tokens up to two digits, when tokenized, the pseudo ID breaks down into atomic tokens, e.g., "user_4332" into ["user", "_", "43", "32"], where spurious correlations can be introduced for irrelevant users/items (e.g., "user_4332" with "user_43" and "user_32"). In contrast, description-based methods use semantically meaningful descriptions to index users/items, such as item titles [19, 24] or a small amount of newly-introduced tokens assigned to different user/items based on their content similarity [30]. However, description-based methods introduce a strong induc- tive bias on user-item semantic similarity, which may not faithfully capture the true semantics. Introducing user/item ID tokens, un- fortunately, is generally considered infeasible for LLMs, as directly conducting language modeling on sequences with heterogeneous tokens can be ineffective and unstable, especially when the vocabu- lary of most LLMs is diluted (e.g., â | 2311.01343#3 | 2311.01343#5 | 2311.01343 | [
"2302.13971"
] |
2311.01343#5 | Collaborative Large Language Model for Recommender Systems | ¼ 50k for GPT, and â ¼ 30k for T5) by a large number of randomly initialized user/item embeddings. Even if user/item ID token embeddings can be effectively learned via language modeling, another challenge that hinders effective collaborative filtering with LLMs is that, since the order of inter- actions usually does not matter for direct recommendations while human language naturally has an order, spurious temporal cor- relation can be introduced for items placed in different positions when transforming the user historical interactions into textual sen- tences. Furthermore, for content modeling, since pretrained LLMs are not recommendation-oriented, they can easily capture noise in the user/item textual features irrelevant to the recommendation purpose. Finally, since LLMs generate the next token in an autore- gressive manner, recommending multiple items can be inefficient. For both pseudo-ID-based and description-based indexing strate- gies, item candidates usually need to be explicitly provided in the prompt. These issues severely hinder their industrial applications where the candidate pool is large and low latency matters. To address the above challenges, we present CLLM4Rec, the first method that tightly combines the ID paradigm of RS with the LLM-based paradigm to address the semantic gap. We first extend the vocabulary of pretrained LLMs with user/item ID tokens to faith- fully model the user/item collaborative/content semantics, where the embeddings are learned in two stages. The pretraining stage consists of mutually-regularized collaborative and content LLMs that learn user/item token embeddings via language modeling on RS-specific corpora established from user/item interactions and tex- tual features. Specifically, a novel "soft+hard" prompting strategy is proposed for effective language modeling on documents with heterogeneous tokens, where each document is decomposed into a prompt consisting of user/item (soft [31]) and vocab (hard) tokens that describe the contexts and a main text consisting of homoge- neous item tokens (i.e., interaction history) or vocab tokens (i.e., user/item textual features), respectively. Through this strategy, the prediction heads for the two LLMs can focus exclusively on collab- orative and content information, and the stability and effectiveness of language modeling can be substantially enhanced. In addition, a stochastic reordering strategy is proposed for the collaborative LLM to ignore the order of item tokens without negative influence on the vocab tokens. Finally, we propose a novel recommendation-oriented | 2311.01343#4 | 2311.01343#6 | 2311.01343 | [
"2302.13971"
] |
2311.01343#6 | Collaborative Large Language Model for Recommender Systems | Yaochen Zhuâ ,1, Liang Wu2, Qi Guo2, Liangjie Hong2, Jundong Li1 finetuning strategy for CLLM4Rec, where an item prediction head with multinomial likelihood is added to the pretrained collabora- tive LLM backbone to predict hold-out items based on soft+hard prompts established from masked usersâ interaction history, where recommendations of multiple items can be generated efficiently. The contribution of this paper can be concretely summarized as: We present CLLM4Rec, the first framework that tightly couples the ID paradigm and LLM paradigm of RS, where encoded knowledge and reasoning ability of LLMs can be fully utilized, while user/item ID token embeddings aligned to the vocab space can well capture intrinsic user interests and item properties. â ¢ A novel soft+hard prompting strategy is proposed to pretrain the LLMs on sequences of heterogeneous tokens describing user historical interactions and user/item features via language modeling, where the collaborative and content information can be effectively learned by the user/item token embeddings. â ¢ A mutual-regularization strategy is proposed to constrain the CLLM4Rec to learn information more relevant for recommenda- tions from user/item content. In addition, stochastic reordering is proposed such that the order of item tokens can be ignored by the collaborative LLM without influence on the textual parts. â ¢ A recommendation-oriented finetuning strategy is proposed for CLLM4Rec, where an item prediction head with multino- mial likelihood is added on the collaborative LLM that predicts hold-out items based on prompt interaction history, where rec- ommendations for multiple items can be generated efficiently. # 2 RELATED WORK 2.1 Large Language Model (LLM) Basics Transformers with billions of parameters trained on large corpora, i.e., large language models (LLMs), have demonstrated an unprece- dented understanding of natural language and good logical reason- ing ability based on factual knowledge [9]. Based on the part of transformer utilized for language modeling, existing LLMs can be categorized into three classes: encoder-only LLMs, such as BERT [32], encoder-decoder-based LLMs, such as T5 [12], and decoder- only LLMs, such as GPT [11] and LlaMA [13], etc. | 2311.01343#5 | 2311.01343#7 | 2311.01343 | [
"2302.13971"
] |
2311.01343#7 | Collaborative Large Language Model for Recommender Systems | We focus on LLMs with decoders due to their superior generative abilities compared with the encoder-only models [33]. The training of LLMs is mainly based on two stages. In the pretraining stage, LLMs are trained on large corpora such as website content, Wikipedia, ArXiv paper, and GitHub codes via language modeling (i.e., next/masked token pre- diction), where knowledge in the corpus can be effectively encoded in the weights of the transformer network facilitated by the stacked self-attention modules. Then, during the finetuning stage, exemplar prompt-output pairs (such as questions and answers) or human feedback on multiple generated answers are provided to the LLMs such that they can conduct logical reasoning and generate answers based on the encoded knowledge from the pretrained stage. # 2.2 LLM in Recommender Systems Recently, LLM-based RS has attracted extensive attention from both academia and industry, which are promising to address the long- standing issues of traditional ID-based RSs, such as shallow textual information understanding, poor generalization, etc. [34, 35]. Hou et al. showed that existing LLMs can be viewed as zero-shot rankers, Collaborative Large Language Model for Recommender Systems which can rank the relevance of movies based on user historical in- teractions and movie descriptions. However, since pretrained LLMs are not aligned with the recommendation task, more efforts have been devoted to the finetuning of LLMs to obtain recommendation- oriented models. An exemplar work is P5 [20], which finetunes T5 with token sequences transformed from interactions and user/item features, where items are presented by pseudo-IDs in the form of "item_ð | 2311.01343#6 | 2311.01343#8 | 2311.01343 | [
"2302.13971"
] |
2311.01343#8 | Collaborative Large Language Model for Recommender Systems | which can rank the relevance of movies based on user historical interactions and movie descriptions. However, since pretrained LLMs are not aligned with the recommendation task, more efforts have been devoted to the finetuning of LLMs to obtain recommendation-oriented models. An exemplar work is P5 [20], which finetunes T5 with token sequences transformed from interactions and user/item features, where items are presented by pseudo-IDs in the form of "item_j". Afterwards, M6 [19] was proposed that combines text infilling and auto-regression in the pretraining stage, where pseudo IDs in P5 are completely avoided and replaced by textual descriptions. Recently, TALLRec [36] was proposed where items are represented by both pseudo-ID and textual descriptions. Pseudo-ID-based item representations can easily introduce spurious correlations between irrelevant items. To address this issue, Hua et al. proposed to introduce a small number of new tokens, where tokens used to describe the items are determined by their content and collaborative similarity. However, representing items with multiple shared tokens can still introduce bias. In addition, for the above methods, candidate items need to be explicitly provided in the prompt when conducting direct recommendation, where the size of candidate pool is limited. Finally, recommendations are generated via autoregression, which is highly inefficient. In summary, the dichotomy between natural language processing and RS still remains to be well addressed. # 3 METHODOLOGY 3.1 Problem Formulation In this paper, we focus on recommendations with implicit feedback [37]. Consider a system of I users and J items. We use a binary rating vector r_i ∈ {0, 1}^J to denote whether user i has interacted with the J items. In addition, we use x^u_i, x^v_j to denote the textual features associated with user i and item j, such as user biography and item content, etc. x^{uv}_{ij} denotes the textual features associated with both user i and item j, such as user i's review for item j. Hereafter, we take a sequential view of x^{u,v,uv}_{i,j,ij}, where x^{u,v,uv}_{i,j,ij,k} is a size-V one-hot vector denoting the k-th token in the textual sequence². In addition, we have a pretrained large language model (LLM), of which we take a probabilistic view and denote it as p_llm(x_{k+1} | x_{1:k}), which transforms x_{1:k} into a latent sequence h^(L)_{1:k} ∈ R^{k×K'} via | 2311.01343#7 | 2311.01343#9 | 2311.01343 | [
"2302.13971"
] |
2311.01343#9 | Collaborative Large Language Model for Recommender Systems | L stacked self-attention modules l_llm(x_{1:k}) and maps the h^(L)_k to the probability space of the next token x_{k+1}. Since the LLM is pretrained on large corpora and finetuned on exemplar prompt-answer pairs, the generation is based on logical reasoning with the context information in x_{1:k} according to its pretrained knowledge. Our aim is to design a new RS that tightly couples the LLM with the recommendation task by introducing user/item ID tokens (and token embeddings), such that user/item semantics (e.g., user interests in item) can be accurately modeled for effective and efficient recommendation whereas the encoded knowledge and reasoning ability of the pretrained LLMs can be fully utilized simultaneously. # 3.2 Extension of User/Item Tokens 3.2.1 Vocab Expansion. To tightly couple the pretrained LLM with the recommendation task, we first expand the vocabulary of (Footnote 2: we use u and v in the superscript to distinguish user or item-related variables.) Conference'17, July 2017, Washington, DC, USA | 2311.01343#8 | 2311.01343#10 | 2311.01343 | [
"2302.13971"
] |
2311.01343#10 | Collaborative Large Language Model for Recommender Systems | [Figure 2 illustration: a shared pretrained LLM backbone with a vocab prediction head (content LLM) and an item prediction head (collaborative LLM), applied to inputs such as "<user_i> has interacted with <item_j> <item_k> <item_l>".] Figure 2: The overview of the proposed CLLM4Rec in the mutually-regularized pretraining stage. Mutual regularization of item_k is omitted for simplicity. the LLM by adding user/item ID tokens to describe the intrinsic user/item semantic, such that semantic gap between RS and natural language can be well bridged. We use bracket notations "<user_i>" and "<item_j>" to denote the newly-introduced token for the i-th user and the j-th item, respectively, which has token ID V + i and V + I + j, and will not be broken down into atomic tokens. | 2311.01343#9 | 2311.01343#11 | 2311.01343 | [
"2302.13971"
] |
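The vocabulary expansion described above can be realized with standard Hugging Face utilities. The sketch below is an illustration only (not the authors' released code; the toy user/item counts and the gradient hook are assumptions): it adds <user_i>/<item_j> tokens to a GPT-2 tokenizer, resizes the embedding matrix so the new rows can serve as trainable user/item token embeddings, and keeps the original vocab embeddings frozen.

```python
# Illustrative sketch: extend a GPT-2 vocabulary with user/item ID tokens so
# that "<user_4332>" is one atomic token instead of ["user", "_", "43", "32"].
from transformers import GPT2LMHeadModel, GPT2Tokenizer

NUM_USERS, NUM_ITEMS = 1000, 2000  # assumed sizes of a toy system

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

user_tokens = [f"<user_{i}>" for i in range(NUM_USERS)]
item_tokens = [f"<item_{j}>" for j in range(NUM_ITEMS)]
num_added = tokenizer.add_tokens(user_tokens + item_tokens)

# New rows are appended after the original vocab, i.e., token IDs
# V + i for <user_i> and V + I + j for <item_j>, matching the paper.
model.resize_token_embeddings(len(tokenizer))

# Freeze the backbone; keep only the embedding matrix trainable, and zero the
# gradient of the first V (vocab) rows so the pretrained embeddings stay fixed.
for param in model.parameters():
    param.requires_grad = False
embedding = model.get_input_embeddings()
embedding.weight.requires_grad = True

vocab_size = len(tokenizer) - num_added
def zero_vocab_grad(grad):
    grad = grad.clone()
    grad[:vocab_size] = 0.0  # pretrained vocab embeddings are not updated
    return grad
embedding.weight.register_hook(zero_vocab_grad)

print(tokenizer.tokenize("<user_42> has interacted with <item_7>"))
```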
2311.01343#11 | Collaborative Large Language Model for Recommender Systems | 3.2.2 Token Embeddings. For LLMs to understand the tokens, they must be first transformed into dense embeddings. Accordingly, we use z^t_k ∈ R^K to represent the pretrained embedding of the k-th vocab token. In addition, for the newly-introduced user/item tokens, we introduce two types of embeddings to represent user/item collaborative and content semantics. Specifically, to align the user/item tokens with the vocab space of the pretrained LLM, we sample the user/item collaborative token embeddings from the same size-K latent space as follows: | 2311.01343#10 | 2311.01343#12 | 2311.01343 | [
"2302.13971"
] |
2311.01343#12 | Collaborative Large Language Model for Recommender Systems | z^{l,u}_i, z^{l,v}_j ~ N(0, λ_l^{-1} · I_K), (1) where λ_l is the prior precision for z^{l,u}_i and z^{l,v}_j. Importantly, to align the content semantics with the collaborative semantic for more recommendation-oriented content modeling, we sample the user/item content token embeddings from the following conditional prior: | 2311.01343#11 | 2311.01343#13 | 2311.01343 | [
"2302.13971"
] |
2311.01343#13 | Collaborative Large Language Model for Recommender Systems | z^{c,u}_i ~ N(z^{l,u}_i, λ_c^{-1} · I_K), z^{c,v}_j ~ N(z^{l,v}_j, λ_c^{-1} · I_K), (2) where λ_c | 2311.01343#12 | 2311.01343#14 | 2311.01343 | [
"2302.13971"
] |
2311.01343#14 | Collaborative Large Language Model for Recommender Systems | is the precision for the conditional prior of z^{c,u}_i and z^{c,v}_j. The horizontally-stacked matrices of vocab/collaborative/content token embeddings are denoted as Z^t, Z^{l,{u,v}}, and Z^{c,{u,v}}, respectively³. 3.2.3 CLLM4Rec Base Model. With user/item tokens and the corresponding token embeddings introduced in the previous subsections, we are ready to introduce the CLLM4Rec base model with expanded vocabulary. The CLLM4Rec base model is denoted with h^(L)_{{l,c},1:k} = l̃_llm,{l,c}(x_{1:k}), which maps the token sequence x_{1:k} into the hidden space h^(L)_{{l,c},1:k} through L stacked self-attention module (the superscript (L) will be omitted if no ambiguity exists); here, x_k is a size V + I + J one-hot | 2311.01343#13 | 2311.01343#15 | 2311.01343 | [
"2302.13971"
] |
2311.01343#15 | Collaborative Large Language Model for Recommender Systems | vector denoting the token of either a vocab, a user, or an item. In addition, the subscript in l̃_llm,{l,c} denotes which embedding matrix is used to encode the user/item tokens (where l stands for matrix Z^{l,{u,v}} and c stands for matrix Z^{c,{u,v}}). For the CLLM4Rec base model l̃_llm,{l,c}, only the user/item token embeddings are trainable, whereas the vocab embeddings Z^t as well as the other parts of the backbone LLM are fixed to preserve the pretrained knowledge. Accordingly, we introduce the collaborative LLM by adding an item prediction head f_l: R^{K'} → P(J) to the CLLM4Rec base model l̃_llm,l, which maps the final-layer last-step hidden representation h_{l,-1} calculated via l̃_llm,l to the item probability space P(J) to predict the next item token. The weights of f_l are tied with the item collaborative token embeddings Z^{l,v} as f_l(h_{l,-1}) = softmax(Z^{l,v} · h_{l,- | 2311.01343#14 | 2311.01343#16 | 2311.01343 | [
"2302.13971"
] |
2311.01343#16 | Collaborative Large Language Model for Recommender Systems | Example review data from Amazon Beauty dataset. (a) Historical Interactions r;: soft+hard prompt x7? . rm item token seq. x vector denoting the token of either a vocab, a user, or an item. In addition, the subscript in Ë ð ð ð {ð ,ð } denotes which embedding matrix is used to encode the user/item tokens (where ð stands for matrix Zð ,{ð ¢,ð £ } and ð stands for matrix Zð ,{ð ¢,ð £ } ). For the CLLM4Rec base Ë ð ð ð {ð ,ð } , only the user/item token embeddings are trainable, model whereas the vocab embeddings Zð ¡ as well as the other parts of the backbone LLM are fixed to preserve the pretrained knowledge. Accordingly, we introduce the collaborative LLM by adding an item prediction head ð ð : Rð ¾â â P(ð ½ ) to the CLLM4Rec base model Ë ð ð ð ð , which maps the final-layer last-step hidden representation hð ,â 1 calculated via Ë ð ð ð ð to the item probability space P(ð ½ ) to predict the next item token. The weights of ð ð are tied with the item collab- orative token embeddings Zð ,ð £ as ð ð (hð ,â 1) = softmax(Zð ,ð £ Â· hð ,â | 2311.01343#15 | 2311.01343#17 | 2311.01343 | [
"2302.13971"
] |
2311.01343#17 | Collaborative Large Language Model for Recommender Systems | x^{r,m}_{i,k+1} ~ p_{f_l}(x^{r,m}_{i,k+1} | x^{r,p}_i, x^{r,m}_{i,1:k}), (4) where the prompt x^{r,p}_i serves as a context to generate the next item token based on previous item tokens. Since the generation of x^{r,m}_{i,k+1} requires attending to previous tokens, when maximizing the likelihood, the collaborative LLM pushes the token embeddings of user i, i.e., z^{l,u}_i, and the token embeddings of the interacted items, i.e., z^{l,v}_j, z^{l,v}_k, · · ·, to be close to each other, where user/item collaborative | 2311.01343#16 | 2311.01343#18 | 2311.01343 | [
"2302.13971"
] |
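Equation (4) amounts to a standard causal language-modeling objective in which the soft+hard prompt is excluded from the loss and only the item tokens of the main text are predicted. The PyTorch-style sketch below is illustrative only (it assumes a Hugging Face causal LM such as the expanded GPT-2 from the earlier sketch; `collaborative_lm_loss` is a hypothetical helper, not the paper's code).

```python
# Illustrative sketch: next-token loss over the item-token main text only,
# with the soft+hard prompt tokens masked out of the loss (label = -100).
import torch

def collaborative_lm_loss(model, tokenizer, prompt_text, item_token_ids):
    """prompt_text: e.g. '<user_3> has interacted with'
    item_token_ids: token IDs of the interacted-item tokens (the main text)."""
    prompt_ids = tokenizer(prompt_text, return_tensors="pt").input_ids[0]
    input_ids = torch.cat([prompt_ids, torch.tensor(item_token_ids)])
    labels = input_ids.clone()
    labels[: prompt_ids.shape[0]] = -100  # do not train on the prompt part
    out = model(input_ids=input_ids.unsqueeze(0), labels=labels.unsqueeze(0))
    return out.loss  # cross-entropy over the item tokens only
```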
2311.01343#18 | Collaborative Large Language Model for Recommender Systems | semantics in recommendation can be accurately captured. 3.3.1 Recommendation-Specific Corpora. Generally, we can transform the interactions and user/item content features into documents of user/item/vocab token sequences as follows: # Raw Corpora Transformed from Recommendation Data (a) Historical Interactions r_i: <user_i> has interacted with <item_j> <item_k> ... (b) User/Item Textual Features x^u_i: The biography of <user_i> is: Main biography. The content of <item_j> is: Main contents. <user_i> writes the review for <item_j>: Main reviews. | 2311.01343#17 | 2311.01343#19 | 2311.01343 | [
"2302.13971"
] |
2311.01343#19 | Collaborative Large Language Model for Recommender Systems | semantics in recommendation can be accurately captured. 3.3.1 Recommendation-Specific Corpora. Generally, we can transform the interactions and user/item content features into doc- uments of user/item/vocab token sequences as follows: # Raw Corpora Transformed from Recommendation Data Similarly, for the documents transformed from the user/item ð ¢ð £,ð content5, it can also naturally be split into a soft+hard prompt x ð ð and the main text x (a) Historical Interactions rð : <user_ð > has interacted with <item_ð > <item_ð > ... (b) User/Item Textual Features xð ¢ The biography of <user_ð > is: Main biography. The content of <item_ð > is: Main contents. <user_ð > writes the review for <item_ð > : Main reviews. (b) User/Item Textual Features xij vocab seq. xiâ soft+hard prompt x,â Accordingly, we introduce the content LLM by adding a vocab prediction head ð ð : Rð ¾â â P(ð ) to the CLLM4Rec base model Ë ð ð ð ð , which maps the final-layer last-step hidden representation hð ,â 1 calculated via Ë ð ð ð ð (which shares the same pretrained LLM with Ë ð ð ð ð but uses Zð ,{ð ¢,ð £ } to decode the user/item token) to the vocab probability space. Similarly, the weights of ð ð are tied with the vocab embeddings Zð ¡ as ð ð (hð ,â 1) = softmax(Zð ¡ Â· hð ,â 1). | 2311.01343#18 | 2311.01343#20 | 2311.01343 | [
"2302.13971"
] |
2311.01343#20 | Collaborative Large Language Model for Recommender Systems | The generative process of the content LLM can be denoted as follows: where an example based on the Amazon Beauty dataset can be referred to in Fig. 3. However, directly conducting language model- ing on the raw corpora is clearly infeasible, as each document is composed of heterogeneous vocab, user, and item tokens, where the number of meaningful vocab tokens (e.g., â ¼ 50k for GPT, and â ¼ 30k for T5) can be diluted by the large number of newly introduced user/item tokens with randomly initialized embeddings. 3.3.2 Soft+Hard Prompting. To address the above challenge, we propose a novel soft+hard prompting strategy to facilitate language modeling on RS-specific corpora with heterogeneous user/item/vocab tokens. The strategy is based on a key observation that documents transformed from both user-item interactions rð and user/item tex- tual features xð ¢ ð ð | 2311.01343#19 | 2311.01343#21 | 2311.01343 | [
"2302.13971"
] |
2311.01343#21 | Collaborative Large Language Model for Recommender Systems | can be broken down into two parts: A heterogeneous part composed of soft (user/item) and hard (vocab) tokens providing context information regarding the gist of the doc- ument, and a main text part with homogeneous item/vocab tokens cm fe uum jum ~ud,p Xie ~ itn, ( ijk ij, 1:k â ¢ ) () ð ¢ð £,ð ð ð ,1:ð ð ¢ð £,ð ð ð ,ð +1 based on previously as the context. which generates the next vocab token x ð ¢ð £,ð ð ¢ð £,ð ð ð ,1:ð with prompt x ð ð generated vocab tokens x 4We use the superscripts ð and ð to distinguish the prompt and the main text. 5Hereafter, we take xð ¢ð £ an example for discussions, which can be easily generalized to the case of xð ¢ | 2311.01343#20 | 2311.01343#22 | 2311.01343 | [
"2302.13971"
] |
2311.01343#22 | Collaborative Large Language Model for Recommender Systems | Collaborative Large Language Model for Recommender Systems When maximizing the likelihood, the content information in xð ¢ð £,ð can be encoded in the content token embeddings of user ð and item ð ,ð ¢ ð , i.e., z , where the pretrained knowledge of the LLM can ð be fully utilized. For example, for the reviews shown in Fig. 3, the pretrained LLM will know that <item_46> is a lipstick with dark purple, red, and pink colors and can have side effects of drying lip, and reasons that <user_57> likes the colors but hates the side effects, which can be alleviated by the lip balm. Discussion. Generally, since the "hard" (i.e., the vocab) part of ð ,ð the prompts x is what the pretrained LLM could un- ð derstand, they are designed to trigger the reasoning ability of the pretrained LLM based on its encoded knowledge. For example, the ð ,ð relational phrase "has interacted with" in the prompt x guides ð the collaborative LLM to understand that the newly-introduced ð ,ð token <user_i> is a user subject and the tokens in the prompt x ð are the objects of interacted item sequences. Meanwhile, the con- ð ¢ð £,ð texts "write the review for" in x direct the content LLM to ð ð , i.e., <user_ð >â s better understand the nature of main texts in x judgment on the <item_ð > based on the personal using experience. The specific formulation of the prompt can be flexible, as Geng et al. has demonstrated that the variation in the expression of the prompt makes less difference, as long as the meaning is the same and the prompt is consistent across the training and testing phases. 3.3.3 Mutually-Regularization. Since the pretrained LLMs are not recommendation-oriented, naively optimizing the language modeling objective as Eq. (5) unavoidably captures noise irrele- vant to recommendations. In addition, since the user/item interac- tions are sparse, the collaborative LLM can easily overfit on the ob- served interactions. | 2311.01343#21 | 2311.01343#23 | 2311.01343 | [
"2302.13971"
] |
2311.01343#23 | Collaborative Large Language Model for Recommender Systems | To address this issue, we propose the mutually-regularized pretraining for CLLM4Rec, where collaborative LLM can guide content LLM to capture recommendation-oriented information from user/item content, and content LLM can in turn introduce side information to support collaborative filtering. The mutual-regularization naturally arises with the generative process of the CLLM4Rec pretraining stage defined in the previous subsections. If we denote the stacked item token embeddings as Z^{l,v}_i, which contains item j and other items interacted by the user i, the generation process of CLLM4Rec associated with x^r_i and x^{uv}_{ij} can be defined as the joint distribution as follows: p(x^{r,m}_i, x^{uv,m}_{ij}, z^{l,u}_i, Z^{l,v}_i, z^{c,u}_i, z^{c,v}_j | x^{r,p}_i, x^{uv,p}_{ij}) = Π_k p_{f_l}(x^{r,m}_{i,k+1} | x^{r,m}_{i,1:k}, x^{r,p}_i) · Π_k p_{f_c}(x^{uv,m}_{ij,k+1} | x^{uv,m}_{ij,1:k}, x^{uv,p}_{ij}) [LM for collab. LLM; LM for content LLM] · p(z^{c,u}_i | z^{l,u}_i) · Π_k p(z^{c,v}_k | z^{l,v}_k) [mutual regularization] · p(z^{l,u}_i) · Π_k p(z^{l,v}_k), [prior] (6) A scrutiny of Eq. (6) reveals that the joint distribution can be decomposed into three parts: 1) the language modeling of the collaborative and content LLMs that learn user/item token embeddings as Eqs. (4) and (5); 2) the mutual regularization that connects the user/item token embeddings of the two LLMs (i.e., according to Eqs. (1-2), | 2311.01343#22 | 2311.01343#24 | 2311.01343 | [
"2302.13971"
] |
2311.01343#24 | Collaborative Large Language Model for Recommender Systems | p(z^{c,u}_i | z^{l,u}_i) and p(z^{c,v}_k | z^{l,v}_k) are conditional Gaussians, which will introduce MSE regularization between z^{l,u}_i and z^{c,u}_i, z^{l,v}_k and z^{c,v}_k, when the log-likelihood is maximized); 3) the prior of z^{l,u}_i and z^{l,v}_k, which will be ignored due to the existence of mutual regularization (i.e., setting the precision λ_l in the prior in Eq. (1) as zero). We use Maximum a Posteriori (MAP) to estimate the user/item token embeddings z^{l,u}_i, Z^{l,v}_i, where the objective is proportional to the logarithm of the joint distribution specified in Eq. (4). | 2311.01343#23 | 2311.01343#25 | 2311.01343 | [
"2302.13971"
] |
2311.01343#25 | Collaborative Large Language Model for Recommender Systems | We take alternative steps to optimize the MAP objective. If we denote the trainable parameters associated with the item token prediction head f_l and vocab token prediction head f_c as θ_l and θ_c (which are tied with the corresponding token embeddings), the objective for the collaborative LLM (L-step) and content LLM (C-step) with mutual regularization can be derived as follows: L-step. In the L-step, we fix the user/item content embeddings z^{c,u}_i, Z^{c,v}_i in Eq. (6), and use them to constrain the user/item collaborative embeddings along with the language modeling of collaborative LLM, leading to the following composite objective: L^{MAP}_{L-step}(z^{l,u}_i, Z^{l,v}_i; θ_l) = Σ_k ln p_{f_l}(x^{r,m}_{i,k+1} | x^{r,m}_{i,1:k}, x^{r,p}_i) [LM loss for collab. LLM] − (λ_c/2)·||z^{l,u}_i − z^{c,u}_i||² − (λ_c/2)·Σ_k ||z^{l,v}_k − z^{c,v}_k||² [MR loss with content LLM] − (λ_l/2)·||z^{l,u}_i||² − (λ_l/2)·Σ_k ||z^{l,v}_k||² [prior loss] + C_l, (7) where C_l | 2311.01343#24 | 2311.01343#26 | 2311.01343 | [
"2302.13971"
] |
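The composite objective in Eq. (7) is a language-modeling term plus two quadratic penalties on the collaborative token embeddings. The sketch below shows how such a composite loss could be wired up in PyTorch; it is an illustration under the reconstruction above, not the authors' implementation, and the function/argument names are assumptions.

```python
# Illustrative sketch of the L-step objective of Eq. (7) (as a loss to
# minimize): causal LM loss plus mutual-regularization (MR) and prior
# penalties on the collaborative user/item token embeddings.
import torch

def l_step_loss(lm_loss, z_lu_i, z_lv_items, z_cu_i, z_cv_items,
                lambda_c=1.0, lambda_l=0.0):
    """lm_loss: LM loss of the collaborative LLM for user i's item sequence.
    z_lu_i / z_lv_items: collaborative embeddings of user i / her items.
    z_cu_i / z_cv_items: content embeddings, fixed (detached) in the L-step.
    lambda_l defaults to 0, since the prior is ignored once MR is present."""
    mr = ((z_lu_i - z_cu_i.detach()) ** 2).sum()
    mr = mr + ((z_lv_items - z_cv_items.detach()) ** 2).sum()
    prior = (z_lu_i ** 2).sum() + (z_lv_items ** 2).sum()
    return lm_loss + 0.5 * lambda_c * mr + 0.5 * lambda_l * prior
```

The C-step of Eq. (8) would mirror this with the roles of the collaborative and content embeddings swapped.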
2311.01343#26 | Collaborative Large Language Model for Recommender Systems | is the constant irrelevant for optimization. The LM loss captures the collaborative similarity between token embeddings of user i and the interacted items, where side information can be introduced via the MR loss to support collaborative filtering. C-step. After one-step optimization of the L-step, we fix the user/item collaborative token embeddings z^{l,u}_i, Z^{l,v}_i in Eq. (6), leading to the following composite objective for the content LLM: L^{MAP}_{C-step}(z^{c,u}_i, z^{c,v}_j; θ_c) = Σ_k ln p_{f_c}(x^{uv,m}_{ij,k+1} | x^{uv,m}_{ij,1:k}, x^{uv,p}_{ij}) [LM loss for content LLM] − (λ_c/2)·||z^{c,u}_i − z^{l,u}_i||² − (λ_c/2)·||z^{c,v}_j − z^{l,v}_j||², [MR loss with collab. | 2311.01343#25 | 2311.01343#27 | 2311.01343 | [
"2302.13971"
] |
2311.01343#27 | Collaborative Large Language Model for Recommender Systems | LLM] (8) where MR loss constrains content LLM to capture recommendation-oriented information from user/item textual features. In Eqs. (7) and (8), λ_c controls the strength of mutual regularization, which will be thoroughly discussed in the empirical study. 3.3.4 Stochastic Item Reordering. Another issue that hinders effective collaborative filtering via Eq. (7) is the order of item tokens when transforming the historical interactions r_i | 2311.01343#26 | 2311.01343#28 | 2311.01343 | [
"2302.13971"
] |
2311.01343#28 | Collaborative Large Language Model for Recommender Systems | into a token ð ,ð sequence x for language modeling. Item order usually does not ð matter for collaborative filtering (even if it matters, the positional embeddings denoting the order of natural language may not cap- ture the semantics of the order of interactions). To address this ð ,ð issue, we propose to randomly permute the item tokens in x ð Conferenceâ 17, July 2017, Washington, DC, USA ð ,ð with prompt x ð fixed when optimizing the collaborative LLM as Eq. (7). Through this strategy, the order of interacted items can be ð ,ð ignored without negative influence on the vocab tokens in x ð | 2311.01343#27 | 2311.01343#29 | 2311.01343 | [
"2302.13971"
] |
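Stochastic item reordering only permutes the item tokens of the main text while leaving the soft+hard prompt untouched. A minimal illustrative sketch (the helper name is assumed, not from the paper):

```python
# Illustrative sketch of stochastic item reordering: shuffle the item tokens
# of the main text each epoch while the soft+hard prompt stays fixed.
import random

def reorder_item_tokens(prompt_tokens, item_tokens, rng=random):
    shuffled = item_tokens[:]          # copy; do not mutate the caller's list
    rng.shuffle(shuffled)              # the order of interactions is ignored
    return prompt_tokens + shuffled    # prompt part is left untouched
```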
2311.01343#29 | Collaborative Large Language Model for Recommender Systems | # 3.4 Recommendation-Oriented Finetuning 3.4.1 Pretraining v.s. Finetuning. The pretraining of CLLM4Rec aims to learn user/item token embeddings based on the large cor- pus of documents transformed from user-item interactions rð and ð , xð ¢ð £ user/item textual features xð ¢ ð ð via language modeling. How- ever, for now, the pretrained CLLM4Rec can only complete item/vocab token sequences based on the soft+hard prompts, and therefore the gap between NLP and RS is still not completely eliminated. In addition, naively treating the collaborative LLM as a recom- mendation model can lead to huge computational costs where the recommended items are sequentially generated via auto-regression. Therefore, we propose a recommendation-oriented finetuning strat- egy for CLLM4Rec, which aims to finetune the pretrained collabo- rative LLM and tailor it for efficient recommendations. 3.4.2 Masked Prompting with Multinomial Head. To achieve this purpose, we first design a masked prompting strategy to gen- erate recommendation-oriented prompts. For each user, we ran- domly mask the interacted items rð by 100 à ð ð %, where the re- maining items are denoted as rð ð ð ð ð ð , and use it to generate a ð ð ð ð ,ð | 2311.01343#28 | 2311.01343#30 | 2311.01343 | [
"2302.13971"
] |
2311.01343#30 | Collaborative Large Language Model for Recommender Systems | user/item textual features x^u_i, x^v_j, x^{uv}_{ij} via language modeling. However, for now, the pretrained CLLM4Rec can only complete item/vocab token sequences based on the soft+hard prompts, and therefore the gap between NLP and RS is still not completely eliminated. In addition, naively treating the collaborative LLM as a recommendation model can lead to huge computational costs where the recommended items are sequentially generated via auto-regression. Therefore, we propose a recommendation-oriented finetuning strategy for CLLM4Rec, which aims to finetune the pretrained collaborative LLM and tailor it for efficient recommendations. 3.4.2 Masked Prompting with Multinomial Head. To achieve this purpose, we first design a masked prompting strategy to generate recommendation-oriented prompts. For each user, we randomly mask the interacted items r_i by 100 × p_m %, where the remaining items are denoted as r^{mask}_i, and use it to generate a | 2311.01343#29 | 2311.01343#31 | 2311.01343 | [
"2302.13971"
] |
2311.01343#31 | Collaborative Large Language Model for Recommender Systems | recommendation-oriented prompt x^{rec,p}_i. All the hold-out items, which we denote with a multi-hot vector r^{hold}_i, are treated as the target. The prompt x^{rec,p}_i: (c) Recommendation Prompts & Target (prompt) <user_i> has interacted with <item_j′> <item_k′> the user will interact with: (target) r^{hold}_i which triggers the reasoning ability of the pretrained LLM by using relational phrase "has interacted with" to describe the historical interactions, and using the phrase "the user will interact with" to guide the prediction of the target items r^{hold}_i | 2311.01343#30 | 2311.01343#32 | 2311.01343 | [
"2302.13971"
] |
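A hedged PyTorch sketch of the RecLLM head in Eq. (9): the last prompt token's hidden state is projected onto item scores through a weight matrix tied to the item token embeddings, and the hold-out items are scored under a multinomial likelihood (module and function names are ours):

```python
import torch
import torch.nn as nn

class MultinomialItemHead(nn.Module):
    """Item prediction head f_rec whose weights are tied to the item token
    embeddings, as in Eq. (9)."""
    def __init__(self, item_token_embeddings: nn.Embedding):
        super().__init__()
        self.item_embeddings = item_token_embeddings  # (num_items, hidden)

    def forward(self, last_hidden_state: torch.Tensor) -> torch.Tensor:
        # Logits over the whole item catalog from the last prompt token.
        logits = last_hidden_state @ self.item_embeddings.weight.T
        return torch.log_softmax(logits, dim=-1)      # multinomial log-probs

def multinomial_nll(log_probs: torch.Tensor, r_hold: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the multi-hot hold-out targets."""
    return -(r_hold * log_probs).sum(dim=-1).mean()
```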
2311.01343#32 | Collaborative Large Language Model for Recommender Systems | user/item token embeddings learned from the mutually-regularized pretraining stage to efficiently generate recommendations in a single forward-propagation step, where all J items serve as candidates. # 3.5 Predictions with CLLM4Rec After the pretraining and finetuning of CLLM4Rec, to make recommendations for user i, we can convert the whole historical interactions of the user, i.e., r_i, into the recommendation-oriented prompt x̃_i^{rec,p} as described in Section 3.4.2 (with no masked items) and input it into the RecLLM model. Then, the multinomial probability r̂_i over all J items can be obtained through one forward propagation of the RecLLM as in Eq. (9), where uninteracted items with top-k scores in r̂_i | 2311.01343#31 | 2311.01343#33 | 2311.01343 | [
"2302.13971"
] |
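At inference time a single forward pass yields scores over all J items; a small sketch of the top-k selection that excludes already-interacted items is given below (the scoring call itself is left abstract):

```python
import torch

def recommend_top_k(scores: torch.Tensor, interacted: torch.Tensor, k: int = 20):
    """scores: (J,) multinomial probabilities from one RecLLM forward pass.
    interacted: (J,) binary history r_i. Returns indices of top-k new items."""
    masked_scores = scores.masked_fill(interacted.bool(), float("-inf"))
    return torch.topk(masked_scores, k).indices

scores = torch.rand(1000)                       # stand-in for f_rec output
history = torch.zeros(1000); history[[3, 77]] = 1
print(recommend_top_k(scores, history, k=5))
```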
2311.01343#33 | Collaborative Large Language Model for Recommender Systems | can be selected as recommendations. # 4 EMPIRICAL STUDY In this section, we present the experiments on four public datasets and one LinkedIn dataset to demonstrate the effectiveness of CLLM4Rec, aiming to answer the following research questions. • RQ1. How does CLLM4Rec, the first RS that tightly couples the ID-based paradigm with the LLM-based paradigm, perform compared to state-of-the-art ID-based and LLM-based RSs? • RQ2. How does the pretraining stage of CLLM4Rec (including the mutual regularization trick and the stochastic item reordering strategy) influence the performance of CLLM4Rec? • RQ3. | 2311.01343#32 | 2311.01343#34 | 2311.01343 | [
"2302.13971"
] |
2311.01343#34 | Collaborative Large Language Model for Recommender Systems | How does the finetuning stage of CLLM4Rec with masked prompting and the multinomial item prediction head influence the efficiency and effectiveness of recommendations? # 4.1 Experimental Setup 4.1.1 Datasets. The experiments are mainly based on four public datasets: the Amazon (AM)-Beauty, AM-Toys, and AM-Sports datasets [17] and the Yelp dataset [38], where we binarize the interactions by keeping only ratings > 3 and treat them as implicit feedback [39]. In addition, we filter the datasets such that they keep the original 5-core property after binarization. For each user, we randomly select 80% of interactions for training, 10% for validation, and 10% for testing, where at least one item is selected in the validation and the test set. The reviews that users provide to the items are collected as the textual feature | 2311.01343#33 | 2311.01343#35 | 2311.01343 | [
"2302.13971"
] |
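A small pandas/NumPy sketch of the preprocessing just described — binarizing ratings greater than 3 as implicit feedback and making a per-user 80/10/10 split with at least one validation and one test item; the authors' exact 5-core filtering loop is omitted and the column names are assumptions:

```python
import pandas as pd
import numpy as np

def binarize_and_split(ratings: pd.DataFrame, seed: int = 0):
    """ratings has columns [user, item, rating]. Keep ratings > 3 as implicit
    feedback, then split each user's interactions 80/10/10."""
    rng = np.random.default_rng(seed)
    implicit = ratings[ratings["rating"] > 3][["user", "item"]]
    train, val, test = [], [], []
    for _, group in implicit.groupby("user"):
        idx = rng.permutation(len(group))
        n_val = max(1, int(0.1 * len(group)))
        n_test = max(1, int(0.1 * len(group)))
        val.append(group.iloc[idx[:n_val]])
        test.append(group.iloc[idx[n_val:n_val + n_test]])
        train.append(group.iloc[idx[n_val + n_test:]])
    return pd.concat(train), pd.concat(val), pd.concat(test)
```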
2311.01343#35 | Collaborative Large Language Model for Recommender Systems | x_{ij}^{uv}. The real-world experiments are based on a job recommendation dataset collected nearline at the Company, where users' clicks on the job ads are logged as the implicit feedback, and users' self-provided biographies x_i^u and the job descriptions x_j^v are collected as the textual features, respectively. The statistics of the datasets are summarized in Table 3 in the Appendix. 4.1.2 Implementation Details. Due to the space limitation, we only discuss CLLM4Rec with the GPT-2 backbone (token embedding dimension 768 and vocabulary size 50,257) in this section; experiments with the T5 backbone are discussed in Appendix B. During the training stage, we first optimize the content LLM as Eq. (5) via language modeling for 10 epochs to warm up the user/item content token embeddings. Then, in the mutually-regularized pretraining stage, we alternately train the collaborative and content LLMs as specified in Eqs. (7) and (8) for 100 epochs. Finally, we conduct the recommendation-oriented finetuning for 150 epochs, where the RecLLM is monitored with the metrics Recall@20, Recall@40, and NDCG@100 calculated on the validation set as with [39]. The RecLLM with the best performance is logged and evaluated on the test set as the final result. Since the mutual-regularization weight λ in Eqs. (7) and (8) is an important hyper-parameter, we first fix its value to the optimal one found by grid search, and then discuss its influence in Section 4.3. # 4.2 Comparison with Baselines 4.2.1 Baselines. To demonstrate the multifaceted superiority of the proposed CLLM4Rec, we include the following ID-based and (L)LM-based RSs as baselines for comparison: # ID-based Baselines. • Multi-VAE [39] is an ID-based collaborative filtering baseline that recommends new items by reconstructing the ratings r_i via a variational auto-encoder (VAE) with multinomial likelihood. • MD-CVAE [40] is a hybrid RS that extends the Multi-VAE by | 2311.01343#34 | 2311.01343#36 | 2311.01343 | [
"2302.13971"
] |
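Since the RecLLM is monitored with Recall@K and NDCG@K, a reference implementation of these metrics in the style of [39] may be useful; this is our own sketch, not the authors' evaluation code:

```python
import numpy as np

def recall_at_k(ranked_items, held_out, k):
    """Fraction of held-out items in the top-k list, normalized by
    min(k, |held_out|) as in Liang et al. [39]."""
    hits = len(set(ranked_items[:k]) & set(held_out))
    return hits / min(k, len(held_out))

def ndcg_at_k(ranked_items, held_out, k):
    held_out = set(held_out)
    dcg = sum(1.0 / np.log2(rank + 2)
              for rank, item in enumerate(ranked_items[:k]) if item in held_out)
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(min(k, len(held_out))))
    return dcg / idcg

print(recall_at_k([5, 9, 2, 7], held_out=[9, 7], k=2),
      ndcg_at_k([5, 9, 2, 7], held_out=[9, 7], k=4))
```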
2311.01343#36 | Collaborative Large Language Model for Recommender Systems | introducing a dual feature VAE on the textual features x_{ij}^{uv} to regularize the reconstruction of r_i in the Multi-VAE. # LM-based Baselines7. • BERT4Rec [41] uses masked language modeling (MLM) as proposed in BERT [32] to learn user/item embeddings for recommendation with a bidirectional self-attention mechanism. • S3Rec [38] extends BERT4Rec by augmenting the MLM with auxiliary tasks such as item attribute prediction, where content features can be fused for self-supervised learning. # LLM-based Baselines. | 2311.01343#35 | 2311.01343#37 | 2311.01343 | [
"2302.13971"
] |
2311.01343#37 | Collaborative Large Language Model for Recommender Systems | â ¢ Llm-CF eliminates the content LLM from CLLM4Rec and the mutually-regularized pretraining step and uses only the collabo- rative LLM and RecLLM for recommendation. â ¢ Llm-FTALL has the same structure as CLLM4Rec, but it fine- tunes the whole network including the vocab embeddings as well as other parts of the pretrained LLM, instead of training only the newly-introduced user/item token embeddings. 7Note that both Bert4Rec and S3Rec are original designed for sequential recommenda- tion. In this paper, we use similar recommendation-oriented finetuning as CLLM4Rec to adapt them to direct recommendation, where item sequences generated from masked interactions are used to predict all hold-out items with multinomial likelihood. | 2311.01343#36 | 2311.01343#38 | 2311.01343 | [
"2302.13971"
] |
2311.01343#38 | Collaborative Large Language Model for Recommender Systems | Conferenceâ 17, July 2017, Washington, DC, USA # Table 1: Comparison between CLLM4Rec and various base- lines with GPT-backbone on three Amazon Review datasets. AM-Beauty Recall@20 Recall@40 NDCG@100 Multi-VAE MD-CVAE BERT4Rec S3Rec 0.1295 0.1472 0.1126 0.1354 0.1720 0.2058 0.1677 0.1789 0.0835 0.0976 0.0781 0.0867 LLM-Scratch LLM-CF LLM-FtAll LLM-FixOrd LLM-PreRec 0.0840 0.1319 0.1335 0.1524 0.1547 0.1265 0.1841 0.1988 0.2219 0.2196 0.0583 0.0855 0.0836 0.1072 0.1051 CLLM4Rec 0.1656 0.2323 0.1118 AM-Toys Recall@20 Recall@40 NDCG@100 Multi-VAE MD-CVAE BERT4Rec S3Rec 0.1076 0.1291 0.0853 0.1064 0.1558 0.1804 0.1375 0.1524 0.0781 0.0844 0.0532 0.0665 LLM-Scratch LLM-CF LLM-FtAll LLM-FixOrd LLM-PreRec 0.0485 0.1027 0.1162 0.1342 0.1308 0.0771 0.1434 0.1542 0.1887 0.1859 0.0362 0.0680 0.0696 0.0889 0.0874 CLLM4Rec 0.1436 0.1933 0.0918 AM-Sports Recall@20 Recall@40 NDCG@100 Multi-VAE MD-CVAE BERT4Rec S3Rec 0.0659 0.0714 0.0521 0.0616 0.0975 0.1180 0.0701 0.0813 0.0446 0.0514 0.0305 0.0438 LLM-Scratch LLM-CF LLM-FtAll LLM-FixOrd LLM-PreRec 0.0362 0.0642 0.0794 0.0901 0.0839 0.0538 0.0966 0.1002 0.1295 0.1248 0.0362 0.0419 0.0424 0.0592 0.0561 CLLM4Rec 0.0926 0.1351 0.0634 | 2311.01343#37 | 2311.01343#39 | 2311.01343 | [
"2302.13971"
] |
2311.01343#39 | Collaborative Large Language Model for Recommender Systems | • LLM-FixOrd has the same structure as CLLM4Rec, but it removes the stochastic item reordering strategy for both the collaborative LLM in pretraining and the RecLLM in finetuning. • LLM-PreRec discards finetuning and ranks the categorical probability from the next-item-token prediction head of the collaborative LLM in the pretraining stage to make recommendations. 4.2.2 Results on the Public Datasets. We first analyze the experimental results on the four public datasets to provide preliminary answers to RQs 1, 2, and 3. From Tables 1 and 2, we can find that the ID-based method, Multi-VAE, remains a strong baseline for collaborative filtering (CF). LLM-CF, the CF backbone of CLLM4Rec, cannot beat Multi-VAE on either the AM-Sports or the AM-Toys dataset, even if the "hard" part of the prompt triggers the reasoning ability of the pretrained LLM. However, when large textual data are available, CLLM4Rec outperforms its ID-based counterpart, MD-CVAE (which tightly couples an item content VAE with the Multi-VAE) | 2311.01343#38 | 2311.01343#40 | 2311.01343 | [
"2302.13971"
] |
2311.01343#40 | Collaborative Large Language Model for Recommender Systems | Conferenceâ 17, July 2017, Washington, DC, USA Table 2: Comparison between CLLM4Rec and various base- lines on the Yelp dataset and the Company dataset. Yelp Recall@20 Recall@40 NDCG@100 Multi-VAE MD-CVAE BERT4Rec S3Rec 0.0526 0.0664 0.0418 0.0563 0.0842 0.1058 0.0724 0.0893 0.0424 0.0497 0.0361 0.0485 LLM-Scratch LLM-CF LLM-FTAll LLM-FixOrd LLM-PreRec 0.0199 0.0541 0.0653 0.0694 0.0639 0.0325 0.0860 0.0989 0.1053 0.1021 0.0159 0.0412 0.0520 0.0524 0.0498 CLLM4Rec 0.0735 0.1149 0.0536 LinkedIn Recall@10 Recall@20 NDCG@10 Two-Tower 0.1186 0.2041 0.0979 M6-Retrieval CLLM4Rec-Emb CLLM4Rec 0.1279 0.1302 0.1427 0.2118 0.2165 0.2398 0.1020 0.1034 0.1199 by a large margin. This is because MD-CVAE uses shallow bag- of-words to represent the textual features, for which pretrained LLMs in CLLM4Rec can provide deeper understanding via their pretrained knowledge. The importance of pretrained knowledge can also be shown by the LLM-Scratch model, which performs the worst among all included baselines. An interesting finding is that, LLM-FTAll, which finetunes the whole model including the pretrained LLM backbone, performs worse than CLLM4Rec, which optimizes only the newly introduced user/item token embeddings. The reason could be that, since the weights of the pretrained LLM are fully optimized, the recommendation-specific corpus is still not enough to adapt the pretrained LLM with good generalization ability for RS. | 2311.01343#39 | 2311.01343#41 | 2311.01343 | [
"2302.13971"
] |
2311.01343#41 | Collaborative Large Language Model for Recommender Systems | Therefore, the cons of degenerating the pretrained knowledge outweigh the pros of introducing RS-specific knowledge. We can also find that LLM-PreRec, which uses the collaborative LLM in the pretraining stage to generate recommendations, is already a strong baseline. This demonstrates the effectiveness of the soft+hard prompting strategy, which facilitates efficient and stable language modeling on recommendation-oriented corpora with heterogeneous tokens. Still, CLLM4Rec performs better than LLM-PreRec, which shows the effectiveness of recommendation-oriented finetuning in adapting the collaborative LLM for efficient recommendations. 4.2.3 Results on the Company Dataset. In the real-world experiments, we compare CLLM4Rec with the two-tower (TT) model utilized in the Company for job recommendations. The TT model is implemented as a two-branch multi-layer perceptron (MLP), where the input user/item embeddings include embeddings extracted from a graph neural network (GNN) learned on the user-job bipartite graph, as well as features extracted from an internal BERT model. In addition, since the textual features are available for almost every user and item, we compare CLLM4Rec with the state-of-the-art LLM-based RS, M6-Retrieval [19], which takes the dimension-reduced last-layer embeddings of user/item descriptions from the M6 Transformer for contrastive recommendations. The results are summarized in Table 2. From Table 2, we can find that CLLM4Rec outperforms the | 2311.01343#40 | 2311.01343#42 | 2311.01343 | [
"2302.13971"
] |
2311.01343#42 | Collaborative Large Language Model for Recommender Systems | Figure 4: Sensitivity analysis w.r.t. the mutual-regularization weight λ, which controls the strength of mutual regularization for CLLM4Rec, on (a) the AM-Beauty dataset, (b) the AM-Toys dataset, (c) the AM-Sports dataset, and (d) the Yelp dataset. shallow TT model by a large margin. However, although the inference latency of CLLM4Rec is significantly improved compared with existing methods due to the introduction of recommendation-oriented finetuning, directly deploying CLLM4Rec online is still infeasible, as the inference budget is higher compared to the TT model. Therefore, we design the CLLM4Rec-Emb baseline, which includes the user/item token embeddings | 2311.01343#41 | 2311.01343#43 | 2311.01343 | [
"2302.13971"
] |
2311.01343#43 | Collaborative Large Language Model for Recommender Systems | Z^{l,u} and Z^{l,v} learned from CLLM4Rec (projected into 128 dimensions) as extra inputs for the TT model, which demonstrates a performance improvement over both the original TT model and the M6-Retrieval model in our offline experiment. This demonstrates the potential application of CLLM4Rec in industrial scenarios where low latency matters. 4.3 Parameter Sensitivity Analysis To further answer RQs 2 and 3, we vary the mutual-regularization weight λ in Eqs. (7), (8), and (10), which controls the strength of mutual regularization, and investigate how it influences the performance of CLLM4Rec. From Fig. 4, we can find that, when λ | 2311.01343#42 | 2311.01343#44 | 2311.01343 | [
"2302.13971"
] |
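To make the CLLM4Rec-Emb baseline concrete, here is a hypothetical two-tower sketch in which the 128-dimensional CLLM4Rec user/item token embeddings are concatenated with existing features as extra inputs; the layer sizes and feature dimensions are illustrative, not those of the production system:

```python
import torch
import torch.nn as nn

class TwoTowerWithCLLMEmb(nn.Module):
    """Two-branch MLP scoring user-job pairs; CLLM4Rec user/item token
    embeddings (projected to 128-d) are appended to the existing features."""
    def __init__(self, base_dim=256, cllm_dim=128, hidden=256):
        super().__init__()
        self.user_tower = nn.Sequential(nn.Linear(base_dim + cllm_dim, hidden),
                                        nn.ReLU(), nn.Linear(hidden, 64))
        self.item_tower = nn.Sequential(nn.Linear(base_dim + cllm_dim, hidden),
                                        nn.ReLU(), nn.Linear(hidden, 64))

    def forward(self, user_feat, user_cllm, item_feat, item_cllm):
        u = self.user_tower(torch.cat([user_feat, user_cllm], dim=-1))
        v = self.item_tower(torch.cat([item_feat, item_cllm], dim=-1))
        return (u * v).sum(dim=-1)   # dot-product relevance score
```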
2311.01343#44 | Collaborative Large Language Model for Recommender Systems | is small, the mutual regularization is weak, and the content LLM cannot provide enough user/item content side information to support the collaborative LLM and the RecLLM. Therefore, the recommendation performance degenerates to a level similar to that of LLM-CF. On the other hand, when λ is too large, the MR loss in Eqs. (7), (8) and (10) dominates, which hinders CLLM4Rec from learning user/item token embeddings via language modeling and finetuning. Generally, for all four datasets, the performance of CLLM4Rec peaks at around λ = 1, which serves as a good starting point when applying the GPT-based CLLM4Rec to new datasets. # 5 CONCLUSION In this paper, we proposed CLLM4Rec, the first method that tightly couples the ID paradigm and the LLM paradigm of RS, which faithfully captures user/item semantics while fully utilizing the encoded knowledge and logical reasoning ability of pretrained LLMs. Specifically, with mutually-regularized pretraining based on the soft+hard prompting strategy, CLLM4Rec can effectively capture the user/item collaborative and content information via language modeling. Furthermore, with recommendation-oriented finetuning, the pretrained knowledge of CLLM4Rec can be fully utilized to efficiently generate recommendations. Extensive experiments show the multi-faceted superiority of CLLM4Rec over the state-of-the-art. REFERENCES [1] Dietmar Jannach, Markus Zanker, Alexander Felfernig, and Gerhard Friedrich. Recommender Systems: An Introduction. Cambridge University Press, 2010. [2] James Bennett, Stan Lanning, et al. The Netflix prize. In KDD CUP, volume 2007, page 35, 2007. [3] Zheng Yuan, Fajie Yuan, Yu Song, Youhua Li, Junchen Fu, Fei Yang, Yunzhu Pan, and Yongxin Ni. | 2311.01343#43 | 2311.01343#45 | 2311.01343 | [
"2302.13971"
] |
2311.01343#45 | Collaborative Large Language Model for Recommender Systems | Where to go next for recommender systems? ID vs. modality- based recommender models revisited. arXiv preprint arXiv:2303.13835, 2023. [4] Andriy Mnih and Russ R Salakhutdinov. Probabilistic matrix factorization. In NeurIPS, volume 20, 2007. [5] Ledell Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston. Starspace: Embed all the things! In AAAI, volume 32, 2018. [6] Yehuda Koren, Steffen Rendle, and Robert Bell. Advances in collaborative filtering. Recommender systems handbook, pages 91â 142, 2021. | 2311.01343#44 | 2311.01343#46 | 2311.01343 | [
"2302.13971"
] |
2311.01343#46 | Collaborative Large Language Model for Recommender Systems | [7] Pasquale Lops, Marco De Gemmis, and Giovanni Semeraro. Content-based recommender systems: State of the art and trends. Recommender systems handbook, pages 73–105, 2011. [8] Yaochen Zhu, Jing Ma, Liang Wu, Qi Guo, Liangjie Hong, and Jundong Li. Path-specific counterfactual fairness for recommender systems. In SIGKDD, pages 3638– | 2311.01343#45 | 2311.01343#47 | 2311.01343 | [
"2302.13971"
] |
2311.01343#47 | Collaborative Large Language Model for Recommender Systems | 3649, 2023. [9] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023. [10] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Å ukasz Kaiser, and Illia Polosukhin. | 2311.01343#46 | 2311.01343#48 | 2311.01343 | [
"2302.13971"
] |
2311.01343#48 | Collaborative Large Language Model for Recommender Systems | Attention is all you need. In NeurIPS, volume 30, 2017. [11] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018. [12] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR, 21(1):5485â 5551, 2020. | 2311.01343#47 | 2311.01343#49 | 2311.01343 | [
"2302.13971"
] |
2311.01343#49 | Collaborative Large Language Model for Recommender Systems | [13] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LlaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [14] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022. [15] Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, and Jundong Li. Knowledge editing for large language models: | 2311.01343#48 | 2311.01343#50 | 2311.01343 | [
"2302.13971"
] |
2311.01343#50 | Collaborative Large Language Model for Recommender Systems | A survey, 2023. [16] Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Jiliang Tang, and Qing Li. Recommender systems in the era of large language models (LLMs). arXiv preprint arXiv:2307.02046, 2023. [17] Julian McAuley and Alex Yang. Addressing complex and subjective product- related queries with customer reviews. In WWW, pages 625â 635, 2016. [18] Yaochen Zhu and Zhenzhong Chen. | 2311.01343#49 | 2311.01343#51 | 2311.01343 | [
"2302.13971"
] |
2311.01343#51 | Collaborative Large Language Model for Recommender Systems | Variational bandwidth auto-encoder for hybrid recommender systems. IEEE Transactions on Knowledge and Data Engi- neering, 35(5):5371â 5385, 2022. [19] Zeyu Cui, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. M6-rec: Generative pretrained language models are open-ended recommender systems. arXiv preprint arXiv:2205.08084, 2022. [20] Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5). In Proceedings of the 16th ACM Conference on Recommender Systems, pages 299â 315, 2022. [21] Jiaxing Qu, Yuxuan Richard Xie, and Elif Ertekin. A language-based recommen- dation system for material discovery. In ICML, 2023. [22] Lei Li, Yongfeng Zhang, and Li Chen. Personalized prompt learning for explain- able recommendation. ACM Transactions on Information Systems, 41(4):1â | 2311.01343#50 | 2311.01343#52 | 2311.01343 | [
"2302.13971"
] |
2311.01343#52 | Collaborative Large Language Model for Recommender Systems | 26, Conferenceâ 17, July 2017, Washington, DC, USA 2023. [23] Yunfan Gao, Tao Sheng, Youlin Xiang, Yun Xiong, Haofen Wang, and Jiawei Zhang. Chat-rec: Towards interactive and explainable llms-augmented recom- mender system. arXiv preprint arXiv:2303.14524, 2023. [24] Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. Large language models are zero-shot rankers for recom- mender systems. arXiv preprint arXiv:2305.08845, 2023. [25] Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, and Ji- Rong Wen. | 2311.01343#51 | 2311.01343#53 | 2311.01343 | [
"2302.13971"
] |
2311.01343#53 | Collaborative Large Language Model for Recommender Systems | Recommendation as instruction following: A large language model empowered recommendation approach. arXiv preprint arXiv:2305.07001, 2023. [26] Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, and Julian McAuley. Large language models as zero-shot conversational recommenders. arXiv preprint arXiv:2308.10053, 2023. [27] Fan Yang, Zheng Chen, Ziyan Jiang, Eunah Cho, Xiaojiang Huang, and Yanbin Lu. | 2311.01343#52 | 2311.01343#54 | 2311.01343 | [
"2302.13971"
] |
2311.01343#54 | Collaborative Large Language Model for Recommender Systems | Palr: Personalization aware llms for recommendation. arXiv e-prints, pages arXivâ 2305, 2023. [28] Jianchao Ji, Zelong Li, Shuyuan Xu, Wenyue Hua, Yingqiang Ge, Juntao Tan, and Yongfeng Zhang. Genrec: Large language model for generative recommendation. arXiv e-prints, pages arXivâ 2307, 2023. [29] Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, et al. Leveraging large language models for pre-trained recommender systems. arXiv preprint arXiv:2308.10837, 2023. [30] Wenyue Hua, Shuyuan Xu, Yingqiang Ge, and Yongfeng Zhang. How to index item ids for recommendation foundation models. arXiv preprint arXiv:2305.06569, 2023. [31] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter- efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021. [32] Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. BERT: pre- training of deep bidirectional transformers for language understanding. In Proceedings of NAACL, pages 4171â 4186, 2019. [33] Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. | 2311.01343#53 | 2311.01343#55 | 2311.01343 | [
"2302.13971"
] |
2311.01343#55 | Collaborative Large Language Model for Recommender Systems | Recent advances in natural language processing via large pre-trained language models: A survey. ACM Computing Surveys, 56(2):1â 40, 2023. [34] Peng Liu, Lemei Zhang, and Jon Atle Gulla. Pre-train, prompt and recommenda- tion: A comprehensive survey of language modelling paradigm adaptations in recommender systems. arXiv preprint arXiv:2302.03735, 2023. [35] Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, et al. How can recommender systems benefit from large language models: A survey. arXiv preprint arXiv:2306.05817, 2023. [36] Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. TallRec: An effective and efficient tuning framework to align large language model with recommendation. arXiv preprint arXiv:2305.00447, 2023. [37] Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In IEEE International Conference on Data Mining, pages 263â 272, 2008. [38] Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, and Ji-Rong Wen. S3-Rec: | 2311.01343#54 | 2311.01343#56 | 2311.01343 | [
"2302.13971"
] |
2311.01343#56 | Collaborative Large Language Model for Recommender Systems | Self-supervised learning for sequen- tial recommendation with mutual information maximization. In CIKM, pages 1893â 1902, 2020. [39] Dawen Liang, Rahul G Krishnan, Matthew D Hoffman, and Tony Jebara. Varia- tional autoencoders for collaborative filtering. In WWW, pages 689â 698, 2018. [40] Yaochen Zhu and Zhenzhong Chen. Mutually-regularized dual collaborative variational auto-encoder for recommendation systems. In WWW, pages 2379â 2387, 2022. [41] Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. BERT4Rec: | 2311.01343#55 | 2311.01343#57 | 2311.01343 | [
"2302.13971"
] |
2311.01343#57 | Collaborative Large Language Model for Recommender Systems | Sequential recommendation with bidirectional encoder representa- tions from transformer. In CIKM, pages 1441â 1450, 2019. Conferenceâ 17, July 2017, Washington, DC, USA Table 3: Statistics of the datasets. #Feat. stands for number of textual features (i.e., # reviews for AM/Yelp datasets, and #user biography+#job descriptions for the LinkedIn dataset. Dataset AM-Beauty AM-Toys AM-Sports Yelp LinkedIn #Int. 94,148 95,420 185,718 292,017 90,173 #Users 10, 553 11, 268 22, 686 28, 330 22, 391 #Items 6, 086 7, 309 12, 301 18, 775 1, 071 Sparsity 99.85% 99.88% 99.93% 99.94% 99.62% #Feat. 70,604 70,784 137,618 224,825 23,362 Table 4: Comparison between CLLM4Rec and various base- lines with T5-backbone on three Amazon Review datasets. | 2311.01343#56 | 2311.01343#58 | 2311.01343 | [
"2302.13971"
] |
2311.01343#58 | Collaborative Large Language Model for Recommender Systems | AM-Beauty Recall@20 Recall@40 NDCG@100 Multi-VAE MD-CVAE BERT4Rec S3Rec 0.1295 0.1472 0.1126 0.1354 0.1720 0.2058 0.1677 0.1789 0.0835 0.0976 0.0781 0.0867 CLLM4Rec-T5 CLLM4Rec 0.1538 0.1656 0.2105 0.2323 0.1052 0.1118 AM-Toys Recall@20 Recall@40 NDCG@100 Multi-VAE MD-CVAE BERT4Rec S3Rec 0.1076 0.1291 0.0853 0.1064 0.1558 0.1804 0.1375 0.1524 0.0781 0.0844 0.0532 0.0665 CLLM4Rec-T5 CLLM4Rec 0.1328 0.1436 0.1840 0.1933 0.0851 0.0918 AM-Sports Recall@20 Recall@40 NDCG@100 Multi-VAE MD-CVAE BERT4Rec S3Rec 0.0659 0.0714 0.0521 0.0616 0.0975 0.1180 0.0701 0.0813 0.0446 0.0514 0.0305 0.0438 CLLM4Rec-T5 CLLM4Rec 0.0845 0.0926 0.1226 0.1351 0.0589 0.0634 # A TECHNICAL DETAILS A.1 Implementation of Soft+Hard Prompting To implement the soft+hard prompting strategy discussed in Section 3.3.2 for decoder-only LLMs such as GPT, we can generate only the "keys" and "values" for the heterogeneous tokens in the prompts ð | 2311.01343#57 | 2311.01343#59 | 2311.01343 | [
"2302.13971"
] |
2311.01343#59 | Collaborative Large Language Model for Recommender Systems | , and use the "query" of the last token as a start to generate the homogeneous tokens of the main texts x_i^{r,m}, x_{ij}^{uv,m} for language modeling. For encoder-decoder-based LLMs such as T5, a natural thought is to input the prompts x_i^{r,p}, x_{ij}^{uv,p} into the encoder, and use the decoder to generate the main texts x_i^{r,m}, x_{ij}^{uv,m}. # A.2 Recommendation-Oriented Finetuning If we denote the multinomial probability obtained from the RecLLM prediction head f_rec as r̂_i^{hold}, and denote the stacked item | 2311.01343#58 | 2311.01343#60 | 2311.01343 | [
"2302.13971"
] |
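For the decoder-only case described in A.1, computing the prompt's keys/values once and then scoring the main text from the last token's query maps naturally onto the past_key_values mechanism of Hugging Face Transformers; the snippet below is an illustrative sketch with placeholder token IDs, not the paper's code:

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt_ids = torch.tensor([[50001, 318, 257, 1332]])   # heterogeneous prompt tokens
main_ids = torch.tensor([[995, 318, 922]])             # homogeneous main-text tokens

with torch.no_grad():
    # 1) Run the prompt once; keep only its keys/values (no loss on prompt tokens).
    prompt_out = model(input_ids=prompt_ids, use_cache=True)
    # 2) Score the main text conditioned on the cached prompt keys/values,
    #    i.e., starting from the query of the last prompt token.
    main_out = model(input_ids=main_ids,
                     past_key_values=prompt_out.past_key_values)
# main_out.logits[:, t] is the next-token distribution after main_ids[:, t],
# conditioned on the full prompt through the cached keys/values.
```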
2311.01343#60 | Collaborative Large Language Model for Recommender Systems | collaborative token embeddings of items interacted by user i as z_i, the rec-step objective of the recommendation-oriented finetuning (regularized with the content LLM) can be formulated as: L_rec-step(Z^{l,u}, Z^{l,v}, θ) = −(r_i^{hold})^⊤ ln f_rec(h_{i,-1}^{rec,p}) [multinomial NLL loss] + (λ/2)(‖z_i^{l,u}‖² + Σ_k ‖z_{i_k}^{l,v}‖²) [prior loss] + (λ_c/2)(‖z_i^{l,u} − z_i^{c,u}‖² + Σ_k ‖z_{i_k}^{l,v} − z_{i_k}^{c,v}‖²) [MR loss with content LLM] + C_rec, (10) where NLL stands for negative log-likelihood, and C_rec | 2311.01343#59 | 2311.01343#61 | 2311.01343 | [
"2302.13971"
] |
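A PyTorch sketch of the three components named in Eq. (10) — the multinomial NLL, a Gaussian prior on the newly introduced token embeddings, and the mutual-regularization (MR) term pulling collaborative embeddings toward their content counterparts; the weighting coefficients and tensor shapes are generic assumptions, not the paper's exact values:

```python
import torch
import torch.nn.functional as F

def rec_step_loss(logits, r_hold, z_collab, z_content, lam_prior=0.1, lam_mr=1.0):
    """logits: (B, J) item scores from the RecLLM head.
    r_hold: (B, J) multi-hot hold-out targets.
    z_collab / z_content: (B, D) collaborative and content token embeddings
    of the involved users/items (content side treated as fixed here)."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -(r_hold * log_probs).sum(dim=-1).mean()                 # multinomial NLL
    prior = 0.5 * lam_prior * z_collab.pow(2).sum(dim=-1).mean()   # Gaussian prior
    mr = 0.5 * lam_mr * (z_collab - z_content.detach()).pow(2).sum(dim=-1).mean()
    return nll + prior + mr                                        # minimized jointly
```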
2311.01343#61 | Collaborative Large Language Model for Recommender Systems | is the constant irrelevant to the optimization. From the form of the multinomial NLL loss we can find that, when finetuning the RecLLM according to Eq. (10), the hidden state h_{i,-1}^{rec,p} output by the CLLM4Rec base model, which can be viewed as the user latent variable summarizing the historical interactions of user i, is encouraged to be similar to the collaborative embeddings of all the interacted items. # B EXPERIMENTS B.1 Statistics of the Datasets The statistics of the datasets are summarized in Table 3. # B.2 Experiments on T5 Backbone B.2.1 Implementation. We adopt the T5-base model8 as the backbone, which has 32,128 vocab tokens (the last 28 tokens are empty), where each token is associated with a 768-dimensional vocab embedding. Model training generally follows similar steps as the model with the GPT-2 backbone described in Section 4.1.2: we first warm up the content LLM as Eq. (5) for ten epochs; then, we conduct the mutually-regularized pretraining as Eqs. (7), (8) for 100 epochs, and conduct finetuning as Eq. (10) for 150 epochs. B.2.2 Results & Analysis. The experimental results are summarized in Table 4. We can find that although CLLM4Rec with the T5 backbone generally outperforms ID-based and shallow LM-based baselines, its performance is consistently worse than CLLM4Rec with the GPT-2 backbone. The reasons for the overall inferior performance of CLLM4Rec with the T5 backbone can be two-fold. First, we note that the vocab embeddings in T5 are initialized with unit variance, whereas embeddings in GPT-2 are initialized with a variance of 0.02. Therefore, the weights and embeddings in T5 have much larger numerical values, which leads to large update steps when errors are backpropagated from the outputs to the prompts. Therefore, the training is not as stable as with the GPT-2 backbone. | 2311.01343#60 | 2311.01343#62 | 2311.01343 | [
"2302.13971"
] |
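The initialization-variance observation above is easy to probe empirically: the snippet below simply measures the scale of the vocab embeddings in publicly released GPT-2 and T5-base checkpoints (it assumes the Hugging Face transformers library and access to the model hub); the released weights reflect the differing initialization conventions noted in the text:

```python
from transformers import GPT2Model, T5EncoderModel

gpt2 = GPT2Model.from_pretrained("gpt2")
t5 = T5EncoderModel.from_pretrained("t5-base")

# Larger-magnitude vocab embeddings lead to larger gradient updates flowing
# back into newly introduced prompt/user/item tokens during finetuning.
print("GPT-2 wte std:", gpt2.wte.weight.std().item())
print("T5 shared-embedding std:", t5.shared.weight.std().item())
```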
2311.01343#62 | Collaborative Large Language Model for Recommender Systems | In addition, in the finetuning stage of the original T5 model, the prompts are generally used to guide the macro behavior of the model. e.g., changing the model behavior from question answering to machine generation via prompt "trans- late English to French". Therefore, another reason for the inferiority of T5 backbone could be the mismatch between the original T5 prompts and the prompts intended to be used in CLLM4Rec. 8https://huggingface.co/t5-base. | 2311.01343#61 | 2311.01343 | [
"2302.13971"
] |
|
2310.19341#0 | Skywork: A More Open Bilingual Foundation Model | arXiv:2310.19341v1 [cs.CL] 30 Oct 2023 # Skywork: A More Open Bilingual Foundation Model Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou* Skywork Team, Kunlun Inc. | 2310.19341#1 | 2310.19341 | [
"2309.05463"
] |
|
2310.19341#1 | Skywork: A More Open Bilingual Foundation Model | # Abstract In this report, we present Skywork-13B, a family of large language models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both English and Chinese texts. This bilingual founda- tion model is the most extensively trained and openly published LLMs of comparable size to date. We introduce a two-stage train- ing methodology using a segmented corpus, targeting general purpose training and then domain-specific enhancement training, re- spectively. We show that our model not only excels on popular benchmarks, but also achieves state of the art performance in Chinese language modeling on diverse domains. Furthermore, we propose a novel leakage detection method, demonstrating that data contamination is a pressing is- sue warranting further investigation by the LLM community. To spur future research, we release Skywork-13B along with check- points obtained during intermediate stages of the training process. We are also releas- ing part of our SkyPile corpus, a collection of over 150 billion tokens of web text, which is the largest high quality open Chinese pre- training corpus to date. We hope Skywork- 13B and our open corpus will serve as a valuable open-source resource to democra- tize access to high-quality LLMs. generating, and translating human language with an unprecedented degree of accuracy and sophistication. However, the proliferation of these models has also been accompanied by a growing trend towards commercialization and a lack of transparency, a phenomenon that is increasingly influencing the dynamics of the open-source community. Historically, the open-source community has thrived on the principles of collaboration, trans- parency, and unrestricted sharing of ideas. However, as the commercial potential of LLMs has been recognized, this openness has begun to diminish. The reality is that many organi- zations only make model checkpoints publicly accessible, while withholding vital information on model reproduction. This practice signifi- cantly hampers the progress of the field. In an effort to revive the spirit of the open- source community and contribute to the on- going dialogue about transparency in AI, we present Skywork-13B: a family of bilingual large language models with 13 billion parameters, trained on a colossal corpus of more than 3.2 trillion tokens drawn from both English and Chinese texts. To our knowledge, our Skywork- 13B is the most thoroughly trained family of open LLMs of comparable size to date. | 2310.19341#0 | 2310.19341#2 | 2310.19341 | [
"2309.05463"
] |
2310.19341#2 | Skywork: A More Open Bilingual Foundation Model | 1 # Introduction Natural Language Processing (NLP), a vital branch of artificial intelligence, has experienced a transformative surge in recent years. Pivotal to this revolution has been the advent and ad- vancement of large language models (LLMs) (Ouyang et al., 2022; OpenAI, 2023; Bubeck et al., 2023; Chowdhery et al., 2022; Anil et al., 2023; Touvron et al., 2023a,b). These complex computational structures, composed of billions of parameters, are capable of understanding, In this technical report, we offer a compre- hensive disclosure of the Skywork-13B devel- opmental journey. We detail the composition of our training data, provide insights into the evolutionary trajectory of the modelâ s abilities during training, and share methodologies that could be employed to enhance model ability in specific domains. We believe that such an open approach not only aids in the reproducibility of our work but also provides a valuable re- source for other researchers seeking to explore and expand the capabilities of large language models. This technical report is also a call to â Email: {forename}.{surname}@kunlun-inc.com 1 action for renewed transparency in the field of NLP. Through it, we hope to inspire a return to a more collaborative, open-source community, where progress is not hampered by commer- cial considerations but propelled by collective intelligence and shared wisdom. Our contributions are the following: | 2310.19341#1 | 2310.19341#3 | 2310.19341 | [
"2309.05463"
] |
2310.19341#3 | Skywork: A More Open Bilingual Foundation Model | â ¢ We release Skywork-13B1, a family of LLMs that is the most extensively trained and openly published LLMs of comparable size to date. Our Skywork-13B family includes 1) Skywork-13B-Base, a strong foundation model with state of the art Chinese language modeling capability, and 2) Skywork-13B- Chat, a fined-tuned version optimized for conversation2. â ¢ We disclose detailed information on the training process and data composition. We also release intermediate checkpoints, which provide a valuable resource for understand- ing how the modelâ s capabilities develop over the course of training. It enables other re- searchers to leverage these checkpoints for their specific use-cases. | 2310.19341#2 | 2310.19341#4 | 2310.19341 | [
"2309.05463"
] |
2310.19341#4 | Skywork: A More Open Bilingual Foundation Model | â ¢ We release a portion of our high quality training corpus, totaling more than 150 bil- lion tokens. To our knowledge, this is the largest open Chinese corpus for language model pre-training to date. â ¢ We develop a novel method that detects the level of in-domain data usage during the training stage. To facilitate reproduction of the experiments presented in this report, we have released the relevant data. # 2 Methodology 2.1 Two Pre-training Stages In order to train Skywork-13B, we constructed SkyPile (see Section 3.1), a massive training corpus primarily constituted by publicly acces- sible web pages. We identified a small subset of SkyPile, encompassing exercises and solu- tions that span a broad spectrum of subjects from primary to graduate school. This includes 1Github repository: https://github.com/ SkyworkAI/Skywork. 2In this technical report we focus on the development of the base model. Details on Skywork-13B-Chat can be found in our Github repository. 2 | 2310.19341#3 | 2310.19341#5 | 2310.19341 | [
"2309.05463"
] |
2310.19341#5 | Skywork: A More Open Bilingual Foundation Model | coding problems, national exam questions, text- book exercises, and others. Given the majority of these exercises are STEM-related, we hence- forth refer to this subset and its complement as SkyPile-STEM and SkyPile-Main, respectively. Rather than training the Skywork-13B foun- dation model directly on SkyPile as a whole, we adopted a two-stage training approach. The first stage, which constitutes the primary pre- involves training the model training phase, from scratch on SkyPile-Main. In the sec- ond stage, our Skywork-13B is enriched with STEM-related domain knowledge and problem- solving skills through continual pre-training on SkyPile-STEM. To circumvent the potential issue of catastrophic forgetting, this continual pre-training is performed on a mix of SkyPile- STEM and SkyPile-Main, rather than exclu- sively on SkyPile-STEM. The decision to segregate Stage-1 and Stage- 2 pre-training serves a dual purpose. Firstly, we acknowledge that a significant proportion of the samples from SkyPile-STEM are, by their nature, supervised data. Those data are closely related to popular benchmarks such as CEVAL (Huang et al., 2023), MMLU (Hendrycks et al., 2021) and GSM8K (Cobbe et al., 2021), and can be utilized in a supervised fine-tuning (SFT) process to directly enhance model performance on related downstream tasks. In this context, the separation between Stage-1 and Stage-2 training enables us to more effectively assess the impacts of general-purpose pre-training (on web texts) and targeted pre-training (on in- domain/supervised data). Such insights could inform future data collection and compilation strategies for foundational model training. Secondly, by restricting first stage pre- training to general-purpose data, we are able to produce a version of foundation model as an alternative to the one with targeted enhance- ment. While the latter demonstrates superior performance on certain downstream tasks, it is less capable in language modeling of natural texts. We posit that this alternative is a valu- able contribution to the community, given its potential to excel in applications that do not require STEM-related competencies. 2.2 Training Progress Monitoring It is of vital importance to monitor and assess progress made during pre-training in real-time. | 2310.19341#4 | 2310.19341#6 | 2310.19341 | [
"2309.05463"
] |
2310.19341#6 | Skywork: A More Open Bilingual Foundation Model | Existing methods such as monitoring training loss and benchmark results on intermediate checkpoints, however, have their limitations. The main issue of monitoring training loss lies in that its effectiveness comes into question when considering the potential of overfitting. The training loss is equivalent to validation loss only if the training data is utilized exactly once (i.e., in one epoch). Yet, in practical scenarios of training LLMs, high-quality data often go through the training process multi- ple times (Taylor et al., 2022; Touvron et al., 2023a; Rozière et al., 2023; Gunasekar et al., 2023; Li et al., 2023b). Besides, even after ex- plicit de-duplication, there may still exist signif- icant amount of duplicated data in the training set (Soboleva et al., 2023; Abbas et al., 2023). In either cases, solely relying on training loss can lead to overlooking the issue of overfitting, thereby producing overly optimistic estimates of model performance. The top left subplot in Figure 3 illustrates the trajectory of the pre-training loss for our Skywork-13B model. Consistent with findings reported in (Touvron et al., 2023a,b; Baichuan Inc., 2023), the loss demonstrates a steady decline throughout the training process. However, an observation not disclosed in these cited works is the behavior of the validation loss on held-out sets. From the figure it can be clearly seen that the validation losses seem to level off as training approaches its final stages. Benchmarking based on intermediate check- points is another common monitoring approach (Touvron et al., 2023a; Baichuan Inc., 2023). Nevertheless, it presents several challenges. Firstly, there is a high variance in benchmark results, which can lead to unstable and unreli- able assessments of training progress. Secondly, benchmark results are not sensitive to minor progress in training. This insensitivity makes it difficult to accurately track gradual improve- ments during the training process. Besides, weaker models do not follow instructions well. Hence benchmark results may not accurately reflect their true learning progress or poten- tial. Finally, an inconvenience posed by most benchmarks is the necessity for model genera- tion. This process is notably resource-intensive, demanding substantial computational power. | 2310.19341#5 | 2310.19341#7 | 2310.19341 | [
"2309.05463"
] |
2310.19341#7 | Skywork: A More Open Bilingual Foundation Model | # During the pre-training of Skywork-13B, we 3 # Validation Loss vs. Average Task Metric 60- 55- 50- 45 - Average Task Metric 40- 35 - 2 28 23 22 23 28 19 18 Validation Loss Figure 1: Validation loss on English web texts vs. average task metric during the pre-training of Skywork-13B. The tasks include BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2019), Winogrande (Sakaguchi et al., 2021), TriviaQA (Joshi et al., 2017) and RACE (Lai et al., 2017). embrace the method of monitoring the language modeling loss across numerous reserved valida- tion sets, each reflecting a distinct data dis- tribution. More specifically, we have created separate validation sets for code, academic pub- lications, social media posts, web texts in Chi- nese and English, among others. Conventional monitoring metrics are also utilized, but they serve merely as supplementary tools. In Figure 1 we plot the curve of language model vali- dation loss on English web texts against the average metric of several English downstream tasks. It is apparent that there is a very high correlation between the two quantities, showing that validation loss can serve as a valid proxy metric for downstream task performance. In the context of LLM pre-training, this approach also yields several other benefits: | 2310.19341#6 | 2310.19341#8 | 2310.19341 | [
"2309.05463"
] |
2310.19341#8 | Skywork: A More Open Bilingual Foundation Model | â ¢ Ease of construction: Crafting multiple val- idation sets is a relatively effortless task. This enables the evaluation of a modelâ s lan- guage modeling performance across varied domains. â ¢ Simplicity in computation: Calculation of validation loss is straightforward, signifi- cantly reducing the computational and lo- gistical overhead associated with tracking model training. â ¢ High sensitivity to training progress: Valida- tion loss is finely attuned to the progression of training, thereby offering a more detailed perspective on how models evolve and im- prove over time. | 2310.19341#7 | 2310.19341#9 | 2310.19341 | [
"2309.05463"
] |
2310.19341#9 | Skywork: A More Open Bilingual Foundation Model | ⠢ Model-agnosticism: Validation loss is indif- ferent to the composition of the training corpus or the model architecture. It allows for comparison not only between different checkpoints produced within a single train- ing session, but also across varied models from the community. This ensures a consis- tent and equitable basis for model compari- son. Note that monitoring the validation loss on a held-out set sharing the same distribution as the training set is a ubiquitous practice in machine learning. However, the observation of validation loss across multiple held-out sets, each with deliberate, unique distributions, is not common. We also note that the perspective asserting the primacy of language modeling loss as the paramount performance metric for models is not a recent revelation. This principle has been either explicitly or implicitly adopted in a number of research studies, as exemplified in (Kaplan et al., 2020; Hoffmann et al., 2022; Anil et al., 2023; Xia et al., 2023; Delétang et al., 2023). # 3 Pre-training 3.1 SkyPile Corpus In order to train Skywork-13B, we build SkyP- ile, a vast, high quality corpus comprising more than 6 trillion tokens. A segment of the corpus, comprising over 150 billion tokens of web text, has been open sourced to facilitate research and training on Chinese LLMs3. Our SkyPile is an amalgamation of several sources, the overwhelming majority of which is gleaned from publicly accessible channels. Numerous prior research works, exemplified by initiatives such as LLaMA (Touvron et al., 2023a) and RefinedWeb (Penedo et al., 2023), have substantiated the notion that publicly ac- cessible web data can yield exceptionally high- quality LLMs. In alignment with this empirical evidence, we subscribe to the premise of leverag- ing publicly accessible webpages as our primary source for training data. 3huggingface.co/datasets/Skywork/ SkyPile-150B | 2310.19341#8 | 2310.19341#10 | 2310.19341 | [
"2309.05463"
] |
2310.19341#10 | Skywork: A More Open Bilingual Foundation Model | 4 The construction of SkyPile is characterized by a dedicated emphasis on two primary dimen- sions: text quality and information distribution. Our data processing pipeline, inspired by (Wen- zek et al., 2020; Touvron et al., 2023a; Penedo et al., 2023), incorporates the following stages: â ¢ Structural Extraction: Due to the pre- dominant source of our dataset being pub- licly accessible web pages, the objective of the first stage is the extraction of pertinent content while concurrently expunging extra- neous textual elements that are deemed non- contributory to the training of our language model, e.g. these superfluous components in- clude navigational bars, site-specific contact information, disjunctive title texts devoid of substantive content, etc. Subsequent to this culling process, the retained informa- tion predominantly consists of contiguous, medium to long-form textual passages. In the pursuit of cultivating a profoundly adept LLM, the modelâ s exposure must encompass a diverse array of content spanning an extensive spec- trum of domains. Prior endeavors within the field have entailed the task of assigning cat- egorical labels to each individual document or webpage, thereby manually dictating the composition of the training corpus. How- ever, we posit that the corpus employed for LLM training has burgeoned to such an ex- tent that the knowledge it encapsulates can not be compartmentalized discretely. Conse- quently, eschewing a label-centric approach, our methodology centers on benchmarking the semantic affinities existing between tex- tual segments, thereby identifying and omit- ting those text blocks characterized by an exceedingly high recurrence rate. Deduplication has demonstrated its remarkable efficacy in en- hancing the overall quality of a training cor- pus, and it has found extensive application in virtually all prominent datasets (Hernan- dez et al., 2022; Kandpal et al., 2022; Abbas et al., 2023; Lee et al., 2022). Within the framework of SkyPile, we regard deduplica- tion as an integral component of the Distri- bution Filtering process. When considering the broader perspective, it becomes evident that duplication constitutes a paramount factor influencing the semantic distribution of a corpus. | 2310.19341#9 | 2310.19341#11 | 2310.19341 | [
"2309.05463"
] |
2310.19341#11 | Skywork: A More Open Bilingual Foundation Model | Consequently, the techniques and strategies we employed during the dis- tribution filtering phase autonomously elim- inated a substantial portion of duplicated content. In this phase, we deploy the CCNet (Wenzek et al., 2020) pipeline to perform two critical filtration tasks: the elimination of content of inferior quality and the exclusion of pages that are neither in English nor Chinese. We trained a binary classifier that predicts the likelihood that a given webpage is suitable for inclu- sion as a reference within the Wikipedia cor- pus. The outcome of this stage is organized into distinct quality-based categories, and we retain exclusively the high quality groups, opting to discard the remaining groups in its entirety. Quality Filtering: Above we described our pre-processing pipeline for natural text. As for Github content, we em- ploy an approach that is similar to (Together Computer, 2023). We have devised a collection of straightforward yet efficacious heuristics, en- compassing criteria such as line length filtration and alphanumeric thresholds, designed to dis- cern and exclude content of low quality. Our cri- teria are specifically oriented toward enhancing content quality, as opposed to merely curbing its volume. Notably, in contrast to prevailing practices that involve the wholesale removal of a significant portion of json, xml, yaml, and html content, we have made a deliberate choice to retain a judiciously proportionate represen- tation of these data formats. Note that in pursuit of harmonizing the modelâ s proficiency in both English and Chi- nese, we include in SkyPile a curated high- quality parallel corpora. This data is meticu- lously structured to pair a complete English paragraph with its corresponding Chinese coun- terpart, ensuring a seamless alignment of lin- guistic capabilities between the two languages. 3.2 Training Data Composition Our Skywork-13B is pre-trained for 3.2 trillion tokens, sampled from SkyPile. Texts from cer- tain sources are deemed as of high quality, e.g. | 2310.19341#10 | 2310.19341#12 | 2310.19341 | [
"2309.05463"
] |
2310.19341#12 | Skywork: A More Open Bilingual Foundation Model | 5 Category Percentage English Webpages Books Academic Papers Encyclopedia Miscellany 39.8% 3.6% 3.0% 0.5% 2.9% Chinese Webpages Social Media Encyclopedia Miscellany 30.4% 5.5% 0.8% 3.1% Other Lang. Encyclopedia 2.4% Code Github 8.0% Table 1: Breakdown of training data in Stage-1 pre-training of Skywork-13B. Wikipedia, hence have undergone upsampling. However, we generally stick to the rule that the number of repetition does not exceed five, as is recommended by recent studies (Taylor et al., 2022; Muennighoff et al., 2023). We report in Table 1 a breakdown of the constituent components of the training tokens during Stage-1 pre-training. The training to- kens are primarily composed of English and Chinese texts, constituting 49.8% and 39.6% of the data, respectively. Code contributes 8.0% to the total, with texts in other languages ac- counting for the remaining 2.4%. | 2310.19341#11 | 2310.19341#13 | 2310.19341 | [
"2309.05463"
] |
2310.19341#13 | Skywork: A More Open Bilingual Foundation Model | The category labeled as â miscellanyâ encompasses a diverse range of texts, including but not limited to, le- gal articles, court documents, company annual reports, and classical literature. # 3.3 Tokenizer We tokenize the data using byte-pair encoding (BPE) as implemented in SentencePiece (Kudo and Richardson, 2018), following the approach of LLaMA (Touvron et al., 2023a). Since our model is intended to be English-Chinese bilin- gual, we extend the original vocabulary of LLaMA, which primarily consists of latin-based words and subwords, with frequently used Chi- nese characters and words. Specifically, we add 8000 single-character tokens from BERTâ s vocabulary (Devlin et al., 2019) to LLaMAâ s vocabulary. We further expand the vocabu- lary with 25k frequent Chinese multi-character words. This results in a total vocabulary size of 65,536 tokens, of which 17 are reserved as # special symbols. As in LLaMA, we split all numbers into indi- vidual digits, and fall back to bytes to decom- pose unknown UTF-8 characters. Category Size Latin based words & subwords Chinese characters & Unicode symbols Chinese words Reserved symbols 32000 8000 25519 17 Total 65536 Table 2: Breakdown of the vocabulary used in Skywork-13B. 3.4 Architecture Our Skywork-13B is based on the transformer architecture (Vaswani et al., 2017), consisting of stacks of transformer-decoder layers. In con- trast to the original transformer model, we have incorporated several modifications, inspired by LLaMA (Touvron et al., 2023a,b). Our pre- liminary experiments, as illustrated in Figure 2, validate these changes, demonstrating the improved performance they confer. Details on this experiment can be found in Appendix A. While our network architecture takes after the LLaMA model to a great extent, there ex- ists a notable difference in our preference for a deeper, yet narrower, network. A comparative exploration of the Skywork-13B and LLaMA2- 13B network configurations is presented in Ta- ble 3. The specific modifications made are de- scribed in detail below. | 2310.19341#12 | 2310.19341#14 | 2310.19341 | [
"2309.05463"
] |
2310.19341#14 | Skywork: A More Open Bilingual Foundation Model | ⠢ Positional Embedding: We use Rotary Positional Embedding (RoPE) (Su et al., 2022), that was motivated by its extensive adoption in various prominent large lan- guage models, such as LLaMA and PaLM, as well as its demonstrated effectiveness in extending the length of context windows, as evidenced by recent studies (Chen et al., 2023; Rozière et al., 2023; Xiong et al., 2023). | 2310.19341#13 | 2310.19341#15 | 2310.19341 | [
"2309.05463"
] |
2310.19341#15 | Skywork: A More Open Bilingual Foundation Model | â ¢ Layer Normalization: We replaced the conventional layer normalization with RM- SNorm (Zhang and Sennrich, 2019). Addi- tionally, we adopted pre-normalization in each layer instead of post-normalization, which has been shown to enhance the train- ing stability of transformer models. 6 2.4 - â GPT-7B â LLaMA-7B 2.3 - 2.2 - 2.1- 2.0 - 1.9 - Training Loss 1.8 - 1.7 - 16-1 1 1 1 1 i} 50 100 150 200 Tokens (B) Figure 2: Preliminary Experiments: Comparison of conventional GPT architecture and more recent LLaMA architecture. For each of the two trans- former variants, a model with 7 billion parameters is trained from Scratch on 200 Billion Tokens. The plot clearly shows that the LLaMA architecture achieves a lower training loss than GPT, demon- strating the formerâ | 2310.19341#14 | 2310.19341#16 | 2310.19341 | [
"2309.05463"
] |
2310.19341#16 | Skywork: A More Open Bilingual Foundation Model | s superiority. â ¢ Activation: We employed the SwiGLU acti- vation function (Shazeer, 2020). In line with established conventions in prior studies, we reduced the dimension of the feed-forward network (FFN) from four times the hidden size to eight-thirds of the hidden size. This adjustment was made to maintain parity be- tween the total parameters in a layer and those in the vanilla transformer layer. LLaMA2-13B Skywork-13B Vocab. Size Hidden Dim. FFN Dim. Head Dim. Num. Heads Num. Layers 32,000 5,120 13,696 128 40 40 65,536 4,608 12,288 128 36 52 Seq. Len. #Tokens per Batch Peak LR Minimum LR 4,096 4M 3e-4 3e-5 4,096 16M 6e-4 6e-5 Table 3: Comparisons in architecture and important hyper-parameters of Skywork-13B and LLaMA2- 13B. 3.5 Infrastructure Our Skywork-13B is trained on a cluster of 64 NVIDIA-HGX-A800 nodes, a total of 512 A800- 80G SXM GPUs. Each node in the cluster is outfitted with high-speed 400GB/s NVLinks for intra-node communication and an 800Gb/s RoCE network for inter-node connectivity. | 2310.19341#15 | 2310.19341#17 | 2310.19341 | [
"2309.05463"
] |