id: stringlengths 12–15
title: stringlengths 8–162
content: stringlengths 1–17.6k
prechunk_id: stringlengths 0–15
postchunk_id: stringlengths 0–15
arxiv_id: stringlengths 10–10
references: sequencelengths 1–1
2310.06775#99
Conceptual Framework for Autonomous Cognitive Entities
World ethics. Wadsworth/Thomson Learning. [43] Thilo Hagendorff. 2022. A virtue-based framework to support putting AI ethics into practice. Philosophy & Technology 35, 3 (2022), 55. [44] Kyle Hamilton, Aparna Nayak, Bojan Božić, and Luca Longo. 2022. Is neuro-symbolic AI meeting its promises in natural language processing?
2310.06775#98
2310.06775#100
2310.06775
[ "1712.05474" ]
2310.06775#100
Conceptual Framework for Autonomous Cognitive Entities
A structured review. Semantic Web Preprint (2022), 1–42. [45] Stevan Harnad. 2003. Can a machine be conscious? How? Journal of Consciousness Studies 10, 4-4 (2003), 69–75. [46] J Hawkins and S Blakeslee. 2007. On Intelligence (p. 272). Henry Holt and Company (2007). [47] Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. 2021.
2310.06775#99
2310.06775#101
2310.06775
[ "1712.05474" ]
2310.06775#101
Conceptual Framework for Autonomous Cognitive Entities
Meta-learning in neural networks: A survey. IEEE transactions on pattern analysis and machine intelligence 44, 9 (2021), 5149–5169. [48] Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. 2019. Risks from learned optimization in advanced machine learning systems. arXiv preprint arXiv:1906.01820 (2019). [49] Max Jaderberg, Wojciech M Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castaneda, Charles Beattie, Neil C Rabinowitz, Ari S Morcos, Avraham Ruderman, et al. 2019. Human-level performance in 3D multiplayer games with population-based reinforcement learning. Science 364, 6443 (2019), 859–865. [50] Mohsen Jamali, Ziv M Williams, and Jing Cai. 2023.
2310.06775#100
2310.06775#102
2310.06775
[ "1712.05474" ]
2310.06775#102
Conceptual Framework for Autonomous Cognitive Entities
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain. arXiv preprint arXiv:2309.01660 (2023). [51] Michael Janner, Sergey Levine, William T Freeman, Joshua B Tenenbaum, Chelsea Finn, and Jiajun Wu. 2018. Reasoning about physical interactions with object-oriented prediction and planning. arXiv preprint arXiv:1812.10972 (2018). [52] Davinder Kaur, Suleyman Uslu, and Arjan Durresi. 2021.
2310.06775#101
2310.06775#103
2310.06775
[ "1712.05474" ]
2310.06775#103
Conceptual Framework for Autonomous Cognitive Entities
Requirements for trustworthy artificial intelligence – a review. In Advances in Networked-Based Information Systems: The 23rd International Conference on Network-Based Information Systems (NBiS-2020) 23. Springer, 105–115. [53] Diederik P Kingma, Max Welling, et al. 2019. An introduction to variational autoencoders. Foundations and Trends® in Machine Learning 12, 4 (2019), 307–
2310.06775#102
2310.06775#104
2310.06775
[ "1712.05474" ]
2310.06775#104
Conceptual Framework for Autonomous Cognitive Entities
392. [54] Barbara Kitchenham, Stuart Charters, et al. 2007. Guidelines for performing systematic literature reviews in software engineering. [55] Lawrence Kohlberg. 1981. The philosophy of moral development: Moral stages and the idea of justice. Vol. 1. San Francisco: Harper & Row. [56] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022.
2310.06775#103
2310.06775#105
2310.06775
[ "1712.05474" ]
2310.06775#105
Conceptual Framework for Autonomous Cognitive Entities
Large language models are zero-shot reasoners. Advances in neural information processing systems 35 (2022), 22199–22213. [57] Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Matt Deitke, Kiana Ehsani, Daniel Gordon, Yuke Zhu, et al. 2017. Ai2-thor: An interactive 3d environment for visual ai. arXiv preprint arXiv:1712.05474 (2017). [58] Mary Lacity and Leslie Willcocks. 2015.
2310.06775#104
2310.06775#106
2310.06775
[ "1712.05474" ]
2310.06775#106
Conceptual Framework for Autonomous Cognitive Entities
Robotic process automation: the next transformation lever for shared services. London School of Economics Outsourcing Unit Working Papers 7 (2015), 1–35. [59] John E Laird, Nate Derbinsky, and Jonathan Voigt. 2011. Performance evaluation of declarative memory systems in Soar. In Proc. of the 20th Behavior Representation in Modeling & Simulation Conf, Vol. 33. Citeseer, 40. [60] John E Laird, Allen Newell, and Paul S Rosenbloom. 1987. Soar: An architecture for general intelligence. Artificial intelligence 33, 1 (1987), 1–64. [61] Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. 2019. Faithful and customizable explanations of black box models. In
2310.06775#105
2310.06775#107
2310.06775
[ "1712.05474" ]
2310.06775#107
Conceptual Framework for Autonomous Cognitive Entities
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 131–138. [62] Hoang Le, Nan Jiang, Alekh Agarwal, Miroslav Dudík, Yisong Yue, and Hal Daumé III. 2018. Hierarchical imitation and reinforcement learning. In International conference on machine learning. PMLR, 2917–2926. [63] Yann LeCun. 2022. A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27. Open Review 62 (2022). [64] Joel Z Leibo, Edward Hughes, Marc Lanctot, and Thore Graepel. 2019. Autocurricula and the emergence of innovation from social interaction:
2310.06775#106
2310.06775#108
2310.06775
[ "1712.05474" ]
2310.06775#108
Conceptual Framework for Autonomous Cognitive Entities
A manifesto for multi-agent intelligence research. arXiv preprint arXiv:1903.00742 (2019). [65] Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, and Shane Legg. 2017. AI safety gridworlds. arXiv preprint arXiv:1711.09883 (2017). [66] Kenneth Li, Aspen K Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2022. Emergent world representations: Exploring a sequence model trained on a synthetic task. arXiv preprint arXiv:2210.13382 (2022). [67] Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, and Ji-Rong Wen. 2023.
2310.06775#107
2310.06775#109
2310.06775
[ "1712.05474" ]
2310.06775#109
Conceptual Framework for Autonomous Cognitive Entities
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit. arXiv preprint arXiv:2306.05212 (2023). [68] Jason Xinyu Liu, Ziyi Yang, Ifrah Idrees, Sam Liang, Benjamin Schornstein, Stefanie Tellex, and Ankit Shah. 2023. Lang2LTL: Translating Natural Language Commands to Temporal Robot Task Specification. arXiv preprint arXiv:2302.11649 (2023). [69] Jieyi Long. 2023. Large Language Model Guided Tree-of-Thought. arXiv preprint arXiv:2305.08291 (2023). [70] Nunzio Lorè and Babak Heydari. 2023. Strategic Behavior of Large Language Models:
2310.06775#108
2310.06775#110
2310.06775
[ "1712.05474" ]
2310.06775#110
Conceptual Framework for Autonomous Cognitive Entities
Game Structure vs. Contextual Framing. arXiv:2309.05898 [cs.GT] [71] Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. 2018. Deepproblog: Neural probabilistic logic programming. Advances in neural information processing systems 31 (2018). [72] Elwin Marg. 1995. DESCARTES' ERROR: emotion, reason, and the human brain.
2310.06775#109
2310.06775#111
2310.06775
[ "1712.05474" ]
2310.06775#111
Conceptual Framework for Autonomous Cognitive Entities
Optometry and Vision Science 72, 11 (1995), 847–848. [73] Abraham Maslow. 1974. A theory of human motivation. Lulu.com. [74] Thomas Miconi, Kenneth Stanley, and Jeff Clune. 2018. Differentiable plasticity: training plastic neural networks with backpropagation. In International Conference on Machine Learning. PMLR, 3559–3568. [75] Earl K Miller and Jonathan D Cohen. 2001.
2310.06775#110
2310.06775#112
2310.06775
[ "1712.05474" ]
2310.06775#112
Conceptual Framework for Autonomous Cognitive Entities
An integrative theory of prefrontal cortex function. Annual review of neuroscience 24, 1 (2001), 167–202. [76] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529–
2310.06775#111
2310.06775#113
2310.06775
[ "1712.05474" ]
2310.06775#113
Conceptual Framework for Autonomous Cognitive Entities
533. [77] Stephen H Muggleton, Dianhuan Lin, Niels Pahlavi, and Alireza Tamaddoni-Nezhad. 2014. Meta-interpretive learning: application to grammatical inference. Machine learning 94 (2014), 25–49. [78] H Nii. 1986. Blackboard systems: Blackboard application systems, blackboard systems from a knowledge engineering perspective. The AI Magazine (1986), 82–106. [79] Andrew M Nuxoll and John E Laird. 2007.
2310.06775#112
2310.06775#114
2310.06775
[ "1712.05474" ]
2310.06775#114
Conceptual Framework for Autonomous Cognitive Entities
Extending cognitive architecture with episodic memory. In AAAI. 1560–1564. [80] United States. Defense Science Board. Task Force on the Role of Autonomy in DoD Systems. 2012. Task Force Report: The Role of Autonomy in DoD Systems. Office of the Under Secretary of Defense for Acquisition, Technology, and . . . . [81] Mark Petticrew and Helen Roberts. 2008. Systematic reviews in the social sciences: A practical guide. John Wiley & Sons. [82] VS Ramachandran, Sandra Blakeslee, and Raymond J Dolan. 1998. Phantoms in the brain probing the mysteries of the human mind. Nature 396, 6712 (1998), 639–640. [83] Judith Reeves-Stevens. 2002. Prime Directive. Simon and Schuster. [84] Chris Richardson. 2018.
2310.06775#113
2310.06775#115
2310.06775
[ "1712.05474" ]
2310.06775#115
Conceptual Framework for Autonomous Cognitive Entities
Microservices patterns: with examples in Java. Simon and Schuster. [85] Manel Rodriguez-Soto, Marc Serramia, Maite Lopez-Sanchez, and Juan Antonio Rodriguez-Aguilar. 2022. Instilling moral value alignment by means of multi-objective reinforcement learning. Ethics and Information Technology 24, 1 (2022), 9. [86] Robert M Sapolsky. 2017. Behave: The biology of humans at our best and worst.
2310.06775#114
2310.06775#116
2310.06775
[ "1712.05474" ]
2310.06775#116
Conceptual Framework for Autonomous Cognitive Entities
Penguin. [87] Matthias Scheutz. 2016. The need for moral competency in autonomous agent architectures. Fundamental issues of artificial intelligence (2016), 517–527. [88] Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. 2020. Mastering atari, go, chess and shogi by planning with a learned model. Nature 588, 7839 (2020), 604–609. [89] Fabian Schrodt, Jan Kneissler, Stephan Ehrenfeld, and Martin V Butz. 2017.
2310.06775#115
2310.06775#117
2310.06775
[ "1712.05474" ]
2310.06775#117
Conceptual Framework for Autonomous Cognitive Entities
Mario becomes cognitive. Topics in cognitive science 9, 2 (2017), 343–373. [90] Douglas Schuler and Aki Namioka. 1993. Participatory design: Principles and practices. CRC Press. [91] David Shapiro. 2021. Natural language cognitive architecture: A prototype artificial general intelligence: Paperback. https://www.barnesandnoble.com/w/natural-language-cognitive-architecture-david-shapiro/1139957470 [92] David Shapiro. 2022. Benevolent by Design: Six words to safeguard humanity.
2310.06775#116
2310.06775#118
2310.06775
[ "1712.05474" ]
2310.06775#118
Conceptual Framework for Autonomous Cognitive Entities
Barnes and Noble Press. [93] David Shapiro. 2022. MARAGI. https://www.maragi.io/home. (Accessed on 08/29/2023). [94] David Shapiro. 2022. Symphony of Thought: Orchestrating Artificial Cognition. Barnes and Noble Press. [95] Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023.
2310.06775#117
2310.06775#119
2310.06775
[ "1712.05474" ]
2310.06775#119
Conceptual Framework for Autonomous Cognitive Entities
Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366 (2023). [96] Yoav Shoham and Kevin Leyton-Brown. 2008. Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge University Press. [97] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529, 7587 (2016), 484–489. [98] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. 2018. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 6419 (2018), 1140–
2310.06775#118
2310.06775#120
2310.06775
[ "1712.05474" ]
2310.06775#120
Conceptual Framework for Autonomous Cognitive Entities
1144. [99] William Stallings. 1987. Handbook of computer-communications standards; Vol. 1: the open systems interconnection (OSI) model and OSI-related standards. Macmillan Publishing Co., Inc. [100] K Sudhir. 2016. The exploration-exploitation tradeoff and efficiency in knowledge production. Marketing Science 35, 1 (2016), 1–9. [101] Theodore Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas L Griffiths. 2023.
2310.06775#119
2310.06775#121
2310.06775
[ "1712.05474" ]
2310.06775#121
Conceptual Framework for Autonomous Cognitive Entities
Cognitive Architectures for Language Agents. arXiv preprint arXiv:2309.02427 (2023). [102] Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. 2023. Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. arXiv:2305.03047 [cs.LG] [103] Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. 2023. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047 (2023). [104] Richard S. Sutton and Andrew G. Barto. 2018.
2310.06775#120
2310.06775#122
2310.06775
[ "1712.05474" ]
2310.06775#122
Conceptual Framework for Autonomous Cognitive Entities
Reinforcement Learning: An Introduction (second ed.). The MIT Press. http://incompleteideas.net/book/the-book-2nd.html [105] Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT press. [106] Kazuhiro Takemoto. 2023. The Moral Machine Experiment on Large Language Models. arXiv:2309.05958 [cs.CL] [107] A Tanenbaum, D Wetherall, J Kurose, and K Ross. 2019. Computer networks title: Computer networking:
2310.06775#121
2310.06775#123
2310.06775
[ "1712.05474" ]
2310.06775#123
Conceptual Framework for Autonomous Cognitive Entities
A top-down approach. Instructor 201901 (2019). [108] Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, and Subbarao Kambhampati. 2023. On the Planning Abilities of Large Language Models – A Critical Investigation. arXiv preprint arXiv:2305.15771 (2023). [109] Dieter Vanderelst and Alan Winfield. 2018. An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research 48 (2018), 56–
2310.06775#122
2310.06775#124
2310.06775
[ "1712.05474" ]
2310.06775#124
Conceptual Framework for Autonomous Cognitive Entities
66. [110] Wendell Wallach and Colin Allen. 2008. Moral machines: Teaching robots right from wrong. Oxford University Press. [111] Jane X Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Demis Hassabis, and Matthew Botvinick. 2018. Prefrontal cortex as a meta-reinforcement learning system. Nature neuroscience 21, 6 (2018), 860–
2310.06775#123
2310.06775#125
2310.06775
[ "1712.05474" ]
2310.06775#125
Conceptual Framework for Autonomous Cognitive Entities
868. [112] Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. 2016. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763 (2016). [113] Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. 2023. Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration. arXiv preprint arXiv:2307.05300 (2023).
2310.06775#124
2310.06775#126
2310.06775
[ "1712.05474" ]
2310.06775#126
Conceptual Framework for Autonomous Cognitive Entities
[114] David Warriner. 2008. The man who mistook his wife for a hat and other clinical tales. [115] Alan FT Winfield and Marina Jirotka. 2017. The case for an ethical black box. In Towards Autonomous Robotic Systems: 18th Annual Conference, TAROS 2017, Guildford, UK, July 19–21, 2017, Proceedings 18. Springer, 262–273. [116] Yang Xiao, Ning Zhang, Wenjing Lou, and Y Thomas Hou. 2020.
2310.06775#125
2310.06775#127
2310.06775
[ "1712.05474" ]
2310.06775#127
Conceptual Framework for Autonomous Cognitive Entities
A survey of distributed consensus protocols for blockchain networks. IEEE Communications Surveys & Tutorials 22, 2 (2020), 1432–1465. [117] Yaqi Xie, Chen Yu, Tongyao Zhu, Jinbin Bai, Ze Gong, and Harold Soh. 2023. Translating natural language to planning goals with large-language models. arXiv preprint arXiv:2302.05128 (2023). [118] Malcolm P Young, Claus-C Hilgetag, and Jack W Scannell. 2000.
2310.06775#126
2310.06775#128
2310.06775
[ "1712.05474" ]
2310.06775#128
Conceptual Framework for Autonomous Cognitive Entities
On imputing function to structure from the behavioural effects of brain lesions. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 355, 1393 (2000), 147–161. [119] Hector Zenil, Jesper Tegnér, Felipe S Abrahão, Alexander Lavin, Vipin Kumar, Jeremy G Frey, Adrian Weller, Larisa Soldatova, Alan R Bundy, Nicholas R Jennings, et al. 2023.
2310.06775#127
2310.06775#129
2310.06775
[ "1712.05474" ]
2310.06775#129
Conceptual Framework for Autonomous Cognitive Entities
The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence. arXiv preprint arXiv:2307.07522 (2023). [120] Yue Zhen, Sheng Bi, Lu Xing-tong, Pan Wei-qin, Shi Hai-peng, Chen Zi-rui, and Fang Yi-shu. 2023. Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures. arXiv preprint arXiv:2306.05171 (2023).
2310.06775#128
2310.06775#130
2310.06775
[ "1712.05474" ]
2310.06775#130
Conceptual Framework for Autonomous Cognitive Entities
38
2310.06775#129
2310.06775
[ "1712.05474" ]
2310.04450#0
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
arXiv:2310.04450v1 [cs.CL] 3 Oct 2023. 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW) # Investigating Large Language Models' Perception of Emotion Using Appraisal Theory Nutchanon Yongsatianchot Khoury College of Computer Science Northeastern University Massachusetts, USA [email protected] Parisa Ghanad Torshizi Khoury College of Computer Science Northeastern University Massachusetts, USA [email protected] Stacy Marsella Khoury College of Computer Science Northeastern University Massachusetts, USA [email protected]
2310.04450#1
2310.04450
[ "2302.02083" ]
2310.04450#1
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Abstract – Large Language Models (LLMs) like ChatGPT have significantly advanced in recent years and are now being used by the general public. As more people interact with these systems, improving our understanding of these black box models is crucial, especially regarding their understanding of human psychological aspects. In this work, we investigate their emotion perception through the lens of appraisal and coping theory using the Stress and Coping Process Questionnaire (SCPQ). SCPQ is a validated clinical instrument consisting of multiple stories that evolve over time and differ in key appraisal variables such as controllability and changeability. We applied SCPQ to three recent LLMs from OpenAI, davinci-003, ChatGPT, and GPT-4, and compared the results with predictions from appraisal theory and human data. The results show that LLMs' responses are similar to humans in terms of the dynamics of appraisal and coping, but their responses did not differ along key appraisal dimensions as predicted by the theory and data. The magnitude of their responses is also quite different from humans in several variables. We also found that GPTs can be quite sensitive to instruction and how questions are asked. This work adds to the growing literature evaluating the psychological aspects of LLMs and helps enrich our understanding of the current models.
2310.04450#0
2310.04450#2
2310.04450
[ "2302.02083" ]
2310.04450#2
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Index Terms – Large language model, Appraisal theory, coping # I. INTRODUCTION Large language models (LLMs) have made significant progress in recent years. With the introduction of ChatGPT by OpenAI, the general public, not just researchers, has widely used and interacted with these LLMs. These models can write stories, songs, poems, and code. People have also used them to answer various questions, including basic facts about the world, medical questions, and social and emotional events. As these AI systems interact with people more and more, it is essential to investigate and improve our understanding of how they perceive and understand humans' social and psychological aspects. Existing research has begun to study various cognitive and psychological abilities of LLMs, including decision-making, information search, causal reasoning, and theory of mind [1]–[3]. Continuing this line of research, in this work, we aim to further investigate LLMs'
2310.04450#1
2310.04450#3
2310.04450
[ "2302.02083" ]
2310.04450#3
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
ability to perceive and evaluate emotions and related factors. Emotion has multiple dimensions, including the expression of emotion, the relation to cognition, physiological experience, subjective experience, and the impact on coping responses. There are also multiple theories of emotion [4]–[9]. We choose to investigate emotion perception through the lens of appraisal and coping theory. Specifically, we compare LLMs' perception of emotional and stressful scenarios to the characterizations of these scenarios by appraisal theory and related human data. From another angle, we investigate whether or not LLMs are sensitive to appraisal dimensions of scenarios and whether this would lead to responses with different coping tendencies. We choose appraisal theory because it provides a representation of emotional scenarios in terms of appraisal variables, allowing us to investigate emotion perception at a deeper level beyond simple emotion categories. In addition, some appraisal theories, such as Lazarus's theory [4], provide a link from appraisal variables to coping behaviors, allowing us to further examine LLMs' responses at the behavior level. To accomplish this, we use a validated clinical instrument, the Stress and Coping Process Questionnaire (SCPQ), by Perrez and Reicherts [10]. SCPQ is built upon Lazarus's appraisal and coping theory. It includes measurements of emotional experience, appraisal variables, and coping intentions and behaviors. It has also been used to evaluate a computational model of emotion before [11]. In SCPQ, subjects are presented with hypothetical stereotypical stressful scenarios which evolve over time, and their responses are measured across multiple time steps. This allows us to investigate the dynamics of appraisal and coping. Furthermore, SCPQ consists of two specific types of scenarios: aversive and loss or failure. These two types differ significantly along several key appraisal dimensions: controllability, changeability, and ambiguity. This permits us to check the models' sensitivity to appraisal dimensions. In sum, SCPQ provides a useful testbed to investigate the important aspects of appraisal and coping theory within LLMs. We subjected SCPQ to three recent LLMs from OpenAI: text-davinci-003, ChatGPT, and GPT-4 [12], [13]. We focus on models from OpenAI because they are the most well-known models and GPT-4 seems to be the current best available model at the time of this writing [14].
2310.04450#2
2310.04450#4
2310.04450
[ "2302.02083" ]
2310.04450#4
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
We compared their results with human data and hypotheses from the theory [10]. In addition, we tested how LLMs would change if we instructed them to act as a person with depression, compared to what the theory predicted. Lastly, we also investigated the sensitivity of these models to instruction and prompts along several aspects. The results show that LLMs' responses are similar to human trends regarding the dynamics of appraisal and coping. However, they still could not differentiate between the two scenario types well. Their responses are also quite different from humans in terms of magnitude in several key variables, including controllability and coping. ChatGPT and GPT-4, when instructed to act as a depressed person, respond in a way that is consistent with the theory's prediction. Lastly, we found that LLMs can be quite sensitive to instruction and how questions are asked. # II. RELATED WORK
2310.04450#3
2310.04450#5
2310.04450
[ "2302.02083" ]
2310.04450#5
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
As SCPQ is heavily influenced by Lazarus's appraisal and coping theory, we first briefly review Lazarus's theory here. Appraisal theories of emotion define appraisal as an evaluation of what the situation implies for personal well-being based on one's goals and beliefs [15], [16], [4], [5]. Lazarus's theory emphasizes the importance of the process or dynamics involved in coping [4]. In particular, the person-environment relationship is always changing, leading to different, evolving emotional experiences, appraisal evaluations, and coping. Lazarus proposes two main dimensions of appraisals: primary and secondary appraisal dimensions. Primary appraisals include goal relevance, goal congruence, and type of ego-involvement. Secondary appraisals include blameworthiness, coping potential (whether and how a person can manage the demands and consequences of the situation), and future expectancy (the degree to which things are likely to change for the better or worse). Effectively, secondary appraisals involve how people can cope with the situation. Note that, in SCPQ, with influence from earlier work on helplessness [17], Perrez and Reicherts use the term controllability (the subjective appraisal of personal ability to control the situation) instead of coping potential, and changeability (the subjective appraisal that the stressful event will change by itself) instead of future expectancy. Lazarus also proposes two broad types of coping: problem-focused coping (directly changing the situation or the environment) and emotion-focused coping (changing one's goals and/or beliefs to adjust to the situation). These copings are also the main focus of SCPQ.
2310.04450#4
2310.04450#6
2310.04450
[ "2302.02083" ]
2310.04450#6
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
With the influence of Lazarus's theory, SCPQ focuses on not only appraisal but also the dynamics of appraisal and coping. This makes it stand out among other similar scenario-based instruments [18], [19]. In addition, SCPQ extends Lazarus's taxonomy further. We go into more detail in the next section. Additionally, SCPQ has been used to evaluate a computational model before [11]. A critical difference is that in the previous work, the scenarios were manually constructed to be in the right format that the model could process, but here we are using LLMs to interpret the scenario directly from the text. On the other side, there has been more and more work evaluating the psychological aspects of LLMs. For example, Binz and Schulz (2023) studied GPT-3's decision-making, information search, and causal reasoning using cognitive psychological tests such as heuristics-and-biases tests and the cognitive reflection test [1]. They found that it can solve these tasks similarly or better than human subjects. Kosinski (2023) investigated Theory of Mind (ToM) in LLMs using standard false-belief tasks and found that ChatGPT and text-davinci-003 can solve most ToM tasks [3]. Miotto et al. (2022) explored personality, values, and demographics of GPT-3 using validated questionnaires [20]. They found GPT-3 to be similar to the human baseline sample and close to a young adult demographic. Bubeck et al. (2023) subjected GPT-4 to various tests such as mathematics, coding, medicine, law, and psychology [2]. They show that GPT-4 outperforms ChatGPT on ToM and emotion perception. Nevertheless, they simply tested the models on a few examples and did not systematically evaluate their psychological aspects and related factors.
2310.04450#5
2310.04450#7
2310.04450
[ "2302.02083" ]
2310.04450#7
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
# III. STRESS AND COPING PROCESS QUESTIONNAIRE The Stress and Coping Process Questionnaire (SCPQ) was developed by Perrez and Reicherts to measure a human subject's appraisal and coping variables in stressful and emotional scenarios that occur in their daily life [10]. SCPQ has been validated by a panel of clinician experts and applied to normal human subjects as well as in clinical settings. A subject is presented with a series of hypothetical scenarios that are divided into three episodes or phases, corresponding to different stages of the stressful scenario: phase 1 beginning, phase 2 continuation, and phase 3 outcome. Their responses are measured at the end of each phase, reflecting the key assumption of SCPQ that the dynamics of a stressful scenario are crucial to understanding how stress and coping develop. SCPQ consists of two types of scenarios: aversive and loss or failure (loss). Examples of loss scenarios are the loss of a friendly relationship, the loss of an important object, and the failure of an interesting side job. Examples of aversive scenarios are criticism from the partner, arguments about problems in a relationship, and reproaches from colleagues.
2310.04450#6
2310.04450#8
2310.04450
[ "2302.02083" ]
2310.04450#8
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
The key differences between the two types are the level of controllability, changeability, and ambiguity. By design, the loss scenarios are less controllable, less changeable, and less ambiguous than the aversive scenarios. Both types of scenarios follow a similar course of three episodes. The loss or aversive scenario is looming at the beginning (phase 1) and becomes unavoidable, imminent, or reinforced in phase 2. The outcome phase (phase 3) can either be positive or negative. For loss scenarios, the positive outcome involves finding a substitution, while the negative outcome depicts the final loss without any successful substitution. For aversive scenarios, the positive outcome involves successfully removing the source of stress, while the negative outcome depicts the continuation of the stress. Below are examples of an aversive scenario and a loss scenario, respectively. An aversive scenario with a positive outcome: • Phase 1: "
2310.04450#7
2310.04450#9
2310.04450
[ "2302.02083" ]
2310.04450#9
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
You are together with some colleagues. One says that you don't pull your weight when there is difficult work. He claims that you don't think of other colleagues." • Phase 2: "Sometime later, another colleague hints that the problem is not that you don't think of others but that you lack any real interest in the work." • Phase 3: "Finally, you realize what your colleagues were really getting at, and you, for your part, were able to convince them that you sometimes are more cautious at your work than others." A loss scenario with a negative outcome: • Phase 1: "
2310.04450#8
2310.04450#10
2310.04450
[ "2302.02083" ]
2310.04450#10
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
A person who was very close to you, especially in recent times, has to move away unexpectedly. When you parted, you reassured each other you would both keep in close contact. But his/her new home is quite far away. You could see each other only rarely, if at all." • Phase 2: "In the meantime, some weeks have passed. The person hasn't gotten in touch with you again. Nevertheless, you feel from time to time that you miss him/her."
2310.04450#9
2310.04450#11
2310.04450
[ "2302.02083" ]
2310.04450#11
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
• Phase 3: "Finally, it has become clear that your friendship is not the same anymore. Your relationship with other people can't replace what you have lost. Now and then, you feel disappointed about the relationship you have lost." There are nine scenarios for each type, a total of eighteen scenarios. The responses can be aggregated to reflect the general tendency toward these types of scenarios and compared between the two types, which differ along crucial appraisal dimensions. SCPQ includes the following measurements. • Emotional Responses: 1) anxious - calm, 2) depressed - cheerful, and 3) angry - gentle,
2310.04450#10
2310.04450#12
2310.04450
[ "2302.02083" ]
2310.04450#12
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
• Appraisals: 1) changeability, 2) controllability, and 3) negative valence, • Coping intentions: 1) Problem-focused coping, 2) Emotion-focused coping1, and 3) Self-esteem, • Self-directed coping behaviors: 1) search for information, 2) suppress information, 3) re-evaluation, and 4) palliation (calming self-instruction or smoking, drinking, and eating), • Environment-directed coping behaviors: 1) Active (to prevent or confront the stressor) and 2) Passive (waiting, hesitating, resigning), • Blameworthiness: 1) Self-blaming and 2) Other-blaming. Below, we summarize the hypotheses that are supported by the human data from the SCPQ study2. • H1.1: Valence should be lower in the positive outcome than in the negative outcome in phase 3. • H1.2:
2310.04450#11
2310.04450#13
2310.04450
[ "2302.02083" ]
2310.04450#13
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Subjects should perceive higher controllability and changeability in the aversive scenarios than in the loss scenarios. 1The question is "To remain calm and composed . . ." Strictly speaking, this is not the same as emotion-focused coping as defined in Lazarus's theory, which is about changing one's internal beliefs, goals, or intentions. 2Note that we do not present the results involving self-directed coping here as they were not supported by human data, but the LLM results can be found on GitHub.
2310.04450#12
2310.04450#14
2310.04450
[ "2302.02083" ]
2310.04450#14
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
• H1.3: Controllability and changeability should decrease from phase 1 to phase 2. • H2.1: Subjects should use more active coping in aversive scenarios than in loss scenarios. • H2.2: Subjects should use less passive coping in aversive scenarios than in loss scenarios. • H3.1: Subjects' intention to use problem-focused coping is less in aversive scenarios than in loss scenarios. • H3.2: Subjects'
2310.04450#13
2310.04450#15
2310.04450
[ "2302.02083" ]
2310.04450#15
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
intention to use emotion-focused coping is more in aversive scenarios than in loss scenarios. • H4.1: Subjects will blame themselves and others more in aversive scenarios than in loss scenarios. • H4.2: Self-blame will decrease over time, while Other-blame will increase over time. These are the trends that we will investigate in LLMs' results. The main rationale of H2-H4 is that aversive scenarios should be perceived as more controllable and changeable, so subjects are expected to cope differently between the two types of scenarios. The SCPQ study involved 100 non-student adults with an average age of 38 years (sd 11.8). Additionally, Perrez and Reicherts provide the following hypotheses regarding depression:
2310.04450#14
2310.04450#16
2310.04450
[ "2302.02083" ]
2310.04450#16
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
• H5.1: Depressed persons perceive stressful scenarios to be more stressful and higher in negative valence. • H5.2: Depressed persons perceive lower controllability and changeability. • H6.1: Depressed persons use less active/problem-focused coping. • H6.2: Depressed persons use more palliation. • H6.3: Depressed persons blame themselves more. In short, depressed persons are expected to perceive scenarios as worse in both controllability and changeability, resulting in different coping patterns.
2310.04450#15
2310.04450#17
2310.04450
[ "2302.02083" ]
2310.04450#17
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
# IV. OPENAI'S GPTS In this work, we choose to investigate three recent LLMs from OpenAI's family of Generative Pre-trained Transformer models, or GPT [12], [13]. These include text-davinci-003 (D003), gpt-3.5-turbo (ChatGPT), and gpt-4 (GPT-4). The first two are from the GPT-3.5 family. These three models have been fine-tuned using Reinforcement Learning with Human Feedback (RLHF) [21], and ChatGPT and GPT-4 have been optimized for chat. ChatGPT and GPT-4 also allow the user to set a system message (i.e., describing what kind of an assistant you want it to be).
2310.04450#16
2310.04450#18
2310.04450
[ "2302.02083" ]
2310.04450#18
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
We do not use this feature to allow a comparison with the old model. To maximize the replicability of our results, we set the temperature parameter to 0 in all of our experiments. This makes the outputs mostly deterministic, selecting the outputs with the highest log probability. All other parameters are set to default. As these models can be sensitive to instruction [1], [22], [23], we investigate four different variations of prompting and asking the models. Here is the default instruction taken from SCPQ with a slight modification: "Try to clearly imagine the scenario below and then answer the question with the choice only in one line." First, we either ask it to output choices (default) or just the number only ("the choice's number only"). The number only makes sense here because all measurements use a Likert scale ranging from 0 up to 5. We test this variation because our early testing showed that sometimes the models may output more than just a choice, such as repeating the question, even when the instruction specifies "choice only." The second variation is the location of the instruction. There are two versions: either putting the instruction before (default) or after ("the above scenario") the scenario. The reason for testing this is that, as these models use attention mechanisms, the distance of the context could impact how the LLM follows the instruction. Third, we investigate either asking them one question at a time (individual) or multiple questions at a time (batch). The batch follows the set of questions as stated above. The rationale for this is that asking in batches can save time and costs, as you don't need to repeat the scenario every time. These first three variations result in eight different combinations of instructions. Lastly, we also test the effect of appending the previous (appraisal) answers to the prompt. The reason is that, as we are interested in the dynamics, knowing their previous answers could be crucial. For this variation, we only use the default instruction, as asking for the number only or after the scenario does not make sense in this case. Code, including all SCPQ scenarios and instructions, data, and all results, including additional results not shown in the paper, can be found at github.com/yongsa-nut/PerrezSAIWS. # V. RESULTS
2310.04450#17
2310.04450#19
2310.04450
[ "2302.02083" ]
2310.04450#19
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Figure 1 shows the estimated mean with the 95% standard error for all the key measurements of the three models and human data. The setup here is the default setup, where the questions are asked one by one, and the instruction is placed before the scenario and asks for choices. We choose to report this here as it is the most similar to the human setup. We discuss the results for other setups in the next section. Crucially, we focus mainly here on the qualitative results comparing the trend of results from the models and humans. The main reason is that there is a discrepancy between human data and model data. The human results are obtained from averaging over 100 subjects and nine scenarios, while the model results are from averaging nine scenarios, making their uncertainty incomparable. Figure 1.A shows the results for depressed/cheerful emotional reactions. For this result and valence, we only focus on the outcome (positive or negative) in phase 3. We see that all three models show the expected trend where the positive outcome results in more cheerful and less depressed ratings than the negative outcome. Compared to humans, all three models rate the cheerful to be lower in the positive outcome, where D003 is closest to the human rating. The results for the other two emotional reactions are similar. The results for valence in Figure 1.B also show a similar trend. Like humans, all three models rate the valence to be lower in the positive outcome than in the negative outcome. However, all three models rate valence higher than humans in both negative and positive outcomes. Next, for changeability in Figure 1.C, we see that none of the models follow the human trend exactly, where there is a difference between the two types of scenarios across the two time points, and the changeability in both types goes down. D003 always rates changeability to be zero. On the other hand, ChatGPT only rates changeability to go down in phase 2 for loss scenarios, while GPT-4 only rates changeability to go down for aversive scenarios. For controllability (Figure 1.D), we see that only D003 and GPT-4 show the expected trend of controllability going down over time for both scenario types. However, GPT-4 does not perceive the two types to be different, unlike D003. In all cases, all three models perceive controllability to be lower than what humans perceive.
2310.04450#18
2310.04450#20
2310.04450
[ "2302.02083" ]
2310.04450#20
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
We turn now to coping intentions. For problem-focused coping in Figure 1.E, only ChatGPT shows the trend of lowering it over time for loss scenarios. None of the models show that problem-focused coping at phase 2 in loss scenarios is lower than in aversive scenarios. In addition, all models rate problem-focused coping higher than the human data across time and type. For emotion-focused coping in Figure 1.F, we see that only D003 shows a similar trend to the human data, where the intention goes down over time in the aversive case. On the other hand, both ChatGPT and GPT-4 rate it at the maximum across time and type. Next, we look at coping behaviors. First, for passivity (Figure 1.G), both ChatGPT and GPT-4 show a trend similar to humans, where the passivity increases over time. Second, for active influence (Figure 1.H), only GPT-4 shows the trend that the active influence would decrease over time, but only for the aversive case. On the other hand, only ChatGPT shows a clear difference between the two types.
2310.04450#19
2310.04450#21
2310.04450
[ "2302.02083" ]
2310.04450#21
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Lastly, we turn to blameworthiness. First, for blaming others (Figure 1.I), all models show that, in the loss scenarios, blaming others increases from phase 1 to 2. However, only D003 shows an increase in blaming others in the aversive scenarios. None of the models shows that blaming others is higher in the aversive than in the loss scenarios at phase 2, like the human data. Second, for self-blaming (Figure 1.J), both ChatGPT and GPT-4 show trends similar to the human data, where blaming oneself decreases over time in the aversive type and is higher in the aversive type than in the loss type in phase 1. Overall, we observe in many cases that LLMs' responses are similar to human data in the case of the dynamics, but not in the case of scenario types. Next, we look at the results comparing the model instructed to act as a person with depression (Depression) and the model without the instruction (Normal), focusing only on aversive scenarios (the loss scenarios show similar trends). Figure 2 shows the key six measurements. The pattern is clear that, for ChatGPT and GPT-4 but not D003, there is a difference between the depression and normal case in the expected directions. In particular, controllability, changeability, problem- [Figure 1, panels A-D: Depressed-Cheerful, Negative Valence, Changeability, Controllability; columns: Human, D003, ChatGPT, GPT-4.]
2310.04450#20
2310.04450#22
2310.04450
[ "2302.02083" ]
2310.04450#22
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
[Figure 1, panels E-J: Problem-focused, Emotion-focused, Passivity, Active Influence, Blame Others, Blame Self; x-axis: Phase; lines: Aversive, Loss/Failure.] Fig. 1. Human vs. the three models' results for selected variables. The points show the estimated means and the error bars are 95% standard errors. The pink line with circle dots is the aversive type and the blue line with triangles is the loss type. The Likert scales are as follows. Emotion: Very depressed (0) - Very cheerful (5); Appraisal: Very small (0) - Very large (5); Coping intention: Not important (0) - Very important (4); and Coping behaviors: Not at all 0% (0) - Certainty 100% (4). [Figure 2, panels A-C: Changeability, Controllability, Problem-focused; columns: D003, ChatGPT, GPT-4.]
2310.04450#21
2310.04450#23
2310.04450
[ "2302.02083" ]
2310.04450#23
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
[Figure 2, panels D-F: Palliation, Blame Self, Valence; lines: Depression, Normal.] Fig. 2. Depression vs. Normal results for the three models for the selected variables. The pink with circle points is the depression instruction and the blue with triangle points is without the instruction. focused coping, and palliation are lower in the depression case than in the normal case, while blaming oneself and valence are higher in the depression case than in the normal case.
2310.04450#22
2310.04450#24
2310.04450
[ "2302.02083" ]
2310.04450#24
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
[Figure 3 legend: Instruction = choice / Num only; Questions = batch / indiv; Place = After / Before; x-axis: Phase.] Fig. 3. The sensitivity analysis results on controllability for the three models across eight possible combinations across three choices. indiv = individual. Num only = Number only. [Figure 4, panels A (Changeability) and B (Controllability): Choice vs. Num Only and After vs. Before variants; lines: Aversive, Loss/Failure.] Figure 3 shows the results on controllability for the three models across eight combination instructions across three choices. Overall, we see that there are variations across these instructions. This means that the instruction, where it is, and how many questions are asked could affect the output from the models. The biggest difference comes from asking in a batch instead of asking each question individually. The variation also
2310.04450#23
2310.04450#25
2310.04450
[ "2302.02083" ]
2310.04450#25
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Fig. 4. The sensitivity analysis results across eight combinations across three choices for GPT-4 on changeability (A) and controllability (B). indiv = individual. Num only = Number only. depends on the model. Similar results can be found in other questions not shown here. Next, we zoom into selected questions. Figure 4 shows GPT-4's results for changeability (A) and controllability (B) across all combinations of setup. Due to space limitations, we focus only on these two as the theory argues they strongly influence the coping response, and GPT-4 is the latest model. Again, we see that there are variations in both controllability and changeability across combinations. For changeability (Figure 4.A), a few combinations show the expected trends aligning with human data, where changeability decreases over time and differs between aversive and loss types. In the case of controllability (Figure 4.B), it increases rather than decreases over time for the aversive type when asking in a batch. In addition, the value is also higher in the batch setup. On the other hand, when asking the questions individually, controllability decreases over time, aligning with the expected trend. However, only in one of the setups (asking to output only a number, and after the scenario) is controllability across all phases higher in the aversive scenarios than in the loss scenarios, as expected by the theory and human data. Nevertheless, the value in this setup is still lower than humans', and its changeability does not align with humans. Overall, there is no single setup here where both changeability and controllability align with the expected trends. In addition to these eight setups, we look at the effect of appending their appraisal answers to the prompt. However, we do not observe any significant changes in any variables aside from a few cases for ChatGPT. These include changeability and controllability in phase 2, in the right direction. Beyond the variation shown in the figure, we found that GPT-4 follows instructions better than the other two models. In particular, when asking in a batch, ChatGPT and D003 may not answer all the questions. Further, when asked to answer with choice, ChatGPT occasionally did not answer just a choice but provided a full sentence reiterating the question instead.
2310.04450#24
2310.04450#26
2310.04450
[ "2302.02083" ]
2310.04450#26
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
These did not happen with GPT-4. # VI. DISCUSSION Overall, no model follows all the human trends and hypotheses as predicted by appraisal and coping theory. Nonetheless, the responses from the three models depict the right trends for the dynamics in several variables, including emotional responses, appraisal variables, and coping. In many cases, however, the models could not differentiate the two scenario types well, and the magnitudes are quite different from humans. A few cases stand out. For example, all models rate the negative valence to be more negative than humans. One potential explanation could be from the human side, namely that it could be due to experimenter demand effects. Another interesting case concerns the particular aspect of emotion-focused coping that SCPQ considers, specifically to remain calm and composed.
2310.04450#25
2310.04450#27
2310.04450
[ "2302.02083" ]
2310.04450#27
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Both ChatGPT and GPT-4 always answer with the highest value. We speculate that this could be due to fine-tuning with RLHF. Importantly, we also observe some differences between humans and LLMs on several key appraisal variables. In particular, GPT-4 rated controllability and changeability as decreasing over time but didn't rate the two scenario types differently. We speculate that this could be due to the limited information provided in the scenarios. Human subjects bring with them their own knowledge and experiences of these daily stressful scenarios, which could make them aware of various ways that they could deal with them. However, these are not explicit in the scenarios, and LLMs may not be able to infer them from just a short snippet. Another explanation and limitation of SCPQ is that these scenarios are hypothetical, and people may behave and appraise them differently if they were real. To fully test the perception of appraisal and emotion, future work is needed to compare LLMs' results with human data from real events. One interesting finding is that ChatGPT and GPT-4 can be instructed to act as a depressed person, where their responses show trends similar to the theory's prediction, such as perceiving less controllability and more negative valence. Nevertheless, we need to interpret this result with caution. At a minimum, it could mean that these models have learned the stereotypical behaviors of depressed people. Future research is needed to further explore LLMs in this direction. Still, this opens up the possibility of instructing the models to act as a person with various personalities or psychological conditions to investigate how it would affect the appraisal evaluation and emotional experiences. This highlights another limitation of this work: human data is an average over multiple people and not a single individual. We did not compare LLMs, which have been fine-tuned in a specific way, against a specific person. Future work may look into instructing the model to match with a specific subject or group of subjects for comparison, a matched pair design. Our results also indicate that all models can be quite sensitive to the instruction and prompts. Asking in a batch, which could reduce the cost and speed up the query, could yield different results from asking each question one by one. Moreover, the older models may struggle to answer all the questions in the right format, especially when the number of questions increases.
2310.04450#26
2310.04450#28
2310.04450
[ "2302.02083" ]
2310.04450#28
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
In conclusion, this work seeks to understand LLMs through the lens of appraisal and coping theory, and we found some evidence suggesting that there is still some discrepancy between how humans and LLMs perceive emotional scenarios. Nevertheless, as mentioned, this only touches on a few aspects of emotional experiences and provides only one view of emotion theory. It is also possible that these LLMs, trained on a large amount of human data, would learn a representation of scenarios different from appraisal theory. It is an open question whether or not this different representation could be used in some way to inform theory or our understanding of emotion. Regardless, as these black box LLMs interact with more and more people, it is crucial for researchers to investigate how they understand human emotional experiences thoroughly. This work provides some initial steps toward this endeavor. # ETHICAL IMPACT STATEMENT In this work, we evaluate LLMs on their emotion perception ability. There are several ethical problems associated with LLMs, including bias, harmful content, misinformation, and privacy concerns. However, given how LLMs are positioned to impact us, it is critical for research to explore and evaluate them.
2310.04450#27
2310.04450#29
2310.04450
[ "2302.02083" ]
2310.04450#29
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
We did not collect human data in this work. We used existing data and results from a previously published and approved study. # REFERENCES [1] M. Binz and E. Schulz, "Using cognitive psychology to understand GPT-3," Proceedings of the National Academy of Sciences, vol. 120, no. 6, p. e2218523120, 2023. [2] S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y. Li, S. Lundberg, et al.,
2310.04450#28
2310.04450#30
2310.04450
[ "2302.02083" ]
2310.04450#30
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
"Sparks of artificial general intelligence: Early experiments with GPT-4," arXiv preprint arXiv:2303.12712, 2023. [3] M. Kosinski, "Theory of mind may have spontaneously emerged in large language models," arXiv preprint arXiv:2302.02083, 2023. [4] R. S. Lazarus, Emotion and Adaptation. Oxford University Press on Demand, 1991. [5] A. Moors, P. C. Ellsworth, K. R. Scherer, and N. H. Frijda,
2310.04450#29
2310.04450#31
2310.04450
[ "2302.02083" ]
2310.04450#31
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
"Appraisal theories of emotion: State of the art and future development," Emotion Review, vol. 5, no. 2, pp. 119–124, 2013. [6] P. Ekman et al., "Basic emotions," Handbook of Cognition and Emotion, vol. 98, no. 45-60, p. 16, 1999. [7] A. R. Damasio, "The somatic marker hypothesis and the possible functions of the prefrontal cortex,"
2310.04450#30
2310.04450#32
2310.04450
[ "2302.02083" ]
2310.04450#32
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, vol. 351, no. 1346, pp. 1413–1420, 1996. [8] J. A. Russell, "A circumplex model of affect," Journal of Personality and Social Psychology, vol. 39, no. 6, p. 1161, 1980. [9] L. F. Barrett, "The theory of constructed emotion: an active inference account of interoception and categorization," Social Cognitive and Affective Neuroscience, vol. 12, no. 1, pp. 1–
2310.04450#31
2310.04450#33
2310.04450
[ "2302.02083" ]
2310.04450#33
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
23, 2017. [10] M. Perrez and M. Reicherts, "Stress, coping, and health: A situation-behavior approach: Theory, methods, applications," 1992. [11] J. Gratch and S. Marsella, "Evaluating a computational model of emotion," Autonomous Agents and Multi-Agent Systems, vol. 11, pp. 23–43, 2005. [12] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877–
2310.04450#32
2310.04450#34
2310.04450
[ "2302.02083" ]
2310.04450#34
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
1901, 2020. [13] OpenAI, "GPT-4 technical report," arXiv preprint arXiv:2303.08774, 2023. [14] B. Peng, C. Li, P. He, M. Galley, and J. Gao, "Instruction tuning with GPT-4," arXiv preprint arXiv:2304.03277, 2023. [15] M. B. Arnold, Emotion and Personality.
2310.04450#33
2310.04450#35
2310.04450
[ "2302.02083" ]
2310.04450#35
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Columbia University Press, 1960. [16] C. A. Smith, R. S. Lazarus, et al., "Emotion and adaptation," Handbook of Personality: Theory and Research, vol. 21, pp. 609–637, 1990. [17] M. E. P. Seligman, Helplessness: On Depression, Development, and Death. San Francisco: W. H. Freeman, 1975. [18] C. Harmon-Jones, B. Bastian, and E. Harmon-Jones,
2310.04450#34
2310.04450#36
2310.04450
[ "2302.02083" ]
2310.04450#36
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
"The discrete emotions questionnaire: A new tool for measuring state self-reported emotions," PLoS One, vol. 11, no. 8, p. e0159915, 2016. [19] K. R. Scherer, "Evidence for the existence of emotion dispositions and the effects of appraisal bias," Emotion, vol. 21, no. 6, p. 1224, 2021. [20] M. Miotto, N. Rossberg, and B. Kleinberg, "Who is GPT-3? An exploration of personality, values and demographics,"
2310.04450#35
2310.04450#37
2310.04450
[ "2302.02083" ]
2310.04450#37
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
arXiv preprint arXiv:2209.14338, 2022. [21] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al., "Training language models to follow instructions with human feedback," Advances in Neural Information Processing Systems, vol. 35, pp. 27730–27744, 2022. [22] M.
2310.04450#36
2310.04450#38
2310.04450
[ "2302.02083" ]
2310.04450#38
Investigating Large Language Models' Perception of Emotion Using Appraisal Theory
Bommarito II and D. M. Katz, "GPT takes the bar exam," arXiv preprint arXiv:2212.14402, 2022. [23] X. Li, Y. Li, L. Liu, L. Bing, and S. Joty, "Is GPT-3 a psychopath? Evaluating large language models from a psychological perspective," arXiv preprint arXiv:2212.10529, 2022.
2310.04450#37
2310.04450
[ "2302.02083" ]
2310.02263#0
Contrastive Post-training Large Language Models on Data Curriculum
arXiv:2310.02263v1 [cs.CL] 3 Oct 2023

# CONTRASTIVE POST-TRAINING LARGE LANGUAGE MODELS ON DATA CURRICULUM

Canwen Xu1*, Corby Rosset2*, Luciano Del Corro2, Shweti Mahajan2, Julian McAuley1, Jennifer Neville2, Ahmed Hassan Awadallah2, Nikhil Rao2
1University of California, San Diego, 2Microsoft Corporation
1{cxu,jmcauley}@ucsd.edu, 2{corbyrosset,ldelcorro,shmahaj}@microsoft.com
2{jenneville,ahmed.awadallah,nikhilrao}@microsoft.com

# ABSTRACT

Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we explore contrastive post-training techniques for alignment by automatically constructing preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We carefully compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continuing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from
2310.02263#1
2310.02263
[ "2309.00267" ]
2310.02263#1
Contrastive Post-training Large Language Models on Data Curriculum
"easier" pairs and transitioning to "harder" ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to exceed that of ChatGPT.

# 1 INTRODUCTION

The rapid evolution of Large Language Models (LLMs) has ushered in a new era of natural language processing capabilities. These models, when scaled to billions of parameters and pretrained over trillions of text tokens, demonstrate unprecedented proficiency in a wide array of tasks (Brown et al., 2020; Chowdhery et al., 2022). Various post-training procedures like supervised instruction tuning and Reinforcement Learning from Human Feedback (RLHF) fine-tune pretrained LLMs to better align with human expectations and preferences (Ouyang et al., 2022; OpenAI, 2023; Touvron et al., 2023a). This additional alignment procedure is crucial because the pretraining objective of essentially predicting the next token in a text sequence is known to produce LLMs whose outputs are at times incorrect, irrelevant, or unsafe (Bai et al., 2022a). Traditionally, these post-training techniques rely on human preference annotations to inform an LLM which behaviors it ought to adopt in the scenario at hand. For instance, RLHF fits a reward model on these preference pairs, against which an LLM policy is then optimized (Ziegler et al., 2019; Bai et al., 2022a; Touvron et al., 2023b). However, such human feedback is expensive to obtain and often noisy (Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a). To align an LLM without human feedback, other methods such as Reinforcement Learning from AI Feedback (RLAIF) harvest preference signals via automatic feedback from another LLM (Lee et al., 2023; Bai et al., 2022b). However, studies have found that AI feedback has a low agreement rate with humans (Perez et al., 2022; Casper et al., 2023b; Lee et al., 2021).
2310.02263#0
2310.02263#2
2310.02263
[ "2309.00267" ]
2310.02263#2
Contrastive Post-training Large Language Models on Data Curriculum
Also, these methods suffer from the same drawbacks as RLHF, such as reward hacking (Skalse et al., 2022). Recently, certain contrastive post-training techniques such as Sequence Likelihood Calibration (SLiC) and Direct Preference Optimization (DPO) offer enticing alternatives to RLHF (Zhao et al., 2023b;a). For instance, DPO is proven to optimize the same objective as RLHF. But instead of optimizing against a reward model, it works by increasing the LLM's relative probability of generating the preferred output over the unfavorable one, which makes it much simpler to implement (Rafailov et al., 2023). The difference between the post-training methods is illustrated in Figure 1.
2310.02263#1
2310.02263#3
2310.02263
[ "2309.00267" ]
2310.02263#3
Contrastive Post-training Large Language Models on Data Curriculum
*Equal contribution. Work done during Canwen's internship at Microsoft Research.

Figure 1: Difference between SFT, RLHF, and contrastive post-training. For SFT, the model optimizes the negative log-likelihood of the next token. RLHF samples an output from the LLM and uses a reward model to provide feedback for PPO to update the LLM. For contrastive post-training, a contrastive loss is used to steer the model towards preferred outputs.

In this work, we study what we believe is a strong connection between contrastive post-training and RLAIF: one can employ LLMs to automatically generate preference pairs which can then be optimized directly via contrastive objectives like DPO. However, without feedback from human annotations, LLM feedback, or a reward model to distinguish them, the key question becomes how to automatically construct pairs that 1) contain meaningful directional signal on a per-example basis and 2) in aggregate adhere to the values and principles that humans expect. This paper explores a simple yet effective answer to this question: contrast outputs from LLMs of varying sizes and capabilities, as motivated in Table 1. We automatically construct training pairs of responses generated from InstructGPT (Ouyang et al., 2022), ChatGPT, and GPT-4 (OpenAI, 2023) as demonstrations of desirable and undesirable behaviors. We believe this choice provides a solid foundation to better understand the efficacy of various contrastive training techniques when it comes to "bridging the gap" between stronger and weaker models. On a more general level, we wish to apply our findings to improve model distillation (Hinton et al., 2015), i.e., preserve the quality of larger, more capable models in a smaller target model which is cheaper and faster to deploy at scale, as explored in many recent works (Chiang et al., 2023; Xu et al., 2023b; Geng et al., 2023).

Table 1: Head-to-head win rates on Alpaca Eval.

| Model | vs. | Win Rate |
|---|---|---|
| GPT-4 | InstructGPT | 95.3% |
| GPT-4 | ChatGPT | 83.5% |
| ChatGPT | InstructGPT | 89.4% |
2310.02263#2
2310.02263#4
2310.02263
[ "2309.00267" ]
2310.02263#4
Contrastive Post-training Large Language Models on Data Curriculum
We show through carefully crafted experiments that contrastive post-training techniques maintain a step-function advantage over continuous supervised fine-tuning, which holds even at larger scales of models and training examples. For example, a key result of our study is that enhancing Orca (Mukherjee et al., 2023), already a state-of-the-art instruction learning model, with DPO over pairs of GPT4-vs-InstructGPT is more beneficial than additional supervised fine-tuning on only the GPT-4 outputs, all else being equal. In fact, the contrastive fine-tuning of Orca is preferred 55% to 45% against ChatGPT in a head-to-head comparison on the Alpaca Eval benchmark. Additionally, we structure how and when the model is exposed to various types of pairs in the style of curriculum learning (Bengio et al., 2009; Soviany et al., 2022). We discover that reordering the training data to start from "easy pairs" and warm up to "harder pairs" leads to considerable performance improvements.
2310.02263#3
2310.02263#5
2310.02263
[ "2309.00267" ]
2310.02263#5
Contrastive Post-training Large Language Models on Data Curriculum
To summarize, our contributions are as follows:
1. We propose a new automatic setting for contrastive post-training that improves performance of LLMs without human-, AI-, or reward-model feedback.
2. We explore several curriculums for SFT and DPO. We discover that the performance of DPO can be further improved by simply reordering the data.
3. We verify that the effectiveness of our approach holds in scaled-up experiments on a state-of-the-art instruction-following model, Orca.

# 2 RELATED WORKS

Improving downstream performance of Large Language Models (LLMs) and aligning them with user preferences and designed intents are important for deployment and applications. This can be achieved by fine-tuning these models on responses written by humans or generated with human-written labels and templates. Previous works have applied supervised fine-tuning (SFT) on both instruction data (Sanh et al., 2022; Wei et al., 2022; Chung et al., 2022; Taori et al., 2023; Peng et al., 2023) and dialogue data (Chiang et al., 2023; Xu et al., 2023b; Geng et al., 2023). Although SFT can successfully adapt an LLM to instruction learning or chatting, the model can be further improved by post-training (Ouyang et al., 2022) to meet human preferences. A straightforward solution to optimizing for human preference is to use reinforcement learning. Reinforcement Learning with Human Feedback (RLHF, Ziegler et al., 2019) first trains a Bradley-Terry reward model (Bradley & Terry, 1952) on human-labeled preference pairs. Then, it samples outputs from the model and scores them with the reward model. A reinforcement learning algorithm, such as Proximal Policy Optimization (PPO, Schulman et al., 2017), is used to optimize the language model for better rewards. RLHF has seen successful applications in downstream tasks (Kreutzer et al., 2018; Stiennon et al., 2020). However, RLHF methods are infamous for their instability, inefficiency, reward misgeneralization and hacking (Casper et al., 2023a; Skalse et al., 2022). Recently, there have been studies proposing methods for post-training without reinforcement learning.
2310.02263#4
2310.02263#6
2310.02263
[ "2309.00267" ]
2310.02263#6
Contrastive Post-training Large Language Models on Data Curriculum
These methods optimize human preference with human-labeled contrastive pairs. FeedMe (OpenAI, 2022) samples model output multiple times and fine-tunes on the best response picked by human labelers. Sequence Likelihood Calibration (SLiC, Zhao et al., 2023b;a) uses a contrastive sequence calibration loss to steer the LM towards desired outputs. Rank responses to align human feedback (RRHF, Yuan et al., 2023) adds a ranking loss to the SFT loss. The ranking loss promotes responses based on preferences ranked by humans or a reward model. Direct Preference Optimization (DPO, Rafailov et al., 2023) optimizes a language model by contrasting it against a reference model on preference data. Rafailov et al. (2023) also provide a theoretical analysis showing that DPO optimizes the same objective as RLHF, but in a more efficient and stable manner. In our paper, we conduct empirical studies to compare offline post-training methods, RLHF, SLiC and DPO, in terms of performance and efficiency. Human preference is expensive to collect and thus difficult to scale up. Recently, there have been attempts to automate post-training by replacing the human preference data with model-generated feedback. Self-distillation with feedback (SDF, Xu et al., 2023b) samples multiple outputs from the model and prompts ChatGPT to pick the best response for fine-tuning the model. RL from AI Feedback (RLAIF, Lee et al., 2023) uses an off-the-shelf LLM to replace human labels in the standard RLHF. Following that, reinforcement learning from contrast distillation (RLCD, Yang et al., 2023) constructs model-generated contrastive pairs by prompting an off-the-shelf LLM to act differently on certain properties, e.g., harmlessness and helpfulness. Different from these works, our approach is an offline algorithm, which does not require time-consuming sampling during training. Our approach does not require training a reward model and can be easily scaled up.
2310.02263#5
2310.02263#7
2310.02263
[ "2309.00267" ]
2310.02263#7
Contrastive Post-training Large Language Models on Data Curriculum
# 3 PRELIMINARIES

Reinforcement Learning from Human Feedback (RLHF) To optimize human preference with reinforcement learning, we first need to train a reward model rφ(y|x) that outputs a reward for a given output y. When training the target model, RLHF (Ziegler et al., 2019) uses a reinforcement learning algorithm (usually PPO, Schulman et al., 2017) to optimize the reward of an output y sampled from the target model Pθ. To regularize the optimization and prevent model degeneration, a KL penalty term between the sequences of distributions over tokens of the target model and a reference model (e.g., the SFT model) is added to the reward (Korbak et al., 2022). This prevents the RL policy from deviating substantially away from the reference model, which often leads to incoherent text output (Ziegler et al., 2019).
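To make the KL-regularized reward concrete, here is a minimal sketch of how the per-sequence PPO reward is commonly assembled from the reward-model score and a sampled-token KL penalty. Tensor names and the `beta_kl` coefficient are our own choices, not the paper's implementation.

```python
import torch

def ppo_reward(rm_score: torch.Tensor,
               logprobs_policy: torch.Tensor,
               logprobs_ref: torch.Tensor,
               beta_kl: float = 0.2) -> torch.Tensor:
    """Sketch of the KL-penalized reward used in RLHF.

    rm_score:        reward-model score r_phi(y|x) per sampled sequence, shape (batch,)
    logprobs_policy: per-token log-probs of the sampled tokens under the target model, (batch, seq)
    logprobs_ref:    per-token log-probs of the same tokens under the reference (SFT) model, (batch, seq)
    """
    # Approximate KL(policy || reference) on the sampled tokens, summed over the sequence.
    kl = (logprobs_policy - logprobs_ref).sum(dim=-1)
    # The KL penalty keeps the policy close to the reference model.
    return rm_score - beta_kl * kl
```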
2310.02263#6
2310.02263#8
2310.02263
[ "2309.00267" ]
2310.02263#8
Contrastive Post-training Large Language Models on Data Curriculum
Sequence Likelihood Calibration (SLiC) In contrast to RLHF, SLiC can exploit pairwise human feedback data and train offline (i.e., without sampling from the target model each time). SLiC takes a positive example y+, a negative example y−, and a reference output yref from the SFT model. In essence, SLiC encourages the target LM to output sequences that resemble the positive sequence and penalizes those that resemble the negative sequence, while using the reference sequence from the SFT model for regularization. The loss function for SLiC is:

L_SLiC(θ) = max(0, δ − log Pθ(y+|x) + log Pθ(y−|x)) − λ log Pθ(yref|x)   (1)

where δ and λ are two hyperparameters, controlling the margin for the ranking loss and the regularization weight. SLiC is memory-efficient, as both its positive-negative pairs and reference sequences are offline.

Direct Preference Optimization (DPO) Similar to SLiC, DPO is an offline preference optimization method. DPO takes a pair of (pre-computed) positive and negative examples and optimizes the difference between the target model and the reference model (i.e., the SFT model), which increases the likelihood of the positive example and decreases the likelihood of the negative example. The loss function of DPO is shown below:

r+(θ) = β(log Pθ(y+|x) − log Pref(y+|x))   (2)
r−(θ) = β(log Pθ(y−|x) − log Pref(y−|x))   (3)
L_DPO(θ) = − log sigmoid(r+(θ) − r−(θ))   (4)

where β is a temperature hyperparameter; r+ and r− are the two pseudo-rewards that resemble the reward function in RLHF. Despite DPO having a similar form, there are key differences between SLiC and DPO: at train time, SLiC requires only the sampled outputs from a reference model, while DPO requires the logits from that (frozen) reference model for both the positive and negative sequence.
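The two objectives can be written compactly in code. The following is a minimal PyTorch-style sketch operating on pre-computed sequence log-probabilities; the tensor names and mean reduction are our own choices, not the authors' code.

```python
import torch
import torch.nn.functional as F

def slic_loss(logp_pos, logp_neg, logp_ref_seq, delta=1.0, lam=1.0):
    """SLiC (Equation 1): hinge ranking loss on sequence log-probs plus a
    regularizer on the reference (SFT) sequence. All inputs have shape (batch,)
    and are log P_theta(y|x) summed over tokens."""
    rank = torch.clamp(delta - logp_pos + logp_neg, min=0.0)
    reg = -lam * logp_ref_seq
    return (rank + reg).mean()

def dpo_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg, beta=0.1):
    """DPO (Equations 2-4): logistic loss on the difference of the
    policy-vs-reference log-ratios for the positive and negative sequences."""
    r_pos = beta * (logp_pos - ref_logp_pos)
    r_neg = beta * (logp_neg - ref_logp_neg)
    return -F.logsigmoid(r_pos - r_neg).mean()
```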
2310.02263#7
2310.02263#9
2310.02263
[ "2309.00267" ]
2310.02263#9
Contrastive Post-training Large Language Models on Data Curriculum
Rafailov et al. (2023) also conduct a theoretical analysis of DPO and prove that optimizing the DPO loss is identical to the RLHF loss.

# 4 CONTRASTIVE POST-TRAINING OVER PAIRWISE DATA CURRICULUM

Contrastive Post-training Contrastive post-training involves the construction of positive y+ and negative y− sequences in response to the same input x. Under the traditional setting of human feedback, it is often the case that for some (y1, y2) ∼ P(x) sampled from the same LLM, human annotators provide a preference as to which is the positive. As this process is expensive, to reduce costs, recent studies (Xu et al., 2023b; Lee et al., 2023; Yang et al., 2023) have investigated the use of pre-aligned models as substitutes for human annotators in providing feedback for post-training methods. However, annotating preference pairs using the largest models, such as GPT-4, on datasets with millions of examples, like the 5M examples used by Orca (Mukherjee et al., 2023),
2310.02263#8
2310.02263#10
2310.02263
[ "2309.00267" ]
2310.02263#10
Contrastive Post-training Large Language Models on Data Curriculum
would incur a cost of $150k just for calling the API, making it prohibitively expensive as well. In our setting, we choose to sample y+ directly from a "superior" LLM, y+ ∼ Psup, and y− from an inferior Pinf. We define one model to be superior to another, Psup ≻ Pinf, if in expectation humans would prefer y+ over y− given a reasonable input x. Relying on results in tried-and-tested benchmarks (Zheng et al., 2023; Li et al., 2023; Xu et al., 2023a) such as Alpaca Eval (shown in Table 1), we make an informed choice that GPT-4 ≻ ChatGPT ≻ InstructGPT for our chosen scenario of general instruction tuning.
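As a concrete illustration of this setup, the snippet below sketches how preference pairs might be assembled from responses of a superior and an inferior model for the same prompts. The field names and the response dictionaries are placeholders, not the authors' data pipeline.

```python
# Build (prompt, chosen, rejected) preference pairs by contrasting a superior
# model's responses (e.g., GPT-4) with an inferior model's (e.g., InstructGPT).
# responses_sup and responses_inf are assumed to be dicts keyed by prompt text.

def build_pairs(prompts, responses_sup, responses_inf):
    pairs = []
    for x in prompts:
        if x in responses_sup and x in responses_inf:
            pairs.append({
                "prompt": x,
                "chosen": responses_sup[x],    # y+ ~ P_sup
                "rejected": responses_inf[x],  # y- ~ P_inf
            })
    return pairs
```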
2310.02263#9
2310.02263#11
2310.02263
[ "2309.00267" ]
2310.02263#11
Contrastive Post-training Large Language Models on Data Curriculum
We acknowledge that there could be many reasons why humans would prefer y+, as previous studies have found that a single reward function may not be sufficient to capture the range of human preferences (Hong et al., 2023; Skalse et al., 2023). Other studies emphasize only a certain property in the contrastive pair, such as helpfulness or harmlessness (Bai et al., 2022a).

Data Curriculum The concept of a curriculum (Bengio et al., 2009) is analogous to the pedagogical approach in human learning where tasks are presented in increasing order of difficulty. By adopting this methodology, we aim to facilitate a smoother and more effective learning trajectory for our models. For our curriculum, we approximate the difficulty of the learning task as being inversely proportional to the gap between Psup and Pinf, as indicated in Table 1. That is, the more clear-cut
2310.02263#10
2310.02263#12
2310.02263
[ "2309.00267" ]
2310.02263#12
Contrastive Post-training Large Language Models on Data Curriculum
the preference between juxtaposed y+ and y−, the easier the learning task.

Table 2: Time for post-training LLaMA-7B on Alpaca for one epoch on 16 Nvidia V100 GPUs.

| Method | Training Time |
|---|---|
| SFT | 4h |
| RLHF/RLAIF (RM) | 3h |
| RLHF/RLAIF (PPO) | 24h |
| SLiC | 7h |
| DPO | 12h |

We define an EasyPair as y+ ∼ GPT-4(x) and y− ∼ InstructGPT(x). On the other hand, a HardPair contrasts between, e.g., ChatGPT and InstructGPT, because the capability gap between them is narrower than that between GPT-4 and InstructGPT. HardPairs present a more nuanced challenge, requiring the model to discern subtler distinctions in quality and content. We define our curriculum such that, initially, training starts with only EasyPairs to provide our model with a foundational understanding of the contrastive differences. During training, the model becomes adept at identifying distributional differences, so the probability of seeing an EasyPair in a mini-batch decreases as EasyPairs are replaced by HardPairs:

p(EasyPair) = 1 − α,  p(HardPair) = α   (5)
2310.02263#11
2310.02263#13
2310.02263
[ "2309.00267" ]
2310.02263#13
Contrastive Post-training Large Language Models on Data Curriculum
The anti-curriculum is the exact opposite â moving from HardPair to EasyPair. We also explore an analogous curriculum regime for supervised fine-tuning, which we define as starting from ChatGPT targets (which are easier for a smaller model to imitate), and gradually moving towards GPT-4 targets, which are more challenging. By structuring such data curriculums, we ensure that the model can gradually acclimatize to the task, building on its understanding and refining its discernment capabilities. This approach not only enhances the modelâ s performance but also provides insights into the incremental learning capabilities of large language models.
2310.02263#12
2310.02263#14
2310.02263
[ "2309.00267" ]
2310.02263#14
Contrastive Post-training Large Language Models on Data Curriculum
5 EXPERIMENTS 5.1 EXPERIMENTAL SETTINGS Training Datasets Our small-scale experiments utilize Alpaca (Taori et al., 2023), an instruction learning dataset, which originally includes 52k instructions generated with Self-Instruct (Wang et al., 2023), with responses from InstructGPT (text-davinci-003). We further collect ChatGPTâ s re- sponses with OpenAI API (gpt-3.5-turbo) and GPT-4â s responses from Peng et al. (2023). There- fore, we are able to construct three contrastive pairs, namely GPT-4 vs. td003, GPT-4 vs. ChatGPT and ChatGPT vs. td003. For large-scale experiments, we use a mixture of 550k FLAN-v2 data, 200k FLAN-v1 data (sampled according to (Mukherjee et al., 2023)), the 52k Alpaca data (Taori et al., 2023) and 50k Vicuna data (Chiang et al., 2023). Evaluation Datasets We evaluate performance of models with Alpaca Eval (Li et al., 2023) and the test set of WizardLM prompts (Xu et al., 2023a). Alpaca Eval consists of 805 instructions, which includes 252 instructions from the self-instruct evaluation set (Wang et al., 2023), 188 from Open Assistant evaluation set, 129 from Anthropic-HH helpfulness (Bai et al., 2022a), 80 from Vicuna evaluation (Chiang et al., 2023), and 156 from Koala evaluation (Geng et al., 2023). The metric is a win rate of a treatment candidate against a baseline modelâ s responses, evaluated by GPT-4 in a side-by-side fashion (OpenAI, 2023). The WizardLM test set (Xu et al., 2023a) consists of 218 prompts which cover 29 distinct skills, collected from the open-source repositories, platforms and forums. Following Xu et al. (2023a), we report the ratio of the sum over all examples of scores of the treatment model compared to a baseline (a.k.a. â score %â ) as well as the win/tie rates.
2310.02263#13
2310.02263#15
2310.02263
[ "2309.00267" ]
2310.02263#15
Contrastive Post-training Large Language Models on Data Curriculum
This metric is again a side-by-side comparison evaluated by GPT-4. Whereas AlpacaEval formats comparisons as a ranking task (re-order the 5 Preprint Table 3: An example of reward hacking in RLAIF model trained with a â in-domainâ reward model on GPT-4 vs. td003 pairs (Skalse et al., 2022), despite its response is unreadable. Prompt Method Response Transform this recipe for â vegetable fried riceâ into a vegan version. 3 tablespoons vegetable oil, 2 eggs, 1 cup diced onions, 2 garlic cloves minced, 2 cups shredded carrots, 2 cups cooked white rice, 2 tablespoons soy sauce.
2310.02263#14
2310.02263#16
2310.02263
[ "2309.00267" ]
2310.02263#16
Contrastive Post-training Large Language Models on Data Curriculum
RLAIF SFT 1/: BBCRed pepper converted to3 tbps shred blocklijke diceda)â Rotisserie veg- etablesâ Hereâ s a vegan version of vegetable fried Ingredients: 3 tablespoons veg- rice: etable oil; 1 cup diced onions.. [complete output omitted] Reward 34.594 22.156 candidate responses according to how a human would prefer them), for WizardLM the candidates are individually scored. Note that such evaluation by GPT-4 might slightly favor SFT on GPT-4 outputs, as pointed by Li et al. (2023). Both datasets have a different data distribution from our training set and thus can be a good testbed to test the zero-shot generalization capability of the models. Base Models For experiments on Alpaca, we use LLaMA-7B (Touvron et al., 2023a) as the base model. For large-scale experiments, we explore the post-training enhancement setting, where we initialize from 13B parameter state-of-the-art instruction-following model, Orca (Mukherjee et al., 2023) and improve its performance. Training Details For all model trained, we use the AdamW optimizer with a learning rate of 1e-5 and linear warm-up. The LLaMA models are trained on 16 Nvidia V100 32GB GPUs with the maximum length set to 1024 and a total batch size of 512. The Orca models are trained on 32 Nvidia A100 80GB GPUs with the maximum length set to 2048 and a total batch size of 512. The small scale experiments thus have 101 steps per epoch on Alpaca, and the large scale experiments have roughly 1600 steps. To save VRAM, we use DeepSpeed ZeRO-3 (Rajbhandari et al., 2020) for model parallelism and offload. For SLiC, we set the ranking margin δ and regularization coefficient both to 1.0, following Zhao et al. (2023a). For DPO, we use the default temperature β of 0.1, following Rafailov et al. (2023). The training time for all methods on Alpaca is shown in Table 2.
2310.02263#15
2310.02263#17
2310.02263
[ "2309.00267" ]
2310.02263#17
Contrastive Post-training Large Language Models on Data Curriculum
We implement RLAIF (Lee et al., 2023) by training reward models (initialized from LLaMA) with the same pairs for SLiC and DPO. Then, we use the trained reward models for the standard RLHF, strictly following Hugging Face TRL1. We search the KL penalty coefficient hyperparameter over {0.2, 0.5, 1.0}. 5.2 COMPARING CANDIDATES FOR POST-TRAINING: RLAIF, SLIC AND DPO We compare offline contrastive post-training algorithms, SLiC and DPO, and an online RL method, RLAIF, to SFT. Since both Alpaca Eval and WizardLM evaluations are pairwise, we choose two rea- sonable baselines to compare all techniques: SFT on ChatGPT outputs, and SFT on GPT-4 outputs, which is slightly harder. Which is the best for post-training? The top of Table 4 establishes our baselines: we fine-tune LLaMA (Touvron et al., 2023a) on both ChatGPT and GPT-4 outputs, respectively. SFT on GPT- 4 outperforms SFT on ChatGPT with a win rate of 61.2% and 72.7% on Alpaca and WizardLM evaluation sets, respectively. For contrastive post-training approaches, SLiC underperforms SFT by a large margin. A poten- tial reason is the objective that SLiC optimizes includes a fixed ranking margin δ. In our setting, the distance between the positive and negative examples fluctuates, thus may cause difficulties for learning effectively. In contrast, DPO introduces a reference model instead of using a fixed margin for the loss. By comparing Equation 1 to Equation 4, DPO can be roughly regarded as optimizing a dynamic margin δ⠲ = log Pref (y+|x) â log Pref (yâ |x) as in SLiC.
2310.02263#16
2310.02263#18
2310.02263
[ "2309.00267" ]
2310.02263#18
Contrastive Post-training Large Language Models on Data Curriculum
This may explain why DPO is # 1https://github.com/huggingface/trl 6 Preprint Table 4: Experimental results of offline post-training techniques. For SLiC and DPO, the training target contrasts a positive vs. negative pair, and the reference model for these techniques is the SFT model trained on ChatGPT responses. All baselines are compared against LLaMA models fine- tuned with ChatGPT and GPT-4 responses on Alpaca data. SFT-3.5 is the LLaMA model trained with SFT on ChatGPT responses. â RLAIF-trained models suffer crippling reward hacking.
2310.02263#17
2310.02263#19
2310.02263
[ "2309.00267" ]
2310.02263#19
Contrastive Post-training Large Language Models on Data Curriculum
vs. SFT on ChatGPT vs. SFT on GPT-4 Method Init. Training Target Epoch Alpaca WizardLM Alpaca WizardLM win% score% win (tie)% win% score% win (tie)% SFT SFT SFT RLAIFâ LLaMA LLaMA SFT-3.5 ChatGPT outputs GPT-4 outputs GPT-4 outputs LLaMA RM on output pairs 1 1 1 1 50.0 61.2 65.1 0.0 100.0 125.8 124.3 - 50.0 72.7 (6.0) 71.3 (5.1) 0.0 (0.0) 37.4 50.0 53.2 0.0 97.4 100.0 103.8 - 32.4 (6.5) 50.0 47.2 (6.5) 0.0 (0.0) SLiC SLiC SLiC LLaMA ChatGPT vs td003 LLaMA GPT4 vs ChatGPT LLaMA GPT4 vs td003 1 1 1 33.7 41.3 22.9 95.8 108.8 81.4 40.9 (0.5) 57.9 (0.5) 31.0 (1.4) 20.5 30.4 13.8 85.9 95.1 75.3 24.5 (0.5) 38.0 (0.9) 17.6 (1.4) DPO DPO DPO DPO LLaMA ChatGPT vs td003 LLaMA GPT4 vs ChatGPT LLaMA SFT-3.5 GPT4 vs td003 GPT4 vs td003 1 1 1 1 48.6 56.0 59.6 70.4 111.3 119.6 121.1 120.4 58.8 (0.5) 68.1 (0.5) 68.1 (2.8) 66.2 (2.8) 32.8 41.6 45.2 58.7 97.8 98.3 99.8 105.4 39.4 (0.5) 39.8 (1.9) 43.1 (3.7) 51.9 (2.8) SFT DPO SFT-3.5 Above GPT4 outputs GPT4 vs td003 3 1 72.8 77.3 119.3 137.8 64.4 (4.6) 80.6 (1.9) 62.1 66.5 103.4 112.2 48.1 (4.6) 62.5 (2.3)
2310.02263#18
2310.02263#20
2310.02263
[ "2309.00267" ]
2310.02263#20
Contrastive Post-training Large Language Models on Data Curriculum
Table 5: Experimental results of RLHF compared with SFT and DPO. SFT-3.5 is the LLaMA model trained with SFT on ChatGPT responses. vs. SFT on ChatGPT vs. SFT on GPT-4 Method Init. Training Target Alpaca WizardLM Alpaca WizardLM win% score% win (tie)% win% score% win (tie)% SFT DPO SFT-3.5 SFT-3.5 GPT-4 outputs GPT4 vs td003 65.1 70.4 124.3 120.4 71.3 (5.1) 66.2 (2.8) 53.2 58.7 103.8 105.4 47.2 (6.5) 51.9 (2.8) RLHF RLHF SFT-3.5 OASST DeBERTa RM 36.1 36.1 OASST Pythia RM SFT-3.5 91.0 92.7 26.9 (7.9) 30.6 (9.7) 25.3 29.4 86.6 87.9 22.2 (3.7) 25.5 (2.8) more robust in our setting where the labels are noisy. Moreover, as shown in Table 2, DPO holds an advantage against RLAIF in training efficiency and alleviates the need to tune the hyperparameter δ. When comparing head-to-head with SFT on GPT-4 responses, the best-performing DPO wins on 58.7% and 51.9% prompts on Alpaca Eval and WizardLM, respectively. Which pair should we train DPO on? We train multiple DPO models on different contrastive pairs. We find that the most distant pair, i.e., GPT-4 vs. InstructGPT, has the best performance. This may be due to this pair has the least noise, as most GPT-4 responses are expected to outperform those of InstructGPT. This provides a more reliable signal to facilitate model learning. As shown in Table 4, the DPO model trained on GPT-4 vs.
2310.02263#19
2310.02263#21
2310.02263
[ "2309.00267" ]
2310.02263#21
Contrastive Post-training Large Language Models on Data Curriculum
InstructGPT outperforms the other two pairs on both Alpaca Eval and WizardLM evaluation. Also, we find that the DPO model initialized from the SFT model can achieve better performance than initialized from the raw LLaMA checkpoint. What if we SFT the model for even longer? Due to computation budget limit, our previous experiments train the model for 1 epoch on Alpaca. However, we are curious if the advantage of DPO holds with more epochs of SFT. We train the SFT model with 3 epochs, which is the same setting as in Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023). As the model converges on the SFT objective after 3 epochs, training another epoch with DPO achieves substantial improvement on all metrics. This result suggests that DPO works well with a strong SFT model and may be suitable for scaling up, which we will demonstrate later in Section 5.4.
2310.02263#20
2310.02263#22
2310.02263
[ "2309.00267" ]
2310.02263#22
Contrastive Post-training Large Language Models on Data Curriculum
7 Preprint Table 6: Head-to-head comparison of Orca 13B models in scaled-up experiments. Orca with DPO post-training significantly outperforms continuing training Orca with SFT (p < 0.01). Model vs. Alpaca Eval (win%) WizardLM Eval helpful koala oasst self-instruct vicuna overall score% win (tie)% ChatGPT Orca 13B Orca + SFT ChatGPT Orca + DPO ChatGPT 55.8 46.5 58.1 53.2 55.8 57.7 47.9 48.9 52.7 41.7 41.7 47.6 73.8 77.5 73.8 50.8 50.4 55.0 94.7 97.2 97.4 42.1 (16.9) 51.0 (11.9) 51.0 (11.1) Orca + SFT Orca 13B Orca + DPO Orca + SFT 43.4 59.7 51.3 48.7 51.1 60.6 52.4 56.0 47.5 51.3 49.9 55.8 105.6 104.8 55.9 (19.9) 55.9 (19.9) 5.3 COMPARISON WITH RLAIF AND RLHF For RL, we utilize three reward models: two external RLHF reward models from OpenAssistant reported in Table 5, and one RLAIF reward model trained â in-domainâ on the contrastive pairs in the Alpaca dataset in Table 4. We strictly follow the settings and code implementation in Hugging Face TRL2 library and use PPO to tune the SFT model on ChatGPT with 1 epoch with three different KL penalties coefficient {0.2, 0.5, 1.0} and report the best result among the three. We find that PPO is unfortunately very sensitive to the quality of its reward model, and is prone to degeneration when trained on small amounts of possibly noisy â
2310.02263#21
2310.02263#23
2310.02263
[ "2309.00267" ]
2310.02263#23
Contrastive Post-training Large Language Models on Data Curriculum
in-domainâ data. An example is shown in Table 3, where a broken response trained with PPO is preferred over a coherent response generated by the SFT model. We believe this â reward hackingâ is due to the reward model failing to generalize (Tien et al., 2023), likely overfitting to spurious lexical differences between GPT-4 and InstructGPT (Zhuang & Hadfield-Menell, 2020; Skalse et al., 2022). To combat this behavior, we employ external reward models from Open Assistant (K¨opf et al., 2023) which stabilize the training in the same codebase with the same settings off-the-shelf. In particular, we use the OpenAssistant DeBERTa-Large reward model3 and the larger Pythia 6.9B reward model4. As Table 5 shows, while the outputs are coherent under these external reward models, they still fail to beat the SFT baselines, as the performance degrades on the two out-of-distribution evaluation datasets. This suggests the reward models may fail to generalize to out-of-distribution data (Tien et al., 2023). We conclude only that RLAIF/RLHF requires substantial effort to train properly. It is worth mentioning that DPO, as an alternative, works out-of-the-box on the same pairs that are used to train the â in-domainâ reward models that lead to RLAIFâ
2310.02263#22
2310.02263#24
2310.02263
[ "2309.00267" ]
2310.02263#24
Contrastive Post-training Large Language Models on Data Curriculum
collapse.

5.4 ORCA+: SCALING UP CONTRASTIVE POST-TRAINING

To verify whether our findings from the small-scale Alpaca experiments generalize, we test the performance of DPO with Orca 13B (Mukherjee et al., 2023) as both the reference model and the initialization. The results are shown in Table 6. The SFT baseline is Orca trained on GPT-4 responses for the same prompts. The DPO model is trained with GPT4-vs-td003 pairs. We compare Orca 13B, Orca+SFT, and Orca+DPO against ChatGPT responses. Orca+DPO successfully improves performance, achieving a 55% win rate on Alpaca Eval and a 51% win rate on WizardLM Eval, respectively. We then conduct a head-to-head comparison of SFT and DPO. Compared to the original Orca model, Orca+SFT does not show a statistically significant improvement on Alpaca Eval (p > 0.05). Compared with Orca+SFT, Orca+DPO significantly improves performance on both Alpaca Eval and WizardLM Eval (p < 0.01). We also present generated examples in Appendix A. The large-scale experiments further verify the effectiveness of our proposed contrastive post-training approach.

2 https://github.com/huggingface/trl
3 https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2
4 https://huggingface.co/OpenAssistant/oasst-rm-2-pythia-6.9b-epoch-1
2310.02263#23
2310.02263#25
2310.02263
[ "2309.00267" ]
2310.02263#25
Contrastive Post-training Large Language Models on Data Curriculum
[Figure 2: two line plots (left panel: SFT, right panel: DPO) of the data-mixture proportion against Epoch from 0.0 to 1.0.]
Figure 2: The four candidate data curriculums for SFT and DPO. For SFT (left), the curriculum (1) fine-tunes the model on GPT-4 responses and gradually transitions to ChatGPT, and the other (2) does the opposite. For DPO (right), the curriculum (3) starts with GPT-4 vs. td003 and ends with ChatGPT vs. td003, while the curriculum (4) does the opposite.
2310.02263#24
2310.02263#26
2310.02263
[ "2309.00267" ]
2310.02263#26
Contrastive Post-training Large Language Models on Data Curriculum
Table 7: Experimental results of different curriculums for SFT and DPO. The corresponding curriculums are illustrated in Figure 2. SFT-3.5 is the LLaMA model trained with SFT on ChatGPT responses. Starting with EasyPairs and warming up to HardPairs can significantly improve the performance compared to the best DPO model trained only with EasyPairs (GPT-4 vs. td003).
2310.02263#25
2310.02263#27
2310.02263
[ "2309.00267" ]
2310.02263#27
Contrastive Post-training Large Language Models on Data Curriculum
| Curr. | Method | Init. | Training Target | Alpaca win% vs. SFT (ChatGPT) | WizardLM score% vs. SFT (ChatGPT) | WizardLM win (tie)% vs. SFT (ChatGPT) | Alpaca win% vs. SFT (GPT-4) | WizardLM score% vs. SFT (GPT-4) | WizardLM win (tie)% vs. SFT (GPT-4) |
|---|---|---|---|---|---|---|---|---|---|
| (1) | SFT | LLaMA | GPT-4→ChatGPT | 47.5 | 107.6 | 52.8 (7.9) | 33.2 | 96.0 | 34.7 (2.3) |
| (2) | SFT | LLaMA | ChatGPT→GPT-4 | 57.0 | 115.2 | 59.7 (6.0) | 43.7 | 100.0 | 41.7 (4.2) |
| - | SFT | SFT-3.5 | GPT-4 outputs | 65.1 | 124.3 | 71.3 (5.1) | 53.2 | 103.8 | 47.2 (6.5) |
| - | DPO | SFT-3.5 | GPT4 vs td003 | 70.4 | 120.4 | 66.2 (2.8) | 58.7 | 105.4 | 51.9 (2.8) |
| (3) | DPO | SFT-3.5 | (GPT4→ChatGPT) vs td003 | 72.5 | 126.7 | 71.3 (2.3) | 59.8 | 108.9 | 57.4 (2.3) |
| (4) | DPO | SFT-3.5 | (ChatGPT→GPT4) vs td003 | 68.8 | 127.0 | 74.1 (3.2) | 56.8 | 105.2 | 47.4 (4.2) |
2310.02263#26
2310.02263#28
2310.02263
[ "2309.00267" ]
2310.02263#28
Contrastive Post-training Large Language Models on Data Curriculum
5.5 DATA CURRICULUMS FOR POST-TRAINING

We number the different curriculums as shown in Figure 2. The experimental results for the curriculums are shown in Table 7. All experiments are trained with the same numbers of contrastive pairs and steps. For SFT, starting with ChatGPT and transitioning to GPT-4 (Curr. 2) outperforms the opposite (Curr. 1) by a considerable margin. Since many models, such as Vicuna (Chiang et al., 2023) and Orca (Mukherjee et al., 2023), are fine-tuned with mixed ChatGPT and GPT-4 responses, our finding suggests that a simple reordering of the data can lead to different performance. For DPO, with Curr. 3, we start from the EasyPair, GPT-4 vs. td003, and transition to the HardPair, ChatGPT vs. td003. This strategy achieves better performance than using only EasyPairs all the time. Meanwhile, the anti-curriculum, Curr. 4, underperforms single-pair DPO in general. Curriculum learning further unleashes the potential of DPO for post-training. We believe further improvement can be achieved with a more thorough hyperparameter search.

# 6 CONCLUSION AND FUTURE WORK

In this paper, we propose a new setting for contrastive post-training of large language models. We explore the best method and curriculum settings to facilitate post-training. Our large-scale experiments with the state-of-the-art model Orca further verify the effectiveness of our approach and suggest its potential for improving the performance of LLMs at scale. For future work, we plan to explore both how to better select meaningful contrastive pairs from a fixed data regime, and how to continually evolve a model with pairs populated by sampling from the model itself at various points during training.
2310.02263#27
2310.02263#29
2310.02263
[ "2309.00267" ]