7B WASSA2024 Track 1,2,3 baseline LLM based on Llama2-base 7B (Pure LoRA Training)

Introduction

This is a baseline model for WASSA2024 Tracks 1, 2, and 3. The overall prompt template is shown below:

"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n\n{yourContent}\n\n### Response:\n\n"

For each task, there is a customized instruction template and a result template, shown below:

Track 1

Instruction template:

"This is an Empathy Prediction task in Conversations. You are asked to predict the perceived empathy level of a specific individual at the conversation level. You need to make your prediction on the conversation history between speaker 1 and others to predict speaker 1's preceived empathy level. The empathy levels are divided into 9 levels. All annotations are in the range [0, level-1], and must be made using integers only.\n\nSpeaker1: {yourContent}"

Result template:

"The annotation result is as follows:\nThe Speaker1's preceived empathy level to this conversation is {preceivedEmpathyLevel}."

Track 2

Instruction template:

"This is an Empathy and Emotion Prediction task. You are asked to predict the perceived empathy, emotion polarity, emotion intensity, and self disclosure status at the speech-turn-level in a conversation. You need to make predictions based on the last statement from speaker 1 (and the previous conversation content if provided). The emotion intensity and empathy level are divided into 16 levels, while emotion polarity and self-disclosure status are divided into 10 levels. All annotations are in the range [0, level-1], and must be made using integers only.\n\nSpeaker1: {yourContent}"

Result Template:

"The annotation result of the final statement of the Speaker1 is as follows:\nThe emotion intensity is {emotionValue}, the empathy level is {empathyValue}, the emotion polarity is {emotionPolarity}, and the self disclosure status is {selfDisclosure}."

If you want to input a multi-turn conversation, you need to add the "Speaker1" and "Speaker2" prefixes manually. Here is an example:

"This is an Empathy and Emotion Prediction task. You are asked to predict the perceived empathy, emotion polarity, emotion intensity, and self disclosure status at the speech-turn-level in a conversation. You need to make predictions based on the last statement from speaker 1 (and the previous conversation content if provided). The emotion intensity and empathy level are divided into 16 levels, while emotion polarity and self-disclosure status are divided into 10 levels. All annotations are in the range [0, level-1], and must be made using integers only.\n\nSpeaker2: what did you think about this article\nSpeaker1: It's definitely really sad to read, considering everything they're all going through. What did you think?\nSpeaker2: I think it's super sad... they seem to never catch a break, always struggling.\nSpeaker1: I can't imagine just living in an area that is constantly being ravaged by hurricanes or earthquakes. I take my location for granted.\nSpeaker2: Me too.. I also can't imagine living in the poverty and such.. It's crazy to think that people still live like that sometimes. The gap between first world countires and places like that is crazy to em\nSpeaker1: It also seems unnecessary for there to even be such a gap. With all of the wealthy countries out there, I hope Haiti gets the help it deserves, because we, and other countries, can certainly afford it.\nSpeaker2: Agreed... with how frivilous and unnessary our spending is, it's so sad that countries like that don't get more support or guidance.\nSpeaker1: It's disheartening, isn't it? Places have the ability, money, time, and knowledge, and still refuse to help.\nSpeaker2: It is so sad... Or even the millionaires/billionaires out there. I know some of them donate, but at some point, you can only spend so much money. Why not put it to use.\nSpeaker1: Yep, exactly. It's just very frustrating overall. I think it's hard for others because they don't understand until their houses are being swept away for torrential floods.\nSpeaker2: It is hard to fathom/process, it's hard for me to really imagine\nSpeaker1: Give it twenty more years, for the more compassionate people to come into leadership. I think we'll see a big difference."

Track 3

Instruction template:

"This is an Empathy Prediction task. You are asked to predict both the empathy concern and personal distress at the essay level. You need to make predictions based on all of the speaker's utterances, also known as the person's essay. The empathy level and distress level are divided into 43 levels. All annotations are in the range [0, level-1], and must be made using integers only.\n\nPerson's Essay: {yourContent}"

Result template:

"The annotation result is as follows:\nThe empathy level is {empathLevel}, and the distress level is {distressLevel}."

Training Details

  1. Training Framework: This model is trained with a modified ChinChunMei-LLM framework.
  2. Tokenizer: This model uses the Llama2 tokenizer with an extra [PAD] token added to the vocabulary, giving a vocabulary size of 32001.
  3. Training Parameters: LoRA rank: 8, LoRA alpha: 32, LoRA dropout: 0.05, LoRA target modules: "q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj", learning rate: 1e-5, warmup ratio: 0.001 (see the sketch after this list).
  4. Training Resources: 4×V100 GPUs, 4 hours.
  5. Loss info: see all_result.json.
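As a concrete illustration of points 2 and 3, this setup can be reproduced with Hugging Face transformers and peft. It is a sketch under the assumption that the base checkpoint is `meta-llama/Llama-2-7b-hf`; the actual ChinChunMei-LLM training code is not reproduced here.

```python
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # assumed Llama2-base checkpoint

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
tokenizer.add_special_tokens({"pad_token": "[PAD]"})  # vocab size becomes 32001

model = LlamaForCausalLM.from_pretrained(BASE_MODEL)
model.resize_token_embeddings(len(tokenizer))  # make room for the new [PAD]

lora_config = LoraConfig(
    r=8,                 # LoRA rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Train with LR 1e-5 and warmup ratio 0.001, e.g. via transformers' Trainer.
```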

License

This repository's models are open-sourced under the Apache-2.0 license, and use of their weights must adhere to the Llama2 MODEL LICENCE.
