---
language: en
license: apache-2.0
datasets:
- sst2
- glue
model-index:
- name: roberta-base-empathy
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Reaction to News Stories
      type: Reaction to News Stories
      config: sst2
      split: validation
    metrics:
    - name: MSE loss
      type: MSE loss
      value: 7.07853364944458
      verified: true
    - name: Pearson's R (empathy)
      type: Pearson's R (empathy)
      value: 0.4336383660597612
      verified: true
    - name: Pearson's R (distress)
      type: Pearson's R (distress)
      value: 0.40006974689041663
      verified: true
---

# RoBERTa-base fine-tuned on a dataset of empathic reactions to news stories (Buechel et al., 2018; Tafreshi et al., 2021, 2022)

## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)

## Model Details
**Model Description:** This model is a fine-tuned checkpoint of [RoBERTa-base](https://huggingface.co/roberta-base), trained for Track 1 of the [WASSA 2022 Shared Task](https://aclanthology.org/2022.wassa-1.20.pdf): predicting empathy and distress scores on a dataset of reactions to news stories.
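
A minimal usage sketch, assuming the checkpoint is published under the `bdotloh/roberta-base-empathy` repo id (the `name` in the metadata above) and that its sequence-classification head emits the two scores in (empathy, distress) order; the repo id, head layout, and output order are assumptions, not details confirmed by this card.

```python
# Hedged sketch of running the model; repo id, head layout, and output
# order are assumptions, not confirmed by this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "bdotloh/roberta-base-empathy"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The wildfire left hundreds of families without a home."
inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2) under the assumed head

empathy, distress = logits.squeeze().tolist()
print(f"empathy={empathy:.3f}, distress={distress:.3f}")
```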

The model attained an average Pearson's correlation (r) of 0.416854 on the dev set (for comparison, the top-ranked team achieved an average r of 0.54 on the test set).
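
This average is simply the mean of the two per-dimension correlations listed in the metadata: (0.4336 + 0.4001) / 2 ≈ 0.416854. Below is a small sketch of how such a score can be computed with `scipy.stats.pearsonr`; the gold/prediction arrays are illustrative placeholders, not the actual dev-set data.

```python
# Evaluation metric sketch: Pearson's r per dimension, then averaged.
# The gold/pred arrays below are illustrative placeholders only.
from scipy.stats import pearsonr

gold_empathy = [1.0, 4.5, 6.2, 2.3]
pred_empathy = [1.4, 4.0, 5.8, 2.9]
gold_distress = [2.0, 5.1, 6.0, 1.5]
pred_distress = [2.2, 4.3, 5.5, 2.4]

r_empathy, _ = pearsonr(pred_empathy, gold_empathy)
r_distress, _ = pearsonr(pred_distress, gold_distress)
avg_r = (r_empathy + r_distress) / 2
print(f"average Pearson's r = {avg_r:.6f}")
```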

## Training

### Training Data
An extended version of the [empathic reactions to news stories dataset](https://codalab.lisn.upsaclay.fr/competitions/834#learn_the_details-datasets).

### Fine-tuning hyperparameters

- learning_rate = 1e-5
- batch_size = 32
- warmup = 600
- max_seq_length = 128
- num_train_epochs = 3.0
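
A hedged sketch of how these settings might map onto `transformers.TrainingArguments` with the `Trainer` API. The regression setup (`num_labels=2`, `problem_type="regression"`) and the toy dataset are assumptions made for illustration; the card does not state which training script was actually used.

```python
# Hedged mapping of the listed hyper-parameters onto the Trainer API.
# The regression head and toy dataset are assumptions, not the authors'
# actual training code.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=2,               # one regression target each: empathy, distress
    problem_type="regression",  # trains with MSE loss, matching the metric above
)

# Toy stand-in for the empathic-reactions data: text plus gold
# (empathy, distress) scores per example.
raw = Dataset.from_dict({
    "text": ["A flood displaced thousands of residents overnight."],
    "labels": [[4.5, 5.0]],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

train_ds = raw.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="roberta-base-empathy",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    warmup_steps=600,
    num_train_epochs=3.0,
)

Trainer(model=model, args=args, train_dataset=train_ds,
        tokenizer=tokenizer).train()
```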