---
license: gpl-3.0
---

The `mft_model_ita.pth` file contains the trained weights of a BERT-based model fine-tuned for classifying text according to both the <b>moral dyad</b> discussed, i.e., one of

* 0: care/harm
* 1: fairness/cheating
* 2: loyalty/betrayal
* 3: authority/subversion
* 4: purity/degradation
* 5: no moral

and the <b>focus concern</b>, i.e., one of

* 0: prescriptive
* 1: prohibitive
* 2: no focus

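The index-to-label mappings above can also be written out as plain Python dictionaries, a small convenience sketch for decoding the predictions of the two heads:

```python
# Index-to-label mappings taken directly from the lists above.
MORAL_DYAD_LABELS = {
    0: "care/harm",
    1: "fairness/cheating",
    2: "loyalty/betrayal",
    3: "authority/subversion",
    4: "purity/degradation",
    5: "no moral",
}
FOCUS_LABELS = {0: "prescriptive", 1: "prohibitive", 2: "no focus"}
```
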
These weights include the parameters of the custom layers, in particular the weights and biases of the two classifiers attached to the pre-trained model.

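Neither the base checkpoint nor the exact head architecture is documented in this card, but as a rough orientation, a dual-head model matching the description above might be reconstructed and the weights loaded roughly as follows. The class name `MFTDualHeadClassifier`, the `bert-base-multilingual-cased` backbone, the `[CLS]` pooling, and the layer names are all illustrative assumptions, not the actual training code.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

# Assumption: the actual base checkpoint is not stated in this card.
BASE_CHECKPOINT = "bert-base-multilingual-cased"


class MFTDualHeadClassifier(nn.Module):
    """BERT backbone with two linear heads: a 6-way moral-dyad head and a 3-way focus head."""

    def __init__(self, base_checkpoint: str = BASE_CHECKPOINT):
        super().__init__()
        self.bert = AutoModel.from_pretrained(base_checkpoint)
        hidden = self.bert.config.hidden_size
        self.dyad_head = nn.Linear(hidden, 6)   # 0: care/harm ... 5: no moral
        self.focus_head = nn.Linear(hidden, 3)  # 0: prescriptive, 1: prohibitive, 2: no focus

    def forward(self, input_ids, attention_mask=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] representation (assumed pooling strategy)
        return self.dyad_head(cls), self.focus_head(cls)


model = MFTDualHeadClassifier()
# The released file is assumed to hold a state_dict; strict=False tolerates layer-name
# mismatches between this sketch and the original training code.
state_dict = torch.load("mft_model_ita.pth", map_location="cpu")
model.load_state_dict(state_dict, strict=False)
model.eval()

tokenizer = AutoTokenizer.from_pretrained(BASE_CHECKPOINT)
enc = tokenizer("Testo di esempio da classificare.", return_tensors="pt")
with torch.no_grad():
    dyad_logits, focus_logits = model(enc["input_ids"], enc["attention_mask"])
print(dyad_logits.argmax(dim=-1).item(), focus_logits.argmax(dim=-1).item())
```
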
The model was built as part of the European project [VALAWAI](https://www.valawai.eu).