- propaganda
---
# Model Card for identrics/wasper_propaganda_detection_bg
## Model Description
This model is a fine-tuned version of BgGPT-7B-Instruct-v0.2 for a propaganda detection task. It is effectively a binary classifier, determining whether propaganda is present in the input text.

This model was created by [`Identrics`](https://identrics.ai/) within the scope of the WASPer project. The detailed taxonomy of the full pipeline can be found [here](https://github.com/Identrics/wasper/).
## Uses
The model is designed as a binary classifier that determines whether a comment from traditional or social media contains propaganda.
### Example
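A minimal usage sketch with the `transformers` library. The specific classes (`AutoTokenizer`, `AutoModelForSequenceClassification`) and the sample text are assumptions, since the original example snippet is not fully preserved here:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "identrics/wasper_propaganda_detection_bg"

# Load the tokenizer and the fine-tuned classifier
# (assumed sequence-classification head).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Classify a single comment (hypothetical Bulgarian example text).
text = "Пример за коментар от социална мрежа."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**inputs)

print(output.logits)  # raw class scores; argmax gives the predicted label
```

The index of the larger logit indicates the predicted class (propaganda vs. non-propaganda).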
## Training Details
The training dataset for the model consists of a balanced collection of Bulgarian examples, including both propaganda and non-propaganda content. These examples were sourced from a variety of traditional media and social media platforms and manually annotated by domain experts. Additionally, the dataset is enriched with AI-generated samples.
The model achieved an F1 score of 0.836 during evaluation.
## Compute Infrastructure
This model was fine-tuned on **2× NVIDIA Tesla V100 32 GB GPUs**.
## Citation [this section is to be updated soon]
If you find our work useful, please consider citing WASPer: