Update README.md
---
license: apache-2.0
language:
- el
pipeline_tag: text-classification
task_categories:
- text-classification
- text-generation
- zero-shot-classification
tags:
- social media
- Reddit
- Text Classification
- Greek
- Greek NLP
pretty_name: GreekReddit
size_categories:
- 1K<n<10K
---

# GreekReddit

<img src="" width="600"/>

GreekReddit is a Greek topic classification dataset collected from Greek subreddits. It contains 6,534 posts together with their titles and topic labels.
This dataset has been used to train our best-performing model []() as part of our research paper: [Social Media Topic Classification on Greek Reddit]().
For information about dataset creation, limitations, etc., see the original article.

### Supported Tasks and Leaderboards

This dataset supports:

- **Multi-class Text Classification:** Given the text of a post, a model learns to predict the associated topic label.
- **Title Generation:** Given the text of a post, a text generation model learns to generate a post title.

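As a rough sketch (not code from the paper), both tasks can be framed directly from the fields described under Data Fields below: the post `text` is the input, and either the topic `category` or the post `title` is the target.

```python
from datasets import load_dataset

# Load the training split (see Data Fields below for the field descriptions).
train_split = load_dataset('IMISLab/GreekReddit', split='train')

# Multi-class text classification: post text -> topic label.
clf_pairs = [(post['text'], post['category']) for post in train_split]

# Title generation: post text -> post title.
gen_pairs = [(post['text'], post['title']) for post in train_split]

print(clf_pairs[0][1])
print(gen_pairs[0][1])
```
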
### Languages

All posts are written in Greek.

## Dataset Structure

### Data Instances

The dataset is provided as `.csv` files, with three splits: train, validation, and test.

### Data Fields

The following data fields are provided for each split:

- `id`: (**str**) A unique post id.
- `title`: (**str**) A short post title.
- `text`: (**str**) The full text of the post.
- `url`: (**str**) The URL of the original, unprocessed post.
- `category`: (**class label**) The topic label of the post.

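For a quick sanity check of these fields, the sketch below counts how many training posts fall under each topic label. It assumes `category` is stored as a plain label string; if it is loaded as a `ClassLabel`, the ids can be mapped back to names via `train_split.features['category'].names`.

```python
from collections import Counter
from datasets import load_dataset

train_split = load_dataset('IMISLab/GreekReddit', split='train')

# Count posts per topic label (assumes `category` holds label strings).
label_counts = Counter(train_split['category'])
for label, count in label_counts.most_common():
    print(f'{label}: {count}')
```
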
### Data Splits

| Split      | No. of Documents |
|------------|------------------|
| Train      | 5,530            |
| Validation | 504              |
| Test       | 500              |

### Example Code

```python
from datasets import load_dataset

# Load the training, validation and test dataset splits.
train_split = load_dataset('IMISLab/GreekReddit', split='train')
validation_split = load_dataset('IMISLab/GreekReddit', split='validation')
test_split = load_dataset('IMISLab/GreekReddit', split='test')

print(test_split[0])
```
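
Since the dataset is also tagged for zero-shot classification, here is a hedged sketch of scoring a single post with a generic multilingual NLI model via the `transformers` pipeline. The model name and the candidate labels below are illustrative placeholders, not the model or the label set from the paper.

```python
from datasets import load_dataset
from transformers import pipeline

test_split = load_dataset('IMISLab/GreekReddit', split='test')

# Illustrative multilingual NLI model; not the model released with the paper.
classifier = pipeline('zero-shot-classification', model='joeddav/xlm-roberta-large-xnli')

# Placeholder candidate labels; replace them with the dataset's actual topic labels.
candidate_labels = ['πολιτική', 'αθλητισμός', 'οικονομία']

result = classifier(test_split[0]['text'], candidate_labels)
print(result['labels'][0], result['scores'][0])
```
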
## Contact

If you have any questions or feedback about the dataset, please e-mail one of the following authors:

```



```

## Citation

The dataset has been officially released with the article: [Social Media Topic Classification on Greek Reddit]().
If you use the dataset, please cite the following:

```
TBA
```