---
language:
- "en"
tags:
- text-classification
- sentiment-analysis
task_categories:
- text-classification
configs:
- config_name: quality
data_files:
- split: train
path:
- quality/train.csv.gz
- split: test
path:
- quality/test.csv.gz
- config_name: readability
data_files:
- split: train
path:
- readability/train.csv.gz
- split: test
path:
- readability/test.csv.gz
- config_name: sentiment
data_files:
- split: train
path:
- sentiment/train.csv.gz
- split: test
path:
- sentiment/test.csv.gz
---
# Text statistics
This dataset is a combination of the following datasets:
- [agentlans/text-quality-v2](https://huggingface.co/datasets/agentlans/text-quality-v2)
- [agentlans/readability](https://huggingface.co/datasets/agentlans/readability)
- [agentlans/twitter-sentiment-meta-analysis](https://huggingface.co/datasets/agentlans/twitter-sentiment-meta-analysis)
Its main purpose is to gather these large datasets in one place for convenient training and evaluation.
Each source dataset was shuffled and randomly split into `train` and `test` splits.
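Each split is stored as a gzipped CSV file (for example `quality/train.csv.gz`). A minimal sketch of reading such a file with pandas, using a tiny in-memory stand-in; the column names here are illustrative assumptions, not the dataset's actual schema:

```python
import gzip
import io

import pandas as pd

# Build a tiny gzipped CSV in memory to stand in for a file like
# quality/train.csv.gz (columns "text" and "label" are assumed for illustration).
csv_bytes = gzip.compress(b"text,label\nhello world,1\nbad text,0\n")

# Read the gzipped CSV; compression must be given explicitly for a buffer.
df = pd.read_csv(io.BytesIO(csv_bytes), compression="gzip")
print(df.shape)  # (2, 2)
```

With the configs declared in the frontmatter, the Hugging Face `datasets` library can also load a split directly, e.g. `load_dataset(<repo_id>, "quality", split="train")`.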
## Dataset size
| Dataset | Train split | Test split |
|---------|-------------|------------|
| quality | 809 533 | 100 000 |
| readability | 869 663 | 50 000 |
| sentiment | 128 690 | 10 000 |