---
pretty_name: Kazakh Sentiment Analysis Dataset of Reviews and Attitudes
dataset_info:
- config_name: full
features:
- name: custom_id
dtype: string
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
0: "0 stars"
1: "1 star"
2: "2 stars"
3: "3 stars"
4: "4 stars"
5: "5 stars"
- name: domain
dtype: string
splits:
- name: train
num_bytes: 24381051
num_examples: 180064
- config_name: polarity_classification
features:
- name: custom_id
dtype: string
- name: text
dtype: string
- name: text_cleaned
dtype: string
- name: label
dtype:
class_label:
names:
0: "negative"
1: "positive"
- name: domain
dtype: string
splits:
- name: train
num_bytes: 32618403
num_examples: 134368
- name: validation
num_bytes: 4085072
num_examples: 16796
- name: test
num_bytes: 4285278
num_examples: 16797
- config_name: score_classification
features:
- name: custom_id
dtype: string
- name: text
dtype: string
- name: text_cleaned
dtype: string
- name: label
dtype:
class_label:
names:
0: "1 star"
1: "2 stars"
2: "3 stars"
3: "4 stars"
4: "5 stars"
- name: domain
dtype: string
splits:
- name: train
num_bytes: 34107559
num_examples: 140126
- name: validation
num_bytes: 4318229
num_examples: 17516
- name: test
num_bytes: 4235569
num_examples: 17516
configs:
- config_name: full
data_files:
- split: train
path: "full/full.csv"
default: true
- config_name: polarity_classification
data_files:
- split: train
path: "polarity_classification/train_pc.csv"
- split: validation
path: "polarity_classification/valid_pc.csv"
- split: test
path: "polarity_classification/test_pc.csv"
- config_name: score_classification
data_files:
- split: train
path: "score_classification/train_sc.csv"
- split: validation
path: "score_classification/valid_sc.csv"
- split: test
path: "score_classification/test_sc.csv"
license: cc-by-4.0
task_categories:
- text-classification
task_ids:
- sentiment-classification
language:
- kk
size_categories:
- 100K<n<1M
---
## Dataset Description
- **Repository:** https://github.com/IS2AI/KazSAnDRA
- **Paper:** https://arxiv.org/abs/2403.19335
<h1 align = "center">KazSAnDRA</h1>
<p align = "justify"><b>Kaz</b>akh <b>S</b>entiment <b>An</b>alysis <b>D</b>ataset of <b>R</b>eviews and <b>A</b>ttitudes, or KazSAnDRA, is a <a href = "https://github.com/IS2AI/KazSAnDRA">dataset</a> developed for Kazakh sentiment analysis.
KazSAnDRA comprises a collection of 180,064 reviews obtained from various sources and includes numerical ratings ranging from 1 to 5, providing a quantitative representation of customer attitudes.
</p>
<p>
In the <a href = "https://arxiv.org/abs/2403.19335">original study</a>, KazSAnDRA was utilised for two distinct tasks:
<ol>
<li>polarity classification (PC), involving the prediction of whether a review is positive or negative:</li>
<ul>
<li>reviews with original scores of 1 or 2 were classified as negative and assigned a new score of 0,</li>
<li>reviews with original scores of 4 or 5 were classified as positive and assigned a new score of 1,</li>
<li>reviews with an original score of 3 were categorised as neutral and were excluded from the task.</li>
</ul>
<li>score classification (SC), where the objective was to predict the score of a review on a scale ranging from 1 to 5. To align with the enumeration used for labelling in the classifier, which starts from 0 rather than 1, labels 1–5 were transformed into 0–4.</li>
</ol>
</p>
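<p align = "justify">The two remappings can be sketched as follows (a minimal illustration, not the authors' actual preprocessing code):</p>

```python
def to_polarity(score):
    """Map an original 1-5 star score to a PC label: 0 = negative, 1 = positive, None = excluded."""
    if score in (1, 2):
        return 0          # negative
    if score in (4, 5):
        return 1          # positive
    return None           # score 3 is neutral and dropped from the PC task

def to_sc_label(score):
    """Shift an original 1-5 star score to the 0-4 label range used for SC."""
    return score - 1

print([to_polarity(s) for s in range(1, 6)])  # [0, 0, None, 1, 1]
print([to_sc_label(s) for s in range(1, 6)])  # [0, 1, 2, 3, 4]
```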
<p align = "justify">
KazSAnDRA consists of seven CSV files. File <b>full.csv</b> contains all 180,064 reviews with ratings from 1 to 5. Files <b>train_pc.csv</b>, <b>valid_pc.csv</b>, and
<b>test_pc.csv</b> are the training, validation, and testing sets for the polarity classification task, respectively. Files <b>train_sc.csv</b>, <b>valid_sc.csv</b>, and
<b>test_sc.csv</b> are the corresponding sets for the score classification task.
</p>
<p align = "justify">
All files, except for <b>full.csv</b>, include records containing a custom review identifier (<i>custom_id</i>), the original review text (<i>text</i>), the pre-processed review text (<i>text_cleaned</i>), the corresponding review score (<i>label</i>), and the domain information (<i>domain</i>).
File <b>full.csv</b> includes records containing a custom review identifier (<i>custom_id</i>), the original review text (<i>text</i>), the corresponding review score (<i>label</i>), and the domain information (<i>domain</i>).
</p>
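<p align = "justify">For illustration, the task-file schema can be mimicked with a small pandas frame (toy rows, invented for illustration, not real dataset entries):</p>

```python
import pandas as pd

# Toy rows following the column layout described above
rows = [
    {"custom_id": "appstore_1", "text": "Жақсы!", "text_cleaned": "жақсы", "label": 1, "domain": "appstore"},
    {"custom_id": "market_1",   "text": "Жаман.", "text_cleaned": "жаман", "label": 0, "domain": "market"},
]
df = pd.DataFrame(rows)
print(list(df.columns))  # ['custom_id', 'text', 'text_cleaned', 'label', 'domain']
```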
<h2 align = "center">Dataset Statistics</h2>
<p align = "justify">To maintain consistency and facilitate reproducibility of experimental outcomes across research groups, we partitioned KazSAnDRA into three distinct sets: training (train), validation (valid), and testing (test), following an 80/10/10 ratio.</p>
<table align="center">
<tr align="center">
<td rowspan="3"><b>Task</b></td>
<td colspan="2"><b>Train</b></td>
<td colspan="2"><b>Valid</b></td>
<td colspan="2"><b>Test</b></td>
<td colspan="2"><b>Total</b></td>
</tr>
<tr></tr>
<tr align="center">
<td><b>#</b></td>
<td><b>%</b></td>
<td><b>#</b></td>
<td><b>%</b></td>
<td><b>#</b></td>
<td><b>%</b></td>
<td><b>#</b></td>
<td><b>%</b></td>
</tr>
<tr></tr>
<tr align="center">
<td>PC</td>
<td>134,368</td>
<td>80</td>
<td>16,796</td>
<td>10</td>
<td>16,797</td>
<td>10</td>
<td>167,961</td>
<td>100</td>
</tr>
<tr></tr>
<tr align="center">
<td>SC</td>
<td>140,126</td>
<td>80</td>
<td>17,516</td>
<td>10</td>
<td>17,516</td>
<td>10</td>
<td>175,158</td>
<td>100</td>
</tr>
</table>
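<p align = "justify">The split sizes in the table above can be checked arithmetically:</p>

```python
# Reported split sizes (train, valid, test) for the two tasks
splits = {
    "PC": (134_368, 16_796, 16_797),
    "SC": (140_126, 17_516, 17_516),
}
for task, (train, valid, test) in splits.items():
    total = train + valid + test
    shares = [round(100 * n / total) for n in (train, valid, test)]
    print(task, total, shares)  # PC 167961 [80, 10, 10] / SC 175158 [80, 10, 10]
```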
<p align = "justify">The distribution of reviews across the three sets based on their domains and scores for the PC task:</p>
<table align="center">
<thead>
<tr align="center">
<th rowspan="3">Domain</th>
<th colspan="2">Train</th>
<th colspan="2">Valid</th>
<th colspan="2">Test</th>
</tr>
<tr></tr>
<tr align="center">
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>Appstore</td>
<td>101,477</td>
<td>75.52</td>
<td>12,685</td>
<td>75.52</td>
<td>12,685</td>
<td>75.52</td>
</tr>
<tr></tr>
<tr align="center">
<td>Market</td>
<td>22,561</td>
<td>16.79</td>
<td>2,820</td>
<td>16.79</td>
<td>2,820</td>
<td>16.79</td>
</tr>
<tr></tr>
<tr align="center">
<td>Mapping</td>
<td>6,509</td>
<td>4.84</td>
<td>813</td>
<td>4.84</td>
<td>814</td>
<td>4.85</td>
</tr>
<tr></tr>
<tr align="center">
<td>Bookstore</td>
<td>3,821</td>
<td>2.84</td>
<td>478</td>
<td>2.85</td>
<td>478</td>
<td>2.85</td>
</tr>
<tr></tr>
<tr align="center">
<td><b>Total</b></td>
<td><b>134,368</b></td>
<td><b>100</b></td>
<td><b>16,796</b></td>
<td><b>100</b></td>
<td><b>16,797</b></td>
<td><b>100</b></td>
</tr>
</tbody>
</table>
<table align="center">
<thead>
<tr align="center">
<th rowspan="3">Score</th>
<th colspan="2">Train</th>
<th colspan="2">Valid</th>
<th colspan="2">Test</th>
</tr>
<tr></tr>
<tr align="center">
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>1</td>
<td>110,417</td>
<td>82.18</td>
<td>13,801</td>
<td>82.17</td>
<td>13,804</td>
<td>82.18</td>
</tr>
<tr></tr>
<tr align="center">
<td>0</td>
<td>23,951</td>
<td>17.82</td>
<td>2,995</td>
<td>17.83</td>
<td>2,993</td>
<td>17.82</td>
</tr>
<tr></tr>
<tr align="center">
<td><b>Total</b></td>
<td><b>134,368</b></td>
<td><b>100</b></td>
<td><b>16,796</b></td>
<td><b>100</b></td>
<td><b>16,797</b></td>
<td><b>100</b></td>
</tr>
</tbody>
</table>
<p align = "justify">The distribution of reviews across the three sets based on their domains and scores for the SC task:</p>
<table align="center">
<thead>
<tr align="center">
<th rowspan="3">Domain</th>
<th colspan="2">Train</th>
<th colspan="2">Valid</th>
<th colspan="2">Test</th>
</tr>
<tr></tr>
<tr align="center">
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>Appstore</td>
<td>106,058</td>
<td>75.69</td>
<td>13,258</td>
<td>75.69</td>
<td>13,257</td>
<td>75.69</td>
</tr>
<tr></tr>
<tr align="center">
<td>Market</td>
<td>23,278</td>
<td>16.61</td>
<td>2,909</td>
<td>16.61</td>
<td>2,910</td>
<td>16.61</td>
</tr>
<tr></tr>
<tr align="center">
<td>Mapping</td>
<td>6,794</td>
<td>4.85</td>
<td>849</td>
<td>4.85</td>
<td>849</td>
<td>4.85</td>
</tr>
<tr></tr>
<tr align="center">
<td>Bookstore</td>
<td>3,996</td>
<td>2.85</td>
<td>500</td>
<td>2.85</td>
<td>500</td>
<td>2.85</td>
</tr>
<tr></tr>
<tr align="center">
<td><b>Total</b></td>
<td><b>140,126</b></td>
<td><b>100</b></td>
<td><b>17,516</b></td>
<td><b>100</b></td>
<td><b>17,516</b></td>
<td><b>100</b></td>
</tr>
</tbody>
</table>
<table align="center">
<thead>
<tr align="center">
<th rowspan="3">Score</th>
<th colspan="2">Train</th>
<th colspan="2">Valid</th>
<th colspan="2">Test</th>
</tr>
<tr></tr>
<tr align="center">
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
<th>#</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>5</td>
<td>101,302</td>
<td>72.29</td>
<td>12,663</td>
<td>72.29</td>
<td>12,663</td>
<td>72.29</td>
</tr>
<tr></tr>
<tr align="center">
<td>1</td>
<td>20,031</td>
<td>14.29</td>
<td>2,504</td>
<td>14.30</td>
<td>2,504</td>
<td>14.30</td>
</tr>
<tr></tr>
<tr align="center">
<td>4</td>
<td>9,115</td>
<td>6.50</td>
<td>1,140</td>
<td>6.51</td>
<td>1,139</td>
<td>6.50</td>
</tr>
<tr></tr>
<tr align="center">
<td>3</td>
<td>5,758</td>
<td>4.11</td>
<td>719</td>
<td>4.10</td>
<td>720</td>
<td>4.11</td>
</tr>
<tr></tr>
<tr align="center">
<td>2</td>
<td>3,920</td>
<td>2.80</td>
<td>490</td>
<td>2.80</td>
<td>490</td>
<td>2.80</td>
</tr>
<tr></tr>
<tr align="center">
<td><b>Total</b></td>
<td><b>140,126</b></td>
<td><b>100</b></td>
<td><b>17,516</b></td>
<td><b>100</b></td>
<td><b>17,516</b></td>
<td><b>100</b></td>
</tr>
</tbody>
</table>
<h2 align = "center">How to Use</h2>
<p align = "justify">To load the subsets of KazSAnDRA separately:</p>
```python
from datasets import load_dataset

full = load_dataset("issai/kazsandra", "full")                   # all 180,064 reviews, scores 1-5
pc = load_dataset("issai/kazsandra", "polarity_classification")  # binary polarity task
sc = load_dataset("issai/kazsandra", "score_classification")     # 5-way score task
```
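<p align = "justify">Once loaded, integer labels map to their human-readable names via each config's <i>ClassLabel</i> feature; the same mapping can be sketched offline using the name lists declared in the card metadata above:</p>

```python
# Label name lists as declared in the dataset card metadata
PC_NAMES = ["negative", "positive"]
SC_NAMES = ["1 star", "2 stars", "3 stars", "4 stars", "5 stars"]

def label_name(label, names):
    """Return the human-readable name for an integer class label."""
    return names[label]

print(label_name(1, PC_NAMES))  # positive
print(label_name(4, SC_NAMES))  # 5 stars
```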