---
task_categories:
- text-classification
language:
- en
---
# Movie Review Data

Original source: sentence polarity dataset v1.0 http://www.cs.cornell.edu/people/pabo/movie-review-data/

## Original README

=======

Introduction

This README v1.0 (June, 2005) for the v1.0 sentence polarity dataset comes
from the URL
http://www.cs.cornell.edu/people/pabo/movie-review-data .

=======

Citation Info

This data was first used in Bo Pang and Lillian Lee,
``Seeing stars: Exploiting class relationships for sentiment categorization
with respect to rating scales.'', Proceedings of the ACL, 2005.

```bibtex
@InProceedings{Pang+Lee:05a,
  author =    {Bo Pang and Lillian Lee},
  title =     {Seeing stars: Exploiting class relationships for sentiment
               categorization with respect to rating scales},
  booktitle = {Proceedings of the ACL},
  year =      2005
}
```

=======

Data Format Summary

- rt-polaritydata.tar.gz: contains this readme and two data files that
  were used in the experiments described in Pang/Lee ACL 2005.

  Specifically:
  * rt-polarity.pos contains 5331 positive snippets
  * rt-polarity.neg contains 5331 negative snippets

  Each line in these two files corresponds to a single snippet (usually
  containing roughly one single sentence); all snippets are down-cased.
  The snippets were labeled automatically, as described below (see
  section "Label Decision").

  Note: The original source files from which the data in
  rt-polaritydata.tar.gz was derived can be found in the subjective
  part (Rotten Tomatoes pages) of subjectivity_html.tar.gz (released
  with subjectivity dataset v1.0).

=======

Label Decision

We assumed snippets (from Rotten Tomatoes webpages) for reviews marked with
``fresh'' are positive, and those for reviews marked with ``rotten'' are
negative.
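The format described above (one down-cased snippet per line, latin_1-encoded) means the raw files can be loaded with a plain line loop once the encoding is declared. A minimal sketch, using made-up in-memory stand-in bytes rather than the real `rt-polarity.pos`:

```python
# Stand-in bytes for rt-polarity.pos: one down-cased snippet per line,
# latin_1-encoded (these example snippets are invented, not from the data).
raw_bytes = (
    "a gripping , impeccably acted thriller .\n"
    "full of charming clichés .\n"
).encode("latin_1")

# Decode with the declared encoding, then split into per-line snippets;
# latin_1 maps every byte 0x00-0xff to the matching code point, so
# accented characters such as "é" round-trip correctly.
snippets = [line.strip() for line in raw_bytes.decode("latin_1").splitlines()]
print(snippets)
```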
## Preprocessing

To produce CSV files with `text` and `label` fields, we use the following script.
```python
import csv
import random

# NOTE: The original files are "latin_1"-encoded; we re-encode them as "utf8".
with open("rt-polarity.pos", encoding="latin_1") as f:
    texts_pos = [line.strip() for line in f]
with open("rt-polarity.neg", encoding="latin_1") as f:
    texts_neg = [line.strip() for line in f]

rows_pos = [{"text": text, "label": 1} for text in texts_pos]
rows_neg = [{"text": text, "label": 0} for text in texts_neg]

# NOTE: For fair evaluation, we split the data into train and test sets. For
# researchers who want a different setting, we also provide the whole set.
# NOTE: We follow the split setting of the LM-BFF paper.
rows_whole = rows_pos + rows_neg
random.Random(42).shuffle(rows_whole)
rows_test, rows_train = rows_whole[:2000], rows_whole[2000:]

with open("whole.csv", "w", encoding="utf8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label"])
    writer.writerows(rows_whole)
with open("train.csv", "w", encoding="utf8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label"])
    writer.writerows(rows_train)
with open("test.csv", "w", encoding="utf8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label"])
    writer.writerows(rows_test)
```
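Because the script uses `DictWriter` without a `writeheader()` call, the generated CSV files have no header row, so a reader has to supply the field names itself. A minimal sketch of reading that format back, using two made-up in-memory rows rather than the real `train.csv`:

```python
import csv
import io

# Two rows in the same headerless "text,label" format the script above
# writes out (the sample snippets are invented, not from the dataset).
csv_text = '"a quietly moving drama .",1\n"a tedious , joyless slog .",0\n'

# csv.reader handles the quoted text field (which may contain commas);
# the label comes back as a string and must be cast to int by hand.
rows = [
    {"text": text, "label": int(label)}
    for text, label in csv.reader(io.StringIO(csv_text))
]
print(rows)
```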