Matthew Franglen committed
Commit
4244029
1 Parent(s): 27634ac

Add description of the dataset

Files changed (1):
  1. README.md +111 -0
README.md CHANGED
@@ -5,6 +5,7 @@ language:
  arxiv:
  - 2107.12214
  - 2010.02609
+ - 1911.01616
  size_categories:
  - 1K<n<10K
  task_categories:
@@ -28,3 +29,113 @@ configs:
  - split: test
  path: "data/2014/laptop/aste/test.gz.parquet"
  ---
+
+ ## Dataset Description
+
+ ### Task Summary
+
+ Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting triplets of target entities, their associated sentiment, and the opinion spans explaining the reason for the sentiment.
+ The task was first proposed by Peng et al. (2020) in the paper [Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis (AAAI 2020)](https://arxiv.org/abs/1911.01616).
+
+ For example, given the sentence:
+
+ > The screen is very large and crystal clear with amazing colors and resolution .
+
+ the objective of the ASTE task is to predict the triplets:
+
+ > [('screen', 'large', 'Positive'), ('screen', 'clear', 'Positive'), ('colors', 'amazing', 'Positive'), ('resolution', 'amazing', 'Positive')]
+
+ where a triplet consists of (target, opinion, sentiment).
+
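+ As a minimal sketch, the triplets above could be represented in Python like this (the `Triplet` type is illustrative, not part of the dataset):
+
+ ```python
+ from typing import NamedTuple
+
+ class Triplet(NamedTuple):
+     target: str     # the aspect term, e.g. "screen"
+     opinion: str    # the opinion term, e.g. "large"
+     sentiment: str  # the sentiment class, e.g. "Positive"
+
+ triplets = [
+     Triplet("screen", "large", "Positive"),
+     Triplet("screen", "clear", "Positive"),
+     Triplet("colors", "amazing", "Positive"),
+     Triplet("resolution", "amazing", "Positive"),
+ ]
+ ```
+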
+ ### Dataset Summary
+
+ Sentiment analysis is increasingly viewed as a vital task from both an academic and a commercial standpoint.
+ The majority of current approaches, however, attempt to detect the overall polarity of a sentence, paragraph, or text span, regardless of the entities mentioned (e.g., laptops, restaurants) and their aspects (e.g., battery, screen; food, service).
+ By contrast, this task is concerned with aspect-based sentiment analysis (ABSA), where the goal is to identify the aspects of given target entities and the sentiment expressed towards each aspect.
+ The dataset consists of customer reviews with human-authored annotations identifying the mentioned aspects of the target entities and the sentiment polarity of each aspect.
+
+ ### Dataset Source
+
+ The ASTE dataset comes from the [xuuuluuu/SemEval-Triplet-data](https://github.com/xuuuluuu/SemEval-Triplet-data) repository.
+
+ It is based on the [SemEval 2014 Task 4](https://alt.qcri.org/semeval2014/task4/) dataset, with some preprocessing applied to the text.
+
+ ### Dataset Details
+
+ The train, validation and test splits come from the ASTE dataset.
+ The dataset has the following columns:
+
+ * index
+   The ASTE and SemEval datasets have multiple annotations per document, while this dataset has a single annotation per row.
+   All annotations for a given document share the same index, so the index can be used to group them (see the sketch after this list).
+
+ * text
+   The document that is annotated, either in the ASTE form or in the SemEval form (see below for details).
+
+ * aspect_start_index
+   The zero-based character index of the first letter of the aspect term.
+
+ * aspect_end_index
+   The zero-based character index of the last letter of the aspect term.
+
+ * aspect_term
+   The aspect term as it appears in the text.
+
+ * opinion_start_index
+   The zero-based character index of the first letter of the opinion term.
+
+ * opinion_end_index
+   The zero-based character index of the last letter of the opinion term.
+
+ * opinion_term
+   The opinion term as it appears in the text.
+
+ * sentiment
+   The sentiment class for the opinion about the aspect: one of _negative_, _neutral_ or _positive_.
+
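+ As a minimal sketch of how these columns fit together, the rows for a document can be grouped by index and turned back into (target, opinion, sentiment) triplets; note that the end indices are inclusive, so each slice needs a `+ 1`. The helper below is illustrative, assuming one split has been loaded with pandas:
+
+ ```python
+ import pandas as pd
+
+ # Assumes one split has been loaded into a dataframe, e.g.
+ # df = pd.read_parquet("data/2014/laptop/aste/test.gz.parquet")
+ def triplets_by_document(df: pd.DataFrame) -> dict:
+     triplets = {}
+     for index, rows in df.groupby("index"):
+         document = []
+         for row in rows.itertuples():
+             # The end indices point at the last letter of each term, so the
+             # slices need a +1 to recover the whole term from the text.
+             assert row.text[row.aspect_start_index : row.aspect_end_index + 1] == row.aspect_term
+             assert row.text[row.opinion_start_index : row.opinion_end_index + 1] == row.opinion_term
+             document.append((row.aspect_term, row.opinion_term, row.sentiment))
+         triplets[index] = document
+     return triplets
+ ```
+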
+ The ASTE dataset was produced by preprocessing the SemEval text.
+ This preprocessing fixed some of the spelling mistakes, for example:
+
+ > Keyboard good sized and wasy to use.
+
+ ("easy" misspelt as "wasy").
+
+ The preprocessing also tokenized the text and then rejoined the tokens with single spaces, for example:
+
+ > It 's just as fast with one program open as it is with sixteen open .
+
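+ Because the tokens are separated by single spaces, the token sequence can be recovered with a plain split (a minimal illustration, not a helper shipped with the dataset):
+
+ ```python
+ text = "It 's just as fast with one program open as it is with sixteen open ."
+ tokens = text.split(" ")
+ # ['It', "'s", 'just', 'as', 'fast', 'with', 'one', 'program',
+ #  'open', 'as', 'it', 'is', 'with', 'sixteen', 'open', '.']
+ ```
+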
+ Since the added whitespace can lead to unnatural text, I have provided two forms of the dataset.
+ Subsets that end with `aste-v2` have the preprocessed text, with spelling corrections and additional whitespace.
+ Subsets that end with `sem-eval` have the original SemEval text.
+
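+ A minimal loading sketch with the `datasets` library follows; the repository id and subset names here are placeholders, so check the configs in the YAML header above for the real names:
+
+ ```python
+ from datasets import load_dataset
+
+ # Placeholder repository and subset names: real subset names end with
+ # `aste-v2` (preprocessed text) or `sem-eval` (original text).
+ preprocessed = load_dataset("<user>/<dataset>", "2014-laptop-aste-v2", split="test")
+ original = load_dataset("<user>/<dataset>", "2014-laptop-sem-eval", split="test")
+ ```
+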
+ ### Citation Information
+
+ ```
+ @misc{xu2021learning,
+     title={Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction},
+     author={Lu Xu and Yew Ken Chia and Lidong Bing},
+     year={2021},
+     eprint={2107.12214},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+
+ @misc{xu2020positionaware,
+     title={Position-Aware Tagging for Aspect Sentiment Triplet Extraction},
+     author={Lu Xu and Hao Li and Wei Lu and Lidong Bing},
+     year={2020},
+     eprint={2010.02609},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+
+ @misc{peng2019knowing,
+     title={Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis},
+     author={Haiyun Peng and Lu Xu and Lidong Bing and Fei Huang and Wei Lu and Luo Si},
+     year={2019},
+     eprint={1911.01616},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```