shibing624 committed
Commit fb46600 · 1 Parent(s): 64973e3

Update README.md

Files changed (1): README.md (+22 -9)
README.md CHANGED
@@ -57,23 +57,27 @@ pretty_name: Stanford Natural Language Inference
 - **Size of downloaded dataset files:** 16 MB
 - **Total amount of disk used:** 42 MB
 ### Dataset Summary
+
 A collection of common Chinese semantic matching datasets, covering five tasks: [ATEC](https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC), [BQ](http://icrc.hitsz.edu.cn/info/1037/1162.htm), [LCQMC](http://icrc.hitsz.edu.cn/Article/show/171.html), [PAWSX](https://arxiv.org/abs/1908.11828), and [STS-B](https://github.com/pluto-junzeng/CNSD).
 Each dataset can be downloaded from its own link, or everything can be fetched from [Baidu Netdisk (extraction code: qkt6)](https://pan.baidu.com/s/1d6jSiU1wHQAEMWJi7JJWCQ); there, the senteval_cn directory collects the evaluation datasets and senteval_cn.zip is an archive of that directory, so downloading either one is enough.
 
-- ATEC: https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC
+- ATEC: https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC (re-split into train, valid, and test)
 - BQ: http://icrc.hitsz.edu.cn/info/1037/1162.htm
 - LCQMC: http://icrc.hitsz.edu.cn/Article/show/171.html
-- PAWSX: https://arxiv.org/abs/1908.11828
+- PAWSX: https://arxiv.org/abs/1908.11828 (only the Chinese portion is kept)
 - STS-B: https://github.com/pluto-junzeng/CNSD
 
 
 ### Supported Tasks and Leaderboards
-- Supported Tasks: Chinese text matching, text similarity computation, and related tasks.
+
+Supported Tasks: Chinese text matching, text similarity computation, and related tasks.
 
 Results on Chinese matching tasks rarely appear in top-conference papers at the moment, so I list results from my own training:
 
-- **Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec)
+**Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec)
+
 ### Languages
+
 All datasets are Simplified Chinese text.
 
 ## Dataset Structure
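
For orientation, a minimal sketch of loading one of the five subsets with the `datasets` library. The Hub repository id `shibing624/nli_zh`, the subset name `STS-B`, and the split names are assumptions here; check the dataset page for the exact identifiers.

```python
# Minimal sketch: load one subset of the Chinese matching collection.
# The repo id "shibing624/nli_zh", the subset name "STS-B", and the split
# names are assumptions; adjust them to what the dataset page actually lists.
from datasets import load_dataset

dataset = load_dataset("shibing624/nli_zh", "STS-B")
print(dataset)              # expected splits: train / validation / test
print(dataset["train"][0])  # {'sentence1': ..., 'sentence2': ..., 'label': ...}
```
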
@@ -98,8 +102,11 @@ The data fields are the same among all splits.
 - `sentence1`: a `string` feature.
 - `sentence2`: a `string` feature.
 - `label`: a classification label, with possible values including `similarity` (1), `dissimilarity` (0).
+
 ### Data Splits
+
 #### ATEC
+
 ```shell
 $ wc -l ATEC/*
 20000 ATEC/ATEC.test.data
@@ -107,7 +114,9 @@ $ wc -l ATEC/*
 20000 ATEC/ATEC.valid.data
 102477 total
 ```
+
 #### BQ
+
 ```shell
 $ wc -l BQ/*
 10000 BQ/BQ.test.data
@@ -115,7 +124,9 @@ $ wc -l BQ/*
 10000 BQ/BQ.valid.data
 120000 total
 ```
+
 #### LCQMC
+
 ```shell
 $ wc -l LCQMC/*
 12500 LCQMC/LCQMC.test.data
@@ -125,6 +136,7 @@ $ wc -l LCQMC/*
 ```
 
 #### PAWSX
+
 ```shell
 $ wc -l PAWSX/*
 2000 PAWSX/PAWSX.test.data
@@ -134,6 +146,7 @@ $ wc -l PAWSX/*
 ```
 
 #### STS-B
+
 ```shell
 $ wc -l STS-B/*
 1361 STS-B/STS-B.test.data
@@ -147,10 +160,6 @@ $ wc -l STS-B/*
 As a Chinese NLI (natural language inference) dataset collection, it is uploaded here to Hugging Face datasets for everyone's convenience.
 ### Source Data
 #### Initial Data Collection and Normalization
-The hypotheses were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.
-Crowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions, one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).
-The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) and the [VisualGenome corpus](https://visualgenome.org/). The photo captions used to prompt the data creation were collected on Flickr by [Young et al. (2014)](https://www.aclweb.org/anthology/Q14-1006.pdf), who extended the Flickr 8K dataset developed by [Hodosh et al. (2013)](https://www.jair.org/index.php/jair/article/view/10833). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in [MS-COCO](https://cocodataset.org/#home) and [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/).
-The premises from the Flickr 30k corpus corrected for spelling using the Linux spell checker and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.
 #### Who are the source language producers?
 The copyright of each dataset belongs to its original authors; please respect the original licenses when using the data.
 
@@ -172,13 +181,17 @@ Systems that are successful at such a task may be more successful in modeling se
 ### Other Known Limitations
 ## Additional Information
 ### Dataset Curators
+
 - 苏剑林 (Su Jianlin) standardized the file names
 - I uploaded the data to Hugging Face datasets
+
 ### Licensing Information
+
 For academic research use.
 
 The BQ corpus is free to the public for academic research.
 
-```
+
 ### Contributions
+
 Thanks to [@shibing624](https://github.com/shibing624) for adding this dataset.
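
The split listings in the Data Splits section count lines in the raw `*.data` files. As a convenience, here is a minimal sketch of reading one of those files into `(sentence1, sentence2, label)` records; the tab-separated three-column layout is an assumption, so verify it against the actual files before relying on this.

```python
# Minimal sketch: read one raw split file into (sentence1, sentence2, label)
# records. The tab-separated three-column layout is an assumption; check a
# file such as ATEC/ATEC.train.data before relying on it.
def read_pairs(path):
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 3:
                continue  # skip malformed or header lines
            sentence1, sentence2, label = parts
            records.append({"sentence1": sentence1,
                            "sentence2": sentence2,
                            "label": int(label)})
    return records

pairs = read_pairs("ATEC/ATEC.train.data")
print(len(pairs), pairs[0])
```
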