parquet-converter committed
Commit 8afe655 · 1 Parent(s): 79fc59f

Update parquet files
.gitattributes ADDED
@@ -0,0 +1,4 @@
+ ATEC/nli_zh-train.parquet filter=lfs diff=lfs merge=lfs -text
+ BQ/nli_zh-train.parquet filter=lfs diff=lfs merge=lfs -text
+ LCQMC/nli_zh-train.parquet filter=lfs diff=lfs merge=lfs -text
+ PAWSX/nli_zh-train.parquet filter=lfs diff=lfs merge=lfs -text
ATEC/nli_zh-test.parquet ADDED
Binary file (991 kB).
 
ATEC/nli_zh-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ce863e4c7deb57e54437ef0ea586d277865ab532d355b066a1c4e6e59e110df
+ size 3091047
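The three `+` lines above are a Git LFS pointer, not the parquet data itself: a small text stub that records the pointer spec version, the SHA-256 object id, and the byte size of the real file. A minimal sketch of reading such a pointer into a dict (the helper name and the example path are illustrative, not part of this commit):

```python
def parse_lfs_pointer(path: str) -> dict:
    """Read a Git LFS pointer file into its key/value fields."""
    fields = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            if key:
                fields[key] = value
    return fields

# Example, on a checkout where the LFS file is still a pointer:
# parse_lfs_pointer("ATEC/nli_zh-train.parquet")
# -> {"version": "https://git-lfs.github.com/spec/v1",
#     "oid": "sha256:8ce863e4...", "size": "3091047"}
```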
ATEC/nli_zh-validation.parquet ADDED
Binary file (991 kB).
 
BQ/nli_zh-test.parquet ADDED
Binary file (438 kB).
 
BQ/nli_zh-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d82a3b9b984714d135d0cf61901fe64efe99d0af210e8e63c41aa8ec4315f03d
+ size 4707733
BQ/nli_zh-validation.parquet ADDED
Binary file (443 kB).
 
LCQMC/nli_zh-test.parquet ADDED
Binary file (603 kB).
 
LCQMC/nli_zh-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac462a9a6bca451463d91640b8160527350617f842a9c05e1eb2c2845987098f
+ size 12954615
LCQMC/nli_zh-validation.parquet ADDED
Binary file (527 kB).
 
PAWSX/nli_zh-test.parquet ADDED
Binary file (333 kB).
 
PAWSX/nli_zh-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9bc99b64f2b0d41188eaac27eae04c95558b6a17e1e2140f8cd32ae78a33867
+ size 8212499
PAWSX/nli_zh-validation.parquet ADDED
Binary file (335 kB).
 
README.md DELETED
@@ -1,198 +0,0 @@
- ---
- annotations_creators:
- - shibing624
- language_creators:
- - shibing624
- language:
- - zh
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 100K<n<20M
- source_datasets:
- - https://github.com/shibing624/text2vec
- - https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC
- - http://icrc.hitsz.edu.cn/info/1037/1162.htm
- - http://icrc.hitsz.edu.cn/Article/show/171.html
- - https://arxiv.org/abs/1908.11828
- - https://github.com/pluto-junzeng/CNSD
- task_categories:
- - text-classification
- task_ids:
- - natural-language-inference
- - semantic-similarity-scoring
- - text-scoring
- pretty_name: Chinese Natural Language Inference (NLI_zh)
- ---
- # Dataset Card for NLI_zh
- ## Table of Contents
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [Contributions](#contributions)
- ## Dataset Description
- - **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec)
- - **Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec) (located on the homepage)
- - **Size of downloaded dataset files:** 16 MB
- - **Total amount of disk used:** 42 MB
- ### Dataset Summary
-
- A collection of common Chinese semantic matching datasets covering five tasks: [ATEC](https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC), [BQ](http://icrc.hitsz.edu.cn/info/1037/1162.htm), [LCQMC](http://icrc.hitsz.edu.cn/Article/show/171.html), [PAWSX](https://arxiv.org/abs/1908.11828), and [STS-B](https://github.com/pluto-junzeng/CNSD).
-
- Data sources:
-
- - ATEC: https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC
- - BQ: http://icrc.hitsz.edu.cn/info/1037/1162.htm
- - LCQMC: http://icrc.hitsz.edu.cn/Article/show/171.html
- - PAWSX: https://arxiv.org/abs/1908.11828
- - STS-B: https://github.com/pluto-junzeng/CNSD
-
-
- ### Supported Tasks and Leaderboards
-
- Supported tasks: Chinese text matching, text similarity scoring, and related tasks.
-
- Results on Chinese matching tasks rarely appear in top-conference papers, so I list results from models I trained myself:
-
- **Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec)
-
- ### Languages
-
- All datasets are in Simplified Chinese.
-
- ## Dataset Structure
- ### Data Instances
- An example from the 'train' split looks as follows.
- ```
- {
-   "sentence1": "刘诗诗杨幂谁漂亮",
-   "sentence2": "刘诗诗和杨幂谁漂亮",
-   "label": 1,
- }
- {
-   "sentence1": "汇理财怎么样",
-   "sentence2": "怎么样去理财",
-   "label": 0,
- }
- ```
-
- ### Data Fields
- The data fields are the same among all splits.
-
- - `sentence1`: a `string` feature.
- - `sentence2`: a `string` feature.
- - `label`: a classification label: `1` means the pair is similar, `0` means dissimilar.
-
- ### Data Splits
-
- #### ATEC
-
- ```shell
- $ wc -l ATEC/*
-   20000 ATEC/ATEC.test.data
-   62477 ATEC/ATEC.train.data
-   20000 ATEC/ATEC.valid.data
-  102477 total
- ```
-
- #### BQ
-
- ```shell
- $ wc -l BQ/*
-   10000 BQ/BQ.test.data
-  100000 BQ/BQ.train.data
-   10000 BQ/BQ.valid.data
-  120000 total
- ```
-
- #### LCQMC
-
- ```shell
- $ wc -l LCQMC/*
-   12500 LCQMC/LCQMC.test.data
-  238766 LCQMC/LCQMC.train.data
-    8802 LCQMC/LCQMC.valid.data
-  260068 total
- ```
-
- #### PAWSX
-
- ```shell
- $ wc -l PAWSX/*
-    2000 PAWSX/PAWSX.test.data
-   49401 PAWSX/PAWSX.train.data
-    2000 PAWSX/PAWSX.valid.data
-   53401 total
- ```
-
- #### STS-B
-
- ```shell
- $ wc -l STS-B/*
-   1361 STS-B/STS-B.test.data
-   5231 STS-B/STS-B.train.data
-   1458 STS-B/STS-B.valid.data
-   8050 total
- ```
-
- ## Dataset Creation
- ### Curation Rationale
- As a collection of Chinese NLI (natural language inference) datasets, this corpus is uploaded to Hugging Face Datasets here for everyone's convenience.
- ### Source Data
- #### Initial Data Collection and Normalization
- #### Who are the source language producers?
- The copyright of each dataset belongs to its original authors; please respect the original licenses when using the data.
-
- BQ: Jing Chen, Qingcai Chen, Xin Liu, Haijun Yang, Daohe Lu, Buzhou Tang. The BQ Corpus: A Large-scale Domain-specific Chinese Corpus For Sentence Semantic Equivalence Identification. EMNLP 2018.
- ### Annotations
- #### Annotation process
-
- #### Who are the annotators?
- The original authors.
-
- ### Personal and Sensitive Information
-
- ## Considerations for Using the Data
- ### Social Impact of Dataset
- This dataset was developed as a benchmark for evaluating representational systems for text, especially those induced by representation learning methods, on the task of predicting truth conditions in a given context.
-
- Systems that succeed at this task may also be better at modeling semantic representations.
- ### Discussion of Biases
- ### Other Known Limitations
- ## Additional Information
- ### Dataset Curators
-
- - Su Jianlin (苏剑林) organized the file naming.
- - I uploaded the data to Hugging Face Datasets.
-
- ### Licensing Information
-
- For academic research use.
-
- The BQ corpus is free to the public for academic research.
-
-
- ### Contributions
-
- Thanks to [@shibing624](https://github.com/shibing624) for adding this dataset.
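With this commit the dataset card above is removed and the configurations it documents are served as parquet files, which the `datasets` library can load directly. A minimal sketch, assuming the dataset lives under the repository id `shibing624/nli_zh` (the id is an assumption, not stated in this diff):

```python
from datasets import load_dataset

# Load one configuration of the Chinese NLI corpus.
# The repo id "shibing624/nli_zh" is assumed; adjust to the actual repository.
dataset = load_dataset("shibing624/nli_zh", "ATEC")

# Each split yields rows with sentence1/sentence2 strings and a 0/1 label.
print(dataset["train"][0])
```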
STS-B/nli_zh-test.parquet ADDED
Binary file (109 kB).
 
STS-B/nli_zh-train.parquet ADDED
Binary file (430 kB).
 
STS-B/nli_zh-validation.parquet ADDED
Binary file (143 kB).
 
nli_zh.py DELETED
@@ -1,143 +0,0 @@
- # -*- coding: utf-8 -*-
- """
- @author: XuMing([email protected])
- @description:
- """
-
- """Natural Language Inference (NLI) Chinese Corpus.(nli_zh)"""
-
- import os
-
- import datasets
-
- # Plain-text data in the form (sentence1, sentence2, label): a collection of
- # common Chinese semantic matching datasets covering five tasks (ATEC, BQ,
- # LCQMC, PAWSX, STS-B).
- _DESCRIPTION = """纯文本数据,格式:(sentence1, sentence2, label)。常见中文语义匹配数据集,包含ATEC、BQ、LCQMC、PAWSX、STS-B共5个任务。"""
-
- ATEC_HOME = "https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC"
- BQ_HOME = "http://icrc.hitsz.edu.cn/info/1037/1162.htm"
- LCQMC_HOME = "http://icrc.hitsz.edu.cn/Article/show/171.html"
- PAWSX_HOME = "https://arxiv.org/abs/1908.11828"
- STSB_HOME = "https://github.com/pluto-junzeng/CNSD"
-
- _CITATION = "https://github.com/shibing624/text2vec"
-
- _DATA_URL = "https://github.com/shibing624/text2vec/releases/download/1.1.2/senteval_cn.zip"
-
-
- class NliZhConfig(datasets.BuilderConfig):
-     """BuilderConfig for NLI_zh"""
-
-     def __init__(self, features, data_url, citation, url, label_classes=(0, 1), **kwargs):
-         """BuilderConfig for NLI_zh
-         Args:
-             features: `list[string]`, list of the features that will appear in the
-                 feature dict. Should not include "label".
-             data_url: `string`, url to download the zip file from.
-             citation: `string`, citation for the data set.
-             url: `string`, url for information about the data set.
-             label_classes: `list[int]`, 1 for similar pairs, 0 otherwise.
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super().__init__(version=datasets.Version("1.0.0"), **kwargs)
-         self.features = features
-         self.label_classes = label_classes
-         self.data_url = data_url
-         self.citation = citation
-         self.url = url
-
-
- class NliZh(datasets.GeneratorBasedBuilder):
-     """The Natural Language Inference Chinese (NLI_zh) Corpus."""
-
-     BUILDER_CONFIGS = [
-         NliZhConfig(
-             name="ATEC",
-             description=_DESCRIPTION,
-             features=["sentence1", "sentence2"],
-             data_url=_DATA_URL,
-             citation=_CITATION,
-             url=ATEC_HOME,
-         ),
-         NliZhConfig(
-             name="BQ",
-             description=_DESCRIPTION,
-             features=["sentence1", "sentence2"],
-             data_url=_DATA_URL,
-             citation=_CITATION,
-             url=BQ_HOME,
-         ),
-         NliZhConfig(
-             name="LCQMC",
-             description=_DESCRIPTION,
-             features=["sentence1", "sentence2"],
-             data_url=_DATA_URL,
-             citation=_CITATION,
-             url=LCQMC_HOME,
-         ),
-         NliZhConfig(
-             name="PAWSX",
-             description=_DESCRIPTION,
-             features=["sentence1", "sentence2"],
-             data_url=_DATA_URL,
-             citation=_CITATION,
-             url=PAWSX_HOME,
-         ),
-         NliZhConfig(
-             name="STS-B",
-             description=_DESCRIPTION,
-             features=["sentence1", "sentence2"],
-             data_url=_DATA_URL,
-             citation=_CITATION,
-             url=STSB_HOME,
-         ),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=self.config.description,
-             features=datasets.Features(
-                 {
-                     "sentence1": datasets.Value("string"),
-                     "sentence2": datasets.Value("string"),
-                     "label": datasets.Value("int32"),
-                     # "idx": datasets.Value("int32"),
-                 }
-             ),
-             homepage=self.config.url,
-             citation=self.config.citation,
-         )
-
-     def _split_generators(self, dl_manager):
-         dl_dir = dl_manager.download_and_extract(self.config.data_url) or ""
-         dl_dir = os.path.join(dl_dir, f"senteval_cn/{self.config.name}")
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "filepath": os.path.join(dl_dir, f"{self.config.name}.train.data"),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "filepath": os.path.join(dl_dir, f"{self.config.name}.valid.data"),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "filepath": os.path.join(dl_dir, f"{self.config.name}.test.data"),
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath):
-         """This function returns the examples in the raw (text) form."""
-         with open(filepath, 'r', encoding="utf-8") as f:
-             for idx, row in enumerate(f):
-                 # Each row is "sentence1<TAB>sentence2<TAB>label".
-                 terms = row.strip().split('\t')
-                 yield idx, {
-                     "sentence1": terms[0],
-                     "sentence2": terms[1],
-                     "label": int(terms[2]),
-                 }
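With the loader script deleted, the splits now come from the parquet files added in this commit, which can also be read without the `datasets` library. A minimal sketch with pandas, assuming a local checkout where git-lfs has materialized the files and `pyarrow` is installed:

```python
import pandas as pd

# Read a converted split straight from the repository checkout.
df = pd.read_parquet("ATEC/nli_zh-train.parquet")
print(df.columns.tolist())  # expected: ['sentence1', 'sentence2', 'label']
print(len(df))
```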