shibing624 committed
Commit f70f39d
1 Parent(s): 0a02f7a

Update README.md

Files changed (1)
  1. README.md +131 -0
README.md CHANGED
@@ -1,3 +1,134 @@
---
annotations_creators:
- shibing624
language_creators:
- shibing624
language:
- zh
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- https://huggingface.co/datasets
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
paperswithcode_id: snli
pretty_name: Stanford Natural Language Inference
---
# Dataset Card for SNLI_zh

## Dataset Description
- **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec)
- **Dataset:** [zh NLI](https://huggingface.co/datasets/shibing624/nli-zh-all)
- **Size of downloaded dataset files:** 4.7 GB
- **Total amount of disk used:** 4.7 GB

### Dataset Summary

A collection of Chinese natural language inference (NLI) datasets (nli-zh-all).

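The collection can be consumed directly from the Hub. Below is a minimal loading sketch (not part of the original card): the file name and the `nli-zh-all/` path are assumptions based on the listing in the Data Splits section, and the generic `datasets` JSON loader is used rather than any dataset-specific script.

```python
# Minimal sketch: load one of the merged JSONL files with the Hugging Face
# `datasets` library. The file path below is an assumption taken from the
# listing in the "Data Splits" section of this card.
from datasets import load_dataset

url = (
    "https://huggingface.co/datasets/shibing624/nli-zh-all/"
    "resolve/main/nli-zh-all/snli_zh-train.jsonl"
)
dataset = load_dataset("json", data_files=url, split="train")
print(dataset[0])  # e.g. {'text1': '...', 'text2': '...', 'label': 1}
```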
### Supported Tasks and Leaderboards

Supported Tasks: Chinese text matching, text semantic similarity scoring, and related tasks; a minimal usage sketch follows the leaderboard link below.

Results for Chinese matching tasks rarely appear in top-conference papers, so the leaderboard below lists results from my own training runs:

**Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec)

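As an illustration of the text-matching task, the sketch below scores a sentence pair from this card. The choice of the `sentence-transformers` library and the `shibing624/text2vec-base-chinese` checkpoint is an assumption made for the example, not something prescribed by this dataset.

```python
# Hedged example: cosine similarity between a paraphrase pair taken from the
# "Data Instances" section. Library and model choices are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("shibing624/text2vec-base-chinese")
emb = model.encode(["借款后多长时间给打电话", "借款后多久打电话啊"])
print(util.cos_sim(emb[0], emb[1]))  # expected: a high similarity score
```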
### Languages

All texts in the dataset are Simplified Chinese.

## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{"text1":"借款后多长时间给打电话","text2":"借款后多久打电话啊","label":1}
{"text1":"没看到微粒贷","text2":"我借那么久也没有提升啊","label":0}
```

### Data Fields
The data fields are the same among all splits.

- `text1`: a `string` feature.
- `text2`: a `string` feature.
- `label`: a classification label, with possible values entailment (1) and contradiction (0). A small parsing example follows this list.

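The sketch below shows how a single JSONL record can be read and interpreted; the file path is a hypothetical example, and the integer-to-name mapping follows the field description above.

```python
# Read one record from a local copy of the data and map its integer label to a
# name. The path is an illustrative example, not a required layout.
import json

LABEL_NAMES = {1: "entailment", 0: "contradiction"}

with open("nli-zh-all/snli_zh-train.jsonl", encoding="utf-8") as f:
    record = json.loads(next(f))

print(record["text1"], record["text2"], LABEL_NAMES[record["label"]])
```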
### Data Splits

File sizes after removing records that are None or have len(text) < 1:
```shell
$ wc -l nli-zh-all/*
   48818 nli-zh-all/alpaca_gpt4-train.jsonl
    5000 nli-zh-all/amazon_reviews-train.jsonl
  519255 nli-zh-all/belle-train.jsonl
   16000 nli-zh-all/cblue_chip_sts-train.jsonl
  549326 nli-zh-all/chatmed_consult-train.jsonl
   10142 nli-zh-all/cmrc2018-train.jsonl
  395927 nli-zh-all/csl-train.jsonl
   50000 nli-zh-all/dureader_robust-train.jsonl
  709761 nli-zh-all/firefly-train.jsonl
    9568 nli-zh-all/mlqa-train.jsonl
  455875 nli-zh-all/nli_zh-train.jsonl
   50486 nli-zh-all/ocnli-train.jsonl
 2678694 nli-zh-all/simclue-train.jsonl
  419402 nli-zh-all/snli_zh-train.jsonl
    3024 nli-zh-all/webqa-train.jsonl
 1213780 nli-zh-all/wiki_atomic_edits-train.jsonl
   93404 nli-zh-all/xlsum-train.jsonl
 1006218 nli-zh-all/zhihu_kol-train.jsonl
 8234680 total
```
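A sketch of an equivalent filter is shown below; it is an interpretation of the stated "remove None and len(text) < 1" rule, not the author's actual preprocessing script.

```python
# Drop records whose texts are missing (None) or shorter than one character,
# mirroring the cleaning rule stated above.
import json

def clean_jsonl(path):
    """Yield records from a JSONL file, skipping empty or missing texts."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            t1, t2 = record.get("text1"), record.get("text2")
            if not t1 or not t2:  # None or empty string
                continue
            yield record

# Example (path is illustrative): count the surviving records in one file.
print(sum(1 for _ in clean_jsonl("nli-zh-all/ocnli-train.jsonl")))
```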

### Data Length

![len](https://huggingface.co/datasets/shibing624/nli-zh-all/resolve/main/nli-zh-all-len.png)

## Dataset Creation
### Curation Rationale
Inspired by [m3e-base](https://huggingface.co/moka-ai/m3e-base#M3E%E6%95%B0%E6%8D%AE%E9%9B%86), this collection merges high-quality Chinese NLI (natural language inference) datasets and uploads them to the Hugging Face Hub datasets so they are easy to use.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
The copyright of each source dataset belongs to its original authors; please respect the original licenses when using the datasets.

- SNLI:

  @inproceedings{snli:emnlp2015,
    Author = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher and Manning, Christopher D.},
    Booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
    Publisher = {Association for Computational Linguistics},
    Title = {A large annotated corpus for learning natural language inference},
    Year = {2015}
  }

#### Who are the annotators?
The original dataset authors.

### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating representational systems for text, especially those induced by representation learning methods, on the task of predicting truth conditions in a given context.

Systems that succeed at such a task may be more successful at modeling semantic representations.

### Licensing Information

For academic research use.

### Contributions

[shibing624](https://github.com/shibing624) added this dataset.