parquet-converter committed
Commit c09f4ca
1 Parent(s): 5bd582f

Update parquet files
README.md DELETED
@@ -1,166 +0,0 @@
- ---
- annotations_creators:
- - other
- language_creators:
- - other
- language:
- - zh
- license:
- - mit
- multilinguality:
- - monolingual
- size_categories:
- - 10M<n<100M
- source_datasets:
- - original
- task_categories:
- - conversational
- task_ids:
- - dialogue-generation
- pretty_name: lccc
- tags:
- - dialogue-response-retrieval
- ---
-
- # Dataset Card for lccc
-
- ## Table of Contents
- - [Dataset Card for lccc](#dataset-card-for-lccc)
-   - [Table of Contents](#table-of-contents)
-   - [Dataset Description](#dataset-description)
-     - [Dataset Summary](#dataset-summary)
-     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-     - [Languages](#languages)
-   - [Dataset Structure](#dataset-structure)
-     - [Data Instances](#data-instances)
-     - [Data Fields](#data-fields)
-     - [Data Splits](#data-splits)
-   - [Dataset Creation](#dataset-creation)
-     - [Curation Rationale](#curation-rationale)
-     - [Source Data](#source-data)
-       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
-       - [Who are the source language producers?](#who-are-the-source-language-producers)
-     - [Annotations](#annotations)
-       - [Annotation process](#annotation-process)
-       - [Who are the annotators?](#who-are-the-annotators)
-     - [Personal and Sensitive Information](#personal-and-sensitive-information)
-   - [Considerations for Using the Data](#considerations-for-using-the-data)
-     - [Social Impact of Dataset](#social-impact-of-dataset)
-     - [Discussion of Biases](#discussion-of-biases)
-     - [Other Known Limitations](#other-known-limitations)
-   - [Additional Information](#additional-information)
-     - [Dataset Curators](#dataset-curators)
-     - [Licensing Information](#licensing-information)
-     - [Citation Information](#citation-information)
-
- ## Dataset Description
-
- - **Homepage:** https://github.com/thu-coai/CDial-GPT
- - **Repository:** https://github.com/thu-coai/CDial-GPT
- - **Paper:** https://arxiv.org/abs/2008.03946
-
- ### Dataset Summary
-
- LCCC (Large-scale Cleaned Chinese Conversation corpus) is a large Chinese dialogue corpus collected from Chinese social media. A rigorous data-cleaning pipeline was designed to ensure the quality of the corpus; it combines a set of hand-crafted rules with several classifier-based filters built with machine-learning methods. Noise such as offensive or sensitive words, special symbols, emojis, grammatically incorrect sentences, and incoherent conversations is filtered out.
-
- ### Supported Tasks and Leaderboards
-
- - dialogue-generation: the dataset can be used to train a model for generating dialogue responses.
- - response-retrieval: the dataset can be used to train a reranker for a retrieval-based dialogue model.
-
- ### Languages
-
- All dialogues in LCCC are in Chinese.
-
- ## Dataset Structure
-
- ### Data Instances
-
- ["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅", "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !", "不会 的 就是 好 油腻"]
-
- ### Data Fields
-
- Each line is a list of utterances that constitute a dialogue.
- Note that the LCCC dataset provided on our original GitHub page is in json format;
- here we provide LCCC in jsonl format.
-
- ### Data Splits
-
- We do not provide an official split for LCCC-large,
- but we provide one for LCCC-base:
-
- | train | valid | test |
- | :---: | :---: | :---: |
- | 6,820,506 | 20,000 | 10,000 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [Needs More Information]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [Needs More Information]
-
- #### Who are the source language producers?
-
- [Needs More Information]
-
- ### Annotations
-
- #### Annotation process
-
- [Needs More Information]
-
- #### Who are the annotators?
-
- [Needs More Information]
-
- ### Personal and Sensitive Information
-
- [Needs More Information]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [Needs More Information]
-
- ### Discussion of Biases
-
- [Needs More Information]
-
- ### Other Known Limitations
-
- [Needs More Information]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [Needs More Information]
-
- ### Licensing Information
-
- LCCC is released under the MIT License.
-
- ### Citation Information
-
- Please cite the following paper if you find this dataset useful:
-
- ```bibtex
- @inproceedings{wang2020chinese,
-   title={A Large-Scale Chinese Short-Text Conversation Dataset},
-   author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
-   booktitle={NLPCC},
-   year={2020},
-   url={https://arxiv.org/abs/2008.03946}
- }
- ```
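
After this conversion, both configurations described in the deleted card load directly from the parquet shards below. A minimal sketch, assuming the repository id `silver/lccc` (taken from the URLs in this repo) and the `dialog` feature recorded in the deleted `dataset_infos.json`:

```python
# Minimal sketch: load the converted dataset with the `datasets` library.
# Assumptions: the repo id is "silver/lccc" and each example has a "dialog"
# feature holding a list of utterance strings (per the deleted metadata).
from datasets import load_dataset

base = load_dataset("silver/lccc", "base")  # splits: train / validation / test
print(base["train"][0]["dialog"])           # one dialogue, as a list of utterances
```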
dummy/large/1.0.0/dummy_data.zip → base/lccc-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:87ec878e0941fa2af39af9ae57f74bea745ab0bb87ab3d7d1b943d22c6a1b833
- size 723
+ oid sha256:67bc6298942e852982762f8b1edbadb41a0dbe7213f40c6ddf3baa92a08f0d5d
+ size 947703
lccc_base_train.jsonl.gz → base/lccc-train-00000-of-00002.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2162e0ed923fba62329cabf7e1493fbe59248afc94a62508e4abdea61e624627
- size 369854377
+ oid sha256:858f8acc6e9121b06fa93892dbbc5c6750b1cc0eca6f6e9bfe6815c99fedb4d6
+ size 339463403
base/lccc-train-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c83a91bf81170ed2f1c5071b632da0a954285c0848a60f6cd085f837f2253625
+ size 293479940
lccc_base_test.jsonl.gz → base/lccc-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cf8757587bdb8f360cc94fc38baadf9e185bad65a26155527a8430c048676016
- size 549124
+ oid sha256:dd1424f72bc3603735480a6cf321cc1aed55ac0cd9502a659fafa7e922e6282d
+ size 1849574
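
With the renames above, the `base` splits live under `base/` as plain parquet files, so a single shard can also be inspected directly. A sketch, assuming a locally downloaded copy and the `dialog` column (a list of strings) from the deleted metadata below:

```python
# Sketch: inspect one converted shard with pyarrow.
# Assumptions: the file was fetched locally (e.g. with huggingface_hub) and
# carries a "dialog" column of list<string>, per the deleted dataset_infos.json.
import pyarrow.parquet as pq

table = pq.read_table("base/lccc-validation.parquet")
print(table.schema)               # expect: dialog: list<item: string>
print(table.column("dialog")[0])  # first dialogue in the validation split
```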
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"large": {"description": "LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.\nA rigorous data cleaning pipeline is designed to ensure the quality of the corpus.\nThis pipeline involves a set of rules and several classifier-based filters.\nNoises such as offensive or sensitive words, special symbols, emojis,\ngrammatically incorrect sentences, and incoherent conversations are filtered.\n", "citation": "@inproceedings{wang2020chinese,\ntitle={A Large-Scale Chinese Short-Text Conversation Dataset},\nauthor={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},\nbooktitle={NLPCC},\nyear={2020},\nurl={https://arxiv.org/abs/2008.03946}\n}\n", "homepage": "https://github.com/thu-coai/CDial-GPT", "license": "MIT", "features": {"dialog": [{"dtype": "string", "id": null, "_type": "Value"}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "lccc", "config_name": "large", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1530827965, "num_examples": 12007759, "dataset_name": "lccc"}}, "download_checksums": {"https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_large.jsonl.gz": {"num_bytes": 607605643, "checksum": "0eaf3b39e1f54c414c3c75a8319f89c8a98b4bc6f91913b051a0b849e7d3326f"}}, "download_size": 607605643, "post_processing_size": null, "dataset_size": 1530827965, "size_in_bytes": 2138433608}, "base": {"description": "LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.\nA rigorous data cleaning pipeline is designed to ensure the quality of the corpus.\nThis pipeline involves a set of rules and several classifier-based filters.\nNoises such as offensive or sensitive words, special symbols, emojis,\ngrammatically incorrect sentences, and incoherent conversations are filtered.\n", "citation": "@inproceedings{wang2020chinese,\ntitle={A Large-Scale Chinese Short-Text Conversation Dataset},\nauthor={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},\nbooktitle={NLPCC},\nyear={2020},\nurl={https://arxiv.org/abs/2008.03946}\n}\n", "homepage": "https://github.com/thu-coai/CDial-GPT", "license": "MIT", "features": {"dialog": [{"dtype": "string", "id": null, "_type": "Value"}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "lccc", "config_name": "base", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 932634902, "num_examples": 6820506, "dataset_name": "lccc"}, "test": {"name": "test", "num_bytes": 1498216, "num_examples": 10000, "dataset_name": "lccc"}, "validation": {"name": "validation", "num_bytes": 2922731, "num_examples": 20000, "dataset_name": "lccc"}}, "download_checksums": {"https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_train.jsonl.gz": {"num_bytes": 369854377, "checksum": "2162e0ed923fba62329cabf7e1493fbe59248afc94a62508e4abdea61e624627"}, "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_valid.jsonl.gz": {"num_bytes": 1071594, "checksum": "5cc27e7ac3447c5a31386178f82ff01cab56e27827445ef8d429809301491759"}, "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_test.jsonl.gz": {"num_bytes": 549124, "checksum": "cf8757587bdb8f360cc94fc38baadf9e185bad65a26155527a8430c048676016"}}, "download_size": 371475095, "post_processing_size": null, "dataset_size": 937055849, "size_in_bytes": 1308530944}}
dummy/base/1.0.0/dummy_data.zip.lock DELETED
File without changes
dummy/large/1.0.0/dummy_data.zip.lock DELETED
File without changes
large/lccc-train-00000-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c17c923ded02d887d28e9d8c4e1193d89f243b41a979a398491455391571717
+ size 338593764
large/lccc-train-00001-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0699fc3b1355a33f7270129a65ca7e665facb0dd7297ac8e7d5e60c2b924498b
+ size 340115039
large/lccc-train-00002-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:555c89a681a667295466a548c2528e3e8acff6e1fab36f5cb1417413b08bd28b
+ size 341611928
dummy/base/1.0.0/dummy_data.zip → large/lccc-train-00003-of-00004.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4b55cad4bfdd78c371ec57503cec463e24ce1a37f60040cb8f5082c6e0d84fde
- size 2100
+ oid sha256:dee8bcc245d9f1218609b38932e999600256c5ddfe8b98dd007543e1b9aa6d8d
+ size 20107350
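
All four `large` train shards added above can also be loaded in one call by glob. A sketch, assuming the shard names from this commit and a recent `datasets` version that resolves `hf://` paths through fsspec:

```python
# Sketch: load the four "large" train shards in one call via a glob.
# Assumptions: repo "silver/lccc", the shard names added in this commit,
# and a `datasets` version that accepts hf:// paths in data_files.
from datasets import load_dataset

large_train = load_dataset(
    "parquet",
    data_files="hf://datasets/silver/lccc/large/lccc-train-*.parquet",
    split="train",
)
print(large_train)
```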
lccc.py DELETED
@@ -1,136 +0,0 @@
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """
- LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.
- A rigorous data cleaning pipeline is designed to ensure the quality of the corpus.
- This pipeline involves a set of rules and several classifier-based filters.
- Noises such as offensive or sensitive words, special symbols, emojis,
- grammatically incorrect sentences, and incoherent conversations are filtered.
- """
-
- import json
- import os
-
- import datasets
-
-
- # BibTeX citation
- _CITATION = """\
- @inproceedings{wang2020chinese,
- title={A Large-Scale Chinese Short-Text Conversation Dataset},
- author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
- booktitle={NLPCC},
- year={2020},
- url={https://arxiv.org/abs/2008.03946}
- }
- """
-
- # Description of the dataset here
- _DESCRIPTION = """\
- LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.
- A rigorous data cleaning pipeline is designed to ensure the quality of the corpus.
- This pipeline involves a set of rules and several classifier-based filters.
- Noises such as offensive or sensitive words, special symbols, emojis,
- grammatically incorrect sentences, and incoherent conversations are filtered.
- """
-
- _HOMEPAGE = "https://github.com/thu-coai/CDial-GPT"
- _LICENSE = "MIT"
- _URLS = {
-     "large": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_large.jsonl.gz",
-     "base": {
-         "train": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_train.jsonl.gz",
-         "valid": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_valid.jsonl.gz",
-         "test": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_test.jsonl.gz",
-     },
- }
-
-
- class LCCC(datasets.GeneratorBasedBuilder):
-     """Large-scale Cleaned Chinese Conversation corpus."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="large", version=VERSION, description="The large version of LCCC"),
-         datasets.BuilderConfig(name="base", version=VERSION, description="The base version of LCCC"),
-     ]
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "dialog": [datasets.Value("string")],
-             }
-         )
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=features,  # Here we define them above because they are different between the two configurations
-             # If there's a common (input, target) tuple from the features, uncomment supervised_keys line below and
-             # specify them. They'll be used if as_supervised=True in builder.as_dataset.
-             # supervised_keys=("sentence", "label"),
-             # Homepage of the dataset for documentation
-             homepage=_HOMEPAGE,
-             # License for the dataset if available
-             license=_LICENSE,
-             # Citation for the dataset
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         urls = _URLS[self.config.name]
-         downloaded_data = dl_manager.download_and_extract(urls)
-         if self.config.name == "large":
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     gen_kwargs={
-                         "filepath": os.path.join(downloaded_data),
-                         "split": "train",
-                     },
-                 )
-             ]
-         if self.config.name == "base":
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     gen_kwargs={
-                         "filepath": os.path.join(downloaded_data["train"]),
-                         "split": "train",
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TEST,
-                     gen_kwargs={"filepath": os.path.join(downloaded_data["test"]), "split": "test"},
-                 ),
-                 datasets.SplitGenerator(
-                     name=datasets.Split.VALIDATION,
-                     gen_kwargs={
-                         "filepath": os.path.join(downloaded_data["valid"]),
-                         "split": "dev",
-                     },
-                 ),
-             ]
-
-     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
-     def _generate_examples(self, filepath, split):
-         with open(filepath, encoding="utf-8") as f:
-             for key, row in enumerate(f):
-                 row = row.strip()
-                 if len(row) == 0:
-                     continue
-                 yield key, {
-                     "dialog": json.loads(row),
-                 }
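
For anyone still using the original `.jsonl.gz` files deleted below, the removed `_generate_examples` amounts to a few lines of standalone Python. A sketch, assuming a locally downloaded `lccc_base_train.jsonl.gz`:

```python
# Standalone equivalent of the deleted _generate_examples: every non-empty
# line of the .jsonl.gz file is one dialogue, stored as a JSON list of
# utterance strings. (Assumes lccc_base_train.jsonl.gz was downloaded locally.)
import gzip
import json

with gzip.open("lccc_base_train.jsonl.gz", "rt", encoding="utf-8") as f:
    for key, row in enumerate(f):
        row = row.strip()
        if not row:
            continue
        dialog = json.loads(row)  # e.g. ["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅", ...]
        print(key, dialog)
        if key >= 2:              # just preview the first few dialogues
            break
```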
lccc_base_valid.jsonl.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5cc27e7ac3447c5a31386178f82ff01cab56e27827445ef8d429809301491759
- size 1071594
lccc_large.jsonl.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:0eaf3b39e1f54c414c3c75a8319f89c8a98b4bc6f91913b051a0b849e7d3326f
- size 607605643