---
license: other
license_name: tencent-ai-lab-naturalconv-dataset-terms-and-conditions
license_link: LICENSE
task_categories:
- text-generation
language:
- zh
tags:
- dialogue
- multi-turn
- topic-driven
- document
- news
- conversation
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files: dialog_release.json
---
# NaturalConv: A Chinese Dialogue Dataset Towards Multi-turn Topic-driven Conversation
## Introduction
This dataset is described in the paper [NaturalConv: A Chinese Dialogue Dataset Towards Multi-turn Topic-driven Conversation](https://arxiv.org/abs/2103.02548). The full release consists of five data files.
### 1. dialog_release.json:
A JSON file containing a list of dictionaries. It can be loaded in Python like this:
```python
import json

with open("dialog_release.json", "r", encoding="utf-8") as f:
    dialog_list = json.load(f)
```
`dialog_list` is a list of dictionaries, one per dialogue session. Each dictionary contains three keys, "dialog_id", "document_id", and "content":
- "dialog_id" is a unique id for this dialogue.
- "document_id" indicates which document this dialogue is grounded on.
- "content" is a list of the utterances in the whole dialogue session.

Altogether there are 19,919 dialogues, with approximately 400K dialogue utterances.
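For illustration, the access pattern can be sketched with a small in-memory sample that mirrors the schema described above (the field values and id formats here are invented for the example, not taken from the dataset):

```python
# Hypothetical sample mirroring the schema of dialog_release.json;
# the ids and utterances are made up for illustration.
dialog_list = [
    {
        "dialog_id": "0_0",
        "document_id": 0,
        "content": [
            "你好！",              # utterance 1
            "你好，最近怎么样？",  # utterance 2
        ],
    },
]

# Access the fields of one dialogue session.
dialog = dialog_list[0]
print(dialog["dialog_id"])      # unique id of this dialogue
print(dialog["document_id"])    # id of the grounding document
print(len(dialog["content"]))   # number of utterances in the session
```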
### 2. document_url_release.json:
A JSON file containing a list of dictionaries. It can be loaded in Python like this:
```python
import json

with open("document_url_release.json", "r", encoding="utf-8") as f:
    document_list = json.load(f)
```
`document_list` is a list of dictionaries, one per document. Each dictionary contains three keys, "document_id", "topic", and "url":
- "document_id" is a unique id for this document.
- "topic" indicates which topic this document comes from.
- "url" is the URL of the original document.

Altogether there are 6,500 documents.
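Since each dialogue's "document_id" points into document_url_release.json, the two files can be joined with a simple lookup table. A minimal sketch, again using hypothetical in-memory samples (the field values are invented for the example):

```python
# Hypothetical samples mirroring the two schemas; values are illustrative.
document_list = [
    {"document_id": 0, "topic": "体育", "url": "https://example.com/doc0"},
]
dialog_list = [
    {"dialog_id": "0_0", "document_id": 0, "content": ["你好！"]},
]

# Index documents by id, then resolve each dialogue's grounding document.
doc_by_id = {doc["document_id"]: doc for doc in document_list}
for dialog in dialog_list:
    doc = doc_by_id[dialog["document_id"]]
    print(dialog["dialog_id"], "->", doc["topic"], doc["url"])
```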
### 3, 4, and 5. train.txt, dev.txt, and test.txt:
Each file lists the "dialog_id"s of the dialogues in the train, dev, and test splits, respectively.
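Given a set of split ids, partitioning `dialog_list` is a one-line filter. A sketch with a hypothetical in-memory sample; it assumes each split file lists one "dialog_id" per line, which is an assumption about the file format rather than something stated above:

```python
# Hypothetical dialog list; in practice, load dialog_release.json as shown earlier.
dialog_list = [
    {"dialog_id": "0_0", "document_id": 0, "content": ["你好！"]},
    {"dialog_id": "1_0", "document_id": 1, "content": ["再见！"]},
]

# Assumption: train.txt contains one dialog_id per line, so the ids could be
# read with:
#   with open("train.txt", encoding="utf-8") as f:
#       train_ids = {line.strip() for line in f if line.strip()}
train_ids = {"0_0"}  # stand-in for the ids read from train.txt

# Keep only the dialogues belonging to the training split.
train_dialogs = [d for d in dialog_list if d["dialog_id"] in train_ids]
```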
## Document Downloading
For research purposes only, you can use the code shared in this [repository](https://github.com/naturalconv/NaturalConvDataSet) to download the document texts from the URLs released in document_url_release.json.
## Citation
Please kindly cite our paper if you find this dataset useful:
```
@inproceedings{aaai-2021-naturalconv,
  title={NaturalConv: A Chinese Dialogue Dataset Towards Multi-turn Topic-driven Conversation},
  author={Wang, Xiaoyang and Li, Chen and Zhao, Jianqiao and Yu, Dong},
  booktitle={Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI-21)},
  year={2021}
}
```
## License
The dataset is released for non-commercial use only. By downloading it, you agree to the terms and conditions in our [LICENSE](https://huggingface.co/datasets/xywang1/NaturalConv/blob/main/LICENSE). For authorization of commercial use, please contact [email protected] for details.
## Disclaimers
The dataset is provided AS-IS, without warranty of any kind, express or implied. The views and opinions expressed in the dataset, including the documents and the dialogues, do not necessarily reflect those of Tencent or of the authors of the above paper.