---
language:
- vi
license: mit
task_categories:
- text-generation
- question-answering
- text-classification
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: context
    dtype: string
  - name: answer_start
    dtype: int64
  - name: index
    dtype: int64
  splits:
  - name: train
    num_bytes: 54889695
    num_examples: 48460
  - name: test
    num_bytes: 6061691
    num_examples: 5385
  download_size: 33576702
  dataset_size: 60951386
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
size_categories:
- 10K<n<100K
---
## Dataset description

This dataset was collected from internet sources, the SQuAD dataset, Wikipedia, and other sources. It was translated into Vietnamese with Google Translate and word-segmented with VnCoreNLP (https://github.com/vncorenlp/VnCoreNLP).
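The segmentation step can be reproduced with the Python wrapper for VnCoreNLP. The sketch below is a minimal example, assuming the `py_vncorenlp` package and an arbitrary local directory for the model files:

```python
import py_vncorenlp

# Download the VnCoreNLP model files once into a directory of your choice
# ("/path/to/vncorenlp" is a placeholder, not a path used by this dataset).
py_vncorenlp.download_model(save_dir="/path/to/vncorenlp")

# Load only the word-segmentation annotator.
segmenter = py_vncorenlp.VnCoreNLP(annotators=["wseg"], save_dir="/path/to/vncorenlp")

# Returns a list of word-segmented sentences, with the syllables of a compound word joined by "_".
print(segmenter.word_segment("Ông Nguyễn Khắc Chúc đang làm việc tại Đại học Quốc gia Hà Nội."))
```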
## Data structure

The dataset includes the following columns:

- `question`: a question about the content of the passage.
- `context`: the text passage.
- `answer`: the answer to the question, drawn from the content of the passage.
- `answer_start`: the starting position of the answer within the passage.
## How to use

You can load this dataset with Hugging Face's `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("ShynBui/Vietnamese_Reading_Comprehension_Dataset")
```
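Once loaded, individual examples can be inspected directly. The snippet below additionally assumes that `answer_start` is a character offset into `context` (SQuAD-style); the card does not state this, so treat the check as illustrative:

```python
example = dataset["train"][0]

# Assumption: answer_start is a character index into context, as in SQuAD.
start = example["answer_start"]
span = example["context"][start:start + len(example["answer"])]

print(example["question"])
print(example["answer"])
print(span == example["answer"])  # True only if the offset is character-based
```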
## Data splits

The dataset is divided into train and test splits:
```
DatasetDict({
    train: Dataset({
        features: ['question', 'answer', 'context', 'answer_start', 'index'],
        num_rows: 48460
    })
    test: Dataset({
        features: ['question', 'answer', 'context', 'answer_start', 'index'],
        num_rows: 5385
    })
})
```
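There is no separate validation split. If one is needed, it can be carved out of the train split with the `datasets` API; the 10% ratio below is an arbitrary choice:

```python
# Hold out 10% of the train split as a validation set.
split = dataset["train"].train_test_split(test_size=0.1, seed=42)
train_set, valid_set = split["train"], split["test"]
print(len(train_set), len(valid_set))
```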
## Task categories

This dataset can be used for the following main tasks (a small question-answering sketch follows the list):

- question-answering
- reading-comprehension
- natural-language-processing
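For the extractive question-answering task, question/context pairs can be tokenized together. The sketch below is only an illustration; the tokenizer name `vinai/phobert-base` is an example choice (a Vietnamese model that also expects word-segmented input), not a recommendation made by this card:

```python
from transformers import AutoTokenizer

# Example choice of a Vietnamese tokenizer; any QA-capable model's tokenizer works similarly.
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

example = dataset["train"][0]
encoded = tokenizer(
    example["question"],
    example["context"],
    truncation=True,   # truncate the pair to the model's maximum length
    max_length=256,
)
print(len(encoded["input_ids"]))
```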
## Contributing

We welcome all contributions to this dataset. If you discover an error or have feedback, please open an Issue or Pull Request on our Hub repository.
## License

This dataset is released under the MIT License.
## Contact

If you have any questions, please contact us via email: [email protected].