---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: data_index_by_user
    dtype: int32
  - name: title
    dtype: string
  - name: content
    dtype: string
  - name: label
    dtype: int32
  splits:
  - name: train
    num_bytes: 207331112
    num_examples: 560000
  - name: test
    num_bytes: 25970187
    num_examples: 70000
  download_size: 136871622
  dataset_size: 233301299
license: cc-by-sa-3.0
---
|
# Dataset Card for "kor_dbpedia_14"
|
|
|
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
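
As a quick sanity check, the split statistics declared in the metadata header are internally consistent: the per-split byte counts sum to the declared `dataset_size`. The figures below are copied directly from the `dataset_info` block above.

```python
# Split statistics copied from the card's dataset_info metadata.
train_bytes, train_examples = 207_331_112, 560_000
test_bytes, test_examples = 25_970_187, 70_000

# The declared dataset_size is the sum of the per-split byte counts,
# and the two splits together hold 630,000 examples.
dataset_size = train_bytes + test_bytes
total_examples = train_examples + test_examples

print(dataset_size)    # 233301299, matching the metadata
print(total_examples)  # 630000
```

Note that `download_size` (136,871,622 bytes) is smaller than `dataset_size` because the hosted files are compressed.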
|
|
|
# Source Data Citation Information
|
```
Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).

Lehmann, Jens, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann et al. "DBpedia – a large-scale, multilingual knowledge base extracted from Wikipedia." Semantic Web 6, no. 2 (2015): 167-195.
```