albertvillanova committed
Commit 9abd46c
Parent: 2c5d0b3

Fix dataset card (#4)


- Fix dataset card (e4c423caa46903dba6f7f99d657e006d65f53e01)

Files changed (1)
  1. README.md +18 -5
README.md CHANGED
@@ -91,9 +91,9 @@ configs:

  ## Dataset Description

- - **Homepage:** [DBpedia14 homepage](https://wiki.dbpedia.org/develop/datasets)
- - **Repository:** [DBpedia14 repository](https://github.com/dbpedia/extraction-framework)
- - **Paper:** [DBpedia--a large-scale, multilingual knowledge base extracted from Wikipedia](https://content.iospress.com/articles/semantic-web/sw134)
+ - **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Repository:** https://github.com/zhangxiangxiao/Crepe
+ - **Paper:** https://arxiv.org/abs/1509.01626
  - **Point of Contact:** [Xiang Zhang](mailto:[email protected])

  ### Dataset Summary
@@ -153,7 +153,7 @@ The DBPedia ontology classification dataset is constructed by Xiang Zhang (xiang

  #### Initial Data Collection and Normalization

- [More Information Needed]
+ Source data is taken from DBpedia: https://wiki.dbpedia.org/develop/datasets

  #### Who are the source language producers?

@@ -199,9 +199,22 @@ The DBPedia ontology classification dataset is licensed under the terms of the C

  ### Citation Information

- Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
+ ```
+ @inproceedings{NIPS2015_250cf8b5,
+ author = {Zhang, Xiang and Zhao, Junbo and LeCun, Yann},
+ booktitle = {Advances in Neural Information Processing Systems},
+ editor = {C. Cortes and N. Lawrence and D. Lee and M. Sugiyama and R. Garnett},
+ pages = {},
+ publisher = {Curran Associates, Inc.},
+ title = {Character-level Convolutional Networks for Text Classification},
+ url = {https://proceedings.neurips.cc/paper_files/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf},
+ volume = {28},
+ year = {2015}
+ }
+ ```

  Lehmann, Jens, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann et al. "DBpedia–a large-scale, multilingual knowledge base extracted from Wikipedia." Semantic web 6, no. 2 (2015): 167-195.
+
  ### Contributions

  Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.
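
The card this commit updates describes the DBPedia ontology classification dataset but does not include a loading snippet; below is a minimal sketch of reading it with the Hugging Face `datasets` library, assuming the card belongs to a dataset published on the Hub as `dbpedia_14` with `title`, `content`, and `label` fields (the identifier and field names are assumptions, not stated in this diff).

```python
# Minimal sketch: load the DBPedia ontology classification dataset with the
# `datasets` library. The Hub identifier "dbpedia_14" and the field names
# below are assumptions; adjust them to match the actual dataset card.
from datasets import load_dataset

train = load_dataset("dbpedia_14", split="train")
print(train[0])  # expected keys (assumed): "title", "content", "label"
```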