---
dataset_info:
  features:
  - name: index
    dtype: int64
  - name: text
    dtype: string
  - name: year
    dtype: string
  - name: 'No'
    dtype: string
  splits:
  - name: ACL
    num_bytes: 421459076
    num_examples: 8553
  - name: AAAI
    num_bytes: 704912380
    num_examples: 18849
  - name: CVPR
    num_bytes: 580597550
    num_examples: 12128
  - name: ECCV
    num_bytes: 309254132
    num_examples: 6166
  - name: EMNLP
    num_bytes: 346365402
    num_examples: 7109
  - name: ICCV
    num_bytes: 260634902
    num_examples: 5369
  - name: ICLR
    num_bytes: 456934533
    num_examples: 5806
  - name: ICML
    num_bytes: 755479970
    num_examples: 10951
  - name: IJCAI
    num_bytes: 955057995
    num_examples: 20284
  - name: NAACL
    num_bytes: 145448060
    num_examples: 2994
  - name: NIPS
    num_bytes: 955057995
    num_examples: 20284
  download_size: 3079484694
  dataset_size: 5891201995
---
# Dataset Card for "AI-paper-crawl"
The dataset contains 11 splits, one per conference.
Each split has the following fields (see the loading sketch after the list):
1. "index": Index number starting from 0. It's the primary key;
2. "text": The content of the paper in pure text form. Newline is turned into 3 spaces if "-" is not detected;
3. "year": A **string** of the paper's publication year, like "2018". Transform it into int if you need to;
4. "No": A **string** of index number within a year. 1-indexed. In "ECCV" split, the "No" is index number throughout the entire split. It only provides a reference of the order that these papers are accessed, instead of the real publication order.
The "ICLR" split may miss roughly 20%-25% of the papers, since it's collected by searching on arxiv, which may return 0 or more than 1 results.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)