---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- mit
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: DocPrompting-CoNaLa
tags:
- code-generation
- doc retrieval
- retrieval augmented generation
---

## Dataset Description
- **Repository:** https://github.com/shuyanzhou/docprompting
- **Paper:** [DocPrompting: Generating Code by Retrieving the Docs](https://arxiv.org/pdf/2207.05987.pdf)

### Dataset Summary
This is the natural language to bash generation dataset we harvested from the English subset of [`tldr`](https://github.com/tldr-pages/tldr).
We split the dataset by bash command: every command in the dev and test sets is held out from the training set.
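
The command-level split can be checked directly. The sketch below (assuming the `datasets` library and the `cmd_name` field shown in the Dataset Structure section) verifies that no dev or test command appears in training:

```python
from datasets import load_dataset

# Sanity-check sketch: commands in dev/test should be held out from train.
dataset = load_dataset("neulab/tldr")

train_cmds = set(dataset["train"]["cmd_name"])
for split in ("validation", "test"):
    held_out_cmds = set(dataset[split]["cmd_name"])
    # No command should appear in both train and the held-out split.
    assert train_cmds.isdisjoint(held_out_cmds), f"{split} overlaps with train"
```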

### Supported Tasks and Leaderboards
This dataset is used to evaluate code generation.

### Languages
English - Bash

## Dataset Structure
```python
from datasets import load_dataset

dataset = load_dataset("neulab/tldr")
DatasetDict({
    train: Dataset({
        features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
        num_rows: 6414
    })
    test: Dataset({
        features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
        num_rows: 928
    })
    validation: Dataset({
        features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
        num_rows: 1845
    })
})

code_docs = load_dataset("neulab/docprompting-conala", "docs")
DatasetDict({
    train: Dataset({
        features: ['doc_id', 'doc_content'],
        num_rows: 439064
    })
})
```

### Data Fields
train/dev/test:
- `nl`: the natural language intent
- `cmd`: the reference code snippet
- `question_id`: the unique id of the question
- `oracle_man`: the `doc_id`s of the manual entries used in the reference code snippet; the corresponding contents are in the `docs` split
- `cmd_name`: the bash command of this code snippet
- `tldr_cmd_name`: the bash command name used in the tldr GitHub repo; `cmd_name` and `tldr_cmd_name` can differ due to naming differences
- `manual_exist`: whether the manual exists on https://manned.org
- `matching_info`: each code snippet consists of multiple tokens; this field gives the detailed reference-doc match for each token

docs:
- `doc_id`: the id of a doc
- `doc_content`: the content of the doc
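
The sketch below is one way to tie these fields together: it resolves the `oracle_man` ids of a training example against the `docs` split (the in-memory dictionary is just for illustration, not part of the dataset API):

```python
from datasets import load_dataset

dataset = load_dataset("neulab/tldr")
docs = load_dataset("neulab/docprompting-conala", "docs")["train"]

# Map doc_id -> doc_content for quick lookup (illustrative only).
doc_lookup = dict(zip(docs["doc_id"], docs["doc_content"]))

example = dataset["train"][0]
print(example["nl"])    # natural language intent
print(example["cmd"])   # reference bash snippet
for doc_id in example["oracle_man"]:
    # Print the first 80 characters of each oracle manual entry.
    print(doc_id, doc_lookup.get(doc_id, "")[:80])
```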

## Dataset Creation
The dataset was curated from [`tldr`](https://github.com/tldr-pages/tldr).
The tldr project collects common usage examples of bash commands together with natural language descriptions of their intent.
For more details, please check the repo.

### Citation Information

```
@article{zhou2022doccoder,
  title={DocCoder: Generating Code by Retrieving and Reading Docs},
  author={Zhou, Shuyan and Alon, Uri and Xu, Frank F and Jiang, Zhengbao and Neubig, Graham},
  journal={arXiv preprint arXiv:2207.05987},
  year={2022}
}
```