Datasets:
Size: 1M<n<10M
ArXiv:
Tags: programming-language, code, program-synthesis, automatic-code-repair, code-retrieval, code-translation
License:
Merge branch 'main' of https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval into main
README.md CHANGED
@@ -57,7 +57,7 @@ configs:
 # xCodeEval
 [xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval](https://arxiv.org/abs/2303.03004)
 
-We introduce **xCodeEval**, the largest executable multilingual multitask benchmark to date consisting of
+We introduce **xCodeEval**, the largest executable multilingual multitask benchmark to date consisting of 25 M document-level coding examples from about 7.5 K unique problems covering up to 17 programming languages with execution-level parallelism. It features a total of seven tasks involving code understanding, generation, translation and retrieval, and it employs an execution-based evaluation. We develop a test-case based multilingual code execution engine, [**ExecEval**](https://github.com/ntunlp/ExecEval), which supports all the programming languages in **xCodeEval**. We also propose a novel data splitting and a data selection schema for balancing data distributions over multiple attributes based on geometric mean and graph-theoretic principles.
 
 This repository contains the sample code and data link for xCodeEval [paper](https://arxiv.org/abs/2303.03004).
 
@@ -88,13 +88,13 @@ git lfs pull --include "apr/test/*"
 
 We propose 7 Tasks.
 
-1. [Tag Classification](
-2. [Code Compilation](
-3. [Program Synthesis](
-4. [Code Translation](
-5. [Automatic Program Repair](
-6. [Code-Code Retrieval](
-7. [NL-Code Retrieval](
+1. [Tag Classification](https://github.com/ntunlp/xCodeEval/blob/main/apr.md)
+2. [Code Compilation](https://github.com/ntunlp/xCodeEval/blob/main/code_compilation.md)
+3. [Program Synthesis](https://github.com/ntunlp/xCodeEval/blob/main/program_synthesis.md)
+4. [Code Translation](https://github.com/ntunlp/xCodeEval/blob/main/code_translation.md)
+5. [Automatic Program Repair](https://github.com/ntunlp/xCodeEval/blob/main/apr.md)
+6. [Code-Code Retrieval](https://github.com/ntunlp/xCodeEval/blob/main/retrieval.md)
+7. [NL-Code Retrieval](https://github.com/ntunlp/xCodeEval/blob/main/retrieval.md)
 
 # Common Data for different tasks
 
@@ -105,7 +105,7 @@ We have two data files that are required for multiple tasks.
 1. `problem_descriptions.jsonl`
 2. `unittest_db.json`
 
-You can find these two files in the root directory of the [main](https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval/tree/main) branch of huggingface dataset repository. To avoid data
+You can find these two files in the root directory of the [main](https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval/tree/main) branch of the Hugging Face dataset repository. To avoid data redundancy, we didn't include these data with the relevant tasks; instead, we add a unique id `src_uid` that can be used to retrieve them.
 
 ## Structure of `problem_descriptions.jsonl`
 
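As an illustration of how these shared files are meant to be used, here is a minimal sketch that fetches the two files and indexes the problem descriptions by `src_uid`. Using `huggingface_hub` for the download is just one convenient option, not the only way to obtain the data.

```python
import json
from huggingface_hub import hf_hub_download

# Fetch the two shared files from the root of the `main` branch of the dataset repo.
problems_path = hf_hub_download(
    repo_id="NTU-NLP-sg/xCodeEval",
    filename="problem_descriptions.jsonl",
    repo_type="dataset",
)
unittests_path = hf_hub_download(
    repo_id="NTU-NLP-sg/xCodeEval",
    filename="unittest_db.json",
    repo_type="dataset",
)

# Index the problem descriptions by `src_uid`, the id that task samples use to refer to them.
problems_by_uid = {}
with open(problems_path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        problems_by_uid[record["src_uid"]] = record

print(len(problems_by_uid), "problems indexed")
```

Any task sample carrying a `src_uid` can then be joined to its full problem description through this index.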
@@ -140,19 +140,19 @@ A sample,
 ### Key Definitions
 
 1. `description`: Problem description in textual format, math operations are written in latex.
-2. `input_from`: How the program should take unit test.
+2. `input_from`: How the program should take the unit test input.
 3. `output_to`: Where the program should output the result of the unit test.
 4. `time_limit`: Time limit to solve the problem.
 5. `memory_limit`: Memory limit to solve the problem.
-6. `input_spec`: How and what order the input will be given to the program
+6. `input_spec`: How and in what order the input will be given to the program. It also includes the data range, types, and sizes.
-7. `output_spec`: How the outputs should be printed. Most of the time the unit test results are matched with *exact string match* or *floating point comparison* with a precision boundary.
+7. `output_spec`: How the outputs should be printed. Most of the time the unit test results are matched with an *exact string match* or *floating point comparison* with a precision boundary.
 8. `sample_inputs`: A sample input for the code that is expected to solve the problem described in `description`.
 9. `sample_outputs`: The expected output for the `sample_input` that is expected to solve the problem described in `description`.
 10. `notes`: Explanation of `sample_inputs` & `sample_outputs`.
 11. `tags`: The problem categories.
-12. `src_uid`: The unique id of the problem. This ID is referred in the task data samples instead of putting all
+12. `src_uid`: The unique id of the problem. This ID is referred to in the task data samples instead of duplicating all this information.
-13. `difficulty`: How difficult is it to solve the problem for a human (annotated by an expert human)
+13. `difficulty`: How difficult the problem is for a human to solve (annotated by an expert human).
-14. `created_at`: The
+14. `created_at`: The Unix timestamp when the problem was released. Use the `datetime` library in Python to parse it to a human-readable format (a short parsing example follows below).
 
 ## Structure of `unittest_db.json`
 
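Following the note on `created_at` in item 14, here is a short sketch that reads one record from `problem_descriptions.jsonl` and converts the timestamp with Python's `datetime`. The field names come from the key definitions above; the assumption that `created_at` parses as an integer is hedged in the code.

```python
import json
from datetime import datetime, timezone

with open("problem_descriptions.jsonl", encoding="utf-8") as f:
    first_problem = json.loads(next(f))  # look at a single record

# `created_at` is documented as a Unix timestamp; int() hedges against it being stored as a string.
released = datetime.fromtimestamp(int(first_problem["created_at"]), tz=timezone.utc)

# Fields described in the key definitions above.
print("src_uid:   ", first_problem["src_uid"])
print("tags:      ", first_problem["tags"])
print("difficulty:", first_problem["difficulty"])
print("limits:    ", first_problem["time_limit"], "/", first_problem["memory_limit"])
print("released:  ", released.isoformat())
```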
@@ -182,8 +182,8 @@ unittest_db = {
 ### Key Definitions
 
 1. `unittest_db.json` dict keys i.e., `db884d679d9cfb1dc4bc511f83beedda` are the `src_uid` from `problem_descriptions.jsonl`.
-2. `input
-3. `output
+2. `input`: Input of the unit test.
+3. `output`: List of expected outputs for the unit test.
 
 # Citation
 
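To close the loop between the two files, here is a rough sketch of looking up the unit tests of one problem in `unittest_db.json` (the example key is the `src_uid` quoted in the key definitions above) and checking a candidate output roughly the way `output_spec` describes. The per-test record layout (`input`/`output` keys) follows the key definitions; the matching function is only an illustrative approximation, not the official ExecEval checker.

```python
import json
import math

with open("unittest_db.json", encoding="utf-8") as f:
    unittest_db = json.load(f)

def outputs_match(expected: str, produced: str, rel_tol: float = 1e-6) -> bool:
    """Illustrative check: exact string match first, then a float comparison if both parse as numbers."""
    if expected.strip() == produced.strip():
        return True
    try:
        return math.isclose(float(expected), float(produced), rel_tol=rel_tol)
    except ValueError:
        return False

# The dict keys are `src_uid`s; this is the uid shown in the key definitions above.
tests = unittest_db["db884d679d9cfb1dc4bc511f83beedda"]

for test in tests:  # assumed layout: a list of {"input": ..., "output": [...]} records
    candidate = "42"  # stand-in for the output produced by a program under evaluation
    accepted = any(outputs_match(expected, candidate) for expected in test["output"])
    print("input:", test["input"], "-> accepted:", accepted)
```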