# CodeT5+

Official research release for the **CodeT5+** models (`220M`, `770M`, `2B`, `6B`, `16B`) for a wide range of **Code Understanding and Generation** tasks.
Find out more via our [blog post](https://blog.salesforceairesearch.com/codet5-open-code-large-language-models/).

*Title*: [CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)

*Authors*: [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution)

# What is this about?
CodeT5+ is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e., _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.

To train CodeT5+, we introduce a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, to efficiently scale up the model, we propose a simple yet effective _compute-efficient pretraining_ method to initialize our model with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen).
Furthermore, we explore instruction tuning to align the model with natural language instructions following [Code Alpaca](https://github.com/sahil280114/codealpaca). See the overview of CodeT5+ below.

![CodeT5+ overview](codet5p_overview.png)

## Table of Contents

1. [Released Models](#released-models)
2. [How to Use?](#how-to-use)
3. [Instruction Tuning to Align with Natural Language Instructions](#instruction-tuning-to-align-with-natural-language-instructions)
4. [How to Finetune Using Your Own Data?](#how-to-finetune-using-your-own-data)
5. [Reproduce the Results](#reproduce-the-results)
   1. [HumanEval](#humaneval)
   2. [Text-to-Code Retrieval](#text-to-code-retrieval)
6. [Citation](#citation)

# Released Models
We implemented a family of CodeT5+ models, with model sizes ranging from 220M to 16B.
Note that CodeT5+ `220M` and `770M` employ the same architectures as CodeT5-base and CodeT5-large, respectively, and are pretrained from scratch, while CodeT5+ `2B`, `6B`, and `16B` employ a "_shallow encoder and deep decoder_" architecture, with the shallow encoder initialized from CodeGen-mono 350M and the deep decoder initialized from CodeGen-mono 2B, 6B, and 16B, respectively.
InstructCodeT5+ 16B is our instruction-tuned model derived from CodeT5+ 16B.
Note that because this model utilizes instruction-tuning data curated with the OpenAI API, the InstructCodeT5+ 16B checkpoint is licensed for research and **non-commercial** use only.

We release the following CodeT5+ models on Hugging Face:
* CodeT5+ `110M` embedding model: [codet5p-110m-embedding](https://huggingface.co/Salesforce/codet5p-110m-embedding).🔥
* CodeT5+ `220M` bimodal model: [codet5p-220m-bimodal](https://huggingface.co/Salesforce/codet5p-220m-bimodal).🔥
* CodeT5+ `220M` and `770M`: [codet5p-220m](https://huggingface.co/Salesforce/codet5p-220m) and [codet5p-770m](https://huggingface.co/Salesforce/codet5p-770m).
* CodeT5+ `220M` and `770M` that are further tuned on the Python subset: [codet5p-220m-py](https://huggingface.co/Salesforce/codet5p-220m-py) and [codet5p-770m-py](https://huggingface.co/Salesforce/codet5p-770m-py).
* CodeT5+ `2B`, `6B`, `16B`: [codet5p-2b](https://huggingface.co/Salesforce/codet5p-2b), [codet5p-6b](https://huggingface.co/Salesforce/codet5p-6b), and [codet5p-16b](https://huggingface.co/Salesforce/codet5p-16b).
* InstructCodeT5+ `16B`: [instructcodet5p-16b](https://huggingface.co/Salesforce/instructcodet5p-16b).

![CodeT5+ HumanEval results](codet5p_humaneval.png)

# How to Use?
All CodeT5+ models and tokenizers can be easily loaded using the `AutoModelForSeq2SeqLM` and `AutoTokenizer` functionality.
For tokenizers, CodeT5+ `220M` and `770M` employ the same tokenizer as the original [CodeT5](https://github.com/salesforce/CodeT5), while CodeT5+ `2B`, `6B`, `16B` employ the same tokenizer as [CodeGen](https://github.com/salesforce/CodeGen).

For CodeT5+ `2B`, `6B`, `16B`, and InstructCodeT5+ `16B`, please set `trust_remote_code=True` when loading the models, as the [model class](https://huggingface.co/Salesforce/codet5p-16b/blob/main/modeling_codet5p.py) is defined in the Hugging Face repo.
In addition, these models benefit from passing additional prompts to the decoder via `decoder_input_ids` to achieve better generation performance.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

checkpoint = "Salesforce/instructcodet5p-16b"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
                                              torch_dtype=torch.float16,
                                              low_cpu_mem_usage=True,
                                              trust_remote_code=True).to(device)

encoding = tokenizer("def print_hello_world():", return_tensors="pt").to(device)
encoding['decoder_input_ids'] = encoding['input_ids'].clone()
outputs = model.generate(**encoding, max_length=15)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### CodeT5+ embedding model
Apart from the generative models, we also release the [CodeT5+ 110M embedding](https://huggingface.co/Salesforce/codet5p-110m-embedding) model that can be used to extract code embeddings. This checkpoint contains the encoder of the CodeT5+ 220M model, pretrained in two stages on both unimodal and bimodal data, together with a linear projection layer that maps the encoder output to a 256-dimensional vector.

```python
from transformers import AutoModel, AutoTokenizer

checkpoint = "Salesforce/codet5p-110m-embedding"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True).to(device)

inputs = tokenizer.encode("def print_hello_world():\tprint('Hello World!')", return_tensors="pt").to(device)
embedding = model(inputs)[0]
print(f'Dimension of the embedding: {embedding.size()[0]}, with norm={embedding.norm().item()}')
# Dimension of the embedding: 256, with norm=1.0
```

### CodeT5+ bimodal model
We release a [CodeT5+ 220M bimodal model](https://huggingface.co/Salesforce/codet5p-220m-bimodal) that is pretrained through two stages on both unimodal and bimodal data. This model can be used for code summarization and code retrieval in a zero-shot manner, as well as for code generation with fine-tuning. Its encoder and projection layer share the same weights with the [CodeT5+ 110M embedding](https://huggingface.co/Salesforce/codet5p-110m-embedding) model.
For code retrieval tasks, we can use its encoder to extract code embeddings and compute the cosine similarity between the query text and the code snippet, just as with the embedding model, or additionally use its decoder in text-code matching mode to rerank the top candidates. See the [text-to-code retrieval evaluation](#text-to-code-retrieval) for more details.

Below is an example of using the model for code summarization.

```python
from transformers import AutoModel, AutoTokenizer

checkpoint = "Salesforce/codet5p-220m-bimodal"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True).to(device)

code = """def svg_to_image(string, size=None):
    if isinstance(string, unicode):
        string = string.encode('utf-8')
    renderer = QtSvg.QSvgRenderer(QtCore.QByteArray(string))
    if not renderer.isValid():
        raise ValueError('Invalid SVG data.')
    if size is None:
        size = renderer.defaultSize()
    image = QtGui.QImage(size, QtGui.QImage.Format_ARGB32)
    painter = QtGui.QPainter(image)
    renderer.render(painter)
    return image"""

input_ids = tokenizer(code, return_tensors="pt").input_ids.to(device)

generated_ids = model.generate(input_ids, max_length=20)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
# Convert a string of SVG data to an image.
```
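
As a complement to the summarization example, below is a minimal sketch of the zero-shot text-to-code retrieval use case described above, using the shared [CodeT5+ 110M embedding](https://huggingface.co/Salesforce/codet5p-110m-embedding) checkpoint. The query and candidate snippets are made up for illustration, and the matching-decoder reranking step is omitted; see [Text-to-Code Retrieval](#text-to-code-retrieval) for the full evaluation pipeline.

```python
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "Salesforce/codet5p-110m-embedding"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True).to(device)

def embed(text):
    # Encode a text query or a code snippet into a 256-dimensional, unit-normalized embedding.
    inputs = tokenizer.encode(text, return_tensors="pt").to(device)
    with torch.no_grad():
        return model(inputs)[0]

query = "convert a string of SVG data to an image"
candidates = [  # illustrative snippets only
    "def add(a, b):\n    return a + b",
    "def svg_to_image(string):\n    renderer = QtSvg.QSvgRenderer(QtCore.QByteArray(string))\n    return render(renderer)",
    "def read_json(path):\n    with open(path) as f:\n        return json.load(f)",
]

query_emb = embed(query)
code_embs = torch.stack([embed(c) for c in candidates])
scores = code_embs @ query_emb  # cosine similarity, since the embeddings are unit-normalized
best = scores.argmax().item()
print(f"Best match (score={scores[best].item():.3f}):\n{candidates[best]}")
```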

# Instruction Tuning to Align with Natural Language Instructions

We explore instruction tuning to align CodeT5+ with natural language instructions following [Code Alpaca](https://github.com/sahil280114/codealpaca). First download the instruction data `code_alpaca_20k.json` from [here](https://github.com/sahil280114/codealpaca/tree/master/data).
Then, you can run the following command to finetune CodeT5+ 16B on the instruction data.
You can change the `--instruct-data-path` to finetune on other instruction data or any downstream data.

```bash
MODEL=Salesforce/codet5p-16b
SAVE_DIR=saved_models/instructcodet5p-16b

deepspeed instruct_tune_codet5p.py \
  --load $MODEL --save-dir $SAVE_DIR --instruct-data-path code_alpaca_20k.json \
  --fp16 --deepspeed deepspeed_config.json
```

# How to Finetune Using Your Own Data?

We provide an example finetuning script [tune_codet5p_seq2seq.py](https://github.com/salesforce/CodeT5/blob/main/CodeT5%2B/tune_codet5p_seq2seq.py) for CodeT5+ models on the Seq2Seq LM task.
After installing the `transformers` and `datasets` libraries, you can run `python tune_codet5p_seq2seq.py` to finetune CodeT5+ models on any Seq2Seq LM task, such as the Python code summarization task illustrated in the script.
To finetune on your own data, you just need to prepare your customized data in the `datasets` format and pass its path to `--cache-data`, as sketched below.
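
For illustration, here is a hedged sketch of one way to build such a dataset: tokenize `(source, target)` string pairs with the CodeT5+ tokenizer and save them with the `datasets` library's `save_to_disk`. The column names, padding, and truncation lengths below are assumptions for this example; please check `tune_codet5p_seq2seq.py` for the exact features and preprocessing it expects before reusing this.

```python
# A hedged sketch (not the official preprocessing) of preparing seq2seq data in the
# `datasets` format; verify the expected columns against tune_codet5p_seq2seq.py.
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5p-220m")

pairs = [  # illustrative (code, summary) pairs only
    {"source": "def add(a, b):\n    return a + b", "target": "Add two numbers."},
    {"source": "def is_even(n):\n    return n % 2 == 0", "target": "Check whether a number is even."},
]

def tokenize(example, max_source_len=320, max_target_len=128):
    model_inputs = tokenizer(example["source"], max_length=max_source_len,
                             padding="max_length", truncation=True)
    labels = tokenizer(example["target"], max_length=max_target_len,
                       padding="max_length", truncation=True).input_ids
    # Replace pad tokens in the labels with -100 so they are ignored by the loss.
    model_inputs["labels"] = [t if t != tokenizer.pad_token_id else -100 for t in labels]
    return model_inputs

dataset = Dataset.from_list(pairs).map(tokenize, remove_columns=["source", "target"])
dataset.save_to_disk("my_seq2seq_data")  # then pass this path via --cache-data
```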

In addition, you can specify `--load` to select the specific CodeT5+ model (e.g., `Salesforce/codet5p-220m`) to finetune from. To find the hyper-parameter setting that best suits your task, you can customize other finetuning arguments such as `--epochs`, `--lr`, `--lr-warmup-steps`, `--max-source-len`, `--max-target-len`, `--batch-size-per-replica`, `--grad-acc-steps`, etc.
This script naturally supports both single-GPU and multi-GPU training. If you have limited GPU memory and want to improve training throughput, consider specifying `--fp16` to enable mixed-precision training and using [DeepSpeed](https://github.com/microsoft/DeepSpeed) for further optimization by passing a DeepSpeed config file to `--deepspeed` (see [here](https://huggingface.co/docs/transformers/main_classes/deepspeed#zero2-example) for an example config file).

# Reproduce the Results

## HumanEval
Our CodeT5+ models achieve very strong results on the HumanEval benchmark in the zero-shot setting. Following common practice, we employ nucleus sampling with a different temperature `T` for each `Pass@k` metric (`T=0.2, 0.6, 0.8` for `k=1, 10, 100`, respectively).

| Model                    | Pass@1   | Pass@10  | Pass@100 |
|--------------------------|----------|----------|----------|
| LLaMA 7B                 | 10.5     | -        | 36.5     |
| LaMDA 137B               | 14.0     | -        | 47.3     |
| InCoder 6B               | 15.2     | 27.8     | 47.0     |
| GPT-NeoX 20B             | 15.4     | 25.6     | 41.2     |
| **CodeT5+ 770M**         | 15.5     | 27.2     | 42.7     |
| LLaMA 13B                | 15.8     | -        | 52.5     |
| PaLM 62B                 | 15.9     | -        | 46.3     |
| AlphaCode 1.1B           | 17.1     | 28.2     | 45.3     |
| LLaMA 33B                | 21.7     | -        | 70.7     |
| Replit 3B                | 21.9     | -        | -        |
| CodeGeeX 13B             | 22.9     | 39.6     | 60.9     |
| LLaMA 65B                | 23.7     | -        | 79.3     |
| PaLM 540B                | 26.2     | -        | 76.2     |
| CodeGen-mono 16B         | 29.3     | 49.9     | 75.0     |
| **CodeT5+ 16B**          | 30.9     | 51.6     | 76.7     |
| code-cushman-001         | 33.5     | 54.3     | 77.4     |
| StarCoder 15B            | 33.6     | -        | -        |
| **InstructCodeT5+ 16B**  | **36.1** | **57.1** | **80.7** |

Please follow the instructions below to reproduce the results.

---

### Installation
* Install the official HumanEval evaluation tool released by OpenAI following the instructions in this [repo](https://github.com/openai/human-eval).
* Install PyTorch (version `1.13.1`) and transformers (version `4.21.3`).

### Generating programs from CodeT5+ models
`cd humaneval` then run the inference via `bash run_generate.sh`.
You can select the model to generate from by changing the `model` variable in the script.
Following the original setting in the HumanEval paper, we generate 200 programs (`pred_num=200`) for each problem and employ nucleus sampling with a different temperature `T` for each `Pass@k` (`T=0.2, 0.6, 0.8` for `k=1, 10, 100`, respectively).
The generated programs will be saved in `preds/${model}_T${temp}_N${pred_num}`.

```bash
model=instructcodet5p-16b
temp=0.2
max_len=800
pred_num=200
num_seqs_per_iter=2  # 25 for 350M and 770M, 10 for 2B, 8 for 6B, 2 for 16B on A100-40G

output_path=preds/${model}_T${temp}_N${pred_num}

mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model

# 164 problems, 21 per GPU if GPU=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
  start_index=$((i * 21))
  end_index=$(((i + 1) * 21))

  gpu=$((i))
  echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
  ((index++))
  (
    CUDA_VISIBLE_DEVICES=$gpu python generate_codet5p.py --model Salesforce/${model} \
      --start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
      --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path}
  ) &
  if (($index % $gpu_num == 0)); then wait; fi
done
```

### Evaluating Pass@k
`cd humaneval` then run the evaluation via `bash run_eval.sh`.

```bash
output_path=preds/instructcodet5p-16b_T0.2_N200

echo 'Output path: '$output_path
python process_preds.py --path ${output_path} --out_path ${output_path}.jsonl

evaluate_functional_correctness ${output_path}.jsonl
```
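
For reference, `evaluate_functional_correctness` reports the unbiased `Pass@k` estimator from the HumanEval paper. The short sketch below, with made-up counts for a single problem, only illustrates how the metric combines the `n` generated samples and the `c` correct ones; the official tool computes this for you.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of Pass@k: 1 - C(n-c, k) / C(n, k), in a numerically stable form."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example with hypothetical counts: 37 correct programs out of n=200 samples for one problem.
print(pass_at_k(200, 37, 1), pass_at_k(200, 37, 10))
```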

Note that the reproduced results might be slightly different from the reported ones due to the randomness of the sampling process. We also release the model predictions of our [InstructCodeT5+ 16B](https://huggingface.co/Salesforce/instructcodet5p-16b) at `humaneval/instructcodet5p-16b_T0.2_N200.jsonl` for your reference.
They reproduce the `36.1%` Pass@1 result with the following command.

```bash
evaluate_functional_correctness humaneval/instructcodet5p-16b_T0.2_N200.jsonl
```

## Text-to-Code Retrieval
* Download and preprocess 3 text-to-code retrieval datasets (CSN in 6 PLs, AdvTest, CosQA) following the instructions in this [repo](https://github.com/microsoft/CodeBERT/tree/master/UniXcoder/downstream-tasks/code-search#data-download).
* `cd code_retrieval` then run the evaluation of our [CodeT5+ 110M embedding](https://huggingface.co/Salesforce/codet5p-110m-embedding) model via `bash run_retrieval.sh`.

```bash
# LANG choices: ruby javascript go python java php AdvTest cosqa
LANG=ruby
BS=256
CODE_LEN=360
TEXT_LEN=64
MODEL_NAME=Salesforce/codet5p-110m-embedding
DATA_DIR=/path/to/data

TRG_DIR=saved_models/${LANG}/codet5p_110m_embedding_TL${TEXT_LEN}_CL${CODE_LEN}
mkdir -p $TRG_DIR
echo 'Target dir: '$TRG_DIR

python eval_contrast_retrieval.py --model_name $MODEL_NAME --lang $LANG --output_dir $TRG_DIR \
  --data_dir $DATA_DIR --max_text_len $TEXT_LEN --max_code_len $CODE_LEN --batch_size $BS
```

* Run the evaluation of the [CodeT5+ 220M bimodal](https://huggingface.co/Salesforce/codet5p-220m-bimodal) model via `bash run_match_retrieval.sh`. This further boosts performance by activating the matching decoder to rerank the `top_k` candidates from the embedding model's contrastive retrieval. You can change the `top_k` value to control the number of candidates to rerank.

```bash
# LANG choices: ruby javascript go python java php AdvTest cosqa
LANG=ruby
BS=256
CODE_LEN=360
TEXT_LEN=64
TOPK=32
MODEL_NAME=Salesforce/codet5p-220m-bimodal
DATA_DIR=/path/to/data

TRG_DIR=saved_models/${LANG}/codet5p_220m_bimodal_TL${TEXT_LEN}_CL${CODE_LEN}_top${TOPK}
mkdir -p $TRG_DIR
echo 'Target dir: '$TRG_DIR

python eval_match_retrieval.py --model_name $MODEL_NAME --lang $LANG --output_dir $TRG_DIR \
  --data_dir $DATA_DIR --max_text_len $TEXT_LEN --max_code_len $CODE_LEN --batch_size $BS --top_k $TOPK
```

### Evaluation Results

The above scripts reproduce the results shown in the `CodeT5+ 110M embedding` and `CodeT5+ 220M matching` rows of the following table. The results show that the [CodeT5+ 220M bimodal](https://huggingface.co/Salesforce/codet5p-220m-bimodal) model achieves better performance than the embedding model by leveraging the fine-grained alignment between text and code through the matching decoder.
For UniXcoder's zero-shot results, we reproduce them following the official instructions [here](https://github.com/microsoft/CodeBERT/tree/master/UniXcoder/downstream-tasks/code-search#zero-shot-setting).

| Model                          | Ruby  | JavaScript | Go    | Python | Java  | PHP   | CSN_Avg | CosQA | AdvTest |
| ------------------------------ | ----- | ---------- | ----- | ------ | ----- | ----- | ------- | ----- | ------- |
| UniXcoder 125M                 | 57.6  | 44.2       | 64.8  | 44.7   | 46.6  | 37.3  | 49.20   | 43.1  | 29.9    |
| CodeT5+ 110M embedding         | 74.51 | 69.07      | 90.69 | 71.55  | 71.82 | 67.72 | 74.23   | 39.57 | 40.49   |
| CodeT5+ 220M matching (top 32) | 76.04 | 70.17      | 91.37 | 74.17  | 74.76 | 68.6  | 75.85   | 51.51 | 42.9    |

* Note that the multi-task results of CodeT5+ reported here differ from those in the paper, which are task-specific fine-tuned results.

# Citation

```bibtex
@article{wang2023codet5plus,
  title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
  author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
  journal={arXiv preprint},
  year={2023}
}
```