# This model generates solutions to [LeetCode](https://leetcode.com) problems

## The training data: [khaimaitien/leetcode_problem_solution](https://huggingface.co/datasets/khaimaitien/leetcode_problem_solution)
## The base model: [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf)
You can find more information at: https://github.com/khaimt/coding_challenge_solver

The prompt template is:
```python
prompt_str = (
    "[INST] Write code to solve the following coding problem that obeys "
    "the constraints and passes the example test cases. "
    f"Please wrap your code answer using ```:\n{input}\n[/INST]```python\n"
)
```
where `input` is the LeetCode problem statement. An example problem file is available at: https://github.com/khaimt/coding_challenge_solver/blob/main/test_cases/problem1.txt
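
For illustration, `input` can be read from a problem file in the same format as the one linked above (the file name below is only a placeholder):

```python
# Read a LeetCode problem statement from a text file.
# "problem1.txt" is a placeholder; any file in the linked format works.
# Note: the name `input` shadows the Python builtin, but it matches
# the variable used in the prompt template above.
with open("problem1.txt", "r", encoding="utf-8") as f:
    input = f.read()
```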

**Example for inference:**

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "..."  # local path or Hugging Face ID of this model
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype=torch.bfloat16)

# `input` holds the LeetCode problem statement (see above)
prompt_str = (
    "[INST] Write code to solve the following coding problem that obeys "
    "the constraints and passes the example test cases. "
    f"Please wrap your code answer using ```:\n{input}\n[/INST]```python\n"
)

token_ids = tokenizer([prompt_str], return_tensors="pt")["input_ids"]
token_ids = token_ids.to(model.device)
# do_sample with a near-zero temperature is effectively greedy decoding
outputs = model.generate(input_ids=token_ids, max_new_tokens=1024, do_sample=True, temperature=0.0001)
all_token_ids = outputs[0].tolist()
# keep only the newly generated tokens, dropping the echoed prompt
output_token_ids = all_token_ids[token_ids.shape[-1]:]
output = tokenizer.decode(output_token_ids)
print("-------------Solution generated from Model---------")
print(output)
```
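
Because the prompt ends inside an opened Python code block, the generation is usually the solution code followed by a closing code fence. The helper below is a minimal sketch for stripping that fence; it is not part of the original repository:

```python
def extract_solution(output: str) -> str:
    """Return only the code portion of the model output.

    The prompt ends inside an opened Python code block, so the
    generation is the solution followed by a closing fence.
    """
    return output.split("```")[0].strip()

solution_code = extract_solution(output)
print(solution_code)
```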