---
license: apache-2.0
datasets:
- open-r1/codeforces-cots
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
---

# Model Card for OlympicCoder-7B

OlympicCoder-7B is a code model that achieves strong performance on competitive coding benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics.

## Model description

- **Model type:** A 7B parameter model fine-tuned on a decontaminated version of the Codeforces dataset ([open-r1/codeforces-cots](https://huggingface.co/datasets/open-r1/codeforces-cots)); see the loading sketch after this list.
- **Language(s) (NLP):** Primarily English
- **License:** apache-2.0
- **Finetuned from model:** [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)
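
The training data is available on the Hugging Face Hub. Below is a minimal sketch of inspecting it with the 🤗 Datasets library; the default config/split shown here is an assumption, so check the dataset card if it differs:

```python
# pip install datasets
from datasets import load_dataset

# Load the Codeforces CoTs dataset referenced above.
# Assumption: the default config is loadable as-is; the dataset may expose
# several configs/splits, so consult the dataset card if this call fails.
ds = load_dataset("open-r1/codeforces-cots")
print(ds)  # shows available splits, row counts, and column names
```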

## Evaluation

![Evaluation results on IOI 2024](./ioi-evals.png)



## Usage
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# pip install transformers
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="open-r1/OlympicCoder-7B", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=8000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
#<|im_start|>user
#Write a python program to calculate the 10th Fibonacci number<|im_end|>
#<|im_start|>assistant
#<think>Okay, I need to write a Python program that calculates the 10th Fibonacci number. Hmm, the Fibonacci sequence starts with 0 and 1. Each subsequent number is the sum of the two preceding ones. So the sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. ...
```
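
If you need finer control over generation than `pipeline()` offers, here is an equivalent sketch using `AutoTokenizer` and `AutoModelForCausalLM` directly, with the same sampling settings as above (this variant is an illustration, not part of the original card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-r1/OlympicCoder-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]
# Apply the chat template, then generate with the same sampling settings as the pipeline example.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(
    input_ids, max_new_tokens=8000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=False))
```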


## Training procedure
### Training hyperparameters

The following hyperparameters were used during training:

- dataset: open-r1/codeforces-cots
- learning_rate: 4.0e-5
- train_batch_size: 2
- seed: 42
- packing: false
- distributed_type: deepspeed-zero-3
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_min_lr
- min_lr_rate: 0.1
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10.0
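
The training script itself is not included in this card. As a rough, hypothetical illustration only, the hyperparameters above map onto a TRL `SFTConfig` roughly as follows (field names assume a recent TRL/Transformers version; the authors' actual configuration may differ):

```python
# pip install trl
from trl import SFTConfig

# Hypothetical mapping of the listed hyperparameters onto TRL's SFTConfig.
# This is an illustration, not the authors' actual training configuration.
# DeepSpeed ZeRO-3 would be supplied via a separate config file (not shown).
training_args = SFTConfig(
    output_dir="olympiccoder-7b-sft",        # placeholder output path
    learning_rate=4.0e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=10.0,
    lr_scheduler_type="cosine_with_min_lr",
    lr_scheduler_kwargs={"min_lr_rate": 0.1},
    warmup_ratio=0.03,
    packing=False,
    seed=42,
    bf16=True,                                # assumption: bf16 mixed precision
)
```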