Update README.md
This model is fine-tuned using most of the APPS dataset, including both train and test splits.
The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py).

Training is done for 5 epochs using the AdamW optimizer and a linear-decay learning-rate schedule with 800 warmup steps. To reproduce the training, one can use this command with the above script:

```bash
python run_clm_apps.py \
    --output_dir $HOME/gpt-neo-125M-apps \
    --model_name_or_path EleutherAI/gpt-neo-125M \
    --dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \
    --dataset_config_name formatted \
    --do_train --do_eval \
    --block_size="1024" \
```
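The schedule described above (linear warmup for 800 steps, then linear decay) can be sketched as a small standalone function. Only the 800 warmup steps come from this README; the peak learning rate and total step count below are illustrative assumptions:

```python
def lr_at_step(step, base_lr=5e-5, warmup_steps=800, total_steps=10_000):
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0.

    base_lr and total_steps are illustrative placeholders; the 800
    warmup steps are the value stated in this README.
    """
    if step < warmup_steps:
        # Warmup: scale the learning rate linearly from 0 up to base_lr.
        return base_lr * step / warmup_steps
    # Decay: scale linearly from base_lr (end of warmup) down to 0 at total_steps.
    remaining = max(total_steps - step, 0)
    return base_lr * remaining / (total_steps - warmup_steps)


print(lr_at_step(400))     # halfway through warmup
print(lr_at_step(800))     # peak learning rate
print(lr_at_step(10_000))  # fully decayed
```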