SantaCoder finetuned on Shadertoys-fine for 1000 steps with a batch size of 2 and a full sequence length of 2048. The finetuning script was adapted from the one found here.

Try the model in the ShaderCoder demo space.
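Alternatively, here is a minimal sketch of running the model locally with `transformers`. The checkpoint name is the one published for this card; SantaCoder ships custom model code, so `trust_remote_code=True` is required. The GLSL prompt is a hypothetical example for illustration:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "Vipitis/santacoder-finetuned-Shadertoys-fine"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Complete a Shadertoy-style entry point (hypothetical prompt)
prompt = "void mainImage( out vec4 fragColor, in vec2 fragCoord )\n{"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0]))
```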

Finetuning parameters

python3 train.py --model_path "bigcode/santacoder" \
--dataset_name "Vipitis/Shadertoys-fine" \
--data_column "code" \
--split "train" \
--seq_length 2048 \
--max_steps 1000 \
--batch_size 2 \
--gradient_accumulation_steps 4 \
--learning_rate 5e-5 \
--num_warmup_steps 100 \
--eval_freq 100 \
--save_freq 100 \
--log_freq 1 \
--output_dir "checkpoint_dir" \
--no_fp16
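For reference, the flags above imply the following effective batch size and token budget (simple arithmetic from the hyperparameters, not taken from the training logs):

```python
# Throughput implied by the finetuning flags above
batch_size = 2
gradient_accumulation_steps = 4
seq_length = 2048
max_steps = 1000

effective_batch = batch_size * gradient_accumulation_steps  # sequences per optimizer step
tokens_per_step = effective_batch * seq_length              # tokens per optimizer step
total_tokens = tokens_per_step * max_steps                  # tokens seen over the whole run

print(effective_batch, tokens_per_step, total_tokens)  # 8 16384 16384000
```

So the run covers roughly 16.4M training tokens in total.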

The main purpose of this model is to explore whether finetuning improves performance on ShaderEval. This model reached 0.567 with 300 samples and 0.59749 on all samples.

Disclaimer

While the train/test split is held out, there is substantial data contamination, so the model's results on this simple benchmark can't be trusted. Better tasks for the benchmark will be developed and tested against these models.

The license is carried over from the base model; however, the training data has an undefined license. Check the Shadertoys dataset for details.

Model size: 1.23B params (Safetensors; tensor types F32 and U8)

Base model: bigcode/santacoder
