---
base_model:
  - microsoft/codebert-base
datasets:
  - devngho/the_stack_llm_annotations
language:
  - code
library_name: transformers
license: mit
metrics:
  - f1
---

devngho/code_edu_classifier_v2_microsoft_codebert-base

์ด ๋ชจ๋ธ์€ microsoft/codebert-base์— classifier๋ฅผ ์ถ”๊ฐ€ํ•œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. HuggingFaceFW/fineweb-edu-classifier์˜ ์ฝ”๋“œ ๋ฒ„์ „์„ ๋ชฉํ‘œ๋กœ, ์ฝ”๋“œ์˜ ๊ต์œก์„ฑ ์ ์ˆ˜๋ฅผ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ํ•™์Šต์—๋Š” bigcode/the-stack-dedup์—์„œ ์ถ”์ถœํ•œ ์ƒ˜ํ”Œ์„ Qwen/Qwen2.5-32B-Instruct๋กœ ํ‰๊ฐ€ํ•œ devngho/the_stack_llm_annotations ๋ฐ์ดํ„ฐ์…‹์ด ์‚ฌ์šฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). ⚡

์ƒ์„ธ

  • ์ œ์ž‘: devngho
  • ์–ธ์–ด: code
  • ๋ผ์ด์„ ์Šค: mit
  • ๊ธฐ๋ฐ˜ ๋ชจ๋ธ: microsoft/codebert-base

ํ•™์Šต ์ƒ์„ธ

  • learning_rate: 3e-4 (cosine)
  • warmup_ratio: 0.1
  • batch_size: 2048(512*4)
  • optimizer: adamw(b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01)
  • duration: 1h 36m

Training hardware

TPU v4-8

Performance

Validation Report:
              precision    recall  f1-score   support

           0       0.77      0.10      0.18       101
           1       0.57      0.47      0.51       739
           2       0.60      0.60      0.60      2409
           3       0.49      0.74      0.59      2030
           4       0.51      0.03      0.05       864
           5       0.00      0.00      0.00         1

    accuracy                           0.54      6144
   macro avg       0.49      0.32      0.32      6144
weighted avg       0.55      0.54      0.50      6144

Confusion Matrix:
[[  10   71   20    0    0    0]
 [   3  346  353   37    0    0]
 [   0  186 1450  770    3    0]
 [   0    9  509 1494   18    0]
 [   0    0   80  762   22    0]
 [   0    0    0    1    0    0]]

์ž„๋ฒ ๋”ฉ ๋ชจ๋ธ์ด ์ผ๋ถ€ ์–ธ์–ด๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š๋Š” ํ•œ๊ณ„์™€ qwen2.5 32b ๋ชจ๋ธ์˜ ํ‰๊ฐ€ ํ•œ๊ณ„๋กœ ์„ฑ๋Šฅ์ด ๋‚ฎ์€ ๊ฒƒ์œผ๋กœ ๋ณด์ž…๋‹ˆ๋‹ค. 3 ์ด์ƒ๊ณผ ๋ฏธ๋งŒ์œผ๋กœ ๊ตฌ๋ถ„ํ•  ๋•Œ f1 score๋Š” ์•ฝ 0.77์ž…๋‹ˆ๋‹ค.

devngho/code_edu_classifier_v2_microsoft_codebert-base

This model is microsoft/codebert-base with a classifier head. It is designed to evaluate the educational value of code, similar to HuggingFaceFW/fineweb-edu-classifier but focused on code. The training data comes from the devngho/the_stack_llm_annotations dataset, which contains samples extracted from bigcode/the-stack-dedup and evaluated using Qwen/Qwen2.5-32B-Instruct.
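The architecture described above can be sketched as follows. This is a minimal, hypothetical illustration: the CodeBERT encoder is stubbed with random hidden states so the snippet runs without downloading weights, and the single-logit regression head plus rounding to the 0-5 scale is an assumption (following the fineweb-edu-classifier design), not a confirmed description of this model's head.

```python
import torch
import torch.nn as nn

# Stand-in for CodeBERT encoder output: (batch, seq_len, hidden_size).
# microsoft/codebert-base uses a hidden size of 768.
hidden_size, seq_len = 768, 16
hidden_states = torch.randn(1, seq_len, hidden_size)

# Assumed regression-style scoring head: one logit read from the
# [CLS] position, then clamped and rounded to the 0-5 score scale.
head = nn.Linear(hidden_size, 1)
score = head(hidden_states[:, 0]).item()
label = int(min(max(round(score), 0), 5))
```

With the real model, the encoder output would come from the pretrained checkpoint rather than `torch.randn`.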

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). ⚡

Training details

  • learning_rate: 3e-4 (cosine)
  • warmup_ratio: 0.1
  • batch_size: 2048(512*4)
  • optimizer: adamw(b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01)
  • duration: 3h 21m
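The learning-rate settings above (peak 3e-4, cosine decay, warmup ratio 0.1) can be sketched as a schedule function. This is an illustrative reconstruction, not the training code; the total step count below is a placeholder.

```python
import math

def lr_at(step, total_steps, peak_lr=3e-4, warmup_ratio=0.1):
    """Cosine decay with linear warmup, matching the settings above."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear ramp from 0 to peak_lr over the warmup phase.
        return peak_lr * step / warmup_steps
    # Cosine decay from peak_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))

total = 1000  # placeholder step count
print(lr_at(100, total))   # end of warmup: peak 3e-4
print(lr_at(1000, total))  # end of training: decays to 0.0
```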

Training hardware

TPU v4-8

Performance

Validation Report:
              precision    recall  f1-score   support

           0       0.77      0.10      0.18       101
           1       0.57      0.47      0.51       739
           2       0.60      0.60      0.60      2409
           3       0.49      0.74      0.59      2030
           4       0.51      0.03      0.05       864
           5       0.00      0.00      0.00         1

    accuracy                           0.54      6144
   macro avg       0.49      0.32      0.32      6144
weighted avg       0.55      0.54      0.50      6144

Confusion Matrix:
[[  10   71   20    0    0    0]
 [   3  346  353   37    0    0]
 [   0  186 1450  770    3    0]
 [   0    9  509 1494   18    0]
 [   0    0   80  762   22    0]
 [   0    0    0    1    0    0]]

The low performance is likely due to the limitations of the embedding model, which does not support all languages, and to the evaluation limitations of the Qwen2.5 32B model. The F1 score is about 0.77 when separating scores above and below 3.
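The binarized F1 figure can be checked directly against the confusion matrix above (rows are true labels, columns are predictions), treating scores of 3 and above as the positive class:

```python
# Confusion matrix from the validation report above.
cm = [
    [10,  71,   20,    0,  0, 0],
    [ 3, 346,  353,   37,  0, 0],
    [ 0, 186, 1450,  770,  3, 0],
    [ 0,   9,  509, 1494, 18, 0],
    [ 0,   0,   80,  762, 22, 0],
    [ 0,   0,    0,    1,  0, 0],
]

# Binarize: positive = score >= 3 (classes 3, 4, 5).
tp = sum(cm[i][j] for i in range(3, 6) for j in range(3, 6))  # true >=3, pred >=3
fp = sum(cm[i][j] for i in range(0, 3) for j in range(3, 6))  # true <3,  pred >=3
fn = sum(cm[i][j] for i in range(3, 6) for j in range(0, 3))  # true >=3, pred <3

f1 = 2 * tp / (2 * tp + fp + fn)
print(round(f1, 2))  # 0.77
```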