Update README.md
README.md CHANGED
```diff
@@ -19,7 +19,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # edu-modernbert
 
-This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on
+This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the [HuggingFaceFW/fineweb-edu-llama3-annotations](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-llama3-annotations) dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.2453
 - Precision: 0.5901
@@ -31,17 +31,26 @@ It achieves the following results on the evaluation set:
 - Binary F1: 0.7455
 - Binary Accuracy: 0.9578
 
-
+<div class="alert alert-info">
+<b>Note:</b> the binary classification score is calculated by thresholding at 3, i.e. (0-2 -> 0, 3-5 -> 1).
+</div>
 
-
+In comparison, the reproduced version of [HuggingFaceFW/fineweb-edu-classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) achieves:
 
-
+- Loss: 0.2475
+- Precision: 0.5595
+- Recall: 0.4360
+- F1: 0.4704
+- Accuracy: 0.7123
+- Binary Precision: 0.7781
+- Binary Recall: 0.5566
+- Binary F1: 0.6490
+- Binary Accuracy: 0.9457
 
-
+<div class="alert alert-info">
+<b>Note:</b> one difference is that ModernBERT-base is fully trained here, while the original classifier trains only the regression head.
+</div>
 
-## Training and evaluation data
-
-More information needed
 
 ## Training procedure
 
@@ -52,48 +61,9 @@ The following hyperparameters were used during training:
 - train_batch_size: 256
 - eval_batch_size: 256
 - seed: 0
-- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08
+- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 20
-
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Binary Precision | Binary Recall | Binary F1 | Binary Accuracy |
-|:-------------:|:-------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|:----------------:|:-------------:|:---------:|:---------------:|
-| No log | 0 | 0 | 1.3562 | 0.1531 | 0.1831 | 0.1293 | 0.3513 | 0.0 | 0.0 | 0.0 | 0.9098 |
-| 0.2178 | 0.6083 | 1000 | 0.2385 | 0.5543 | 0.5114 | 0.5201 | 0.7182 | 0.6724 | 0.7846 | 0.7242 | 0.9461 |
-| 0.1729 | 1.2165 | 2000 | 0.2026 | 0.5721 | 0.5284 | 0.5458 | 0.7510 | 0.7451 | 0.7400 | 0.7425 | 0.9537 |
-| 0.1701 | 1.8248 | 3000 | 0.1970 | 0.5922 | 0.5389 | 0.5590 | 0.7615 | 0.7447 | 0.7661 | 0.7552 | 0.9552 |
-| 0.1138 | 2.4331 | 4000 | 0.1992 | 0.5979 | 0.5151 | 0.5475 | 0.7580 | 0.8050 | 0.6728 | 0.7330 | 0.9558 |
-| 0.0759 | 3.0414 | 5000 | 0.2125 | 0.5730 | 0.5204 | 0.5433 | 0.7488 | 0.8133 | 0.6550 | 0.7256 | 0.9553 |
-| 0.0572 | 3.6496 | 6000 | 0.2168 | 0.5948 | 0.5175 | 0.5437 | 0.7510 | 0.7988 | 0.6847 | 0.7374 | 0.9560 |
-| 0.0277 | 4.2579 | 7000 | 0.2286 | 0.5815 | 0.5236 | 0.5466 | 0.7486 | 0.7682 | 0.7139 | 0.7400 | 0.9548 |
-| 0.0314 | 4.8662 | 8000 | 0.2313 | 0.5781 | 0.5237 | 0.5456 | 0.7431 | 0.7753 | 0.7072 | 0.7397 | 0.9551 |
-| 0.024 | 5.4745 | 9000 | 0.2408 | 0.5485 | 0.5455 | 0.5463 | 0.7401 | 0.7520 | 0.7338 | 0.7428 | 0.9542 |
-| 0.0169 | 6.0827 | 10000 | 0.2348 | 0.5943 | 0.5044 | 0.5378 | 0.7483 | 0.8372 | 0.6211 | 0.7132 | 0.9549 |
-| 0.0173 | 6.6910 | 11000 | 0.2404 | 0.5794 | 0.5223 | 0.5449 | 0.7444 | 0.7749 | 0.7065 | 0.7391 | 0.9550 |
-| 0.0151 | 7.2993 | 12000 | 0.2393 | 0.5697 | 0.5361 | 0.5509 | 0.7453 | 0.7643 | 0.7293 | 0.7464 | 0.9553 |
-| 0.0152 | 7.9075 | 13000 | 0.2420 | 0.5707 | 0.5373 | 0.5525 | 0.7456 | 0.7893 | 0.7120 | 0.7487 | 0.9569 |
-| 0.0131 | 8.5158 | 14000 | 0.2394 | 0.5840 | 0.5277 | 0.5495 | 0.7478 | 0.7931 | 0.6994 | 0.7433 | 0.9564 |
-| 0.0097 | 9.1241 | 15000 | 0.2434 | 0.5814 | 0.5248 | 0.5485 | 0.7468 | 0.8120 | 0.6700 | 0.7342 | 0.9563 |
-| 0.0105 | 9.7324 | 16000 | 0.2426 | 0.5694 | 0.5363 | 0.5512 | 0.7472 | 0.7750 | 0.7165 | 0.7446 | 0.9557 |
-| 0.0081 | 10.3406 | 17000 | 0.2499 | 0.5798 | 0.5229 | 0.5440 | 0.7416 | 0.7973 | 0.6849 | 0.7369 | 0.9559 |
-| 0.0086 | 10.9489 | 18000 | 0.2407 | 0.5846 | 0.5293 | 0.5533 | 0.7492 | 0.8078 | 0.6852 | 0.7415 | 0.9569 |
-| 0.0069 | 11.5572 | 19000 | 0.2438 | 0.5996 | 0.5126 | 0.5426 | 0.7502 | 0.8115 | 0.6712 | 0.7347 | 0.9563 |
-| 0.0056 | 12.1655 | 20000 | 0.2428 | 0.5925 | 0.5180 | 0.5459 | 0.7506 | 0.8093 | 0.6804 | 0.7393 | 0.9567 |
-| 0.0066 | 12.7737 | 21000 | 0.2439 | 0.5878 | 0.5148 | 0.5423 | 0.7490 | 0.8119 | 0.6759 | 0.7377 | 0.9567 |
-| 0.0052 | 13.3820 | 22000 | 0.2427 | 0.5921 | 0.5142 | 0.5428 | 0.7511 | 0.8332 | 0.6541 | 0.7329 | 0.9570 |
-| 0.0054 | 13.9903 | 23000 | 0.2469 | 0.5949 | 0.5028 | 0.5358 | 0.7479 | 0.8373 | 0.6458 | 0.7292 | 0.9567 |
-| 0.0045 | 14.5985 | 24000 | 0.2437 | 0.5872 | 0.5303 | 0.5538 | 0.7531 | 0.7904 | 0.7096 | 0.7478 | 0.9568 |
-| 0.0033 | 15.2068 | 25000 | 0.2451 | 0.5862 | 0.5248 | 0.5500 | 0.7492 | 0.8234 | 0.6769 | 0.7430 | 0.9578 |
-| 0.0037 | 15.8151 | 26000 | 0.2466 | 0.5835 | 0.5302 | 0.5525 | 0.7497 | 0.8003 | 0.6987 | 0.7460 | 0.9571 |
-| 0.0032 | 16.4234 | 27000 | 0.2429 | 0.5929 | 0.5301 | 0.5555 | 0.7557 | 0.8043 | 0.7049 | 0.7513 | 0.9579 |
-| 0.0026 | 17.0316 | 28000 | 0.2461 | 0.5870 | 0.5253 | 0.5497 | 0.7506 | 0.8031 | 0.6956 | 0.7455 | 0.9572 |
-| 0.0023 | 17.6399 | 29000 | 0.2452 | 0.5880 | 0.5255 | 0.5499 | 0.7518 | 0.8027 | 0.6968 | 0.7460 | 0.9572 |
-| 0.0016 | 18.2482 | 30000 | 0.2461 | 0.5892 | 0.5204 | 0.5477 | 0.7509 | 0.8238 | 0.6743 | 0.7416 | 0.9576 |
-| 0.0019 | 18.8564 | 31000 | 0.2458 | 0.5898 | 0.5194 | 0.5466 | 0.7505 | 0.8179 | 0.6790 | 0.7420 | 0.9574 |
-| 0.0015 | 19.4647 | 32000 | 0.2453 | 0.5901 | 0.5245 | 0.5504 | 0.7508 | 0.8168 | 0.6856 | 0.7455 | 0.9578 |
-
+- num_epochs: 20 (far more than needed; 3 epochs already achieve great results)
 
 ### Framework versions
 
```
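
The note added in the diff derives the binary metrics by collapsing the fine-grained 0-5 educational-value score at a threshold of 3 (0-2 -> 0, 3-5 -> 1). A minimal sketch of that mapping and of one of the binary metrics (the `to_binary` and `binary_f1` helper names are illustrative, not from the repository):

```python
def to_binary(score: int, threshold: int = 3) -> int:
    """Collapse a 0-5 educational-value score to a binary label (0-2 -> 0, 3-5 -> 1)."""
    return 1 if score >= threshold else 0

def binary_f1(preds, labels) -> float:
    """F1 over already-thresholded binary predictions and labels."""
    tp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 1)
    fp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 0)
    fn = sum(1 for p, l in zip(preds, labels) if p == 0 and l == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print([to_binary(s) for s in range(6)])  # [0, 0, 0, 1, 1, 1]
```

The other binary metrics in the card (precision, recall, accuracy) follow the same pattern from the thresholded labels.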
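
The hyperparameters name AdamW with betas=(0.9, 0.999) and epsilon=1e-08. As a sketch, a single decoupled-weight-decay update with those constants looks like the following (the `adamw_step` helper and the `lr`/`wd` values are placeholders for illustration, not values from the training config):

```python
def adamw_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=0.0):
    """One AdamW update for scalar parameter p with gradient g at step t (1-based)."""
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    p = p - lr * (m_hat / (v_hat ** 0.5 + eps) + wd * p)  # decoupled weight decay
    return p, m, v

p, m, v = adamw_step(1.0, 1.0, 0.0, 0.0, t=1)
print(p, m, v)
```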
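
The hyperparameters also list `lr_scheduler_type: linear`. A sketch of that schedule, linear warmup followed by linear decay to zero (the `linear_schedule` helper and `base_lr` default are illustrative; the actual learning rate and warmup setting are not shown in this diff):

```python
def linear_schedule(step: int, total_steps: int, warmup_steps: int = 0, base_lr: float = 1.0) -> float:
    """Learning rate at a given step: ramp up over warmup_steps, then decay linearly to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))

print(linear_schedule(0, 100))    # 1.0
print(linear_schedule(50, 100))   # 0.5
print(linear_schedule(100, 100))  # 0.0
```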