# Improving Black-box Robustness with In-Context Rewriting
This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. Per-epoch evaluation results are reported in the training results table below.
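The card does not include a usage snippet. As a minimal sketch, the fine-tuned checkpoint can be loaded like any BERT sequence-classification model; the repository ID below is a placeholder, since the card does not state the model's actual name, and the label names depend on how the model was trained.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Placeholder repository ID; substitute the actual repo name for this card.
model_id = "your-org/your-finetuned-bert"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Simple text-classification pipeline over the fine-tuned checkpoint.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("An example input sentence to classify."))
```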
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
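The hyperparameter list itself is missing from this version of the card. Purely as an illustrative sketch, and not the configuration actually used, a fine-tuning setup that evaluates once per epoch and reports the F1 and accuracy columns shown in the results table might look like the following; the dataset, number of labels, averaging mode, and all numeric values are assumptions.

```python
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "google-bert/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_model)
# num_labels=2 is an assumption; the card does not say how many classes the task has.
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# Placeholder dataset; the card does not identify the training data.
dataset = load_dataset("glue", "sst2")
tokenized = dataset.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)

def compute_metrics(eval_pred):
    # Report F1 (macro averaging assumed) and accuracy, matching the table's columns.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds, average="macro"),
            "accuracy": accuracy_score(labels, preds)}

args = TrainingArguments(
    output_dir="bert-finetuned",
    num_train_epochs=8,            # matches the epoch count in the table; other values are guesses
    evaluation_strategy="epoch",   # evaluate at the end of every epoch
    learning_rate=2e-5,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```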
### Training results

| Training Loss | Epoch | Step | F1     | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:------:|:--------:|:---------------:|
| No log        | 1.0   | 94   | 0.4638 | 0.6355   | 1.0296          |
| No log        | 2.0   | 188  | 0.6014 | 0.8005   | 0.5842          |
| No log        | 3.0   | 282  | 0.6925 | 0.8577   | 0.3928          |
| No log        | 4.0   | 376  | 0.6529 | 0.7895   | 0.6497          |
| No log        | 5.0   | 470  | 0.6965 | 0.8595   | 0.5122          |
| 0.5499        | 6.0   | 564  | 0.6758 | 0.8256   | 0.7653          |
| 0.5499        | 7.0   | 658  | 0.6720 | 0.8277   | 0.8562          |
| 0.5499        | 8.0   | 752  | 0.6639 | 0.8128   | 0.9879          |
Base model: [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)