Improving Black-box Robustness with In-Context Rewriting
This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. It achieves the results reported in the per-epoch training table below on the evaluation set.

Model description, intended uses & limitations, and training and evaluation data: more information needed.
The hyperparameters used during training are not recorded on this card. The per-epoch training results on the evaluation set were:
| Training Loss | Epoch | Step | F1     | Acc    | Validation Loss |
|---------------|-------|------|--------|--------|-----------------|
| No log        | 1.0   | 375  | 0.6519 | 0.8092 | 0.5142          |
| 0.8417        | 2.0   | 750  | 0.6644 | 0.8447 | 0.4261          |
| 0.4234        | 3.0   | 1125 | 0.7150 | 0.8582 | 0.4002          |
| 0.234         | 4.0   | 1500 | 0.7360 | 0.8883 | 0.4355          |
| 0.234         | 5.0   | 1875 | 0.7408 | 0.8848 | 0.5524          |
| 0.1203        | 6.0   | 2250 | 0.7118 | 0.8484 | 0.8452          |
| 0.0738        | 7.0   | 2625 | 0.7201 | 0.8680 | 0.8452          |
| 0.0416        | 8.0   | 3000 | 0.6808 | 0.8249 | 1.1523          |
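The card does not say how the F1 and Acc columns were produced. Below is a minimal sketch of a metrics function of the kind typically passed to the Hugging Face `Trainer` when fine-tuning BERT for classification; the macro F1 average and the scikit-learn dependency are assumptions, not details taken from this card.

```python
# Hypothetical sketch: how per-epoch F1/accuracy values like those in the
# table above are commonly computed when fine-tuning with the transformers
# Trainer. The macro F1 average and scikit-learn usage are assumptions.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Convert raw logits to class predictions and score them."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "acc": accuracy_score(labels, preds),
    }
```

A function like this is passed to `Trainer` via its `compute_metrics` argument, and the returned keys are then reported alongside the validation loss at each evaluation step.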
Base model: google-bert/bert-base-uncased
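A minimal usage sketch for loading a checkpoint like this one for sequence classification with the `transformers` library; the repository ID below is a placeholder, since the card does not state it.

```python
# Hypothetical usage sketch: load the fine-tuned classifier and run a single
# prediction. "your-org/your-model-id" is a placeholder, not the real repo name.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-org/your-model-id"  # replace with the actual repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Example input text.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```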