---
license: mit
base_model: roberta-base
tags:
- generated_from_keras_callback
model-index:
- name: jmassot/masked-lm-tpu
  results: []
---

# jmassot/masked-lm-tpu

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 9.8173
- Train Accuracy: 0.0164
- Validation Loss: 9.6999
- Validation Accuracy: 0.0210
- Epoch: 9

## Model description

More information needed

## Intended uses & limitations

More information needed (a hedged usage sketch appears at the end of this card)

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch reconstructing this optimizer follows the framework versions below):
- optimizer: AdamWeightDecay
  - learning_rate: WarmUp over 1175 steps to an initial rate of 0.0001, then PolynomialDecay (power 1.0, i.e. linear) to an end rate of 0.0 over 22325 decay steps, no cycling
  - decay: 0.0
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-08
  - amsgrad: False
  - weight_decay_rate: 0.001
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 10.2539    | 0.0000         | 10.2414         | 0.0000              | 0     |
| 10.2396    | 0.0000         | 10.2294         | 0.0000              | 1     |
| 10.2338    | 0.0000         | 10.2031         | 0.0000              | 2     |
| 10.2003    | 0.0000         | 10.1587         | 0.0000              | 3     |
| 10.1691    | 0.0000         | 10.1081         | 0.0000              | 4     |
| 10.1135    | 0.0000         | 10.0415         | 0.0001              | 5     |
| 10.0630    | 0.0001         | 9.9697          | 0.0013              | 6     |
| 9.9906     | 0.0011         | 9.8881          | 0.0097              | 7     |
| 9.9059     | 0.0070         | 9.7998          | 0.0183              | 8     |
| 9.8173     | 0.0164         | 9.6999          | 0.0210              | 9     |

### Framework versions

- Transformers 4.35.0
- TensorFlow 2.12.0
- Tokenizers 0.14.1
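
### Recreating the optimizer (sketch)

The optimizer dictionary above matches the objects that `transformers.create_optimizer` builds for TensorFlow: an `AdamWeightDecay` optimizer wrapping a `WarmUp` schedule around a `PolynomialDecay`. Assuming that helper was used (the card does not confirm how the optimizer was constructed), a minimal sketch of an equivalent setup:

```python
# Sketch only: rebuilds the reported schedule with transformers'
# TF optimization helper. Values are copied from the card; the
# model/dataset wiring is omitted.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=1e-4,              # initial_learning_rate
    num_train_steps=22325,     # decay_steps of the PolynomialDecay
    num_warmup_steps=1175,     # warmup_steps of the WarmUp wrapper
    weight_decay_rate=0.001,   # AdamWeightDecay weight decay
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    power=1.0,                 # power 1.0 makes the decay linear
)
```

With `power=1.0` the decay is linear, so the learning rate rises to 1e-4 over the first 1175 steps and then falls linearly to 0.0 by step 22325.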
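
## How to use (sketch)

A minimal sketch of loading this checkpoint for fill-mask inference, assuming the standard TF masked-LM classes from `transformers` and the repo id from the card title. Given the accuracies reported above, treat this as a pipeline check rather than a usable language model.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

repo_id = "jmassot/masked-lm-tpu"  # repo id taken from the card title
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForMaskedLM.from_pretrained(repo_id)

text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="tf")
logits = model(**inputs).logits

# Locate the mask position and take the highest-scoring token there.
mask_pos = tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0, 0]
predicted_id = int(tf.argmax(logits[0, mask_pos]))
print(tokenizer.decode([predicted_id]))
```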