---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: bert_reviews
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: amazon_reviews_multi
      type: amazon_reviews_multi
      config: en
      split: test
      args: en
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6062
---

# bert_reviews

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9204
- Accuracy: 0.6062

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 20000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8812        | 0.04  | 1000  | 0.9970          | 0.5738   |
| 0.8495        | 0.08  | 2000  | 1.0120          | 0.569    |
| 0.8067        | 0.12  | 3000  | 1.0442          | 0.5766   |
| 0.7934        | 0.16  | 4000  | 1.0629          | 0.5772   |
| 0.7845        | 0.2   | 5000  | 1.0236          | 0.5876   |
| 0.9033        | 0.24  | 6000  | 0.9822          | 0.5774   |
| 0.8993        | 0.28  | 7000  | 0.9693          | 0.5816   |
| 0.9012        | 0.32  | 8000  | 1.0075          | 0.5738   |
| 0.873         | 0.36  | 9000  | 0.9663          | 0.5886   |
| 0.9376        | 0.4   | 10000 | 0.9447          | 0.5816   |
| 0.9398        | 0.44  | 11000 | 0.9509          | 0.5802   |
| 0.9402        | 0.48  | 12000 | 0.9561          | 0.5916   |
| 0.9247        | 0.52  | 13000 | 0.9303          | 0.6008   |
| 0.9247        | 0.56  | 14000 | 0.9241          | 0.5998   |
| 0.9192        | 0.6   | 15000 | 0.9276          | 0.6104   |
| 0.907         | 0.64  | 16000 | 0.9251          | 0.603    |
| 0.9177        | 0.68  | 17000 | 0.9198          | 0.6056   |
| 0.9129        | 0.72  | 18000 | 0.9167          | 0.6078   |
| 0.8948        | 0.76  | 19000 | 0.9213          | 0.604    |
| 0.906         | 0.8   | 20000 | 0.9204          | 0.6062   |

### Framework versions

- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
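The hyperparameters listed above map roughly onto a `transformers.TrainingArguments` configuration. This is a sketch, not the exact training script: `output_dir` is assumed, the evaluation cadence of 1000 steps is inferred from the results table, and the Adam betas/epsilon shown above are the library defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Sketch reconstructing the training configuration from the card.
# output_dir and the eval cadence are assumptions, not stated facts.
args = TrainingArguments(
    output_dir="bert_reviews",        # assumed
    learning_rate=1.2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=20000,                  # "training_steps: 20000"
    evaluation_strategy="steps",      # table shows eval every 1000 steps
    eval_steps=1000,
)
```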
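At inference time the fine-tuned classification head emits one logit per review class, and the predicted class is the argmax of the softmax over those logits. Below is a minimal, dependency-free sketch of that post-processing; the five star-rating label names are hypothetical, since this card does not list the model's label mapping.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits, labels):
    """Return the highest-probability label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical label set; the actual id2label mapping lives in the
# checkpoint's config.json.
STAR_LABELS = ["1 star", "2 stars", "3 stars", "4 stars", "5 stars"]
```

In practice the logits would come from running the checkpoint itself, e.g. loading it with `AutoTokenizer.from_pretrained` and `AutoModelForSequenceClassification.from_pretrained` and feeding the tokenized review through the model.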