---
library_name: transformers
license: apache-2.0
base_model: aviola/checkpoints
tags:
- generated_from_trainer
datasets:
- temp_domino2
model-index:
- name: detrDominoTest
  results: []
---

# detrDominoTest

This model is a fine-tuned version of [aviola/checkpoints](https://huggingface.co/aviola/checkpoints) on the temp_domino2 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3907
- eval_map: 0.7678
- eval_map_50: 0.9735
- eval_map_75: 0.9265
- eval_map_small: 0.553
- eval_map_medium: 0.7932
- eval_map_large: 0.925
- eval_mar_1: 0.5446
- eval_mar_10: 0.8104
- eval_mar_100: 0.8114
- eval_mar_small: 0.5939
- eval_mar_medium: 0.8319
- eval_mar_large: 0.925
- eval_map_pip-1: 0.718
- eval_mar_100_pip-1: 0.7846
- eval_map_pip-2: 0.775
- eval_mar_100_pip-2: 0.795
- eval_map_pip-3: 0.8041
- eval_mar_100_pip-3: 0.8467
- eval_map_pip-4: 0.7733
- eval_mar_100_pip-4: 0.8
- eval_map_pip-5: 0.7476
- eval_mar_100_pip-5: 0.8097
- eval_map_pip-6: 0.6602
- eval_mar_100_pip-6: 0.7318
- eval_map_pip-7: 0.7531
- eval_mar_100_pip-7: 0.8115
- eval_map_pip-8: 0.7814
- eval_mar_100_pip-8: 0.8188
- eval_map_pip-9: 0.7831
- eval_mar_100_pip-9: 0.8067
- eval_map_pip-10: 0.8274
- eval_mar_100_pip-10: 0.8619
- eval_map_pip-11: 0.798
- eval_mar_100_pip-11: 0.8231
- eval_map_pip-12: 0.7927
- eval_mar_100_pip-12: 0.8474
- eval_runtime: 1.9309
- eval_samples_per_second: 15.019
- eval_steps_per_second: 2.072
- epoch: 1.0
- step: 39

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 100

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
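The per-class mAP/mAR metrics above come from DETR-style object detection, where the model emits, for every query, class logits (with a trailing "no object" class) plus a normalized center-x/center-y/width/height box. As a rough illustration of how such raw outputs become detections, here is a minimal, dependency-free sketch of that post-processing step. All tensor shapes, numbers, and the `post_process` helper itself are illustrative assumptions, not this model's actual API; with the real checkpoint you would instead rely on the image processor's built-in post-processing.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a flat list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cxcywh_to_xyxy(box, width, height):
    """Convert a normalized (cx, cy, w, h) box to absolute (x0, y0, x1, y1)."""
    cx, cy, w, h = box
    return [(cx - w / 2) * width, (cy - h / 2) * height,
            (cx + w / 2) * width, (cy + h / 2) * height]

def post_process(logits, boxes, width, height, threshold=0.5):
    """Illustrative DETR-style post-processing (not the library function):
    softmax each query's logits, drop the trailing 'no object' class, and
    keep queries whose best remaining class score clears the threshold."""
    detections = []
    for query_logits, box in zip(logits, boxes):
        probs = softmax(query_logits)[:-1]  # last index = 'no object'
        score = max(probs)
        if score >= threshold:
            detections.append({
                "label": probs.index(score),  # e.g. an index into pip-1..pip-12
                "score": score,
                "box": cxcywh_to_xyxy(box, width, height),
            })
    return detections

# Dummy outputs for two queries over two real classes + 'no object':
# query 0 is confidently class 0, query 1 is confidently 'no object'.
dets = post_process(
    logits=[[4.0, 0.0, 0.0], [0.0, 0.0, 4.0]],
    boxes=[[0.5, 0.5, 0.2, 0.2], [0.1, 0.1, 0.1, 0.1]],
    width=100, height=100,
)
# Only the first query survives, with box [40, 40, 60, 60].
```

In practice the equivalent step for a real checkpoint is handled by the image processor's `post_process_object_detection` method in Transformers, which also maps label indices back to class names such as `pip-1`.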