
dvitel/h2

This model is a fine-tuned version of distilgpt2 on the hearthstone dataset (training code is in the accompanying GitHub repo). It achieves the following results on the evaluation set:

  • Loss: 2.5771
  • Exact Match: 0.0
  • Bleu: 0.6619
  • Codebleu: 0.5374
  • Ngram Match Score: 0.4051
  • Weighted Ngram Match Score: 0.4298
  • Syntax Match Score: 0.5605
  • Dataflow Match Score: 0.7541
  • Chrf: 73.9625
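
A minimal usage sketch, assuming the checkpoint is published on the Hub under the id dvitel/h2; the prompt contents and generation settings below are illustrative assumptions, not confirmed by this card:

```python
# Sketch: load the fine-tuned checkpoint and generate code for a card.
# Assumes the model id "dvitel/h2"; the prompt format is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dvitel/h2")
model = AutoModelForCausalLM.from_pretrained("dvitel/h2")

prompt = "..."  # a serialized Hearthstone card description (format assumed)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```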

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 17
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 200
  • mixed_precision_training: Native AMP
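
For reference, the list above maps onto a transformers TrainingArguments configuration roughly as follows. This is a hedged sketch: the output directory and any argument not listed above are assumptions, and the Adam settings shown are the library defaults the card restates.

```python
# Sketch: the hyperparameters above expressed as TrainingArguments.
# output_dir is an assumed name; eval_steps=1600 is inferred from the
# step column of the results table below.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilgpt2-hearthstone",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=17,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=200,
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=1600,
)
```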

Training results

| Training Loss | Epoch  | Step  | Validation Loss | Exact Match | Bleu   | Codebleu | Ngram Match Score | Weighted Ngram Match Score | Syntax Match Score | Dataflow Match Score | Chrf    |
|:-------------:|:------:|:-----:|:---------------:|:-----------:|:------:|:--------:|:-----------------:|:--------------------------:|:------------------:|:--------------------:|:-------:|
| 1.2052        | 11.94  | 1600  | 1.2887          | 0.0         | 0.6340 | 0.4427   | 0.3384            | 0.3614                     | 0.5263             | 0.5446               | 70.8004 |
| 0.3227        | 23.88  | 3200  | 1.4484          | 0.0         | 0.6575 | 0.5050   | 0.3767            | 0.3995                     | 0.5955             | 0.6485               | 72.9553 |
| 0.205         | 35.82  | 4800  | 1.6392          | 0.0         | 0.6598 | 0.5174   | 0.3788            | 0.4022                     | 0.5821             | 0.7063               | 73.2766 |
| 0.1392        | 47.76  | 6400  | 1.8219          | 0.0         | 0.6584 | 0.5279   | 0.3922            | 0.4159                     | 0.5742             | 0.7294               | 73.5022 |
| 0.0979        | 59.7   | 8000  | 1.9416          | 0.0         | 0.6635 | 0.5305   | 0.4012            | 0.4248                     | 0.5699             | 0.7261               | 73.8081 |
| 0.0694        | 71.64  | 9600  | 2.1793          | 0.0         | 0.6593 | 0.5400   | 0.4027            | 0.4271                     | 0.5562             | 0.7739               | 73.6746 |
| 0.0512        | 83.58  | 11200 | 2.2547          | 0.0         | 0.6585 | 0.5433   | 0.4040            | 0.4283                     | 0.5486             | 0.7921               | 73.7670 |
| 0.0399        | 95.52  | 12800 | 2.3037          | 0.0         | 0.6585 | 0.5354   | 0.4040            | 0.4282                     | 0.5454             | 0.7640               | 73.7431 |
| 0.0316        | 107.46 | 14400 | 2.4113          | 0.0         | 0.6577 | 0.5294   | 0.4006            | 0.4257                     | 0.5504             | 0.7409               | 73.7004 |
| 0.0254        | 119.4  | 16000 | 2.4407          | 0.0         | 0.6607 | 0.5412   | 0.4041            | 0.4285                     | 0.5598             | 0.7723               | 73.8828 |
| 0.0208        | 131.34 | 17600 | 2.4993          | 0.0         | 0.6637 | 0.5330   | 0.4042            | 0.4286                     | 0.5684             | 0.7310               | 74.1760 |
| 0.0176        | 143.28 | 19200 | 2.5138          | 0.0         | 0.6627 | 0.5434   | 0.4050            | 0.4295                     | 0.5620             | 0.7772               | 74.0546 |
| 0.0158        | 155.22 | 20800 | 2.5589          | 0.0         | 0.6616 | 0.5347   | 0.4044            | 0.4291                     | 0.5512             | 0.7541               | 73.9516 |
| 0.0147        | 167.16 | 22400 | 2.5554          | 0.0         | 0.6620 | 0.5354   | 0.4049            | 0.4295                     | 0.5630             | 0.7442               | 73.9461 |
| 0.0134        | 179.1  | 24000 | 2.5696          | 0.0         | 0.6607 | 0.5395   | 0.4046            | 0.4293                     | 0.5602             | 0.7640               | 73.8383 |
| 0.0135        | 191.04 | 25600 | 2.5771          | 0.0         | 0.6619 | 0.5374   | 0.4051            | 0.4298                     | 0.5605             | 0.7541               | 73.9625 |
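
The Bleu and Chrf columns can be computed with the evaluate library; a minimal sketch follows, assuming evaluate is installed alongside the framework versions listed below. Whether the card's numbers used exactly these implementations is an assumption, and Codebleu with its sub-scores (ngram, weighted ngram, syntax, and dataflow match) requires a separate CodeBLEU implementation not shown here.

```python
# Sketch: scoring generated code against references with `evaluate`.
# The predictions/references here are placeholders.
import evaluate

chrf = evaluate.load("chrf")
bleu = evaluate.load("bleu")

predictions = ["def foo():\n    return 1"]   # model outputs
references = [["def foo():\n    return 1"]]  # gold code, one list per example

print(chrf.compute(predictions=predictions, references=references)["score"])
print(bleu.compute(predictions=predictions, references=references)["bleu"])
```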

Framework versions

  • Transformers 4.24.0
  • Pytorch 1.13.0
  • Datasets 2.6.1
  • Tokenizers 0.13.1
