myBit-Llama2-jp-127M-7

This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 10.6539
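
If this loss is the mean token-level cross-entropy in nats (the usual convention for the Trainer's reported loss, assumed here rather than stated in the card), it corresponds to a perplexity of roughly 4.2 × 10^4:

```python
import math

# Assumption: the evaluation loss is mean cross-entropy per token in nats,
# which is how the Hugging Face Trainer normally reports it.
eval_loss = 10.6539
perplexity = math.exp(eval_loss)
print(f"perplexity ≈ {perplexity:,.0f}")  # roughly 4.2e4
```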

Model description

More information needed

Intended uses & limitations

More information needed
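
Pending that information, a minimal usage sketch follows. It assumes the checkpoint is hosted at HachiML/myBit-Llama2-jp-127M-test-7 (the repository listed on the model page) and that it loads through the standard transformers causal-LM API; neither assumption is confirmed by this card.

```python
# Usage sketch only: repository id and the standard AutoModel API are
# assumptions, not confirmed by the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "HachiML/myBit-Llama2-jp-127M-test-7"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("こんにちは、", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```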

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.00024
  • train_batch_size: 96
  • eval_batch_size: 96
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: polynomial
  • lr_scheduler_warmup_steps: 250
  • num_epochs: 1
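
As a reference point, the sketch below shows how these values would map onto a Hugging Face TrainingArguments object. The training script itself is not included in this card, so the output directory and any argument not listed above are placeholders.

```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters listed above; output_dir is a
# placeholder and unlisted arguments keep their Trainer defaults.
training_args = TrainingArguments(
    output_dir="myBit-Llama2-jp-127M-7",  # placeholder
    learning_rate=2.4e-4,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=96,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="polynomial",
    warmup_steps=250,
    num_train_epochs=1,
)
```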

Training results

| Training Loss | Epoch | Step | Validation Loss |
|--------------:|------:|-----:|----------------:|
| 9.0536        | 0.04  | 100  | 7.4802          |
| 6.8962        | 0.07  | 200  | 6.5875          |
| 6.3685        | 0.11  | 300  | 6.1149          |
| 5.8698        | 0.15  | 400  | 5.6208          |
| 5.6334        | 0.18  | 500  | 6.1096          |
| 8.8705        | 0.22  | 600  | 10.3915         |
| 10.5174       | 0.26  | 700  | 10.5752         |
| 10.5929       | 0.29  | 800  | 10.6066         |
| 10.6128       | 0.33  | 900  | 10.6187         |
| 10.6218       | 0.37  | 1000 | 10.6255         |
| 10.6274       | 0.4   | 1100 | 10.6302         |
| 10.6312       | 0.44  | 1200 | 10.6335         |
| 10.6343       | 0.48  | 1300 | 10.6363         |
| 10.6369       | 0.51  | 1400 | 10.6384         |
| 10.6391       | 0.55  | 1500 | 10.6404         |
| 10.6408       | 0.59  | 1600 | 10.6422         |
| 10.6426       | 0.62  | 1700 | 10.6438         |
| 10.6441       | 0.66  | 1800 | 10.6451         |
| 10.6454       | 0.7   | 1900 | 10.6464         |
| 10.6467       | 0.73  | 2000 | 10.6477         |
| 10.6479       | 0.77  | 2100 | 10.6486         |
| 10.649        | 0.81  | 2200 | 10.6496         |
| 10.6499       | 0.84  | 2300 | 10.6506         |
| 10.6508       | 0.88  | 2400 | 10.6515         |
| 10.6516       | 0.92  | 2500 | 10.6522         |
| 10.6524       | 0.95  | 2600 | 10.6531         |
| 10.6534       | 0.99  | 2700 | 10.6539         |

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
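
When reproducing these results, pinning the environment to the versions above is the safest option. A small check like the following (a sketch, assuming each package exposes the standard __version__ attribute) confirms that the installed versions match:

```python
# Sketch: verify the local environment matches the versions listed above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.38.2",
    "torch": "2.1.0+cu121",
    "datasets": "2.18.0",
    "tokenizers": "0.15.2",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, version in expected.items():
    status = "OK" if installed[name] == version else f"got {installed[name]}"
    print(f"{name} {version}: {status}")
```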

Safetensors

  • Model size: 128M params
  • Tensor type: F32