kanishka committed on
Commit 9f4be30 · verified · 1 Parent(s): 95bbe55

Model save

README.md CHANGED
@@ -1,5 +1,4 @@
 ---
-base_model: models/smolm-autoreg-bpe-seed_888/config.json
 tags:
 - generated_from_trainer
 metrics:
@@ -14,10 +13,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # smolm-autoreg-bpe-seed_888
 
-This model is a fine-tuned version of [models/smolm-autoreg-bpe-seed_888/config.json](https://huggingface.co/models/smolm-autoreg-bpe-seed_888/config.json) on an unknown dataset.
+This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.5572
-- Accuracy: 0.4861
+- Loss: 2.4712
+- Accuracy: 0.5000
 
 ## Model description
 
@@ -37,8 +36,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.003
-- train_batch_size: 64
-- eval_batch_size: 512
+- train_batch_size: 16
+- eval_batch_size: 128
 - seed: 888
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -47,23 +46,23 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 6.1167 | 1.0 | 732 | 3.5358 | 0.3947 |
-| 3.4801 | 2.0 | 1464 | 3.1202 | 0.4320 |
-| 2.9429 | 3.0 | 2196 | 2.9158 | 0.4498 |
-| 2.8071 | 4.0 | 2928 | 2.7906 | 0.4605 |
-| 2.6197 | 5.0 | 3660 | 2.6998 | 0.4701 |
-| 2.5459 | 6.0 | 4392 | 2.6419 | 0.4760 |
-| 2.4492 | 7.0 | 5124 | 2.6036 | 0.4802 |
-| 2.4065 | 8.0 | 5856 | 2.5770 | 0.4824 |
-| 2.3626 | 9.0 | 6588 | 2.5622 | 0.4863 |
-| 2.3276 | 10.0 | 7320 | 2.5572 | 0.4861 |
+| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
+|:-------------:|:-----:|:-----:|:---------------:|:--------:|
+| 3.0458 | 1.0 | 2928 | 3.0245 | 0.4357 |
+| 2.71 | 2.0 | 5856 | 2.7881 | 0.4585 |
+| 2.5918 | 3.0 | 8784 | 2.6924 | 0.4682 |
+| 2.5122 | 4.0 | 11712 | 2.6471 | 0.4759 |
+| 2.4623 | 5.0 | 14640 | 2.6053 | 0.4803 |
+| 2.4246 | 6.0 | 17568 | 2.5798 | 0.4824 |
+| 2.3871 | 7.0 | 20496 | 2.5647 | 0.4858 |
+| 2.3644 | 8.0 | 23424 | 2.5571 | 0.4853 |
+| 2.2824 | 9.0 | 26352 | 2.5034 | 0.4934 |
+| 2.1369 | 10.0 | 29280 | 2.4712 | 0.5000 |
 
 
 ### Framework versions
 
-- Transformers 4.32.1
-- Pytorch 2.0.1+cu117
-- Datasets 2.12.0
-- Tokenizers 0.13.3
+- Transformers 4.38.2
+- Pytorch 2.1.0+cu121
+- Datasets 2.16.1
+- Tokenizers 0.15.1
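
The updated hyperparameters read like a standard Hugging Face `Trainer` configuration. As a hedged illustration only (the actual training script is not part of this commit), they might be expressed with `transformers.TrainingArguments` roughly as below; the output directory and the per-epoch evaluation strategy are assumptions, and the epoch count is taken from the 10 rows of the results table.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; paths and strategy are assumptions.
training_args = TrainingArguments(
    output_dir="smolm-autoreg-bpe-seed_888",  # assumed output directory
    learning_rate=3e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=128,
    seed=888,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,          # results table reports epochs 1.0 through 10.0
    evaluation_strategy="epoch",  # assumption: eval once per epoch, matching the table
)
```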
generation_config.json CHANGED
@@ -3,5 +3,5 @@
   "bos_token_id": 1,
   "eos_token_id": 2,
   "pad_token_id": 0,
-  "transformers_version": "4.32.1"
+  "transformers_version": "4.38.2"
 }
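
The only change here is the `transformers_version` stamp, which is rewritten whenever the config is saved with a newer install. A minimal sketch, using only the token ids shown above (the save path is an assumption), of rebuilding and re-saving the same config:

```python
from transformers import GenerationConfig

# Reconstruct the generation settings shown in the diff.
gen_config = GenerationConfig(bos_token_id=1, eos_token_id=2, pad_token_id=0)

# Saving writes generation_config.json, including the current transformers_version.
gen_config.save_pretrained("smolm-autoreg-bpe-seed_888")  # assumed local directory
```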
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:92a3d07e0c943cc9655e50e8493bc4d1a3f8c2212044c5e2b169bbe1742a4645
+oid sha256:5c939b1cd6e781e0b63fde24af90e06e9c4d703bd65d854522c4b0bd7ecd7ab4
 size 33810896
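
The weights are stored via Git LFS, so the diff only shows the pointer (sha256 oid and byte size). A small sketch, assuming the file has already been downloaded locally as `model.safetensors`, of checking a download against the new pointer:

```python
import hashlib
import os

EXPECTED_OID = "5c939b1cd6e781e0b63fde24af90e06e9c4d703bd65d854522c4b0bd7ecd7ab4"
EXPECTED_SIZE = 33810896

path = "model.safetensors"  # assumed local download path
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)
assert sha.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("model.safetensors matches the LFS pointer")
```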
runs/Mar25_21-32-38_phyl-ling-p01.la.utexas.edu/events.out.tfevents.1711420378.phyl-ling-p01.la.utexas.edu.1056977.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d03302a768ce655266f3f528138eec2dfbc09fc6b88ed40818cd7292d51b92a7
-size 19614
+oid sha256:d13d864cb0c12e5436af1cd909e0b7f1b1305a12f4cab384de759d683c899612
+size 20733
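
The updated TensorBoard event file carries the logged training curves. A hedged sketch for inspecting it with the `tensorboard` package; the scalar tag names (e.g. `eval/loss`) are assumptions based on what the HF `Trainer` typically logs:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

event_path = (
    "runs/Mar25_21-32-38_phyl-ling-p01.la.utexas.edu/"
    "events.out.tfevents.1711420378.phyl-ling-p01.la.utexas.edu.1056977.0"
)
acc = EventAccumulator(event_path)
acc.Reload()                          # parse the event file
print(acc.Tags()["scalars"])          # list the scalar tags actually logged
for event in acc.Scalars("eval/loss"):  # assumed tag name
    print(event.step, event.value)
```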