abdiharyadi committed on
Commit
19d3b47
1 Parent(s): 4e745ed

Model save

README.md ADDED
@@ -0,0 +1,93 @@
+ ---
+ tags:
+ - generated_from_trainer
+ datasets:
+ - data
+ metrics:
+ - bleu
+ model-index:
+ - name: mbart-en-id-smaller-indo-amr-generation-fted-with-prefix
+   results:
+   - task:
+       name: Sequence-to-sequence Language Modeling
+       type: text2text-generation
+     dataset:
+       name: data
+       type: data
+       config: default
+       split: validation
+       args: default
+     metrics:
+     - name: Bleu
+       type: bleu
+       value: 13.717
+ ---
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ # mbart-en-id-smaller-indo-amr-generation-fted-with-prefix
+ 
+ This model was trained from scratch on the data dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.3974
+ - Bleu: 13.717
+ - Gen Len: 36.5221
+ 
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ More information needed
+ 
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-07
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 12
+ - total_train_batch_size: 24
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: polynomial
+ - lr_scheduler_warmup_steps: 200
+ - num_epochs: 16.0
+ - label_smoothing_factor: 0.1
+ 
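For reference, a minimal sketch of how the hyperparameters above map onto `Seq2SeqTrainingArguments` in transformers 4.44. The output directory is a placeholder, not taken from this repository; the effective batch size is per-device 2 × 12 accumulation steps = 24 on a single device.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the arguments listed above;
# output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-en-id-smaller-indo-amr-generation-fted-with-prefix",
    learning_rate=2e-07,
    per_device_train_batch_size=2,  # effective batch: 2 * 12 accumulation = 24
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=12,
    lr_scheduler_type="polynomial",
    warmup_steps=200,
    num_train_epochs=16.0,
    label_smoothing_factor=0.1,
    predict_with_generate=True,  # assumption: required to compute BLEU/Gen Len at eval time
)
```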
+ ### Training results
+ 
+ | Training Loss | Epoch   | Step  | Bleu    | Gen Len  | Validation Loss |
+ |:-------------:|:-------:|:-----:|:-------:|:--------:|:---------------:|
+ | 3.0219        | 0.9999  | 3869  | 0.0741  | 114.8177 | 2.9798          |
+ | 2.8978        | 2.0     | 7739  | 0.0747  | 113.0081 | 2.8610          |
+ | 2.8109        | 2.9999  | 11608 | 0.0795  | 111.475  | 2.7648          |
+ | 2.7623        | 4.0     | 15478 | 0.1685  | 105.7747 | 2.6956          |
+ | 2.7116        | 4.9999  | 19347 | 0.5081  | 92.4187  | 2.6404          |
+ | 2.6331        | 5.9999  | 23214 | 1.6991  | 66.9245  | 2.5961          |
+ | 2.5716        | 7.0     | 27084 | 5.2201  | 46.1405  | 2.5611          |
+ | 2.5943        | 7.9999  | 30953 | 8.0263  | 40.7538  | 2.5300          |
+ | 2.5622        | 9.0     | 34823 | 10.2353 | 38.2607  | 2.5050          |
+ | 2.537         | 9.9999  | 38692 | 11.3364 | 36.0732  | 2.4840          |
+ | 2.5345        | 11.0    | 42562 | 12.1716 | 36.4367  | 2.4645          |
+ | 2.4706        | 11.9999 | 46428 | 12.51   | 37.4146  | 2.4479          |
+ | 2.4558        | 13.0    | 50298 | 12.8144 | 37.2979  | 2.4330          |
+ | 2.4125        | 13.9999 | 54167 | 13.0772 | 37.0436  | 2.4199          |
+ | 2.4053        | 15.0    | 58037 | 13.5764 | 36.1492  | 2.4081          |
+ | 2.439         | 15.9994 | 61904 | 13.717  | 36.5221  | 2.3974          |
+ 
+ 
+ ### Framework versions
+ 
+ - Transformers 4.44.0
+ - Pytorch 2.4.0+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
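A minimal, hedged usage sketch for the finished checkpoint. The Hub repo id (committer's namespace plus the model name), the PENMAN-style AMR input, and any task prefix are assumptions; the card does not document the expected input format.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed repo id; adjust if the model is published elsewhere.
repo_id = "abdiharyadi/mbart-en-id-smaller-indo-amr-generation-fted-with-prefix"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Hypothetical Indonesian AMR input in PENMAN notation; the "with-prefix"
# in the model name suggests a task prefix, but its exact form is unknown.
amr = "(m / makan-01 :ARG0 (a / aku) :ARG1 (n / nasi))"

inputs = tokenizer(amr, return_tensors="pt")
# generation_config.json ships beam search (num_beams=5, max_length=200),
# so a plain generate() call already uses those defaults.
output_ids = model.generate(**inputs)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```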
generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "forced_eos_token_id": 2,
+   "max_length": 200,
+   "num_beams": 5,
+   "pad_token_id": 1,
+   "transformers_version": "4.44.0"
+ }
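These decoding defaults can be loaded and overridden at call time via the transformers `GenerationConfig` API; a short sketch, using the same assumed repo id as above.

```python
from transformers import GenerationConfig

# Load the decoding defaults committed above (repo id is an assumption).
gen_config = GenerationConfig.from_pretrained(
    "abdiharyadi/mbart-en-id-smaller-indo-amr-generation-fted-with-prefix"
)
print(gen_config.num_beams, gen_config.max_length)  # 5, 200 per the file above

# Individual fields can be overridden per call without editing the file, e.g.:
# output_ids = model.generate(**inputs, generation_config=gen_config, num_beams=1)
```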
last-checkpoint/model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f5610f104bbe5c71c2c8986a36c189e0fd7e9345d34c3ad78d767d6afdfb463c
+ oid sha256:b35a3bc2ac180ed070b42029c9a9dd327a1a9559e81df276f329c07eb21d04fc
  size 1575259780
last-checkpoint/optimizer.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7d7ef396422aa6c019a5e69eda3ff44645515da108954efa0d94036fd3d1bc22
+ oid sha256:ba79276529acf6b7fcac21ab5be5fc5756c900e697415161ce71f7759f9fa8e0
  size 3150397656
last-checkpoint/rng_state.pth CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:29aef665b26cb1458ee97ee69d3ddd4704a10f9d74c0acba81e307397efc04fa
+ oid sha256:2e52ca2f5b1048c2984d9cb01ff8bc5c06ec7e6e1ac850eb54ef8fe7147dcf65
  size 14244
last-checkpoint/scheduler.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a8adff87233e7c0e142f515f7a41b0a9ab9d0ffb3771224e0f1d596be2a78b03
+ oid sha256:df05e968bcb6cb9f8c607bdaf90fbac1131121f0efd29f2e5e7bc42c79c2d577
  size 1064
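The four diffs above are Git LFS pointer files: only the `oid` (the SHA-256 of the stored blob) changes while each `size` stays identical, meaning the checkpoint files were overwritten in place. A small sketch for verifying a downloaded file against its pointer; the local path is a placeholder.

```python
import hashlib

def lfs_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest that Git LFS records as the pointer's oid."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder local path; compare against the new oid in the diff above.
assert lfs_sha256("last-checkpoint/model.safetensors") == \
    "b35a3bc2ac180ed070b42029c9a9dd327a1a9559e81df276f329c07eb21d04fc"
```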
last-checkpoint/trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
 
run-2024-10-28T15:22:08+00:00.log CHANGED
@@ -5374,3 +5374,7 @@ Non-default generation parameters: {'max_length': 200, 'early_stopping': True, '
  [WARNING|trainer.py:2764] 2024-10-29 00:01:14,086 >> There were missing keys in the checkpoint model loaded: ['model.encoder.embed_tokens.weight', 'model.decoder.embed_tokens.weight', 'lm_head.weight'].
  
  
+ [WARNING|configuration_utils.py:448] 2024-10-29 00:01:47,338 >> Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
+ Non-default generation parameters: {'max_length': 200, 'early_stopping': True, 'num_beams': 5, 'forced_eos_token_id': 2}
+ [WARNING|configuration_utils.py:448] 2024-10-29 00:01:58,952 >> Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
+ Non-default generation parameters: {'max_length': 200, 'early_stopping': True, 'num_beams': 5, 'forced_eos_token_id': 2}
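This repeated warning asks for exactly what the new generation_config.json in this commit provides: decoding parameters stored in a `GenerationConfig` file rather than in the model config. A hedged sketch of the fix the warning's docs link describes; the save directory is a placeholder.

```python
from transformers import GenerationConfig

# Recreate the non-default parameters the warning lists as a GenerationConfig.
gen_config = GenerationConfig(
    max_length=200,
    early_stopping=True,
    num_beams=5,
    forced_eos_token_id=2,
)
# save_pretrained() writes generation_config.json next to the model weights,
# which is where transformers expects decoding defaults instead of config.json.
gen_config.save_pretrained("last-checkpoint")  # placeholder output directory
```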