lIlBrother committed

Commit 50eafd8
1 Parent(s): afe6568

Init: initial model commit
README.md CHANGED
@@ -1,3 +1,104 @@
  ---
- license: apache-2.0
+ language:
+ - ko # Example: fr
+ license: apache-2.0 # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
+ library_name: transformers # Optional. Example: keras or any library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
+ tags:
+ - text2text-generation # Example: audio
+ datasets:
+ - aihub # Example: common_voice. Use dataset id from https://hf.co/datasets
+ metrics:
+ - bleu # Example: wer. Use metric id from https://hf.co/metrics
+ - rouge
+
+ # Optional. Add this if you want to encode your eval results in a structured way.
+ model-index:
+ - name: ko-TextNumbarT
+   results:
+   - task:
+       type: text2text-generation # Required. Example: automatic-speech-recognition
+       name: text2text-generation # Optional. Example: Speech Recognition
+     metrics:
+     - type: bleu # Required. Use metric id from https://hf.co/metrics
+       value: 0.9529006548919251 # Required
+       name: eval_bleu # Optional
+       verified: true # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
+     - type: rouge1
+       value: 0.9693520563208838
+       name: eval_rouge1
+       verified: true
+     - type: rouge2
+       value: 0.9444220599246154
+       name: eval_rouge2
+       verified: true
+     - type: rougeL
+       value: 0.9692485601662657
+       name: eval_rougeL
+       verified: true
+     - type: rougeLsum
+       value: 0.9692422603343052
+       name: eval_rougeLsum
+       verified: true
  ---
+
+ # ko-TextNumbarT(TNT Model🧨): Try Korean Reading To Number(한글을 숫자로 바꾸는 모델)
+
+ ## Table of Contents
+ - [ko-TextNumbarT(TNT Model🧨): Try Korean Reading To Number(한글을 숫자로 바꾸는 모델)](#ko-textnumbarttnt-model-try-korean-reading-to-number한글을-숫자로-바꾸는-모델)
+ - [Table of Contents](#table-of-contents)
+ - [Model Details](#model-details)
+ - [Uses](#uses)
+ - [Evaluation](#evaluation)
+ - [How to Get Started With the Model](#how-to-get-started-with-the-model)
+
+ ## Model Details
+ - **Model Description:**
+   I built this model because I could not find an existing model or algorithm for this task. <br />
+   It is a BartForConditionalGeneration model fine-tuned to convert Korean number words into digits. <br />
+ - The dataset comes from [Korea AIHub](https://aihub.or.kr/aihubdata/data/list.do?currMenu=115&topMenu=100&srchDataRealmCode=REALM002&srchDataTy=DATA004). <br />
+   For private reasons, I cannot release the data used for fine-tuning. <br />
+ - Korea AIHub data is available only to Koreans. <br />
+   (The original notes were written in Korean, since anyone downloading data from AIHub will be Korean.) <br />
+   Strictly speaking, the model was trained to translate orthographic transcription into phonetic transcription, following the ETRI transcription guidelines. <br />
+ - Note that ten million may be written as 1000만 or as 10000000, so results can vary depending on the training data. <br />
+ - **Developed by:** Yoo SungHyun (https://github.com/YooSungHyun)
+ - **Language(s):** Korean
+ - **License:** apache-2.0
+ - **Parent Model:** See [kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) for more information about the pre-trained base model.
+
+ ## Uses
+ For more detail, see [KoGPT_num_converter](https://github.com/ddobokki/KoGPT_num_converter), in particular `bart_inference.py` and `bart_train.py`.
+
+ ## Evaluation
+ Evaluation uses `evaluate-metric/bleu` and `evaluate-metric/rouge` from the Hugging Face `evaluate` library. <br />
+ [Training WandB run](https://wandb.ai/bart_tadev/BartForConditionalGeneration/runs/1chrc03q?workspace=user-bart_tadev)
+
+ ## How to Get Started With the Model
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+ from transformers.pipelines import Text2TextGenerationPipeline
+
+ model_name_or_path = "..."  # local checkpoint path or Hub model id
+ texts = ["그러게 누가 여섯시까지 술을 마시래?"]
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
+ seq2seqlm_pipeline = Text2TextGenerationPipeline(model=model, tokenizer=tokenizer)
+
+ # Example generation settings; tune for your data.
+ kwargs = {
+     "min_length": 0,
+     "max_length": 128,
+     "num_beams": 5,
+     "do_sample": False,
+     "num_beam_groups": 1,
+ }
+ pred = seq2seqlm_pipeline(texts, **kwargs)
+ print(pred)
+ # 그러게 누가 6시까지 술을 마시래?
+ ```
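As a toy illustration (this is not the model, and the mapping below is hypothetical), naive dictionary replacement shows the normalization this model learns and why the 천만-style ambiguity mentioned above makes a seq2seq approach preferable to fixed rules:

```python
# Hypothetical toy mapping; a real system faces many ambiguous readings
# (e.g. 천만 -> 1000만 or 10000000), which is why a fine-tuned BART is used.
SIMPLE_MAP = {"여섯시": "6시", "천만": "1000만"}

def naive_convert(text: str) -> str:
    # Replace each known Korean number word with its digit form.
    for word, digits in SIMPLE_MAP.items():
        text = text.replace(word, digits)
    return text

print(naive_convert("그러게 누가 여섯시까지 술을 마시래?"))
# 그러게 누가 6시까지 술을 마시래?
```

A rule table like this breaks down as coverage grows, since the correct digit form depends on context; the model instead learns the mapping from transcribed speech data.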
all_results.json ADDED
@@ -0,0 +1,20 @@
+ {
+     "epoch": 5.13,
+     "eval_bleu": 0.9529006548919251,
+     "eval_brevity_penalty": 1.0,
+     "eval_length_ratio": 1.0129792088807863,
+     "eval_loss": 0.040760207921266556,
+     "eval_reference_length": 688948,
+     "eval_rouge1": 0.9693520563208838,
+     "eval_rouge2": 0.9444220599246154,
+     "eval_rougeL": 0.9692485601662657,
+     "eval_rougeLsum": 0.9692422603343052,
+     "eval_runtime": 348.5598,
+     "eval_samples_per_second": 122.995,
+     "eval_steps_per_second": 10.251,
+     "eval_translation_length": 697890,
+     "train_loss": 0.08529333166642622,
+     "train_runtime": 78966.8059,
+     "train_samples_per_second": 25.074,
+     "train_steps_per_second": 4.179
+ }
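The BLEU length statistics in `all_results.json` are internally consistent: `eval_length_ratio` is just `eval_translation_length / eval_reference_length`, and the brevity penalty is 1.0 because the generated text is longer than the reference (BLEU applies `exp(1 - r/c)` only when the candidate is shorter). A quick sanity check with the numbers above:

```python
import math

reference_length = 688948     # eval_reference_length
translation_length = 697890   # eval_translation_length

length_ratio = translation_length / reference_length
brevity_penalty = (
    1.0
    if translation_length >= reference_length
    else math.exp(1 - reference_length / translation_length)
)

print(length_ratio)     # ≈ 1.0129792088807863 (matches eval_length_ratio)
print(brevity_penalty)  # 1.0 (matches eval_brevity_penalty)
```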
config.json ADDED
@@ -0,0 +1,56 @@
+ {
+     "_name_or_path": "/data2/bart/temp_workspace/nlp/models/kobart-base-v2",
+     "activation_dropout": 0.0,
+     "activation_function": "gelu",
+     "add_bias_logits": false,
+     "add_final_layer_norm": false,
+     "architectures": [
+         "BartForConditionalGeneration"
+     ],
+     "attention_dropout": 0.0,
+     "author": "Heewon Jeon([email protected])",
+     "bos_token_id": 1,
+     "classif_dropout": 0.1,
+     "classifier_dropout": 0.1,
+     "d_model": 768,
+     "decoder_attention_heads": 16,
+     "decoder_ffn_dim": 3072,
+     "decoder_layerdrop": 0.0,
+     "decoder_layers": 6,
+     "decoder_start_token_id": 1,
+     "do_blenderbot_90_layernorm": false,
+     "dropout": 0.1,
+     "encoder_attention_heads": 16,
+     "encoder_ffn_dim": 3072,
+     "encoder_layerdrop": 0.0,
+     "encoder_layers": 6,
+     "eos_token_id": 1,
+     "extra_pos_embeddings": 2,
+     "force_bos_token_to_be_generated": false,
+     "forced_eos_token_id": 1,
+     "gradient_checkpointing": false,
+     "id2label": {
+         "0": "NEGATIVE",
+         "1": "POSITIVE"
+     },
+     "init_std": 0.02,
+     "is_encoder_decoder": true,
+     "kobart_version": 2.0,
+     "label2id": {
+         "NEGATIVE": 0,
+         "POSITIVE": 1
+     },
+     "max_position_embeddings": 1026,
+     "model_type": "bart",
+     "normalize_before": false,
+     "normalize_embedding": true,
+     "num_hidden_layers": 6,
+     "pad_token_id": 3,
+     "scale_embedding": false,
+     "static_position_embeddings": false,
+     "tokenizer_class": "PreTrainedTokenizerFast",
+     "torch_dtype": "float32",
+     "transformers_version": "4.22.1",
+     "use_cache": true,
+     "vocab_size": 30000
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:002675439d3f8ffec4c9410dcb0992e50f1db850b5e49a2e207ec30b08c86997
+ size 495646265
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "bos_token": "</s>",
+     "eos_token": "</s>",
+     "mask_token": "<mask>",
+     "pad_token": "<pad>",
+     "unk_token": "<unk>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,5 @@
+ {
+     "name_or_path": "/data2/bart/temp_workspace/nlp/models/kobart-base-v2",
+     "special_tokens_map_file": "/data2/bart/temp_workspace/nlp/models/kobart-base-v2/special_tokens_map.json",
+     "tokenizer_class": "PreTrainedTokenizerFast"
+ }
trainer_state.json ADDED
@@ -0,0 +1,1576 @@
+ {
+     "best_metric": 0.040760207921266556,
+     "best_model_checkpoint": "/data2/bart/temp_workspace/nlp/output_dir/checkpoint-250000",
+     "epoch": 5.131554394476582,
+     "global_step": 330000,
+     "is_hyper_param_search": false,
+     "is_local_process_zero": true,
+     "is_world_process_zero": true,
+     "log_history": [
+         {
+             "epoch": 0.03,
+             "learning_rate": 2.844285714285714e-07,
+             "loss": 2.9266,
+             "step": 2000
+         },
+         {
+             "epoch": 0.06,
+             "learning_rate": 5.701428571428572e-07,
+             "loss": 0.9726,
+             "step": 4000
+         },
+         {
+             "epoch": 0.09,
+             "learning_rate": 8.558571428571428e-07,
+             "loss": 0.5831,
+             "step": 6000
+         },
+         {
+             "epoch": 0.12,
+             "learning_rate": 1.1415714285714287e-06,
+             "loss": 0.4407,
+             "step": 8000
+         },
+         {
+             "epoch": 0.16,
+             "learning_rate": 1.4272857142857143e-06,
+             "loss": 0.361,
+             "step": 10000
+         },
+         {
+             "epoch": 0.16,
+             "eval_bleu": 0.7861044602086867,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0976053345100065,
+             "eval_loss": 0.24034307897090912,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.8478815702507803,
+             "eval_rouge2": 0.770393954032274,
+             "eval_rougeL": 0.8471496607249986,
+             "eval_rougeLsum": 0.8471595113950809,
+             "eval_runtime": 357.1063,
+             "eval_samples_per_second": 120.051,
+             "eval_steps_per_second": 10.005,
+             "eval_translation_length": 756193,
+             "step": 10000
+         },
+         {
+             "epoch": 0.19,
+             "learning_rate": 1.712857142857143e-06,
+             "loss": 0.2936,
+             "step": 12000
+         },
+         {
+             "epoch": 0.22,
+             "learning_rate": 1.9985714285714287e-06,
+             "loss": 0.255,
+             "step": 14000
+         },
+         {
+             "epoch": 0.25,
+             "learning_rate": 2.2841428571428574e-06,
+             "loss": 0.2187,
+             "step": 16000
+         },
+         {
+             "epoch": 0.28,
+             "learning_rate": 2.569714285714286e-06,
+             "loss": 0.1976,
+             "step": 18000
+         },
+         {
+             "epoch": 0.31,
+             "learning_rate": 2.8552857142857144e-06,
+             "loss": 0.1795,
+             "step": 20000
+         },
+         {
+             "epoch": 0.31,
+             "eval_bleu": 0.8725717687820272,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0512491508793116,
+             "eval_loss": 0.12307040393352509,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9065960080443258,
+             "eval_rouge2": 0.8543292052866391,
+             "eval_rougeL": 0.9063849761037827,
+             "eval_rougeLsum": 0.9063953832846721,
+             "eval_runtime": 357.2954,
+             "eval_samples_per_second": 119.988,
+             "eval_steps_per_second": 10.0,
+             "eval_translation_length": 724256,
+             "step": 20000
+         },
+         {
+             "epoch": 0.34,
+             "learning_rate": 3.140857142857143e-06,
+             "loss": 0.1669,
+             "step": 22000
+         },
+         {
+             "epoch": 0.37,
+             "learning_rate": 3.4264285714285715e-06,
+             "loss": 0.151,
+             "step": 24000
+         },
+         {
+             "epoch": 0.4,
+             "learning_rate": 3.7121428571428575e-06,
+             "loss": 0.1387,
+             "step": 26000
+         },
+         {
+             "epoch": 0.44,
+             "learning_rate": 3.997714285714286e-06,
+             "loss": 0.1354,
+             "step": 28000
+         },
+         {
+             "epoch": 0.47,
+             "learning_rate": 4.2834285714285715e-06,
+             "loss": 0.1244,
+             "step": 30000
+         },
+         {
+             "epoch": 0.47,
+             "eval_bleu": 0.8899834257776226,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.050031352148493,
+             "eval_loss": 0.09066224843263626,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9246723733764061,
+             "eval_rouge2": 0.8828517220657879,
+             "eval_rougeL": 0.9244225429282074,
+             "eval_rougeLsum": 0.9244091547844906,
+             "eval_runtime": 347.7759,
+             "eval_samples_per_second": 123.272,
+             "eval_steps_per_second": 10.274,
+             "eval_translation_length": 723417,
+             "step": 30000
+         },
+         {
+             "epoch": 0.5,
+             "learning_rate": 4.569e-06,
+             "loss": 0.1206,
+             "step": 32000
+         },
+         {
+             "epoch": 0.53,
+             "learning_rate": 4.854714285714286e-06,
+             "loss": 0.1136,
+             "step": 34000
+         },
+         {
+             "epoch": 0.56,
+             "learning_rate": 5.140285714285715e-06,
+             "loss": 0.1128,
+             "step": 36000
+         },
+         {
+             "epoch": 0.59,
+             "learning_rate": 5.426e-06,
+             "loss": 0.1048,
+             "step": 38000
+         },
+         {
+             "epoch": 0.62,
+             "learning_rate": 5.7115714285714285e-06,
+             "loss": 0.1012,
+             "step": 40000
+         },
+         {
+             "epoch": 0.62,
+             "eval_bleu": 0.9208253246491475,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0275782787670478,
+             "eval_loss": 0.07363951206207275,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9455229652203183,
+             "eval_rouge2": 0.9101496822244501,
+             "eval_rougeL": 0.9454059770773995,
+             "eval_rougeLsum": 0.9454056293148791,
+             "eval_runtime": 359.949,
+             "eval_samples_per_second": 119.103,
+             "eval_steps_per_second": 9.926,
+             "eval_translation_length": 707948,
+             "step": 40000
+         },
+         {
+             "epoch": 0.65,
+             "learning_rate": 5.997285714285714e-06,
+             "loss": 0.0962,
+             "step": 42000
+         },
+         {
+             "epoch": 0.68,
+             "learning_rate": 6.282571428571429e-06,
+             "loss": 0.0964,
+             "step": 44000
+         },
+         {
+             "epoch": 0.72,
+             "learning_rate": 6.568285714285715e-06,
+             "loss": 0.095,
+             "step": 46000
+         },
+         {
+             "epoch": 0.75,
+             "learning_rate": 6.8538571428571434e-06,
+             "loss": 0.0913,
+             "step": 48000
+         },
+         {
+             "epoch": 0.78,
+             "learning_rate": 7.139428571428572e-06,
+             "loss": 0.0906,
+             "step": 50000
+         },
+         {
+             "epoch": 0.78,
+             "eval_bleu": 0.9072012805362628,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0453851379204235,
+             "eval_loss": 0.06683389842510223,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.936913196858357,
+             "eval_rouge2": 0.9021025087676598,
+             "eval_rougeL": 0.9368038216303922,
+             "eval_rougeLsum": 0.936786757645766,
+             "eval_runtime": 350.5992,
+             "eval_samples_per_second": 122.279,
+             "eval_steps_per_second": 10.191,
+             "eval_translation_length": 720216,
+             "step": 50000
+         },
+         {
+             "epoch": 0.81,
+             "learning_rate": 7.425142857142857e-06,
+             "loss": 0.0881,
+             "step": 52000
+         },
+         {
+             "epoch": 0.84,
+             "learning_rate": 7.710714285714287e-06,
+             "loss": 0.088,
+             "step": 54000
+         },
+         {
+             "epoch": 0.87,
+             "learning_rate": 7.996428571428572e-06,
+             "loss": 0.0852,
+             "step": 56000
+         },
+         {
+             "epoch": 0.9,
+             "learning_rate": 8.281857142857143e-06,
+             "loss": 0.0836,
+             "step": 58000
+         },
+         {
+             "epoch": 0.93,
+             "learning_rate": 8.567428571428572e-06,
+             "loss": 0.0837,
+             "step": 60000
+         },
+         {
+             "epoch": 0.93,
+             "eval_bleu": 0.9378929040812776,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0159213757787235,
+             "eval_loss": 0.06079160049557686,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9552455374973553,
+             "eval_rouge2": 0.9234403473524968,
+             "eval_rougeL": 0.9552044295582562,
+             "eval_rougeLsum": 0.9551288128727483,
+             "eval_runtime": 353.7894,
+             "eval_samples_per_second": 121.177,
+             "eval_steps_per_second": 10.099,
+             "eval_translation_length": 699917,
+             "step": 60000
+         },
+         {
+             "epoch": 0.96,
+             "learning_rate": 8.853142857142858e-06,
+             "loss": 0.0839,
+             "step": 62000
+         },
+         {
+             "epoch": 1.0,
+             "learning_rate": 9.138571428571429e-06,
+             "loss": 0.0815,
+             "step": 64000
+         },
+         {
+             "epoch": 1.03,
+             "learning_rate": 9.424285714285715e-06,
+             "loss": 0.0742,
+             "step": 66000
+         },
+         {
+             "epoch": 1.06,
+             "learning_rate": 9.71e-06,
+             "loss": 0.0683,
+             "step": 68000
+         },
+         {
+             "epoch": 1.09,
+             "learning_rate": 9.995714285714286e-06,
+             "loss": 0.067,
+             "step": 70000
+         },
+         {
+             "epoch": 1.09,
+             "eval_bleu": 0.9320243750023054,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0232078473266488,
+             "eval_loss": 0.058820515871047974,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9532786206216067,
+             "eval_rouge2": 0.9224384838543223,
+             "eval_rougeL": 0.9531782599798171,
+             "eval_rougeLsum": 0.9531407362403285,
+             "eval_runtime": 346.6656,
+             "eval_samples_per_second": 123.667,
+             "eval_steps_per_second": 10.307,
+             "eval_translation_length": 704937,
+             "step": 70000
+         },
+         {
+             "epoch": 1.12,
+             "learning_rate": 9.998587848362676e-06,
+             "loss": 0.0686,
+             "step": 72000
+         },
+         {
+             "epoch": 1.15,
+             "learning_rate": 9.994257059669239e-06,
+             "loss": 0.0718,
+             "step": 74000
+         },
+         {
+             "epoch": 1.18,
+             "learning_rate": 9.987009765493164e-06,
+             "loss": 0.0687,
+             "step": 76000
+         },
+         {
+             "epoch": 1.21,
+             "learning_rate": 9.976861810464927e-06,
+             "loss": 0.0691,
+             "step": 78000
+         },
+         {
+             "epoch": 1.24,
+             "learning_rate": 9.963798805467935e-06,
+             "loss": 0.0726,
+             "step": 80000
+         },
+         {
+             "epoch": 1.24,
+             "eval_bleu": 0.931299618108372,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.02620517078212,
+             "eval_loss": 0.05646243691444397,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9506828525475279,
+             "eval_rouge2": 0.9195555913617979,
+             "eval_rougeL": 0.9505222105099121,
+             "eval_rougeLsum": 0.9505025253164432,
+             "eval_runtime": 358.4748,
+             "eval_samples_per_second": 119.593,
+             "eval_steps_per_second": 9.967,
+             "eval_translation_length": 707002,
+             "step": 80000
+         },
+         {
+             "epoch": 1.28,
+             "learning_rate": 9.947845785448258e-06,
+             "loss": 0.0672,
+             "step": 82000
+         },
+         {
+             "epoch": 1.31,
+             "learning_rate": 9.929006251362937e-06,
+             "loss": 0.0692,
+             "step": 84000
+         },
+         {
+             "epoch": 1.34,
+             "learning_rate": 9.907279613454706e-06,
+             "loss": 0.0682,
+             "step": 86000
+         },
+         {
+             "epoch": 1.37,
+             "learning_rate": 9.882700272366437e-06,
+             "loss": 0.065,
+             "step": 88000
+         },
+         {
+             "epoch": 1.4,
+             "learning_rate": 9.855257991301662e-06,
+             "loss": 0.0648,
+             "step": 90000
+         },
+         {
+             "epoch": 1.4,
+             "eval_bleu": 0.9337440022630773,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0237202227163733,
+             "eval_loss": 0.054729994386434555,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9562602670349256,
+             "eval_rouge2": 0.926727457591136,
+             "eval_rougeL": 0.9561080754823015,
+             "eval_rougeLsum": 0.9560774432065933,
+             "eval_runtime": 347.9396,
+             "eval_samples_per_second": 123.214,
+             "eval_steps_per_second": 10.269,
+             "eval_translation_length": 705290,
+             "step": 90000
+         },
+         {
+             "epoch": 1.43,
+             "learning_rate": 9.824996220706527e-06,
+             "loss": 0.0649,
+             "step": 92000
+         },
+         {
+             "epoch": 1.46,
+             "learning_rate": 9.79190235716806e-06,
+             "loss": 0.067,
+             "step": 94000
+         },
+         {
+             "epoch": 1.49,
+             "learning_rate": 9.756028799505886e-06,
+             "loss": 0.068,
+             "step": 96000
+         },
+         {
+             "epoch": 1.52,
+             "learning_rate": 9.717380631513947e-06,
+             "loss": 0.0609,
+             "step": 98000
+         },
+         {
+             "epoch": 1.56,
+             "learning_rate": 9.675980400071303e-06,
+             "loss": 0.0641,
+             "step": 100000
+         },
+         {
+             "epoch": 1.56,
+             "eval_bleu": 0.9437228795566622,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0158429954074908,
+             "eval_loss": 0.05343015119433403,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9622428608075103,
+             "eval_rouge2": 0.9342448516446199,
+             "eval_rougeL": 0.9621362437715651,
+             "eval_rougeLsum": 0.9621409491590999,
+             "eval_runtime": 354.0159,
+             "eval_samples_per_second": 121.099,
+             "eval_steps_per_second": 10.093,
+             "eval_translation_length": 699863,
+             "step": 100000
+         },
+         {
+             "epoch": 1.59,
+             "learning_rate": 9.63182950403888e-06,
+             "loss": 0.0625,
+             "step": 102000
+         },
+         {
+             "epoch": 1.62,
+             "learning_rate": 9.58502194786445e-06,
+             "loss": 0.0617,
+             "step": 104000
+         },
+         {
+             "epoch": 1.65,
+             "learning_rate": 9.535491361586628e-06,
+             "loss": 0.0637,
+             "step": 106000
+         },
+         {
+             "epoch": 1.68,
+             "learning_rate": 9.483312176074826e-06,
+             "loss": 0.0615,
+             "step": 108000
+         },
+         {
+             "epoch": 1.71,
+             "learning_rate": 9.428542910099412e-06,
+             "loss": 0.061,
+             "step": 110000
+         },
+         {
+             "epoch": 1.71,
+             "eval_bleu": 0.9451141494751257,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.016459877958859,
+             "eval_loss": 0.0512821264564991,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.96485183292355,
+             "eval_rouge2": 0.937858740593392,
+             "eval_rougeL": 0.964753163918546,
+             "eval_rougeLsum": 0.9647545868693599,
+             "eval_runtime": 350.3148,
+             "eval_samples_per_second": 122.379,
+             "eval_steps_per_second": 10.199,
+             "eval_translation_length": 700288,
+             "step": 110000
+         },
+         {
+             "epoch": 1.74,
+             "learning_rate": 9.371219416280765e-06,
+             "loss": 0.0647,
+             "step": 112000
+         },
+         {
+             "epoch": 1.77,
+             "learning_rate": 9.311287160167118e-06,
+             "loss": 0.0619,
+             "step": 114000
+         },
+         {
+             "epoch": 1.8,
+             "learning_rate": 9.24886908181262e-06,
+             "loss": 0.0634,
+             "step": 116000
+         },
+         {
+             "epoch": 1.83,
+             "learning_rate": 9.18393918535506e-06,
+             "loss": 0.058,
+             "step": 118000
+         },
+         {
+             "epoch": 1.87,
+             "learning_rate": 9.116565986234595e-06,
+             "loss": 0.0616,
+             "step": 120000
+         },
+         {
+             "epoch": 1.87,
+             "eval_bleu": 0.9353441526892242,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0252529944204787,
+             "eval_loss": 0.04904291033744812,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9578546390239036,
+             "eval_rouge2": 0.9297894756880962,
+             "eval_rougeL": 0.9577587648786969,
+             "eval_rougeLsum": 0.957745334663435,
+             "eval_runtime": 360.6437,
+             "eval_samples_per_second": 118.874,
+             "eval_steps_per_second": 9.907,
+             "eval_translation_length": 706346,
+             "step": 120000
+         },
+         {
+             "epoch": 1.9,
+             "learning_rate": 9.04689527434882e-06,
+             "loss": 0.0588,
+             "step": 122000
+         },
+         {
+             "epoch": 1.93,
+             "learning_rate": 8.974795071500655e-06,
+             "loss": 0.0581,
+             "step": 124000
+         },
+         {
+             "epoch": 1.96,
+             "learning_rate": 8.90033821949533e-06,
+             "loss": 0.0571,
+             "step": 126000
+         },
+         {
+             "epoch": 1.99,
+             "learning_rate": 8.823603679804848e-06,
+             "loss": 0.0575,
+             "step": 128000
+         },
+         {
+             "epoch": 2.02,
+             "learning_rate": 8.744676297261531e-06,
+             "loss": 0.0501,
+             "step": 130000
+         },
+         {
+             "epoch": 2.02,
+             "eval_bleu": 0.9391629851344814,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0246375633574667,
+             "eval_loss": 0.04428655281662941,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9620654622397073,
+             "eval_rouge2": 0.9358278388686297,
+             "eval_rougeL": 0.9619903188380114,
+             "eval_rougeLsum": 0.9620056201386562,
+             "eval_runtime": 350.9058,
+             "eval_samples_per_second": 122.172,
+             "eval_steps_per_second": 10.182,
+             "eval_translation_length": 705922,
+             "step": 130000
+         },
+         {
+             "epoch": 2.05,
+             "learning_rate": 8.663523200213197e-06,
+             "loss": 0.0427,
+             "step": 132000
+         },
+         {
+             "epoch": 2.08,
+             "learning_rate": 8.580230708782164e-06,
+             "loss": 0.0437,
+             "step": 134000
+         },
+         {
+             "epoch": 2.11,
+             "learning_rate": 8.494890669233825e-06,
+             "loss": 0.0425,
+             "step": 136000
+         },
+         {
+             "epoch": 2.15,
+             "learning_rate": 8.407555964848785e-06,
+             "loss": 0.0419,
+             "step": 138000
+         },
+         {
+             "epoch": 2.18,
+             "learning_rate": 8.318144933677256e-06,
+             "loss": 0.0442,
+             "step": 140000
+         },
+         {
+             "epoch": 2.18,
+             "eval_bleu": 0.9410527501908049,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0222832492437746,
+             "eval_loss": 0.04393425211310387,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9622283105302971,
+             "eval_rouge2": 0.9363579229073962,
+             "eval_rougeL": 0.9621524453412997,
+             "eval_rougeLsum": 0.9621713486883273,
+             "eval_runtime": 352.9817,
+             "eval_samples_per_second": 121.454,
+             "eval_steps_per_second": 10.122,
+             "eval_translation_length": 704300,
+             "step": 140000
+         },
+         {
+             "epoch": 2.21,
+             "learning_rate": 8.226796199304702e-06,
+             "loss": 0.0445,
+             "step": 142000
+         },
+         {
+             "epoch": 2.24,
+             "learning_rate": 8.13361018519741e-06,
+             "loss": 0.0445,
+             "step": 144000
+         },
+         {
+             "epoch": 2.27,
+             "learning_rate": 8.0386440390423e-06,
+             "loss": 0.0421,
+             "step": 146000
+         },
+         {
+             "epoch": 2.3,
+             "learning_rate": 7.941809254954647e-06,
+             "loss": 0.0434,
+             "step": 148000
+         },
+         {
+             "epoch": 2.33,
+             "learning_rate": 7.843256537104586e-06,
+             "loss": 0.0418,
+             "step": 150000
+         },
+         {
+             "epoch": 2.33,
+             "eval_bleu": 0.9452514516559248,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0194572014143302,
+             "eval_loss": 0.04334454983472824,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9667034100796513,
+             "eval_rouge2": 0.9415234249395421,
+             "eval_rougeL": 0.966596537202159,
+             "eval_rougeLsum": 0.9665995305639754,
+             "eval_runtime": 350.4381,
+             "eval_samples_per_second": 122.335,
+             "eval_steps_per_second": 10.196,
+             "eval_translation_length": 702353,
+             "step": 150000
+         },
+         {
+             "epoch": 2.36,
+             "learning_rate": 7.743043437504057e-06,
+             "loss": 0.0437,
+             "step": 152000
+         },
+         {
+             "epoch": 2.39,
+             "learning_rate": 7.641279775661868e-06,
+             "loss": 0.0426,
+             "step": 154000
+         },
+         {
+             "epoch": 2.43,
+             "learning_rate": 7.538027276115405e-06,
+             "loss": 0.0412,
+             "step": 156000
+         },
+         {
+             "epoch": 2.46,
+             "learning_rate": 7.433190045507044e-06,
+             "loss": 0.0409,
+             "step": 158000
+         },
+         {
+             "epoch": 2.49,
+             "learning_rate": 7.326931900431675e-06,
+             "loss": 0.0419,
+             "step": 160000
+         },
+         {
+             "epoch": 2.49,
+             "eval_bleu": 0.9542889216747796,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.010262022678054,
+             "eval_loss": 0.04232573136687279,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9704801328109232,
+             "eval_rouge2": 0.9456205175624834,
+             "eval_rougeL": 0.970368187542237,
+             "eval_rougeLsum": 0.9703795940864277,
+             "eval_runtime": 357.9392,
+             "eval_samples_per_second": 119.772,
+             "eval_steps_per_second": 9.982,
+             "eval_translation_length": 696018,
+             "step": 160000
+         },
+         {
+             "epoch": 2.52,
+             "learning_rate": 7.219369030269242e-06,
+             "loss": 0.0421,
+             "step": 162000
+         },
+         {
+             "epoch": 2.55,
+             "learning_rate": 7.110456637268422e-06,
+             "loss": 0.0428,
+             "step": 164000
+         },
+         {
+             "epoch": 2.58,
+             "learning_rate": 7.000367166717425e-06,
+             "loss": 0.0401,
+             "step": 166000
+         },
+         {
+             "epoch": 2.61,
+             "learning_rate": 6.889110705881452e-06,
+             "loss": 0.0417,
+             "step": 168000
+         },
+         {
+             "epoch": 2.64,
+             "learning_rate": 6.776752160449367e-06,
+             "loss": 0.042,
+             "step": 170000
+         },
+         {
+             "epoch": 2.64,
+             "eval_bleu": 0.9447481654515599,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0186748491903599,
+             "eval_loss": 0.04124660789966583,
+             "eval_reference_length": 688948,
+             "eval_rouge1": 0.9616790997827227,
+             "eval_rouge2": 0.9356205732284686,
+             "eval_rougeL": 0.961519978707468,
+             "eval_rougeLsum": 0.9615393434549021,
+             "eval_runtime": 350.4802,
+             "eval_samples_per_second": 122.321,
+             "eval_steps_per_second": 10.195,
+             "eval_translation_length": 701814,
+             "step": 170000
+         },
+         {
+             "epoch": 2.67,
+             "learning_rate": 6.663300104771134e-06,
+             "loss": 0.0386,
+             "step": 172000
+         },
+         {
+             "epoch": 2.71,
+             "learning_rate": 6.548876728670889e-06,
+             "loss": 0.0442,
+             "step": 174000
+         },
+         {
+             "epoch": 2.74,
+             "learning_rate": 6.433664609478793e-06,
+             "loss": 0.0417,
+             "step": 176000
+         },
+         {
+             "epoch": 2.77,
+             "learning_rate": 6.3175003841713e-06,
+             "loss": 0.0418,
+             "step": 178000
+         },
+         {
+             "epoch": 2.8,
+             "learning_rate": 6.200566775740774e-06,
+             "loss": 0.0406,
+             "step": 180000
+         },
+         {
+             "epoch": 2.8,
+             "eval_bleu": 0.9487462287220352,
+             "eval_brevity_penalty": 1.0,
+             "eval_length_ratio": 1.0163989154479003,
+             "eval_loss": 0.04077766090631485,
+             "eval_reference_length": 688948,
846
+ "eval_rouge1": 0.9665622993166698,
847
+ "eval_rouge2": 0.9412909286045468,
848
+ "eval_rougeL": 0.9663690505539053,
849
+ "eval_rougeLsum": 0.9663768593347282,
850
+ "eval_runtime": 354.3716,
851
+ "eval_samples_per_second": 120.978,
852
+ "eval_steps_per_second": 10.083,
853
+ "eval_translation_length": 700246,
854
+ "step": 180000
855
+ },
+ {
+ "epoch": 2.83,
+ "learning_rate": 6.083109012991928e-06,
+ "loss": 0.0416,
+ "step": 182000
+ },
+ {
+ "epoch": 2.86,
+ "learning_rate": 5.964842802508876e-06,
+ "loss": 0.0404,
+ "step": 184000
+ },
+ {
+ "epoch": 2.89,
+ "learning_rate": 5.846013150999504e-06,
+ "loss": 0.0379,
+ "step": 186000
+ },
+ {
+ "epoch": 2.92,
+ "learning_rate": 5.726749225339994e-06,
+ "loss": 0.0432,
+ "step": 188000
+ },
+ {
+ "epoch": 2.95,
+ "learning_rate": 5.607001354527006e-06,
+ "loss": 0.0393,
+ "step": 190000
+ },
+ {
+ "epoch": 2.95,
+ "eval_bleu": 0.9547919266733801,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.0112867734575033,
+ "eval_loss": 0.04095698148012161,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9703758390841425,
+ "eval_rouge2": 0.9461704701896383,
+ "eval_rougeL": 0.9702899676023662,
+ "eval_rougeLsum": 0.9703172456040432,
+ "eval_runtime": 353.5109,
+ "eval_samples_per_second": 121.272,
+ "eval_steps_per_second": 10.107,
+ "eval_translation_length": 696724,
+ "step": 190000
+ },
+ {
+ "epoch": 2.99,
+ "learning_rate": 5.486959140088201e-06,
+ "loss": 0.0384,
+ "step": 192000
+ },
+ {
+ "epoch": 3.02,
+ "learning_rate": 5.366572586878771e-06,
+ "loss": 0.0354,
+ "step": 194000
+ },
+ {
+ "epoch": 3.05,
+ "learning_rate": 5.246032307677414e-06,
+ "loss": 0.0308,
+ "step": 196000
+ },
+ {
+ "epoch": 3.08,
+ "learning_rate": 5.125469288643505e-06,
+ "loss": 0.033,
+ "step": 198000
+ },
+ {
+ "epoch": 3.11,
+ "learning_rate": 5.004651973065896e-06,
+ "loss": 0.0316,
+ "step": 200000
+ },
+ {
+ "epoch": 3.11,
+ "eval_bleu": 0.9516992046244281,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.013291278877361,
+ "eval_loss": 0.04207807779312134,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9687692703791779,
+ "eval_rouge2": 0.9438025496007139,
+ "eval_rougeL": 0.9686298083794034,
+ "eval_rougeLsum": 0.9686054829671312,
+ "eval_runtime": 358.3539,
+ "eval_samples_per_second": 119.633,
+ "eval_steps_per_second": 9.971,
+ "eval_translation_length": 698105,
+ "step": 200000
+ },
+ {
+ "epoch": 3.14,
+ "learning_rate": 4.883831940867018e-06,
+ "loss": 0.0332,
+ "step": 202000
+ },
+ {
+ "epoch": 3.17,
+ "learning_rate": 4.763079747543336e-06,
+ "loss": 0.03,
+ "step": 204000
+ },
+ {
+ "epoch": 3.2,
+ "learning_rate": 4.642526169588442e-06,
+ "loss": 0.0313,
+ "step": 206000
+ },
+ {
+ "epoch": 3.23,
+ "learning_rate": 4.52218113759159e-06,
+ "loss": 0.0319,
+ "step": 208000
+ },
+ {
+ "epoch": 3.27,
+ "learning_rate": 4.402114859405383e-06,
+ "loss": 0.0313,
+ "step": 210000
+ },
+ {
+ "epoch": 3.27,
+ "eval_bleu": 0.952314893533454,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.0132433797616076,
+ "eval_loss": 0.04261790215969086,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9687049595684978,
+ "eval_rouge2": 0.9435984713700241,
+ "eval_rougeL": 0.9686312046722289,
+ "eval_rougeLsum": 0.9685702469811763,
+ "eval_runtime": 353.4407,
+ "eval_samples_per_second": 121.296,
+ "eval_steps_per_second": 10.109,
+ "eval_translation_length": 698072,
+ "step": 210000
+ },
+ {
+ "epoch": 3.3,
+ "learning_rate": 4.2823973802607795e-06,
+ "loss": 0.0322,
+ "step": 212000
+ },
+ {
+ "epoch": 3.33,
+ "learning_rate": 4.163038979034976e-06,
+ "loss": 0.0312,
+ "step": 214000
+ },
+ {
+ "epoch": 3.36,
+ "learning_rate": 4.0441693394762706e-06,
+ "loss": 0.0314,
+ "step": 216000
+ },
+ {
+ "epoch": 3.39,
+ "learning_rate": 3.925916882841615e-06,
+ "loss": 0.0332,
+ "step": 218000
+ },
+ {
+ "epoch": 3.42,
+ "learning_rate": 3.8083497076975863e-06,
+ "loss": 0.0302,
+ "step": 220000
+ },
+ {
+ "epoch": 3.42,
+ "eval_bleu": 0.9510751957840793,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.014170880821194,
+ "eval_loss": 0.04271363466978073,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9682283271173224,
+ "eval_rouge2": 0.9430416842293828,
+ "eval_rougeL": 0.9681608428660635,
+ "eval_rougeLsum": 0.9681434321637754,
+ "eval_runtime": 354.0647,
+ "eval_samples_per_second": 121.082,
+ "eval_steps_per_second": 10.091,
+ "eval_translation_length": 698711,
+ "step": 220000
+ },
+ {
+ "epoch": 3.45,
+ "learning_rate": 3.691418722192835e-06,
+ "loss": 0.0323,
+ "step": 222000
+ },
+ {
+ "epoch": 3.48,
+ "learning_rate": 3.5751932368863875e-06,
+ "loss": 0.0302,
+ "step": 224000
+ },
+ {
+ "epoch": 3.51,
+ "learning_rate": 3.4597997986064915e-06,
+ "loss": 0.0312,
+ "step": 226000
+ },
+ {
+ "epoch": 3.55,
+ "learning_rate": 3.3453057938715767e-06,
+ "loss": 0.0309,
+ "step": 228000
+ },
+ {
+ "epoch": 3.58,
+ "learning_rate": 3.2318345952925634e-06,
+ "loss": 0.0305,
+ "step": 230000
+ },
+ {
+ "epoch": 3.58,
+ "eval_bleu": 0.9477212328461616,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.017851855292417,
+ "eval_loss": 0.042060352861881256,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9686642355024724,
+ "eval_rouge2": 0.9435154406416524,
+ "eval_rougeL": 0.9685609023356937,
+ "eval_rougeLsum": 0.968565755092943,
+ "eval_runtime": 358.0746,
+ "eval_samples_per_second": 119.726,
+ "eval_steps_per_second": 9.978,
+ "eval_translation_length": 701247,
+ "step": 230000
+ },
+ {
+ "epoch": 3.61,
+ "learning_rate": 3.1194509023896597e-06,
+ "loss": 0.0342,
+ "step": 232000
+ },
+ {
+ "epoch": 3.64,
+ "learning_rate": 3.0081077877484786e-06,
+ "loss": 0.0297,
+ "step": 234000
+ },
+ {
+ "epoch": 3.67,
+ "learning_rate": 2.8978719026697843e-06,
+ "loss": 0.0308,
+ "step": 236000
+ },
+ {
+ "epoch": 3.7,
+ "learning_rate": 2.7889177879789993e-06,
+ "loss": 0.0317,
+ "step": 238000
+ },
+ {
+ "epoch": 3.73,
+ "learning_rate": 2.6812000664997108e-06,
+ "loss": 0.0317,
+ "step": 240000
+ },
+ {
+ "epoch": 3.73,
+ "eval_bleu": 0.9545413224765983,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.0115161086177766,
+ "eval_loss": 0.04142594709992409,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9706993144139382,
+ "eval_rouge2": 0.9462893219706924,
+ "eval_rougeL": 0.9706221623216551,
+ "eval_rougeLsum": 0.9705871778859747,
+ "eval_runtime": 352.9472,
+ "eval_samples_per_second": 121.466,
+ "eval_steps_per_second": 10.123,
+ "eval_translation_length": 696882,
+ "step": 240000
+ },
+ {
+ "epoch": 3.76,
+ "learning_rate": 2.574942125369937e-06,
+ "loss": 0.0298,
+ "step": 242000
+ },
+ {
+ "epoch": 3.79,
+ "learning_rate": 2.4699932979874153e-06,
+ "loss": 0.0319,
+ "step": 244000
+ },
+ {
+ "epoch": 3.83,
+ "learning_rate": 2.3665732796817735e-06,
+ "loss": 0.0314,
+ "step": 246000
+ },
+ {
+ "epoch": 3.86,
+ "learning_rate": 2.2646389981153643e-06,
+ "loss": 0.0312,
+ "step": 248000
+ },
+ {
+ "epoch": 3.89,
+ "learning_rate": 2.1643020903452345e-06,
+ "loss": 0.0298,
+ "step": 250000
+ },
+ {
+ "epoch": 3.89,
+ "eval_bleu": 0.9529006548919251,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.0129792088807863,
+ "eval_loss": 0.040760207921266556,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9693489479213169,
+ "eval_rouge2": 0.9444261412988928,
+ "eval_rougeL": 0.9692149741999472,
+ "eval_rougeLsum": 0.9692295166277185,
+ "eval_runtime": 355.5559,
+ "eval_samples_per_second": 120.575,
+ "eval_steps_per_second": 10.049,
+ "eval_translation_length": 697890,
+ "step": 250000
+ },
+ {
+ "epoch": 3.92,
+ "learning_rate": 2.0656700673466744e-06,
+ "loss": 0.032,
+ "step": 252000
+ },
+ {
+ "epoch": 3.95,
+ "learning_rate": 1.9687498973425523e-06,
+ "loss": 0.0288,
+ "step": 254000
+ },
+ {
+ "epoch": 3.98,
+ "learning_rate": 1.8735981224010946e-06,
+ "loss": 0.0301,
+ "step": 256000
+ },
+ {
+ "epoch": 4.01,
+ "learning_rate": 1.7802702529299903e-06,
+ "loss": 0.0285,
+ "step": 258000
+ },
+ {
+ "epoch": 4.04,
+ "learning_rate": 1.6888207352922886e-06,
+ "loss": 0.0249,
+ "step": 260000
+ },
+ {
+ "epoch": 4.04,
+ "eval_bleu": 0.9580735852306802,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.0084215354424426,
+ "eval_loss": 0.04249930381774902,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9725141407368192,
+ "eval_rouge2": 0.9482526619227644,
+ "eval_rougeL": 0.9723960508025267,
+ "eval_rougeLsum": 0.972420107341233,
+ "eval_runtime": 352.4144,
+ "eval_samples_per_second": 121.649,
+ "eval_steps_per_second": 10.139,
+ "eval_translation_length": 694750,
+ "step": 260000
+ },
+ {
+ "epoch": 4.07,
+ "learning_rate": 1.599258630916235e-06,
+ "loss": 0.0231,
+ "step": 262000
+ },
+ {
+ "epoch": 4.11,
+ "learning_rate": 1.5117690308052164e-06,
+ "loss": 0.0249,
+ "step": 264000
+ },
+ {
+ "epoch": 4.14,
+ "learning_rate": 1.4262278806001696e-06,
+ "loss": 0.0242,
+ "step": 266000
+ },
+ {
+ "epoch": 4.17,
+ "learning_rate": 1.3427737126740498e-06,
+ "loss": 0.0235,
+ "step": 268000
+ },
+ {
+ "epoch": 4.2,
+ "learning_rate": 1.261495379512549e-06,
+ "loss": 0.0236,
+ "step": 270000
+ },
+ {
+ "epoch": 4.2,
+ "eval_bleu": 0.9578225644568897,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.0083475095362786,
+ "eval_loss": 0.04301063343882561,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9727779850827751,
+ "eval_rouge2": 0.948452540502593,
+ "eval_rougeL": 0.9726976320174251,
+ "eval_rougeLsum": 0.9727011117301205,
+ "eval_runtime": 360.9368,
+ "eval_samples_per_second": 118.777,
+ "eval_steps_per_second": 9.899,
+ "eval_translation_length": 694699,
+ "step": 270000
+ },
+ {
+ "epoch": 4.23,
+ "learning_rate": 1.1823980452421706e-06,
+ "loss": 0.024,
+ "step": 272000
+ },
+ {
+ "epoch": 4.26,
+ "learning_rate": 1.105489964860681e-06,
+ "loss": 0.0228,
+ "step": 274000
+ },
+ {
+ "epoch": 4.29,
+ "learning_rate": 1.0308929099656145e-06,
+ "loss": 0.0236,
+ "step": 276000
+ },
+ {
+ "epoch": 4.32,
+ "learning_rate": 9.586469575615094e-07,
+ "loss": 0.025,
+ "step": 278000
+ },
+ {
+ "epoch": 4.35,
+ "learning_rate": 8.887219403201786e-07,
+ "loss": 0.0259,
+ "step": 280000
+ },
+ {
+ "epoch": 4.35,
+ "eval_bleu": 0.9543405944053881,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.0119820363801042,
+ "eval_loss": 0.04318338260054588,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9707381997207458,
+ "eval_rouge2": 0.9463171031333721,
+ "eval_rougeL": 0.9706594079415762,
+ "eval_rougeLsum": 0.9706575791884534,
+ "eval_runtime": 353.2216,
+ "eval_samples_per_second": 121.371,
+ "eval_steps_per_second": 10.115,
+ "eval_translation_length": 697203,
+ "step": 280000
+ },
+ {
+ "epoch": 4.39,
+ "learning_rate": 8.211953935368261e-07,
+ "loss": 0.0236,
+ "step": 282000
+ },
+ {
+ "epoch": 4.42,
+ "learning_rate": 7.560747672425183e-07,
+ "loss": 0.0243,
+ "step": 284000
+ },
+ {
+ "epoch": 4.45,
+ "learning_rate": 6.934324737735693e-07,
+ "loss": 0.0245,
+ "step": 286000
+ },
+ {
+ "epoch": 4.48,
+ "learning_rate": 6.333345238313842e-07,
+ "loss": 0.0241,
+ "step": 288000
+ },
+ {
+ "epoch": 4.51,
+ "learning_rate": 5.757840343120758e-07,
+ "loss": 0.0236,
+ "step": 290000
+ },
+ {
+ "epoch": 4.51,
+ "eval_bleu": 0.9554825445943746,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.0108963230896961,
+ "eval_loss": 0.04305826872587204,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9714257759381628,
+ "eval_rouge2": 0.9469647669616812,
+ "eval_rougeL": 0.971368415701727,
+ "eval_rougeLsum": 0.9713609918743913,
+ "eval_runtime": 360.9244,
+ "eval_samples_per_second": 118.781,
+ "eval_steps_per_second": 9.9,
+ "eval_translation_length": 696455,
+ "step": 290000
+ },
+ {
+ "epoch": 4.54,
+ "learning_rate": 5.208145794830483e-07,
+ "loss": 0.023,
+ "step": 292000
+ },
+ {
+ "epoch": 4.57,
+ "learning_rate": 4.684326956861246e-07,
+ "loss": 0.0248,
+ "step": 294000
+ },
+ {
+ "epoch": 4.6,
+ "learning_rate": 4.186971195842365e-07,
+ "loss": 0.0234,
+ "step": 296000
+ },
+ {
+ "epoch": 4.63,
+ "learning_rate": 3.716597523194587e-07,
+ "loss": 0.023,
+ "step": 298000
+ },
+ {
+ "epoch": 4.67,
+ "learning_rate": 3.27322503410189e-07,
+ "loss": 0.0251,
+ "step": 300000
+ },
+ {
+ "epoch": 4.67,
+ "eval_bleu": 0.9555293197464279,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.010778752532847,
+ "eval_loss": 0.04303622618317604,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9711184557484207,
+ "eval_rouge2": 0.9465872069219559,
+ "eval_rougeL": 0.971016190214522,
+ "eval_rougeLsum": 0.9710369956602919,
+ "eval_runtime": 356.11,
+ "eval_samples_per_second": 120.387,
+ "eval_steps_per_second": 10.033,
+ "eval_translation_length": 696374,
+ "step": 300000
+ },
+ {
+ "epoch": 4.7,
+ "learning_rate": 2.857112386772626e-07,
+ "loss": 0.025,
+ "step": 302000
+ },
+ {
+ "epoch": 4.73,
+ "learning_rate": 2.468314855157933e-07,
+ "loss": 0.0246,
+ "step": 304000
+ },
+ {
+ "epoch": 4.76,
+ "learning_rate": 2.1072744891572238e-07,
+ "loss": 0.0244,
+ "step": 306000
+ },
+ {
+ "epoch": 4.79,
+ "learning_rate": 1.7745211624187464e-07,
+ "loss": 0.0235,
+ "step": 308000
+ },
+ {
+ "epoch": 4.82,
+ "learning_rate": 1.469728454791536e-07,
+ "loss": 0.0241,
+ "step": 310000
+ },
+ {
+ "epoch": 4.82,
+ "eval_bleu": 0.9564463934340444,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.0099470497047673,
+ "eval_loss": 0.04298330098390579,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9718286385746833,
+ "eval_rouge2": 0.9474516377136643,
+ "eval_rougeL": 0.9717457141369523,
+ "eval_rougeLsum": 0.9717098073885057,
+ "eval_runtime": 351.9107,
+ "eval_samples_per_second": 121.824,
+ "eval_steps_per_second": 10.153,
+ "eval_translation_length": 695801,
+ "step": 310000
+ },
+ {
+ "epoch": 4.85,
+ "learning_rate": 1.1931165304333803e-07,
+ "loss": 0.0229,
+ "step": 312000
+ },
+ {
+ "epoch": 4.88,
+ "learning_rate": 9.450064516007773e-08,
+ "loss": 0.0245,
+ "step": 314000
+ },
+ {
+ "epoch": 4.91,
+ "learning_rate": 7.256456591191674e-08,
+ "loss": 0.0241,
+ "step": 316000
+ },
+ {
+ "epoch": 4.94,
+ "learning_rate": 5.351190850388044e-08,
+ "loss": 0.0247,
+ "step": 318000
+ },
+ {
+ "epoch": 4.98,
+ "learning_rate": 3.7327348382793504e-08,
+ "loss": 0.025,
+ "step": 320000
+ },
+ {
+ "epoch": 4.98,
+ "eval_bleu": 0.9562524667691357,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.010135743191068,
+ "eval_loss": 0.04290972650051117,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9716126931407336,
+ "eval_rouge2": 0.9473094321074684,
+ "eval_rougeL": 0.9715449461188315,
+ "eval_rougeLsum": 0.9715191110303505,
+ "eval_runtime": 358.3002,
+ "eval_samples_per_second": 119.651,
+ "eval_steps_per_second": 9.972,
+ "eval_translation_length": 695931,
+ "step": 320000
+ },
+ {
+ "epoch": 5.01,
+ "learning_rate": 2.4046767073176436e-08,
+ "loss": 0.0231,
+ "step": 322000
+ },
+ {
+ "epoch": 5.04,
+ "learning_rate": 1.3664633482581291e-08,
+ "loss": 0.0213,
+ "step": 324000
+ },
+ {
+ "epoch": 5.07,
+ "learning_rate": 6.200393505542135e-09,
+ "loss": 0.0229,
+ "step": 326000
+ },
+ {
+ "epoch": 5.1,
+ "learning_rate": 1.6450126084593953e-09,
+ "loss": 0.0226,
+ "step": 328000
+ },
+ {
+ "epoch": 5.13,
+ "learning_rate": 5.612241452679357e-12,
+ "loss": 0.0216,
+ "step": 330000
+ },
+ {
+ "epoch": 5.13,
+ "eval_bleu": 0.956286137798364,
+ "eval_brevity_penalty": 1.0,
+ "eval_length_ratio": 1.0101197768191503,
+ "eval_loss": 0.04293430969119072,
+ "eval_reference_length": 688948,
+ "eval_rouge1": 0.9716103880266641,
+ "eval_rouge2": 0.9473106979225123,
+ "eval_rougeL": 0.9715446246919781,
+ "eval_rougeLsum": 0.971525437075219,
+ "eval_runtime": 356.663,
+ "eval_samples_per_second": 120.2,
+ "eval_steps_per_second": 10.018,
+ "eval_translation_length": 695920,
+ "step": 330000
+ },
+ {
+ "epoch": 5.13,
+ "step": 330000,
+ "total_flos": 3.0484200532475904e+16,
+ "train_loss": 0.08529333166642622,
+ "train_runtime": 78966.8059,
+ "train_samples_per_second": 25.074,
+ "train_steps_per_second": 4.179
+ }
+ ],
+ "max_steps": 330000,
+ "num_train_epochs": 6,
+ "total_flos": 3.0484200532475904e+16,
+ "trial_name": null,
+ "trial_params": null
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cce437cedae8c24f6d18c98845dc6e0e47724da69c7957d1573301b1d6e6301b
+ size 3695
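The JSON above is a Hugging Face `Trainer` log history: training entries carry `loss`/`learning_rate`, while evaluation entries carry `eval_bleu` and ROUGE scores every 10,000 steps. As a minimal sketch of consuming such a log, the snippet below finds the evaluation with the highest `eval_bleu`. It embeds a three-entry excerpt copied from the log above; loading the full file (commonly named `trainer_state.json`, an assumption — this commit chunk never names it) would work the same way via `json.load(...)["log_history"]`.

```python
import json

# Excerpt of the log history above; a real run would load the full file, e.g.
# json.load(open("trainer_state.json"))["log_history"]  (filename assumed).
log_history = json.loads("""
[
  {"epoch": 3.89, "eval_bleu": 0.9529006548919251, "step": 250000},
  {"epoch": 4.04, "eval_bleu": 0.9580735852306802, "step": 260000},
  {"epoch": 4.2,  "eval_bleu": 0.9578225644568897, "step": 270000}
]
""")

# Keep only entries that carry eval metrics (plain training-loss entries lack them).
evals = [entry for entry in log_history if "eval_bleu" in entry]
best = max(evals, key=lambda entry: entry["eval_bleu"])
print(f"best checkpoint: step {best['step']} (BLEU {best['eval_bleu']:.4f})")
# → best checkpoint: step 260000 (BLEU 0.9581)
```

On this excerpt (and in the full log above), step 260000 has the highest BLEU, slightly above the final step-330000 score of 0.9563.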