saridormi committed · Commit 8bcfccf · 1 Parent(s): 0824405

Add model card

Files changed (1): README.md (+58, -0)
---
language:
- code
- en
license: apache-2.0
tags:
- commit_message_generation
- code
datasets:
- saridormi/commit-chronicle
pipeline_tag: text2text-generation
---

# CMG/CMC: RACE (without history)

This is the checkpoint for the [RACE](https://aclanthology.org/2022.emnlp-main.372.pdf) model, fine-tuned for the commit message generation (and/or completion) task as part of the paper "From Commit Message Generation to History-Aware Commit Message Completion", ASE 2023.

## Details

> 🔍 For further details, please refer to:
> * **Paper**: TODO
> * **Repository**: [https://github.com/JetBrains-Research/commit_message_generation](https://github.com/JetBrains-Research/commit_message_generation)

* This model is based on the fine-tuned CodeT5 checkpoint [`JetBrains-Research/cmg-codet5-with-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-with-history) and uses the RACE architecture introduced in 📜 [RACE: Retrieval-Augmented Commit Message Generation](https://aclanthology.org/2022.emnlp-main.372.pdf).
* Note: this model requires a custom model class. Check [our implementation](https://github.com/JetBrains-Research/commit_message_generation/blob/appendix_cmg/src/model/configurations/utils/race.py) or [the replication package](https://github.com/DeepSoftwareAnalytics/RACE) provided by the RACE authors.
* This model was trained with commit diffs, WITHOUT commit message history.
* This model was trained on the CommitChronicle dataset introduced in our study.
* Our hyperparameter setting is mostly based on 📜 [RACE: Retrieval-Augmented Commit Message Generation](https://aclanthology.org/2022.emnlp-main.372/). The exact values are provided below:

| Hyperparameter             | Value |
|:--------------------------:|:-----:|
| Encoder context max length | 512 |
| Decoder context max length | 512 |
| Number of training epochs  | 1 |
| Batch size                 | 32 |
| Optimizer                  | [AdamW](https://pytorch.org/docs/1.12/generated/torch.optim.AdamW.html?highlight=adamw#torch.optim.AdamW) |
| Warmup                     | [Linear](https://huggingface.co/docs/transformers/v4.21.3/en/main_classes/optimizer_schedules#transformers.get_linear_schedule_with_warmup) |
| Number of warmup steps     | 100 |
| Peak learning rate         | 0.00002 |
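As a rough illustration of the warmup settings above, the linear schedule (peak learning rate 2e-5, 100 warmup steps) can be sketched in plain Python. Note that `total_steps` here is an illustrative assumption, not a value from our training run; the actual step count depends on dataset size and batch size.

```python
# Sketch of a linear warmup + linear decay learning-rate schedule using the
# values from the table above (peak LR 2e-5, 100 warmup steps).
# `total_steps` is a made-up illustration, not the actual training length.
def linear_warmup_lr(step: int, peak_lr: float = 2e-5,
                     warmup_steps: int = 100, total_steps: int = 10_000) -> float:
    if step < warmup_steps:
        # linear warmup: 0 -> peak_lr over the first `warmup_steps` steps
        return peak_lr * step / warmup_steps
    # linear decay: peak_lr -> 0 between warmup_steps and total_steps
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / (total_steps - warmup_steps)

lr_mid_warmup = linear_warmup_lr(50)   # halfway through warmup, ~1e-5
lr_at_peak = linear_warmup_lr(100)     # warmup just finished, peak 2e-5
```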


## Available checkpoints

We also released checkpoints for other models fine-tuned as part of our study.

* Models trained *with commit message history*:
  * **CodeT5:** 🤗 [`JetBrains-Research/cmg-codet5-with-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-with-history)
  * **CodeReviewer:** 🤗 [`JetBrains-Research/cmg-codereviewer-with-history`](https://huggingface.co/JetBrains-Research/cmg-codereviewer-with-history)
  * **RACE:** 🤗 [`JetBrains-Research/cmg-race-with-history`](https://huggingface.co/JetBrains-Research/cmg-race-with-history)
* Models trained *without commit message history*:
  * **CodeT5:** 🤗 [`JetBrains-Research/cmg-codet5-without-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-without-history)
  * **CodeReviewer:** 🤗 [`JetBrains-Research/cmg-codereviewer-without-history`](https://huggingface.co/JetBrains-Research/cmg-codereviewer-without-history)
  * **RACE:** 🤗 [`JetBrains-Research/cmg-race-without-history`](https://huggingface.co/JetBrains-Research/cmg-race-without-history) (this model)
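The idea behind the RACE checkpoints is retrieval augmentation: for a new diff, a similar past (diff, message) pair is retrieved and fed to the seq2seq model alongside the input. The toy sketch below illustrates only that retrieval-then-concatenate flow; the token-overlap similarity, the example corpus, and the `[SEP]` joining are simplifying assumptions (the actual RACE model retrieves with dense encoder similarity, as described in the paper).

```python
# Toy illustration of the retrieval step behind RACE: given a new diff,
# find the most similar past (diff, message) pair and combine both into one
# model input. The corpus and the token-overlap similarity are made up for
# this sketch; real RACE uses dense encoder-based similarity.
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two whitespace-tokenized strings."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

def retrieve(diff: str, corpus: list[tuple[str, str]]) -> tuple[str, str]:
    """Return the (diff, message) pair whose diff is most similar to `diff`."""
    return max(corpus, key=lambda pair: jaccard(diff, pair[0]))

corpus = [
    ("- timeout = 30 + timeout = 60", "Increase default timeout"),
    ("+ import logging + logger = logging.getLogger(__name__)", "Add logging setup"),
]
new_diff = "- timeout = 30 + timeout = 90"
similar_diff, similar_msg = retrieve(new_diff, corpus)
# The generator then conditions on the new diff plus the retrieved exemplar:
model_input = f"{new_diff} [SEP] {similar_diff} [SEP] {similar_msg}"
```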

## Citation

```
TODO
```