Update README.md
README.md
CHANGED
@@ -6,7 +6,33 @@ datasets:
# Jam-so
Jam-so is a GPT2-like model for research in fine-grained Java analysis. It analyzes Java source code at the level of methods, statements, and variables, and is intended as a foundation for downstream tasks such as code completion, comment generation, and automated bug repair.

-
-## Epochs: One
-## Iterations : ~300,000
+---
+
+## Jam-so Training Details
+
+- We trained the Jam model using the training procedures from Daniel Grittner's [NanoGPT-LoRA](https://github.com/danielgrittner/nanoGPT-LoRA).
+
+- The dataset used to train our model is our own [so13m dataset](https://huggingface.co/datasets/apcl/so13m), processed from 13 million StackOverflow posts selected from a [Stack Exchange data dump](https://archive.org/details/stackexchange) covering posts from January 2014 to December 2022.
+
+- We train the model on the [training set](https://huggingface.co/datasets/apcl/so13m/blob/main/train.bin) for 1 epoch, roughly 300,000 training iterations.
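As an illustration of what training on this data involves, here is a minimal sketch of sampling a batch from train.bin. It assumes the file follows the usual nanoGPT binary layout (a flat array of uint16 token IDs); that layout is an assumption based on the NanoGPT-LoRA tooling linked above, not something stated in this card.

```python
import numpy as np
import torch

# Sketch only: assumes train.bin is a flat array of uint16 token IDs,
# the layout produced by nanoGPT-style data preparation scripts.
block_size = 256   # context length (c in the hyperparameter table below)
batch_size = 4     # batch size (b in the hyperparameter table below)

data = np.memmap("train.bin", dtype=np.uint16, mode="r")

# Pick random starting offsets and build (input, next-token target) pairs.
ix = torch.randint(len(data) - block_size, (batch_size,))
x = torch.stack([torch.from_numpy(data[i:i + block_size].astype(np.int64)) for i in ix])
y = torch.stack([torch.from_numpy(data[i + 1:i + 1 + block_size].astype(np.int64)) for i in ix])
# x and y both have shape (batch_size, block_size)
```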
+
+| Hyperparameter | Description | Value |
+| -------------- | ----------- | ----- |
+| e | embedding dimensions | 1024 |
+| L | number of layers | 24 |
+| h | attention heads | 16 |
+| c | block size / context length | 256 |
+| b | batch size | 4 |
+| a | accumulation steps | 32 |
+| d | dropout | 0.20 |
+| r | learning rate | 3e-5 |
+| y | weight decay | 1e-1 |
+
+We train our models using a single NVIDIA A5000 GPU. Our [GitHub repo](https://github.com/apcl-research/jam/blob/main) contains the code for re-training using the [raw data](https://huggingface.co/datasets/apcl/so13m/blob/main/so13m.pkl).
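For readers who want to relate the table above to the NanoGPT-LoRA training script, the sketch below restates those hyperparameters as nanoGPT-style config variables. The variable names are nanoGPT conventions assumed for illustration; this is not the exact configuration file used to train Jam-so.

```python
# Illustrative nanoGPT-style config mirroring the hyperparameter table above.
n_embd = 1024                     # e: embedding dimensions
n_layer = 24                      # L: number of layers
n_head = 16                       # h: attention heads
block_size = 256                  # c: block size / context length
batch_size = 4                    # b: per-step batch size
gradient_accumulation_steps = 32  # a: accumulation steps
dropout = 0.20                    # d: dropout
learning_rate = 3e-5              # r: learning rate
weight_decay = 1e-1               # y: weight decay
max_iters = 300_000               # roughly one epoch over the so13m training split
```

Under these settings each optimizer step accumulates 4 x 32 = 128 sequences of 256 tokens, i.e. about 32k tokens per weight update.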
+
+---
+## Jam Projects
+
+Current projects using the Jam pre-trained model can be found in our GitHub repository:

+https://github.com/apcl-research/jam