---
license: bigscience-openrail-m
datasets:
- apcl/so13m
---
# Jam_so
Jam_so is a GPT2-like model for research in fine-grained analysis of Java source code. It operates at the level of methods, statements, and variables, and serves as a foundation for downstream tasks such as code completion, comment generation, and automated bug repair.

---

## Jam_so Training Details

- We trained the jam_so model using the training procedure from Daniel Grittner's [NanoGPT-LoRA](https://github.com/danielgrittner/nanoGPT-LoRA).

- The dataset used to train our model is our own [so13m dataset](https://huggingface.co/datasets/apcl/so13m), processed from 13 million StackOverflow posts drawn from a [Stack Exchange data dump](https://archive.org/details/stackexchange) covering January 2014 through December 2022.

- We train the model on the [training set](https://huggingface.co/datasets/apcl/so13m/blob/main/train.bin) for 1 epoch, roughly 300,000 training iterations; see the data-loading sketch after this list.

- Our [GitHub repo](https://github.com/apcl-research/jam/blob/main) contains the code for re-training using the [raw data](https://huggingface.co/datasets/apcl/so13m/blob/main/so13m.pkl).
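
Below is a minimal sketch of how `train.bin` could be read, assuming it follows the NanoGPT convention of a flat binary array of uint16 GPT-2 BPE token ids; the dataset repo and file name come from the links above, while the uint16 layout and the `np.memmap` pattern are assumptions carried over from NanoGPT.

```python
import numpy as np
from huggingface_hub import hf_hub_download

# Fetch the pre-tokenized training split from the so13m dataset repo.
path = hf_hub_download(
    repo_id="apcl/so13m",
    filename="train.bin",
    repo_type="dataset",
)

# NanoGPT-style bins are flat arrays of uint16 token ids;
# memmap avoids loading the whole file into RAM.
data = np.memmap(path, dtype=np.uint16, mode="r")
print(f"{len(data):,} tokens")

# Sample one training example of block_size contiguous tokens.
block_size = 256  # matches the context length in the table below
i = np.random.randint(0, len(data) - block_size - 1)
x = data[i : i + block_size].astype(np.int64)          # input tokens
y = data[i + 1 : i + 1 + block_size].astype(np.int64)  # next-token targets
```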

| Hyperparameter | Description | Value |
| -------------- | ----------- | ----- |
| e | embedding dimensions | 1024 |
| L | number of layers | 24 |
| h | attention heads | 16 |
| c | block size / context length | 256 |
| b | batch size | 4 |
| a | accumulation steps | 32 |
| d | dropout | 0.20 |
| r | learning rate | 3e-5 |
| y | weight decay | 1e-1 |
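
These values map directly onto a NanoGPT-style training configuration. The sketch below shows how they might appear in a config file; the variable names follow NanoGPT conventions, and the effective-batch arithmetic in the comment is an inference from the table, not something stated in the card.

```python
# Hypothetical NanoGPT-style config mirroring the table above.
n_embd = 1024                     # e: embedding dimensions
n_layer = 24                      # L: number of transformer layers
n_head = 16                       # h: attention heads
block_size = 256                  # c: context length in tokens
batch_size = 4                    # b: sequences per micro-batch
gradient_accumulation_steps = 32  # a: micro-batches per optimizer step
dropout = 0.20                    # d
learning_rate = 3e-5              # r
weight_decay = 1e-1               # y

# Effective tokens per optimizer step:
# batch_size * gradient_accumulation_steps * block_size
# = 4 * 32 * 256 = 32,768 tokens
```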

We trained our model using a single NVIDIA A5000 GPU.

---
## Jam Projects

Current projects using the jam_so pre-trained model can be found at our GitHub repository:

https://github.com/apcl-research/jam