
aakashba committed
Commit a15b6ac · 1 Parent(s): c7bbc5f

Update README.md

Files changed (1)
  1. README.md +6 -1
README.md CHANGED
@@ -1,2 +1,7 @@
  # Jam-so
- Jam-so is a GPT2-like model for research in fine-grained Java analysis. It is intended for fine-grained analysis of Java source code at the level of methods, statements, and variables, as a foundation for downstream tasks like code completion, comment generation, and automated bug repair. This model is trained on [so13m dataset](https://huggingface.co/datasets/apcl/so13m) and trained for one epoch, which is ∼300,000 iterations.
+ Jam-so is a GPT2-like model for research in fine-grained analysis of Java source code at the level of methods, statements, and variables, intended as a foundation for downstream tasks such as code completion, comment generation, and automated bug repair.
+
+
+ ## Dataset: [so13m dataset](https://huggingface.co/datasets/apcl/so13m)
+ ## Epochs: One
+ ## Iterations: ~300,000
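
As a rough illustration of how a GPT2-like checkpoint such as Jam-so might be tried out, the sketch below loads a causal language model with the Hugging Face `transformers` classes and completes a Java snippet. The repo id `apcl/jam-so` and the assumption that the weights are published in a transformers-compatible GPT-2 format are not stated in this commit; the Jam project may instead require its own loading and training scripts.

```python
# Minimal usage sketch, not part of the commit above.
# Assumptions: the checkpoint is transformers-compatible and hosted at
# "apcl/jam-so" (hypothetical repo id; adjust to the actual location).
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "apcl/jam-so"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Complete a Java method body from a short prompt.
prompt = "public static int max(int a, int b) {"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```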