aakashba committed
Commit c7bbc5f · 1 Parent(s): a62763b

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -1,2 +1,2 @@
  # Jam-so
- Jam-so is a GPT2-like model for research in fine-grained Java analysis. It is intended for fine-grained analysis of Java source code at the level of methods, statements, and variables, as a foundation for downstream tasks like code completion, comment generation, and automated bug repair. This model is trained on so13m only and trained for one epoch, which is ∼300,000 iterations.
+ Jam-so is a GPT2-like model for research in fine-grained Java analysis. It is intended for fine-grained analysis of Java source code at the level of methods, statements, and variables, as a foundation for downstream tasks like code completion, comment generation, and automated bug repair. This model is trained on the [so13m dataset](https://huggingface.co/datasets/apcl/so13m) for one epoch, which is ∼300,000 iterations.
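
For context, a minimal sketch of how a GPT2-like checkpoint such as this could be loaded from the Hub for a downstream task like code completion. The repository id `apcl/jam_so` is a hypothetical placeholder (the page above does not give the full model id), and the example assumes the weights are published in a transformers-compatible format; neither assumption is confirmed by this commit.

```python
# Minimal sketch: loading a GPT2-like checkpoint from the Hugging Face Hub.
# "apcl/jam_so" is a hypothetical repository id, and this assumes the weights
# are stored in a transformers-compatible format (not confirmed by the README).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "apcl/jam_so"  # placeholder id for the Jam-so model

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Autocomplete a Java method body, one of the downstream tasks the README mentions.
prompt = "public int max(int a, int b) {"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```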