JingyaoLi committed (verified) · Commit 359c183 · 1 Parent(s): a6de97b

Upload 5 files
.gitattributes CHANGED
@@ -33,4 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
- tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ imgs/apps.png filter=lfs diff=lfs merge=lfs -text
+ imgs/codecontests.png filter=lfs diff=lfs merge=lfs -text
+ imgs/reflection.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,57 @@
  ---
- license: apache-2.0
+ license: bigscience-openrail-m
+ metrics:
+ - code_eval
+ library_name: transformers
+ tags:
+ - code
  ---
+
+ <p style="font-size:28px;" align="center">
+ 🏠 MoTCoder
+ </p>
+
+ <p align="center">
+ • 🤗 <a href="https://huggingface.co/datasets/JingyaoLi/MoTCode-Data" target="_blank">Data</a> • 🤗 <a href="https://huggingface.co/JingyaoLi/MoTCoder-15B-v1.0" target="_blank">Model</a> • 🐱 <a href="https://github.com/dvlab-research/MoTCoder" target="_blank">Code</a> • 📃 <a href="https://arxiv.org/abs/2312.15960" target="_blank">Paper</a><br>
+ </p>
+
+ [![PWC](https://img.shields.io/endpoint?url=https%3A%2F%2Fpaperswithcode.com%2Fbadge%2Fmotcoder-elevating-large-language-models-with%2Fcode-generation-on-apps%3Fmetric%3DIntroductory%2520Pass%25401)](https://paperswithcode.com/sota/code-generation-on-apps?metric=Introductory%20Pass%401/motcoder-elevating-large-language-models-with)
+ [![PWC](https://img.shields.io/endpoint?url=https%3A%2F%2Fpaperswithcode.com%2Fbadge%2Fmotcoder-elevating-large-language-models-with%2Fcode-generation-on-codecontests%3Fmetric%3DTest%2520Set%2520pass%25401)](https://paperswithcode.com/sota/code-generation-on-codecontests?metric=Test%20Set%20pass%401)
+
+ Large Language Models (LLMs) have shown impressive capabilities on straightforward programming tasks, but their performance tends to falter on more challenging problems. We observe that conventional models often generate solutions as monolithic code blocks, which limits their effectiveness on intricate problems. To overcome this limitation, we present the Module-of-Thought Coder (MoTCoder), a framework for MoT instruction tuning designed to promote the decomposition of tasks into logical sub-tasks and sub-modules. Our investigations reveal that, by cultivating and utilizing sub-modules, MoTCoder significantly improves both the modularity and correctness of the generated solutions, yielding substantial pass@1 improvements of 2.4% on APPS and 4.5% on CodeContests. MoTCoder also achieves significant improvements in self-correction, surpassing the current SOTA by 3.3%. Additionally, we analyze the relationship between problem complexity and optimal module decomposition and evaluate the maintainability index, confirming that code generated by MoTCoder is easier to understand and modify, which benefits long-term code maintenance and evolution. Our code is available at https://github.com/dvlab-research/MoTCoder.
+
+ <div style="text-align: center;">
+ <img src="./imgs/impression.png" alt="impression" />
+ </div>
+
+ ## Performance
+
+ ### APPS
+ <div style="text-align: center;">
+ <img src="./imgs/apps.png" alt="Performance on APPS" />
+ </div>
+
+ ### CodeContests
+ <div style="text-align: center;">
+ <img src="./imgs/codecontests.png" alt="Performance on CodeContests" width="500px" />
+ </div>
+
+ ### Reflection
+ <div style="text-align: center;">
+ <img src="./imgs/reflection.png" alt="Performance on Reflection" />
+ </div>
+
+ ## Citation
+ If you find our work useful, please consider citing it.
+ ```
+ @misc{li2025motcoderelevatinglargelanguage,
+       title={MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks},
+       author={Jingyao Li and Pengguang Chen and Bin Xia and Hong Xu and Jiaya Jia},
+       year={2025},
+       eprint={2312.15960},
+       archivePrefix={arXiv},
+       primaryClass={cs.LG},
+       url={https://arxiv.org/abs/2312.15960},
+ }
+ ```
+
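The card reports pass@1 via the `code_eval` metric. As background, pass@k is conventionally computed with the unbiased estimator 1 − C(n−c, k)/C(n, k) over n generated samples of which c pass the tests; the sketch below is that standard estimator, not the card's own evaluation code, and the sample numbers in it are illustrative.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples drawn (without replacement) from n generations, c of which
    are correct, passes. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Too few incorrect samples to fill a size-k draw: every draw
        #必然 contains at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With one sample per problem (n=1, k=1), pass@1 reduces to the plain
# fraction of problems solved on the first try.
print(pass_at_k(1, 1, 1))   # -> 1.0
print(pass_at_k(10, 3, 1))  # 3 of 10 samples correct: estimate near 0.3
```

Averaging this quantity over all benchmark problems gives the headline pass@1 number.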
imgs/apps.png ADDED

Git LFS Details

  • SHA256: 5928ef492233cdb3c7f24840c0b64f34d8e997feb3c8f5ae73b985341a86ac47
  • Pointer size: 131 Bytes
  • Size of remote file: 929 kB
imgs/codecontests.png ADDED

Git LFS Details

  • SHA256: 371476075f039b6efec83f90801da423fe1c93658fd3af9270ed12476b3c8ed0
  • Pointer size: 131 Bytes
  • Size of remote file: 389 kB
imgs/impression.png ADDED
imgs/reflection.png ADDED

Git LFS Details

  • SHA256: 1d1f9defcb6663ee09cc2cfe386200e88cd31a26d4596bffacd3f7d815084235
  • Pointer size: 131 Bytes
  • Size of remote file: 185 kB