maohaos2 committed
Commit ef80e4d · verified · 1 Parent(s): bb566ac

Update README.md

Files changed (1)
  1. README.md +23 -1
README.md CHANGED
@@ -7,4 +7,26 @@ sdk: static
  pinned: false
  ---
 
- Edit this `README.md` markdown file to author your organization card.
+ # **Introduction**
+ We aim to advance LLM reasoning with autoregressive search capabilities, i.e., a single LLM that performs an extended reasoning process with self-reflection and self-exploration of new strategies.
+ We achieve this through our proposed Chain-of-Action-Thought (COAT) reasoning and a new post-training paradigm: 1) a small-scale format tuning (FT) stage to internalize the COAT reasoning format, and 2) a large-scale self-improvement
+ stage leveraging reinforcement learning (RL). Our approach results in Satori, a 7B LLM trained from an open-source base model (Qwen-2.5-Math-7B) on open-source data (OpenMathInstruct-2 and NuminaMath). Key features of Satori include:
+ - Capable of self-reflection and self-exploration without external guidance.
+ - Achieves state-of-the-art reasoning performance mainly through self-improvement (RL).
+ - Exhibits transferable reasoning capabilities on unseen domains beyond math.
+
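+ A minimal inference sketch with Hugging Face `transformers` is shown below. The repo id, prompt, and generation settings are illustrative placeholders rather than the exact released configuration; adjust them to the published Satori checkpoint.
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ # Hypothetical repo id used for illustration; replace with the released Satori checkpoint.
+ model_id = "Satori-reasoning/Satori-7B"
+ 
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+ 
+ # A simple math question; the model is expected to emit an extended COAT-style
+ # reasoning trace (including self-reflection steps) before its final answer.
+ prompt = "Solve the equation 3x + 5 = 20 and report the value of x."
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ 
+ outputs = model.generate(**inputs, max_new_tokens=1024)
+ print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
+ ```
+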
+ # **Resources**
+ Please refer to our blog and research paper for more details on Satori.
+ - [Blog](https://satori-reasoning.github.io/blog/satori/)
+ - [Paper](https://satori-reasoning.github.io/blog/satori/)
+
+ # **Citation**
+ If you find our model and data helpful, please consider citing our paper:
+ ```
+ @article{TBD,
+     title={Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search},
+     author={Maohao Shen and Guangtao Zeng and Zhenting Qi and Zhang-Wei Hong and Zhenfang Chen and Wei Lu and Gregory Wornell and Subhro Das and David Cox and Chuang Gan},
+     journal={arXiv preprint arXiv: TBD},
+     year={2025}
+ }
+ ```