SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training
Abstract
Supervised fine-tuning (SFT) and reinforcement learning (RL) are widely used post-training techniques for foundation models. However, their respective roles in enhancing model generalization remain unclear. This paper studies the differences between SFT and RL with respect to generalization and memorization, focusing on text-based rule variants and visual variants. We introduce GeneralPoints, an arithmetic reasoning card game, and adopt V-IRL, a real-world navigation environment, to assess how models trained with SFT and RL generalize to unseen variants in both textual and visual domains. We show that RL, especially when trained with an outcome-based reward, generalizes across both rule-based textual and visual variants. SFT, in contrast, tends to memorize the training data and struggles to generalize to out-of-distribution scenarios. Further analysis reveals that RL improves the model's underlying visual recognition capabilities, contributing to its enhanced generalization in the visual domain. Despite RL's superior generalization, we show that SFT remains essential for effective RL training: SFT stabilizes the model's output format, enabling subsequent RL to achieve its performance gains. These findings demonstrate RL's capability to acquire generalizable knowledge in complex, multi-modal tasks.
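For intuition on the "outcome-based reward" mentioned in the abstract, here is a minimal Python sketch of what such a reward could look like for a GeneralPoints-style task, assuming the game follows the classic 24-point convention (use each of the four card values exactly once with basic arithmetic to reach a target). The function name, answer format, and default target are illustrative assumptions, not the paper's implementation:

```python
import re

def outcome_reward(response: str, cards: list[int], target: int = 24) -> float:
    """Binary outcome reward: 1.0 for a valid winning equation, else 0.0."""
    # Assume the model ends its response with a line like "Answer: (5-1)*(7-1)".
    match = re.search(r"Answer:\s*([\d+\-*/() ]+)", response)
    if match is None:
        return 0.0  # unparseable output earns no reward
    expression = match.group(1)

    # The equation must use each dealt card exactly once.
    if sorted(int(n) for n in re.findall(r"\d+", expression)) != sorted(cards):
        return 0.0

    try:
        # Only digits, parentheses, and + - * / can pass the regex above,
        # so evaluating the expression with builtins disabled is safe.
        value = eval(expression, {"__builtins__": {}})
    except (SyntaxError, ZeroDivisionError):
        return 0.0

    # Outcome-based: no partial credit for intermediate reasoning or near misses.
    return 1.0 if abs(value - target) < 1e-6 else 0.0

# (5-1)*(7-1) = 24 and uses cards 1, 1, 5, 7 exactly once -> reward 1.0
print(outcome_reward("Let me try. Answer: (5-1)*(7-1)", [1, 1, 5, 7]))
# 5+7+1+1 = 14 != 24 -> reward 0.0
print(outcome_reward("Answer: 5+7+1+1", [1, 1, 5, 7]))
```

Because the reward scores only the final outcome rather than any particular reasoning trace, it leaves the policy free to discover solution strategies that transfer to unseen rule variants, which is the property the abstract attributes to RL.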
Community
This also holds for vision-language models: interestingly, we found that RL offers additional advantages over SFT for the model's visual capabilities. See Sections 5.1-5.3 for more details.
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this one:
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling (2025)
- Reinforcement Learning Enhanced LLMs: A Survey (2024)
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning (2024)
- GePBench: Evaluating Fundamental Geometric Perception for Multimodal Large Language Models (2024)
- Activating Distributed Visual Region within LLMs for Efficient and Effective Vision-Language Training and Inference (2024)
- Diving into Self-Evolving Training for Multimodal Reasoning (2024)
- Kimi k1.5: Scaling Reinforcement Learning with LLMs (2025)
We made a deep-dive video for this paper: https://www.youtube.com/watch?v=CVHV3bwlc7I. Let's see how SFT keeps the RL kite soaring!