Navigating the RLHF Landscape: From Policy Gradients to PPO, GAE, and DPO for LLM Alignment Article • Published Feb 11 • 94
Hybrid Reinforcement: When Reward Is Sparse, It's Better to Be Dense Paper • 2510.07242 • Published Oct 8 • 30
Infusing Theory of Mind into Socially Intelligent LLM Agents Paper • 2509.22887 • Published Sep 26 • 5
LUMINA: Detecting Hallucinations in RAG System with Context-Knowledge Signals Paper • 2509.21875 • Published Sep 26 • 9
Clean First, Align Later: Benchmarking Preference Data Cleaning for Reliable LLM Alignment Paper • 2509.23564 • Published Sep 28 • 7
Understanding Language Prior of LVLMs by Contrasting Chain-of-Embedding Paper • 2509.23050 • Published Sep 27 • 14
MetaMind: Modeling Human Social Thoughts with Metacognitive Multi-Agent Systems Paper • 2505.18943 • Published May 25 • 24
DaWin: Training-free Dynamic Weight Interpolation for Robust Adaptation Paper • 2410.03782 • Published Oct 3, 2024 • 1
Concept Steerers: Leveraging K-Sparse Autoencoders for Controllable Generations Paper • 2501.19066 • Published Jan 31 • 13