Aligning Multimodal LLM with Human Preference: A Survey Paper • 2503.14504 • Published Mar 2025
Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning Paper • 2503.07572 • Published Mar 2025
Direct Multi-Turn Preference Optimization for Language Agents Paper • 2406.14868 • Published Jun 21, 2024
MM-RLHF: The Next Step Forward in Multimodal LLM Alignment Paper • 2502.10391 • Published Feb 14, 2025
Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization Paper • 2407.07880 • Published Jul 10, 2024