Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation • Paper • 2502.19414 • Published Feb 26
Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning • Paper • 2502.14768 • Published Feb 20
OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference • Paper • 2502.18411 • Published Feb 25
Rethinking Autoformalization • Collection • Models for "Rethinking and Improving Autoformalization: Towards a Faithful Metric and a Dependency Retrieval-based Approach" • 10 items • Updated Feb 26