MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head (Paper 2601.07832, published 20 days ago)
Towards Automated Kernel Generation in the Era of LLMs (Paper 2601.15727, published 10 days ago)
HERMES: KV Cache as Hierarchical Memory for Efficient Streaming Video Understanding (Paper 2601.14724, published 12 days ago)
Scalable Power Sampling: Unlocking Efficient, Training-Free Reasoning for LLMs via Distribution Sharpening (Paper 2601.21590, published 3 days ago)