UniTok: A Unified Tokenizer for Visual Generation and Understanding Paper • 2502.20321 • Published 23 days ago • 29
SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training Paper • 2501.17161 • Published Jan 28 • 109
Thinking in Space: How Multimodal Large Language Models See, Remember, and Recall Spaces Paper • 2412.14171 • Published Dec 18, 2024 • 24
ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers Paper • 2305.15272 • Published May 24, 2023
TouchStone: Evaluating Vision-Language Models by Language Models Paper • 2308.16890 • Published Aug 31, 2023 • 1
Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection Paper • 2204.02964 • Published Apr 6, 2022
Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities Paper • 2308.12966 • Published Aug 24, 2023 • 8
Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs Paper • 2406.16860 • Published Jun 24, 2024 • 60
Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs Paper • 2406.16860 • Published Jun 24, 2024 • 60
PLA: Language-Driven Open-Vocabulary 3D Scene Understanding Paper • 2211.16312 • Published Nov 29, 2022