IOPO: Empowering LLMs with Complex Instruction Following via Input-Output Preference Optimization • arXiv:2411.06208 • Published Nov 9, 2024
SEAGULL: No-reference Image Quality Assessment for Regions of Interest via Vision-Language Instruction Tuning • arXiv:2411.10161 • Published Nov 15, 2024
VLsI: Verbalized Layers-to-Interactions from Large to Small Vision Language Models • arXiv:2412.01822 • Published Dec 2024
VisionArena: 230K Real World User-VLM Conversations with Preference Labels • arXiv:2412.08687 • Published Dec 2024
CompCap: Improving Multimodal Large Language Models with Composite Captions • arXiv:2412.05243 • Published Dec 2024
Rethinking Token Reduction in MLLMs: Towards a Unified Paradigm for Training-Free Acceleration • arXiv:2411.17686 • Published Nov 2024
VLRewardBench: A Challenging Benchmark for Vision-Language Generative Reward Models • arXiv:2411.17451 • Published Nov 2024
MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks • arXiv:2410.10563 • Published Oct 14, 2024
WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines • arXiv:2410.12705 • Published Oct 16, 2024
Ovis1.5 Collection • 2 items • Updated Sep 25 • The latest version of Ovis. As always, Ovis1.5 remains fully open-source: we release the training datasets, training and inference code, and model weights.
Ovis1.6 Collection • 5 items • With 29B parameters, Ovis1.6-Gemma2-27B achieves exceptional performance on the OpenCompass benchmark, ranking among the top-tier open-source MLLMs.
Ovis: Structural Embedding Alignment for Multimodal Large Language Model • arXiv:2405.20797 • Published May 31, 2024