GemmaX2 Collection GemmaX2 language models, including pretrained and instruction-tuned variants in two sizes: 2B and 9B. • 7 items • Updated Feb 7 • 21
LLaVA-Video Collection Models focused on video understanding (previously known as LLaVA-NeXT-Video). • 8 items • Updated Feb 21 • 60
Qwen2.5-VL Collection Vision-language model series based on Qwen2.5. • 11 items • Updated 13 days ago • 441
Multimodal Models Collection Multimodal models with leading performance. • 17 items • Updated Jan 17 • 33
Molmo Collection Artifacts for open multimodal language models. • 5 items • Updated about 1 month ago • 301
AIMv2 Collection A collection of AIMv2 vision encoders that support a range of resolutions, including native resolution, plus a distilled checkpoint. • 19 items • Updated Nov 22, 2024 • 74
Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models Paper • 2409.17146 • Published Sep 25, 2024 • 114
Llama 3.2 Collection This collection hosts the transformers and original repos of the Llama 3.2 and Llama Guard 3 models. • 15 items • Updated Dec 6, 2024 • 591
LLaVa-1.5 Collection LLaVa-1.5 is a series of vision-language models (VLMs) trained on a variety of visual instruction datasets. • 3 items • Updated Mar 18, 2024 • 8
LLaVa-NeXT Collection LLaVa-NeXT (also known as LLaVa-1.6) improves upon the 1.5 series by incorporating higher image resolutions and more reasoning/OCR datasets. • 8 items • Updated Jul 19, 2024 • 29
Vision-Language Modeling Collection Our datasets and models for vision-language modeling. • 5 items • Updated Nov 25, 2024 • 6