Let ViT Speak: Generative Language-Image Pre-training Paper • 2605.00809 • Published 12 days ago • 32
Hopefully they will bring NEO-Unify into the discussion of their paper, haha~ it would deserve the mention.
Qwen/Qwen3-Embedding-0.6B Feature Extraction • 0.6B • Updated 23 days ago • 5.84M • 1.02k
Article Introducing NVIDIA Cosmos Policy for Advanced Robot Control nvidia • Jan 29 • 48
Article NEO-unify: Building Native Multimodal Unified Models End to End sensenova • Mar 5 • 159
google/siglip2-giant-opt-patch16-256 Zero-Shot Image Classification • 2B • Updated Feb 21, 2025 • 11.4k • 4
facebook/dinov3-vitl16-pretrain-lvd1689m Image Feature Extraction • 0.3B • Updated Aug 19, 2025 • 653k • 265
Visual Generation Unlocks Human-Like Reasoning through Multimodal World Models Paper • 2601.19834 • Published Jan 27 • 25
DiffThinker: Towards Generative Multimodal Reasoning with Diffusion Models Paper • 2512.24165 • Published Dec 30, 2025 • 52