arXiv:2603.22281

ThinkJEPA: Empowering Latent World Models with Large Vision-Language Reasoning Model

Published on Mar 23 · Submitted by Haichao Zhang on Mar 25
Abstract

AI-generated summary

A VLM-guided JEPA-style latent world modeling framework combines dense-frame dynamics modeling with long-horizon semantic guidance through dual-temporal pathways to improve hand-manipulation trajectory prediction.

Recent progress in latent world models (e.g., V-JEPA2) has shown promising capability in forecasting future world states from video observations. Nevertheless, dense prediction from a short observation window limits temporal context and can bias predictors toward local, low-level extrapolation, making it difficult to capture long-horizon semantics and reducing downstream utility. Vision-language models (VLMs), in contrast, provide strong semantic grounding and general knowledge by reasoning over uniformly sampled frames, but they are not ideal as standalone dense predictors due to compute-driven sparse sampling, a language-output bottleneck that compresses fine-grained interaction states into text-oriented representations, and a data-regime mismatch when adapting to small action-conditioned datasets. We propose a VLM-guided JEPA-style latent world modeling framework that combines dense-frame dynamics modeling with long-horizon semantic guidance via a dual-temporal pathway: a dense JEPA branch for fine-grained motion and interaction cues, and a uniformly sampled VLM thinker branch with a larger temporal stride for knowledge-rich guidance. To transfer the VLM's progressive reasoning signals effectively, we introduce a hierarchical pyramid representation extraction module that aggregates multi-layer VLM representations into guidance features compatible with latent prediction. Experiments on hand-manipulation trajectory prediction show that our method outperforms both a strong VLM-only baseline and a JEPA-predictor baseline, and yields more robust long-horizon rollout behavior.
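The abstract's architecture can be summarized in a minimal PyTorch-style sketch. This is an illustrative reading only, not the authors' code: the module names (PyramidGuidance, DualTemporalWorldModel), the learned softmax layer-mixing, and the stride value are all assumptions introduced here for readability.

```python
# Illustrative sketch of the dual-temporal pathway described in the abstract.
# All names, shapes, and the layer-mixing scheme are assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn


class PyramidGuidance(nn.Module):
    """Aggregates multi-layer VLM hidden states into guidance features in the
    latent predictor's embedding space (one plausible reading of the
    hierarchical pyramid representation extraction module)."""

    def __init__(self, vlm_dim: int, latent_dim: int, num_layers: int):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))  # learned layer mix
        self.project = nn.Linear(vlm_dim, latent_dim)

    def forward(self, vlm_hidden_states: list[torch.Tensor]) -> torch.Tensor:
        # vlm_hidden_states: num_layers tensors of shape (B, T_sparse, vlm_dim)
        stacked = torch.stack(vlm_hidden_states, dim=0)               # (L, B, T, D_vlm)
        weights = torch.softmax(self.layer_weights, dim=0).view(-1, 1, 1, 1)
        fused = (weights * stacked).sum(dim=0)                        # (B, T, D_vlm)
        return self.project(fused)                                    # (B, T, latent_dim)


class DualTemporalWorldModel(nn.Module):
    """Dense JEPA branch for fine-grained dynamics plus a sparsely sampled
    VLM 'thinker' branch with a larger temporal stride for semantic guidance."""

    def __init__(self, jepa_encoder: nn.Module, jepa_predictor: nn.Module,
                 vlm: nn.Module, guidance: PyramidGuidance, sparse_stride: int = 8):
        super().__init__()
        self.jepa_encoder = jepa_encoder      # e.g. a V-JEPA2-style video encoder
        self.jepa_predictor = jepa_predictor  # latent future-state predictor
        self.vlm = vlm                        # frozen VLM; assumed to return per-layer features
        self.guidance = guidance
        self.sparse_stride = sparse_stride

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, C, H, W), densely sampled observation window
        dense_latents = self.jepa_encoder(frames)             # fine-grained motion cues
        sparse_frames = frames[:, :: self.sparse_stride]      # wider temporal stride
        vlm_layers = self.vlm(sparse_frames)                  # list of per-layer hidden states
        guide = self.guidance(vlm_layers)                     # knowledge-rich guidance
        # The predictor forecasts future latents from dense dynamics,
        # conditioned on the VLM-derived guidance features.
        return self.jepa_predictor(dense_latents, guide)
```

Running the thinker branch at a larger stride keeps its compute bounded while the JEPA branch retains full temporal resolution, which is the trade-off the abstract points to when it describes compute-driven sparse sampling in VLMs.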

Community

Latent world models are effective at modeling local dynamics, but they often struggle to capture long-horizon semantics from short observation windows. In ThinkJEPA, we explore whether a large vision-language reasoning model can act as a high-level “thinker” for a JEPA-style latent predictor: densely sampled frames model fine-grained temporal dynamics, while sparsely sampled frames provide broader semantic context, and hierarchical VLM features guide future latent forecasting. We hope this work can spark discussion on how multimodal reasoning may strengthen video world models for prediction and embodied intelligence.
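To make the dense-versus-sparse sampling concrete, here is a small, purely illustrative example of how the two pathways might index frames from the same clip; the window length, stride, and frame counts are hypothetical and may differ from the paper's actual settings.

```python
# Illustrative frame-sampling scheme for the two pathways; the concrete
# values here are assumptions, not the paper's configuration.
import numpy as np

def sample_dual_pathways(num_frames: int, dense_window: int = 16, sparse_count: int = 8):
    """Return frame indices for the dense JEPA branch (consecutive recent frames)
    and the sparse VLM thinker branch (uniform coverage of the full clip)."""
    dense_idx = np.arange(num_frames - dense_window, num_frames)           # local dynamics
    sparse_idx = np.linspace(0, num_frames - 1, sparse_count).astype(int)  # long-horizon context
    return dense_idx, sparse_idx

dense, sparse = sample_dual_pathways(64)
print(dense)   # [48 49 ... 63] -> fine-grained motion and interaction cues
print(sparse)  # [ 0  9 18 27 36 45 54 63] -> broad semantic context for the thinker
```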

