Papers
arxiv:2504.00999

MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization

Published on Apr 1
· Submitted by Juanxi on Apr 3
#1 Paper of the day
Abstract

Masked Image Modeling (MIM) with Vector Quantization (VQ) has achieved great success in both self-supervised pre-training and image generation. However, most existing methods struggle with the trade-off, in a shared latent space, between generation quality and representation learning efficiency. To push the limits of this paradigm, we propose MergeVQ, which incorporates token merging techniques into VQ-based generative models to bridge the gap between image generation and visual representation learning in a unified architecture. During pre-training, MergeVQ decouples top-k semantics from the latent space with a token merge module placed after the self-attention blocks in the encoder, for subsequent Look-up Free Quantization (LFQ) and global alignment, and recovers their fine-grained details through cross-attention in the decoder for reconstruction. For second-stage generation, we introduce MergeAR, which performs KV-cache compression for efficient raster-order prediction. Extensive experiments on ImageNet verify that MergeVQ, as an AR generative model, achieves competitive performance on both visual representation learning and image generation tasks while maintaining favorable token efficiency and inference speed. The code and models will be available at https://apexgen-x.github.io/MergeVQ.
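The Look-up Free Quantization (LFQ) step mentioned in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (function name and shapes are my own, not from the paper): LFQ binarizes each latent dimension to ±1, so the token index can be read directly off the sign bits instead of searching a learned codebook.

```python
import numpy as np

def lfq_quantize(z):
    """Minimal LFQ sketch: binarize each latent dimension to {-1, +1}
    and read the integer token index from the resulting sign bits,
    so no learned codebook lookup is needed.

    z: array of shape (num_tokens, latent_dim)
    returns: (quantized latents, integer code per token)
    """
    q = np.where(z > 0, 1.0, -1.0)          # elementwise binarize
    bits = (q > 0).astype(int)              # {-1, +1} -> {0, 1}
    powers = 2 ** np.arange(z.shape[-1])    # bit weights per dimension
    indices = (bits * powers).sum(axis=-1)  # integer code per token
    return q, indices
```

With a 3-dimensional latent, `[0.3, -1.2, 0.7]` binarizes to `[+1, -1, +1]`, giving bits `[1, 0, 1]` and hence code index 5.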

Community

Paper author Paper submitter
•
edited 4 days ago

[CVPR 2025] 🚀🚀🚀 MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization

Feel free to upvote, and thank you for your support!

Paper author
•
edited 4 days ago

Overall Framework:
MergeVQ_Overall.png

Paper author

2-stage Training:
2Stage_Training.png

Paper author

MergeAR KV-Cache Compression:

e09dea0118bed62776cd661c3becf423.png
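The actual MergeAR mechanism is shown in the figure above; as a rough, hypothetical sketch of the general idea only (dropping cached key/value rows for repeated token ids during raster-order decoding, with names and shapes of my own invention, not the authors' algorithm):

```python
import numpy as np

def compress_kv(token_ids, keys, values):
    """Hypothetical KV-cache compression sketch: keep key/value rows
    only for the first occurrence of each token id, dropping cache
    entries for repeated tokens in the raster-order sequence.

    token_ids: sequence of integer token ids
    keys, values: arrays of shape (seq_len, head_dim)
    returns: (compressed keys, compressed values, kept positions)
    """
    seen, keep = set(), []
    for pos, tid in enumerate(token_ids):
        if tid not in seen:        # first occurrence of this token id
            seen.add(tid)
            keep.append(pos)
    keep = np.array(keep)
    return keys[keep], values[keep], keep
```

For ids `[5, 7, 5, 9]`, positions 0, 1, and 3 are kept, so the cache shrinks from four rows to three.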

Feel free to discuss any questions about MergeVQ with us here!

Paper author

We welcome open discussion about future directions for visual tokenizers and advanced auto-regressive generation! 😄

