Mobile-VideoGPT-1.5B


πŸ“ Description

Mobile-VideoGPT is an efficient multimodal framework designed to operate with fewer than a billion parameters. Unlike traditional video large multimodal models, Mobile-VideoGPT consists of lightweight dual visual encoders, efficient projectors, and a small language model (SLM) with real-time throughput. We evaluate our model across six well-established video understanding benchmarks (e.g., MVBench, EgoSchema, NextQA, and PerceptionTest), and our results show that Mobile-VideoGPT-0.5B can generate up to 46 tokens per second while outperforming existing state-of-the-art 0.5B-parameter competitors.

This repository contains Mobile-VideoGPT checkpoints built on the Qwen-2.5-1.5B LLM.

💻 Download

To get started, follow these steps:

    git lfs install
    git clone https://huggingface.co/Amshaker/Mobile-VideoGPT-1.5B
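Alternatively, the repository can be fetched programmatically with the `huggingface_hub` library. This is a minimal sketch, assuming `huggingface_hub` is installed (`pip install huggingface_hub`) and network access is available; the `allow_patterns` filter shown here is optional and restricts the download to the small JSON config files.

```python
# Sketch: fetch the repo with huggingface_hub instead of git-lfs.
# Assumes `pip install huggingface_hub` and network access.
from huggingface_hub import snapshot_download

# Downloads matching files into the local HF cache and returns the path.
# Drop `allow_patterns` to download the full checkpoint (several GB).
local_dir = snapshot_download(
    repo_id="Amshaker/Mobile-VideoGPT-1.5B",
    allow_patterns=["*.json"],
)
print(local_dir)
```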

📚 Additional Resources

📜 Citations:

    @article{Shaker2025MobileVideoGPT,
        title={Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model},
        author={Shaker, Abdelrahman and Maaz, Muhammad and Rezatofighi, Hamid and Khan, Salman and Khan, Fahad Shahbaz},
        journal={arXiv preprint arXiv:2503.21782},
        year={2025},
        url={https://arxiv.org/abs/2503.21782}
    }
Model size: 1.62B params (Safetensors) · Tensor types: F32, FP16
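As a rough guide to hardware requirements, the checkpoint footprint can be estimated from the parameter count and tensor dtype listed above. The sketch below uses 4 bytes/param for F32 and 2 bytes/param for FP16; actual memory use at inference time is higher due to activations and the KV cache.

```python
# Rough checkpoint-size estimate from parameter count and tensor dtype.
# Parameter count (1.62B) and dtypes (F32, FP16) are taken from this card.

def checkpoint_size_gb(num_params: float, bytes_per_param: int) -> float:
    """Approximate checkpoint size in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

params = 1.62e9  # 1.62B parameters

print(f"F32:  {checkpoint_size_gb(params, 4):.2f} GB")  # 4 bytes per param
print(f"FP16: {checkpoint_size_gb(params, 2):.2f} GB")  # 2 bytes per param
```

So the F32 weights are roughly 6.5 GB and the FP16 weights roughly 3.2 GB on disk.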