Model Summary

MobileLLaMA-1.4B-Base is a Transformer with 1.4 billion parameters. We downscale LLaMA to facilitate off-the-shelf deployment. To make our work reproducible, all models are trained on 1.3T tokens from the RedPajama v1 dataset only. This benefits further research by enabling controlled experiments.
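As a quick sanity check, the downscaled architecture can be inspected from the released configuration alone (a minimal sketch; the printed fields are standard LLaMA config attributes in Transformers):

```python
from transformers import AutoConfig

# Fetch only the model configuration (no weights) from the Hugging Face Hub.
config = AutoConfig.from_pretrained("mtgv/MobileLLaMA-1.4B-Base")

# Standard LLaMA-family hyperparameters that determine the 1.4B parameter count.
print(config.model_type)
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)
```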

We extensively assess our models on two standard natural language benchmarks, covering language understanding and common sense reasoning respectively. Experimental results show that our MobileLLaMA 1.4B is on par with the most recent open-source models.
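For example, one common way to run this kind of benchmark is EleutherAI's lm-evaluation-harness. The sketch below is a hypothetical invocation assuming the v0.4+ Python API; the task names are illustrative, not the exact suite reported in the paper:

```python
import lm_eval

# Hypothetical harness run; swap in the tasks you care about.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mtgv/MobileLLaMA-1.4B-Base",
    tasks=["arc_easy", "hellaswag"],  # illustrative common-sense reasoning tasks
)
print(results["results"])
```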

Model Sources

- Paper: MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices
- Code: usage examples are hosted on GitHub (see below)

How to Get Started with the Model

Model weights can be loaded with Hugging Face Transformers. Examples can be found on GitHub.
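A minimal loading-and-generation sketch (assumes `torch` and `accelerate` are installed; the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mtgv/MobileLLaMA-1.4B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: a GPU is available; use float32 on CPU
    device_map="auto",          # requires `accelerate`
)

# This is a base (non-chat) model, so use a plain completion-style prompt.
prompt = "Q: What is RedPajama?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```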

Training Details

Please refer to Section 4.1 of our paper: MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices.
