IMU-1: Sample-Efficient Pre-training of Small Language Models
Paper: [arXiv:2602.02522](https://arxiv.org/abs/2602.02522)
This repository contains the IMU-1 Base model, a sample-efficient 430M-parameter language model introduced in the paper *IMU-1: Sample-Efficient Pre-training of Small Language Models*.
IMU-1 is trained on 72B tokens and approaches the benchmark performance of models trained on 56× more data (roughly 4T tokens).
Architecture overview:
| Parameter | Value |
|---|---|
| Total parameters | 430M |
| Hidden dim | 1,152 |
| Layers | 30 |
| Attention heads | 18 |
| KV heads (GQA) | 6 |
| Vocab size | 49,152 |
| Max context | 1,152 |
| Training tokens | 72B |
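As a quick check, the architecture above can be read back from the released config. This is a minimal sketch: the attribute names below follow standard Transformers conventions and are assumptions, since the custom modeling code may expose different field names.

```python
# Minimal sketch: read the released config and print common architecture fields.
# Attribute names are standard Transformers conventions (assumed here); the
# custom IMU-1 config may use different names, hence the getattr fallback.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("thepowerfuldeez/imu1_base", trust_remote_code=True)

for name in (
    "hidden_size",
    "num_hidden_layers",
    "num_attention_heads",
    "num_key_value_heads",
    "vocab_size",
    "max_position_embeddings",
):
    print(name, getattr(config, name, "not present in this config"))
```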
IMU-1 uses a validated training recipe combining recent advances in data curation and staged pre-training (see the training schedule below). To load the base model with Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The repository ships custom modeling code, so trust_remote_code is required.
model = AutoModelForCausalLM.from_pretrained(
    "thepowerfuldeez/imu1_base",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("thepowerfuldeez/imu1_base")

# Greedy generation from a short prompt.
text = "The quick brown fox"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```
Note: this model uses custom modeling code, so you must pass `trust_remote_code=True` when loading.
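Continuing from the quickstart above, sampling-based decoding often reads better than greedy decoding for a small base model. The settings below are illustrative defaults, not tuned values from the paper.

```python
# Illustrative nucleus-sampling settings (assumed, not from the paper),
# reusing `model`, `tokenizer`, and `inputs` from the quickstart above.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```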
Evaluation results for the base model:
| Benchmark | Score |
|---|---|
| HellaSwag (0-shot) | 51.1 |
| ARC-Easy | 71.4 |
| ARC-Challenge | 41.1 |
| PIQA | 70.2 |
| Lambada (OpenAI) | 51.3 |
| Winograd | 74.7 |
| WinoGrande | 55.2 |
| BoolQ | 59.5 |
| CORE (centered) | 30.2 |
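The scores above can be approximately reproduced with a standard harness. The sketch below assumes EleutherAI's lm-evaluation-harness (`pip install lm-eval`) and its default task names; the paper's exact evaluation setup, few-shot settings, and the CORE aggregate are not described in this card, so treat this as an approximation rather than the authors' protocol.

```python
# Sketch, assuming lm-evaluation-harness >= 0.4; task names and batch size
# are assumptions, not the paper's evaluation configuration.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=thepowerfuldeez/imu1_base,trust_remote_code=True",
    tasks=["hellaswag", "arc_easy", "arc_challenge", "piqa",
           "lambada_openai", "winogrande", "boolq"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```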
Pre-training follows a three-stage schedule:
| Stage | Iterations | Tokens | Data |
|---|---|---|---|
| 1. Stable | 100k | 29B | DCLM-edu, FineWeb-edu |
| 2. Decay | 100k | 28B | Higher-quality filtered data |
| 3. Midtrain | 65k | 14B | Instruction, reasoning, and code data |
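The "Stable" and "Decay" stage names suggest a warmup-stable-decay (WSD) style learning-rate schedule. The function below is a generic sketch under that assumption; the peak learning rate, warmup length, and decay shape are placeholders rather than values reported in the paper, and the midtrain stage is not modeled.

```python
# Generic warmup-stable-decay (WSD) learning-rate curve, illustrating how the
# "Stable" and "Decay" stages above could map onto iterations. All values here
# (peak_lr, warmup_iters, min_lr_ratio, cosine shape) are assumptions.
import math

def wsd_lr(step, peak_lr=3e-3, warmup_iters=2_000,
           stable_iters=100_000, decay_iters=100_000, min_lr_ratio=0.1):
    if step < warmup_iters:  # linear warmup
        return peak_lr * step / warmup_iters
    if step < warmup_iters + stable_iters:  # stage 1: constant ("Stable")
        return peak_lr
    # stage 2: cosine decay to min_lr_ratio * peak_lr ("Decay")
    progress = min((step - warmup_iters - stable_iters) / decay_iters, 1.0)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return peak_lr * (min_lr_ratio + (1.0 - min_lr_ratio) * cosine)

for step in (0, 1_000, 50_000, 150_000, 200_000):
    print(step, round(wsd_lr(step), 6))
```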
If you use IMU-1 in your work, please cite:

```bibtex
@misc{grigorev2026imu1sampleefficientpretrainingsmall,
      title={IMU-1: Sample-Efficient Pre-training of Small Language Models},
      author={George Grigorev},
      year={2026},
      eprint={2602.02522},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2602.02522},
}
```
License: Apache 2.0