---
license: apache-2.0
datasets:
  - pico-lm/pretokenized-dolma
language:
  - en
metrics:
  - pico-lm/perplexity
pipeline_tag: text-generation
---

# Pico Decoder Tiny

`pico-decoder-tiny` is the smallest model (11M parameters) in the **pico-decoder** suite: a lightweight, LLaMA-style decoder-only transformer trained from scratch using `pico-train`. It is designed for transparent and reproducible research into the learning dynamics of language models, and is fully compatible with the `pico-analyze` toolkit for detailed interpretability analysis.
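
As a minimal usage sketch (assuming the checkpoint is published under the Hugging Face repo id `pico-lm/pico-decoder-tiny` and loads through the standard `transformers` auto classes), the model can be loaded and queried like this:

```python
# Minimal usage sketch. Assumes the checkpoint lives at the Hugging Face repo
# id "pico-lm/pico-decoder-tiny" and loads via the standard auto classes;
# adjust the repo id if the repository ships custom modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pico-lm/pico-decoder-tiny"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Sanity-check generation on a short prompt.
inputs = tokenizer("Language models learn", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```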

## 🔧 Model Details

| Field             | Value                                   |
|-------------------|-----------------------------------------|
| Architecture      | Decoder-only transformer (LLaMA-style)  |
| Parameters        | 11M                                     |
| Layers            | 12                                      |
| Hidden Size       | 96                                      |
| Feed-Forward Size | 384                                     |
| Attention Heads   | 12                                      |
| Key/Value Heads   | 4                                       |
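
For orientation, these dimensions map onto a LLaMA-style configuration roughly as in the sketch below. It uses `transformers.LlamaConfig` purely to illustrate the architecture; it is not the checkpoint's actual config file, and the vocabulary size is an assumption.

```python
# Illustrative LLaMA-style configuration mirroring the table above.
# This is NOT the checkpoint's actual config; vocab_size is an assumption,
# and max_position_embeddings is set to match the 2048-token training length.
from transformers import LlamaConfig

config = LlamaConfig(
    hidden_size=96,             # Hidden Size
    intermediate_size=384,      # Feed-Forward Size
    num_hidden_layers=12,       # Layers
    num_attention_heads=12,     # Attention Heads
    num_key_value_heads=4,      # Key/Value Heads (grouped-query attention)
    max_position_embeddings=2048,
    vocab_size=50304,           # assumed; use the tokenizer's real vocab size
)
```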

## 📚 Training

- **Dataset**: `pretokenized-dolma`, English-only
- **Training steps**: 200,000
- **Batch size**: 1024
- **Sequence length**: 2048
- **Optimizer**: AdamW
- **Learning rate schedule**: linear decay with warmup (a minimal sketch follows this list)
- **Compute**: 16 A100-SXM4-80GB GPUs
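
As a minimal sketch of the optimizer/schedule pairing listed above (reusing the `model` loaded earlier; the peak learning rate and warmup length are placeholder assumptions, not the values used for this run), the setup looks roughly like this:

```python
# Sketch of AdamW with a linear warmup followed by linear decay.
# lr and num_warmup_steps are placeholders; the real values live in the
# pico-train configuration rather than in this card.
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)  # assumed peak LR
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=2_500,      # assumed warmup length
    num_training_steps=200_000,  # matches the 200k training steps above
)

for step in range(200_000):
    # ... forward pass and loss.backward() on a batch of 1024 sequences ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```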

## 📈 Evaluation and Analysis

This model supports fine-grained analysis with `pico-analyze`, a toolkit that lets researchers track how learning unfolds over training, even at very small scales.

We also evaluate the model's perplexity on the `pico-paloma-tinsy` dataset.
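
A minimal sketch of such a perplexity computation, reusing the `model` and `tokenizer` loaded earlier; the evaluation texts below are placeholders standing in for the actual `pico-paloma-tinsy` split:

```python
# Perplexity sketch: average the model's token-level cross-entropy over
# held-out text, then exponentiate. The texts below are placeholders for the
# actual pico-paloma-tinsy evaluation split.
import math
import torch

eval_texts = ["Language model learning dynamics can be studied at small scales."]

model.eval()
total_nll, total_tokens = 0.0, 0
with torch.no_grad():
    for text in eval_texts:
        enc = tokenizer(text, return_tensors="pt")
        out = model(**enc, labels=enc["input_ids"])
        n_scored = enc["input_ids"].size(1) - 1  # labels are shifted inside the model
        total_nll += out.loss.item() * n_scored  # loss is mean NLL per scored token
        total_tokens += n_scored

print("perplexity:", math.exp(total_nll / total_tokens))
```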

## 📄 Citation

If you use `pico-decoder-tiny` or any other `pico-decoder` model in your research, please cite:

```bibtex
@software{pico2025,
    author = {Diehl Martinez, Richard},
    title = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
    year = {2025},
    url = {https://github.com/pico-lm}
}
```