---
title: README
emoji: 📈
colorFrom: red
colorTo: yellow
sdk: static
pinned: false
---

# Pico: A Lightweight Framework for Studying Learning Dynamics

Pico is a lightweight research framework that aims to demystify how language models learn. Built with simplicity in mind, it provides an efficient way to train and study models of different sizes. Visit our [website](https://www.picolm.io/) for more information.

Pico consists of two key components:
1. **Pico Training Framework** (available on [GitHub](https://github.com/pico-lm/pico)): A transparent, lightweight codebase for training language models. We use this framework to train a series of language models across scales, which we release on this HuggingFace space.
2. **Pico Analysis Framework** (available on [GitHub](https://github.com/pico-lm/pico-analysis)): A research framework for investigating and probing the learning dynamics of models trained with Pico.

This HuggingFace organization hosts our pre-trained models and datasets, while the GitHub repository provides the code to train and analyze your own model suites from scratch.

## 🤗 HuggingFace Resources (You Are Here)

### Pre-trained Model Suite
Our complete suite of models from 1M to 500M parameters trained with Pico:
- **pico-tiny** (1M parameters) 
- **pico-small** (10M parameters)
- **pico-medium** (100M parameters)
- **pico-large** (500M parameters)

> 🚧 **Coming Soon!** **pico-xl** (1B parameters). Watch this space or star our [GitHub repository](https://github.com/rdiehlmartinez/pico) for updates!

All models are trained for 50,000 steps on the **pretokenized-dolma** dataset. They see the same training data at each training step, use the same optimization process, and share the same model architecture; the only difference between models is the size of their hidden dimension.
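
To see how the suite scales, you can load each model and print its parameter count. A minimal sketch, assuming each model is hosted at `pico-lm/<name>` (mirroring the `pico-lm/pico-small` loading example further below):

```python
from transformers import AutoModelForCausalLM

# Assumes every model in the suite lives at pico-lm/<name>
for name in ["pico-tiny", "pico-small", "pico-medium", "pico-large"]:
    model = AutoModelForCausalLM.from_pretrained(f"pico-lm/{name}")
    # num_parameters() returns the total parameter count of the loaded model
    print(f"{name}: {model.num_parameters():,} parameters")
```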

Each model includes:
- Rich training checkpoints (stored every 1,000 steps) that contain:
  - Weights and optimizer states (HuggingFace and Lightning Fabric-compatible versions)
  - Model activations and gradients 
  - The batch of training data observed at the given training step
- Wandb logs tracking the learning process
- Pre-computed perplexity scores on the Paloma evaluation set
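
Since checkpoints are published as revisions of each model repository (see the loading example under "Using the Resources" below), they can be enumerated with `huggingface_hub`. A minimal sketch, assuming the checkpoint revisions are exposed as branches of the repository:

```python
from huggingface_hub import list_repo_refs

# List all branch revisions of the pico-small repository; any of these
# names can be passed to from_pretrained(..., revision=...)
refs = list_repo_refs("pico-lm/pico-small")
for branch in refs.branches:
    print(branch.name)
```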

### Available Datasets
1. **[pretokenized-dolma](https://huggingface.co/datasets/pico-lm/pretokenized-dolma)**
   - 420B tokens of pre-processed, tokenized, and shuffled text extracted from the **[DOLMA](https://allenai.org/dolma)** corpus
   - We use this dataset to train our model suite

2. **[pretokenized-dolma-tiny](https://huggingface.co/datasets/pico-lm/pretokenized-dolma-tiny)**
   - A smaller version of the **pretokenized-dolma** corpus for quick experiments

3. **[pretokenized-paloma](https://huggingface.co/datasets/pico-lm/pretokenized-paloma)**
   - A tokenized and shuffled version of the **[Paloma](https://allenai.org/evaluation-frameworks)** evaluation corpus
   - The Paloma corpus was carefully curated to be disjoint from the Dolma corpus
   - We use this corpus to evaluate the perplexity of our models
     
4. **[pretokenized-paloma-tinsy](https://huggingface.co/datasets/pico-lm/pretokenized-paloma-tinsy)**
   - A sub-sampled version of the **pretokenized-paloma** corpus 

All datasets are tokenized using the **[OLMo Tokenizer](https://huggingface.co/allenai/OLMo-7B-0724-hf/blob/main/tokenizer_config.json)**.
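
Because the data is already tokenized, it can be streamed with the `datasets` library without downloading the full corpus. A minimal sketch; the split and column names (e.g. `input_ids`) are assumptions and may differ in the actual dataset:

```python
from datasets import load_dataset

# Stream the pre-tokenized corpus instead of downloading all 420B tokens
dataset = load_dataset("pico-lm/pretokenized-dolma", split="train", streaming=True)

# Inspect the first example to see which columns are available
first_example = next(iter(dataset))
print(first_example.keys())
```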

## 🔧 GitHub Training Framework

Want to train your own suite of models? Visit our [GitHub repository](https://github.com/rdiehlmartinez/pico) to:
- Train models with custom architectures
- Experiment with different training regimes
- Modify checkpoint saving behavior
- Implement custom evaluation metrics

The training framework makes it easy to:
1. Train multiple models of different sizes
2. Ensure consistent training across all models
3. Save rich checkpoint data for learning dynamics analysis
4. Compare learning dynamics across scales

## 🛠️ Using the Resources

### Using Pre-trained Models (HuggingFace)
```python
from transformers import AutoModelForCausalLM

# Load our pre-trained model
model = AutoModelForCausalLM.from_pretrained("pico-lm/pico-small")

# Load a specific training checkpoint (one is saved every 1,000 steps)
model = AutoModelForCausalLM.from_pretrained(
    "pico-lm/pico-small",
    revision="step-xyz"
)
```
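
For quick inference, a model can be paired with the OLMo tokenizer that the Pico datasets were built with. A sketch, assuming the tokenizer is loaded from `allenai/OLMo-7B-0724-hf` rather than from the model repository itself:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The Pico datasets are tokenized with the OLMo tokenizer (see the dataset notes above)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0724-hf")
model = AutoModelForCausalLM.from_pretrained("pico-lm/pico-small")

# Generate a short continuation (small models produce simple text)
inputs = tokenizer("The moon is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```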

### Training Your Own Suite (GitHub)
```bash
# Clone the repository
git clone https://github.com/rdiehlmartinez/pico.git && cd pico
source setup.sh

# Configure your model suite
# Edit configs/train.yaml to specify model sizes and training parameters

# Train your suite
python train.py --config configs/train.yaml
```

## 📊 Model Details

### Architecture
All models use:
- LLAMA-style transformer
- RMSNorm for normalization
- RoPE positional embeddings
- Multi-head attention with KV-cache
- SwiGLU activation function

### Training Configuration
Standard configuration (customizable in the GitHub training framework; a token-count sketch follows this list):
- Sequence length: 2048
- Batch size: 1024
- Learning rate: 1e-3
- Weight decay: 0.1
- Gradient clipping: 1.0
- Mixed precision training
- Vocab size: 50280 
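
With these settings, the scale of training can be worked out directly. A back-of-the-envelope sketch, assuming every step processes a full batch of maximum-length sequences:

```python
# Rough token count implied by the standard configuration above
sequence_length = 2048
batch_size = 1024
training_steps = 50_000

tokens_per_step = sequence_length * batch_size    # ~2.1M tokens per step
total_tokens = tokens_per_step * training_steps   # ~105B tokens over 50,000 steps
print(f"{tokens_per_step:,} tokens/step, {total_tokens:,} tokens total")
```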

## 🔬 Research Applications

Perfect for researchers studying:
- Learning dynamics across model scales
- Mechanistic interpretability
- Architecture and training effects
- Emergent model behaviors

Whether using our pre-trained models or training your own suite, Pico provides the tools needed for in-depth learning dynamics research.

## 🤝 Contributing

Contributions welcome on both platforms:
- **HuggingFace**: Model weights, datasets, and evaluation results
- **GitHub**: Training framework improvements, analysis tools, and documentation

## 📫 Contact

- GitHub: [rdiehlmartinez/pico](https://github.com/rdiehlmartinez/pico)
- Author: [Richard Diehl Martinez](https://richarddiehlmartinez.com)

## 🔍 Citation

```bibtex
@software{pico2024,
    author = {Diehl Martinez, Richard},
    title = {Pico: Framework for Training Tiny Language Models},
    year = {2024},
}
```