Toy Models to Study
A tiny single-layer 35.1M-parameter Mistral model with a hidden size of 512 and an MLP intermediate size of 1024, trained on the roneneldan/TinyStories dataset. It achieves the following results on the evaluation set:
This work is inspired by the 21M-parameter one-layer GPT-Neo model from the TinyStories paper. The training run was reproduced to obtain high-frequency checkpoints for further analysis.
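As a rough illustration of the architecture, here is a minimal sketch using the standard transformers MistralConfig; the head counts and vocabulary size are assumptions for illustration, not values stated in this card.

```python
# Minimal sketch of the architecture described above, assuming the
# standard Hugging Face transformers MistralConfig. Head counts and
# vocabulary size are illustrative assumptions, not from this card.
from transformers import MistralConfig, MistralForCausalLM

config = MistralConfig(
    hidden_size=512,          # hidden size stated above
    intermediate_size=1024,   # MLP intermediate size stated above
    num_hidden_layers=1,      # single-layer model
    num_attention_heads=8,    # assumption: 8 heads of dimension 64
    num_key_value_heads=8,    # assumption: no grouped-query attention
    vocab_size=32000,         # assumption: default Mistral vocabulary
)
model = MistralForCausalLM(config)
print(f"{model.num_parameters() / 1e6:.1f}M parameters")
```

With these assumptions the parameter count lands close to the stated 35.1M, most of it in the untied embedding and output matrices.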
These checkpoints support analysis of feature dynamics and feature emergence in real-world language models.
The model was trained for 90,171 steps, corresponding to roughly 2 hours on a single H100.
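A hedged sketch of pulling one of the high-frequency checkpoints from the Hub, assuming each checkpoint is published as a separate revision; the repo id and revision name below are hypothetical placeholders, not the actual repository.

```python
# Hypothetical sketch: load an intermediate training checkpoint,
# assuming checkpoints are published as Hub revisions (branches/tags).
# "user/tinymistral-35m" and "step-45000" are placeholders, not the
# actual repository or revision names.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "user/tinymistral-35m"  # placeholder repo id
model = AutoModelForCausalLM.from_pretrained(repo_id, revision="step-45000")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```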
The following hyperparameters were used during training:
The resulting model produces quite consistent English text.
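To inspect generation quality directly, here is a minimal sampling sketch, assuming the model and tokenizer loaded in the previous snippet; the sampling settings are illustrative.

```python
# Minimal generation sketch, reusing the model and tokenizer loaded
# above. Sampling settings are illustrative, not from this card.
import torch

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=100,   # short continuation in TinyStories style
        do_sample=True,       # sample rather than greedy decode
        top_p=0.95,           # nucleus sampling
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```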