# CLIP Transcoder Checkpoint

This model is a transcoder trained on CLIP's internal representations.

## Model Details

### Architecture

- **Layer**: 8
- **Layer Type**: ln2.hook_normalized
- **Model**: laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: False

### Training

- **Learning Rate**: 0.0005932154848461442
- **Batch Size**: 4096
- **Context Size**: 50

### Sparsity

- **L0 (Active Features)**: 1024

## Additional Information

- **Wandb Run**: https://wandb.ai/perceptual-alignment/openclip-transcoders/runs/zh2kpkp6/
- **Random Seed**: 42
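## Architecture Sketch

The dimensions above (input dimension 768, expansion factor 64, dictionary size 49152 = 768 × 64, L0 = 1024) can be illustrated with a minimal NumPy sketch. This is not the checkpoint's actual code: the weights here are random stand-ins for the trained parameters, and top-k masking is used only as one way to visualize an L0 of 1024 active features.

```python
import numpy as np

rng = np.random.default_rng(42)

d_in, expansion = 768, 64
d_dict = d_in * expansion          # 49152, matching the dictionary size above
k = 1024                           # target L0 (active features)

# Random weights stand in for the trained checkpoint (hypothetical values)
W_enc = rng.standard_normal((d_in, d_dict)) / np.sqrt(d_in)
W_dec = rng.standard_normal((d_dict, d_in)) / np.sqrt(d_dict)

def transcode(x):
    """Encode into the feature dictionary, keep the top-k activations
    (one way to realize L0 = 1024), then decode back to d_in."""
    acts = np.maximum(x @ W_enc, 0.0)               # ReLU feature activations
    kth = np.partition(acts, -k, axis=-1)[:, -k:-k + 1]
    acts = np.where(acts >= kth, acts, 0.0)         # zero all but the top k
    return acts @ W_dec, acts

x = rng.standard_normal((2, d_in))   # e.g. activations at ln2.hook_normalized
out, feats = transcode(x)
print(out.shape)                     # (2, 768)
```

A real transcoder maps activations from one hook point to another rather than reconstructing its own input, but the shapes and sparsity pattern follow the configuration listed in this card.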