
[ πŸ€– GitHub | πŸ“„ Paper | 🌐 Website ]

ACIP applied to meta-llama/Llama-2-7b-hf

This model repository is part of the ACIP Project and provides a compressible version of meta-llama/Llama-2-7b-hf. For more details, please visit our code repo.

Quick Start

Just load the ACIP model via from_pretrained:

from transformers import AutoModel

# trust_remote_code=True is required to load the custom ACIP model class
model = AutoModel.from_pretrained("MerantixMomentum/acip_llama2_7b", trust_remote_code=True)

This will download and create a fully parameterized ACIP model that can be pruned to any compression ratio you wish. For example,

model.prune_model_by_score(compression_ratio=0.4)

will prune the model to 40% of its original size measured in number of parameters, i.e., a 60% compression rate. A unique feature of ACIP is that this operation is reversible in the sense that you can rerun model.prune_model_by_score as often as you like to evaluate your model at different sizes, as in the sketch below.
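For instance, you could sweep over several target sizes with the same model instance (a minimal sketch; evaluate is a hypothetical placeholder for your own evaluation loop, e.g., perplexity on a held-out set):

# Evaluate the same ACIP model at several sizes without reloading.
for ratio in [0.8, 0.6, 0.4]:
    model.prune_model_by_score(compression_ratio=ratio)  # reversible pruning
    score = evaluate(model)  # hypothetical: plug in your own eval here
    print(f"size ratio {ratio:.1f}: eval score {score:.3f}")

Finally, you can "commit" to a certain ratio and run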

model.compress()

which will discard all pruned mask values of compressible linear layers. Now the model is actually compressed, and you should observe a significant decrease in memory usage (this step is not reversible without reloading the ACIP model). If you like, you can also run

model.quantize()

to save even more memory (we have only tested 4-bit quantization with bitsandbytes, but you could also customize this).
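As a quick sanity check of the compress step, you can compare parameter counts before and after committing to a ratio (a minimal sketch that uses the parameter count as a rough proxy for memory and assumes the model was pruned as above):

# Parameter count before vs. after committing to the chosen ratio.
n_before = sum(p.numel() for p in model.parameters())
model.compress()  # not reversible: discards pruned mask values
n_after = sum(p.numel() for p in model.parameters())
print(f"params: {n_before / 1e9:.2f}B -> {n_after / 1e9:.2f}B")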

πŸš€ That's it! You can now use your compressed model for inference or fine-tuning like any other causal language model from πŸ€— transformers.
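For example, a standard generation call should work as usual (a sketch, assuming this repo ships the usual tokenizer files):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MerantixMomentum/acip_llama2_7b")
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))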

Note: The parameter compression_ratio ranges from 1.0 to 0.0 and indicates the relative model size after pruning. For example, 0.4 means that the model retains only 40% of the original number of parameters, and 1.0 means no compression at all.

Dependencies

To run an ACIP model from our hub, you only need minimal dependencies, namely torch, transformers, peft, and, optionally, bitsandbytes in case you want to quantize your model. See requirements.txt for pip-installable dependencies with exact version pins (newer versions should work as well).
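If you do not need the exact pins, an install along these lines should work (bitsandbytes only if you plan to quantize):

pip install torch transformers peft bitsandbytes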

License

This model is released under the Llama 2 license.

Citation

When using or referring to this model, please cite our paper:

@article{mxm2025acip,
  title={Choose Your Model Size: Any Compression by a Single Gradient Descent},
  author={Genzel, M. and Putzky, P. and Zhao, P. and Schulze, S. and Mollenhauer, M. and Seidel, R. and Dietzel, S. and Wollmann, T.},
  journal={arXiv preprint arXiv:2502.01717},
  year={2025}
}