SQFT Base Model: sqft-phi-3-mini-4k-60-base

Model Sources

How to get this model

Refer to the command in SQFT/run_command/phi-3-mini-4k-instruct/sparse_quantization.sh#11.
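Once the checkpoint is available on the Hugging Face Hub, it can be loaded with the standard `transformers` API. A minimal sketch, assuming `transformers` and `torch` are installed; the repo id `IntelLabs/sqft-phi-3-mini-4k-60-base` is taken from this card, and `load_model` is an illustrative helper, not part of SQFT:

```python
# Illustrative loading sketch for this base model (not an official SQFT utility).
# Repo id comes from this model card; downloading the ~3.8B-param FP16 checkpoint
# requires network access and several GB of memory.
model_id = "IntelLabs/sqft-phi-3-mini-4k-60-base"

def load_model(repo_id: str = model_id):
    """Download and return (tokenizer, model) for the given Hub repo."""
    # Heavy dependencies are imported lazily so the module is cheap to import.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype=torch.float16,  # the card lists FP16 tensors
        trust_remote_code=True,     # Phi-3 custom model code may live in the repo
    )
    return tokenizer, model
```
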

Citation

@article{munoz2024sqft,
  title={SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models},
  author={J. Pablo Munoz and Jinjie Yuan and Nilesh Jain},
  journal={The 2024 Conference on Empirical Methods in Natural Language Processing (Findings)},
  year={2024}
}

Acknowledgement

Thanks to Wanda (paper, code), which provides a simple but effective pruning approach.

License

Apache-2.0

Model details

Format: Safetensors
Model size: 3.82B params
Tensor type: FP16
