---
language: en
license: apache-2.0
library_name: transformers
---
# SQFT Fine-tuned Model: sqft-qa-sparsepeft-mistral-7b-v0.3-50-gptq-gsm8k-heu
- Base Model: [IntelLabs/sqft-mistral-7b-v0.3-50-base-gptq](https://huggingface.co/IntelLabs/sqft-mistral-7b-v0.3-50-base-gptq)
- Sparsity: 50%
- Quantization: INT4 (GPTQ)
- Fine-tuning Method: SQFT + QA-SparsePEFT
- Fine-tuning Data: GSM8K
- Sub-Adapter: Heuristic
## Evaluation
```bash
MODEL_NAME=IntelLabs/sqft-qa-sparsepeft-mistral-7b-v0.3-50-gptq-gsm8k-heu
lm_eval --model hf --model_args pretrained=${MODEL_NAME},add_bos_token=True,trust_remote_code=True --tasks gsm8k --batch_size auto:4
```
Refer to [our repository](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT) for the environment setup required to run this command.
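For quick inference outside `lm-eval`, a minimal loading sketch is shown below. It assumes the standard `transformers` generation API (loading a GPTQ checkpoint typically also requires `optimum` and `auto-gptq` to be installed); the prompt and generation settings are illustrative only and are not taken from the SQFT repository.

```python
# Minimal inference sketch (assumption: standard transformers API; the
# GPTQ checkpoint loads through optimum/auto-gptq when they are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IntelLabs/sqft-qa-sparsepeft-mistral-7b-v0.3-50-gptq-gsm8k-heu"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # place the quantized weights on available devices
    trust_remote_code=True,  # mirrors the lm-eval command above
)

# Illustrative GSM8K-style prompt; the exact prompt template used for
# evaluation is defined by the lm-eval gsm8k task, not by this snippet.
prompt = (
    "Question: A robe takes 2 bolts of blue fiber and half that much "
    "white fiber. How many bolts in total does it take?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```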
## Model Sources
- Repository: [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT)
- Papers:
  - [SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models](https://aclanthology.org/2024.findings-emnlp.749)
  - Low-Rank Adapters Meet Neural Architecture Search for LLM Compression
## Citation
```bibtex
@inproceedings{munoz-etal-2024-sqft,
    title = "{SQFT}: Low-cost Model Adaptation in Low-precision Sparse Foundation Models",
    author = "Munoz, Juan Pablo and
      Yuan, Jinjie and
      Jain, Nilesh",
    editor = "Al-Onaizan, Yaser and
      Bansal, Mohit and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.749",
    pages = "12817--12832",
}
```
## License

Apache-2.0