# LiberatedHermes-2-Pro-Mistral-7B-HQQ

This is a 4-bit quantization of LiberatedHermes-2-Pro-Mistral-7B using [HQQ (Half-Quadratic Quantization)](https://github.com/mobiusml/hqq).
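
For reference, here is a hedged sketch of how a 4-bit HQQ checkpoint like this one can be produced with the `hqq` library. The base repository id and the `group_size` value are illustrative assumptions, not settings confirmed by this card:

```python
from hqq.engine.hf import HQQModelForCausalLM
from hqq.core.quantize import BaseQuantizeConfig

base_id = "macadeliccc/LiberatedHermes-2-Pro-Mistral-7B"  # assumed base repo

# Load the full-precision model, quantize the weights to 4 bits in place,
# then write the quantized checkpoint to disk.
model = HQQModelForCausalLM.from_pretrained(base_id)
quant_config = BaseQuantizeConfig(nbits=4, group_size=64)  # group_size is a guess
model.quantize_model(quant_config=quant_config)
model.save_quantized("LiberatedHermes-2-Pro-Mistral-7B-HQQ")
```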

## Load Script
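
The snippet below assumes the `hqq` package is installed (e.g. `pip install hqq`).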

```python
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/LiberatedHermes-2-Pro-Mistral-7B-HQQ"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = HQQModelForCausalLM.from_quantized(model_id)  # loads the 4-bit HQQ weights
```
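
For a quick smoke test, a minimal generation sketch. The chat-template usage and sampling settings are assumptions (Hermes-2-Pro models ship a ChatML chat template), not taken from the original card:

```python
import torch

messages = [{"role": "user", "content": "Summarize HQQ quantization in one sentence."}]

# apply_chat_template formats the prompt with the tokenizer's chat template.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```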