Description

MaziyarPanahi/Mistral-7B-Instruct-v0.1-AWQ is an AWQ-quantized version of mistralai/Mistral-7B-Instruct-v0.1.

How to use

Install the necessary packages

pip install --upgrade accelerate autoawq transformers

Example Python code

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MaziyarPanahi/Mistral-7B-Instruct-v0.1-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ checkpoints load through the standard transformers API (autoawq must be
# installed); .to(0) places the model on the first GPU
model = AutoModelForCausalLM.from_pretrained(model_id).to(0)

text = "User:\nHello can you provide me with top-3 cool places to visit in Paris?\n\nAssistant:\n"
inputs = tokenizer(text, return_tensors="pt").to(0)

out = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(out[0], skip_special_tokens=True))
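Note that Mistral-7B-Instruct was fine-tuned on prompts wrapped in `[INST] ... [/INST]` tags, so following that format generally yields better responses than the plain `User:/Assistant:` prompt above. A minimal sketch of the wrapping, assuming the v0.1 instruction template; `build_mistral_prompt` is a hypothetical helper written here for illustration, not part of transformers:

```python
def build_mistral_prompt(messages):
    """Wrap alternating user/assistant turns in Mistral's [INST] tags.

    messages: list of {"role": ..., "content": ...} dicts, starting with a
    "user" turn. Returns the raw prompt string; the tokenizer prepends the
    <s> begin-of-sequence token itself.
    """
    prompt = ""
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        else:
            # Assistant turns are closed with the end-of-sequence token
            prompt += f"{msg['content']}</s>"
    return prompt


prompt = build_mistral_prompt(
    [{"role": "user", "content": "Top-3 cool places to visit in Paris?"}]
)
print(prompt)  # [INST] Top-3 cool places to visit in Paris? [/INST]
```

In practice, `tokenizer.apply_chat_template(messages, return_tensors="pt")` produces this formatting directly from the template bundled with the tokenizer, so you rarely need to build the string by hand.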