Tags: Transformers, GGUF, llama, Inference Endpoints

Prompt template:

You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:

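A minimal Python sketch of how the template above can be filled in programmatically; the `build_prompt` helper name is hypothetical and not part of this repository:

```python
# Hypothetical helper that fills in the prompt template shown above.
def build_prompt(user_message: str) -> str:
    return (
        "You are a helpful AI assistant.\n"
        f"USER: {user_message}\n"
        "ASSISTANT:"
    )

print(build_prompt("Explain what a GGUF file is."))
```
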
Files:

wizardlm-7b-v1.0-uncensored.Q2_K.gguf
wizardlm-7b-v1.0-uncensored.Q3_K_L.gguf
wizardlm-7b-v1.0-uncensored.Q3_K_M.gguf
wizardlm-7b-v1.0-uncensored.Q3_K_S.gguf
wizardlm-7b-v1.0-uncensored.Q4_0.gguf
wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf
wizardlm-7b-v1.0-uncensored.Q4_K_S.gguf
wizardlm-7b-v1.0-uncensored.Q5_0.gguf
wizardlm-7b-v1.0-uncensored.Q5_K_M.gguf
wizardlm-7b-v1.0-uncensored.Q5_K_S.gguf
wizardlm-7b-v1.0-uncensored.Q6_K.gguf
wizardlm-7b-v1.0-uncensored.Q8_0.gguf
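
One way to fetch a single quantization from this repository is with the huggingface_hub client; the Q4_K_M file is used here only as an example, and any of the files listed above works the same way:

```python
from huggingface_hub import hf_hub_download

# Download one quantization (Q4_K_M shown as an example) to the local cache.
model_path = hf_hub_download(
    repo_id="ZanMax/WizardLM-7B-V1.0-Uncensored-GGUF",
    filename="wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf",
)
print(model_path)
```
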
Downloads last month: 366

Model size: 6.74B params
Architecture: llama
Format: GGUF

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
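
Because the files are GGUF with a llama architecture, they can be loaded with a llama.cpp binding. A rough sketch using llama-cpp-python, assuming the Q4_K_M file has already been downloaded locally and using placeholder generation settings:

```python
from llama_cpp import Llama

# Load the locally downloaded GGUF file; path and context size are placeholders.
llm = Llama(
    model_path="wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf",
    n_ctx=2048,
)

# Use the prompt template from the model card.
prompt = (
    "You are a helpful AI assistant.\n"
    "USER: What is quantization?\n"
    "ASSISTANT:"
)

output = llm(prompt, max_tokens=256, stop=["USER:"])
print(output["choices"][0]["text"])
```

Lower-bit files (Q2_K, Q3_K_*) are smaller and faster but lose more quality; higher-bit files (Q6_K, Q8_0) are closer to the original weights at the cost of memory.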

