Supa-AI/llama-7b-hf-32768-fpf-gguf
This model was converted to GGUF format from mesolitica/llama-7b-hf-32768-fpf
using llama.cpp.
Refer to the original model card for more details on the model.
Available Versions
- llama-7b-hf-32768-fpf.q4_0.gguf (q4_0)
- llama-7b-hf-32768-fpf.q4_1.gguf (q4_1)
- llama-7b-hf-32768-fpf.q5_0.gguf (q5_0)
- llama-7b-hf-32768-fpf.q5_1.gguf (q5_1)
- llama-7b-hf-32768-fpf.q8_0.gguf (q8_0)
- llama-7b-hf-32768-fpf.q3_k_s.gguf (q3_K_S)
- llama-7b-hf-32768-fpf.q3_k_m.gguf (q3_K_M)
- llama-7b-hf-32768-fpf.q3_k_l.gguf (q3_K_L)
- llama-7b-hf-32768-fpf.q4_k_s.gguf (q4_K_S)
- llama-7b-hf-32768-fpf.q4_k_m.gguf (q4_K_M)
- llama-7b-hf-32768-fpf.q5_k_s.gguf (q5_K_S)
- llama-7b-hf-32768-fpf.q5_k_m.gguf (q5_K_M)
- llama-7b-hf-32768-fpf.q6_k.gguf (q6_K)
Use with llama.cpp
Replace FILENAME with one of the filenames listed above.
CLI:
llama-cli --hf-repo Supa-AI/llama-7b-hf-32768-fpf-gguf --hf-file FILENAME -p "Your prompt here"
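For example, to run the q4_K_M quantization with a prompt:
llama-cli --hf-repo Supa-AI/llama-7b-hf-32768-fpf-gguf --hf-file llama-7b-hf-32768-fpf.q4_k_m.gguf -p "Your prompt here"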
Server:
llama-server --hf-repo Supa-AI/llama-7b-hf-32768-fpf-gguf --hf-file FILENAME -c 2048
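Once the server is up, it can be queried over HTTP. A minimal sketch, assuming llama-server's default port of 8080 and its OpenAI-compatible chat completions endpoint:
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Your prompt here"}]}'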
Model Details
- Original Model: mesolitica/llama-7b-hf-32768-fpf
- Format: GGUF