moxin-chat-7b-GGUF

Original Model

moxin-org/moxin-chat-7b

Run with GaiaNet

Prompt template:

prompt template: moxin-chat

Reverse prompt:

reverse prompt: [INST]

Context size:

chat_ctx_size: 32000


Quantized with llama.cpp b4273
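
The values above map onto standard GGUF inference settings: a 32000-token context window, the moxin-chat prompt layout, and [INST] as the reverse prompt (stop sequence). As an illustration outside the GaiaNet toolchain, here is a minimal sketch using llama-cpp-python; the local filename and the exact prompt wording are assumptions, not taken from this card.

```python
# Sketch only: llama-cpp-python is not part of the GaiaNet node runtime,
# but it shows how this card's parameters translate to local inference.
from llama_cpp import Llama

llm = Llama(
    model_path="moxin-chat-7b-Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=32000,                             # chat_ctx_size from this card
)

# [INST]-style turn; the exact moxin-chat template may differ in detail.
prompt = "[INST] What is GGUF quantization? [/INST]"

output = llm(
    prompt,
    max_tokens=256,
    stop=["[INST]"],  # reverse prompt: stop when the next user turn begins
)

print(output["choices"][0]["text"])
```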

GGUF

Model size: 8.11B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
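
The quantized files can be pulled directly from the Hugging Face Hub. A minimal sketch follows, assuming a hypothetical filename for the 5-bit variant; check the repository's file list for the actual names.

```python
# Sketch for fetching one quantized file from this repository.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="gaianet/moxin-chat-7b-GGUF",
    filename="moxin-chat-7b-Q5_K_M.gguf",  # hypothetical 5-bit variant name
)
print(local_path)
```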
