GGUF llama.cpp quantized version of:

Recommended Prompt Format (Llama 3)

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Provide some context and/or instructions to the model.<|eot_id|><|start_header_id|>user<|end_header_id|>

The user’s message goes here<|eot_id|><|start_header_id|>assistant<|end_header_id|>

AI message goes here<|eot_id|>
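
For illustration, here is a minimal sketch (not part of the original card) of sending a conversation to a local GGUF file with llama-cpp-python's chat-completion helper, which applies the chat template embedded in the GGUF metadata; for a Llama 3 model that template expands to the format shown above. The filename, context size, and messages are placeholder assumptions.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a local GGUF file.
# The filename, context size, and messages are placeholders, not values taken
# from this model card.
from llama_cpp import Llama

llm = Llama(model_path="model-Q8_0.gguf", n_ctx=8192)

# create_chat_completion applies the chat template stored in the GGUF metadata;
# for a Llama 3 model this is the <|start_header_id|>/<|eot_id|> format above.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Provide some context and/or instructions to the model."},
        {"role": "user", "content": "The user's message goes here"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```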

Quant Version: b3902 (llama.cpp build)

GGUF details
Model size: 8.03B params
Architecture: llama
Available quantizations: 2-bit, 5-bit, 8-bit, 32-bit
