GGUF llama.cpp quantized version of:

Recommended Prompt Format (Llama 3)

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Provide some context and/or instructions to the model.<|eot_id|><|start_header_id|>user<|end_header_id|>

The user’s message goes here<|eot_id|><|start_header_id|>assistant<|end_header_id|>

AI message goes here<|eot_id|>
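
A minimal sketch of how the template above could be assembled in plain Python before handing the prompt to a llama.cpp front end. The function name `format_llama3_prompt` and the example strings are illustrative, not part of the original card; only the special tokens come from the template shown above.

def format_llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama 3 prompt using the special tokens shown above."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
        # The model generates its reply after this header; generation should
        # be stopped at the next <|eot_id|> token.
    )

if __name__ == "__main__":
    prompt = format_llama3_prompt(
        system="Provide some context and/or instructions to the model.",
        user="The user's message goes here",
    )
    print(prompt)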

Quant Version: b3902

Model size: 8.03B params
Architecture: llama

Available quantizations: 2-bit, 5-bit, 8-bit, 32-bit
