---
license: llama3.1
---
GGUF [llama.cpp](https://github.com/ggerganov/llama.cpp) quantized version of:
- Original model: [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
- Model creator: [Meta](https://huggingface.co/meta-llama)
- [License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)

<span style="color: red">Update (2024-07-27):</span> The latest fixes for using the full 128k context window are included in the -ropefix versions. **Requirement:** they were quantized with llama.cpp [b3472](https://github.com/ggerganov/llama.cpp/releases/tag/b3472), and that release (or newer) is needed to run them.
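
As a rough illustration, loading one of the -ropefix quants with the full context window via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) might look like the sketch below. The file name is a placeholder for whichever quant you downloaded, and `n_ctx=131072` requests the full 128k window (this assumes your llama-cpp-python build is based on b3472 or later):

```python
from llama_cpp import Llama

# Placeholder file name -- substitute the -ropefix quant you downloaded.
llm = Llama(
    model_path="Meta-Llama-3.1-8B-Instruct-Q4_K_M-ropefix.gguf",
    n_ctx=131072,  # full 128k context window; needs the b3472 RoPE fixes
)

out = llm("Summarize the following document: ...", max_tokens=256)
print(out["choices"][0]["text"])
```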

<span style="color: red">Update:</span> Prefer the -imatrix versions: they are quantized with an [importance matrix](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) and use the **bpe-llama tokenizer**, which should, in theory, improve output quality.

## Recommended Prompt Format (Llama3)
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Provide some context and/or instructions to the model.<|eot_id|><|start_header_id|>user<|end_header_id|>

The user's message goes here<|eot_id|><|start_header_id|>assistant<|end_header_id|>

AI message goes here<|eot_id|>
```
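If you assemble prompts by hand (e.g. when calling the raw completion API rather than a chat endpoint), a small helper like the hypothetical sketch below reproduces the single-turn format above:

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 format shown above."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    system="You are a concise assistant.",
    user="Explain what a GGUF file is in one sentence.",
)
```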
Quant Version: [b3445](https://github.com/ggerganov/llama.cpp/releases/tag/b3445)