Update README.md
README.md
@@ -35,7 +35,7 @@ Can I ask a question?<|im_end|>
 
 ## Support
 
-To run inference on this model, you'll need to use Aphrodite or
+To run inference on this model, you'll need to use Aphrodite, vLLM or EXL2/tabbyAPI, as llama.cpp hasn't yet merged the required pull request to fix the llama3.1 rope_freqs issue with custom head dimensions.
 
 However, you can work around this by quantizing the model yourself to create a functional GGUF file. Note that until [this PR](https://github.com/ggerganov/llama.cpp/pull/9141) is merged, the context will be limited to 8k tokens.
 
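The quantize-it-yourself workaround mentioned in the README text above can be sketched with llama.cpp's own conversion tools; the model path and the `Q4_K_M` quantization type below are illustrative choices, not prescribed by the commit:

```shell
# Convert the Hugging Face checkpoint to an f16 GGUF file
# (run from a llama.cpp checkout; the model path is illustrative).
python convert_hf_to_gguf.py /path/to/model \
    --outfile model-f16.gguf --outtype f16

# Quantize the f16 GGUF down to a smaller format, e.g. Q4_K_M.
./llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

Until the linked PR lands, inference on the resulting GGUF would still be subject to the 8k-token context limit noted above.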