---
license: apache-2.0
---

# ggml versions of OpenLLaMA 3B v2

- Version: 2 (final)
- Project: [OpenLLaMA: An Open Reproduction of LLaMA](https://github.com/openlm-research/open_llama)
- Model: [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2)
- [llama.cpp](https://github.com/ggerganov/llama.cpp): build 607 (ffb06a3) or later

## Use with llama.cpp

Support is now merged into the master branch of llama.cpp, so these files work with a regular build of the tools; see the usage sketch below.

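For example, any of the quantized files can be passed straight to the `main` example program. The file name below is a placeholder for whichever file you downloaded:

```bash
# Run a short completion with a quantized model (file name is illustrative)
./main -m open-llama-3b-v2-q4_0.bin \
  -p "Building a website can be done in 10 simple steps:" \
  -n 128
```
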
## Newer quantizations

There are now more quantization types in llama.cpp (the k-quants), some of them below 4 bits per weight.

Out of the box these do not work with this model: the 3B model's tensor row sizes are not divisible by 256, the super-block size the k-quants normally use.

If you want to use them, you have to build llama.cpp (from build 829 (ff5d58f)) with the `LLAMA_QKK_64` Make or CMake variable enabled, which switches the k-quants to a super-block size of 64 (see PR [#2001](https://github.com/ggerganov/llama.cpp/pull/2001)); both build variants are sketched below.

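A minimal sketch, assuming a llama.cpp checkout at build 829 or later:

```bash
# Make build with the 64-weight super-block size enabled
make clean
make LLAMA_QKK_64=1

# ... or the equivalent CMake configuration
mkdir -p build && cd build
cmake .. -DLLAMA_QKK_64=ON
cmake --build . --config Release
```
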
Then you can quantize the F16 (or possibly the Q8_0) file to the type you want, as shown below.

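For example, to produce a Q4_K_M file from the F16 file (the file names are placeholders; substitute the actual file names from this repository):

```bash
# Re-quantize the F16 model to a k-quant type (file names are illustrative)
./quantize open-llama-3b-v2-f16.bin open-llama-3b-v2-q4_k_m.bin Q4_K_M
```
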
## Perplexity on wiki.test.raw

Coming soon ...

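In the meantime, the numbers can be reproduced with llama.cpp's `perplexity` tool; a minimal sketch, with an illustrative model file name:

```bash
# Measure perplexity over the wikitext-2 test set
./perplexity -m open-llama-3b-v2-q4_0.bin -f wiki.test.raw
```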