---
license: apache-2.0
pipeline_tag: text-generation
library_name: gguf
base_model: ibm/labradorite-13b
---

<u>**NOTE**</u>: You will need a recent build of llama.cpp to run these quants (i.e. commit `494c870` or newer).
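
For a quick check that a quant loads and generates, here is a minimal sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings rather than the llama.cpp CLI; the bindings must likewise bundle a sufficiently recent llama.cpp, and the GGUF file name below is hypothetical (substitute the quant you downloaded).

```python
# Minimal smoke test. Assumptions: llama-cpp-python built against a llama.cpp
# at or past commit 494c870; the file name below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="labradorite-13b.Q4_K_M.gguf",  # hypothetical quant file
    n_ctx=4096,       # the model's context length (see the table below)
    n_gpu_layers=-1,  # offload all 40 layers when built with GPU support
)

# For real prompts, use the chat template shown in the table below.
out = llm("An importance matrix (imatrix) is", max_tokens=32)
print(out["choices"][0]["text"])
```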

GGUF importance matrix (imatrix) quants for https://huggingface.co/ibm/labradorite-13b

* The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
* The [imatrix is also applied to the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930); a reproduction sketch follows after this list.
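
The imatrix computation and its application during quantization can be reproduced with llama.cpp's `imatrix` and `quantize` tools; the sketch below wraps them in Python's `subprocess` for illustration. All file names and the calibration file are placeholders inferred from the description above, not the exact commands used to produce these quants.

```python
# Hypothetical reproduction sketch: compute an importance matrix over ~50K
# tokens of calibration text (105 chunks of 512 tokens), then pass it to
# quantize so that it is also applied to the K-quants.
import subprocess

subprocess.run([
    "./imatrix",
    "-m", "labradorite-13b-f16.gguf",  # hypothetical FP16 conversion of the base model
    "-f", "calibration-data.txt",      # general purpose calibration dataset
    "-o", "imatrix.dat",               # importance matrix output file
    "-c", "512",                       # 512-token batches
    "--chunks", "105",                 # ~50K tokens in total
], check=True)

subprocess.run([
    "./quantize",
    "--imatrix", "imatrix.dat",        # use the imatrix during quantization
    "labradorite-13b-f16.gguf",
    "labradorite-13b.Q4_K_M.gguf",     # hypothetical output quant
    "Q4_K_M",
], check=True)
```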

| Layers | Context | [Template](https://huggingface.co/ibm/labradorite-13b#prompt-template) |
| --- | --- | --- |
| <pre>40</pre> | <pre>4096</pre> | <pre>\<\|system\|\><br>{sys_prompt}<br>\<\|user\|\><br>{inputs}<br>\<\|assistant\|\><br>{response}\<\|endoftext\|\></pre> |
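
The template above can be assembled programmatically; a minimal sketch (the function name and sample strings are illustrative):

```python
def build_prompt(sys_prompt: str, user_input: str) -> str:
    """Format a single-turn prompt following the template in the table above."""
    return (
        f"<|system|>\n{sys_prompt}\n"
        f"<|user|>\n{user_input}\n"
        f"<|assistant|>\n"
    )

# The model writes the response and ends it with <|endoftext|>, which makes
# <|endoftext|> a natural stop token when sampling.
prompt = build_prompt(
    "You are a helpful assistant.",       # illustrative system prompt
    "Explain what an imatrix quant is.",  # illustrative user turn
)
print(prompt)
```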