Upload README.md

README.md CHANGED

````diff
@@ -60,10 +60,15 @@ Here are a list of clients and libraries that are known to support GGUF:
 <!-- repositories-available end -->
 
 <!-- prompt-template start -->
-## Prompt template:
+## Prompt template: Alpaca
 
 ```
-
+Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+### Instruction:
+{prompt}
+
+### Response:
 
 ```
 
````
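The Alpaca template added above is a plain string with a `{prompt}` placeholder. As a quick illustration of how it is meant to be filled in, here is a minimal Python sketch; the `ALPACA_TEMPLATE` constant and `build_prompt` helper are hypothetical names, not part of the README:

```python
# Hypothetical helper that fills in the Alpaca template from this commit.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:"
)

def build_prompt(instruction: str) -> str:
    """Substitute the user's instruction into the {prompt} placeholder."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Write a story about llamas."))
```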
````diff
@@ -122,7 +127,7 @@ Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6f
 For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.
 
 ```
-./main -t 10 -ngl 32 -m mythalion-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "
+./main -t 10 -ngl 32 -m mythalion-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
 ```
 Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.
````
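The updated `./main` invocation bakes the Alpaca template directly into the `-p` argument. For comparison, here is a minimal sketch of the same call through the llama-cpp-python bindings rather than the `./main` binary; the bindings are an assumption here (the README excerpt above only shows the CLI), and the generation parameters simply mirror the command's flags:

```python
from llama_cpp import Llama

# Assumes a GGUF-capable build of llama-cpp-python is installed.
llm = Llama(
    model_path="mythalion-13b.q4_K_M.gguf",  # same file as the CLI example
    n_ctx=4096,       # mirrors -c 4096
    n_threads=10,     # mirrors -t 10; set to your physical core count
    n_gpu_layers=32,  # mirrors -ngl 32
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a story about llamas.\n\n### Response:"
)

output = llm(
    prompt,
    max_tokens=512,      # the CLI used -n -1 (unbounded); capped here
    temperature=0.7,     # mirrors --temp 0.7
    repeat_penalty=1.1,  # mirrors --repeat_penalty 1.1
)
print(output["choices"][0]["text"])
```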