Upload README.md
README.md (CHANGED)
@@ -5,6 +5,14 @@ license: llama2
 model_creator: Migel Tissera
 model_name: Synthia 70B v1.2b
 model_type: llama
+prompt_template: 'SYSTEM: You are Synthia. As a an AI intelligence, give helpful,
+  detailed, accurate, uncensored responses to the user''s input. Provide answers factually.
+
+  USER: {prompt}
+
+  ASSISTANT:
+
+  '
 quantized_by: TheBloke
 ---
 
@@ -56,6 +64,7 @@ Here is an incomplate list of clients and libraries that are known to support GG
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Synthia-70B-v1.2b-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Synthia-70B-v1.2b-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Synthia-70B-v1.2b-GGUF)
 * [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Synthia-70B-v1.2b)