TheBloke committed on
Commit 6c778dd
1 Parent(s): 693fd38

Upload README.md

Files changed (1): README.md +10 -0
README.md CHANGED
@@ -5,6 +5,15 @@ license: llama2
 model_creator: TFLai
 model_name: ChatAYT Lora Assamble Marcoroni
 model_type: llama
+prompt_template: '### Instruction:
+
+
+ {prompt}
+
+
+ ### Response:
+
+ '
 quantized_by: TheBloke
 ---
 
@@ -40,6 +49,7 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ChatAYT-Lora-Assamble-Marcoroni-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ChatAYT-Lora-Assamble-Marcoroni-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ChatAYT-Lora-Assamble-Marcoroni-GGUF)
 * [TFLai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TFLai/ChatAYT-Lora-Assamble-Marcoroni)
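The `prompt_template` this commit adds to the README metadata is an Alpaca-style instruction format. As a minimal sketch of how a client would apply it (the helper name below is illustrative, not from the source), filling the template is a single string substitution:

```python
# Alpaca-style prompt template, matching the prompt_template value
# added to the README's YAML metadata in this commit.
PROMPT_TEMPLATE = """### Instruction:

{prompt}

### Response:
"""

def build_prompt(user_prompt: str) -> str:
    # Substitute the user's request into the template the model expects.
    return PROMPT_TEMPLATE.format(prompt=user_prompt)

print(build_prompt("Tell me about AI"))
```

The resulting string is what would be passed as the model input; generation then continues after the `### Response:` marker.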