TheBloke committed on
Commit 8d14d25
1 Parent(s): 7d1f043

Initial GPTQ model commit

Files changed (1)
README.md +7 -3
README.md CHANGED
@@ -47,10 +47,12 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/CodeLlama-34B-Instruct-GGML)
 * [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codellama/CodeLlama-34b-instruct-hf)
 
-## Prompt template: TBC
+## Prompt template: CodeLlama
 
 ```
-Info on prompt template will be added shortly.
+[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
+{prompt}
+[/INST]
 ```
 
 ## Provided files and GPTQ parameters
@@ -159,7 +161,9 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
 """
 
 prompt = "Tell me about AI"
-prompt_template=f'''Info on prompt template will be added shortly.
+prompt_template=f'''[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
+{prompt}
+[/INST]
 '''
 
 print("\n\n*** Generate:")