CharlieJi committed
Commit 49199fe
1 Parent(s): 29b519b

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -1,35 +1,9 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
+ gorilla-openfunctions-v2-q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ gorilla-openfunctions-v2-q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ gorilla-openfunctions-v2-q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ gorilla-openfunctions-v2-q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ gorilla-openfunctions-v2-q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ gorilla-openfunctions-v2-q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ gorilla-openfunctions-v2-q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ gorilla-openfunctions-v2-q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ gorilla-openfunctions-v2-q6_K.gguf filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,270 @@
  ---
  license: apache-2.0
  ---
+ # Gorilla-OpenFunctions-v2 GGUF Quantized Models
+
+ ## Gorilla-OpenFunctions-v2
+ 💡 SoTA for open-source models. On par with GPT-4.
+
+ 🚀 Check out the [Berkeley Function Calling Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard)
+ 📣 Read more in our [OpenFunctions v2 release blog](https://gorilla.cs.berkeley.edu/blogs/7_open_functions_v2.html) and [Berkeley Function Calling Leaderboard blog](https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html)
+
+ ## Introduction
+ Gorilla OpenFunctions extends the Large Language Model (LLM) chat-completion feature to formulate
+ executable API calls given natural language instructions and API context. With OpenFunctions v2,
+ we now support:
+ 1. Multiple functions - choose between functions
+ 2. Parallel functions - call the same function `N` times with different parameter values
+ 3. Multiple & parallel - both of the above in a single chat-completion call (one generation)
+ 4. Relevance detection - when chatting, chat; when asked for a function, return a function
+ 5. Python - supports `string, number, boolean, list, tuple, dict` parameter datatypes and `Any` for those not natively supported
+ 6. Java - supports `byte, short, int, float, double, long, boolean, char, Array, ArrayList, Set, HashMap, Hashtable, Queue, Stack, and Any` datatypes
+ 7. JavaScript - supports `String, Number, Bigint, Boolean, dict (object), Array, Date, and Any` datatypes
+ 8. REST - native REST support
+
+ We've quantized [Gorilla-OpenFunctions-v2](https://huggingface.co/gorilla-llm/gorilla-openfunctions-v2) with [llama.cpp](https://github.com/ggerganov/llama.cpp) and evaluated the quantized models on the [Berkeley Function Calling Leaderboard](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard) to benchmark their performance against the original model and other models.
+
+ # Gorilla-OpenFunctions-v2 Quantized GGUF Models Evaluation
+ Here we summarize the evaluation results we obtained.
+
+ | Model | Overall Accuracy* |
+ |---|---|
+ |GPT-4-0125-Preview | 85.12% |
+ |**Gorilla-OpenFunctions-v2** | 83.67% |
+ |GPT-3.5-turbo | 82.23% |
+ |--quantized 🦍 models ⬇--|--quantized 🦍 evaluation result ⬇--|
+ |Gorilla-OpenFunctions-v2-q6_K | 80.30% |
+ |Gorilla-OpenFunctions-v2-q5_K_M | 80.66% |
+ |Gorilla-OpenFunctions-v2-q5_K_S | 79.10% |
+ |Gorilla-OpenFunctions-v2-q4_K_M | 81.02% |
+ |Gorilla-OpenFunctions-v2-q4_K_S | 79.94% |
+ |Gorilla-OpenFunctions-v2-q3_K_L | 80.84% |
+ |Gorilla-OpenFunctions-v2-q3_K_M | 78.80% |
+ |Gorilla-OpenFunctions-v2-q3_K_S | 78.67% |
+ |Gorilla-OpenFunctions-v2-q2_K | 74.64% |
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63814d392dd1f3e7bf59862f/bxlhiRh5IEHGSh026enj4.png)
+
+ We observe that the quantized models have lower overall accuracy than the original model. Results for q4 and higher quantization levels are comparable, while q3 and q2 show a larger drop in overall accuracy.
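The size of that drop can be read straight off the table. A quick sketch, using only the overall accuracies listed above (the `drops` dict and variable names are illustrative, not part of any evaluation harness):

```python
# Overall accuracy (%) from the table above.
original = 83.67  # Gorilla-OpenFunctions-v2, unquantized
quantized = {
    "q6_K": 80.30, "q5_K_M": 80.66, "q5_K_S": 79.10,
    "q4_K_M": 81.02, "q4_K_S": 79.94, "q3_K_L": 80.84,
    "q3_K_M": 78.80, "q3_K_S": 78.67, "q2_K": 74.64,
}

# Accuracy drop of each quantized model relative to the original, in points.
drops = {method: round(original - acc, 2) for method, acc in quantized.items()}
worst = max(drops, key=drops.get)
print(drops)
print("largest drop:", worst, drops[worst])
```

For example, q4_K_M loses only 2.65 points while q2_K loses 9.03, which is the comparison behind the observation above.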
+
+ ---
+ # How to use GGUF locally
+
+ To use the GGUF models locally, first download them.
+
+ One option is `huggingface-cli`; to install it, follow the tutorial at https://huggingface.co/docs/huggingface_hub/main/en/guides/cli.
+
+ Then run the following command, replacing `{QUANTIZATION_METHOD}` with your chosen quantization method:
+
+ ```bash
+ huggingface-cli download gorilla-llm/gorilla-openfunctions-v2-gguf gorilla-openfunctions-v2-{QUANTIZATION_METHOD}.gguf --local-dir gorilla-openfunctions-v2-GGUF
+ ```
+
+ This stores the chosen GGUF file in the local directory `gorilla-openfunctions-v2-GGUF`.
+
+ We support QUANTIZATION_METHOD = {`q2_K`, `q3_K_S`, `q3_K_M`, `q3_K_L`, `q4_K_S`, `q4_K_M`, `q5_K_S`, `q5_K_M`, `q6_K`}.
+ Please let us know what other quantization methods you would like us to include!
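For scripted setups, the same command can be assembled programmatically. A minimal sketch (the filename pattern mirrors the files in this repo; `download_command` is a hypothetical helper, not a library function):

```python
# Quantization methods that have a corresponding GGUF file in this repo.
VALID_METHODS = {
    "q2_K", "q3_K_S", "q3_K_M", "q3_K_L",
    "q4_K_S", "q4_K_M", "q5_K_S", "q5_K_M", "q6_K",
}

def download_command(method: str, local_dir: str = "gorilla-openfunctions-v2-GGUF") -> str:
    """Build the huggingface-cli command for one quantization method."""
    if method not in VALID_METHODS:
        raise ValueError(f"unsupported quantization method: {method}")
    filename = f"gorilla-openfunctions-v2-{method}.gguf"
    return (
        "huggingface-cli download gorilla-llm/gorilla-openfunctions-v2-gguf "
        f"{filename} --local-dir {local_dir}"
    )

print(download_command("q4_K_M"))
```

Validating the method name up front avoids a failed download request for a file that does not exist in the repo.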
+
+ Please follow the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) instructions to install the `llama-cpp-python` package on your machine.
+
+ Then you can run the following example script for local inference. Fill in `YOUR_DIRECTORY` in the code snippet. This script is adapted from https://github.com/abetlen/llama-cpp-python and https://github.com/ShishirPatil/gorilla/tree/main/openfunctions
+
+ ```python
+ from llama_cpp import Llama
+ import json
+
+ llm = Llama(model_path="YOUR_DIRECTORY/gorilla-openfunctions-v2-GGUF/gorilla-openfunctions-v2-q2_K.gguf", n_threads=8, n_gpu_layers=35)
+
+ def get_prompt(user_query: str, functions: list = []) -> str:
+     """
+     Generates a conversation prompt based on the user's query and a list of functions.
+
+     Parameters:
+     - user_query (str): The user's query.
+     - functions (list): A list of functions to include in the prompt.
+
+     Returns:
+     - str: The formatted conversation prompt.
+     """
+     system = "You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
+     if len(functions) == 0:
+         return f"{system}\n### Instruction: <<question>> {user_query}\n### Response: "
+     functions_string = json.dumps(functions)
+     return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response: "
+
+ query = "What's the weather like in the two cities of Boston and San Francisco?"
+ functions = [
+     {
+         "name": "get_current_weather",
+         "description": "Get the current weather in a given location",
+         "parameters": {
+             "type": "object",
+             "properties": {
+                 "location": {
+                     "type": "string",
+                     "description": "The city and state, e.g. San Francisco, CA",
+                 },
+                 "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
+             },
+             "required": ["location"],
+         },
+     }
+ ]
+
+ user_prompt = get_prompt(query, functions)
+
+ output = llm(
+     user_prompt,
+     max_tokens=512,  # generate up to 512 tokens
+     stop=["<|EOT|>"],  # stop at the model's end-of-turn token
+     echo=True,  # echo the prompt in the output
+ )
+
+ print("Output: ", output)
+ ```
+
+ The expected output of successfully running this script is the following (tested on March 3, 2024):
+ ```bash
+ ❯ python quantized_inference.py
+ llama_model_loader: loaded meta data with 22 key-value pairs and 273 tensors from /Users/charliecheng-jieji/Downloads/codebase/quantized_eval/gorilla-openfunctions-v2-GGUF/gorilla-openfunctions-v2-q2_K.gguf (version GGUF V3 (latest))
+ llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
+ llama_model_loader: - kv 0: general.architecture str = llama
+ llama_model_loader: - kv 1: general.name str = LLaMA v2
+ llama_model_loader: - kv 2: llama.context_length u32 = 4096
+ llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
+ llama_model_loader: - kv 4: llama.block_count u32 = 30
+ llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
+ llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
+ llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
+ llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
+ llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
+ llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
+ llama_model_loader: - kv 11: general.file_type u32 = 10
+ llama_model_loader: - kv 12: tokenizer.ggml.model str = gpt2
+ llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,102400] = ["!", "\"", "#", "$", "%", "&", "'", ...
+ llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,102400] = [0.000000, 0.000000, 0.000000, 0.0000...
+ llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,102400] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
+ llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,99757] = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e"...
+ llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 100000
+ llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 100015
+ llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 100001
+ llama_model_loader: - kv 20: tokenizer.chat_template str = {% if not add_generation_prompt is de...
+ llama_model_loader: - kv 21: general.quantization_version u32 = 2
+ llama_model_loader: - type f32: 61 tensors
+ llama_model_loader: - type q2_K: 121 tensors
+ llama_model_loader: - type q3_K: 90 tensors
+ llama_model_loader: - type q6_K: 1 tensors
+ llm_load_vocab: mismatch in special tokens definition ( 2387/102400 vs 2400/102400 ).
+ llm_load_print_meta: format = GGUF V3 (latest)
+ llm_load_print_meta: arch = llama
+ llm_load_print_meta: vocab type = BPE
+ llm_load_print_meta: n_vocab = 102400
+ llm_load_print_meta: n_merges = 99757
+ llm_load_print_meta: n_ctx_train = 4096
+ llm_load_print_meta: n_embd = 4096
+ llm_load_print_meta: n_head = 32
+ llm_load_print_meta: n_head_kv = 32
+ llm_load_print_meta: n_layer = 30
+ llm_load_print_meta: n_rot = 128
+ llm_load_print_meta: n_embd_head_k = 128
+ llm_load_print_meta: n_embd_head_v = 128
+ llm_load_print_meta: n_gqa = 1
+ llm_load_print_meta: n_embd_k_gqa = 4096
+ llm_load_print_meta: n_embd_v_gqa = 4096
+ llm_load_print_meta: f_norm_eps = 0.0e+00
+ llm_load_print_meta: f_norm_rms_eps = 1.0e-06
+ llm_load_print_meta: f_clamp_kqv = 0.0e+00
+ llm_load_print_meta: f_max_alibi_bias = 0.0e+00
+ llm_load_print_meta: n_ff = 11008
+ llm_load_print_meta: n_expert = 0
+ llm_load_print_meta: n_expert_used = 0
+ llm_load_print_meta: pooling type = 0
+ llm_load_print_meta: rope type = 0
+ llm_load_print_meta: rope scaling = linear
+ llm_load_print_meta: freq_base_train = 10000.0
+ llm_load_print_meta: freq_scale_train = 1
+ llm_load_print_meta: n_yarn_orig_ctx = 4096
+ llm_load_print_meta: rope_finetuned = unknown
+ llm_load_print_meta: model type = ?B
+ llm_load_print_meta: model ftype = Q2_K - Medium
+ llm_load_print_meta: model params = 6.91 B
+ llm_load_print_meta: model size = 2.53 GiB (3.14 BPW)
+ llm_load_print_meta: general.name = LLaMA v2
+ llm_load_print_meta: BOS token = 100000 '<|begin▁of▁sentence|>'
+ llm_load_print_meta: EOS token = 100015 '<|EOT|>'
+ llm_load_print_meta: PAD token = 100001 '<|end▁of▁sentence|>'
+ llm_load_print_meta: LF token = 126 'Ä'
+ llm_load_tensors: ggml ctx size = 0.21 MiB
+ ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 2457.45 MiB, ( 2457.52 / 10922.67)
+ llm_load_tensors: offloading 30 repeating layers to GPU
+ llm_load_tensors: offloading non-repeating layers to GPU
+ llm_load_tensors: offloaded 31/31 layers to GPU
+ llm_load_tensors: CPU buffer size = 131.25 MiB
+ llm_load_tensors: Metal buffer size = 2457.45 MiB
+ .....................................................................................
+ llama_new_context_with_model: n_ctx = 512
+ llama_new_context_with_model: freq_base = 10000.0
+ llama_new_context_with_model: freq_scale = 1
+ ggml_metal_init: allocating
+ ggml_metal_init: found device: Apple M1
+ ggml_metal_init: picking default device: Apple M1
+ ggml_metal_init: default.metallib not found, loading from source
+ ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
+ ggml_metal_init: loading '/Users/charliecheng-jieji/miniconda3/envs/public-api/lib/python3.12/site-packages/llama_cpp/ggml-metal.metal'
+ ggml_metal_init: GPU name: Apple M1
+ ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
+ ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
+ ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
+ ggml_metal_init: simdgroup reduction support = true
+ ggml_metal_init: simdgroup matrix mul. support = true
+ ggml_metal_init: hasUnifiedMemory = true
+ ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
+ ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 240.00 MiB, ( 2699.33 / 10922.67)
+ llama_kv_cache_init: Metal KV buffer size = 240.00 MiB
+ llama_new_context_with_model: KV self size = 240.00 MiB, K (f16): 120.00 MiB, V (f16): 120.00 MiB
+ llama_new_context_with_model: CPU input buffer size = 10.01 MiB
+ ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 208.00 MiB, ( 2907.33 / 10922.67)
+ llama_new_context_with_model: Metal compute buffer size = 208.00 MiB
+ llama_new_context_with_model: CPU compute buffer size = 8.00 MiB
+ llama_new_context_with_model: graph splits (measure): 2
+ AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 |
+ Model metadata: {'general.quantization_version': '2', 'tokenizer.chat_template': "{% if not add_generation_prompt is defined %}\n{% set add_generation_prompt = false %}\n{% endif %}\n{%- set ns = namespace(found=false) -%}\n{%- for message in messages -%}\n    {%- if message['role'] == 'system' -%}\n        {%- set ns.found = true -%}\n    {%- endif -%}\n{%- endfor -%}\n{{bos_token}}{%- if not ns.found -%}\n{{'You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer\\n'}}\n{%- endif %}\n{%- for message in messages %}\n    {%- if message['role'] == 'system' %}\n{{ message['content'] }}\n    {%- else %}\n        {%- if message['role'] == 'user' %}\n{{'### Instruction:\\n' + message['content'] + '\\n'}}\n        {%- else %}\n{{'### Response:\\n' + message['content'] + '\\n<|EOT|>\\n'}}\n        {%- endif %}\n    {%- endif %}\n{%- endfor %}\n{% if add_generation_prompt %}\n{{'### Response:'}}\n{% endif %}", 'tokenizer.ggml.padding_token_id': '100001', 'tokenizer.ggml.eos_token_id': '100015', 'tokenizer.ggml.bos_token_id': '100000', 'tokenizer.ggml.model': 'gpt2', 'llama.attention.head_count_kv': '32', 'llama.context_length': '4096', 'llama.attention.head_count': '32', 'llama.rope.freq_base': '10000.000000', 'llama.rope.dimension_count': '128', 'general.file_type': '10', 'llama.feed_forward_length': '11008', 'llama.embedding_length': '4096', 'llama.block_count': '30', 'general.architecture': 'llama', 'llama.attention.layer_norm_rms_epsilon': '0.000001', 'general.name': 'LLaMA v2'}
+ Using gguf chat template: {% if not add_generation_prompt is defined %}
+ {% set add_generation_prompt = false %}
+ {% endif %}
+ {%- set ns = namespace(found=false) -%}
+ {%- for message in messages -%}
+ {%- if message['role'] == 'system' -%}
+ {%- set ns.found = true -%}
+ {%- endif -%}
+ {%- endfor -%}
+ {{bos_token}}{%- if not ns.found -%}
+ {{'You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer\n'}}
+ {%- endif %}
+ {%- for message in messages %}
+ {%- if message['role'] == 'system' %}
+ {{ message['content'] }}
+ {%- else %}
+ {%- if message['role'] == 'user' %}
+ {{'### Instruction:\n' + message['content'] + '\n'}}
+ {%- else %}
+ {{'### Response:\n' + message['content'] + '\n<|EOT|>\n'}}
+ {%- endif %}
+ {%- endif %}
+ {%- endfor %}
+ {% if add_generation_prompt %}
+ {{'### Response:'}}
+ {% endif %}
+ Using chat eos_token: <|EOT|>
+ Using chat bos_token: <|begin▁of▁sentence|>
+
+ llama_print_timings: load time = 1890.11 ms
+ llama_print_timings: sample time = 23.48 ms / 40 runs ( 0.59 ms per token, 1703.94 tokens per second)
+ llama_print_timings: prompt eval time = 1889.91 ms / 181 tokens ( 10.44 ms per token, 95.77 tokens per second)
+ llama_print_timings: eval time = 2728.54 ms / 39 runs ( 69.96 ms per token, 14.29 tokens per second)
+ llama_print_timings: total time = 5162.12 ms / 220 tokens
+ ```
+
+ ```bash
+ Output: {'id': 'cmpl-0679223d-578f-42be-bbce-0e307faddd28', 'object': 'text_completion', 'created': 1709525244, 'model': '/Users/charliecheng-jieji/Downloads/codebase/quantized_eval/gorilla-openfunctions-v2-GGUF/gorilla-openfunctions-v2-q2_K.gguf', 'choices': [{'text': 'You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.\n### Instruction: <<function>>[{"name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location"]}}]\n<<question>>What\'s the weather like in the two cities of Boston and San Francisco?\n### Response: <<function>>get_current_weather(location=\'Boston\', unit=\'fahrenheit\')<<function>>get_current_weather(location=\'San Francisco\', unit=\'fahrenheit\')', 'index': 0, 'logprobs': None, 'finish_reason': 'stop'}], 'usage': {'prompt_tokens': 181, 'completion_tokens': 39, 'total_tokens': 220}}
+ ```
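In the output above, the generated calls follow `### Response:`, each prefixed with the `<<function>>` delimiter. A minimal sketch of pulling them out of the completion text (`extract_function_calls` is a hypothetical helper for illustration):

```python
def extract_function_calls(completion_text: str) -> list[str]:
    """Return the function-call strings emitted after '### Response:'."""
    # Everything after "### Response:" is the model's answer; each call
    # is prefixed with the "<<function>>" delimiter.
    _, _, response = completion_text.partition("### Response:")
    return [part.strip() for part in response.split("<<function>>") if part.strip()]

raw = (
    "### Response: <<function>>get_current_weather(location='Boston', unit='fahrenheit')"
    "<<function>>get_current_weather(location='San Francisco', unit='fahrenheit')"
)
print(extract_function_calls(raw))
```

With `echo=True`, as in the script above, you would partition on the last `### Response:` instead, since the prompt is repeated in `choices[0]['text']`.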
gorilla-openfunctions-v2-q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76c87e2c0ab160d35e7de42564dd2463b1056f8c6c767cb612f40c2d63959bda
+ size 2718834976
gorilla-openfunctions-v2-q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f23ce98c7a234284e2c6a530c314e64c392016e7e57c0360c4c5a5fb62bf9f59
+ size 3746685216
gorilla-openfunctions-v2-q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28343a456319959b75b56a5ad82076216c1ec31bdf574b2ecfed8b27f4b70edf
+ size 3461603616
gorilla-openfunctions-v2-q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b0ff5b6e43dac644d272d2ca0766c7188761837cb75163218c04d058a1b2895b
+ size 3138429216
gorilla-openfunctions-v2-q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59f8b1ab22baeedd341256cd7e635b368bccd7a47e2d9caa72d048a9290b886f
+ size 4223770912
gorilla-openfunctions-v2-q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eed57496ee37320f13b6999ab44b051baf22a124a751bf44e20d296c5ef4714a
+ size 4025770272
gorilla-openfunctions-v2-q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e51e25a4dcd056b5216c6c83d3fe231416a03d580ec2e69314fb0c8c0a9f45a
+ size 4926841120
gorilla-openfunctions-v2-q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b2b7e616e63346642a461edb1e46cf75218d297133599649f135bed85d8ce0e3
+ size 4811809056
gorilla-openfunctions-v2-q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f6765ab99698ffe25a6bf968dc3fe668c369be6020bff1063ff700e8eaf4ab0
+ size 5673853216