|
Quantization made by Richard Erkhov. |
|
|
|
[Github](https://github.com/RichardErkhov) |
|
|
|
[Discord](https://discord.gg/pvy7H8DZMG) |
|
|
|
[Request more models](https://github.com/RichardErkhov/quant_request) |
|
|
|
|
|
granite-20b-functioncalling - GGUF |
|
- Model creator: https://huggingface.co/ibm-granite/ |
|
- Original model: https://huggingface.co/ibm-granite/granite-20b-functioncalling/ |
|
|
|
|
|
| Name | Quant method | Size | |
|
| ---- | ---- | ---- | |
|
| [granite-20b-functioncalling.Q2_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q2_K.gguf) | Q2_K | 7.38GB | |
|
| [granite-20b-functioncalling.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.IQ3_XS.gguf) | IQ3_XS | 8.06GB | |
|
| [granite-20b-functioncalling.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.IQ3_S.gguf) | IQ3_S | 8.32GB | |
|
| [granite-20b-functioncalling.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q3_K_S.gguf) | Q3_K_S | 8.32GB | |
|
| [granite-20b-functioncalling.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.IQ3_M.gguf) | IQ3_M | 8.93GB | |
|
| [granite-20b-functioncalling.Q3_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q3_K.gguf) | Q3_K | 9.84GB | |
|
| [granite-20b-functioncalling.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q3_K_M.gguf) | Q3_K_M | 9.84GB | |
|
| [granite-20b-functioncalling.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q3_K_L.gguf) | Q3_K_L | 10.93GB | |
|
| [granite-20b-functioncalling.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.IQ4_XS.gguf) | IQ4_XS | 10.32GB | |
|
| [granite-20b-functioncalling.Q4_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q4_0.gguf) | Q4_0 | 10.76GB | |
|
| [granite-20b-functioncalling.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.IQ4_NL.gguf) | IQ4_NL | 10.86GB | |
|
| [granite-20b-functioncalling.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q4_K_S.gguf) | Q4_K_S | 10.86GB | |
|
| [granite-20b-functioncalling.Q4_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q4_K.gguf) | Q4_K | 11.94GB | |
|
| [granite-20b-functioncalling.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q4_K_M.gguf) | Q4_K_M | 11.94GB | |
|
| [granite-20b-functioncalling.Q4_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q4_1.gguf) | Q4_1 | 11.91GB | |
|
| [granite-20b-functioncalling.Q5_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q5_0.gguf) | Q5_0 | 13.05GB | |
|
| [granite-20b-functioncalling.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q5_K_S.gguf) | Q5_K_S | 13.05GB | |
|
| [granite-20b-functioncalling.Q5_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q5_K.gguf) | Q5_K | 13.79GB | |
|
| [granite-20b-functioncalling.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q5_K_M.gguf) | Q5_K_M | 13.79GB | |
|
| [granite-20b-functioncalling.Q5_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q5_1.gguf) | Q5_1 | 14.2GB | |
|
| [granite-20b-functioncalling.Q6_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q6_K.gguf) | Q6_K | 15.49GB | |
|
| [granite-20b-functioncalling.Q8_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf/blob/main/granite-20b-functioncalling.Q8_0.gguf) | Q8_0 | 20.01GB | |
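Each file above can be fetched directly from the Hub. A minimal sketch of building the direct-download URL (pure standard library; the repository name and filename pattern come from the table above, and the `resolve/main` path is the standard Hugging Face file-download layout — `gguf_url` is an illustrative helper, not part of any library):

```python
# Build a direct download URL for one of the quantized files listed above.
REPO = "RichardErkhov/ibm-granite_-_granite-20b-functioncalling-gguf"


def gguf_url(quant: str) -> str:
    """Return the direct-download URL for a given quant method, e.g. "Q4_K_M"."""
    fname = f"granite-20b-functioncalling.{quant}.gguf"
    return f"https://huggingface.co/{REPO}/resolve/main/{fname}"


print(gguf_url("Q4_K_M"))
```

The downloaded file can then be loaded with any GGUF-compatible runtime, for example a recent llama.cpp build: `llama-cli -m granite-20b-functioncalling.Q4_K_M.gguf -p "..."`.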
|
|
|
|
|
|
|
|
|
Original model description: |
|
--- |
|
license: apache-2.0 |
|
--- |
|
### Granite-20B-FunctionCalling |
|
#### Model Summary |
|
Granite-20B-FunctionCalling is a finetuned model based on IBM's [granite-20b-code-instruct](https://huggingface.co/ibm-granite/granite-20b-code-instruct) model, introducing function calling abilities into the Granite model family. The model is trained with a multi-task approach on seven fundamental tasks encompassed in function calling: Nested Function Calling, Function Chaining, Parallel Functions, Function Name Detection, Parameter-Value Pair Detection, Next-Best Function, and Response Generation.
|
|
|
- **Developers**: IBM Research |
|
- **Paper**: [Granite-Function Calling Model: Introducing Function Calling Abilities via Multi-task Learning of Granular Tasks](https://arxiv.org/pdf/2407.00121v1) |
|
- **Release Date**: July 9th, 2024 |
|
- **License**: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
|
|
|
### Usage |
|
#### Intended use

The model is designed to respond to function-calling instructions.
|
|
|
#### Generation

This is a simple example of how to use the Granite-20B-FunctionCalling model.
|
```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # or "cpu"
model_path = "ibm-granite/granite-20b-functioncalling"

tokenizer = AutoTokenizer.from_pretrained(model_path)

# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

# define the user query and the list of available functions
query = "What's the current weather in New York?"
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                }
            },
            "required": ["location"]
        }
    },
    {
        "name": "get_stock_price",
        "description": "Retrieves the current stock price for a given ticker symbol. The ticker symbol must be a valid symbol for a publicly traded company on a major US stock exchange like NYSE or NASDAQ. The tool will return the latest trade price in USD. It should be used when the user asks about the current or most recent price of a specific stock. It will not provide any other information about the stock or company.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
                }
            },
            "required": ["ticker"]
        }
    }
]

# serialize the functions and build the payload used to generate the input template
payload = {
    "functions_str": [json.dumps(x) for x in functions],
    "query": query,
}

instruction = tokenizer.apply_chat_template(payload, tokenize=False, add_generation_prompt=True)

# tokenize the text
input_tokens = tokenizer(instruction, return_tensors="pt").to(device)

# generate output tokens
outputs = model.generate(**input_tokens, max_new_tokens=100)

# decode output tokens into text
outputs = tokenizer.batch_decode(outputs)

# loop over the batch to print; in this example the batch size is 1
for output in outputs:
    # Each function call in the output is preceded by the token "<function_call>", followed by a
    # JSON-serialized function call of the form {"name": $function_name$, "arguments": {$arg_name$: $arg_val$}}.
    # For this query the output will be:
    # <function_call> {"name": "get_current_weather", "arguments": {"location": "New York"}}
    print(output)
```
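The `<function_call>` segments in the decoded text can be turned back into structured calls. A minimal parsing sketch (the tag and JSON format follow the comments in the example above; `parse_function_calls` is an illustrative helper, not part of the model's tooling, and any trailing special tokens it trims are an assumption about the tokenizer's output):

```python
import json


def parse_function_calls(text: str) -> list[dict]:
    """Extract the JSON payload emitted after each "<function_call>" token."""
    calls = []
    for chunk in text.split("<function_call>")[1:]:
        chunk = chunk.strip()
        # trim anything after the JSON, e.g. a trailing special token like "<|endoftext|>"
        end = chunk.find("<")
        if end != -1:
            chunk = chunk[:end]
        calls.append(json.loads(chunk))
    return calls


sample = '<function_call> {"name": "get_current_weather", "arguments": {"location": "New York"}}'
for call in parse_function_calls(sample):
    print(call["name"], call["arguments"])
```

Each parsed dict can then be dispatched to the matching tool implementation and the result fed back to the model for response generation.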
|
|
|
|
|
|