---
license: apache-2.0
tags:
- function-calling
---

# Fireworks Function Calling (FireFunction) Model V2

<img src="https://cdn-uploads.huggingface.co/production/uploads/64b6f3a72f5a966b9722de88/p2qimncmTIv0Yuy1W_Hhx.png" alt="firefunction" width="400"/>

FireFunction is a state-of-the-art function calling model with a commercially viable license. Key info and highlights:

๐Ÿพ Successor of the [FireFunction](https://fireworks.ai/models/fireworks/firefunction-v2) model

๐Ÿ“ Signifficant quality improvements over FireFunction v1 across the broad range of metrics

๐Ÿ”† Support of parallel function calling (unlike FireFunction v1) and good instruction following

๐Ÿ’ก Hosted on the [Fireworks](https://fireworks.ai/models/fireworks/firefunction-v2) platform
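
Because the model is served on Fireworks and supports parallel calls, it can also be driven without local weights through the platform's OpenAI-compatible chat API. The sketch below is illustrative rather than normative: the `get_weather` spec is invented for the example, the `FIREWORKS_API_KEY` environment variable is assumed to be set, and the endpoint and model id follow the conventions described in the function-calling documentation linked under Resources.

```python
# Illustrative sketch: calling the hosted model through Fireworks'
# OpenAI-compatible endpoint (requires `pip install openai` and a
# FIREWORKS_API_KEY environment variable).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v2",
    messages=[{"role": "user", "content": "What is the weather in London and in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            # hypothetical spec, invented for this example
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)

# With parallel function calling, a single response may carry several calls.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```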

## Resources
* [Fireworks discord with function calling channel](https://discord.gg/mMqQxvFD9A)
* [Documentation](https://readme.fireworks.ai/docs/function-calling)
* [UI Demo app](https://functional-chat.vercel.app/)
* [Try in Fireworks prompt playground UI](https://fireworks.ai/models/fireworks/firefunction-v2)


## Intended Use and Limitations

### Supported use cases
The model was tuned to perform well on a range of use cases, including:
 * general instruction following
 * multi-turn chat mixing vanilla messages with function calls
 * single- and parallel function calling
 * up to 20 function specs supported at once
 * structured information extraction
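
For example, structured information extraction can be framed as a single function call: describe the fields to extract as parameters, and the model returns them as the call's arguments. The `extract_contact` spec below is a hypothetical illustration; it is passed to the model exactly like the specs in the Example Usage section.

```python
# Hypothetical spec: extraction framed as a function call. The model is
# expected to respond by "calling" extract_contact with the fields filled in.
extraction_spec = [{
    "name": "extract_contact",
    "description": "Extract contact details mentioned in the user's text",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "Person's full name"},
            "email": {"type": "string", "description": "Email address, if present"},
            "phone": {"type": "string", "description": "Phone number, if present"},
        },
        "required": ["name"],
    },
}]
```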

### Out-of-Scope Use
The model was not optimized for the following use cases:
  * 100+ function specs
  * nested function calling

## Example Usage

See [documentation](https://readme.fireworks.ai/docs/function-calling) for more detail.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import json

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("fireworks-ai/firefunction-v2", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("fireworks-ai/firefunction-v2")

function_spec = [
    {
        "name": "get_stock_price",
        "description": "Get the current stock price",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "type": "string",
                    "description": "The stock symbol, e.g. AAPL, GOOG"
                }
            },
            "required": [
                "symbol"
            ]
        }
    },
    {
        "name": "check_word_anagram",
        "description": "Check if two words are anagrams of each other",
        "parameters": {
            "type": "object",
            "properties": {
                "word1": {
                    "type": "string",
                    "description": "The first word"
                },
                "word2": {
                    "type": "string",
                    "description": "The second word"
                }
            },
            "required": [
                "word1",
                "word2"
            ]
        }
    }
]
functions = json.dumps(function_spec, indent=4)

# the chat template expects the JSON-encoded specs under a dedicated 'functions' role
messages = [
    {'role': 'functions', 'content': functions},
    {'role': 'system', 'content': 'You are a helpful assistant with access to functions. Use them if required.'},
    {'role': 'user', 'content': 'Hi, can you tell me the current stock price of google and netflix?'}
]

model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

generated_ids = model.generate(model_inputs, max_new_tokens=128)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
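
The decoded string above still contains the prompt and the chat template's special tokens. To inspect only the model's reply, e.g. to parse the emitted function call, a common pattern (continuing from the variables above, standard `transformers` usage rather than anything FireFunction-specific) is to slice off the prompt tokens before decoding:

```python
# Keep only the newly generated tokens, then decode without special tokens
# to get just the model's reply.
completion_ids = generated_ids[:, model_inputs.shape[1]:]
reply = tokenizer.batch_decode(completion_ids, skip_special_tokens=True)[0]
print(reply)  # expected to contain the function call(s) chosen by the model
```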

## Demo App

Check out our easy-to-extend [demo chat app](https://github.com/fw-ai/forge/tree/main/apps/functional_chat) with function calling capabilities, built on the FireFunction model.