linqq9 committed on
Commit
3878a15
1 Parent(s): b99fc1b

Update README.md

Files changed (1)
  1. README.md +170 -3
README.md CHANGED
@@ -1,3 +1,170 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ datasets:
+ - Salesforce/xlam-function-calling-60k
+ base_model: Qwen/Qwen2-7B-Instruct
+ ---
+ # Qwen2-7B Function Calling Model
+
+ ## Introduction
+
+ This model is fine-tuned from Qwen2-7B-Instruct, primarily on xLAM data. The methods and processes used during training are detailed below:
+
+ 1. **Data Extraction and Preparation**:
+ We extracted 7.5k samples from xLAM and removed the target tools from their candidate toolsets to generate irrelevance samples. These were mixed with the 60k xLAM samples for training (see the sketch after this list).
+
+ 2. **Methodology**:
+ We employed our proposed function/parameter mask training method. This technique helps the model focus more on the description information in tool definitions; the specific mask operations, also illustrated in that sketch, include:
+    1. Function mask: replace the tool name with a randomly generated string, so that the model pays more attention to the tool description;
+    2. Parameter mask: replace each parameter name with a randomly generated string, so that the model pays more attention to the parameter description;
+    3. Default mask: replace the parameter default value with a randomly generated string, so that the model is less likely to overfit to a specific tool;
+    4. Tool shuffling: randomly shuffle the order of the tools in the toolset.
+
+ 3. **Prompt Optimization**:
+ During inference, since our model focuses more on tool/parameter descriptions, we append default-value information to the parameter descriptions to obtain better performance (this is what `convert_to_format_tool` does in the example below).
+
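+ For illustration, here is a minimal sketch of how the irrelevance samples (step 1) and the mask operations (step 2) can be applied to a single xLAM-style sample. This is not our actual training pipeline: the helper names, the masking probability `p`, and the `query`/`tools`/`answers` field layout are assumptions made for this example.
+
+ ~~~python
+ import json
+ import random
+ import string
+
+ def _rand_string(length: int = 8) -> str:
+     """Generate a random replacement string used for masking."""
+     return "".join(random.choices(string.ascii_lowercase, k=length))
+
+ def make_irrelevance_sample(sample: dict) -> dict:
+     """Drop the target tools from the candidate toolset so that no tool can answer the query."""
+     called = {call["name"] for call in sample["answers"]}
+     return {
+         "query": sample["query"],
+         "tools": [t for t in sample["tools"] if t["name"] not in called],
+         "answers": [],  # the expected model output becomes an empty list
+     }
+
+ def mask_sample(sample: dict, p: float = 0.3) -> dict:
+     """Apply function/parameter/default masks with probability p, then shuffle the tools."""
+     sample = json.loads(json.dumps(sample))  # cheap deep copy
+     for tool in sample["tools"]:
+         if random.random() < p:  # 1. function mask: hide the tool name
+             new_name = _rand_string()
+             for call in sample["answers"]:
+                 if call["name"] == tool["name"]:
+                     call["name"] = new_name
+             tool["name"] = new_name
+         props = tool["parameters"]["properties"]
+         for pname in list(props.keys()):
+             spec = props[pname]
+             if "default" in spec and random.random() < p:  # 3. default mask
+                 spec["default"] = _rand_string()
+             if random.random() < p:  # 2. parameter mask: hide the parameter name
+                 new_pname = _rand_string()
+                 props[new_pname] = props.pop(pname)
+                 required = tool["parameters"].get("required", [])
+                 tool["parameters"]["required"] = [new_pname if r == pname else r for r in required]
+                 # (the ground-truth arguments would be renamed consistently as well)
+     random.shuffle(sample["tools"])  # 4. shuffle the tool order
+     return sample
+ ~~~
+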
+ ## Supported Function Calling Types
+
+ The model is capable of handling various function calling scenarios, including:
+
+ - Single Function Calling
+ - Multiple Function Calling
+ - Parallel Function Calling
+ - Multiple Parallel Function Calling
+ - Irrelevance Detection
+
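+ For illustration, using the JSON output format defined in the example below, these scenarios map onto outputs of the following shapes (the tool names are taken from that example):
+
+ ~~~python
+ # Single function calling: one call in the list
+ [{"name": "get_current_weather", "arguments": {"location": "New York, US"}}]
+
+ # Multiple / parallel function calling: several calls in one list
+ [{"name": "live_giveaways_by_type", "arguments": {"type": "beta"}},
+  {"name": "get_current_weather", "arguments": {"location": "New York, US"}}]
+
+ # Irrelevance detection: none of the available tools fits the request
+ []
+ ~~~
+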
+ ## Upcoming Developments
+
+ We are actively preparing smaller models derived from this architecture and will open-source them soon.
+
+ ## Example Usage
+ This is a simple example of how to use our model.
+ ~~~python
+ import json
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Replace with this repository's Hugging Face model id or a local checkpoint path
+ model_name = "path/to/this/model"
+ model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ # Please use our provided instruction prompt for best performance
+ TASK_INSTRUCTION = """You are a tool calling assistant. In order to complete the user's request, you need to select one or more appropriate tools from the following tools and fill in the correct values for the tool parameters. Your specific tasks are:
+ 1. Make one or more function/tool calls to meet the request based on the question.
+ 2. If none of the function can be used, point it out and refuse to answer.
+ 3. If the given question lacks the parameters required by the function, also point it out.
+ """
+
+ FORMAT_INSTRUCTION = """
+ The output MUST strictly adhere to the following JSON format, and NO other text MUST be included.
+ The example format is as follows. Please make sure the parameter type is correct. If no function call is needed, please directly output an empty list '[]'
+ ```
+ [
+ {"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},
+ ... (more tool calls as required)
+ ]
+ ```
+ """
+
+ # Define the input query and available tools
+ query = "Where can I find live giveaways for beta access and games? And what's the weather like in New York, US?"
+
+ live_giveaways_by_type = {
+     "name": "live_giveaways_by_type",
+     "description": "Retrieve live giveaways from the GamerPower API based on the specified type.",
+     "parameters": {
+         "type": "object",
+         "properties": {
+             "type": {
+                 "type": "string",
+                 "description": "The type of giveaways to retrieve (e.g., game, loot, beta).",
+                 "default": "game"
+             }
+         },
+         "required": ["type"]
+     }
+ }
+ get_current_weather = {
+     "name": "get_current_weather",
+     "description": "Get the current weather",
+     "parameters": {
+         "type": "object",
+         "properties": {
+             "location": {
+                 "type": "string",
+                 "description": "The city and state, e.g. San Francisco, CA"
+             }
+         },
+         "required": ["location"]
+     }
+ }
+ get_stock_price = {
+     "name": "get_stock_price",
+     "description": "Retrieves the current stock price for a given ticker symbol. The ticker symbol must be a valid symbol for a publicly traded company on a major US stock exchange like NYSE or NASDAQ. The tool will return the latest trade price in USD. It should be used when the user asks about the current or most recent price of a specific stock. It will not provide any other information about the stock or company.",
+     "parameters": {
+         "type": "object",
+         "properties": {
+             "ticker": {
+                 "type": "string",
+                 "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
+             }
+         },
+         "required": ["ticker"]
+     }
+ }
+
+ # Helper function to convert OpenAI-format tools to our more concise xLAM format
+ def convert_to_format_tool(tools):
+     """Convert an OpenAI-format tool (or a list of them) to the concise xLAM format."""
+     if isinstance(tools, dict):
+         format_tools = {
+             "name": tools["name"],
+             "description": tools["description"],
+             "parameters": tools["parameters"].get("properties", {}),
+         }
+         required = tools["parameters"].get("required", [])
+         for param in required:
+             format_tools["parameters"][param]["required"] = True
+         for param in format_tools["parameters"].keys():
+             if "default" in format_tools["parameters"][param]:
+                 default = format_tools["parameters"][param]["default"]
+                 # Expose the default value in the description (see "Prompt Optimization" above)
+                 format_tools["parameters"][param]["description"] += f" default is '{default}'"
+         return format_tools
+     elif isinstance(tools, list):
+         return [convert_to_format_tool(tool) for tool in tools]
+     else:
+         return tools
+
+ # Helper function to build the input prompt for our model
+ def build_prompt(task_instruction: str, format_instruction: str, tools: list, query: str):
+     prompt = f"[BEGIN OF TASK INSTRUCTION]\n{task_instruction}\n[END OF TASK INSTRUCTION]\n\n"
+     prompt += f"[BEGIN OF AVAILABLE TOOLS]\n{json.dumps(tools)}\n[END OF AVAILABLE TOOLS]\n\n"
+     prompt += f"[BEGIN OF FORMAT INSTRUCTION]\n{format_instruction}\n[END OF FORMAT INSTRUCTION]\n\n"
+     prompt += f"[BEGIN OF QUERY]\n{query}\n[END OF QUERY]\n\n"
+     return prompt
+
+ # Build the input and start the inference
+ openai_format_tools = [live_giveaways_by_type, get_current_weather, get_stock_price]
+ format_tools = convert_to_format_tool(openai_format_tools)
+ content = build_prompt(TASK_INSTRUCTION, FORMAT_INSTRUCTION, format_tools, query)
+
+ messages = [
+     {"role": "user", "content": content}
+ ]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+
+ # Stop generation at the tokenizer's end-of-sequence token
+ outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
+ print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
+ ~~~
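+
+ The decoded text should be a JSON list of tool calls in the format defined by `FORMAT_INSTRUCTION` (or `[]` when no available tool applies). Below is a minimal sketch of consuming that output, reusing the variables from the example above; add proper error handling in practice.
+
+ ~~~python
+ raw = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
+ try:
+     tool_calls = json.loads(raw)  # e.g. [{"name": "...", "arguments": {...}}, ...] or []
+ except json.JSONDecodeError:
+     tool_calls = []  # treat unparsable output as "no call"
+ for call in tool_calls:
+     print(call["name"], call["arguments"])
+ ~~~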
+
+ ---
+
+ Feel free to reach out for further clarifications or contributions!