svjack committed
Commit 68625f3
1 Parent(s): d16fbc5

Update README.md

Files changed (1)
  1. README.md +125 -0
README.md CHANGED
@@ -14,6 +14,131 @@ model-index:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
+ # Install dependencies
+ ```bash
+ pip install openai huggingface_hub
+ ```
+
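+ The inference example further down also imports `transformers.utils.versions.require_version` and pins `openai>=1.5.0`, so `transformers` needs to be importable in the client environment as well. A quick version check (a minimal sketch):
+ ```python
+ # Confirm the client-side dependencies are importable and report their versions.
+ import huggingface_hub
+ import openai
+ import transformers
+
+ print(openai.__version__, huggingface_hub.__version__, transformers.__version__)
+ ```
+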
+ # Download the LoRA adapter
+ ```python
+ from huggingface_hub import snapshot_download
+
+ snapshot_download(
+     repo_id="svjack/Qwen2-7B_Function_Call_tiny_lora",
+     repo_type="model",
+     local_dir="Qwen2-7B_Function_Call_tiny_lora",
+     local_dir_use_symlinks=False,
+ )
+ ```
+
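+ `snapshot_download` returns the path of the local directory it populated; listing it is a quick way to confirm the adapter files are in place (a minimal sketch, the exact filenames depend on how the adapter was exported):
+ ```python
+ import os
+
+ # Expect the adapter config plus the adapter weights (e.g. adapter_config.json and an adapter_model.* file).
+ print(sorted(os.listdir("Qwen2-7B_Function_Call_tiny_lora")))
+ ```
+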
+ # Start an OpenAI-style API server
+ ```bash
+ python src/api.py \
+     --model_name_or_path Qwen/Qwen2-7B-Instruct \
+     --template qwen \
+     --adapter_name_or_path Qwen2-7B_Function_Call_tiny_lora \
+     --quantization_bit 4
+ ```
+
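+ The inference client below reads the port from the `API_PORT` environment variable and falls back to 8000; make sure it matches the port your server is actually listening on. A quick connectivity check (a minimal sketch, assuming the server exposes the standard OpenAI `/v1/models` route):
+ ```python
+ import os
+
+ from openai import OpenAI
+
+ # The api_key is a dummy value, mirroring the inference example below.
+ client = OpenAI(api_key="0", base_url="http://localhost:{}/v1".format(os.environ.get("API_PORT", 8000)))
+ print([model.id for model in client.models.list().data])
+ ```
+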
+ # Inference
+ ```python
+ import json
+ import os
+ from typing import Sequence
+
+ from openai import OpenAI
+ from transformers.utils.versions import require_version
+
+ require_version("openai>=1.5.0", "To fix: pip install openai>=1.5.0")
+
+
+ def calculate_gpa(grades: Sequence[str], hours: Sequence[int]) -> float:
+     grade_to_score = {"A": 4, "B": 3, "C": 2}
+     total_score, total_hour = 0, 0
+     for grade, hour in zip(grades, hours):
+         total_score += grade_to_score[grade] * hour
+         total_hour += hour
+     return round(total_score / total_hour, 2)
+
+
+ client = OpenAI(
+     api_key="0",
+     base_url="http://localhost:{}/v1".format(os.environ.get("API_PORT", 8000)),
+ )
+
+ # Tool schema advertised to the model, plus a map from tool name to the local Python callable.
+ tools = [
+     {
+         "type": "function",
+         "function": {
+             "name": "calculate_gpa",
+             "description": "Calculate the Grade Point Average (GPA) based on grades and credit hours",
+             "parameters": {
+                 "type": "object",
+                 "properties": {
+                     "grades": {"type": "array", "items": {"type": "string"}, "description": "The grades"},
+                     "hours": {"type": "array", "items": {"type": "integer"}, "description": "The credit hours"},
+                 },
+                 "required": ["grades", "hours"],
+             },
+         },
+     }
+ ]
+ tool_map = {"calculate_gpa": calculate_gpa}
+
+ messages = []
+ messages.append({"role": "user", "content": "My grades are A, A, B, and C. The credit hours are 3, 4, 3, and 2."})
+
+ # First request: the model should reply with a tool call rather than a direct answer.
+ result = client.chat.completions.create(messages=messages, model="Qwen/Qwen2-7B-Instruct", tools=tools)
+
+ messages.append(result.choices[0].message)
+ tool_call = result.choices[0].message.tool_calls[0].function
+ print(tool_call)
+
+ # Execute the requested tool locally and send the result back as a "tool" message.
+ name, arguments = tool_call.name, json.loads(tool_call.arguments)
+ tool_result = tool_map[name](**arguments)
+
+ messages.append({"role": "tool", "content": json.dumps({"gpa": tool_result}, ensure_ascii=False)})
+
+ # Second request: the model turns the tool result into a natural-language answer.
+ result = client.chat.completions.create(messages=messages, model="test", tools=tools)
+ print(result.choices[0].message.content)
+ ```
+
+ # Output
+ ```
+ Function(arguments='{"grades": ["A", "A", "B", "C"], "hours": [3, 4, 3, 2]}', name='calculate_gpa')
+ Based on the grades and credit hours you provided, your calculated GPA is 3.42.
+ ```
+
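+ The reported value matches a hand calculation: (4·3 + 4·4 + 3·3 + 2·2) / (3 + 4 + 3 + 2) = 41 / 12 ≈ 3.42, and it can be reproduced locally with the same function the tool call dispatched to:
+ ```python
+ # Sanity check: recompute the GPA without going through the model.
+ assert calculate_gpa(["A", "A", "B", "C"], [3, 4, 3, 2]) == 3.42
+ ```
+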
+ # Inference (Chinese)
+ ```python
+ messages = []
+ messages.append({"role": "user", "content": "我的成绩分别是A,A,B,C学分分别是3, 4, 3,和2"})
+
+ result = client.chat.completions.create(messages=messages, model="Qwen/Qwen2-7B-Instruct", tools=tools)
+
+ messages.append(result.choices[0].message)
+ tool_call = result.choices[0].message.tool_calls[0].function
+ print(tool_call)
+
+ name, arguments = tool_call.name, json.loads(tool_call.arguments)
+ tool_result = tool_map[name](**arguments)
+
+ messages.append({"role": "tool", "content": json.dumps({"gpa": tool_result}, ensure_ascii=False)})
+
+ result = client.chat.completions.create(messages=messages, model="test", tools=tools)
+ print(result.choices[0].message.content)
+ ```
+
+ # Output
+ ```
+ Function(arguments='{"grades": ["A", "A", "B", "C"], "hours": [3, 4, 3, 2]}', name='calculate_gpa')
+ 您提供的成绩和学分的加权平均分(GPA)是3.42。
+ ```
+
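+ (The Chinese reply reads: "The weighted average score (GPA) for the grades and credit hours you provided is 3.42.")
+
+ Both Inference examples follow the same request → tool call → local execution → final answer loop, so it can be handy to wrap that pattern in a small helper. This is a hypothetical sketch, not part of the repository; it reuses the `json` import and the `client`, `tools`, and `tool_map` objects defined in the first Inference example:
+ ```python
+ def chat_with_tools(user_content: str) -> str:
+     """Run one user turn through the tool-calling loop and return the model's final answer."""
+     messages = [{"role": "user", "content": user_content}]
+     first = client.chat.completions.create(messages=messages, model="Qwen/Qwen2-7B-Instruct", tools=tools)
+     message = first.choices[0].message
+     if not message.tool_calls:
+         return message.content  # the model answered directly, no tool was needed
+     messages.append(message)
+     tool_call = message.tool_calls[0].function
+     tool_result = tool_map[tool_call.name](**json.loads(tool_call.arguments))
+     messages.append({"role": "tool", "content": json.dumps({"result": tool_result}, ensure_ascii=False)})
+     final = client.chat.completions.create(messages=messages, model="Qwen/Qwen2-7B-Instruct", tools=tools)
+     return final.choices[0].message.content
+
+
+ print(chat_with_tools("My grades are B, B, A and the credit hours are 3, 3, 4."))
+ ```
+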
+
  # train_2024-06-17-19-49-05

  This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the glaive_toolcall_zh and the glaive_toolcall_en datasets.