# Hammer2.0-7b Function Calling Model

## Introduction
We're excited to introduce Hammer 2.0, the latest in our Hammer series of large language models designed to enhance AI function calling. Unlike existing models that focus on refining their training data, Hammer optimizes performance primarily through advanced training techniques. In this version, we release a number of models with sizes ranging from 0.5B to 7B:
[0.5B](https://huggingface.co/MadeAgents/Hammer2.0-0.5b),
[1.5B](https://huggingface.co/MadeAgents/Hammer2.0-1.5b),
[3B](https://huggingface.co/MadeAgents/Hammer2.0-3b), and [7B](https://huggingface.co/MadeAgents/Hammer2.0-7b).

## Model Details
Hammer2.0 is fine-tuned from the [Qwen 2.5 series](https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e) and the [Qwen 2.5 coder series](https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f). It is trained on the [APIGen Function Calling Datasets](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k), which contain 60,000 samples, supplemented by [7,500 irrelevance detection samples](https://huggingface.co/datasets/MadeAgents/XLAM-7.5k-Irrelevance) that we generated. Employing our innovative training techniques of function masking, function shuffling, and prompt optimization, Hammer2.0 achieves exceptional performance across numerous benchmarks, including the [Berkeley Function Calling Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard.html), [API-Bank](https://arxiv.org/abs/2304.08244), [Tool-Alpaca](https://arxiv.org/abs/2306.05301), [Nexus Raven](https://github.com/nexusflowai/NexusRaven-V2), and [Seal-Tools](https://arxiv.org/abs/2405.08355).
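To give a rough idea of what function shuffling and function masking can look like as data augmentation, here is a minimal, illustrative sketch. It is not the exact Hammer2.0 training procedure (details will appear in the forthcoming report); every helper name below is hypothetical, and the masking strategy (replacing function names with random placeholders so the model must rely on the descriptions) is an assumption.

```python
import random
import string

def shuffle_functions(tools):
    """Function shuffling: randomize the order of candidate tools so the model
    cannot rely on positional cues."""
    tools = list(tools)
    random.shuffle(tools)
    return tools

def mask_function_names(tools, call):
    """Function masking (assumed behavior): replace each function name with a
    random placeholder so the model must ground its choice in the descriptions."""
    mapping = {
        t["name"]: "func_" + "".join(random.choices(string.ascii_lowercase, k=6))
        for t in tools
    }
    masked_tools = [{**t, "name": mapping[t["name"]]} for t in tools]
    masked_call = {**call, "name": mapping[call["name"]]}
    return masked_tools, masked_call

# Toy training sample: two candidate tools and one ground-truth call.
tools = [
    {"name": "get_weather", "description": "Get the weather for a city.",
     "parameters": {"city": {"type": "string"}}},
    {"name": "get_time", "description": "Get the current time in a timezone.",
     "parameters": {"timezone": {"type": "string"}}},
]
call = {"name": "get_weather", "arguments": {"city": "Paris"}}

masked_tools, masked_call = mask_function_names(shuffle_functions(tools), call)
print(masked_tools)
print(masked_call)
```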

## Tuning Details
We will soon release a report detailing our models' technical aspects. Stay tuned!

## Evaluation
The evaluation results of the Hammer2.0 series on the Berkeley Function-Calling Leaderboard (BFCL) are shown below:

<div style="text-align: center;">
<img src="v2_figures/bfcl.PNG" alt="overview" width="1000" style="margin: auto;">
</div>

In addition, we evaluated Hammer2.0 on other academic benchmarks to further demonstrate the model's generalization ability:

| Model | Size | Func-Name+Args Det. (F1 Func-Name \| F1 Args) | | | | | | | | | | F1 Average | |
|:---------------------------:|:----:|:---------------------------------------------:|:-----:|:------------:|:-----:|:-----------:|:-----:|:---------------------:|:-----:|:-----------:|:-----:|:----------:|:-----:|
| Hammer2.0-3B | 3B | 93.6% | 84.3% | 83.7% | 59.0% | 83.1% | 58.8% | 95.3% | 91.2% | 92.5% | 70.5% | 89.6% | 72.8% |
| Hammer2.0-7B | 7B | 91.0% | 82.1% | 82.5% | 65.1% | 85.2% | 59.6% | 96.8% | 92.7% | 93.0% | 80.5% | 89.7% | 76.0% |

In comparison, Hammer 2.0 outperforms models of similar size and even surpasses many larger models overall.
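For readers unfamiliar with the metric in the table above, the sketch below shows one straightforward way to compute F1 over predicted function names and over argument key-value pairs. It is purely illustrative and is not the exact scoring script used for these benchmarks; all function and variable names are made up for the example.

```python
from collections import Counter

def f1(pred, gold):
    """Micro F1 between two multisets of items (function names or argument triples)."""
    pred_c, gold_c = Counter(pred), Counter(gold)
    tp = sum((pred_c & gold_c).values())          # items predicted and present in gold
    precision = tp / max(sum(pred_c.values()), 1)
    recall = tp / max(sum(gold_c.values()), 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)

# Toy example: predicted vs. gold function calls for one query.
gold_calls = [{"name": "get_weather", "arguments": {"city": "Paris", "unit": "C"}}]
pred_calls = [{"name": "get_weather", "arguments": {"city": "Paris"}}]

# F1 over function names.
f1_func_name = f1([c["name"] for c in pred_calls],
                  [c["name"] for c in gold_calls])

# F1 over (function, argument, value) triples.
f1_args = f1([(c["name"], k, str(v)) for c in pred_calls for k, v in c["arguments"].items()],
             [(c["name"], k, str(v)) for c in gold_calls for k, v in c["arguments"].items()])

print(f1_func_name, f1_args)  # 1.0 and ~0.667 for this toy example
```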
## Requirements
The code for Hammer2.0-7b is supported in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`.
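As a quick start, here is a minimal sketch of loading the model with the standard transformers generation API. The tool-description format and system prompt below are illustrative assumptions, not the prompt format Hammer2.0 is trained to expect; refer to the official usage examples for the exact format.

```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MadeAgents/Hammer2.0-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# Hypothetical tool description; the exact schema the model expects may differ.
tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {"city": {"type": "string", "description": "City name"}},
}]

messages = [
    {"role": "system", "content": "You are a function-calling assistant. "
                                  "Available tools:\n" + json.dumps(tools)},
    {"role": "user", "content": "What is the weather in Paris?"},
]

# Build the prompt with the model's chat template and generate a tool call.
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```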