---
language:
- en
license: llama3
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- groq
- tool-use
- function-calling
---
# BigStorm - ExLlamaV2 (Exl2) Quantization
- 6.0 bpw target
- 8 head bits
Enjoy! Raise an issue if you'd like other BPW levels.
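As a rough sketch, this is how a quant like this is typically loaded with the ExLlamaV2 Python API (the model directory is a placeholder, and the exact API surface may vary slightly between exllamav2 versions):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer

# Point at the local directory containing this Exl2 quant (placeholder path)
config = ExLlamaV2Config()
config.model_dir = "/path/to/Llama-3-Groq-70B-Tool-Use-exl2"
config.prepare()

model = ExLlamaV2(config)

# A lazy cache plus autosplit spreads the 6.0 bpw weights across available GPUs
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
```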
**Base Model Card Follows:**
---
# Llama-3-70B-Tool-Use
This is the 70B parameter version of the Llama 3 Groq Tool Use model, specifically designed for advanced tool use and function calling tasks.
## Model Details
- **Model Type:** Causal language model fine-tuned for tool use
- **Language(s):** English
- **License:** Meta Llama 3 Community License
- **Model Architecture:** Optimized transformer
- **Training Approach:** Full fine-tuning and Direct Preference Optimization (DPO) on Llama 3 70B base model
- **Input:** Text
- **Output:** Text, with enhanced capabilities for tool use and function calling
## Performance
- **Berkeley Function Calling Leaderboard (BFCL) Score:** 90.76% overall accuracy
- This score represents the best performance among all open-source 70B LLMs on the BFCL
## Usage and Limitations
This model is designed for research and development in tool use and function calling scenarios. It excels at tasks involving API interactions, structured data manipulation, and complex tool use. However, users should note:
- For general knowledge or open-ended tasks, a general-purpose language model may be more suitable
- The model may still produce inaccurate or biased content in some cases
- Users are responsible for implementing appropriate safety measures for their specific use case
Note that the model is quite sensitive to the `temperature` and `top_p` sampling configuration. Start at `temperature=0.5, top_p=0.65` and adjust up or down as needed.
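Continuing the loading sketch above, a minimal generation example with those starting values might look like this (it assumes the `model`, `cache`, and `tokenizer` objects from that sketch):

```python
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Suggested starting point for this model; tune per the note above
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.5
settings.top_p = 0.65

prompt = "..."  # e.g. the full tool-use prompt shown below
output = generator.generate_simple(prompt, settings, 256)
print(output)
```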
Text prompt example:
```
<|start_header_id|>system<|end_header_id|>
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"name": <function-name>,"arguments": <args-dict>}
</tool_call>
Here are the available tools:
<tools> {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"properties": {
"location": {
"description": "The city and state, e.g. San Francisco, CA",
"type": "string"
},
"unit": {
"enum": [
"celsius",
"fahrenheit"
],
"type": "string"
}
},
"required": [
"location"
],
"type": "object"
}
} </tools><|eot_id|><|start_header_id|>user<|end_header_id|>
What is the weather like in San Francisco?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
<tool_call>
{"id":"call_deok","name":"get_current_weather","arguments":{"location":"San Francisco","unit":"celsius"}}
</tool_call><|eot_id|><|start_header_id|>tool<|end_header_id|>
<tool_response>
{"id":"call_deok","result":{"temperature":"72","unit":"celsius"}}
</tool_response><|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
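Since the model emits its calls inside `<tool_call>` tags, the application side has to extract and execute them. A minimal, hypothetical parsing sketch follows; the regex and the dispatch table are illustrative, not part of the model card:

```python
import json
import re

def extract_tool_calls(completion: str) -> list[dict]:
    """Pull every JSON object wrapped in <tool_call> tags out of a completion."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(match) for match in pattern.findall(completion)]

# Hypothetical dispatch: map advertised function names to local implementations
def get_current_weather(location: str, unit: str = "celsius") -> dict:
    return {"temperature": "72", "unit": unit}  # stubbed result for illustration

TOOLS = {"get_current_weather": get_current_weather}

completion = (
    '<tool_call>\n'
    '{"name": "get_current_weather", "arguments": '
    '{"location": "San Francisco", "unit": "celsius"}}\n'
    '</tool_call>'
)
for call in extract_tool_calls(completion):
    result = TOOLS[call["name"]](**call["arguments"])
    print(result)
```

The extracted result would then be wrapped in `<tool_response>` tags and appended as a `tool` turn, as shown in the prompt example above.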
## Ethical Considerations
While fine-tuned for tool use, this model inherits the ethical considerations of the base Llama 3 model. Use responsibly and implement additional safeguards as needed for your application.
## Availability
The model is available through:
- [Groq API console](https://console.groq.com)
- [Hugging Face](https://huggingface.co/Groq/Llama-3-Groq-70B-Tool-Use)
For full details on responsible use, ethical considerations, and latest benchmarks, please refer to the [official Llama 3 documentation](https://llama.meta.com/) and the Groq model card.