---
base_model: Locutusque/Hercules-6.1-Llama-3.1-8B
datasets:
- Locutusque/hercules-v6.1
language:
- en
- zh
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- not-for-all-audiences
- llama-cpp
- gguf-my-repo
model-index:
- name: Hercules-6.1-Llama-3.1-8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 60.07
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Hercules-6.1-Llama-3.1-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 24.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Hercules-6.1-Llama-3.1-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 15.63
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Hercules-6.1-Llama-3.1-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.45
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Hercules-6.1-Llama-3.1-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.42
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Hercules-6.1-Llama-3.1-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.65
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Hercules-6.1-Llama-3.1-8B
name: Open LLM Leaderboard
---
# Triangle104/Hercules-6.1-Llama-3.1-8B-Q4_K_S-GGUF
This model was converted to GGUF format from [`Locutusque/Hercules-6.1-Llama-3.1-8B`](https://huggingface.co/Locutusque/Hercules-6.1-Llama-3.1-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Locutusque/Hercules-6.1-Llama-3.1-8B) for more details on the model.
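If you only want the quantized weights themselves (for example, to load them with another GGUF-compatible runtime), they can be fetched with the Hugging Face CLI. This is a minimal sketch, assuming `huggingface_hub` is installed; the repo and file names match the llama.cpp commands further down.
```bash
# Install the Hugging Face CLI if needed, then download only the Q4_K_S file
pip install -U huggingface_hub
huggingface-cli download Triangle104/Hercules-6.1-Llama-3.1-8B-Q4_K_S-GGUF \
  hercules-6.1-llama-3.1-8b-q4_k_s.gguf --local-dir .
```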
---
## Model details
Hercules-6.1-Llama-3.1-8B is a fine-tuned language model derived from Llama-3.1-8B. It is specifically designed to excel at instruction following, function calling, and conversational interactions across various scientific and technical domains. Fine-tuning on the hercules-v6.1 dataset has given it enhanced abilities in:
- **Complex Instruction Following:** Understanding and accurately executing multi-step instructions, even those involving specialized terminology.
- **Function Calling:** Seamlessly interpreting and executing function calls, providing appropriate input and output values.
- **Domain-Specific Knowledge:** Engaging in informative and educational conversations about Biology, Chemistry, Physics, Mathematics, Medicine, Computer Science, and more.
### Intended Uses & Potential Bias
Hercules-6.1-Llama-3.1-8B is well-suited to the following applications:
- **Specialized Chatbots:** Creating knowledgeable chatbots and conversational agents in scientific and technical fields.
- **Instructional Assistants:** Supporting users with educational and step-by-step guidance in various disciplines.
- **Code Generation and Execution:** Facilitating code execution through function calls, aiding in software development and prototyping.
**Important Note:** Although Hercules-v6.1 is carefully constructed, it's important to be aware that the underlying data sources may contain biases or reflect harmful stereotypes. Use this model with caution and consider additional measures to mitigate potential biases in its responses.
### Limitations and Risks
- **Toxicity:** The dataset contains toxic or harmful examples.
- **Hallucinations and Factual Errors:** Like other language models, Hercules-6.1-Llama-3.1-8B may generate incorrect or misleading information, especially in specialized domains where it lacks sufficient expertise.
- **Potential for Misuse:** The ability to engage in technical conversations and execute function calls could be misused for malicious purposes.
### Evaluations

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| agieval_nous | 0.0 | none | | acc ↑ | 0.4427 | ± 0.0094 |
| - agieval_aqua_rat | 1.0 | none | 0 | acc ↑ | 0.2913 | ± 0.0286 |
| | | none | 0 | acc_norm ↑ | 0.2480 | ± 0.0272 |
| - agieval_logiqa_en | 1.0 | none | 0 | acc ↑ | 0.3825 | ± 0.0191 |
| | | none | 0 | acc_norm ↑ | 0.3794 | ± 0.0190 |
| - agieval_lsat_ar | 1.0 | none | 0 | acc ↑ | 0.2087 | ± 0.0269 |
| | | none | 0 | acc_norm ↑ | 0.2043 | ± 0.0266 |
| - agieval_lsat_lr | 1.0 | none | 0 | acc ↑ | 0.4431 | ± 0.0220 |
| | | none | 0 | acc_norm ↑ | 0.4000 | ± 0.0217 |
| - agieval_lsat_rc | 1.0 | none | 0 | acc ↑ | 0.6097 | ± 0.0298 |
| | | none | 0 | acc_norm ↑ | 0.5428 | ± 0.0304 |
| - agieval_sat_en | 1.0 | none | 0 | acc ↑ | 0.7621 | ± 0.0297 |
| | | none | 0 | acc_norm ↑ | 0.6942 | ± 0.0322 |
| - agieval_sat_en_without_passage | 1.0 | none | 0 | acc ↑ | 0.4126 | ± 0.0344 |
| | | none | 0 | acc_norm ↑ | 0.3641 | ± 0.0336 |
| - agieval_sat_math | 1.0 | none | 0 | acc ↑ | 0.4318 | ± 0.0335 |
| | | none | 0 | acc_norm ↑ | 0.3500 | ± 0.0322 |
| arc_challenge | 1.0 | none | 0 | acc ↑ | 0.5247 | ± 0.0146 |
| | | none | 0 | acc_norm ↑ | 0.5606 | ± 0.0145 |
| eq_bench | 2.1 | none | 0 | eqbench ↑ | 63.2023 | ± 2.6818 |
| | | none | 0 | percent_parseable ↑ | 98.8304 | ± 0.8246 |
| gsm8k | 3.0 | flexible-extract | 5 | exact_match ↑ | 0.7801 | ± 0.0114 |
| | | strict-match | 5 | exact_match ↑ | 0.7809 | ± 0.0114 |
| truthfulqa_mc2 | 2.0 | none | 0 | acc ↑ | 0.5389 | ± 0.0150 |
### Open LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Hercules-6.1-Llama-3.1-8B).
| Metric | Value |
|---|---|
| Avg. | 22.40 |
| IFEval (0-Shot) | 60.07 |
| BBH (3-Shot) | 24.15 |
| MATH Lvl 5 (4-Shot) | 15.63 |
| GPQA (0-shot) | 1.45 |
| MuSR (0-shot) | 3.42 |
| MMLU-PRO (5-shot) | 29.65 |
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
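To confirm which build brew installed, you can print the version string; the flag below is assumed from current llama.cpp releases.
```bash
# Print the installed llama.cpp build info
llama-cli --version
```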
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Hercules-6.1-Llama-3.1-8B-Q4_K_S-GGUF --hf-file hercules-6.1-llama-3.1-8b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
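For multi-turn chat rather than a one-off completion, recent llama.cpp builds offer a conversation mode that applies the model's chat template automatically. A minimal sketch, assuming your build supports the `-cnv` flag:
```bash
# Interactive chat using the model's built-in chat template
llama-cli --hf-repo Triangle104/Hercules-6.1-Llama-3.1-8B-Q4_K_S-GGUF \
  --hf-file hercules-6.1-llama-3.1-8b-q4_k_s.gguf -cnv
```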
### Server:
```bash
llama-server --hf-repo Triangle104/Hercules-6.1-Llama-3.1-8B-Q4_K_S-GGUF --hf-file hercules-6.1-llama-3.1-8b-q4_k_s.gguf -c 2048
```
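Once the server is up, it exposes an OpenAI-compatible HTTP API. A minimal request, assuming the default bind address of 127.0.0.1:8080:
```bash
# Send a chat request to the running server's OpenAI-compatible endpoint
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Summarize what a GGUF file is in one sentence."}
        ],
        "temperature": 0.7
      }'
```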
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
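As a concrete example of adding a hardware-specific flag, a CUDA-enabled build on Linux might look like the following. This is a sketch that assumes the CUDA toolkit is installed and that your checkout still uses the Makefile build (newer llama.cpp releases build with CMake instead).
```bash
# Build with CURL support and CUDA acceleration for Nvidia GPUs (Linux)
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```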
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Hercules-6.1-Llama-3.1-8B-Q4_K_S-GGUF --hf-file hercules-6.1-llama-3.1-8b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Hercules-6.1-Llama-3.1-8B-Q4_K_S-GGUF --hf-file hercules-6.1-llama-3.1-8b-q4_k_s.gguf -c 2048
```