---
license: other
language:
  - en
pipeline_tag: text-generation
---

# llama-3-neural-chat-v2.2-8b


## Model Details

### Model Description

I fine-tuned Llama 3 8B using an approach similar to Intel's neural-chat language model, with slightly modified data sources to make it stronger in coding, math, and writing. Training used both SFT and DPO-Positive; DPO-Positive dramatically improves performance over standard DPO.
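To illustrate how DPO-Positive differs from plain DPO, here is a minimal per-example loss sketch following the DPOP formulation from the Smaug paper. This is an assumption-laden illustration, not this model's actual training code; the hyperparameter values for `beta` and `lam` are placeholders, not values from this run.

```python
import math

def dpop_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1, lam=5.0):
    """Sketch of the DPO-Positive (DPOP) loss for one preference pair.

    logp_w / logp_l: policy log-probs of the chosen / rejected response.
    ref_logp_w / ref_logp_l: reference-model log-probs of the same responses.
    beta, lam: placeholder hyperparameters (not this model's actual values).
    """
    # Standard DPO margin: implicit reward of chosen minus rejected.
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    # DPOP's extra penalty: fires only when the policy's probability of the
    # *chosen* response drops below the reference model's, which is the
    # failure mode plain DPO allows.
    penalty = max(0.0, ref_logp_w - logp_w)
    z = beta * (margin - lam * penalty)
    return -math.log(1.0 / (1.0 + math.exp(-z)))  # -log(sigmoid(z))

# With no penalty active, the loss reduces to plain DPO; when the policy
# underweights the chosen response, the penalty pushes the loss up.
no_penalty = dpop_loss(-10.0, -15.0, -12.0, -13.0)
with_penalty = dpop_loss(-14.0, -15.0, -12.0, -13.0)
```

In a real trainer these log-probs are summed token log-likelihoods and the loss is averaged over a batch; the point of the sketch is only the extra `max(0, ...)` term that distinguishes DPOP from DPO.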

## Quants

- GGUF: https://huggingface.co/bartowski/llama-3-neural-chat-v2.2-8B-GGUF

## Uses

This model performs well on writing, coding, and math tasks.

## Training Data

Recipe information is coming soon. This model's training recipe is similar to Intel's Neural Chat.

## Direct Use

Conversational AI. Note that this model is largely uncensored: it will respond to almost any request regardless of the system prompt, so use it at your own risk.
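Llama 3 instruct fine-tunes typically follow Meta's chat template. Assuming this model inherits the base model's special tokens (the model card does not state its prompt format, so treat this as a sketch; with `transformers`, `tokenizer.apply_chat_template` is the safer route), a conversation prompt can be built like this:

```python
# Minimal sketch of Meta's Llama 3 instruct chat format.
# Assumption: this fine-tune keeps the base model's special tokens;
# in practice, prefer tokenizer.apply_chat_template from transformers.
def build_llama3_prompt(messages):
    """messages: list of {"role": str, "content": str} dicts."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model generates the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

example = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO in one sentence."},
])
print(example)
```

The resulting string is what you would pass to the tokenizer for generation; `<|eot_id|>` also serves as the stop token when decoding.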

## Evaluations

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| truthfulqa_mc2 | 2 | none | 0 | acc | 0.5232 | ± 0.0151 |
| gsm8k | 3 | strict-match | 5 | exact_match | 0.5974 | ± 0.0135 |
| | | flexible-extract | 5 | exact_match | 0.5974 | ± 0.0135 |
| agieval_nous | N/A | none | 0 | acc_norm | 0.3841 | ± 0.0094 |
| | | none | 0 | acc | 0.3802 | ± 0.0094 |
| - agieval_aqua_rat | 1 | none | 0 | acc | 0.2598 | ± 0.0276 |
| | | none | 0 | acc_norm | 0.2520 | ± 0.0273 |
| - agieval_logiqa_en | 1 | none | 0 | acc | 0.3441 | ± 0.0186 |
| | | none | 0 | acc_norm | 0.3687 | ± 0.0189 |
| - agieval_lsat_ar | 1 | none | 0 | acc | 0.2217 | ± 0.0275 |
| | | none | 0 | acc_norm | 0.2348 | ± 0.0280 |
| - agieval_lsat_lr | 1 | none | 0 | acc | 0.3882 | ± 0.0216 |
| | | none | 0 | acc_norm | 0.3824 | ± 0.0215 |
| - agieval_lsat_rc | 1 | none | 0 | acc | 0.4944 | ± 0.0305 |
| | | none | 0 | acc_norm | 0.5019 | ± 0.0305 |
| - agieval_sat_en | 1 | none | 0 | acc | 0.6650 | ± 0.0330 |
| | | none | 0 | acc_norm | 0.6553 | ± 0.0332 |
| - agieval_sat_en_without_passage | 1 | none | 0 | acc | 0.3981 | ± 0.0342 |
| | | none | 0 | acc_norm | 0.3981 | ± 0.0342 |
| - agieval_sat_math | 1 | none | 0 | acc | 0.3500 | ± 0.0322 |
| | | none | 0 | acc_norm | 0.3318 | ± 0.0318 |