---
license: other
datasets:
  - mlabonne/orpo-dpo-mix-40k
tags:
  - dpo
---

# NeuralDaredevil-8B-abliterated


This is a DPO fine-tune of mlabonne/Daredevil-8B-abliterated, trained for one epoch on mlabonne/orpo-dpo-mix-40k. It is an improved version of the abliterated model.
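
For reference, a rough sketch of how such a DPO run might look with TRL's `DPOTrainer` (recent TRL versions); the hyperparameters and training arguments below are illustrative assumptions, not the actual configuration used for this model:

```python
# Illustrative DPO training sketch with TRL (recent versions).
# Hyperparameters are assumptions, not the real training config.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "mlabonne/Daredevil-8B-abliterated"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# orpo-dpo-mix-40k provides (prompt, chosen, rejected) preference pairs.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = DPOConfig(
    output_dir="NeuralDaredevil-8B-abliterated",
    num_train_epochs=1,              # one epoch, as described above
    per_device_train_batch_size=2,   # assumed value
    learning_rate=5e-6,              # assumed value
    beta=0.1,                        # DPO regularization strength (assumed)
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```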

## 🔎 Applications

This is an uncensored model. You can use it for any application that doesn't require alignment, like role-playing.

Tested on LM Studio using the "Llama 3" preset.
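For programmatic use, here is a minimal sketch with 🤗 Transformers; the generation parameters and the example prompt are assumptions, not an official snippet from this card:

```python
# Minimal usage sketch with 🤗 Transformers (assumed setup, not an official example).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/NeuralDaredevil-8B-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The Llama 3 chat template is applied via the tokenizer.
messages = [{"role": "user", "content": "Explain what an abliterated model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```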

πŸ† Evaluation

### Open LLM Leaderboard

TBD.

### Nous

Evaluation performed using LLM AutoEval. See the entire leaderboard here.

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---|---|---|---|---|
| mlabonne/NeuralDaredevil-8B-abliterated 📄 | 55.87 | 43.73 | 73.6 | 59.36 | 46.8 |
| mlabonne/Daredevil-8B 📄 | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| mlabonne/Daredevil-8B-abliterated 📄 | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| NousResearch/Hermes-2-Theta-Llama-3-8B 📄 | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 |
| openchat/openchat-3.6-8b-20240522 📄 | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| meta-llama/Meta-Llama-3-8B-Instruct 📄 | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| meta-llama/Meta-Llama-3-8B 📄 | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |

## 🌳 Model family tree

*(image: model family tree diagram)*