---
license: apache-2.0
datasets:
  - Fredithefish/openassistant-guanaco-unfiltered
language:
  - en
library_name: transformers
pipeline_tag: conversational
inference: false
---

# ✨ Guanaco - 3B - Uncensored ✨

**IMPORTANT:**

This is the old model. The dataset has been updated and a newer version of this model is available here.


Guanaco-3B-Uncensored has been fine-tuned for 6 epochs on the Unfiltered Guanaco Dataset, using RedPajama-INCITE-Base-3B-v1 as the base model.
The model does not perform well with languages other than English.
Please note: This model is designed to provide responses without content filtering or censorship. It generates answers without denials.
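For reference, the training data can be inspected directly from the Hub. The sketch below assumes the `datasets` library is installed and that the dataset exposes a `train` split; the dataset id comes from this card's metadata.

```python
# Minimal sketch: inspect the training data referenced above.
from datasets import load_dataset

ds = load_dataset("Fredithefish/openassistant-guanaco-unfiltered")
print(ds)               # show available splits and sizes
print(ds["train"][0])   # peek at one example (assumes a "train" split exists)
```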

## Special thanks

I would like to thank AutoMeta for providing me with the computing power necessary to train this model.

## Prompt Template

```
### Human: {prompt} ### Assistant:
```
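A minimal usage sketch with the Transformers library, applying the prompt template above. The model id below is an assumption; substitute this repository's actual Hub id if it differs, and treat the generation settings as illustrative rather than tuned.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: replace with this repository's actual Hub id if different.
model_id = "Fredithefish/Guanaco-3B-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format the request with the card's prompt template.
prompt = "### Human: What is the capital of France? ### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")

# Illustrative sampling settings, not tuned values.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```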

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 34.18 |
| ARC (25-shot) | 42.49 |
| HellaSwag (10-shot) | 66.99 |
| MMLU (5-shot) | 25.55 |
| TruthfulQA (0-shot) | 34.71 |
| Winogrande (5-shot) | 63.38 |
| GSM8K (5-shot) | 0.53 |
| DROP (3-shot) | 5.62 |