---
license: apache-2.0
library_name: transformers
tags:
- transformers
- fine-tuned
- language-modeling
- direct-preference-optimization
datasets:
- Intel/orca_dpo_pairs
model-index:
- name: NeuralPizza-7B-V0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RatanRohith/NeuralPizza-7B-V0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RatanRohith/NeuralPizza-7B-V0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RatanRohith/NeuralPizza-7B-V0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.22
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RatanRohith/NeuralPizza-7B-V0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RatanRohith/NeuralPizza-7B-V0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.44
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RatanRohith/NeuralPizza-7B-V0.1
name: Open LLM Leaderboard
---
## Model Description
NeuralPizza-7B-V0.1 is a fine-tuned version of SanjiWatsuki/Kunoichi-7B, trained with Direct Preference Optimization (DPO) on the Intel/orca_dpo_pairs dataset to better align its outputs with human preference comparisons.
## Intended Use
This model is primarily intended for research and experimental use in language modeling, in particular for exploring the Direct Preference Optimization method and its effect on model behavior.
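Below is a minimal inference sketch, not taken from the original card, using the standard Transformers text-generation pipeline. The plain-text prompt format is an assumption; check the tokenizer's chat template for the format the model actually expects.

```python
# Minimal inference sketch (illustrative, not from the original card).
# Loads the checkpoint with the Transformers text-generation pipeline.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="RatanRohith/NeuralPizza-7B-V0.1",
    torch_dtype=torch.bfloat16,  # assumes a GPU with bfloat16 support
    device_map="auto",
)

result = generator(
    "Explain Direct Preference Optimization in one paragraph.",  # plain-text prompt is an assumption
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```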
## Training Data
The model was fine-tuned on the Intel/orca_dpo_pairs dataset, a collection of preference pairs (a prompt with a preferred and a dispreferred response) intended for applying and testing Direct Preference Optimization in language models.
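For reference, the snippet below (an illustrative sketch, not part of the original card) loads the dataset and prints one preference pair. The column names reflect the dataset as currently published and may change.

```python
# Illustrative look at the preference data (not from the original card).
from datasets import load_dataset

pairs = load_dataset("Intel/orca_dpo_pairs", split="train")
print(pairs)                      # columns and row count

example = pairs[0]
print(example["question"])        # the prompt
print(example["chosen"][:200])    # preferred response (truncated)
print(example["rejected"][:200])  # dispreferred response (truncated)
```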
## Training Procedure
Training followed the methodology described in the guide ["Fine-Tune a Mistral 7B Model with Direct Preference Optimization"](https://medium.com/towards-data-science/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac), published on Towards Data Science. The specific training regime and hyperparameters are based on that guide; a rough sketch of the corresponding setup is shown below.
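The following sketch is an assumption-laden reconstruction of the DPO setup using the `trl` library, not the released training code. All hyperparameters (beta, learning rate, batch sizes, epochs) are illustrative, argument names vary across `trl` versions, and parameter-efficient techniques such as LoRA or 4-bit loading, which the guide may rely on, are omitted for brevity.

```python
# Rough DPO fine-tuning sketch (illustrative assumptions, not the actual training code).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "SanjiWatsuki/Kunoichi-7B"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Map the preference pairs into the prompt/chosen/rejected format DPO expects.
def to_dpo_format(example):
    prompt = example["system"] + "\n" + example["question"]
    return {"prompt": prompt, "chosen": example["chosen"], "rejected": example["rejected"]}

train_dataset = load_dataset("Intel/orca_dpo_pairs", split="train").map(
    to_dpo_format, remove_columns=["system", "question"]
)

training_args = DPOConfig(
    output_dir="neuralpizza-7b-dpo",
    beta=0.1,                       # strength of the preference constraint (assumed value)
    per_device_train_batch_size=2,  # illustrative; depends on available memory
    gradient_accumulation_steps=8,
    learning_rate=5e-5,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,     # named `tokenizer` in older trl releases
)
trainer.train()
```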
## Limitations and Bias
As an experimental model, it may carry biases inherited from its training data. Its performance and outputs should be evaluated critically, especially in sensitive or diverse applications.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.1).
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.53|
|AI2 Reasoning Challenge (25-Shot)|70.48|
|HellaSwag (10-Shot) |87.30|
|MMLU (5-Shot) |64.42|
|TruthfulQA (0-shot) |67.22|
|Winogrande (5-shot) |80.35|
|GSM8k (5-shot) |59.44|