TeeZee/NEBULA-XB-v1.0_SFT_2_epoch

An experiment: can DUS (depth up-scaling) be taken one or more steps further?

Technical notes:

  • pretrained model NEBULA-XB-v1.0, finetuned on 30k entries from the Merge_Glue dataset
  • 18 layers removed from each of the two copies of the finetuned GALAXY-XB-v03
  • the resulting model has 108 layers: (((48 - 12) * 2) - 18) * 2 = 108 (see the sketch after this list)
  • second step in scaling the DUS procedure
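
A minimal sketch of the two-step layer arithmetic above; the 48/12/18 values come from the notes, and the variable names are illustrative:

```python
# Step 1: two copies of a 48-layer base, each trimmed by 12 layers, are stacked.
galaxy_layers = (48 - 12) * 2             # -> 72 layers (finetuned GALAXY-XB-v03)
# Step 2: two copies of that 72-layer model, each trimmed by 18 layers, are stacked.
nebula_layers = (galaxy_layers - 18) * 2  # -> 108 layers (NEBULA-XB-v1.0)
assert nebula_layers == 108
```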

To evaluate:

  • model performance after the merge; it should be slightly lower than that of GALAXY finetuned on 50k entries of SlimOrca

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric                               Value
Avg.                                 58.02
AI2 Reasoning Challenge (25-shot)    63.05
HellaSwag (10-shot)                  85.07
MMLU (5-shot)                        65.41
TruthfulQA (0-shot)                  52.06
Winogrande (5-shot)                  82.24
GSM8k (5-shot)                        0.30
Model size: 23.8B params (Safetensors, BF16 tensors)
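
For completeness, a minimal loading sketch using the Hugging Face transformers library; this is an assumed usage pattern (a standard causal LM loaded in BF16), not an official example from the author:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeeZee/NEBULA-XB-v1.0_SFT_2_epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load weights in BF16 to match the tensor type listed above.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate; adjust for your hardware
)

inputs = tokenizer("The DUS procedure", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```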
