Control-LLM-Llama3.1-8B-SynE-Concat16-Dlerp

This is a fine-tuned model of Llama-3.1-8B for multilingual (Chinese) tasks, trained on the SynE dataset with the Control LLM Concat16-Dlerp expansion method.
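
The checkpoint can be loaded like any other Llama-3.1-based causal language model. The snippet below is a minimal usage sketch with the Hugging Face transformers library; the generation settings and the Chinese prompt are illustrative and not part of the official card.

```python
# Minimal usage sketch (assumes the `transformers` library and a GPU; settings are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ControlLLM/Llama-3.1-8B-SynE-Concat16-Dlerp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 tensor type listed below
    device_map="auto",
)

# Illustrative Chinese prompt; this is a pretrained (not chat-tuned) checkpoint,
# so plain text completion is used rather than a chat template.
prompt = "中国的首都是"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```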

Evaluation Results

Here is an overview of the evaluation results and findings:

Benchmark Results Table

The table below summarizes evaluation results across Chinese tasks and original capabilities.

Model CEval CEvalC CMMLU CMMLUC C-Avg BBH MLU MLUP O-Avg Overall
Llama3.1-8B 48.3 12.8 51.1 14.1 13.9 65.2 65.4 35.5 45.9 29.9
Llama-3-SynE 57.7 22.3 57.1 22.8 22.8 61.9 64.0 32.6 42.9 32.9
Full Param Tune 59.0 40.2 60.2 44.3 43.8 64.8 64.9 35.0 45.4 44.6
Stack Expansion 56.0 32.7 55.2 33.4 33.3 62.3 65.6 35.3 44.8 39.1
Concat-Lerp 57.1 34.8 57.0 37.4 37.1 64.4 64.6 35.8 45.9 41.5
Hybrid Expansion 58.9 44.7 57.9 44.3 44.4 65.1 65.7 36.9 46.8 45.6
Control LLM* 57.0 44.7 56.0 44.9 44.8 68.2 65.6 37.9 48.5 46.7

Explanation:

  • CEval: Chinese Evaluation
  • CEvalC: Chinese Evaluation (CoT - Chain of Thought)
  • CMMLU: Chinese MMLU
  • CMMLUC: Chinese MMLU (CoT)
  • C-Avg: Chinese capability - size-weighted average across CEval, CEvalC, CMMLU, and CMMLUC (a short computation sketch follows this list)
  • BBH: BigBench Hard
  • MLU: MMLU (Massive Multitask Language Understanding)
  • MLUP: MMLU Pro
  • O-Avg: Original capability - size-weighted average across BBH, MLU, and MLUP
  • Overall: Combined average across all tasks
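
As a minimal sketch of how a size-weighted average such as C-Avg is computed, assuming hypothetical per-benchmark example counts (the actual evaluation-set sizes are not listed on this card):

```python
# Size-weighted average sketch. Scores are taken from the Control LLM* row above;
# the per-benchmark example counts are placeholders, not the real evaluation-set sizes.
scores = {"CEval": 57.0, "CEvalC": 44.7, "CMMLU": 56.0, "CMMLUC": 44.9}
sizes = {"CEval": 1000, "CEvalC": 1000, "CMMLU": 10000, "CMMLUC": 10000}  # placeholder counts

c_avg = sum(scores[k] * sizes[k] for k in scores) / sum(sizes.values())
print(f"C-Avg ≈ {c_avg:.1f}")
```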
Model size: 11.5B params (Safetensors, BF16)

Evaluation results (self-reported)

  • exact_match,strict-match (meta_pretrain) on Pretraining Evaluation Dataset: 0.485
  • exact_match,strict-match (meta_bbh_3shot_cot_pretrain) on Pretraining Evaluation Dataset: 0.682
  • acc,none (meta_mmlu_5shot_pretrain) on Pretraining Evaluation Dataset: 0.656
  • exact_match,strict-match (meta_mmlu_pro_5shot_pretrain) on Pretraining Evaluation Dataset: 0.379
  • exact_match,strict-match (zh_pretrain_multishot) on Chinese Evaluation Dataset: 0.448
  • acc,none (ceval-valid) on Chinese Evaluation Dataset: 0.570
  • exact_match,strict-match (ceval-valid-pretrain-cot_zh) on Chinese Evaluation Dataset: 0.447
  • acc,none (cmmlu) on Chinese Evaluation Dataset: 0.560
  • exact_match,strict-match (cmmlu_pretrain_cot_zh) on Chinese Evaluation Dataset: 0.449