---
license: mit
base_model:
- meta-llama/Llama-2-13b
pipeline_tag: text-generation
tags:
- chemistry
- biology
- finance
- legal
- music
- art
- code
- climate
- medical
- quantized
library_name: transformers
metrics:
- perplexity
model-index:
- name: LLaMa2-13b-merged-clusters
  results:
  - task:
      type: commonsense-reasoning
    dataset:
      name: HellaSwag
      type: HellaSwag
    metrics:
    - name: Accuracy
      type: accuracy
      value: 66.66
    source:
      name: OpenCompass
      url: https://opencompass.org
  - task:
      type: commonsense-reasoning
    dataset:
      name: PIQA
      type: PIQA
    metrics:
    - name: Accuracy
      type: accuracy
      value: 72.14
    source:
      name: OpenCompass
      url: https://opencompass.org
  - task:
      type: coreference-resolution
    dataset:
      name: WSC
      type: WSC
    metrics:
    - name: Perplexity
      type: perplexity
      value: 60.38
    source:
      name: OpenCompass
      url: https://opencompass.org
  - task:
      type: coreference-resolution
    dataset:
      name: WSC
      type: WSC
    metrics:
    - name: Accuracy
      type: accuracy
      value: 38.99
    source:
      name: OpenCompass
      url: https://opencompass.org
  - task:
      type: multiple-choice-question-answering
    dataset:
      name: CSQA
      type: CSQA
    metrics:
    - name: Accuracy
      type: accuracy
      value: 54.36
    source:
      name: OpenCompass
      url: https://opencompass.org
  - task:
      type: multi-task-evaluation
    dataset:
      name: MMLU
      type: MMLU
    metrics:
    - name: Accuracy
      type: accuracy
      value: 54.76
    source:
      name: OpenCompass
      url: https://opencompass.org
  - task:
      type: multiple-choice-question-answering
    dataset:
      name: RACE
      type: RACE-high
    metrics:
    - name: Accuracy
      type: accuracy
      value: 53.89
    source:
      name: OpenCompass
      url: https://opencompass.org
  - task:
      type: multiple-choice-question-answering
    dataset:
      name: RACE
      type: RACE-middle
    metrics:
    - name: Accuracy
      type: accuracy
      value: 55.29
    source:
      name: OpenCompass
      url: https://opencompass.org
---

# Merged LLaMA Model

This is a merged version of the LLaMA2-13b model, built by merging clusters of similar layers identified via hyperboloid projections. The merged model keeps 31 of the base model's 40 transformer layers while retaining most of its performance on the benchmarks reported in the metadata above.
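## Usage

Because the card specifies `library_name: transformers` with a `text-generation` pipeline tag, the checkpoint should load through the standard `AutoModelForCausalLM` / `AutoTokenizer` API. The sketch below uses a placeholder repository id (`your-org/LLaMa2-13b-merged-clusters`), which is an assumption; substitute this model's actual Hugging Face path.

```python
# Minimal loading/generation sketch for a merged LLaMA2-13b checkpoint.
# NOTE: the repository id below is a placeholder, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/LLaMa2-13b-merged-clusters"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 13B-class model; half precision keeps memory manageable
    device_map="auto",
)

prompt = "The key benefits of merging layers in large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```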
|