---
datasets:
  - sciq
  - metaeval/ScienceQA_text_only
  - GAIR/lima
  - Open-Orca/OpenOrca
  - openbookqa
language:
  - en
tags:
  - upstage
  - llama
  - instruct
  - instruction
pipeline_tag: text-generation
---

# LLaMa-2-70b-instruct-1024 model card

## Model Details

## Dataset Details

### Used Datasets

- sciq
- metaeval/ScienceQA_text_only
- GAIR/lima
- Open-Orca/OpenOrca
- openbookqa

No other data was used except for the datasets listed above.

### Prompt Template

```
### System:
{System}

### User:
{User}

### Assistant:
{Assistant}
```
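
The template can be used directly with the `transformers` library. The snippet below is a minimal, illustrative sketch: the Hub repo id `upstage/Llama-2-70b-instruct-1024`, the system message, and the generation settings are assumptions for demonstration, not prescriptions from this card.

```python
# Minimal inference sketch (not part of the original card). The repo id,
# system message, and generation settings below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "upstage/Llama-2-70b-instruct-1024"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 70B weights require multiple GPUs or offloading
    device_map="auto",
)

# Build a prompt following the template above; the "### System:" block is optional.
prompt = (
    "### System:\n"
    "You are a helpful, honest assistant.\n\n"
    "### User:\n"
    "Explain instruction tuning in one sentence.\n\n"
    "### Assistant:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens.
new_tokens = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```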

## Hardware and Software

## Evaluation Results

### Overview

### Main Results

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|---|---|---|---|---|---|
| Llama-2-70b-instruct-1024 (Ours, Local Reproduction) | 72.02 | 70.73 | 87.41 | 69.27 | 60.68 |
| llama-65b-instruct (Ours, Local Reproduction) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 |
| llama-30b-instruct-2048 (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 |
| Llama-2-70b-chat-hf | 66.8 | 64.6 | 85.9 | 63.9 | 52.8 |
| llama-30b-instruct (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 |
| llama-65b | 62.1 | 57.6 | 84.3 | 63.4 | 43.0 |

### Scripts

- Prepare evaluation environments (a hedged example run command follows the block below):

```bash
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git

# change to the repository directory
cd lm-evaluation-harness

# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
```
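
Once the harness is checked out, a single benchmark can be run through its `main.py`. The command below is a hedged sketch, not taken from this card: the flags follow the harness CLI around the pinned commit, the repo id `upstage/Llama-2-70b-instruct-1024` is an assumption, and the few-shot count should be set per task (e.g. 25 for ARC, 10 for HellaSwag, 5 for MMLU, 0 for TruthfulQA, following the Open LLM Leaderboard settings).

```bash
# Hedged example (not from the original card): evaluate on ARC-Challenge with
# the pinned harness commit. The repo id is an assumption; adjust flags as needed.
python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=upstage/Llama-2-70b-instruct-1024 \
    --tasks arc_challenge \
    --num_fewshot 25 \
    --batch_size 1 \
    --output_path results/arc_challenge.json
```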

## Ethical Issues

### Ethical Considerations

- There were no ethical issues involved, as we did not include the benchmark test set or the training set in the model's training process.

## Contact Us

### Why Upstage LLM?

- Upstage's LLM research has yielded remarkable results. Our 30B model outperforms all models around the world, positioning itself as the leading performer. Recognizing the immense potential of applying private LLMs to real businesses, we invite you to easily adopt a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to reach out to us. ► Click here to contact us.