---
datasets:
- sciq
- metaeval/ScienceQA_text_only
- GAIR/lima
- Open-Orca/OpenOrca
- openbookqa
language:
- en
tags:
- upstage
- llama
- instruct
- instruction
pipeline_tag: text-generation
---
# LLaMa-30b-instruct model card
## Model Details
* **Developed by**: [Upstage](https://en.upstage.ai)
* **Backbone Model**: [LLaMA](https://github.com/facebookresearch/llama/tree/llama_v1)
* **Variations**: It has different model parameter sizes and sequence lengths: [30B/1024](https://huggingface.co/upstage/llama-30b-instruct), [30B/2048](https://huggingface.co/upstage/llama-30b-instruct-2048), [65B/1024](https://huggingface.co/upstage/llama-65b-instruct)
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: This model is under a **Non-commercial** Bespoke License and governed by the Meta license. You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform), but have either lost your copy of the weights or encountered issues converting them to the Transformers format
* **Where to send comments**: To provide feedback or comments on the model, please open an issue in the [model repository's community tab](https://huggingface.co/upstage/llama-30b-instruct-2048/discussions)
* **Contact**: For questions and comments about the model, please email `[email protected]`
## Dataset Details
### Used Datasets
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- [sciq](https://huggingface.co/datasets/sciq)
- [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only)
- [GAIR/lima](https://huggingface.co/datasets/GAIR/lima)
> No other data was used except for the datasets mentioned above
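
All of the datasets above are hosted on the Hugging Face Hub. As a quick sketch for inspecting them (assuming the `datasets` library is installed; note that `GAIR/lima` is gated and requires accepting its license on the Hub, and `Open-Orca/OpenOrca` is several GB):

```python
from datasets import load_dataset

# Hub IDs exactly as listed above.
for name in ["openbookqa", "sciq", "Open-Orca/OpenOrca",
             "metaeval/ScienceQA_text_only", "GAIR/lima"]:
    ds = load_dataset(name, split="train")
    print(f"{name}: {len(ds)} rows, columns={ds.column_names}")
```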
### Prompt Template
```
### System:
{System}
### User:
{User}
### Assistant:
{Assistant}
```
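
A minimal usage sketch, assuming the 30B/2048 variant, enough GPU memory for fp16 weights, and `transformers` plus `accelerate` installed (the system and user strings are illustrative placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "upstage/llama-30b-instruct-2048"  # or upstage/llama-30b-instruct
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 fits across your GPUs
    device_map="auto",
)

# Fill in the prompt template above, leaving the Assistant section open
# so the model generates the reply.
prompt = (
    "### System:\n"
    "You are a helpful assistant.\n"
    "### User:\n"
    "Explain instruction tuning in one sentence.\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```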
## Hardware and Software
* **Hardware**: We utilized eight NVIDIA A100 GPUs for training our model
* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace trainer](https://huggingface.co/docs/transformers/main_classes/trainer)
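
For illustration only, here is a minimal sketch of how the two combine: DeepSpeed plugs into the HuggingFace trainer through the `deepspeed` field of `TrainingArguments`. The backbone checkpoint name, hyperparameters, toy example, and `ds_config.json` path below are hypothetical placeholders, not our actual training setup:

```python
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "huggyllama/llama-30b"  # hypothetical Hub mirror of the LLaMA backbone
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Toy instruction data in the prompt-template format above (placeholder only).
def tokenize(example):
    ids = tokenizer(example["text"], truncation=True, max_length=2048)
    ids["labels"] = ids["input_ids"].copy()
    return ids

train_dataset = Dataset.from_dict(
    {"text": ["### System:\nBe concise.\n### User:\nHi!\n### Assistant:\nHello!\n"]}
).map(tokenize, remove_columns=["text"])

args = TrainingArguments(
    output_dir="llama-30b-instruct",   # hypothetical output path
    per_device_train_batch_size=1,     # hypothetical hyperparameters
    gradient_accumulation_steps=8,
    bf16=True,
    deepspeed="ds_config.json",        # path to a DeepSpeed ZeRO config (hypothetical)
)

Trainer(model=model, args=args, train_dataset=train_dataset).train()
```

In practice a script like this would be launched with the DeepSpeed launcher, e.g. `deepspeed --num_gpus 8 train.py`, across the eight GPUs.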
## Evaluation Results
### Overview
- We evaluated the model on the tasks used by the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard): the four benchmark datasets `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`.
We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).
### Main Results
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|-----------------------------------------------|---------|-------|-----------|-------|------------|
| llama-65b-instruct (***Ours***, ***Local Reproduction***) | **69.4** | **67.6** | **86.5** | **64.9** | **58.8** |
| llama-30b-instruct-2048 (***Ours***, ***Open LLM Leaderboard***) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 |
| Llama-2-70b-chat-hf | 66.8 | 64.6 | 85.9 | 63.9 | 52.8 |
| llama-30b-instruct (***Ours***, ***Open LLM Leaderboard***) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 |
| llama-65b | 62.1 | 57.6 | 84.3 | 63.4 | 43.0 |
### Scripts
- Prepare the evaluation environment:
```
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to the repository directory
cd lm-evaluation-harness
# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install the harness and its dependencies
pip install -e .
```
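- With the environment prepared, the leaderboard tasks can also be driven programmatically. Below is a rough sketch against the pre-0.4 harness API at this commit (`lm_eval.evaluator.simple_evaluate`); the few-shot counts follow the Open LLM Leaderboard conventions, and the batch size and device are placeholder assumptions:

```python
from lm_eval import evaluator

# Leaderboard-style settings: (task name, num_fewshot).
# "hendrycksTest-abstract_algebra" is one of the 57 MMLU subtasks in this
# version of the harness; the leaderboard averages all of them.
settings = [
    ("arc_challenge", 25),
    ("hellaswag", 10),
    ("hendrycksTest-abstract_algebra", 5),
    ("truthfulqa_mc", 0),
]

for task, shots in settings:
    results = evaluator.simple_evaluate(
        model="hf-causal",
        model_args="pretrained=upstage/llama-30b-instruct-2048",
        tasks=[task],
        num_fewshot=shots,
        batch_size=4,      # assumption: tune to your GPU memory
        device="cuda",
    )
    print(evaluator.make_table(results))
```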
## Ethical Issues
### Ethical Considerations
- We did not include the benchmark test sets, or their corresponding training sets, in the model's training data, so the reported results are free of benchmark contamination.
## Contact Us
### Why Upstage LLM?
- [Upstage](https://en.upstage.ai)'s LLM research has yielded remarkable results. As the evaluation above shows, our 30B model **outperforms larger models such as Llama-2-70b-chat-hf** on the Open LLM Leaderboard, positioning itself as a leading performer. Recognizing the immense potential of private LLMs for real businesses, we invite you to deploy a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to reach out to us. ► [click here to contact].
[click here to contact]: mailto:[email protected] |