---
license: other
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
- baichuan
- llama2
- baichuan2
---
This is the LLaMAfied version of the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model by Baichuan Inc.

The model was converted with https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_baichuan2.py

You may use this model for fine-tuning on downstream tasks; we recommend our efficient fine-tuning toolkit: https://github.com/hiyouga/LLaMA-Factory
- **Developed by:** Baichuan Inc.
- **Language(s) (NLP):** Chinese/English
- **License:** [Baichuan2 License](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)
Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the slow (SentencePiece) tokenizer; use_fast=False is required for this checkpoint
tokenizer = AutoTokenizer.from_pretrained("hiyouga/Baichuan2-7B-Base-LLaMAfied", use_fast=False)
# Load the converted model and move it to the GPU
model = AutoModelForCausalLM.from_pretrained("hiyouga/Baichuan2-7B-Base-LLaMAfied").cuda()
```
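A minimal generation sketch building on the snippet above. The prompt, dtype, and sampling parameters are illustrative assumptions, not part of this model card; running it requires a CUDA GPU and downloading the full checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hiyouga/Baichuan2-7B-Base-LLaMAfied"

# use_fast=False as above; float16 halves memory use (an assumption, not a card recommendation)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).cuda()

# Encode an illustrative prompt and greedily generate a continuation
inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a base (not chat) model, it only continues text; instruction-following behavior would require fine-tuning, e.g. with LLaMA-Factory as noted above.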
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Base-LLaMAfied)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 42.83 |
| ARC (25-shot) | 49.57 |
| HellaSwag (10-shot) | 73.45 |
| MMLU (5-shot) | 54.86 |
| TruthfulQA (0-shot) | 37.54 |
| Winogrande (5-shot) | 70.72 |
| GSM8K (5-shot) | 7.81 |
| DROP (3-shot) | 5.85 |