---
license: llama2
language:
- en
---
# Model Card for umd-zhou-lab/recycled-alpaca-7b-v2.0

This model was trained by fine-tuning Llama-2 on the recycled Alpaca V2 data.
## Model Details

### Model Description

- **Developed by:** UMD Tianyi Zhou Lab
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)

### Model Sources

- **GitHub:** [Reflection-Tuning](https://github.com/tianyi-lab/Reflection_Tuning)
- **Paper:** [Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning](https://arxiv.org/abs/2310.11716)
- **Data:** Coming soon
## Uses

The primary use of this model is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
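
For quick experimentation, the model can be loaded with the Hugging Face `transformers` library. The snippet below is a minimal, illustrative sketch (our own, not an official example); it assumes `transformers`, `accelerate`, and a CUDA GPU, and it uses the conversation template documented in the Training section.

```python
# Minimal loading/generation sketch (illustrative, not an official snippet).
# Assumes `transformers`, `accelerate`, and a CUDA GPU are available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "umd-zhou-lab/recycled-alpaca-7b-v2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",          # requires the `accelerate` package
)

# Single-turn prompt in the Vicuna-style template shown under Training.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is instruction tuning? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Strip the prompt tokens and print only the newly generated reply.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```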
## Training

We use the prompt template from [FastChat](https://github.com/lm-sys/FastChat):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am ...</s>......
```
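
For illustration, the helper below (a hypothetical `build_prompt` of our own, not part of FastChat) shows how a multi-turn conversation can be assembled into this template; the resulting string can then be tokenized and passed to `model.generate`.

```python
# Illustrative helper (not from FastChat): formats (user, assistant) turns into the template above.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (user_msg, assistant_msg); use None for the reply the model should generate."""
    prompt = SYSTEM + " "
    for user_msg, assistant_msg in turns:
        prompt += f"USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            # Completed assistant turns are closed with the </s> end-of-sequence token.
            prompt += f" {assistant_msg}</s>"
    return prompt

# Example:
# build_prompt([("Hi", "Hello."), ("Who are you?", None)])
# -> "... USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT:"
```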
| Model | Global Batch Size | Learning Rate | Epochs | Max Length | Weight Decay | Warmup Ratio |
| --- | ---: | ---: | ---: | ---: | ---: | ---: |
| Recycled Models (7B) | 128 | 2e-5 | 3 | 2048 | 0 | 0.03 |
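
As a rough guide, these settings map onto Hugging Face `TrainingArguments` as sketched below. This is our illustration of equivalent values, not the authors' actual training command; the per-device batch size, precision, and scheduler are assumptions.

```python
# Illustrative only: the hyperparameter table expressed as Hugging Face TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./recycled-alpaca-7b-v2",
    per_device_train_batch_size=8,   # assumption: per-device batch x accumulation x #GPUs must equal 128
    gradient_accumulation_steps=16,  # adjust together with the GPU count to keep the global batch at 128
    learning_rate=2e-5,
    num_train_epochs=3,
    weight_decay=0.0,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",      # assumption: cosine schedule, as in common Alpaca/FastChat recipes
    bf16=True,                       # assumption: mixed precision is not stated on the card
)
# The 2048-token max length is enforced by the tokenizer / data collator rather than by TrainingArguments.
```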
## Performance

The following table compares our recycled models (V2) with baseline models on the AlpacaEval Leaderboard and the Hugging Face Open LLM Leaderboard (Avg, ARC, HellaSwag, MMLU, TruthfulQA).

The recycled Alpaca V2 and WizardLM V2 data, together with the corresponding paper, will be released soon.

| **Model** | **AlpacaEval** | **Avg** | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | **Link** |
|---|---:|---:|---:|---:|---:|---:|:---:|
| **Alpaca 7B** | 26.46 | 50.21 | 42.65 | 76.91 | 41.73 | 39.55 | / |
| **Recycled Alpaca 7B V2.0** | 79.58 | 56.05 | 54.01 | 78.07 | 46.69 | 45.41 | [HF Link](https://huggingface.co/umd-zhou-lab/recycled-alpaca-7b-v2.0) |
| **WizardLM 7B** | 67.64 | 54.18 | 51.60 | 77.70 | 42.70 | 44.70 | / |
| **Recycled WizardLM 7B V2.0** | 83.48 | 56.79 | 54.78 | 77.86 | 45.63 | 48.91 | [HF Link](https://huggingface.co/umd-zhou-lab/recycled-wizardlm-7b-v2.0) |
## Citation

Please consider citing our paper if you find our code, data, or models useful. Thank you!

```
@misc{li2023reflectiontuning,
      title={Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning},
      author={Ming Li and Lichang Chen and Jiuhai Chen and Shwai He and Heng Huang and Jiuxiang Gu and Tianyi Zhou},
      year={2023},
      eprint={2310.11716},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```