---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
metrics:
- rouge
pipeline_tag: text-generation
---
# MiniLLM-Llama-7B
[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)
**MiniLLM-Llama-7B** is a Llama-7B model distilled from [Llama-13B](https://huggingface.co/MiniLLM/teacher-Llama-13B) on [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k).
<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/7hBWGZzYMJihCRQ70XoiQ.png" width="1000">
</p>
**Note**: MiniLLM requires an [SFT model](https://huggingface.co/MiniLLM/init-Llama-7B) as initialization to perform the PPO optimization.
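## Usage

The card does not ship a usage snippet, so here is a minimal sketch using the standard `transformers` causal-LM API. The repo id `MiniLLM/MiniLLM-Llama-7B` is inferred from this card's title, and the generation settings are illustrative assumptions rather than recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the card title; adjust if the weights
# live under a different path.
model_name = "MiniLLM/MiniLLM-Llama-7B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # fp16 keeps the 7B model on a single 24 GB GPU
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Explain knowledge distillation in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```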
## Evaluation
We ask GPT-4 to score the responses generated by MiniLLM. The prompts are taken from [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k) (test set), [self-instruct](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json), and [vicuna](https://github.com/lm-sys/vicuna-blog-eval). A sketch of this GPT-4-as-judge setup follows the figure below.
<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/rDXnaDbKH5mBYAmqGC-_a.png" width="1000">
</p>
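For reference, a minimal sketch of the GPT-4-as-judge protocol described above, using the OpenAI Python client. The rubric wording and API parameters here are illustrative assumptions; the exact evaluation script comes from the paper's codebase, not this card.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical judging rubric; the paper's exact prompt is not
# reproduced on this card.
JUDGE_TEMPLATE = (
    "Rate the quality of the response to the instruction on a scale of 1-10.\n"
    "Instruction: {instruction}\nResponse: {response}\nScore:"
)

def gpt4_score(instruction: str, response: str) -> str:
    """Ask GPT-4 to score one model response against its instruction."""
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": JUDGE_TEMPLATE.format(
                instruction=instruction, response=response
            ),
        }],
        temperature=0,  # deterministic scoring
    )
    return completion.choices[0].message.content
```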
## Baseline Models
+ [SFT w/o KD](https://huggingface.co/MiniLLM/SFT-Llama-7B)
+ [KD](https://huggingface.co/MiniLLM/KD-Llama-7B)
+ [SeqKD](https://huggingface.co/MiniLLM/SeqKD-Llama-7B)
## Citation
```
@inproceedings{minillm,
title={MiniLLM: Knowledge Distillation of Large Language Models},
author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
booktitle={Proceedings of ICLR},
year={2024}
}
```