---
license: apache-2.0
language:
- en
---
# InsTagger
**InsTagger** is a tool for automatically providing instruction tags, obtained by distilling the tagging results of **InsTag**.

InsTag aims at analyzing supervised fine-tuning (SFT) data used to align LLMs with human preference. For local tagging deployment, we release InsTagger, fine-tuned on InsTag results, to tag the queries in SFT data. Guided by these tags, we sample a 6K subset of open-source SFT data to fine-tune LLaMA and LLaMA-2; the resulting models, TagLM-13B-v1.0 and TagLM-13B-v2.0, outperform many open-source LLMs on MT-Bench.
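To make the tagging task concrete, here is a purely illustrative sketch; the query and tags below are invented for illustration, and the exact output format is defined by InsTag rather than shown in this card:

```python
# Purely illustrative (invented example): InsTagger consumes a single SFT query and
# is expected to return open-set tags describing its intention and required skills.
query = "Summarize the plot of 'Pride and Prejudice' in three sentences."
tags = ["text summarization", "literature knowledge", "length constraint"]  # hypothetical tags
```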
### Model Description
- **Model type:** Auto-regressive language model
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** LLaMA-2

### Model Sources
- **Repository:** [https://github.com/OFA-Sys/InsTag](https://github.com/OFA-Sys/InsTag)
- **Paper:** [arXiv](https://arxiv.org/pdf/2308.07074.pdf)
- **Demo:** [ModelScope Demo](https://www.modelscope.cn/studios/lukeminglkm/instagger_demo/summary)

## Uses
This model is developed directly with [FastChat](https://github.com/lm-sys/FastChat), so it can easily be used for inference or served with FastChat by selecting the vicuna conversation template.
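As a minimal sketch, the snippet below runs a single query through the model with Hugging Face `transformers`, hand-building a Vicuna-v1.1-style prompt like the one FastChat applies when the vicuna template is selected. The model path `OFA-Sys/InsTagger` is an assumption, not confirmed by this card; substitute the actual checkpoint location.

```python
# Minimal sketch: tag a single query with InsTagger via transformers.
# "OFA-Sys/InsTagger" is an assumed model path; replace it with the real checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "OFA-Sys/InsTagger"  # assumption, not confirmed by this card

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

# The query to be tagged, wrapped in a Vicuna v1.1-style single-turn prompt,
# mirroring what FastChat's vicuna conversation template produces.
query = "Write a Python function that reverses a linked list and explain its complexity."
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    f"USER: {query} ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Strip the prompt tokens and keep only the generated tags.
generated = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```

With FastChat itself, `python3 -m fastchat.serve.cli --model-path <path-to-instagger>` starts an interactive session, and the conversation template can be pinned with `--conv-template` if auto-detection picks a different one.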