Quantization made by Richard Erkhov.

# vicuna-68m - bnb 8bits

- Model creator: https://huggingface.co/double7/
- Original model: https://huggingface.co/double7/vicuna-68m/
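A minimal sketch of loading the model in 8-bit with `transformers` and `bitsandbytes`. It uses the original repo id `double7/vicuna-68m` (loading it with `load_in_8bit=True` reproduces the bnb 8-bit setup described here); requires `transformers`, `accelerate`, and `bitsandbytes` to be installed.

```python
# Sketch: load the 68M Vicuna-like model with bitsandbytes 8-bit quantization.
# The repo id below is the original model's; substitute the quantized repo if preferred.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "double7/vicuna-68m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```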
Original model description:

license: apache-2.0
datasets:
  - anon8231489123/ShareGPT_Vicuna_unfiltered
language:
  - en
pipeline_tag: text-generation
## Model description
This is a Vicuna-like model with only 68M parameters, fine-tuned from LLaMA-68m on ShareGPT data.

The training setup follows the Vicuna suite.

The model was developed mainly as a base small speculative (draft) model for the MCSD paper. Compared with LLaMA-68m, it aligns better with the Vicuna target models while sacrificing little alignment with the LLaMA models.
| Draft Model | Target Model | Alignment |
|---|---|---|
| LLaMA-68/160M | LLaMA-13/33B | good |
| LLaMA-68/160M | Vicuna-13/33B | poor |
| Vicuna-68/160M | LLaMA-13/33B | good |
| Vicuna-68/160M | Vicuna-13/33B | good |
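Why draft/target alignment matters can be seen in the speculative-sampling verification loop: the better the draft distribution q matches the target distribution p, the more proposed tokens survive verification. A toy, self-contained sketch over dictionary-valued distributions (not the MCSD implementation; the function name and data shapes are illustrative assumptions):

```python
import random

def speculative_step(p_target, q_draft, draft_tokens, rng):
    """One verification pass of standard speculative sampling over a toy vocab.

    p_target, q_draft: per-position distributions as dicts {token: prob}.
    draft_tokens: tokens the draft model proposed at each position.
    Returns accepted tokens, plus one resampled token after the first rejection.
    """
    accepted = []
    for i, tok in enumerate(draft_tokens):
        p, q = p_target[i], q_draft[i]
        # Accept the draft token with probability min(1, p(tok) / q(tok)).
        if rng.random() < min(1.0, p.get(tok, 0.0) / max(q.get(tok, 1e-9), 1e-9)):
            accepted.append(tok)
        else:
            # Rejected: resample from the residual distribution max(0, p - q).
            residual = {t: max(0.0, p.get(t, 0.0) - q.get(t, 0.0)) for t in p}
            total = sum(residual.values())
            r, acc = rng.random() * total, 0.0
            for t, w in residual.items():
                acc += w
                if r <= acc:
                    accepted.append(t)
                    break
            break  # stop verifying after the first rejection
    return accepted
```

When p and q are identical (a perfectly aligned draft), every proposed token is accepted; when they disagree, verification stops early, which is exactly the cost the table above summarizes.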