---
tags:
- llama
- vicuna
- text-generation-inference
---
# Vicuna-13B: complete model with the delta patch applied
- No unnecessary changes
- Same format as the original weights
- No quantization
## Setup

```bash
# Install FastChat
pip3 install fschat

# Install the latest main branch of huggingface/transformers
pip3 install git+https://github.com/huggingface/transformers
```
## Usage

```bash
python3 -m fastchat.serve.cli --model-name eachadea/vicuna-13b
```

See more at https://github.com/lm-sys/FastChat.
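If you prefer to skip the FastChat CLI, the weights can also be loaded directly with `transformers`. This is a minimal sketch, not part of the original card: the model id `eachadea/vicuna-13b` comes from the Usage command above, while the conversation template in `build_prompt` is an assumption based on the commonly used Vicuna prompt format and may differ from what this checkpoint was tuned on.

```python
def build_prompt(user_message: str) -> str:
    """Build a Vicuna-style single-turn prompt (assumed template,
    not confirmed by this model card)."""
    return (
        "A chat between a curious user and an artificial intelligence "
        "assistant. The assistant gives helpful, detailed, and polite "
        f"answers to the user's questions. USER: {user_message} ASSISTANT:"
    )


def generate(user_message: str, max_new_tokens: int = 256) -> str:
    # Lazy import so the prompt helper stays usable without a GPU setup.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("eachadea/vicuna-13b")
    model = AutoModelForCausalLM.from_pretrained(
        "eachadea/vicuna-13b", torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(build_prompt(user_message), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens, keep only the newly generated reply.
    reply_ids = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(reply_ids, skip_special_tokens=True)
```

Note that a 13B model in fp16 needs roughly 26 GB of memory, so `device_map="auto"` is used to let `accelerate` spread the weights across available devices.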