---
inference: false
language:
- zh
- en
license: other
model_creator: lmsys
model_link: https://huggingface.co/lmsys/vicuna-7b-v1.5-16k
model_name: vicuna-7b-v1.5-16k
model_type: vicuna
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- gguf
- vicuna
- chinese
---

## Provided files

| Name                           | Quant method | Size   |
| ------------------------------ | ------------ | ------ |
| vicuna-7b-v1.5-16k.Q2_K.gguf   | Q2_K         | 2.6 GB |
| vicuna-7b-v1.5-16k.Q3_K.gguf   | Q3_K         | 3.1 GB |
| vicuna-7b-v1.5-16k.Q3_K_L.gguf | Q3_K_L       | 3.3 GB |
| vicuna-7b-v1.5-16k.Q3_K_S.gguf | Q3_K_S       | 2.7 GB |
| vicuna-7b-v1.5-16k.Q4_0.gguf   | Q4_0         | 3.6 GB |
| vicuna-7b-v1.5-16k.Q4_1.gguf   | Q4_1         | 3.9 GB |
| vicuna-7b-v1.5-16k.Q4_K.gguf   | Q4_K         | 3.8 GB |
| vicuna-7b-v1.5-16k.Q4_K_S.gguf | Q4_K_S       | 3.6 GB |
| vicuna-7b-v1.5-16k.Q5_0.gguf   | Q5_0         | 4.3 GB |
| vicuna-7b-v1.5-16k.Q5_1.gguf   | Q5_1         | 4.7 GB |
| vicuna-7b-v1.5-16k.Q5_K.gguf   | Q5_K         | 4.5 GB |
| vicuna-7b-v1.5-16k.Q5_K_S.gguf | Q5_K_S       | 4.3 GB |
| vicuna-7b-v1.5-16k.Q6_K.gguf   | Q6_K         | 5.1 GB |
| vicuna-7b-v1.5-16k.Q8_0.gguf   | Q8_0         | 6.7 GB |
| vicuna-7b-v1.5-16k.gguf        | full         | 13 GB  |
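
To serve one of these files with the Docker command below, the chosen `.gguf` first has to be on local disk. A minimal sketch using `huggingface-cli` (the repository id `shaowenchen/vicuna-7b-v1.5-16k-gguf`, the chosen quant, and the target directory are assumptions; adjust them to where the files are actually hosted):

```bash
# Download a single quantized file from the Hugging Face Hub.
# Repo id, file name, and target directory below are illustrative assumptions.
huggingface-cli download shaowenchen/vicuna-7b-v1.5-16k-gguf \
  vicuna-7b-v1.5-16k.Q4_K.gguf \
  --local-dir /path/to/models
```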
Usage:
```bash
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```
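
Once the container is running, llama-cpp-python exposes an OpenAI-compatible HTTP API on port 8000. A quick smoke test with `curl` (the prompt text and the Vicuna-style `USER:`/`ASSISTANT:` template are only illustrative):

```bash
# Send a completion request to the OpenAI-compatible endpoint.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "USER: What is the capital of France? ASSISTANT:", "max_tokens": 64}'
```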
## Provided images

| Name                                  | Quant method | Size    |
| ------------------------------------- | ------------ | ------- |
| `shaowenchen/vicuna-7b-v1.5-16k-gguf` | Q2_K         | 3.68 GB |
Usage:
```bash
docker run --rm -p 8000:8000 shaowenchen/vicuna-7b-v1.5-16k-gguf:Q2_K
```
Then you can open http://localhost:8000/docs to view the Swagger UI.
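
The same container also answers OpenAI-style chat requests; for example (the message content is illustrative):

```bash
# Send a chat-completion request to the running container.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello, who are you?"}], "max_tokens": 64}'
```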