QuantFactory/shisa-gamma-7b-v1-GGUF
This is a quantized (GGUF) version of augmxnt/shisa-gamma-7b-v1, created using llama.cpp.
Model Description
For more information, see our main Shisa 7B model.
We applied a version of our fine-tuning dataset to Japanese Stable LM Base Gamma 7B, and it performed well; we are sharing it since it may be of interest.
Check out our JA MT-Bench results.
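The GGUF files can be loaded with any llama.cpp-compatible runtime. The sketch below uses llama-cpp-python together with huggingface_hub; the GGUF filename shown is an assumption, so check the repository's file list for the quantization you actually want.

```python
# Minimal sketch: download one quantization and run a prompt with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: the filename below is hypothetical -- pick a real GGUF file from the repo.
model_path = hf_hub_download(
    repo_id="QuantFactory/shisa-gamma-7b-v1-GGUF",
    filename="shisa-gamma-7b-v1.Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if available; set 0 for CPU-only
)

# The model is tuned for Japanese, so prompt it accordingly.
output = llm("日本の首都はどこですか？", max_tokens=128)
print(output["choices"][0]["text"])
```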
Available quantizations
This repository provides GGUF files at the following quantization levels:
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
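As a rough rule of thumb (an estimate, not a measured figure), a 7B-parameter model needs about 7e9 × bits ÷ 8 bytes for the weights alone: roughly 3.5 GB at 4-bit and 7 GB at 8-bit, plus additional memory for the KV cache and runtime overhead.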
Model tree for QuantFactory/shisa-gamma-7b-v1-GGUF
- Base model: stabilityai/japanese-stablelm-base-gamma-7b
- Finetuned: augmxnt/shisa-gamma-7b-v1