This is an FP8-Dynamic quantization of Gemma 3 27B IT, produced with llmcompressor. Serve it with vLLM:

```shell
vllm serve leon-se/gemma-3-27b-it-FP8-Dynamic --max-model-len 4096
```
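The "Dynamic" in FP8-Dynamic means activation scales are computed at runtime, per tensor, rather than calibrated offline. A minimal NumPy sketch of that scaling step (function names and the simplified mantissa rounding are illustrative; real kernels cast to E4M3 bits in hardware):

```python
import numpy as np

F8_E4M3_MAX = 448.0  # largest finite value representable in float8 E4M3

def round_e4m3(v: np.ndarray) -> np.ndarray:
    """Crude nearest-value rounding to a 4-bit significand (1 implicit
    + 3 stored mantissa bits); subnormals are ignored for simplicity."""
    m, e = np.frexp(v)                 # v = m * 2**e, with |m| in [0.5, 1)
    return np.ldexp(np.round(m * 16) / 16, e)

def fp8_dynamic_quantize(x: np.ndarray):
    """Per-tensor dynamic scaling: map the tensor's max magnitude onto
    the E4M3 range, then round to E4M3 precision."""
    scale = np.abs(x).max() / F8_E4M3_MAX
    q = round_e4m3(np.clip(x / scale, -F8_E4M3_MAX, F8_E4M3_MAX))
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q * scale
```

Because the scale is recomputed from each tensor's own max, no calibration dataset is needed, at the cost of a small runtime reduction over the activations.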
Format: Safetensors · Model size: 27.4B params · Tensor types: BF16, F8_E4M3
