
Available GGUF versions for the PatronusAI/glider model: [BF16, Q8_0, Q5_K_M, Q4_K_M]
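
If you want to confirm the exact filenames before downloading, one option is to enumerate the GGUF files in the repository. A minimal sketch using huggingface_hub (the filename pattern `glider_{version}.gguf` is inferred from the loading examples below):

```python
from huggingface_hub import list_repo_files

# List all GGUF files available in the repository.
gguf_files = [f for f in list_repo_files("PatronusAI/glider-gguf") if f.endswith(".gguf")]
print(gguf_files)  # expected to include e.g. glider_Q8_0.gguf
```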

How to load your desired quantized model:

  1. Select the appropriate GGUF quantization from the available list above
  2. Run the following code:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("PatronusAI/glider-gguf", gguf_file="glider_{version_from_list}.gguf")
```

For example, to load the Q8_0 version:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("PatronusAI/glider-gguf", gguf_file="glider_Q8_0.gguf")
```
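
Once loaded, the model behaves like any other transformers causal LM. A minimal end-to-end sketch, assuming the tokenizer is loaded from the same GGUF file (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "PatronusAI/glider-gguf"
gguf_file = "glider_Q8_0.gguf"

# Both the tokenizer and the model can be created from the GGUF checkpoint;
# transformers dequantizes the weights to torch tensors on load.
tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)

inputs = tokenizer("What does this model evaluate?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```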

For any issues or questions, reach out to Darshan Deshpande or Rebecca Qian.

Model size: 3.82B params
Architecture: phi3

Base model: PatronusAI/glider