# bunnycore/LLama-3.2-1B-General-lora_model-F16-GGUF
This LoRA adapter was converted to GGUF format from bunnycore/LLama-3.2-1B-General-lora_model via ggml.ai's GGUF-my-lora space.
Refer to the original adapter repository for more details.
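If you prefer to convert the adapter locally rather than through the web space, llama.cpp ships a conversion script for PEFT LoRA adapters. The sketch below is a rough outline, not the exact steps used for this repository: it assumes the `convert_lora_to_gguf.py` script and its `--base`, `--outfile`, and `--outtype` options from a current llama.cpp checkout, and the local paths are placeholders.

```bash
# Minimal local-conversion sketch (assumed paths; GGUF-my-lora performs a similar flow server-side).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the PEFT LoRA adapter to GGUF, pointing --base at the unquantized base model directory.
python convert_lora_to_gguf.py \
  --base ./Llama-3.2-1B-Instruct \
  --outfile LLama-3.2-1B-General-lora_model-f16.gguf \
  --outtype f16 \
  ./LLama-3.2-1B-General-lora_model
```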
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora LLama-3.2-1B-General-lora_model-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora LLama-3.2-1B-General-lora_model-f16.gguf (...other args)
```
To learn more about LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
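As an illustration, once `llama-server` is running with the adapter loaded, it can be queried over HTTP. The snippet below is a minimal sketch that assumes the server's default bind address and port (127.0.0.1:8080) and uses its `/completion` endpoint; the prompt and `n_predict` value are placeholders.

```bash
# Start the server with the base model plus this LoRA adapter (default port 8080 assumed).
llama-server -m base_model.gguf --lora LLama-3.2-1B-General-lora_model-f16.gguf

# In another shell, send a completion request to the running server.
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Write a haiku about llamas.", "n_predict": 64}'
```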
## Model tree for bunnycore/LLama-3.2-1B-General-lora_model-F16-GGUF

- Base model: meta-llama/Llama-3.2-1B-Instruct
- Quantized base: unsloth/Llama-3.2-1B-Instruct-bnb-4bit