legraphista/Meta-Llama-3-70B-Instruct-abliterated-v3.5-IMat-GGUF
Pipeline: Text Generation
Tags: GGUF, quantized, imatrix, quantization, imat, static, 16bit, 8bit, 6bit, 5bit, 4bit, 3bit, 2bit, 1bit, conversational
License: llama3
Community
Are QK and IQ quantizations made from the F16 or BF16 GGUF? (#2, opened 5 months ago by Nexesenex)
Wow! (#1, opened 7 months ago by Nexesenex)