4darsh-Dev/Meta-Llama-3-8B-quantized-GPTQ
Pipeline: Text Generation · Library: PEFT · Language: English
Tags: llama, llama-3-8b, llama-3-8b-quantized, llama-3-8b-autogptq, meta, quantized, 4-bit precision, gptq
License: other
Files and versions (branch: main)
1 contributor · 4 commits. Latest commit: "updated readme" by 4darsh-Dev (b613dae, verified, 11 months ago).
| File | Size | Last commit message | When |
|---|---|---|---|
| .gitattributes | 1.52 kB | initial commit | 11 months ago |
| README.md | 557 Bytes | updated readme | 11 months ago |
| config.json | 1.03 kB | Upload of AutoGPTQ quantized model | 11 months ago |
| gptq_model-4bit-128g.bin (LFS, pickle) | 5.74 GB | Upload of AutoGPTQ quantized model | 11 months ago |
| quantize_config.json | 265 Bytes | Upload of AutoGPTQ quantized model | 11 months ago |

Detected pickle imports in gptq_model-4bit-128g.bin (4): collections.OrderedDict, torch.HalfStorage, torch._utils._rebuild_tensor_v2, torch.IntStorage.
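A "Detected Pickle imports" list like the one above comes from walking the pickle opcode stream and recording every global reference (each one is a name the unpickler would import, which is why pickle files are a security concern). Below is a minimal standard-library sketch of that idea; the function name `scan_pickle_imports` is my own, not Hugging Face's actual scanner:

```python
import pickletools


def scan_pickle_imports(data: bytes) -> set:
    """Return the module.name pairs a pickle stream references.

    GLOBAL/INST opcodes carry "module name" directly; STACK_GLOBAL
    (protocol 4+) takes module and name from the two most recently
    pushed strings, so we track string arguments as we go.
    """
    imports = set()
    recent_strings = []  # string pushes seen so far in the stream
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "INST"):
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL":
            # module and class name are the two latest string pushes
            imports.add(f"{recent_strings[-2]}.{recent_strings[-1]}")
        if isinstance(arg, str):
            recent_strings.append(arg)
    return imports


if __name__ == "__main__":
    import pickle
    from collections import OrderedDict

    # An OrderedDict pickle references its class, so the scan reports it.
    print(scan_pickle_imports(pickle.dumps(OrderedDict(a=1))))
    # → {'collections.OrderedDict'}
```

Running this over a real checkpoint like gptq_model-4bit-128g.bin would surface the torch storage classes listed above; a scanner then flags anything outside an allowlist as suspicious.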