
4darsh-Dev/Meta-Llama-3-8B-quantized-GPTQ

Tags: Text Generation, PEFT, English, llama, llama-3-8b, llama-3-8b-quantized, llama-3-8b-autogptq, meta, quantized, 4-bit precision, gptq
Files and versions
  • 1 contributor
History: 4 commits
Latest commit: 4darsh-Dev, updated readme (b613dae, verified, 11 months ago)
  • .gitattributes (1.52 kB): initial commit, 11 months ago
  • README.md (557 Bytes): updated readme, 11 months ago
  • config.json (1.03 kB): Upload of AutoGPTQ quantized model, 11 months ago
  • gptq_model-4bit-128g.bin (5.74 GB, LFS): Upload of AutoGPTQ quantized model, 11 months ago
    Detected Pickle imports (4): "collections.OrderedDict", "torch.HalfStorage", "torch._utils._rebuild_tensor_v2", "torch.IntStorage"
  • quantize_config.json (265 Bytes): Upload of AutoGPTQ quantized model, 11 months ago
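The file listing points to a 4-bit GPTQ checkpoint produced with AutoGPTQ, stored as a pickle-based .bin rather than safetensors (the gptq_model-4bit-128g.bin filename suggests a group size of 128). A minimal loading sketch, assuming the auto-gptq and transformers packages are installed and a CUDA device is available; the prompt and generation settings below are illustrative, not documented by the repository:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "4darsh-Dev/Meta-Llama-3-8B-quantized-GPTQ"

# Tokenizer is loaded from the same repo; the quantized weights live in the
# pickle-based gptq_model-4bit-128g.bin, so safetensors loading is disabled.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",
    use_safetensors=False,
)

# Illustrative prompt and generation parameters (assumptions, not from the card).
prompt = "Explain GPTQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```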