smpanaro/pythia-6.9b-AutoGPTQ-4bit-128g
Tags: Text Generation · Transformers · wikitext · gpt_neox · Inference Endpoints · 4-bit precision · gptq
License: mit
Branch: main · 1 contributor · History: 3 commits
Latest commit: smpanaro — Create README.md (856ca1f, verified, 10 months ago)
File                              | Size      | Last commit
.gitattributes                    | 1.52 kB   | initial commit (10 months ago)
README.md                         | 1.57 kB   | Create README.md (10 months ago)
config.json                       | 1.11 kB   | Upload of AutoGPTQ quantized model (10 months ago)
gptq_model-4bit-128g.safetensors  | 4.18 GB   | (LFS) Upload of AutoGPTQ quantized model (10 months ago)
quantize_config.json              | 314 Bytes | Upload of AutoGPTQ quantized model (10 months ago)