smpanaro/gpt2-AutoGPTQ-4bit-128g

Tags: Text Generation, Transformers, wikitext, gpt2, Inference Endpoints, 4-bit precision, gptq
License: MIT
Files: 1 contributor, 5 commits
Latest commit: Update README.md by smpanaro (4ad090a, verified, 10 months ago)
| File | Size | Last commit | Age |
|---|---|---|---|
| .gitattributes | 1.52 kB | initial commit | 10 months ago |
| README.md | 1.16 kB | Update README.md | 10 months ago |
| config.json | 1.25 kB | Upload of AutoGPTQ quantized model | 10 months ago |
| gptq_model-4bit-128g.safetensors (LFS) | 201 MB | Upload of AutoGPTQ quantized model | 10 months ago |
| quantize_config.json | 296 Bytes | Upload of AutoGPTQ quantized model | 10 months ago |
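The repo name encodes the quantization settings that `quantize_config.json` would also record: `4bit` for 4-bit weights and `128g` for a group size of 128. As a minimal sketch (assuming this common AutoGPTQ naming convention, not an official API), the parameters can be decoded from the name itself:

```python
import re

def parse_gptq_suffix(repo_name: str) -> dict:
    """Extract quantization settings from an AutoGPTQ-style repo name
    such as 'gpt2-AutoGPTQ-4bit-128g', assuming the convention that
    '<N>bit' means N weight bits and '<G>g' means group size G."""
    bits = re.search(r"(\d+)bit", repo_name)
    group = re.search(r"(\d+)g\b", repo_name)
    return {
        "bits": int(bits.group(1)) if bits else None,
        "group_size": int(group.group(1)) if group else None,
    }

print(parse_gptq_suffix("gpt2-AutoGPTQ-4bit-128g"))
# → {'bits': 4, 'group_size': 128}
```

These two values should agree with the `bits` and `group_size` fields in the repo's `quantize_config.json`.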