TheBloke / Falcon-180B-Chat-GPTQ (69 likes)

Text Generation · Transformers · Safetensors · falcon · text-generation-inference · 4-bit precision · gptq
Dataset: tiiuae/falcon-refinedweb · 4 languages · arxiv: 5 papers · License: unknown
Falcon-180B-Chat-GPTQ
1 contributor · History: 25 commits
Latest commit 13f6b45 by TheBloke: "Set main branch to 4bit-128g-True, sharded" (about 1 year ago)
File                              Size       LFS   Last commit message                              Last modified
.gitattributes                    1.64 kB          GPTQ model commit (split)                        about 1 year ago
ACCEPTABLE_USE_POLICY.txt         613 Bytes        Set main branch to 4bit-128g-True, sharded       about 1 year ago
LICENSE.txt                       15.6 kB          GPTQ model commit                                about 1 year ago
README.md                         22.1 kB          Update README.md                                 about 1 year ago
config.json                       1.2 kB           Set main branch to 4bit-128g-True, sharded       about 1 year ago
generation_config.json            113 Bytes        Add sharded 4-bit GPTQ in place of split files   about 1 year ago
model-00001-of-00010.safetensors  10 GB      LFS   Set main branch to 4bit-128g-True, sharded       about 1 year ago
model-00002-of-00010.safetensors  9.94 GB    LFS   Set main branch to 4bit-128g-True, sharded       about 1 year ago
model-00003-of-00010.safetensors  9.93 GB    LFS   Set main branch to 4bit-128g-True, sharded       about 1 year ago
model-00004-of-00010.safetensors  9.69 GB    LFS   Set main branch to 4bit-128g-True, sharded       about 1 year ago
model-00005-of-00010.safetensors  9.93 GB    LFS   Set main branch to 4bit-128g-True, sharded       about 1 year ago
model-00006-of-00010.safetensors  9.69 GB    LFS   Set main branch to 4bit-128g-True, sharded       about 1 year ago
model-00007-of-00010.safetensors  9.93 GB    LFS   Set main branch to 4bit-128g-True, sharded       about 1 year ago
model-00008-of-00010.safetensors  9.69 GB    LFS   Set main branch to 4bit-128g-True, sharded       about 1 year ago
model-00009-of-00010.safetensors  9.93 GB    LFS   Set main branch to 4bit-128g-True, sharded       about 1 year ago
model-00010-of-00010.safetensors  5.53 GB    LFS   Set main branch to 4bit-128g-True, sharded       about 1 year ago
model.safetensors.index.json      165 kB           Set main branch to 4bit-128g-True, sharded       about 1 year ago
quantize_config.json              187 Bytes        Set main branch to 4bit-128g-True, sharded       about 1 year ago
special_tokens_map.json           281 Bytes        GPTQ model commit                                about 1 year ago
tokenizer.json                    2.73 MB          GPTQ model commit                                about 1 year ago
tokenizer_config.json             180 Bytes        GPTQ model commit                                about 1 year ago
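The weights are split across ten sharded safetensors files. A minimal sketch of the total download footprint, using only the shard sizes listed above (the per-shard figures are from this page; note that serving the model also needs additional memory headroom beyond the weights themselves, e.g. for activations and the KV cache):

```python
# Shard sizes in GB, copied from the file listing above
# (model-00001-of-00010.safetensors through model-00010-of-00010.safetensors).
shard_sizes_gb = [10.0, 9.94, 9.93, 9.69, 9.93, 9.69, 9.93, 9.69, 9.93, 5.53]

# Total disk space needed just for the quantized weight shards.
total_gb = sum(shard_sizes_gb)
print(f"{len(shard_sizes_gb)} shards, {total_gb:.2f} GB total")  # → 10 shards, 94.26 GB total
```

So even at 4-bit GPTQ precision, the weight shards alone come to roughly 94 GB on disk.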