Models tagged "quark" (16 results, sorted by Trending):
fxmarty/llama-tiny-testing-quark-indev • Updated Oct 3 • 2
fxmarty/llama-tiny-int4-per-group-sym • Updated Oct 25 • 126
fxmarty/llama-tiny-w-fp8-a-fp8 • Updated Oct 22 • 123
fxmarty/llama-tiny-w-fp8-a-fp8-o-fp8 • Updated Oct 22 • 130
fxmarty/llama-tiny-w-int8-per-tensor • Updated Oct 22 • 169
fxmarty/llama-small-int4-per-group-sym-awq • Updated Oct 29 • 127
fxmarty/quark-legacy-int8 • Updated Oct 10 • 610
fxmarty/llama-tiny-w-int8-b-int8-per-tensor • Updated Oct 22 • 130
fxmarty/llama-small-int4-per-group-sym-awq-old • Updated Oct 25 • 4
amd-quark/llama-tiny-w-int8-per-tensor • Updated 8 days ago • 49
amd-quark/llama-tiny-w-int8-b-int8-per-tensor • Updated 8 days ago • 41
amd-quark/llama-tiny-w-fp8-a-fp8 • Updated 8 days ago • 38
amd-quark/llama-tiny-w-fp8-a-fp8-o-fp8 • Updated 8 days ago • 36
amd-quark/llama-tiny-int4-per-group-sym • Updated 8 days ago • 40
amd-quark/llama-small-int4-per-group-sym-awq • Updated 8 days ago • 37
amd-quark/quark-legacy-int8 • Updated 8 days ago • 41
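
The same listing can also be retrieved programmatically. Below is a minimal sketch using the huggingface_hub client; the "quark" tag filter matches the active filter on this page, while the sort key and the printed fields (last-modified date, download count) are assumptions about what you might want to display.

# Minimal sketch: reproduce this hub listing with the huggingface_hub client.
# Assumes the models above are discoverable via the "quark" tag, as on this page.
from huggingface_hub import HfApi

api = HfApi()

# Fetch public models carrying the "quark" tag, most recently updated first.
models = api.list_models(filter="quark", sort="lastModified", direction=-1)

for model in models:
    # Some fields may be None depending on what the listing endpoint returns.
    print(f"{model.id} • updated {model.lastModified} • {model.downloads} downloads")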