Llama-3-ELYZA-hermes-2x8B-gguf

This is a GGUF-format conversion of Llama-3-ELYZA-hermes-2x8B.
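
A minimal usage sketch with the llama-cpp-python bindings, which can load GGUF files. This assumes you have downloaded one of the quantized .gguf files from this repository; the file name below is only an example, so substitute the quantization you actually use.

```python
from llama_cpp import Llama

# Load a quantized GGUF file (example file name; use the quant you downloaded).
llm = Llama(
    model_path="Llama-3-ELYZA-hermes-2x8B-Q4_K_M.gguf",
    n_ctx=4096,
)

# Simple chat-style generation; the chat template is taken from the GGUF
# metadata when available.
output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Please introduce yourself."},
    ],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```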

Format: GGUF
Model size: 13.7B params
Architecture: llama
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit
