Custom GGUF quants of Meta's Llama-3.2-Instruct fine-tunes, where the output tensors are quantized to Q8_0 or F32 and the embeddings are kept at F32
Joseph (Joseph717171)
Collections (3)
Custom GGUF quants of Llama-3.1-8B-Instruct fine-tunes, where the output tensors are quantized to Q8_0 while the embeddings are kept at F32.
- Joseph717171/DeepHermes-3-Llama-3.1-8B-Preview-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 564 • 1)
- Joseph717171/Hermes-3-Llama-3.1-8B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 257 • 2)
- Joseph717171/Llama-3.1-SuperNova-Lite-8.0B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 230 • 2)
- Joseph717171/Hermes-3-Llama-3.1-8B_TIES_with_base_Embeds_Initialized_dtypeF32-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 64 • 1)
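To see why keeping the embeddings at F32 is affordable, here is a back-of-the-envelope sketch of the size cost for a Llama-3.1-8B-class model. The vocabulary size (128,256) and hidden dimension (4,096) are taken from the published Llama-3.1-8B configuration, and ~4.25 bits/weight is assumed for an IQ4-class quant; exact figures vary by fine-tune and quantization scheme.

```python
# Back-of-the-envelope cost of keeping the token-embedding tensor at F32
# instead of quantizing it, for a Llama-3.1-8B-class model.
# Shape assumptions: vocab size and hidden dim from the published
# Llama-3.1-8B config; ~4.25 bpw assumed for an IQ4-class quant.
VOCAB = 128_256
HIDDEN = 4_096

params = VOCAB * HIDDEN          # elements in the embedding matrix

bytes_f32 = params * 4           # 32 bits per weight
bytes_iq4 = params * 4.25 / 8    # ~4.25 bits per weight

print(f"embedding params: {params / 1e6:.0f}M")
print(f"F32 size:  {bytes_f32 / 2**30:.2f} GiB")
print(f"IQ4 size:  {bytes_iq4 / 2**30:.2f} GiB")
```

Under these assumptions the embedding matrix is roughly 525M parameters, about 1.96 GiB at F32 versus about 0.26 GiB at ~4.25 bpw, so pinning embeddings (and the output tensor) at high precision adds only a modest amount to the file while preserving fidelity where quantization error is most visible.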
Models (35)
- Joseph717171/DeepHermes-3-Llama-3.1-8B-Preview-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 564 • 1)
- Joseph717171/Imatrices (Updated • 4)
- Joseph717171/Models (Updated • 1.08k • 4)
- Joseph717171/DeepHermes-3-Mistral-24B-Preview-OQ8_0-F32.EQ8_0-F32.IQ4_K-Q8_0-GGUF (Updated • 618 • 1)
- Joseph717171/DeepHermes-3-Llama-3.2-3B-Preview-OQ8_0-F32.EQ8_0-F32.IQ4_K-Q8_0-GGUF (Updated • 316 • 1)
- Joseph717171/Llama-3.2-1B-Instruct-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 580 • 1)
- Joseph717171/Hermes-3-Llama-3.1-8B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 257 • 2)
- Joseph717171/DeepSeek-R1-Distill-Llama-8B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 394 • 2)
- Joseph717171/Llama-3.1-SuperNova-Lite-8.0B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 230 • 2)
- Joseph717171/Granite-3.1-8B-instruct-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF (Updated • 36)
Datasets: none public yet