# CarbonBeagle-11B-truthy-GGUF

## Description

This repo contains GGUF format model files for vicgalle/CarbonBeagle-11B-truthy, a 10.7B-parameter Mistral-architecture model.

## Files Provided

| Name | Quant | Bits | File Size | Remark |
| ---- | ----- | ---- | --------- | ------ |
| carbonbeagle-11b-truthy.IQ3_XXS.gguf | IQ3_XXS | 3 | 4.44 GB | 3.06 bpw quantization |
| carbonbeagle-11b-truthy.IQ3_S.gguf | IQ3_S | 3 | 4.69 GB | 3.44 bpw quantization |
| carbonbeagle-11b-truthy.IQ3_M.gguf | IQ3_M | 3 | 4.85 GB | 3.66 bpw quantization mix |
| carbonbeagle-11b-truthy.Q4_0.gguf | Q4_0 | 4 | 6.07 GB | 3.56G, +0.2166 ppl |
| carbonbeagle-11b-truthy.IQ4_NL.gguf | IQ4_NL | 4 | 6.14 GB | 4.25 bpw non-linear quantization |
| carbonbeagle-11b-truthy.Q4_K_M.gguf | Q4_K_M | 4 | 6.46 GB | 3.80G, +0.0532 ppl |
| carbonbeagle-11b-truthy.Q5_K_M.gguf | Q5_K_M | 5 | 7.60 GB | 4.45G, +0.0122 ppl |
| carbonbeagle-11b-truthy.Q6_K.gguf | Q6_K | 6 | 8.81 GB | 5.15G, +0.0008 ppl |
| carbonbeagle-11b-truthy.Q8_0.gguf | Q8_0 | 8 | 11.40 GB | 6.70G, +0.0004 ppl |
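The bits-per-weight (bpw) figures in the table above can be sanity-checked from a file's size and the model's parameter count (10.7B, per this card). A minimal sketch; it assumes decimal gigabytes and ignores GGUF metadata overhead, so results slightly overestimate the true bpw of the weight tensors:

```python
# Rough bits-per-weight estimate for a GGUF file: size in bits / parameter count.
# Assumption: file sizes in the table are decimal GB (1 GB = 1e9 bytes).

N_PARAMS = 10.7e9  # parameter count from the model card


def approx_bpw(file_size_gb: float, n_params: float = N_PARAMS) -> float:
    """Approximate bits per weight from a file size given in decimal GB."""
    return file_size_gb * 1e9 * 8 / n_params


# Q4_K_M from the table above: 6.46 GB -> roughly 4.8 bpw
print(f"Q4_K_M ~ {approx_bpw(6.46):.2f} bpw")
```

The estimate runs a little above the quant's nominal bit width because K-quants mix precisions across tensors and the file carries metadata alongside the weights.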

## Parameters

| path | type | architecture | rope_theta | sliding_win | max_pos_embed |
| ---- | ---- | ------------ | ---------- | ----------- | ------------- |
| vicgalle/CarbonBeagle-11B-truthy | mistral | MistralForCausalLM | 10000.0 | 4096 | 32768 |
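These files can be run with any GGUF-compatible runtime. A minimal llama.cpp invocation sketch; the binary name, local file path, and prompt are assumptions, and `-c` can be raised up to the model's 32768-token limit at the cost of memory:

```shell
# Sketch: run the Q4_K_M quant with llama.cpp's llama-cli (path is an assumption).
# -c sets the context window; max_pos_embed above caps it at 32768 tokens.
./llama-cli \
  -m carbonbeagle-11b-truthy.Q4_K_M.gguf \
  -c 4096 \
  -p "Write a short note about beagles."
```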

## Benchmarks

## Original Model Card

No info.
