
Feedback and support: TensorBlock's Twitter/X, Telegram Group and Discord server

chargoddard/SmolLlamix-8x101M - GGUF

This repo contains GGUF format model files for chargoddard/SmolLlamix-8x101M.

The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b4242.
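Once one of the files listed below has been downloaded (see the downloading instructions further down), it can be loaded with llama.cpp's command-line tool as a quick sanity check. This is a minimal sketch: the chosen quant file, prompt, and token count are placeholders, and the llama-cli binary assumes a llama.cpp build at or after the commit above.

./llama-cli -m SmolLlamix-8x101M-Q4_K_M.gguf -p "Once upon a time" -n 64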

Prompt template


Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| SmolLlamix-8x101M-Q2_K.gguf | Q2_K | 0.158 GB | smallest, significant quality loss - not recommended for most purposes |
| SmolLlamix-8x101M-Q3_K_S.gguf | Q3_K_S | 0.184 GB | very small, high quality loss |
| SmolLlamix-8x101M-Q3_K_M.gguf | Q3_K_M | 0.199 GB | very small, high quality loss |
| SmolLlamix-8x101M-Q3_K_L.gguf | Q3_K_L | 0.212 GB | small, substantial quality loss |
| SmolLlamix-8x101M-Q4_0.gguf | Q4_0 | 0.233 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| SmolLlamix-8x101M-Q4_K_S.gguf | Q4_K_S | 0.233 GB | small, greater quality loss |
| SmolLlamix-8x101M-Q4_K_M.gguf | Q4_K_M | 0.243 GB | medium, balanced quality - recommended |
| SmolLlamix-8x101M-Q5_0.gguf | Q5_0 | 0.279 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| SmolLlamix-8x101M-Q5_K_S.gguf | Q5_K_S | 0.279 GB | large, low quality loss - recommended |
| SmolLlamix-8x101M-Q5_K_M.gguf | Q5_K_M | 0.284 GB | large, very low quality loss - recommended |
| SmolLlamix-8x101M-Q6_K.gguf | Q6_K | 0.328 GB | very large, extremely low quality loss |
| SmolLlamix-8x101M-Q8_0.gguf | Q8_0 | 0.424 GB | very large, extremely low quality loss - not recommended |

Downloading instructions

Command line

First, install the Hugging Face command-line client:

pip install -U "huggingface_hub[cli]"

Then, download an individual model file to a local directory:

huggingface-cli download tensorblock/SmolLlamix-8x101M-GGUF --include "SmolLlamix-8x101M-Q2_K.gguf" --local-dir MY_LOCAL_DIR

If you want to download multiple model files matching a pattern (e.g., *Q4_K*gguf), you can try:

huggingface-cli download tensorblock/SmolLlamix-8x101M-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
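
After downloading, the quantized file can also be served over HTTP with llama.cpp's built-in server. The command below is a hedged sketch: llama-server ships with llama.cpp builds around the commit noted above, and the chosen file and port are placeholders.

./llama-server -m MY_LOCAL_DIR/SmolLlamix-8x101M-Q4_K_M.gguf --port 8080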