TensorBlock

Feedback and support: TensorBlock's Twitter/X, Telegram Group and Discord server

jingyeom/seal_all_13b - GGUF

This repo contains GGUF format model files for jingyeom/seal_all_13b (13.2B parameters, llama architecture).

The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b4011.
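For example, once llama.cpp is built at that commit or later, a quantized file from this repo can be run directly with the llama-cli binary. This is a minimal sketch; the file name, prompt, and token count below are placeholders:

./llama-cli -m seal_all_13b-Q4_K_M.gguf -p "Hello, who are you?" -n 128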

Prompt template

No prompt template is specified for this model.
Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| seal_all_13b-Q2_K.gguf | Q2_K | 4.600 GB | smallest, significant quality loss - not recommended for most purposes |
| seal_all_13b-Q3_K_S.gguf | Q3_K_S | 5.356 GB | very small, high quality loss |
| seal_all_13b-Q3_K_M.gguf | Q3_K_M | 5.988 GB | very small, high quality loss |
| seal_all_13b-Q3_K_L.gguf | Q3_K_L | 6.539 GB | small, substantial quality loss |
| seal_all_13b-Q4_0.gguf | Q4_0 | 6.955 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| seal_all_13b-Q4_K_S.gguf | Q4_K_S | 7.008 GB | small, greater quality loss |
| seal_all_13b-Q4_K_M.gguf | Q4_K_M | 7.421 GB | medium, balanced quality - recommended |
| seal_all_13b-Q5_0.gguf | Q5_0 | 8.460 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| seal_all_13b-Q5_K_S.gguf | Q5_K_S | 8.460 GB | large, low quality loss - recommended |
| seal_all_13b-Q5_K_M.gguf | Q5_K_M | 8.699 GB | large, very low quality loss - recommended |
| seal_all_13b-Q6_K.gguf | Q6_K | 10.058 GB | very large, extremely low quality loss |
| seal_all_13b-Q8_0.gguf | Q8_0 | 13.027 GB | very large, extremely low quality loss - not recommended |
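If you want to confirm which quantization and architecture a downloaded file actually contains, one option (not part of TensorBlock's workflow) is the gguf-dump utility from the gguf Python package, which prints the GGUF header metadata:

pip install gguf
gguf-dump seal_all_13b-Q4_K_M.gguf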

Downloading instructions

Command line

First, install the Hugging Face Hub CLI:

pip install -U "huggingface_hub[cli]"

Then, download an individual model file to a local directory:

huggingface-cli download tensorblock/seal_all_13b-GGUF --include "seal_all_13b-Q2_K.gguf" --local-dir MY_LOCAL_DIR

If you want to download multiple model files matching a pattern (e.g., *Q4_K*gguf), you can run:

huggingface-cli download tensorblock/seal_all_13b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
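Once the files are downloaded, you can also serve one over HTTP with llama.cpp's llama-server binary. This is a minimal sketch, assuming the binary comes from the same build as above; the port is chosen arbitrarily:

./llama-server -m MY_LOCAL_DIR/seal_all_13b-Q4_K_M.gguf --port 8080

The server then exposes an OpenAI-compatible API (e.g., /v1/chat/completions) on that port.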