QuantFactory/Biggie-SmoLlm-0.15B-Base-GGUF

This is a quantized version of nisten/Biggie-SmoLlm-0.15B-Base, created using llama.cpp.

Original Model Card

### EVEN SMALLER Frankenstein of SmolLM-0.13B, upped to 0.15B

Use this frankenbase for training.

Done via semi-automated continuous merging to figure out the recipe. The merged model is more coherent.
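For intuition, a frankenmerge typically grows a model by repeating a slice of its transformer layers. This is a purely illustrative stdlib sketch of that general idea (the actual recipe here was found by semi-automated merging and is not published; the function name and parameters are hypothetical):

```python
def frankenmerge_layer_order(n_layers: int, repeat_start: int, repeat_end: int) -> list[int]:
    """Return the layer indices of a self-merge that repeats the
    slice [repeat_start, repeat_end) a second time, deepening the model."""
    order = list(range(n_layers))
    # Splice the repeated slice in right after its original position.
    return order[:repeat_end] + order[repeat_start:repeat_end] + order[repeat_end:]

# e.g. a 30-layer base with layers 20-24 duplicated yields a 35-layer stack
print(len(frankenmerge_layer_order(30, 20, 25)))  # 35
```

Because every "new" layer reuses existing weights, parameter count grows roughly in proportion to the repeated slice, which is how a 0.13B base can become a 0.15B one.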


```
wget https://huggingface.co/nisten/Biggie-SmoLlm-0.15B-Base/resolve/main/Biggie_SmolLM_0.15B_Base_bf16.gguf
llama-cli -ngl 99 -co --temp 0 -p "How to build a city on Mars via calculating Aldrin-Cycler orbits?" -m Biggie_SmolLM_0.15B_Base_bf16.gguf
```

The temperature, min-p, and other sampling settings still need tuning, but even at the default temp 0 it was coherent for the first 100 tokens. An amazing option for further training. And this is a merge of the base model, not the instruct!
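For intuition about what those knobs do, here is a stdlib-only Python sketch of temperature plus min-p sampling over raw logits. The parameter names mirror llama.cpp's `--temp` and `--min-p` flags, but the function itself is an illustration, not llama.cpp's implementation:

```python
import math
import random

def sample(logits, temperature=0.8, min_p=0.05, rng=random.random):
    """Pick a token index from raw logits using temperature + min-p.

    min-p keeps only tokens whose probability is at least
    min_p times the probability of the most likely token."""
    if temperature == 0:  # greedy decoding, as with --temp 0 above
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    probs = [math.exp(l - m) for l in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    cutoff = min_p * max(probs)
    kept = [(i, p) for i, p in enumerate(probs) if p >= cutoff]
    # Renormalise over the surviving tokens and draw one.
    z = sum(p for _, p in kept)
    r = rng() * z
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

Raising `min_p` prunes low-probability tokens more aggressively, which is one way to keep a tiny model like this on the rails.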


I don't understand how the f a 150 MB file can talk, but it can.

Downloads last month: 240

GGUF
- Model size: 152M params
- Architecture: llama
- Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
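All of these quantization levels ship in the GGUF container format. Per the GGUF spec, a file starts with the magic bytes `GGUF` followed by a little-endian uint32 version; a minimal stdlib sketch of checking that header:

```python
import struct

def read_gguf_version(data: bytes) -> int:
    """Parse the magic and format version from the start of a GGUF file."""
    magic, version = struct.unpack_from("<4sI", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return version

# Works the same on a real file: read_gguf_version(open(path, "rb").read(8))
print(read_gguf_version(b"GGUF" + struct.pack("<I", 3)))  # 3
```

The quantization type itself is recorded per tensor deeper in the file; the header only identifies the container and its version.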
