---
base_model: HuggingFaceTB/SmolLM-135M
pipeline_tag: text-generation
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/Biggie-SmoLlm-0.15B-Base-GGUF

This is a quantized version of [nisten/Biggie-SmoLlm-0.15B-Base](https://huggingface.co/nisten/Biggie-SmoLlm-0.15B-Base), created using llama.cpp.

# Original Model Card

### EVEN SMALLER Frankenstein of SmolLM-0.13b, upped to 0.15b

Use this frankenbase for training. The recipe was worked out via semi-automated continuous merging, and the resulting model is more coherent.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/H6rv3ULQip4sYPpGGiZZe.png)

```bash
wget https://huggingface.co/nisten/Biggie-SmoLlm-0.15B-Base/resolve/main/Biggie_SmolLM_0.15B_Base_bf16.gguf
```

```bash
llama-cli -ngl 99 -co --temp 0 -p "How to build a city on Mars via calculating Aldrin-Cycler orbits?" -m Biggie_SmolLM_0.15B_Base_bf16.gguf
```

The temperature, min-p, and other sampling settings still need to be adjusted, but even at the default temp 0 the model stayed coherent for the first 100 tokens. An amazing option for further training. And this is a merge of the base, not the instruct!

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/UK0_mQxy6GOHKxGKBbdhx.png)

I don't understand how the f a 150MB file can talk, but it can.
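Since the card says the sampling settings still need tuning, here is a minimal sketch of a non-greedy invocation, assuming the same llama.cpp `llama-cli` binary and the bf16 GGUF downloaded above. The specific values (`--temp 0.7`, `--min-p 0.05`) are illustrative starting points, not tuned settings for this model:

```bash
# Illustrative re-run with non-greedy sampling; the exact values are assumptions.
#   --temp 0.7   : mild randomness instead of greedy temp-0 decoding
#   --min-p 0.05 : drop candidates below 5% of the top token's probability
#   -n 256       : cap generation at 256 tokens
llama-cli -m Biggie_SmolLM_0.15B_Base_bf16.gguf -ngl 99 -co \
  --temp 0.7 --min-p 0.05 -n 256 \
  -p "How to build a city on Mars via calculating Aldrin-Cycler orbits?"
```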
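To use one of the quantized files from this repo instead of the bf16 original, something like the following should work, assuming `huggingface-cli` is installed. The GGUF filename below is an assumed example based on common quant naming; check the repo's file list for the actual names:

```bash
# Download a quantized variant from this repo (filename is an assumed example;
# verify against the repo's "Files and versions" tab).
pip install -U "huggingface_hub[cli]"
huggingface-cli download QuantFactory/Biggie-SmoLlm-0.15B-Base-GGUF \
  Biggie-SmoLlm-0.15B-Base.Q4_K_M.gguf --local-dir .

# Then run it the same way as the bf16 file:
llama-cli -ngl 99 -co --temp 0 -m Biggie-SmoLlm-0.15B-Base.Q4_K_M.gguf \
  -p "How to build a city on Mars via calculating Aldrin-Cycler orbits?"
```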