---
license: apache-2.0
---

## Introduction

**This repo contains the humanized 360M SmolLM2 model in the GGUF format.**
- Quantizations: q2_K, q3_K_S, q3_K_M, q3_K_L, q4_0, q4_K_S, q4_K_M, q5_0, q5_K_S, q5_K_M, q6_K, q8_0

## Quickstart

We recommend cloning [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and installing it following the official guide; these quants track the latest version of llama.cpp. The commands below assume you are running them from the `llama.cpp` repository root.

Since cloning the entire model repository can be inefficient, you can manually download just the GGUF file you need, or use `huggingface-cli`:
1. Install:
```shell
pip install -U huggingface_hub
```
2. Download:
```shell
huggingface-cli download AssistantsLab/SmolLM2-360M-humanized_GGUF smollm2-360M-humanized-q4_k_m.gguf --local-dir . --local-dir-use-symlinks False
```
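Once the file is downloaded, you can chat with the model from the `llama.cpp` directory. This is a minimal sketch; the exact binary name (`llama-cli` in recent builds, `main` in older ones) and default sampling settings depend on your llama.cpp version:

```shell
# Interactive chat with the q4_k_m quant, assuming the GGUF file was
# downloaded into the current directory as shown above.
./llama-cli -m smollm2-360M-humanized-q4_k_m.gguf \
    -cnv \
    -n 256
```

`-cnv` starts conversation mode using the model's chat template; `-n` caps the number of tokens generated per response.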

### Quants

| Filename | Quant type | File Size |
| -------- | ---------- | --------- |
| [smollm2-1.7b-humanized-q2_k.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q2_k.gguf) | Q2_K | 675MB |
| [smollm2-1.7b-humanized-q3_k_s.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q3_k_s.gguf) | Q3_K_S | 777MB |
| [smollm2-1.7b-humanized-q3_k_m.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q3_k_m.gguf) | Q3_K_M | 860MB |
| [smollm2-1.7b-humanized-q3_k_l.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q3_k_l.gguf) | Q3_K_L | 933MB |
| [smollm2-1.7b-humanized-q4_0.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q4_0.gguf) | Q4_0 | 991MB |
| [smollm2-1.7b-humanized-q4_k_s.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q4_k_s.gguf) | Q4_K_S | 999MB |
| [smollm2-1.7b-humanized-q4_k_m.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q4_k_m.gguf) | Q4_K_M | 1.06GB |
| [smollm2-1.7b-humanized-q5_0.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q5_0.gguf) | Q5_0 | 1.19GB |
| [smollm2-1.7b-humanized-q5_k_s.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q5_k_s.gguf) | Q5_K_S | 1.19GB |
| [smollm2-1.7b-humanized-q5_k_m.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q5_k_m.gguf) | Q5_K_M | 1.23GB |
| [smollm2-1.7b-humanized-q6_k.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q6_k.gguf) | Q6_K | 1.41GB |
| [smollm2-1.7b-humanized-q8_0.gguf](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized_GGUF/blob/main/smollm2-1.7b-humanized-q8_0.gguf) | Q8_0 | 1.82GB |
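As a rough way to read this table (not part of the original card): a quant's file size divided by the parameter count gives its approximate bits per weight. A small sketch, assuming ~1.7B parameters to match the filenames above and decimal MB/GB:

```python
# Rough bits-per-weight estimate for a few of the quants listed above.
# PARAMS is an assumption (~1.7B, matching the table's filenames).
PARAMS = 1.7e9

def bits_per_weight(file_size_bytes: float, params: float = PARAMS) -> float:
    """Approximate bits stored per model weight for a given GGUF file size."""
    return file_size_bytes * 8 / params

quants = {
    "Q2_K": 675e6,    # 675MB
    "Q4_K_M": 1.06e9, # 1.06GB
    "Q8_0": 1.82e9,   # 1.82GB
}

for name, size in quants.items():
    print(f"{name}: ~{bits_per_weight(size):.1f} bits/weight")
```

The numbers come out slightly above the nominal bit width (e.g. Q4_K_M is closer to 5 bits/weight than 4) because each quant block also stores scale metadata.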

## More information

For more information about this model, please visit the original model [here](https://huggingface.co/AssistantsLab/SmolLM2-1.7B-humanized).

## License

[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Citation

SmolLM2:
```bibtex
@misc{allal2024SmolLM2,
      title={SmolLM2 - with great data, comes great performance},
      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
      year={2024},
}
```

Human-Like-DPO-Dataset:
```bibtex
@misc{çalık2025enhancinghumanlikeresponseslarge,
      title={Enhancing Human-Like Responses in Large Language Models},
      author={Ethem Yağız Çalık and Talha Rüzgar Akkuş},
      year={2025},
      eprint={2501.05032},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.05032},
}
```