morriszms committed on
Commit
50cc29b
1 Parent(s): 1bc9b30

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ krx_Gemma2-9B-It_1115-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
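The filter rules above route matching files through Git LFS so the repo stores lightweight pointers instead of multi-gigabyte blobs. A minimal sketch of that matching, assuming Python's `fnmatch` (a simplification of full gitattributes glob semantics, adequate for flat patterns like these; the `*.gguf` pattern here stands in for the per-file entries added above):

```python
from fnmatch import fnmatch

# Patterns in the spirit of the .gitattributes additions above.
# NOTE: fnmatch approximates gitattributes matching; it is fine for
# flat patterns but does not handle directory-scoped rules.
LFS_PATTERNS = ["*.zip", "*.zst", "*tfevents*", "*.gguf"]

def is_lfs_tracked(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked pattern."""
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("krx_Gemma2-9B-It_1115-Q2_K.gguf"))  # True
print(is_lfs_tracked("README.md"))                        # False
```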
README.md ADDED
@@ -0,0 +1,86 @@
+ ---
+ base_model: Q-PING/krx_Gemma2-9B-It_1115
+ tags:
+ - text-generation-inference
+ - transformers
+ - unsloth
+ - gemma2
+ - trl
+ - krx
+ - TensorBlock
+ - GGUF
+ license: apache-2.0
+ language:
+ - en
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## Q-PING/krx_Gemma2-9B-It_1115 - GGUF
+
+ This repo contains GGUF format model files for [Q-PING/krx_Gemma2-9B-It_1115](https://huggingface.co/Q-PING/krx_Gemma2-9B-It_1115).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+ <bos><start_of_turn>user
+ {prompt}<end_of_turn>
+ <start_of_turn>model
+ ```
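The template above can be filled in programmatically. A minimal sketch using plain string substitution (in real use you would go through the tokenizer's `apply_chat_template` instead):

```python
# Gemma-2 prompt template from this card, filled by simple substitution.
TEMPLATE = "<bos><start_of_turn>user\n{prompt}<end_of_turn>\n<start_of_turn>model\n"

def build_prompt(user_message: str) -> str:
    """Wrap a single user turn in the Gemma-2 chat markers."""
    return TEMPLATE.format(prompt=user_message)

print(build_prompt("What is GGUF?"))
```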
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [krx_Gemma2-9B-It_1115-Q2_K.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q2_K.gguf) | Q2_K | 3.805 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [krx_Gemma2-9B-It_1115-Q3_K_S.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q3_K_S.gguf) | Q3_K_S | 4.338 GB | very small, high quality loss |
+ | [krx_Gemma2-9B-It_1115-Q3_K_M.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q3_K_M.gguf) | Q3_K_M | 4.762 GB | very small, high quality loss |
+ | [krx_Gemma2-9B-It_1115-Q3_K_L.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q3_K_L.gguf) | Q3_K_L | 5.132 GB | small, substantial quality loss |
+ | [krx_Gemma2-9B-It_1115-Q4_0.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q4_0.gguf) | Q4_0 | 5.443 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [krx_Gemma2-9B-It_1115-Q4_K_S.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q4_K_S.gguf) | Q4_K_S | 5.479 GB | small, greater quality loss |
+ | [krx_Gemma2-9B-It_1115-Q4_K_M.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q4_K_M.gguf) | Q4_K_M | 5.761 GB | medium, balanced quality - recommended |
+ | [krx_Gemma2-9B-It_1115-Q5_0.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q5_0.gguf) | Q5_0 | 6.484 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [krx_Gemma2-9B-It_1115-Q5_K_S.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q5_K_S.gguf) | Q5_K_S | 6.484 GB | large, low quality loss - recommended |
+ | [krx_Gemma2-9B-It_1115-Q5_K_M.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q5_K_M.gguf) | Q5_K_M | 6.647 GB | large, very low quality loss - recommended |
+ | [krx_Gemma2-9B-It_1115-Q6_K.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q6_K.gguf) | Q6_K | 7.589 GB | very large, extremely low quality loss |
+ | [krx_Gemma2-9B-It_1115-Q8_0.gguf](https://huggingface.co/tensorblock/krx_Gemma2-9B-It_1115-GGUF/blob/main/krx_Gemma2-9B-It_1115-Q8_0.gguf) | Q8_0 | 9.827 GB | very large, extremely low quality loss - not recommended |
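As a rough rule of thumb, the chosen file needs to fit in RAM or VRAM with some headroom left for the KV cache. A small sketch of picking a quant by memory budget (file sizes copied from the table above; the selection heuristic is ours, not an official recommendation):

```python
# Quant type -> file size in GB, copied from the table above.
QUANT_SIZES = {
    "Q2_K": 3.805, "Q3_K_S": 4.338, "Q3_K_M": 4.762, "Q3_K_L": 5.132,
    "Q4_0": 5.443, "Q4_K_S": 5.479, "Q4_K_M": 5.761, "Q5_0": 6.484,
    "Q5_K_S": 6.484, "Q5_K_M": 6.647, "Q6_K": 7.589, "Q8_0": 9.827,
}

def largest_quant_fitting(budget_gb):
    """Pick the biggest (typically highest-quality) file within the budget."""
    fitting = [(size, name) for name, size in QUANT_SIZES.items()
               if size <= budget_gb]
    return max(fitting)[1] if fitting else None

print(largest_quant_fitting(6.0))  # Q4_K_M fits a 6 GB budget
```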
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/krx_Gemma2-9B-It_1115-GGUF --include "krx_Gemma2-9B-It_1115-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:
+
+ ```shell
+ huggingface-cli download tensorblock/krx_Gemma2-9B-It_1115-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
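Downloads can also be scripted. A sketch that builds the direct `resolve` URL for a file in this repo (the `/resolve/<revision>/<filename>` pattern is standard Hugging Face Hub behavior; in practice `huggingface_hub.hf_hub_download` is the more robust route, handling caching and retries for you):

```python
# Build a direct download URL for one GGUF file in this repo.
# The Hub serves raw files at /<repo_id>/resolve/<revision>/<filename>.
REPO_ID = "tensorblock/krx_Gemma2-9B-It_1115-GGUF"

def resolve_url(filename, revision="main"):
    """Return the direct download URL for a file in this repo."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

print(resolve_url("krx_Gemma2-9B-It_1115-Q4_K_M.gguf"))
```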
krx_Gemma2-9B-It_1115-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fcd21e208874c5e363f8a63a7dcd63d5c4eec7ba99a5b96e2b95a51cdae46f99
+ size 3805398368
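The entries here and below are Git LFS pointer files, not the weights themselves: each records the spec version, a SHA-256 object id, and the byte size of the real file. A minimal parser sketch for that key-value format:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # byte count is numeric
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:fcd21e208874c5e363f8a63a7dcd63d5c4eec7ba99a5b96e2b95a51cdae46f99
size 3805398368
"""
info = parse_lfs_pointer(pointer)
print(info["oid"], info["size"])
```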
krx_Gemma2-9B-It_1115-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbd6b11792aeeb101c4446ff309cce92fec1a27c959871015256c9ad3c77ca61
+ size 5132453216
krx_Gemma2-9B-It_1115-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a18e62efe20c6f1665579189511254e83db7534e0a98b9f8799465896a8d81c4
+ size 4761781600
krx_Gemma2-9B-It_1115-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f0d1961041965511c152a3d1f762017534711e77248069df130f77e3da13398
+ size 4337665376
krx_Gemma2-9B-It_1115-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ed4818f5021b9adda07b8377b38b3ae6db2d00ba1d5bff4939519d7821dc43d
+ size 5443143008
krx_Gemma2-9B-It_1115-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5aed56e18cbaa3afcad8ed05da426f18e883e53df8dbf6e4a0dee9aa162a01aa
+ size 5761058144
krx_Gemma2-9B-It_1115-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d622ab55c0b7c9f038524464ff244be5c3954c5e1a5318f0cd1f27b770c6d9c
+ size 5478925664
krx_Gemma2-9B-It_1115-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d45403e5430d51f76fc1eb8096c4f05c459aa83078662f6c2e2c7d6634c7b92
+ size 6483592544
krx_Gemma2-9B-It_1115-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7ced308175e808db5f2b0ee77441ef5f86da8313474564629419b30beb806b2
+ size 6647367008
krx_Gemma2-9B-It_1115-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3223d885c4a6da378020663066aa720884a6a597c0490a919783c09304355b2
+ size 6483592544
krx_Gemma2-9B-It_1115-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20401d79df7e5cc1acf76443f56c9925c2454d9cce97892ca6fa08320788c480
+ size 7589070176
krx_Gemma2-9B-It_1115-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:199d83fc11e18c6825a7fc9468218ee27c3a3c81f89d2482a5a99ae4c04b7a38
+ size 9827149152