morriszms committed on
Commit
9c42beb
1 Parent(s): c4ec7ac

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Bielik-11B-v2.2-Instruct-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Bielik-11B-v2.2-Instruct-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf61503212fab05a519a240af4b8ac32f6d6074f6459eedb5770846ab2051c55
+ size 4164337696
Bielik-11B-v2.2-Instruct-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:850a4f0e913b1d7d6bd983d363f741a622f848aa63c53a0c1d46b7942de32df6
+ size 5880000544
Bielik-11B-v2.2-Instruct-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49e75ffa5135aa7d6f56b9f040f4d653fdcaea381f538ab95cb9c7b9195df242
+ size 5404995616
Bielik-11B-v2.2-Instruct-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:099b74998d4fa7412f594ee63a1e62116e6310505c56f753bb343a7546388c20
+ size 4852723744
Bielik-11B-v2.2-Instruct-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d9b92b4638a206fc4552835915312e485a7646a3bcca475eeff0c2ff1d1339b
+ size 6318546976
Bielik-11B-v2.2-Instruct-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:41bc3d798d7286b4cfe0ec56f29ad8e307d6bd00623084c0f324659226acc1e8
+ size 6724050976
Bielik-11B-v2.2-Instruct-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de92dac5b83a1b6ad3de4967061f6135c35c37d0d84464e22513b8defd466682
+ size 6364684320
Bielik-11B-v2.2-Instruct-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da3144db59e1deaffe929ea4376ac3b19c98b6e38b4235bee0fde2b44fec074e
+ size 7698145312
Bielik-11B-v2.2-Instruct-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a865f54c2e4d780c25541a8cfcb7efd93df2a5ba0e19b93520dd7df294dca81
+ size 7907041312
Bielik-11B-v2.2-Instruct-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9afb1e5d60e85e74dae4101402f5342493cff7c334c6225968ca9ee4c1f7caf8
+ size 7698145312
Bielik-11B-v2.2-Instruct-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce62cd2dfd0280efcbda55e100bcecb1713355e1fbe155a18e9b0cb53478d537
+ size 9163968544
Bielik-11B-v2.2-Instruct-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0accbc13cebc31c793c1941c061d3fbf31072794bc56e63177aa041aec5e2f2f
+ size 11868811296
README.md ADDED
@@ -0,0 +1,93 @@
+ ---
+ license: apache-2.0
+ base_model: speakleash/Bielik-11B-v2.2-Instruct
+ language:
+ - pl
+ library_name: transformers
+ tags:
+ - finetuned
+ - TensorBlock
+ - GGUF
+ inference:
+   parameters:
+     temperature: 0.2
+ widget:
+ - messages:
+   - role: user
+     content: Co przedstawia polskie godło?
+ extra_gated_description: If you want to learn more about how you can use the model,
+   please refer to our <a href="https://bielik.ai/terms/">Terms of Use</a>.
+ ---
+ 
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+   <div style="display: flex; flex-direction: column; align-items: flex-start;">
+     <p style="margin-top: 0.5em; margin-bottom: 0em;">
+       Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a>, and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+     </p>
+   </div>
+ </div>
+ 
+ ## speakleash/Bielik-11B-v2.2-Instruct - GGUF
+ 
+ This repo contains GGUF format model files for [speakleash/Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct).
+ 
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
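+ 
+ If you build llama.cpp from source, a minimal sketch for pinning it to that commit (assuming a standard CMake toolchain; the build layout below is the default, not something specific to this repo):
+ 
+ ```shell
+ # Clone llama.cpp and check out the commit these files were validated against
+ git clone https://github.com/ggerganov/llama.cpp
+ cd llama.cpp
+ git checkout a6744e43e80f4be6398fc7733a01642c846dce1d
+ # Configure and build the CLI tools (Release build); binaries land in build/bin
+ cmake -B build
+ cmake --build build --config Release
+ ```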
+ 
+ <div style="text-align: left; margin: 20px 0;">
+   <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+     Run them on the TensorBlock client using your local machine ↗
+   </a>
+ </div>
+ 
+ ## Prompt template
+ 
+ ```
+ <s><|im_start|>system
+ {system_prompt}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
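+ 
+ As a quick smoke test, you can fill in the template by hand and run one of the files with llama.cpp's `llama-cli` (path shown for a default CMake build). This is a minimal sketch; the system prompt, file choice, and sampling settings are illustrative, not part of the upstream release:
+ 
+ ```shell
+ # Run the Q4_K_M file with the ChatML-style template shown above
+ ./build/bin/llama-cli -m Bielik-11B-v2.2-Instruct-Q4_K_M.gguf \
+   --temp 0.2 -n 256 \
+   -p "<s><|im_start|>system
+ Jesteś pomocnym asystentem.<|im_end|>
+ <|im_start|>user
+ Co przedstawia polskie godło?<|im_end|>
+ <|im_start|>assistant
+ "
+ ```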
+ 
+ ## Model file specification
+ 
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Bielik-11B-v2.2-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q2_K.gguf) | Q2_K | 3.878 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Bielik-11B-v2.2-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q3_K_S.gguf) | Q3_K_S | 4.519 GB | very small, high quality loss |
+ | [Bielik-11B-v2.2-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q3_K_M.gguf) | Q3_K_M | 5.034 GB | very small, high quality loss |
+ | [Bielik-11B-v2.2-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q3_K_L.gguf) | Q3_K_L | 5.476 GB | small, substantial quality loss |
+ | [Bielik-11B-v2.2-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q4_0.gguf) | Q4_0 | 5.885 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Bielik-11B-v2.2-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q4_K_S.gguf) | Q4_K_S | 5.928 GB | small, greater quality loss |
+ | [Bielik-11B-v2.2-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q4_K_M.gguf) | Q4_K_M | 6.262 GB | medium, balanced quality - recommended |
+ | [Bielik-11B-v2.2-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q5_0.gguf) | Q5_0 | 7.169 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Bielik-11B-v2.2-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q5_K_S.gguf) | Q5_K_S | 7.169 GB | large, low quality loss - recommended |
+ | [Bielik-11B-v2.2-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q5_K_M.gguf) | Q5_K_M | 7.364 GB | large, very low quality loss - recommended |
+ | [Bielik-11B-v2.2-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q6_K.gguf) | Q6_K | 8.535 GB | very large, extremely low quality loss |
+ | [Bielik-11B-v2.2-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Bielik-11B-v2.2-Instruct-GGUF/blob/main/Bielik-11B-v2.2-Instruct-Q8_0.gguf) | Q8_0 | 11.054 GB | very large, extremely low quality loss - not recommended |
+ 
+ 
+ ## Downloading instructions
+ 
+ ### Command line
+ 
+ First, install the Hugging Face Hub CLI:
+ 
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+ 
+ Then, download an individual model file to a local directory:
+ 
+ ```shell
+ huggingface-cli download tensorblock/Bielik-11B-v2.2-Instruct-GGUF --include "Bielik-11B-v2.2-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+ 
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+ 
+ ```shell
+ huggingface-cli download tensorblock/Bielik-11B-v2.2-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
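+ 
+ For multi-gigabyte files like these, enabling the optional `hf_transfer` backend can speed up downloads considerably. A minimal sketch; the extra and the environment variable belong to `huggingface_hub`, and the rest of the command is unchanged:
+ 
+ ```shell
+ # Install the optional Rust-based transfer backend
+ pip install -U "huggingface_hub[hf_transfer]"
+ # Enable it for this download via an environment variable
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download tensorblock/Bielik-11B-v2.2-Instruct-GGUF --include "Bielik-11B-v2.2-Instruct-Q4_K_M.gguf" --local-dir MY_LOCAL_DIR
+ ```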