Upload folder using huggingface_hub (#1)
- 695a9bdcb53fc72ba6fc9ba8c24ca0c866f910de4db48855a551451ed13e87a1 (ec3d9d1c6db6c21b281118a36538c0dfcb30ede4)
- e6199bbb3cbc70c8506e54f9d23f2b6e3aebf48ae197280aa38b959aeedfa43f (b2d32e07fbd6e6b900d76b0af1194b498b1c8631)
- 23bb03d7f41e42390e0303060021fadb92b8626452b07dd213deb65e4702496f (bb110e53d29e83935241802e2bda9a56f02293e5)
- bca6cad2b77f2b1ba7510c875edb1243309a7534b0180d264d90aa577080dd19 (0216254563ba1e5a8ede93f48aebeda0cf20ad0f)
- 0c851fc16756f753a9f035656ff3d69f7b56f3808021d9dccb1318948f303646 (37740530bd6e3afc41bc9ed953d6d8b4668dbb53)
- ae4ba2705284a2841c0d9b0fe995f59cb4499693680809ef770637bb92b403b8 (32c3c36d8aea4c267f8059b89de6aed759dbc354)
- e5c874888496d7b16c17a551f340c05ca2494296b3dd984534a400c730824769 (f4ea1284189409bd6501824966242b2e90209f5f)
- 2e06a44f89c07f13d5271b35dc913866e1c211a26166cf7210b92ab0cb2d8c10 (56c6531ae851be92856110e49f5e298c1de11b93)
- e933f995bca644c710aad5ab419f422950f7e591c08900507600e89335cfb05d (51c70563ee8abd12f9b415b00b1ed1f5281e411d)
- 8393f730b6635633a040d78f41b4a468b87a4ab9fec6030349a54f7290238738 (e5f44fe0530ae13e3ea86d59a4f6602b15bafee3)
- f060de0d64c8f37933350564259af8905b33020f9155778d3cab5564541f7651 (00080fbeda916319fd664d1621e1b59349de3409)
- e2d79c4e15bdef73e0d24558d3c4e91300934176bb975d77bb6de67c91decb77 (a24fd007861728e4369874a914d53e04061f935b)
- 55dce5fdd8fea117123cbc49bff50bce3349614cd932c4df474c3ddcad375a4d (d8d78c2267bedd3ffddcbc478c2ac06e6c971ac4)
- fb83bc67e4e6f0777632ab8032f102768150130f59e749b378e303cf42996e5a (021a709cf950bb59729a423db38ee9151d20add4)
- 8f53121691b17c4f3bd4ea74a22103ebd4897266112c3971e8a1b12e0ca37c78 (0042b1334f7c93d8ecdb735047e7381b206a94c8)
- .gitattributes +17 -0
- Qwen2.5-3B-Instruct-GGUF_imatrix.dat +3 -0
- Qwen2.5-3B-Instruct.IQ1_M.gguf +3 -0
- Qwen2.5-3B-Instruct.IQ1_S.gguf +3 -0
- Qwen2.5-3B-Instruct.IQ2_XS.gguf +3 -0
- Qwen2.5-3B-Instruct.IQ3_XS.gguf +3 -0
- Qwen2.5-3B-Instruct.IQ4_XS.gguf +3 -0
- Qwen2.5-3B-Instruct.Q2_K.gguf +3 -0
- Qwen2.5-3B-Instruct.Q3_K_L.gguf +3 -0
- Qwen2.5-3B-Instruct.Q3_K_M.gguf +3 -0
- Qwen2.5-3B-Instruct.Q3_K_S.gguf +3 -0
- Qwen2.5-3B-Instruct.Q4_K_M.gguf +3 -0
- Qwen2.5-3B-Instruct.Q4_K_S.gguf +3 -0
- Qwen2.5-3B-Instruct.Q5_K_M.gguf +3 -0
- Qwen2.5-3B-Instruct.Q5_K_S.gguf +3 -0
- Qwen2.5-3B-Instruct.Q6_K.gguf +3 -0
- Qwen2.5-3B-Instruct.Q8_0.gguf +3 -0
- Qwen2.5-3B-Instruct.fp16.gguf +3 -0
- README.md +46 -0
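
For reference, an upload like this commit can be reproduced with the `upload_folder` API from `huggingface_hub`. The sketch below is minimal and hypothetical: the local folder path is a placeholder, and authentication is assumed to come from a prior `huggingface-cli login`.

```python
# Minimal sketch (not the exact command used for this commit): pushing a local
# folder of GGUF files to the Hub with huggingface_hub's upload_folder.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token saved by `huggingface-cli login`
api.upload_folder(
    folder_path="./Qwen2.5-3B-Instruct-GGUF",          # hypothetical local path
    repo_id="MaziyarPanahi/Qwen2.5-3B-Instruct-GGUF",  # target model repo
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```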
.gitattributes
@@ -33,3 +33,20 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-3B-Instruct.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-3B-Instruct-GGUF_imatrix.dat
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:851bf69b555f19410689c76946d363a204150e1515f5a7cd0743dd11e74e588e
+size 3362966
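
Each pointer hunk records only the LFS object's `oid` (a sha256 digest) and its size in bytes; the actual file contents live in LFS storage. Below is a minimal sketch for checking a downloaded file against the pointer values shown above, assuming the `huggingface_hub` client; the digest and size come from the imatrix pointer.

```python
# Minimal sketch: verify a downloaded file against the LFS pointer above.
# repo_id/filename are from this repository; everything else is illustrative.
import hashlib
from pathlib import Path

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="MaziyarPanahi/Qwen2.5-3B-Instruct-GGUF",
    filename="Qwen2.5-3B-Instruct-GGUF_imatrix.dat",
)

expected_oid = "851bf69b555f19410689c76946d363a204150e1515f5a7cd0743dd11e74e588e"
expected_size = 3362966

data = Path(path).read_bytes()
assert len(data) == expected_size, "size does not match the LFS pointer"
assert hashlib.sha256(data).hexdigest() == expected_oid, "sha256 does not match the LFS pointer"
print("OK:", path)
```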
Qwen2.5-3B-Instruct.IQ1_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:10034eff57f7da53a1bfe974b891d78a39ba8f57925cd318128d953f71d8452b
+size 850027680

Qwen2.5-3B-Instruct.IQ1_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:94b24084e8f8004d080c9b3dba8fde810cbd7982a4492719145b3df1b51400e2
+size 791094432

Qwen2.5-3B-Instruct.IQ2_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:92a82f01fd63d42570792d2bbb41088117aa0bee6ade11ad441881914444457a
+size 1031546016

Qwen2.5-3B-Instruct.IQ3_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3699e8a2b62c378645fc1afde06e2f147d7afc8669ccf82ce6b9c57202250b6b
+size 1391836320

Qwen2.5-3B-Instruct.IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e794ef49d36842e3d1702ab2d1adbfb8026631a82f46315c61534da55f8c6f3a
+size 1739095200

Qwen2.5-3B-Instruct.Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b67ab712f7c83149769ff9f402e1bf4d95f8498fe0bcec546bd0127eff1ce4f7
+size 1274756256

Qwen2.5-3B-Instruct.Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85f72eb3dcd0332512371c349d52ce987df11f70f02380a513bf53d0fa2c99d2
+size 1707392160

Qwen2.5-3B-Instruct.Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec3aabd22f21016aef2de3f8ce9aa1e207cf498463a5ef292061f08993482559
+size 1590475936

Qwen2.5-3B-Instruct.Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40dac85d0f2136abfbda6608223bcabe1624289b4bf72f6414ae6a3687734369
+size 1454357664

Qwen2.5-3B-Instruct.Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7230acd716226852ed1bafad0b51137903a8924c798322e8dfb679315d78eae
+size 1929903264

Qwen2.5-3B-Instruct.Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b08ed008a55a94c66fe84baedf635c1e2bc26d72a8894ad37455e3949e3bccd
+size 1834384544

Qwen2.5-3B-Instruct.Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d11dde20bd2c8bc4ea46baa69adaa0312f31df296e9ed13f37de04c5556bd6fa
+size 2224815264

Qwen2.5-3B-Instruct.Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08df1bbffe78834acc2aad543d70314df8ed8fdb0595804074830a646d2d8725
+size 2169666720

Qwen2.5-3B-Instruct.Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:195299941ed34cbfa658c84bb28e233ffd43ce62fb9b5c29304ba1ef39faad9f
+size 2538159264

Qwen2.5-3B-Instruct.Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33ed602236a167de033bc81c06f3cb5c7058e92b5900fbdd32efb6aa468c2709
+size 3285476512

Qwen2.5-3B-Instruct.fp16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d5747748a796081d1284aedcc40ab6948d32deead23314ef06a5b77f2cb564c
+size 6178317248
README.md
@@ -0,0 +1,46 @@
+---
+tags:
+- quantized
+- 2-bit
+- 3-bit
+- 4-bit
+- 5-bit
+- 6-bit
+- 8-bit
+- GGUF
+- text-generation
+- text-generation
+model_name: Qwen2.5-3B-Instruct-GGUF
+base_model: Qwen/Qwen2.5-3B-Instruct
+inference: false
+model_creator: Qwen
+pipeline_tag: text-generation
+quantized_by: MaziyarPanahi
+---
+# [MaziyarPanahi/Qwen2.5-3B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-3B-Instruct-GGUF)
+- Model creator: [Qwen](https://huggingface.co/Qwen)
+- Original model: [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
+
+## Description
+[MaziyarPanahi/Qwen2.5-3B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-3B-Instruct-GGUF) contains GGUF format model files for [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
+
+### About GGUF
+
+GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU accel.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
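
As a quick usage note for the quantized files in this commit, the following is a minimal sketch using the llama-cpp-python library mentioned in the README above. The local model path, context size, and prompt are illustrative assumptions, not part of the upload.

```python
# Minimal sketch: running one of the GGUF quants locally with llama-cpp-python.
# The path, context size, and prompt below are placeholders for illustration.
from llama_cpp import Llama

llm = Llama(
    model_path="./Qwen2.5-3B-Instruct.Q4_K_M.gguf",  # any quant from the list above
    n_ctx=4096,                                      # context window; lower it to save RAM
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```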