Upload folder using huggingface_hub (#1)
- 373623a125b385a844c03dc257ce00bb441f0fece7787193c9d4fc2c419c9e22 (4aa2293ea5e0a8225d35961fc55ad226c9a3c79b)
- db9150ba097bbdc59677f7a7d1723f9c38741d28b802201f834f64bb20f8af73 (f8fe2fa566e665fa386e704939219ad1e2304380)
- 62dceac34acff662d462858fefc66076b2bb12a56124adb9c85c5d29fc4b487d (09df5034cfb52267fea86b4e59fefde4ca496a9b)
- 1c0d22f9573b9b669047af38c974dde4cd6560f5a8c88101626db598eded03a5 (4f10588f439b772040bab446ef69d285a929fe91)
- 3d906b19ca13582241da4ee8b49ad11a2d561aa5240953341753c2b2f3fb94ff (e400e253083e0ca9ab65f4fd4ac1c9b532cdde3b)
- 88f22d32286de2c699a6d956bda0e2471d3a4ae33e11e43b24424685b0cdb605 (4989266673f3553d0d146ad5ced48fd7db07e817)
- a556703249746ecbd1ba78e3ba2cb0f4e61d9740d3a2b789c7d7d464ac3301ec (bdb1a1e274da7629531a98635d2b11af47e6e290)
- b46a9e99eaded32151659ebe7381bb257e5104d38878bbfc75a6191397036460 (fcae46a66574d4072c8c2710f2fc5ae23db24fe7)
- 4bfea6dbfbdcf739d448a8c8fc2a95c1c35429f4dcccf9175606f61319156c15 (bfa466a79b140bbc1cd553e6f88a8fc0edb8eac5)
- ff05fe4fcd8dffc1d98d370f958a327dd2eca4636c2a7b74720cd4f4987bd41a (c0ebc86067dc6b9845d85417ac1dc86ce2353ff5)
- 4a4af81c816c1b31cc2127a158f263fd450210e998221aef81d2ffe2c026c904 (57d12c7f196887d8cf39499e1252fd24fc8e0649)
- 88b056bef9275ba9169fc259a6f6f30955300399ebc9c367ac225560c19d90b5 (57b182d62d4dcfb7671f3f9a3b67d4986490512d)
- .gitattributes +11 -0
- EXAONE-3.5-7.8B-Instruct.Q2_K.gguf +3 -0
- EXAONE-3.5-7.8B-Instruct.Q3_K_L.gguf +3 -0
- EXAONE-3.5-7.8B-Instruct.Q3_K_M.gguf +3 -0
- EXAONE-3.5-7.8B-Instruct.Q3_K_S.gguf +3 -0
- EXAONE-3.5-7.8B-Instruct.Q4_K_M.gguf +3 -0
- EXAONE-3.5-7.8B-Instruct.Q4_K_S.gguf +3 -0
- EXAONE-3.5-7.8B-Instruct.Q5_K_M.gguf +3 -0
- EXAONE-3.5-7.8B-Instruct.Q5_K_S.gguf +3 -0
- EXAONE-3.5-7.8B-Instruct.Q6_K.gguf +3 -0
- EXAONE-3.5-7.8B-Instruct.Q8_0.gguf +3 -0
- EXAONE-3.5-7.8B-Instruct.fp16.gguf +3 -0
- README.md +45 -0

.gitattributes
@@ -33,3 +33,14 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+EXAONE-3.5-7.8B-Instruct.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+EXAONE-3.5-7.8B-Instruct.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+EXAONE-3.5-7.8B-Instruct.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+EXAONE-3.5-7.8B-Instruct.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+EXAONE-3.5-7.8B-Instruct.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+EXAONE-3.5-7.8B-Instruct.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+EXAONE-3.5-7.8B-Instruct.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+EXAONE-3.5-7.8B-Instruct.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+EXAONE-3.5-7.8B-Instruct.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+EXAONE-3.5-7.8B-Instruct.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+EXAONE-3.5-7.8B-Instruct.fp16.gguf filter=lfs diff=lfs merge=lfs -text

EXAONE-3.5-7.8B-Instruct.Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a2b9869223dc1d4481aac186cde84b67faa51ad3a0205c7ec348b88d52077462
+size 3053868992

EXAONE-3.5-7.8B-Instruct.Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18fedfec18cf9109110ab99c191619f4f7dced995abe5912077716bc7e526622
+size 4185937856

EXAONE-3.5-7.8B-Instruct.Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3ffe01b378d7b12e0cdd9c279a21cee43f17eb6e1d53298e8f6c5af6712e83b
+size 3882899392

EXAONE-3.5-7.8B-Instruct.Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b696bfa835c5ecd246c617e0b78ac916014adfde9ab110b9cdc033d25ea2db7
+size 3528480704

EXAONE-3.5-7.8B-Instruct.Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:952d007c8f6513357c383674c79c3e04e790137ae831420b69f6467af2687404
+size 4770650048

EXAONE-3.5-7.8B-Instruct.Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2946ee0410d30c3dd5dca717455d0cfff52e1ccc44e339542b51ae466c83ecfe
+size 4542584768

EXAONE-3.5-7.8B-Instruct.Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8a2e5f1d588d34d341b1d7de635b66d3e678535b2f5497a53476def2e51ad010
+size 5569664960

EXAONE-3.5-7.8B-Instruct.Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d988c7dea84f8ed29a2fb926b5f711df67f116a9799cd5e5fa64f9c2c7bbf22
+size 5435971520

EXAONE-3.5-7.8B-Instruct.Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f8c8f013bd1d48fdb3e28455f810b4c44d4727db26e399d5882af3d7830b52f
+size 6418618304

EXAONE-3.5-7.8B-Instruct.Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d9eabf50fe8fec6fbc497059acc57a878daba7cf2ed69ba9dffb033d31e34a2
+size 8312084416

EXAONE-3.5-7.8B-Instruct.fp16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f21d0ee0e270eccd58c19d932fcfe02b4830308f83803ae1674398ef31e9d1a8
+size 15641630656

README.md
@@ -0,0 +1,45 @@
---
base_model: LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct
inference: false
model_creator: LGAI-EXAONE
model_name: EXAONE-3.5-7.8B-Instruct-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---

# [MaziyarPanahi/EXAONE-3.5-7.8B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/EXAONE-3.5-7.8B-Instruct-GGUF)
- Model creator: [LGAI-EXAONE](https://huggingface.co/LGAI-EXAONE)
- Original model: [LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct)

## Description

[MaziyarPanahi/EXAONE-3.5-7.8B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/EXAONE-3.5-7.8B-Instruct-GGUF) contains GGUF format model files for [LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct).
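
As a minimal sketch, one of the files above can be fetched with the `huggingface_hub` Python client; the Q4_K_M file is picked arbitrarily here, and any filename from this repo works the same way:

```python
# Sketch: download a single quantized GGUF file from this repo via huggingface_hub.
# The Q4_K_M filename is only an example; substitute any file listed in this repo.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/EXAONE-3.5-7.8B-Instruct-GGUF",
    filename="EXAONE-3.5-7.8B-Instruct.Q4_K_M.gguf",
    local_dir=".",  # store next to the script instead of inside the HF cache
)
print(model_path)  # local path of the downloaded .gguf file
```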

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server (see the sketch after this list).
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
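
Below is a minimal, illustrative sketch of loading one of these files with llama-cpp-python from the list above; the filename, context size, and GPU offload settings are assumptions to adapt to your setup:

```python
# Sketch: run a chat completion against a local GGUF file with llama-cpp-python.
# model_path, n_ctx, and n_gpu_layers are example values, not recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="EXAONE-3.5-7.8B-Instruct.Q4_K_M.gguf",  # e.g. the file fetched in the Description section
    n_ctx=4096,       # context window; bounded by the model and available memory
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available; set 0 for CPU-only
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

llama-cpp-python uses the chat template embedded in the GGUF metadata when one is present; if it is missing, an explicit `chat_format` argument (or plain `llm(prompt)` text completion) can be used instead.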

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.